InterBase 2020
Update 1
Developer's Guide
© 2020 Embarcadero Technologies, Inc. Embarcadero, the Embarcadero Technologies logos, and all other Embarcadero Technologies product or service names are trademarks or registered trademarks of Embarcadero Technologies, Inc. All other trademarks are property of their respective owners.

Embarcadero Technologies, Inc. is a leading provider of award-winning tools for application developers. Embarcadero enables developers to design systems right, build them faster and run them better, regardless of their platform or programming language. Ninety of the Fortune 100 and an active community of more than three million users worldwide rely on Embarcadero products to increase productivity, reduce costs and accelerate innovation. The company's flagship tools include: Embarcadero® RAD Studio™, Delphi®, C++Builder®, JBuilder®, and the IoT Award winning InterBase®. Founded in 1993, Embarcadero is headquartered in Austin, with offices located around the world. Embarcadero is online at www.embarcadero.com.
April, 2020
Table of Contents

2.2. Connecting to a Database Server
2.3. Working with Network Protocols
2.4. Using ODBC
2.5. Disconnecting from a Database Server
2.6. Iterating Through a Database Component's Datasets
3. Requesting Information about an Attachment
3.1. Database Characteristics
3.2. Environmental Characteristics
3.3. Performance Statistics
3.4. Database Operation Counts
3.5. Requesting Database Information

IMPORTING AND EXPORTING DATA
1. Exporting and Importing Raw Data
1.1. Exporting Raw Data
1.2. Importing Raw Data
2. Exporting and Importing Delimited Data
2.1. Exporting Delimited Data
2.2. Importing Delimited Data

WORKING WITH INTERBASE SERVICES
1. Overview of the InterBase Service Components
1.1. About the Services Manager
1.2. Service Component Hierarchy
1.3. Attaching to a Service Manager
1.4. Detaching from a Service Manager
2. Setting Database Properties Using InterBase Services
2.1. Bringing a Database Online
2.2. Shutting Down a Database Using InterBase Services
2.3. Setting the Sweep Interval Using InterBase Services
2.4. Setting the Async Mode
2.5. Setting the Page Buffers
2.6. Setting the Access Mode
2.7. Setting the Database Reserve Space
2.8. Activating the Database Shadow
2.9. Adding and Removing Journal Files
3. Backing up and Restoring Databases
3.1. Setting Common Backup and Restore Properties
3.2. Backing Up Databases
3.3. Restoring Databases
4. Performing Database Maintenance
4.1. Validating a Database
4.2. Displaying Limbo Transaction Information
4.3. Resolving Limbo Transactions
5. Requesting Database and Server Status Reports
5.1. Requesting Database Statistics
6. Using the Log Service
7. Configuring Users
7.1. Adding a User to the Security Database
7.2. Listing Users in the Security Database
7.3. Removing a User from the Security Database
7.4. Modifying a User in the Security Database
8. Displaying Server Properties
8.1. Displaying the Database Information
8.2. Displaying InterBase Configuration Parameters
8.3. Displaying the Server Version

PROGRAMMING WITH DATABASE EVENTS
1. Setting up Event Alerts

WORKING WITH CACHED UPDATES
1. Deciding When to Use Cached Updates
2. Using Cached Updates
2.1. Enabling and Disabling Cached Updates
2.2. Fetching Records
2.3. Applying Cached Updates
2.4. Canceling Pending Cached Updates
2.5. Undeleting Cached Records
2.6. Specifying Visible Records in the Cache
2.7. Checking Update Status
3. Using Update Objects to Update a Dataset
3.1. Specifying the UpdateObject Property for a Dataset
3.2. Creating SQL Statements for Update Components
3.3. Executing Update Statements
3.4. Using Dataset Components to Update a Dataset
4. Updating a Read-only Dataset
5. Controlling the Update Process
5.1. Determining if you Need to Control the Updating Process
5.2. Creating an OnUpdateRecord Event Handler
6. Handling Cached Update Errors
6.1. Referencing the Dataset to Which to Apply Updates
6.2. Indicating the Type of Update that Generated an Error
6.3. Specifying the Action to Take

UNDERSTANDING DATASETS
1. What is TDataSet?
2. Opening and Closing Datasets
3. Determining and Setting Dataset States
3.1. Deactivating a Dataset
3.2. Browsing a Dataset
3.3. Enabling Dataset Editing
3.4. Enabling Insertion of New Records
3.5. Calculating Fields
3.6. Updating Records
4. Navigating Datasets
5. Searching Datasets
6. Modifying Dataset Data
7. Using Dataset Events
7.1. Aborting a Method
7.2. Using OnCalcFields
8. Using Dataset Cached Updates

WORKING WITH QUERIES
1. Queries for desktop developers
2. Queries for server developers
3. When to use TIBDataSet, TIBQuery, and TIBSQL
4. Using a query component: an overview
5. Specifying the SQL statement to execute
5.1. Specifying the SQL property at design time
5.2. Specifying a SQL statement at runtime
6. Setting parameters
6.1. Supplying parameters at design time
6.2. Supplying parameters at runtime
6.3. Using a data source to bind parameters
7. Executing a query
7.1. Executing a query at design time
7.2. Executing a query at runtime
8. Preparing a query
9. Unpreparing a query to release resources
10. Improving query performance
10.1. Disabling bi-directional cursors
11. Working with result sets

WORKING WITH TABLES
1. Using table components
2. Setting up a table component
2.1. Specifying a table name
2.2. Opening and closing a table
3. Controlling read/write access to a table
4. Searching for records
5. Sorting records
5.1. Retrieving a list of available indexes with GetIndexNames
5.2. Specifying an alternative index with IndexName
5.3. Specifying sort order for SQL tables
6. Specifying fields with IndexFieldNames
6.1. Examining the field list for an index
7. Working with a subset of data
8. Deleting all records in a table
9. Deleting a table
10. Renaming a table
11. Creating a table
12. Synchronizing tables linked to the same database table
13. Creating master/detail forms
13.1. Building an example master/detail form

WORKING WITH STORED PROCEDURES
1. When Should You use Stored Procedures?
2. Using a Stored Procedure
2.1. Creating a Stored Procedure Component
2.2. Creating a Stored Procedure
Developer's Guide
This guide covers how to develop InterBase database applications using Embarcadero development tools, JDBC, and ODBC. Topics include:
• Connecting to databases
• Understanding datasets
• Working with tables, queries, stored procedures, cached updates, and events
• Working with UDFs and Blob filters
• Importing and exporting data
• Working with InterBase Services
• Writing install wizards
Using the InterBase Developer’s Guide
NOTE
For additional information and support on Embarcadero's products, please refer to the Embarcadero web site at http://www.embarcadero.com.
Client/Server Concepts
This chapter describes the architecture of client/server systems using InterBase. The chapter covers topics
including the definition of an InterBase client and server, and options for application development.
1. Definition of a Client
An InterBase client is an application, typically written in C, C++, Delphi or Java, that accesses data in an
InterBase database.
In the more general case, an InterBase client is any application process that uses the InterBase client library,
directly or via a middleware interface, to establish a communication channel to an InterBase server. The
connection can be local if the application executes on the same node as the InterBase server, or remote
if the application must use a network to connect to the InterBase server.
InterBase is designed to allow clients to access an InterBase server on a platform and operating system
different from the client’s platform and operating system.
The client library provides a set of high-level functions in the form of an Application Programmer’s Interface
(API) for communication with an InterBase server. All client applications or middleware must use this API
to access InterBase databases. The API Guide provides reference documentation and guidelines for using
the API to develop high-performance applications.
3. Definition of a Server
The InterBase server is a software process that executes on the node that hosts the storage space for
databases. The server process is the only process on any node that can perform direct I/O to the database
files.
Clients send requests to the server process to perform actions on the database.
The server process is fully network-enabled; it services connection requests that originate on another node.
The server process implements the same InterBase application protocol that the client uses.
Many clients can remain connected to the multi-threaded server process simultaneously. The server regulates access to individual data records within the database, and enforces exclusive access to records when clients request to modify the data in the records.
4. Application Development
Once you create and populate a database, you can access the information through an application. If
you use one of Embarcadero’s client tools, you can access information through your existing application.
You can also design and implement a new application by embedding SQL statements or API calls in an application written in a programming language such as C or C++.
4.1.2. dbExpress (DBX)
dbExpress is a set of database drivers that provide fast access to a variety of SQL database servers. For
each supported type of database, dbExpress provides a driver that adapts the server-specific software to
a set of uniform dbExpress interfaces. When you deploy a database application that uses dbExpress, you
need only include a dll (the server-specific driver) with the application files you build.
dbExpress lets you access databases using unidirectional datasets, which are designed for quick lightweight access to database information, with minimal overhead. Unidirectional datasets can only retrieve a unidirectional cursor. They do not buffer data in memory, which makes them faster and less resource-intensive than other types of dataset. However, because there are no buffered records, unidirectional datasets are also less flexible than other datasets. For example, the only supported navigation methods are the First and Next methods. Most others raise exceptions. Some, such as the methods involved in bookmark support, simply do nothing.
There is no built-in support for editing because editing requires a buffer to hold the edits. The CanModify
property is always False, so attempts to put the dataset into edit mode always fail. You can, however, use
unidirectional datasets to update data using an SQL UPDATE command or provide conventional editing
support by using a dbExpress-enabled client dataset or connecting the dataset to a client dataset.
There is no support for filters, because filters work with multiple records, which requires buffering. If you
try to filter a unidirectional dataset, it raises an exception. Instead, all limits on what data appears must be
imposed using the SQL command that defines the data for the dataset.
There is no support for lookup fields, which require buffering to hold multiple records containing lookup
values. If you define a lookup field on a unidirectional dataset, it does not work properly.
Despite these limitations, unidirectional datasets are a powerful way to access data. They are the fastest
data access mechanism, and very simple to use and deploy.
NOTE
If you have previously installed RAD Studio on this computer, you do not need to perform any additional installation
procedures in order to use the ADO.NET 2.0 driver with Microsoft Visual Studio 2005/2008.
• Prerequisites:
• .NET 2.0 SDK with update
• Microsoft Visual Studio 2005/2008
• InterBase XE or later
• Installation Instructions:
• Run the InterBase ADO.NET 2.0 installer.
• Usage Instructions:
• Start Visual Studio 2005/2008.
• Choose File > New to create a new C# Windows application.
• Choose Project > Add Reference and add AdoDbxClient.dll, DbxCommonDriver, and DBXInterBaseDriver to your project.
• Add a DataGridView component to your Windows Form.
• The sample code below fills a DataGridView component with the contents of the employee table of
the employee.gdb sample InterBase database:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using Borland.Data;
using Borland.Data.Units;
using System.Data.SqlClient;
using System.Data.Common;
namespace IBXEApplication1
{
The embedded InterBase engine can run faster with these types of database applications because network
connections are not used and data is not passed between the application and a separate database server.
This gives application developers more choice and flexibility in their database design and deployment.
The InterBase ToGo edition is available on all Windows platforms (XP, Vista, Server), and can access any
InterBase database created by InterBase desktop or server editions, versions 2007 and later.
The InterBase ToGo edition allows only a single application to connect to the database. However, the
application can make multiple connections to the database by using multiple threads, where a connection
can be made for each thread. The database service, when accessing a database, exclusively locks the
database file so that another application or InterBase server cannot access the database simultaneously.
• For Microsoft-based linkers, replace the "gds32.lib" on your link path with the new "ibtogo.lib" located at interbase\sdk\lib_ms\ibtogo.lib.
• For Embarcadero-based linkers, use interbase\sdk\lib\ibtogo.lib.
NOTE
If you want to use the ToGo edition and the ibtogo.dll with older applications or applications linked with gds32.lib, rename the ibtogo.dll to gds32.dll and make sure it is the first file in the path of your application. Ibtogo.lib and gds32.lib surface and export the same APIs, so one can be renamed to the other and used interchangeably.
To allow InterBase running from read-only media to access and create temporary files, InterBase relies on a configuration parameter called "TMP_DIRECTORY". "TMP_DIRECTORY" controls the location of writable files such as "interbase.log". If the "TMP_DIRECTORY" configuration parameter is not set, then InterBase checks for the existence of environment variables in the following order and uses the first path found:

The InterBase ToGo edition does not need any registry settings.
• Ensure that the ibtogo.dll file is available to your application when it is launched. The easiest way to ensure this is to include it in the same directory as your application.
• Create an InterBase sub-directory under your application directory and copy the required InterBase
configuration and interbase.msg files.
4.3. Embedded Applications
You can write your application using C or C++, or another programming language, and embed SQL statements in the code. You then preprocess the application using gpre, the InterBase application development preprocessor. gpre takes SQL embedded in a host language such as C or C++ and generates a file that a host-language compiler can compile.
The gpre preprocessor matches high-level SQL statements to the equivalent code that calls functions in the InterBase client API library. Therefore, using embedded SQL affords the advantages of using a high-level language, and the runtime performance and features of the InterBase client API.
For more information about compiling embedded SQL applications, see the Embedded SQL Guide.
Some applications are designed with a specific set of requests or tasks in mind. These applications can
specify exact SQL statements in the code for preprocessing. gpre translates statements at compile time into
an internal representation. These statements have a slight speed advantage over dynamic SQL because
they do not need to incur the overhead of parsing and interpreting the SQL syntax at runtime.
Dynamic Applications
Some applications must handle ad hoc SQL statements entered by users at run time; for example, allowing
a user to select a record by specifying criteria to a query. This requires that the program constructs the
query based on user input.
InterBase uses Dynamic SQL (DSQL) for generating dynamic queries. At run time, your application passes
DSQL statements to the InterBase server in the form of a character string. The server parses the statement
and executes it.
BDE provides methods for applications to send DSQL statements to the server and retrieve results. ODBC
applications rely on DSQL statements almost exclusively, even if the application interface provides a way
to visually build these statements. For example, Query By Example (QBE) or Microsoft Query provide
convenient dialogs for selecting, restricting and sorting data drawn from a BDE or ODBC data source,
respectively.
You can also build templates in advance for queries, omitting certain elements such as values for searching
criteria. At run time, supply the missing entries in the form of parameters and a buffer for passing data
back and forth.
For more information about DSQL, see the Embedded SQL Guide.
4.4. API Applications
The InterBase API is a set of functions that enables applications to construct and send SQL statements to
the InterBase engine and receive results back. All database work can be performed through calls to the API.
While programming with the API requires an application developer to allocate and populate underlying
structures commonly hidden at the SQL level, the API is ultimately more powerful and flexible. Applications
built using API calls offer the following advantages over applications written with embedded SQL:
API functions can be divided into seven categories, according to the object on which they operate:
The API Guide has complete documentation for developing high-performance applications using the InterBase API.
The Install API provides a library of functions that enable you to install InterBase programmatically. You
have the option of creating a silent install that is transparent to the end user. The functions in the Licensing
API permit you to install license certificates and keys as well.
4.5. Multi-database Applications
Unlike many relational databases, InterBase allows applications to use multiple databases at the same time. Most applications access only one database at a time, but others need to use several databases that could have the same or different structures.
For example, each project in a department might have a database to keep track of its progress, and
the department could need to produce a report of all the active projects. Another example where more
than one database would be used is where sensitive data is combined with generally available data. One
database could be created for the sensitive data with access to it limited to a few users, while the other
database could be open to a larger group of users.
With InterBase you can open and access any number of databases at the same time. You cannot join tables
from separate databases, but you can use cursors to combine information. See the Embedded SQL Guide
for information about multi-database applications programming.
Programming Applications with RAD Studio
To optimize the InterBase driver, you can change the following options:
• DRIVER FLAGS
• SQLPASSTHRU MODE
• SQLQRYMODE
Depending on your database needs, you should set the DRIVER FLAGS option to either 512 or 4608 to
optimize InterBase. The recommended value for DRIVER FLAGS is 4608.
• If you set DRIVER FLAGS to 512, you specify that the default transaction mode should be repeatable
read transactions using hard commits. This reduces the overhead that automatic transaction control
incurs.
• If you set DRIVER FLAGS to 4608, you specify that the default transaction mode should be repeatable
read transactions using soft commits. Soft commits are an InterBase feature that lets the driver retain
the cursor while committing changes. Soft commits improve performance on updates to large sets
of data.
When using hard commits, the BDE must re-fetch all records in a dataset, even for a single record change.
This is less expensive when using a desktop database because the data is transferred in core memory.
For a client/server database such as InterBase, refreshing a dataset consumes the network bandwidth and
degrades performance radically. With soft commit, the cursor is retained, and a re-fetch is not performed.
NOTE
Soft commits are never used in explicit transactions started by BDE client applications. This means that if you use
the StartTransaction and Commit methods to explicitly start and commit a transaction, then the driver flag for soft
commit is ignored.
The SQLPASSTHRU MODE option specifies whether the BDE and passthrough SQL statements can share
the same database connections. By default, SQLPASSTHRU MODE is set to SHARED AUTOCOMMIT. To re-
duce the overhead that automatic transaction control incurs, set this option to SHARED NOAUTOCOMMIT.
However, if you want to pass transaction control to your server, set this option to NOT SHARED. Depending
on the quantity of data, this can increase InterBase performance by a factor of ten.
Set the SQLQRYMODE to SERVER to allow InterBase, instead of the BDE, to interpret and execute SQL
statements.
Although TTable is very convenient for its RAD methods and its abstract data-aware model, it is not designed to be used with client/server applications; it is designed for use on relatively small tables in a local database, accessed in core memory.
TTable gathers information about the metadata of a table and tries to maintain a cache of the dataset in memory. It refreshes its client-side copy of the data when you issue the Post method or the TDatabase.Rollback method. This incurs a huge network overhead for most client/server databases, which have much larger datasets and are accessed over a network. In a client/server architecture, you should use TQuery instead.
Set the following TQuery properties and methods as indicated to optimize InterBase performance:
• CachedUpdates property: set this property to <False> to allow the server to handle updates, deletes,
and conflicts.
• RequestLive property: set this property to <False> to prevent the VCL from keeping a client-side copy of rows; this benefits performance because less data must be sent over the network.
• Filter property: let the server do the filtering before sending the dataset over the network.
• Commit method: forces a fetch-all when the BDE DRIVER FLAGS option is not set to 4096 (see Opti-
mizing the InterBase SQL Links Driver), or when you are using explicit transaction control.
For example, when your client sends a record to the server, the primary key is NULL. Using a trigger,
InterBase inserts a value into the primary key and posts the record. When the BDE tries to verify the
existence of the just-inserted record, it searches for a record with a NULL primary key, which it will be
unable to find. The BDE then generates a record or key deleted error message.
1. Create a trigger similar to the following. The "if" clause checks to see whether the primary key being inserted is NULL. If so, a value is produced by the generator; if not, nothing is done to it.
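A sketch of such a trigger (the table, column, and generator names are illustrative):

CREATE TRIGGER set_country_pkey FOR country
ACTIVE BEFORE INSERT AS
BEGIN
  /* Generate a key only when the client did not supply one */
  IF (NEW.pkey IS NULL) THEN
    NEW.pkey = GEN_ID(country_gen, 1);
END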
2. Create a stored procedure that returns the value from the generator:
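A minimal sketch (the generator name is illustrative; the procedure name matches the step below):

CREATE PROCEDURE COUNTRY_Pkey_Gen RETURNS (new_pkey INTEGER)
AS
BEGIN
  /* Return the next generator value to the caller */
  new_pkey = GEN_ID(country_gen, 1);
END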
3. Add a TStoredProc component to your Delphi or C++Builder application and associate it with the
COUNTRY_Pkey_Gen stored procedure.
4. Add a TQuery component to your application and add the following code to the BeforePost event:
This solution allows the client to retrieve the generated value from the server using a TStoredProc component and an InterBase stored procedure. This assures that the Delphi or C++Builder client will know the primary key value when a record is posted.
Programming with JDBC
When you edit the connection properties of a Database component, JBuilder displays the Connection
dialog.
To connect to an InterBase database with your Java application/applet, you need to specify the following
connection parameters: the name of a JDBC driver class, a username, a password, and a connection URL.
The name of the InterClient JDBC driver class is always the same:
interbase.interclient.Driver
Spelling and capitalization are important. If you spell the driver class incorrectly, you may get a ClassNotFoundException, and consequently, a "No suitable driver" error when the connection is attempted. The username and password parameters are the same that you would use when connecting to a database with IBConsole or any other tool. For the sake of simplicity, these examples use <sysdba> (the InterBase root user) and <masterkey> for username and password, respectively.
There are other useful features of this dialog, as well. Once you fill in your URL, you can press the Test
connection button to ensure that the connection parameters are correct. The Prompt user password check
box forces the user to enter a proper username and password before establishing a connection. The Use
extended properties check box and property page are not used by InterClient.
The third part of an InterClient URL holds the name of the server that is running InterBase and the location
of the database to which you want to connect. In the following syntax, “absolute” and “relative” are always
with respect to the server, not the client:
On Unix:
jdbc:interbase://servername//absolutePathToDatabase.ib
jdbc:interbase://servername/relativePathToDatabase.ib
On Microsoft Windows:
jdbc:interbase://servername//DriveLetter:/absolutePathToDatabase.ib
jdbc:interbase://servername/relativePathToDatabase.ib
Here are a few possible configuration options and their corresponding JDBC URLs.
For the atlas database on a Unix machine named sunbox you might use something like this (the path on
the Unix machine is /usr/databases/atlas.ib):
jdbc:interbase://sunbox//usr/databases/atlas.ib
jdbc:interbase://localhost//inetpub/test.ib
jdbc:interbase://localhost/inetpub/test.ib
To access the jupiter database on an NT machine named mrbill, you might use something like this (notice
the drive letter):
jdbc:interbase://mrbill/c:/interbas/examples/jupiter.ib
If the client and the server are on the same machine and you wanted to make a local connection, use
loopback as the server name. For example, on Microsoft Windows:
jdbc:interbase://loopback/c:/interbas/examples/jupiter.ib
Other than these connection-specific issues, InterClient can be used like any other JDBC driver with JBuilder.
With Local InterBase and the JBuilder Professional and Client/Server versions, it is easy to develop and test powerful database applications in Java.
The following examples illustrate how to use the JDBC URL argument for passing additional parameters:
Example:
String url =
"jdbc:interbase://localhost:3050/c:/dbs/books.ib?logWriterFile=logfile.txt";
Example:
String url =
    "jdbc:interbase://localhost:3050/c:/dbs/books.ib?logWriterFile=logfile.txt;create=true";
Legacy methods provided by the DataSource and DriverManager classes are still retained and work as before; however, note that the new functionality takes precedence over the DataSource and DriverManager methods. Consider the following Java code as an example:
{
String url =
    "jdbc:interbase://localhost:3050/c:/dbs/books.ib?logWriterFile=logfile.txt;create=true";
dataSource.setServerName ( "localhost");
dataSource.setDatabaseName ( url );
dataSource.setCreateDatabase ( false);
}
In this case, the create database flag in the URL will have precedence.
Example:
?logWriterFile=c:/smistry/interclient.log
The setLogWriter call takes a defined PrintWriter, while the new logWriterFile takes an actual filename
to be used as a logWriter.
NOTE
For information on InterBase Over-the-Wire (OTW) encryption, see Setting up OTW Encryption.
The client application will indicate to the JDBC driver that it needs to perform OTW encryption via the
database connection string. The connection string will take OTW properties as specified in the syntax below.
TIP
Example: The following is an example of how this will be used in a java program.
You can deploy InterClient-based applets without having to manually load platform-specific JDBC drivers
on each client system. Web servers automatically download the InterClient classes along with the applets.
Therefore, there is no need to manage local native database libraries, which simplifies administration and
maintenance of customer applications. As part of a Java applet, InterClient can be dynamically updated,
further reducing the cost of application deployment and maintenance.
InterBase developers who are writing new Java-based client programs can use InterClient to access their
existing InterBase databases. Because InterClient is an all-Java driver, it can also be used on the Sun NC
(Network Computer), a desktop machine that runs applets. The NC has no hard drive or CD ROM; users
access all of their applications and data via applets downloaded from servers.
2.1. InterClient Architecture
The InterClient product consists of a client-side Java package called InterClient, which contains a library of
Java classes that implement most of the JDBC API and a set of extensions to the JDBC API. This package
interacts with the JDBC Driver Manager to allow client-side Java applications and applets to interact with
InterBase databases.
• As Java applets, which are Java programs that can be included in an HTML page with the <APPLET>
tag, served via a web server, and viewed and used on a client system using a Java-enabled web
browser. This deployment method does not require manual installation of the InterClient package on
the client system. It does require a Java-enabled browser and the JDBC Driver Manager to be installed
on the client system.
• As Java applications, which are stand-alone Java programs for execution on a client system. This
deployment method requires the InterClient package, the JDBC Driver Manager, and the Java Runtime
Environment (JRE), which is part of the Java Developer's Kit (JDK) installed on the client system.
2.2. InterClient Communication
InterClient is a driver for managing interactions between a Java applet or application and an InterBase
database server. On a client system, InterClient works with the JDBC Driver Manager to handle client
requests through the JDBC API. To access an InterBase database, InterClient communicates via a TCP/IP
connection to the InterBase server and passes back the results to the InterClient process on the client
machine.
java.sql.DriverManager: Loads the specific drivers and supports creating new database connections.

java.sql.Connection: Represents a connection to a specific database.

java.sql.Statement: Allows the application to execute a SQL statement.

java.sql.PreparedStatement: Represents a pre-compiled SQL statement.
• public void setObject(int parameterIndex, Object x) now works when parameter x is of type java.io.InputStream.
• All variations of the setCharacterStream() method are implemented.
• All variations of the setAsciiStream() and setBinaryStream() methods are implemented.
• All variations of the setBlob() and setClob() methods are implemented.
• The isClosed() method is implemented.

java.sql.CallableStatement: Represents a call to a stored procedure in the database.

java.sql.ResultSet: Controls access to the rows resulting from a statement execution.
import java.sql.*;
To access an InterBase database, the InterClient driver communicates via a TCP/IP connection with the
InterBase server. InterBase processes the SQL statements and passes the results back to the InterClient
driver.
Multithreading
Any JDBC driver must comply with the JDBC standard for multithreading, which requires that all operations
on Java objects be able to handle concurrent execution.
For a given connection, several threads must be able to safely call the same object simultaneously. The
InterClient driver is thread safe. For example, your application can execute two or more statements over
the same connection concurrently, and process both result sets concurrently, without generating errors
or ambiguous results.
The Connection object establishes and manages the connection to your particular database. Within a given
connection, you can execute SQL statements and receive the result sets.
The java.sql.Connection interface represents a connection to a particular database. The JDBC specification allows a single application to support multiple connections to one or more databases, using one or more database drivers. When you establish your connection using this class, the DriverManager selects an appropriate driver from those loaded based on the subprotocol specified in the URL, which is passed as a connection parameter.
Class.forName("interbase.interclient.Driver");
The first time the Java interpreter sees a reference to interbase.interclient.Driver, it loads the InterClient
driver. When the driver is loaded, it automatically creates an instance of itself, but there is no handle for it
that lets you access that driver directly by name. This driver is anonymous; you do not need to reference
it explicitly to make a database connection. You can make a database connection simply by using the
java.sql.DriverManager class.
It is the responsibility of each newly loaded driver to register itself with the DriverManager; the programmer
is not required to register the driver explicitly. After the driver is registered, the DriverManager can use it
to make database connections.
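For example, a minimal sketch of both styles:

// Load the driver by name; it registers itself with the DriverManager.
Class.forName("interbase.interclient.Driver");

// Create an explicit driver instance only when you need to call
// java.sql.Driver methods directly, as discussed below.
java.sql.Driver driver = new interbase.interclient.Driver();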
Now you can reference the driver classes and methods with driver.XXX(). If all you need to do is connect
to the database and execute SQL statements, you do not need to create a driver object explicitly; the
DriverManager handles everything for you. However, there are a few cases when you need to reference
the driver by name. These include:
• Tailoring a driver for debugging purposes. For more information, see Debugging your Application.
The DriverManager sees a driver as only one of many standard JDBC drivers that can be loaded. If you need to create a connection to another type of database in the future, you need only load the new driver with forName() or declare another driver explicitly.
The java.sql.Driver class has different methods than java.sql.DriverManager. If you want to use any of
the java.sql.Driver methods, you need to create an explicit driver object. The following are a few of the
driver methods:
The example below shows how to interact with the database by referencing the driver directly:
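A minimal sketch of that pattern (the URL and credentials are illustrative):

java.sql.Driver driver = new interbase.interclient.Driver();
java.util.Properties properties = new java.util.Properties();
properties.put("user", "sysdba");
properties.put("password", "masterkey");
// Driver.connect() takes the URL and a Properties object directly.
java.sql.Connection connection =
    driver.connect("jdbc:interbase://localhost/c:/dbs/atlas.ib", properties);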
IMPORTANT
If your application ever needs to access non-InterBase databases, do not define a driver object as a type interbase.interclient.Driver as follows:
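For instance, a sketch of the discouraged declaration:

interbase.interclient.Driver driver = new interbase.interclient.Driver();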
This method creates a driver object that is an instance of the interbase.interclient.Driver class, not a generic instance of the java.sql.Driver class. It is not appropriate for a database-independent client program because it hard-codes the InterClient driver into your source code, together with all of the classes and methods that are specific to the InterClient driver. Such applications could access only InterBase databases.
interbase.interclient.Driver
interbase.interclient.Connection
interbase.interclient.Statement
The java.sql.ResultSet interface does not have a function to check whether a particular column has a NULL value. The InterBase JDBC driver provides an extension, ResultSet.isNull(int), to check whether a particular column in the returned record has the value NULL. You will need to explicitly qualify the call to the extended API with interbase.interclient.ResultSet as follows.
For example: The following example prints out a message when the column "phone_ext" is NULL for certain
employees.
if (((interbase.interclient.ResultSet) rs).isNull(2))
    System.out.println("Employee phone extension is NULL");
TIP
By using explicit casts whenever you need to access InterClient-specific extensions, you can find these InterClient-specific
operations easily if you ever need to port your program to another driver.
The Connection object in turn provides access to all of the InterClient classes and methods that allow you
to execute SQL statements and get back the results.
For example:
NOTE
This is not the case when you are using the InterClient Monitor extension to trace a connection. See Debugging your
Application for a detailed explanation.
InterClient follows the JDBC standard for specifying databases using URLs. The JDBC URL standard provides
a framework so that different drivers can use different naming systems that are appropriate for their own
needs. Each driver only needs to understand its own URL naming syntax; it can reject any other URLs that
it encounters. A JDBC URL is structured as follows:
jdbc:subprotocol:subname
The subprotocol names a particular kind of database connection, which is in turn supported by one or
more database drivers. The DriverManager decides which driver to use based on which subprotocol is
registered for each driver. The contents and syntax of subname in turn depend upon the subprotocol. If
the network address is included in the subname, the naming convention for the subname is:
//hostname:port/subsubname
subsubname can have any arbitrary syntax.
jdbc:interbase://server/full_db_path[?properties]
"interbase" is the subprotocol, and server is the hostname of the InterBase server. full_db_path (that is, "sub-subname") is the full pathname of a database file, including the initial slash (/). If the InterBase server is a Windows system, you must include the drive name as well. For local connections, use:
server = "localhost"
NOTE
The “/” between the server and full_db_path is a delimiter. When specifying the path for a Unix-based database, you
must include the initial “/” for the root directory in addition to the “/” for the delimiter.
For a Unix-based database, the following URL refers to the database orders.ib in the directory /dbs on the Unix server accounts.
dbURL = "jdbc:interbase://accounts//dbs/orders.ib"
On a Windows server, the following URL refers to the database customer.ib in the directory /dbs on drive C of the server support.
dbURL = "jdbc:interbase://support/C:/dbs/customer.ib"
Connection properties must also be defined before trying to open a database connection. To do this, pass
in a java.util.Properties object, which maps between tag strings and value strings. Two typical properties
are “user” and “password.” First, create the Properties object:
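For example:

java.util.Properties properties = new java.util.Properties();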
Now create the connection arguments. user and password are either literal strings or string variables. They
must be the username and password on the InterBase database to which you are connecting:
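For example:

// user and password must be valid for the target InterBase database.
properties.put("user", user);
properties.put("password", password);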
Now create the connection with the URL and connection properties parameters:
java.sql.Connection connection =
java.sql.DriverManager.getConnection(url, properties);
3.4.5. Connection Security
Client applications use standard database user name and password verification to access an InterBase
database. InterClient encrypts the user name and password for transmission over the network.
• Statement
• PreparedStatement
• CallableStatement
3.5.1.1. PreparedStatement
The PreparedStatement object allows you to execute a set of SQL statements more than once. Instead of creating and parsing a new statement each time to do the same function, you can use the PreparedStatement class to execute pre-compiled SQL statements multiple times. This class has a series of "setXXX" methods that allow your code to pass parameters to a predefined SQL statement; it is like a template to which you supply the parameters. Once you have defined parameter values for a statement, they remain in effect for subsequent executions until you clear them with a call to the PreparedStatement.clearParameters method.
For example, suppose you want to be able to print a list of all new employees hired on any given day. The
operator types in the date, which is then passed into the PreparedStatement object. Only those employees
or rows in “emp_table” where “hire_date” matches the input date are returned in the result set.
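A minimal sketch of that flow (connection is assumed to be an open java.sql.Connection; the full_name column and the literal date are illustrative):

java.sql.PreparedStatement newHires = connection.prepareStatement(
    "SELECT full_name FROM emp_table WHERE hire_date = ?");
// Bind the operator-supplied date to the single ? parameter.
newHires.setDate(1, java.sql.Date.valueOf("2020-04-01"));
java.sql.ResultSet results = newHires.executeQuery();
while (results.next())
    System.out.println(results.getString(1));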
3.5.1.2. CallableStatement
The CallableStatement class is used for executing stored procedures with OUT parameters. Since InterBase
does not support the use of OUT parameters, there is no need to use CallableStatement with InterClient.
NOTE
You can still use a CallableStatement object if you do not use the OUT parameter methods.
3.5.2. Querying Data
After creating a Connection and a Statement or PreparedStatement object, you can use the executeQuery
method to query the database with SQL SELECT statements.
This example shows the sequence for executing SELECT statements, assuming that you have defined the
getConnection arguments:
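A sketch of that sequence (the table and column names are illustrative):

java.sql.Connection connection =
    java.sql.DriverManager.getConnection(url, properties);
java.sql.Statement statement = connection.createStatement();
java.sql.ResultSet resultSet =
    statement.executeQuery("SELECT full_name FROM employee");
while (resultSet.next())
    System.out.println(resultSet.getString("full_name"));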
3.5.3. Finalizing Objects
Applications and applets should explicitly close the various JDBC objects (Connection, Statement, and ResultSet) when they are done with them. The Java "garbage collector" may periodically close connections, but there's no guarantee when, where, or even if this will happen. It's better to immediately release a connection's database and JDBC resources rather than waiting for the garbage collector to release them automatically. The following close statements should appear at the end of the previous executeQuery() example.
resultSet.close();
statement.close();
connection.close();
If you do not know the default order of the columns, the syntax is:
For example, suppose an employee, Sara Jones, gets married and wants you to change her last name in the "last_name" column of the EMPLOYEE table:
java.sql.PreparedStatement preparedStatement;
// Create the PreparedStatement object
preparedStatement = connection.prepareStatement(
    "UPDATE emp_table SET last_name = ? WHERE emp_no = ?");
// Read the last name and employee number from standard input
java.io.BufferedReader stdin =
    new java.io.BufferedReader(new java.io.InputStreamReader(System.in));
System.out.print("Enter last name: ");
String lname = stdin.readLine();
System.out.print("Enter employee number: ");
String empno = stdin.readLine();
int empNumber = Integer.parseInt(empno);
// Pass the last name and employee id to preparedStatement's ? parameters,
// where 1 is the first parameter, 2 is the second, and so on
preparedStatement.setString(1, lname);
preparedStatement.setInt(2, empNumber);
// Now update the table
int rowCount = preparedStatement.executeUpdate();
The following example deletes the entire “Sara Zabrinski” row from the EMPLOYEE table:
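A minimal sketch (statement is assumed to be an open java.sql.Statement; the WHERE clause is illustrative):

int rowCount = statement.executeUpdate(
    "DELETE FROM employee WHERE last_name = 'Zabrinski'");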
• Select procedures are used in place of a table or view in a SELECT statement. A selectable procedure
generally has no IN parameters. See note below.
• Executable procedures can be called directly from an application with the EXECUTE PROCEDURE state-
ment; they may or may not return values to the calling program.
Use the Statement class to call select or executable procedures that have no SQL input (IN) parameters.
Use the PreparedStatement class to call select or executable stored procedures that have IN parameters.
NOTE
Although it is not commonly done, it is possible to use IN parameters in a SELECT statement. For example:
/* The procedure header shown here is illustrative */
create procedure select_by_field (in_var integer)
returns (out_data integer)
as
begin
  for select a_field1 from a_table
    where a_field2 = :in_var
    into :out_data
  do suspend;
end
3.6.1. Statement Example
An InterClient application can call a select procedure in place of a table or view inside a SELECT statement.
For example, the stored procedure multiplyby10 multiplies all the rows in the NUMBERS table (visible
only to the stored procedure) by 10, and returns the values in the result set. The following example uses
the Statement.executeQuery() method to call the multiplyby10 stored procedure, assuming that you have
already created the Connection and Statement objects:
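A sketch of that call (the result-set handling is illustrative):

// A select procedure can be queried as if it were a table.
java.sql.ResultSet resultSet =
    statement.executeQuery("SELECT * FROM multiplyby10");
while (resultSet.next())
    System.out.println(resultSet.getInt(1));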
3.6.2. PreparedStatement Example
In the example below, the multiply stored procedure is not selectable. Therefore, you have to call the
procedure with the PreparedStatement class. The procedure arguments are the scale factor and the value
of KEYCOL that uniquely identifies the row to be multiplied in the NUMBERS table.
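A sketch of that call (the parameter values are illustrative):

java.sql.PreparedStatement preparedStatement = connection.prepareStatement(
    "EXECUTE PROCEDURE multiply ?, ?");
preparedStatement.setInt(1, 10); // scale factor
preparedStatement.setInt(2, 5);  // KEYCOL of the row to multiply
preparedStatement.executeUpdate();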
4.1. Handling Installation Problems
Call interbase.interclient.InstallTest to test an InterClient installation. InstallTest provides two static methods for testing the installation. A static main is provided as a command line test that prints to System.out. main() uses the other public static methods that test specific features of the installation. These methods can be used by a GUI application as they return strings, rather than writing the diagnostics to System.out as main() does. InstallTest allows you to:
Every driver instance has one and only one monitor instance associated with it. The initial monitor for the default driver instance that is implicitly registered with the DriverManager has no logging/tracing enabled. Enabling tracing for the default driver is not recommended. However, if you create your own driver instance, you can tailor the tracing and logging for your driver without affecting the default driver registered with the DriverManager.
NOTE
If you want to use the Monitor to trace connections and statements, you must create the original objects using the
connect() method of the tailored driver. You cannot create a connection with DriverManager.getConnection()
method and then try to trace that connection. Since tracing is disabled for the default driver, there will be no data.
After running the program and executing some SQL statements, you can print out the trace messages associated with the driver, connection, and statement methods. The tracing output distinguishes between implicit calls, such as the garbage collector or InterClient driver calling close(), and user-explicit calls. This can be used to test application code, since it shows whether result sets or statements are being cleaned up when they should be.
An InterClient applet uses JDBC to provide access to a remote InterBase server in the following manner:
1. A user accesses the HTML page on which the InterClient applet resides.
2. The applet bytecode is downloaded to the client machine from the Web server.
3. The applet code executes on the client machine, downloading the InterClient package (that is, the
InterClient classes and the InterClient driver) from the Web server.
4. The InterClient driver communicates with the InterBase server, which executes SQL statements and
returns the results to the user running the InterClient applet.
5. When the applet is finished executing, the applet itself and the InterClient driver and classes disappear
from the client machine.
NOTE
If your program needs to access data from more than one server/machine, you must develop a stand-alone InterClient
application, since you cannot use an applet to do this.
InterBase XE3 Update 3 introduces a new connection property in the JDBC driver, returnColumnLabelAsColumnName. Pentaho requires ResultSetMetaData.getColumnName() to actually return the alias/label name (if provided by the application). In order to comply with JDBC specifications, and to keep backward compatibility for existing InterBase JDBC apps, this new connection property will be FALSE by default.
If you want to use the new property for the Pentaho-type behavior, set the following connection property:
properties.put("returnColumnLabelAsColumnName", "true");
NOTE
The following table lists the JDBC classes, methods, and features not supported by this version of InterClient.
Types (BIT, TINYINT, BIGINT): InterBase does not support these data types.

DatabaseMetaData (getCatalogs(), getCatalogSeparator(), getCatalogTerm(), getMaxCatalogNameLength(), getMaxSchemaNameLength(), getSchemas(), getSchemaTerm(), isCatalogAtStart()): InterBase does not support catalogs or schemas.
7.2. Extended Properties
Data Source Extended Properties

Name: charSet
Type: String
Default value: No default value
Description: Specifies the character encoding for the connection; used for sending all SQL and character input data to the database and for all output data and InterBase messages retrieved from the database.
NOTE
For more information on how to generate the certificate and private key files, refer to the Network Configuration chapter
in the Operations Guide.
<driver-datasource-jndiname>serial://datasources/driverDataSource</driver-datasource-jndiname>
<property>
<prop-name>connectionType</prop-name>
<prop-type>Enumerated</prop-type>
<prop-value>Direct</prop-value>
</property>
<property>
<prop-name>dialect</prop-name>
<prop-type>Enumerated</prop-type>
<prop-value>interbase</prop-value>
</property>
</visitransact-datasource>
<driver-datasource>
<jndi-name>serial://datasources/driverDataSource</jndi-name>
<datasource-class-name>interbase.interclient.JdbcConnectionFactory</datasource-class-name>
<property>
<prop-name>user</prop-name>
<prop-type>String</prop-type>
<prop-value>SYSDBA</prop-value>
</property>
<property>
<prop-name>password</prop-name>
<prop-type>String</prop-type>
<prop-value>masterkey</prop-value>
</property>
<property>
<prop-name>serverName</prop-name>
<prop-type>String</prop-type>
<prop-value>agni</prop-value>
</property>
<property>
<prop-name>databaseName</prop-name>
<prop-type>String</prop-type>
<prop-value>c:/admin.ib</prop-value>
</property>
<property>
<prop-name>sqlDialect</prop-name>
<prop-type>int</prop-type>
<prop-value>3</prop-value>
</property>
<property>
<prop-name>create</prop-name>
<prop-type>boolean</prop-type>
<prop-value>true</prop-value>
</property>
</driver-datasource>
</jndi-definitions>
8. InterClient Scrollability
8.1. The InterClient Connection Class
To achieve JDBC 2.0 core compliance, InterClient now allows a value of TYPE_SCROLL_INSENSITIVE for the
resultSetType argument of the following Connection methods: createStatement(int resultSetType, int resultSetConcurrency) and prepareStatement(String sql, int resultSetType, int resultSetConcurrency).
Previously, the only allowable value for resultSetType was TYPE_FORWARD_ONLY. Currently, the only type
not allowed is TYPE_SCROLL_SENSITIVE.
8.2. The InterClient ResultSet Class
The scrolling and positioning methods of ResultSet now return valid values when the ResultSet is of the new type TYPE_SCROLL_INSENSITIVE.
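For example, a sketch of requesting and scrolling a scroll-insensitive ResultSet (table and column names are illustrative):

Statement stmt = con.createStatement(
    ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery("SELECT FULL_NAME FROM EMPLOYEE");
rs.last();                   // position on the last row
int rowCount = rs.getRow();  // number of the last row
rs.beforeFirst();            // reposition before the first row
while (rs.next()) {
    System.out.println(rs.getString(1));
}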
8.3. Additional Functions
Additional functions that implement the JDBC 2.x API functionality are listed below.
Function: Functionality

int Statement.getResultSetType()
Returns the type if the ResultSet is open; otherwise throws an exception.

int Statement.getResultSetConcurrency()
Returns the concurrency if the ResultSet is open.

int Statement.getFetchDirection()
Returns the fetch direction if the ResultSet is open; the return value is always FETCH_FORWARD for InterBase.

int ResultSet.getFetchDirection()
Returns FETCH_FORWARD in all cases.

int ResultSet.getFetchSize()
Returns the fetch size for the result set of the statement.

void ResultSet.setFetchSize(int rows)
Sets the fetch size of the ResultSet and the statement.

void ResultSet.setFetchDirection(int direction)
Throws an exception; it can only work with TYPE_SCROLL_SENSITIVE and TYPE_SCROLL_INSENSITIVE. Neither of these is supported by InterBase, since InterBase does not support scrollable cursors. The only ResultSet type allowed by InterClient/InterBase is TYPE_FORWARD_ONLY.
9. Batch Updates
9.1. Methods for the Statement and PreparedStatement Classes
The following methods have been added to both the Statement and the PreparedStatement classes. The
methods listed below now work according to the JDBC specifications.
Method/Constructor: Functionality

public BatchUpdateException(String reason, String SQLState, int vendorCode, int[] updateCounts)
Constructs a BatchUpdateException object where reason is a string describing the exception, SQLState is an object containing Open Group code identification, vendorCode is the vendor-specific exception code, and updateCounts is an array in which each element indicates the row count for each SQL UPDATE command that executed successfully before the exception was thrown.

public BatchUpdateException(String reason, int[] updateCounts)
Constructs a BatchUpdateException object where reason and updateCounts are as above. The following values are implicitly set: the vendorCode is set to zero and the Open Group code identification is set to null.

public BatchUpdateException(int[] updateCounts)
Constructs a BatchUpdateException object where updateCounts contains an array of INT values in which each element indicates the row count for each SQL UPDATE command that executed successfully before the exception was thrown. The following values are implicitly set: reason is set to null, vendorCode is set to zero, and the Open Group code identification is set to null.

public BatchUpdateException()
The following values are implicitly set: reason, the Open Group code identification, and updateCounts are set to null, and vendorCode is set to zero.

Function: Functionality

boolean DatabaseMetaData.supportsBatchUpdates()
Can now return TRUE.
9.4. Code Examples
Code example for the batch update functions:
Statement Class
con.setAutoCommit(false);
Statement stmt = con.createStatement();
stmt.addBatch("INSERT INTO foo VALUES (1, 10)");
stmt.addBatch("INSERT INTO foo VALUES (2, 21)");
int[] updateCounts = stmt.executeBatch();
con.commit();
The following fragment handles failures from executeBatch(); here pstmt is a PreparedStatement with pending batched commands:
try
{
int[] updateCounts = pstmt.executeBatch();
}
catch (BatchUpdateException b)
{
int [] updates = b.getUpdateCounts();
for (int i = 0; i < updates.length; i++)
{
System.err.println ("Update Count " + updates[i]);
}
}
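A corresponding sketch for building a batch with the PreparedStatement class, using the same illustrative foo table:

PreparedStatement Class
con.setAutoCommit(false);
PreparedStatement pstmt =
    con.prepareStatement("INSERT INTO foo VALUES (?, ?)");
pstmt.setInt(1, 1);
pstmt.setInt(2, 10);
pstmt.addBatch();    // queue the first row
pstmt.setInt(1, 2);
pstmt.setInt(2, 21);
pstmt.addBatch();    // queue the second row
int[] updateCounts = pstmt.executeBatch();
con.commit();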
• public void setObject (int parameterIndex, Object x) now works when parameter x is of type java.io.InputStream.
• All variations of the setCharacterStream () method are implemented.
• All variations of the setAsciiStream() and setBinaryStream() methods are implemented.
• All variations of the setBlob() and setClob() methods are implemented.
• The isClosed() method is implemented.
Programming Applications with ODBC
1. Overview of ODBC
The Microsoft standard, similar in intent to the BDE, is called Open Database Connectivity (ODBC). One standard
API provides a unified interface for applications to access data from any data source for which an ODBC
driver is available. The InterBase client for Windows platforms includes a 32-bit client library for developing
and executing applications that access data via ODBC. The driver is in the file iscdrv32.dll. The ODBC driver
follows the ODBC 3.5 specification, which includes the 2.0 and 3.0 specifications.
You configure a data source using the ODBC Administrator tool, much as you do in BDE. If you need to
access InterBase databases from third party products that do not have InterBase drivers, you need to install
this ODBC driver. The install program then asks you if you want to configure any ODBC data sources.
“Configuring” means providing the complete path to any databases that you know you will need to access
from non-InterBase-aware products, along with the name of the ODBC driver for InterBase.
ODBC is the common language of data-driven client software. Some software products make use of
databases, but do not yet have specific support for InterBase. In such cases, they issue data queries that
conform to a current SQL standard. This guarantees that these requests can be understood by any compli-
ant database. The ODBC driver then translates these generic requests into InterBase-specific code. Other
ODBC drivers access other vendors’ databases.
Microsoft Office, for example, does not have the technology to access InterBase databases directly, but it
can use the ODBC driver that is on the InterBase CDROM.
You do not need to install an ODBC driver if you plan to access your InterBase databases only from
InterBase itself or from products such as Delphi, C++Builder, and JBuilder that use either native InterBase
programming components or SQL-Links components to query InterBase data.
To access the ODBC Administrator on Windows machines, display the Control Panel and choose ODBC.
(In some cases, it appears as “32-Bit ODBC Administrator”).
NOTE
A user data source is a data source visible to the user, whereas a system data source is visible to the system.
1. Select Start | Settings | Control Panel and double-click the ODBC entry. (If you have the ODBC
SDK installed, you can run the “32bit ODBC Administrator” utility instead). The “ODBC Data Source
Administrator” window opens.
2. On the User DSN tab, click Add. The “Create New Data Source” window opens.
3. Select the InterBase ODBC driver and click Finish. The “InterBase ODBC Configuration” window opens.
4. Enter the following information:
The following example shows connecting using the TQuery component, and also displaying the results
of an SQL statement. Set the component properties as follows:

DatabaseName: Pick from the list the data source name created using the ODBC Administrator.
SQL: Enter the SQL statement to be executed; for example, "SELECT * FROM Table1".
Active: Set to True to connect; supply the user name and password on connection.
DataSet: Set to the name of the TQuery component, "query1" in this case.
DataSource: Set to the name of the TDataSource component, "datasource1" in this case.
5. Inspect the returned results from the SELECT statement, in the DBGrid area.
Working with UDFs and Blob Filters
1. UDF Overview
Just as InterBase has built-in SQL functions such as MIN(), MAX(), and CAST(), it also supports libraries of
user-defined functions (UDFs). User-defined functions (UDFs) are host-language programs for performing
customized, often-used tasks in applications. UDFs enable the programmer to modularize an application
by separating it into more reusable and manageable units. Possibilities include statistical, string, and date
functions. UDFs are extensions to the InterBase server and execute as part of the server process.
InterBase provides a library of UDFs, documented in The InterBase UDF Library section of this chapter.
You can access UDFs and Blob filters through isql or a host-language program. You can also access UDFs
in stored procedures and trigger bodies.
UDFs can be used in a database application anywhere that a built-in SQL function can be used. This chapter
describes how to create UDFs and how to use them in an application.
1. Write the function in any programming language that can create a shared library. Functions written
in Java are not supported.
2. Compile the function and link it to a dynamically linked or shared library.
3. Use DECLARE EXTERNAL FUNCTION to declare each UDF to each database in which you need to use it.
A sample declaration script is installed, for example, at x:\ProgramData\Embarcadero\InterBase\gd_db\examples\ib_udf.sql.
• The UDF library, named ib_udf.dll on Windows platforms and ib_udf on UNIX platforms, is located in
<InterBase_home>/UDF, and its functions are all implemented using the standard C library.
Put the ib_udf.dll into your UDF directory (if it is not already there) and run the ib_udf.sql script to define
the functions in it. You can read the comments in the script for what each does and expected input.
Once that script is applied to your database, you can call the UDF functions defined in that script. IBExpert
will list your defined UDFs in its UDF section.
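One way to apply the script is with isql's input switch; a sketch, where the database path and credentials are placeholders:

isql -user SYSDBA -password masterkey c:\dbs\employee.ib -i ib_udf.sql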
2.1. Writing a UDF
In the C language, a UDF is written like any standard function. The UDF can require up to ten input
parameters, and can return only a single C data value. A source code module can define one or more
functions and can use typedefs defined in the InterBase ibase.h header file. You must then include ibase.h
when you compile.
2.1.1. Specifying Parameters
A UDF can accept up to ten parameters corresponding to any InterBase data type. Array elements cannot
be passed as parameters. If a UDF returns a Blob, the number of input parameters is restricted to nine.
All parameters are passed to the UDF by reference.
Programming language data types specified as parameters must be capable of handling corresponding
InterBase data types. For example, the C function declaration for FN_ABS() accepts one parameter of type
double. The expectation is that when FN_ABS() is called, it will be passed a data type of DOUBLE PRECISION
by InterBase.
You can use a descriptor parameter to ensure that the InterBase server passes all of the information it has
about a particular data type to the function. A descriptor parameter assists the server in probing the data
type to see if any of its values are SQL NULL. For more information about the DESCRIPTOR parameter, see
Defining a Sample UDF with a Descriptor Parameter.
UDFs that accept Blob parameters require special data structure for processing. A Blob is passed by refer-
ence to a Blob UDF structure. For more information about the Blob UDF structure, see Writing a Blob UDF.
By default, return values are passed by reference. Numeric values can be returned by reference or by
value. To return a numeric parameter by value, include the optional BY VALUE keyword after the return
value when declaring a UDF to a database.
A UDF that returns a Blob does not actually define a return value. Instead, a pointer to a structure describing
the Blob to return must be passed as the last input parameter to the UDF. See Declaring a Blob UDF.
NOTE
A parameter passed as a descriptor cannot be used as a return type. This action will throw an error. For more information
about the DESCRIPTOR parameter, see Defining a Sample UDF with a Descriptor Parameter.
Note that the situation is different for calls to APIs. On UNIX, InterBase uses CDECL for all API calls. On
Windows platforms InterBase uses STDCALL for all functions that have a fixed number of arguments and
CDECL for functions that have a variable number of arguments. See “Programming with the InterBase API”
in the API Guide for a list of these functions.
When you declare a UDF that returns a C string, CHAR or VARCHAR, you must include the FREE_IT keyword
in the declaration in order to free the memory used by the return value.
2.2. Thread-safe UDFs
In InterBase, the server runs as a single multi-threaded process. This means that you must take some care
in the way you allocate and release memory when coding UDFs and in the way you declare UDFs. This
section describes how to write UDFs that handle memory correctly in the new single-process environment.
There are several issues to consider when handling memory in the single-process, multi-thread architec-
ture:
• UDFs must avoid static variables in order to be thread safe. You can use static variables only if you can
guarantee that only one user at a time will be accessing UDFs, since users running UDFs concurrently
will conflict in their use of the same static memory space. If you do return a pointer to static data,
you must not use FREE_IT.
• UDFs must allocate memory using ib_util_malloc() rather than static arrays in order to be thread-safe.
The UDF declaration employs the FREE_IT keyword because memory must be released by the same
runtime library that allocated it: FREE_IT frees memory using the Visual Studio runtime library, so the
memory must also be allocated by the Visual Studio runtime library, which is what ib_util_malloc()
provides. Similar problems may occur where ib_util_malloc() is used in a function that does not employ
FREE_IT in the declaration.
NOTE
If malloc() is employed in a UDF for which FREE_IT is specified, there will be a mismatch if the C++Builder runtime
library is used to allocate the memory, because the Visual Studio runtime library will be used to free it.
NOTE
In the case where "FREE_IT" is not specified in the declaration, it is fine to allocate memory using malloc() because
memory will be freed by the same runtime library.
• Memory allocated dynamically is not automatically released, since the process does not end. You
must use the FREE_IT keyword when you declare the UDF to the database (DECLARE EXTERNAL
FUNCTION).
In the following example for the user-defined function FN_LOWER(), the array must be global so that it
does not go out of scope when the function returns:
Multi-process Version:
char buffer[256];
char *fn_lower(char *ups)
{
. . .
return (buffer);
}
In the following version, the InterBase engine will free the buffer if the UDF is declared using the FREE_IT
keyword:
Thread-safe Version:
Notice that this example uses InterBase ib_util_malloc() function to allocate memory.
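A minimal sketch of such a thread-safe version, assuming a 256-byte result buffer; the return value is allocated with ib_util_malloc() so that the engine, through FREE_IT, releases it with the matching runtime library:

#include <ctype.h>
#include <ib_util.h>   /* for ib_util_malloc() */

char *fn_lower(char *ups)
{
    /* Allocate the result with ib_util_malloc(); FREE_IT will release it. */
    char *buffer = (char *) ib_util_malloc(256);
    char *p = buffer;

    while (*ups && (p - buffer) < 255)
        *p++ = (char) tolower(*ups++);
    *p = '\0';
    return buffer;
}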
The procedure for allocating and freeing memory for return values in a fashion that is both thread safe
and compiler independent is as follows:
1. In the UDF code, use InterBase ib_util_malloc() function to allocate memory for return values. This
function is located as follows:
Windows: <InterBase_home>/bin/ib_util.dll
Linux: /usr/lib/ib_util.so
Solaris: <InterBase_home>/lib/ib_util.so
2. Use the FREE_IT keyword in the RETURNS clause when declaring a function that returns dynamically
allocated objects. For example:
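A representative declaration; the function and entry point names here mirror the library examples later in this chapter:

DECLARE EXTERNAL FUNCTION LOWERCASE
CSTRING(255)
RETURNS CSTRING(255) FREE_IT
ENTRY_POINT 'fn_lower' MODULE_NAME 'ib_udf';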
The InterBase FREE_IT keyword allows InterBase users to write thread-safe UDF functions without memory leaks.
Note that it is not necessary to provide the extension of the module name.
3. Memory must be released by the same runtime library that allocated it.
• Include ibase.h in the source code if you use typedefs defined in the InterBase ibase.h header file. All
include (*.h) files are in the <InterBase_home>/SDK/include directory.
• Link to gds32.dll if you use calls to InterBase library functions.
• Linking and compiling:
Microsoft Visual C/C++: Link with <InterBase_home>/SDK/lib_ms/ib_util_ms.lib and include
<InterBase_home>/SDK/include/ib_util.h
When compiling applications with Microsoft C++, use options that link against the multithreaded DLL runtime.
Examples: The following command uses the Microsoft compiler to build a DLL that uses InterBase:
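A sketch; exact option sets vary by compiler version, and the paths are placeholders. Here /MD selects the multithreaded DLL runtime and /LD builds a DLL:

cl /MD /LD udflib.c /I<InterBase_home>\SDK\include <InterBase_home>\SDK\lib_ms\ib_util_ms.lib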
See the makefiles (makefile.bc and makefile.msc on Windows platforms, makefile on UNIX) in the Inter-
Base examples subdirectory for details on how to compile a UDF library.
NOTE
A library, in this context, is a shared object that typically has a dll extension on Windows platforms, and a so extension
on Solaris and Linux.
The InterBase examples directory contains sample makefiles (makefile.bc and makefile.msc on Windows
platforms, makefile on UNIX) that build a UDF function library from udflib.c.
NOTE
On some platforms, object files can be added directly to existing libraries. For more information, consult the plat-
form-specific compiler and linker documentation.
To delete a UDF from a library, follow the linker’s instructions for removing an object from a library. Deleting
a UDF from a library does not eliminate references to it in the database.
Declaring a UDF to a database informs the database about its location and properties:
You can use the following syntax to execute a DECLARE EXTERNAL FUNCTION statement:
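The statement takes this general form (a sketch consistent with the declarations shown later in this chapter):

DECLARE EXTERNAL FUNCTION <name>
[<datatype> | CSTRING (<n>) [, <datatype> | CSTRING (<n>) ...]]
RETURNS {<datatype> [BY VALUE] | CSTRING (<n>) | PARAMETER <n>} [FREE_IT]
ENTRY_POINT '<entry_name>' MODULE_NAME '<module_name>';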
The following describes the arguments you can append to a DECLARE EXTERNAL FUNCTION statement.
DESCRIPTOR Ensures that the InterBase server passes all the information it has about a particular data type to the
function via the Descriptor control structure.
RETURNS Specifies the return value of a function.
BY VALUE Specifies that a return value should be passed by value rather than by reference.
PARAMETER <n>
• Specifies that the <n>th input parameter is to be returned.
• Used when the return data type is BLOB.
FREE_IT Frees memory of the return value after the UDF finishes running.
• The library must reside on the same machine as the InterBase server.
• On any platform, the module can be referenced with no path name if it is in <InterBase_home>/UDF
or <InterBase_home>/intl.
• If the library is in a directory other than <InterBase_home>/UDF or <InterBase_home>/intl, you must
specify its location in the InterBase configuration file (ibconfig) using the
EXTERNAL_FUNCTION_DIRECTORY parameter.
Example:
/*********************************/
/* Descriptor control structure */
/*********************************/
#define DSC_VERSION2 2
#define DSC_CURRENT_VERSION DSC_VERSION2
#define DSC_null 1
#define DSC_no_subtype 2
#define DSC_nullable 4
#define dsc_ttype dsc_sub_type
#define DSC_GET_CHARSET( dsc ) ((( dsc )->dsc_ttype ) & 0x00FF)
#define DSC_GET_COLLATE( dsc ) ((( dsc )->dsc_ttype ) >> 8)
The following table describes the structure fields used in the example above.
For more details on the ranges of the data types and related information, see “Specifying Data Types” in
the Data Definition Guide. See also “Working with Dynamic SQL” in the API Guide.
The following table describes the dsc_types that you can use in a DECLARE EXTERNAL FUNCTION statement.
Explanation of dsc_dtype

dsc_dtype (numeric value): Explanation

dtype_text (1): Associated with the text data type; use the subtype for character set and collation information. This dtype maps to the SQL data types SQL_TEXT and CHAR.
dtype_string (2): Indicates a null-terminated string.
dtype_varying (3): Associated with the text data type; use the subtype for character set and collation information. This dtype maps to the SQL data types SQL_VARYING and VARCHAR.
dtype_short (8): Maps to the data type SQL_SHORT.
dtype_long (9): Maps to the data type SQL_LONG.
dtype_quad (10): Maps to the data type SQL_QUAD.
dtype_real (11): Maps to SQL_FLOAT.
dtype_double (12): Maps to SQL_DOUBLE.
dtype_d_float (13): Maps to SQL_D_FLOAT.
dtype_sql_date (14): Maps to SQL_TYPE_DATE.
dtype_sql_time (15): Maps to SQL_TYPE_TIME.
dtype_timestamp (16): Maps to SQL_TIMESTAMP.
dtype_blob (17): Maps to SQL_BLOB.
dtype_array (18): Maps to SQL_ARRAY.
dtype_int64 (19): Maps to SQL_INT64.
dtype_boolean (20): Maps to SQL_BOOLEAN.
The following table describes the dsc_flags that you can use in a DECLARE EXTERNAL FUNCTION state-
ment.
Explanation of dsc_flags

Flag name (numeric value): Explanation

DSC_null (1): Indicates that the data passed in the descriptor has an SQL NULL value.
DSC_no_subtype (2): Set for text, cstring, or varying dtypes where the subtype is not set. Normally the subtype for these dtypes is set to the collation and character set values.
DSC_nullable (4): Internal use; ignore for now.
DSC_system (8): Flags a format descriptor for system (not user) relations/tables.
Whenever a UDF returns a value by reference to dynamically allocated memory, you must declare it using
the FREE_IT keyword in order to free the allocated memory.
NOTE
You must not use FREE_IT with UDFs that return a pointer to static data, as in the multi-process version example
in Thread-safe UDFs.
• On any platform, the module can be referenced with no path name if it is in <<InterBase_home>>/UDF
or <<InterBase_home>>/intl.
• If the library is in a directory other than <InterBase_home>/UDF or <InterBase_home>/intl, you must
specify its location in the InterBase configuration file (ibconfig) using the EXTERNAL_FUNCTION_DIRECTORY
parameter. Give the complete pathname to the library, including a drive letter in the case of
a Windows server.
When either of the above conditions is met, InterBase finds the library. You do not need to specify a path
in the declaration.
NOTE
The library must reside on the same machine as the InterBase server.
To specify a location for UDF libraries in ibconfig, enter a line such as the following:
Windows:
EXTERNAL_FUNCTION_DIRECTORY "C:\<InterBase_home>\Mylibraries"
Unix:
EXTERNAL_FUNCTION_DIRECTORY "/usr/interbase/Mylibraries"
Note that it is no longer sufficient to include a complete path name for the module in the DECLARE EXTERNAL
FUNCTION statement. You must list the path in the EXTERNAL_FUNCTION_DIRECTORY parameter of the InterBase
configuration file if the library is not located in <InterBase_home>/UDF or <InterBase_home>/intl.
IMPORTANT
For security reasons, InterBase strongly recommends that you place your compiled libraries in directories that are ded-
icated to InterBase libraries. Placing InterBase libraries in directories such as C:\WINNT\system32 or /usr/lib permits
access to all libraries in those directories and is a serious security hole.
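For instance, a declaration along these lines returns its result by value, so nothing is dynamically allocated; the entry point name follows the library's naming pattern and is illustrative:

DECLARE EXTERNAL FUNCTION SQRT
DOUBLE PRECISION
RETURNS DOUBLE PRECISION BY VALUE
ENTRY_POINT 'IB_UDF_sqrt' MODULE_NAME 'ib_udf';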
This example does not need the FREE_IT keyword because only cstrings, CHAR, and VARCHAR return
types require memory allocation. The module must be in InterBase UDF directory or in a directory that
is named in the configuration file.
Example: The following isql script declares UDFs, ABS() and TRIM(), to the employee.gdb database:
CONNECT 'employee.gdb';
DECLARE EXTERNAL FUNCTION ABS
DOUBLE PRECISION
RETURNS DOUBLE PRECISION BY VALUE
ENTRY_POINT 'fn_abs' MODULE_NAME 'ib_udf';
DECLARE EXTERNAL FUNCTION TRIM
SMALLINT, CSTRING(256), SMALLINT
RETURNS CSTRING(256) FREE_IT
ENTRY_POINT 'fn_trim' MODULE_NAME 'ib_udf';
COMMIT;
Note that no extension is supplied for the module name. This creates a portable module. Windows ma-
chines add a dll extension automatically.
5. Calling a UDF
After a UDF is created and declared to a database, it can be used in SQL statements wherever a built-
in function is permitted. To use a UDF, insert its name in a SQL statement at an appropriate location, and
enclose its input arguments in parentheses.
For example, the following DELETE statement calls the ABS() UDF as part of a search condition that restricts
which rows are deleted:
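A sketch, with illustrative table and column names from the sample employee database:

DELETE FROM SALARY_HISTORY
WHERE ABS(PERCENT_CHANGE) > 50;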
UDFs can also be called in stored procedures and triggers. For more information, see “Working with Stored
Procedures” and “Working with Triggers” in the Data Definition Guide.
The following statement uses ABS() to guarantee that a returned column value is positive:
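For example (table and column names are illustrative):

SELECT ABS(BUDGET) FROM DEPARTMENT;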
The next statement uses DATEDIFF() in a search condition to restrict rows retrieved:
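A sketch, assuming a DATEDIFF UDF that returns the number of days between two dates; the signature shown is illustrative, not the library's documented one:

SELECT FULL_NAME FROM EMPLOYEE
WHERE DATEDIFF(HIRE_DATE, 'TODAY') > 365;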
The following statement uses TRIM() to remove blanks from the values in the COUNTRY column before
storing them back into the COUNTRIES table:
UPDATE COUNTRIES
SET COUNTRY = TRIM (2, ' ', COUNTRY);
For information on how to convert a BLOB to a VARCHAR , see the chapter Specifying Data Types of the
Data Definition Guide.
short (*blob_get_segment) (isc_blob_handle, byte *, ISC_USHORT, ISC_USHORT *)

The function returns a signed short integer.

blob_handle: Required. Uniquely identifies a Blob passed to a UDF or returned by it.
number_segments: Specifies the total number of segments in the Blob. Set this value to NULL if Blob data is not passed to a UDF.
max_seglen: Specifies the size, in bytes, of the largest single segment passed. Set this value to NULL if Blob data is not passed to a UDF.
The blob_concatenate() function shown as the entry point above is defined in the following Blob UDF
example. The blob_concatenate() function concatenates two blobs into a third blob.
{
char *buffer;
unsigned short length, b_length;
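A compact sketch of the overall shape of such a Blob UDF follows; the Blob typedef and the blob_put_segment member are assumptions based on the structure description earlier, not the exact ibase.h definitions:

#include <stdlib.h>

/* Assumed Blob control structure, per the member descriptions above. */
typedef struct blob {
    short (*blob_get_segment)();
    void *blob_handle;
    long number_segments;
    long max_seglen;
    long total_size;
    void (*blob_put_segment)();
} *Blob;

void blob_concatenate(Blob from1, Blob from2, Blob to)
{
    char *buffer;
    unsigned short length;
    long buf_size;

    /* Scratch buffer sized for the largest input segment; allocated with
       malloc() because it is freed locally, not returned via FREE_IT. */
    buf_size = (from1->max_seglen > from2->max_seglen)
                   ? from1->max_seglen : from2->max_seglen;
    buffer = (char *) malloc(buf_size);

    /* Copy each segment of the first Blob, then the second, to the output. */
    while ((*from1->blob_get_segment)(from1->blob_handle, buffer,
                                      buf_size, &length))
        (*to->blob_put_segment)(to->blob_handle, buffer, length);
    while ((*from2->blob_get_segment)(from2->blob_handle, buffer,
                                      buf_size, &length))
        (*to->blob_put_segment)(to->blob_handle, buffer, length);

    free(buffer);
}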
Important notes:
• Do not move the UDF library file from its installed location unless you also move the utility file that is
located in the same directory (ib_util.dll on Windows, ib_util.so on Solaris and Linux). The UDF
library file requires the utility file to function correctly.
• There is a script, ib_udf.sql, in the <<InterBase_home>>/examples/udf directory that declares all of the
functions listed below. If you want to declare only a subset of these, copy and edit the script file.
• Several of these UDFs must be declared using the new FREE_IT keyword if, and only if, they are written
in thread-safe form, using malloc to allocate dynamic memory.
• When trigonometric functions are passed inputs that are out of bounds, they return zero rather than
NaN.
Below is a list of the functions supplied in the InterBase UDF library. The description and code for each
function follow the table.
7.1. Abs
Returns the absolute value of a number.
7.2. Acos
Returns the arccosine of a number between -1 and 1; if the number is out of bounds it returns zero.
7.3. Ascii char
Returns the ASCII character corresponding to the value passed in.
7.4. Ascii val
Returns the ASCII value of the character passed in.
7.5. Asin
Returns the arcsine of a number between -1 and 1; returns zero if the number is out of range.
7.6. Atan
Returns the arctangent of the input value.
7.7. Atan2
Returns the arctangent of the first parameter divided by the second parameter.
7.8. Bin and
Returns the result of a binary AND operation performed on the two input values.
7.9. Bin or
Returns the result of a binary OR operation performed on the two input values.
7.10. Bin xor
Returns the result of a binary XOR operation performed on the two input values.
7.11. Ceiling
Returns a double value representing the smallest integer that is greater than or equal to the input value.
7.12. Cos
Returns the cosine of <x>. If <x> is greater than or equal to 2^63, or less than or equal to -2^63, there is a
loss of significance in the result of the call, and the function generates a _TLOSS error and returns a zero.
7.13. Cosh
Returns the hyperbolic cosine of <x>. If <x> is greater than or equal to 2^63, or less than or equal to -2^63, there
is a loss of significance in the result of the call, and the function generates a _TLOSS error and returns a zero.
7.14. Cot
Returns 1 over the tangent of the input value.
7.15. Div
Divides the two inputs and returns the quotient.
7.16. Floor
Returns a floating-point value representing the largest integer that is less than or equal to <x>.
7.17. Ln
Returns the natural log of a number.
7.18. Log
LOG(<x,y>) returns the logarithm base <x> of <y>.
7.19. Log10
Returns the logarithm base 10 of the input value.
7.20. Lower
Returns the input string as lowercase characters. This function works only with ASCII characters.
NOTE
This function can receive and return up to 80 characters, the limit on an InterBase character string.
7.21. Ltrim
Removes leading spaces from the input string.
NOTE
This function can receive and return up to 80 characters, the limit on an InterBase character string.
DECLARE EXTERNAL FUNCTION LTRIM
CSTRING(80)
RETURNS CSTRING(80) FREE_IT
ENTRY_POINT 'IB_UDF_ltrim' MODULE_NAME 'ib_udf';
7.22. Mod
Divides the two input parameters and returns the remainder.
7.23. Pi
Returns the value of pi = 3.14159...
7.24. Rand
Returns a random number between 0 and 1. The current time is used to seed the random number gen-
erator.
7.25. Rtrim
Removes trailing spaces from the input string.
NOTE
This function can receive and return up to 80 characters, the limit on an InterBase character string.
7.26. Sign
Returns 1, 0, or -1 depending on whether the input value is positive, zero or negative, respectively.
DECLARE EXTERNAL FUNCTION SIGN
DOUBLE PRECISION
RETURNS INTEGER BY VALUE
ENTRY_POINT 'IB_UDF_sign' MODULE_NAME 'ib_udf';
7.27. Sin
Returns the sine of <x>. If <x> is greater than or equal to 2^63, or less than or equal to -2^63, there is a loss
of significance in the result of the call, and the function generates a _TLOSS error and returns a zero.
7.28. Sinh
Returns the hyperbolic sine of <x>. If <x> is greater than or equal to 2^63, or less than or equal to -2^63, there
is a loss of significance in the result of the call, and the function generates a _TLOSS error and returns a zero.
7.29. Sqrt
Returns the square root of a number.
7.30. Strlen
Returns the length of the input string.
7.31. Substr
substr(<s,m,n>) returns the substring of <s> starting at position <m> and ending at position <n>.
NOTE
This function can receive and return up to 80 characters, the limit on an InterBase character string.
7.32. Tan
Returns the tangent of <x>. If <x> is greater than or equal to 2^63, or less than or equal to -2^63, there is a
loss of significance in the result of the call, and the function generates a _TLOSS error and returns a zero.
7.33. Tanh
Returns the hyperbolic tangent of <x>. If <x> is greater than or equal to 2^63, or less than or equal to
-2^63, there is a loss of significance in the result of the call, and the function generates a _TLOSS error
and returns a zero.
BLOB filters are user-written utility programs that convert data in BLOB columns from one subtype to
another. The subtype can be either an InterBase subtype or a user-defined one. Declare the filter to the
database with the DECLARE FILTER statement. For example:
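A declaration along these lines; the subtype numbers, names, and module are illustrative:

DECLARE FILTER DESC_FILTER
INPUT_TYPE 1
OUTPUT_TYPE -4
ENTRY_POINT 'desc_filter' MODULE_NAME 'filter_lib';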
isql automatically uses a built-in ASCII BLOB filter for a BLOB defined without a subtype, when asked to
display the BLOB. It also automatically filters BLOB data defined with subtypes to text, if the appropriate
filters have been defined.
You can use BLOB subtypes and BLOB filters to do a large variety of processing. For example, you can
define one BLOB subtype to hold:
• Compressed data and another to hold decompressed data. Then you can write BLOB filters for ex-
panding and compressing BLOB data.
• Generic code and other BLOB subtypes to hold system-specific code. Then you can write BLOB filters
that add the necessary system-specific variations to the generic code.
• Word processor input and another to hold word processor output. Then you can write a BLOB filter
that invokes the word processor.
For more information about creating and using BLOB filters, see the Embedded SQL Guide. For the com-
plete syntax of DECLARE FILTER, see the Language Reference Guide.
Designing Database Applications
The InterBase Express (IBX) components provide support for relational database applications. Relational
databases organize information into tables, which contain rows (records) and columns (fields). These tables
can be manipulated by simple operations known as relational calculus.
When designing a database application, you must understand how the data is structured. Based on that
structure, you can then design a user interface to display data to the user and allow the user to enter new
information or modify existing data.
This chapter introduces some common considerations for designing a database application and the deci-
sions involved in designing a user interface.
The following sections introduce issues to consider when designing a database application:
1.1. Local Databases
Local databases reside on your local drive or on a local area network. They use the InterBase proprietary
APIs for accessing the data. Often, they are dedicated to a single system. When they are shared by sev-
eral users, they use file-based locking mechanisms. Because of this, they are sometimes called file-based
databases.
Local databases can be faster than remote database servers because they often reside on the same system
as the database application.
Because they are file-based, local databases are more limited than remote database servers in the amount
of data they can store. Therefore, in deciding whether to use a local database, you must consider how
much data the tables are expected to hold.
Applications that use local databases are called single-tiered applications because the application and the
database share a single file system.
1.2. Remote Database Servers
Remote database servers are designed for access by several users at the same time. Instead of a file-based
locking system such as those employed by local databases, they provide more sophisticated multi-user
support, based on transactions.
Remote database servers hold more data than local databases. Sometimes, the data from a remote
database server does not even reside on a single machine, but is distributed over several servers.
Applications that use remote database servers are called two-tiered applications or multi-tiered applica-
tions because the application and the database operate on independent systems (or tiers).
1.3. Database Security
Databases often contain sensitive information. When users try to access protected tables, they are required
to provide a password. Once users have been authenticated, they can see only those fields (columns) for
which they have permission.
For access to InterBase databases on a server, a valid user name and password is required. Once the user
has logged in to the database, that username and password (and sometimes, role) determine which tables
can be used. For information on providing passwords to InterBase servers, see Controlling Server Login.
There is also a chapter on database security in the Operations Guide.
If you require your users to supply a password, you must consider when the password is required. If
you are using a local database but intend to scale up to a larger SQL server later, you may want to prompt
for the password before you access the table, even though it is not required until then.
If your application requires multiple passwords because you must log in to several protected systems or
databases, you can have your users provide a single master password which is used to access a table of
passwords required by the protected systems. The application then supplies passwords programmatically,
without requiring the user to provide multiple passwords.
In multi-tiered applications, you may want to use a different security model altogether. You can use CORBA
or MTS to control access to middle tiers, and let the middle tiers handle all details of logging into database
servers.
1.4. Transactions
A transaction is a group of actions that must all be carried out successfully on one or more tables in a
database before they are committed (made permanent). If any of the actions in the group fails, then all
actions are rolled back (undone).
Client applications can start multiple simultaneous transactions. InterBase provides full and explicit trans-
action control for starting, committing, and rolling back transactions. The statements and functions that
control starting a transaction also control transaction behavior.
InterBase transactions can be isolated from changes made by other concurrent transactions. For the life of
these transactions, the database appears to be unchanged except for the changes made by the transaction.
Records deleted by another transaction exist, newly stored records do not appear to exist, and updated
records remain in the original state.
For details on using transactions in database applications, see Using Transactions. For details on using
transactions in multi-tiered applications, see Creating Multi-tiered Applications in the Delphi Developer's
Guide. For more information about transactions refer to Understanding InterBase Transactions.
NOTE
This content originally comes from a 2004 BorCon session and was published at the Embarcadero Developer
Network; it is reproduced here for reference.
By: Bill Todd
Abstract: This session covers every aspect of transactions and save points and their effect on InterBase.
Topics include isolation levels, the wait option, the record version option, the OIT, OAT, OST and next
transaction; what they mean and when they change.
1.4.1.1. Introduction
A thorough understanding of transactions will let you get the data you want with maximum concurrent
access for all of your users. Understanding transactions will also let you avoid errors that can lead to poor
performance. The information in this paper applies to InterBase version 7.1 service pack 2 or later unless
otherwise noted.
1.4.1.2. What is a Transaction
A transaction is a group of changes to one or more tables in the database that are treated as a single
logical unit. All of the changes will succeed or all of the changes will fail as a unit. A transaction must exhibit
all of the characteristics shown in the following table.
Characteristic Description
Atomicity All of the changes that are part of the transaction will succeed or fail together as a single atomic
unit.
Consistency The database will always be left in a logically consistent state. If the server crashes all active trans-
actions will automatically roll back when the server is restarted. The database can never contain
changes that were made by a transaction that did not commit.
Isolation Changes made by other transactions are invisible to a transaction until the transaction that made
the change commits.
Durability When a transaction has been committed the changes are a permanent part of the database. They
cannot be lost or undone.
Note that some databases claim to implement transactions when, in fact, they do not. Paradox tables are
a good example. Paradox transactions exhibit neither consistency nor isolation. Paradox transactions fail
the consistency test because active transactions are not rolled back on restart after a crash thus leaving
the database in a logically inconsistent state. Paradox transactions fail the isolation test because they use
read uncommitted (sometimes called dirty read) transaction isolation which allows other transactions to
see uncommitted changes.
InterBase does not support the ANSI-standard transaction isolation levels. Instead, InterBase provides the
following transaction isolation levels: snapshot (concurrency), read committed, and table stability (consistency).
Most databases use a locking architecture. To provide a stable snapshot of the data in the database they
must apply table locks or index range locks to prevent other transactions from updating the data that
your transaction is using. This is not multi-user friendly. InterBase uses a versioning architecture to provide
a snapshot of the database without preventing other transactions from updating the data.
All access to data in an InterBase database must take place within a transaction. When a transaction starts,
it is assigned a unique number. Transaction numbers are 32-bit integers, so every two billion transactions
you must back up and restore your database to restart the numbering sequence. Each row version contains
the number of the transaction that created it.
For reasons that you will see in a moment, transactions must be able to determine the state of other trans-
actions. InterBase tracks the state of transactions on the transaction inventory pages (TIP). A transaction
can be in one of four states so two bits on the TIP are used for each transaction. The four states are:
• Active
• Committed
• Rolled back
• Limbo
Limbo requires some explanation. InterBase supports transactions that span multiple databases. Commit-
ting a transaction that includes changes to two or more databases is done using a process called two-
phase commit. Here is what happens when a multi-database transaction is committed.
1. The server where the commit is issued notifies all other servers involved that a commit has been
requested.
2. Each server sets the transaction state to limbo in each of its databases that is involved in the transaction
and notifies the controlling server that it is ready to commit.
3. When the controlling server receives a message from the other server(s) that they are ready to commit
it notifies each server to commit.
4. When each server receives the command to commit it changes the transaction state to committed.
If one of the servers crashes or if the network connection to one of the servers is lost between steps 2 and
3 above, the transaction will be left in limbo. More about this later.
When a transaction using snapshot isolation starts InterBase makes a copy of the TIP and gives the new
transaction a pointer to this copy. This enables the transaction to determine what the state of all other
transactions was when it started. When a read committed transaction starts it gets a pointer to the live TIP
(actually the TIP cache or TPC) so it can determine the current state of any transaction.
When a transaction updates a row it looks in its TIP to see if there are other active transactions with a
transaction number lower than its transaction number. If there are no older transactions that are active it
updates the row. If there are older transactions that are still active it creates a new version of the row and
enters its transaction number in the new version.
When a snapshot transaction reads a row it looks for the most recent version of that row that was created
by a transaction whose state is committed in the snapshot transaction's copy of the TIP. In other words,
the snapshot transaction finds the most recent version of the row that was already committed when the
snapshot transaction started. Another way to look at this process is that the snapshot transaction ignores
all of the changes made by transactions that committed after it started and returns the version of the row
that represents the state of the row when the snapshot transaction started. Consider the following example
of a row for which four versions exist. Based on the discussion that follows, the versions of row 123 are:

Version created by transaction 100: committed, but transaction 100 started after transaction 90
Version created by transaction 80: committed after transaction 90 started
Version created by transaction 60: rolled back
Version created by transaction 40: committed before transaction 90 started
Assume that a snapshot transaction with transaction number 90 attempts to read this row. The read trans-
action will not see the version of the row created by transaction 100 because the update that created
this version took place after transaction 90 began. Transaction 90 also will not read the version created
by transaction 80 although it has a lower transaction number because transaction 80 was not committed
when transaction 90 started. In this scenario I am assuming that transaction 80 committed after transaction
90 started but before transaction 100 started. Although the version for transaction 60 still exists on disk
transaction 60 has rolled back and rolled back versions are always ignored. Therefore, the version that
transaction 90 will read is the version created by transaction 40.
The same thing happens when a transaction using read committed transaction isolation reads a row. What
is different is that instead of looking at a copy of the TIP that it got when it started to determine the state
of the transaction that created each version of the row, a read committed transaction looks at the live
TIP to determine the current state of the transaction that created the row version. This means that a read
committed transaction will get the most recent version of the row that was created by a transaction whose
current state is committed regardless of what the state of the creating transaction was at the time the
reading transaction started. Therefore, in the example above, a read committed transaction would read
the version of row 123 created by transaction 100.
1.4.1.5. Transaction Options
Besides the isolation level InterBase transactions offer a number of options that you can set to get the
behavior you need. These options are divided into several categories and are described in the following
sections.
Access Mode
InterBase transactions can be either read only or read write. The default access mode is read write. To
set the access mode for a transaction using IBTransaction use the read or, optionally, the write keyword.
For example:
concurrency
read
specifies a read only concurrency (snapshot) transaction. You can specify the access mode with concurrency
(snapshot), read committed and consistency (table stability) transactions.
Read only transactions consume less overhead and, in the case of read committed read only transactions,
do not inhibit garbage collection. Not interfering with garbage collection can lead to better performance.
If you do not need to make changes to the database you should always make your transaction read only.
Lock Resolution
When your transaction updates an existing row your transaction places a row level write lock on that row
until the transaction ends. This prevents another transaction from updating the same row before your
transaction either commits or rolls back. The lock resolution setting determines what happens to the other
transaction when it tries to update a row that your transaction has locked.
If the lock resolution setting is wait, the other transaction will wait until your transaction ends, then it will
proceed. If the lock resolution setting is nowait the other transaction will return an error immediately when
it tries to update the locked row. Here is an example of the IBTransaction parameters for a read write
snapshot transaction using the wait option.
concurrency
write
wait
Note that write is not required since the default access mode is read write.
Table Reservation
Normally, if your transaction cannot proceed because a record it needs to update is locked by another
transaction it will either return an error or wait depending on the lock resolution setting. If the lock resolution
setting is nowait and your transaction generates an error due to a lock conflict your code would probably
rollback and try again.
Although rare, there might be a situation where a transaction performs many time consuming operations,
possibly requiring hours to complete. In this case it might not be acceptable to get most of the way through
the processing and have the transaction fail due to a lock conflict. If, for some reason, you cannot use the
wait option there is another alternative.
The table reservation mechanism lets you lock tables when your transaction starts so you are guaranteed
the access you need to every row in the tables for the life of your transaction. The disadvantage of table
reservation is that no other transaction can update the reserved tables for the life of your transaction. This
is why this option is rarely used. The four table reservation options are described in the following table.
Reservation: Description

Shared, lock_write: Write transactions with concurrency or read committed isolation can update the table. All transactions can read the table.
Shared, lock_read: Any write transaction can update the table. Any transaction can read the table.
Protected, lock_write: Other transactions cannot update the table. Concurrency and read committed transactions can read the table.
Protected, lock_read: No other transaction can update the table. Any transaction can read the table.
Here is an example of the IBTransaction.Params property for a concurrency read-write transaction that reserves
the employee table for protected read-only access.
concurrency
nowait
protected
lock_read=EMPLOYEE
Note that the table name is case sensitive. The following shows a read committed transaction that locks
the employee table for shared read access.
read_committed
nowait
shared
lock_read=EMPLOYEE
You can reserve as many tables as you need and the tables can have different reservations. For example:
concurrency
nowait
protected
lock_read=EMPLOYEE
shared
lock_read=SALARY_HISTORY
The following tables give all of the keywords you can use in the IBTransaction.Params property with a
description of each.
1.4.1.6. Ending a Transaction
You can end a transaction by committing it or by rolling it back. When you commit a transaction InterBase
changes the transaction's state on the TIP from active to committed.
When you roll back a transaction what happens depends on how many changes the transaction includes.
For reasons I will explain later in this paper, rolling back a transaction can degrade performance. Commit-
ting a transaction will never degrade performance. Therefore, if the number of changes in the transaction
is less than or equal to 100,000 InterBase undoes the changes to each row and commits the transaction. If
the number of changes is greater than 100,000 InterBase changes the transaction's state on the TIP from
active to rolled back and leaves any record versions created by the transaction in place.
Next Transaction
The next transaction is the number that will be assigned to the next transaction that starts.
OIT
The OIT is the oldest transaction whose state is other than committed. Put another way, the OIT will be
equal to the transaction number of whichever of the following three transactions is oldest.
• The OAT
• The oldest rolled back transaction
• The oldest limbo transaction
Normally the OIT and the OAT are the same transaction. When the OAT is committed both the OIT and the
OAT advance to the number of the new oldest active transaction. However, there are three things that can
cause the OIT to stop advancing.
1. Explicit rollback of a transaction containing more than 100,000 changes, which leaves the transaction
marked rolled back on the TIP, as described in Ending a Transaction.
2. Automatic rollback of transactions that were active when the database server crashed. This happens
automatically when InterBase is restarted.
3. A transaction stuck in limbo.
If 1 or 2 happens, the oldest transaction with a state other than committed will no longer be the OAT but
instead will be the oldest transaction that was rolled back. If a transaction is stuck in limbo due to a failed
two-phase commit, it can be the oldest transaction with a state other than committed.
OST
The oldest snapshot transaction is the lowest number that appears in the Oldest Snapshot field of any
active transaction. Here is how the Oldest Snapshot value of a transaction is set.
1. When a read only read committed transaction starts the Oldest Snapshot field is not assigned a value.
2. When a read/write read committed transaction starts, its snapshot number is the same as its trans-
action number.
3. When a snapshot transaction starts, its snapshot number is set to the transaction number of the oldest
active read/write transaction.
The OST only moves forward when a new transaction starts, when a commit retaining is done or when a
sweep is run. Commit retaining on a snapshot transaction that has performed updates commits the existing
transaction and starts a new transaction whose snapshot number is the same as the snapshot number of
the transaction it is replacing. This can lead to an OST that is less than the OIT.
1.4.1.8. Garbage Collection
Since InterBase creates new row versions whenever a row is updated and there are other active transactions
that might need the current row values there must be a way to remove row versions that are no longer
needed to keep the database from growing rapidly. This process is called garbage collection. Garbage
collection happens automatically each time a row is accessed. Garbage collection is done by a background
thread so the user does not see any performance degradation when accessing a table with a lot of garbage
row versions that need to be deleted.
A sweep is an operation that garbage collects every row in every table in the database. If you leave the
sweep interval set to its default of 20,000 transactions, a sweep will be triggered automatically when
OAT - OIT >= 20,000. This will only happen if the OIT gets stuck as described above. If the OIT is stuck due to
a rollback the sweep will remove all of the row versions belonging to all rolled back transactions as well as
all row versions up to the most recent committed row version whose transaction number is less than the
OAT. This will unstick the OIT and allow it to move up to the OAT.
You can change the sweep interval using IBConsole by choosing Database > Properties from the menu
and setting the sweep interval in the resulting dialog box.
You can also change the sweep interval using the gfix command line utility. The following command sets
the sweep interval to 10,000.
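For example (a sketch; the database path and credentials are placeholders, and -h sets the sweep interval):

gfix -h 10000 -user SYSDBA -password masterkey c:\dbs\employee.ib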
If the OIT is stuck because a transaction is in limbo garbage collection cannot remove any record version
created by a transaction greater than the OIT. The only way to fix this problem is to commit or rollback
the limbo transaction. Using IBConsole choose Database > Maintenance > Transaction Recovery
from the menu. See the Operations Guide for detailed instructions for recovering limbo transactions with
IBConsole.
The gfix -two_phase switch decides automatically whether to commit or roll back each limbo transaction. To see
what choice gfix will make without actually committing or rolling back the transaction, use the -list switch. To
commit all limbo transactions or a specific transaction, use the -commit switch. To roll back one or all limbo
transactions, use the -rollback switch. For detailed information see the Operations Guide.
Since the events that can stick the OIT and cause an automatic sweep are very rare with InterBase 7.1
SP 1 and later, automatic sweeps rarely happen. Automatic garbage collection cleans up rows as they are
accessed but in many database applications many rows are accessed rarely. This means that unneeded
record versions can remain in the database for a long time. The solution is to run a sweep manually on
a regular basis.
You can run a sweep from IBConsole by choosing Database > Maintenance > Sweep from the menu.
You can also run a sweep using gfix with the following command.
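For example (the database path and credentials are placeholders):

gfix -sweep -user SYSDBA -password masterkey c:\dbs\employee.ib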
If you want to sweep the database regularly just create a batch file that contains the command above and
use Windows Scheduler to run it automatically at the interval you choose.
1.4.1.9. Possible Problems
There are a few transaction related problems that can occur. The following sections look at what they are,
how to prevent them and how to fix them if they occur.
Neither automatic garbage collection nor a sweep can remove any record version created by a transaction
whose number is greater than the OIT. So, if the OIT gets stuck, garbage collection stops. When garbage
collection stops and new transactions continue to be created performance begins to suffer for three rea-
sons.
1. Retrieving a row takes longer if there are many record versions that must be examined to find the
right one.
2. Database size increases.
3. The TIP gets larger.
The TIP gets larger because the TIP contains the state of every transaction with a number equal to or
greater than the OIT. If the OIT stops advancing and new transactions are being created, the TIP has more
transactions to track and grows in size. This takes more memory on the database server. It also makes
starting a transaction that uses snapshot (concurrency) transaction isolation slower, because each snapshot
transaction gets its own copy of the TIP when it starts. As the TIP gets larger, it takes both more memory
and more time to make the copy. The higher the transaction volume, the faster performance will degrade.
Fortunately, this is a very rare problem with InterBase 7.1 and later because the events that will stick the
OIT are rare. If the OIT does get stuck, running a sweep will correct the situation unless the cause is a limbo
transaction. If the cause is a limbo transaction, you can correct the problem by committing or rolling back
the limbo transaction. If you roll back the limbo transaction, you will need to run a sweep to get the OIT
moving again.
When the OAT gets stuck because a transaction is left active, the OIT also gets stuck. Therefore, the same
performance degradation will happen for the same reasons. However, with InterBase 7.1 SP 1 and later, not
every active transaction will cause the OAT to stick. Here are the rules.
1. A read only read committed transaction can remain open indefinitely without causing the OAT to stick.
2. A read/write read committed transaction can remain open indefinitely as long as you call commit
retaining each time the transaction updates the database.
3. Any snapshot transaction will stick the OAT. Snapshot transactions should be committed as soon as
possible to prevent performance degradation.
Note that these rules apply only to InterBase 7.1 SP 1 and later. In earlier versions of InterBase, any active
transaction will stick the OAT.
1.4.1.10. Savepoints
InterBase 7.1 introduced SQL 92 standard savepoints. A savepoint is a named point in a transaction that
you can roll back to without rolling back the entire transaction. Savepoints are particularly useful in stored
procedures and triggers. You can create a savepoint with the following statement.
SAVEPOINT MY_SAVEPOINT
where MY_SAVEPOINT is the savepoint's name. The name must be unique within the execution context, which
is an application, a trigger, or a stored procedure. For example, you could have many savepoints with the
same name if one was in your application and the others were in different stored procedures and triggers.
If you no longer need a savepoint, release it as follows.
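RELEASE SAVEPOINT MY_SAVEPOINT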
where MY_SAVEPOINT is the name of the savepoint to release. To roll back to a savepoint, use the following
command.
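ROLLBACK TO SAVEPOINT MY_SAVEPOINT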
When you roll back to a savepoint, the savepoint is also released. If you roll back to a savepoint, all savepoints
created after the one you roll back to are also rolled back and released. Use savepoints any time you
need to undo some of the changes within a transaction. For example, you could create a savepoint at the
beginning of a stored procedure, and if the stored procedure is unable to complete its work your code
could roll back to that savepoint before exiting the stored procedure.
combination of transactions. For detailed documentation on ISQL, see chapter 10 of the Operations Guide.
For a brief list of ISQL's command line switches, enter:
isql -?
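To start ISQL and connect to a database, you enter a command along these lines (the database path here is a placeholder):

isql c:\data\employee.ib -user sysdba -password masterkey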
where sysdba is the username and masterkey is the password. When ISQL starts and connects to the
database, you will see the ISQL command prompt. Note that ISQL first displays the name of the database
you are attached to and the user you are logged in as. Then it displays the ISQL> prompt to show that it
is ready for a command. All ISQL commands must end with the current terminator character which, by
default, is the semicolon.
There are two ways to close ISQL. The EXIT command commits the current transaction and closes ISQL.
The QUIT command rolls back the current transaction and exits ISQL.
When ISQL connects to a database, it automatically starts a transaction. You can end the current transaction
by issuing the COMMIT or ROLLBACK command. To start a new transaction, issue the SET TRANSACTION command.
The following table shows the options for SET TRANSACTION.
Command                                                             Description
READ WRITE or READ ONLY                                             Specifies the transaction's access mode
WAIT or NO WAIT                                                     Specifies how the transaction will handle lock conflicts
ISOLATION LEVEL SNAPSHOT [TABLE STABILITY]                          Specifies the transaction isolation level
  or READ COMMITTED RECORD_VERSION
  or READ COMMITTED NO RECORD_VERSION
RESERVING <table name> FOR [SHARED or PROTECTED] [READ or WRITE]    Specifies the tables to reserve and the locks to be applied
For example, the statement

SET TRANSACTION READ ONLY NOWAIT ISOLATION LEVEL READ COMMITTED;

starts a read only read committed transaction that will return an error immediately if it needs to lock a row
that is already locked by another transaction. Of course, the NOWAIT option is superfluous; since this is a
read only transaction, it will never lock a row.
1.4.1.12. Monitoring Transactions
InterBase 7 introduced a set of temporary tables that provide information about the attachments, trans-
actions and statements the server is executing in the database you are connected to. The temporary ta-
bles give you the ability to analyze what the server is doing as it runs and, if necessary, force transactions
or attachments to terminate. InterBase 7.1 integrates Craig Stuntz's Performance Monitor into IBConsole.
This gives you a visual display of the information in the temporary tables. The following image shows the
Performance Monitor with the Transactions tab selected.
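If you prefer to query the temporary tables directly, a statement such as the following (assuming the InterBase TMP$TRANSACTIONS system temporary table) lists the transactions the server is currently tracking:

SELECT * FROM TMP$TRANSACTIONS;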
Here two transactions are active. The second one belongs to the SYSDBA user and is the read only read
committed transaction used by Performance Monitor. The first transaction is also a read only read committed
transaction that belongs to user BILL.
The two toolbar buttons let you close the selected transaction and find the attachment the transaction
belongs to. Clicking the Find Attachment button takes you to the Attachments page and highlights the
attachment that owns the transaction as shown below.
To see summary information for all of the transactions that are active in the database switch to the Database
tab and scroll to the end of the grid as shown below.
The Transactions entry shows the total number of active transactions. A little farther down you can see the
Next transaction number, the oldest active transaction number (OAT), the oldest interesting transaction
number (OIT) and the oldest snapshot transaction number (OST). This display is particularly useful if you
suspect that either the OIT or the OAT is stuck. Since Performance Monitor automatically refreshes the
display, you can watch as transactions are started and committed and tell easily if either the OIT or the
OAT is not advancing.
1.4.1.13. Summary
Understanding transactions and how they interact is the key to writing high concurrency applications that
provide the right view of the data for each task. The IBConsole Performance Monitor makes it easy
to see what transactions are active and their properties. With this information you can diagnose any
concurrency problem.
For example, if you frequently develop financial applications, you may create a number of specialized
field attribute sets describing different display formats for currency. When you create datasets for your
application at design time, rather than using the Object Inspector to set the currency fields in each dataset
by hand, you can associate those fields with an extended field attribute set in the data dictionary. Using
the data dictionary ensures a consistent data appearance within and across the applications you create.
In a Client/Server environment, the Data Dictionary can reside on a remote server for additional sharing
of information.
To learn how to create extended field attribute sets from the Fields editor at design time, and how to
associate them with fields throughout the datasets in your application, see Creating Attribute Sets for
Field Components in the Delphi Developer's Guide. To learn more about creating a data dictionary
and extended field attributes with the SQL and Database Explorers, see their respective online help files.
A programming interface to the Data Dictionary is available in the drintf unit (located in the lib directory).
This interface supplies the following methods:
Routine Use
DictionaryActive Indicates if the data dictionary is active.
DictionaryDeactivate Deactivates the data dictionary.
IsNullID Indicates whether a given ID is a null ID.
FindDatabaseID Returns the ID for a database given its alias.
FindTableID Returns the ID for a table in a specified database.
FindFieldID Returns the ID for a field in a specified table.
FindAttrID Returns the ID for a named attribute set.
GetAttrName Returns the name of an attribute set given its ID.
GetAttrNames Executes a callback for each attribute set in the dictionary.
GetAttrID Returns the ID of the attribute set for a specified field.
NewAttr Creates a new attribute set from a field component.
UpdateAttr Updates an attribute set to match the properties of a field.
CreateField Creates a field component based on stored attributes.
UpdateField Changes the properties of a field to match a specified attribute set.
AssociateAttr Associates an attribute set with a given field ID.
UnassociateAttr Removes an attribute set association for a field ID.
GetControlClass Returns the control class for a specified attribute ID.
QualifyTableName Returns a fully qualified table name (qualified by user name).
QualifyTableNameByName Returns a fully qualified table name (qualified by user name).
HasConstraints Indicates whether the dataset has constraints in the dictionary.
UpdateConstraints Updates the imported constraints of a dataset.
UpdateDataset Updates a dataset to the current settings and constraints in the dictionary.
2. Database Architecture
Database applications are built from user interface elements, components that manage the database or
databases, and components that represent the data contained by the tables in those databases (datasets).
How you organize these pieces is the architecture of your database application.
By isolating database access components in data modules, you can develop forms in your database appli-
cations that provide a consistent user interface. By storing links to well-designed forms and data modules
in the Object Repository, you and other developers can build on existing foundations rather than starting
over from scratch for each new project. Sharing forms and modules also makes it possible for you to
develop corporate standards for database access and application interfaces.
Many aspects of the architecture of your database application depend on the number of users who will be
sharing the database information and the type of information you are working with.
When writing applications that use information that is not shared among several users, you may want
to use a local database in a single-tiered application. This approach can have the advantage of speed
(because data is stored locally), and does not require the purchase of a separate database server and
expensive site licenses. However, it is limited in how much data the tables can hold and the number of
users your application can support.
Writing a two-tiered application provides more multi-user support and lets you use large remote databases
that can store far more information.
When the database information includes complicated relationships between several tables, or when the
number of clients grows, you may want to use a multi-tiered application. Multi-tiered applications include
middle tiers that centralize the logic that governs database interactions so that there is centralized control
over data relationships. This allows different client applications to use the same data while ensuring that the
data logic is consistent. They also allow for smaller client applications because much of the processing is
off-loaded onto middle tiers. These smaller client applications are easier to install, configure, and maintain
because they do not include the database connectivity software. Multi-tiered applications can also improve
performance by spreading the data-processing tasks over several systems.
The VCL data-aware components make it easy to write scalable applications by abstracting the behavior
of the database and the data stored by the database. Whether you are writing a single-tiered, two-tiered,
or multi-tiered application, you can isolate your user interface from the data access layer as illustrated in
the following picture:
A form represents the user interface, and contains data controls and other user interface elements. The
data controls in the user interface connect to datasets which represent information from the tables in the
database. A data source links the data controls to these datasets. By isolating the data source and datasets
in a data module, the form can remain unchanged as you scale your application up. Only the datasets
must change.
A flat-file database application is easily scaled to the client in a multi-tiered application because both
architectures use the same client dataset component. In fact, you can write an application that acts as both
a flat-file application and a multi-tiered client (see Using the Briefcase Model).
If you plan to scale your application up to a three-tiered architecture eventually, you can write your one- or
two-tiered application with that goal in mind. In addition to isolating the user interface, isolate all logic that
will eventually reside on the middle tier so that it is easy to replace at a later time. You can even connect
your user interface elements to client datasets (used in multi-tiered applications), and connect them to
local versions of the datasets in a separate data module that will eventually move to the middle tier. If you
do not want to introduce this artifice of an extra dataset layer in your one- and two-tiered applications,
it is still easy to scale up to a three-tiered application at a later date. See Scaling Up to a Three-tiered
Application for more information.
A single application comprises the user interface and incorporates the data access mechanism, and the
data for the dataset components that represent database tables is stored in a flat file. The following picture
illustrates this:
For more information on building single-tiered database applications, see Building Multi-tiered Applica-
tions.
In this model, all applications are database clients. A client requests information from and sends information
to a database server. A server can process requests from many clients simultaneously, coordinating access
to and updating data.
You can use Delphi to create both client applications and application servers. The client application uses
standard data-aware controls connected through a data source to one or more client dataset compo-
nents to display data for viewing and editing. Each client dataset communicates with an application server
through an IProvider interface that is part of the application server’s remote data module. The client ap-
plication can use a variety of protocols (TCP/IP, DCOM, MTS, or CORBA) to establish this communication.
The protocol depends on the type of connection component used in the client application and the type
of remote data module used in the server application.
The application server creates the IProvider interfaces in one of two ways. If the application server includes
any provider objects, then these objects are used to create the IProvider interface. This is the method
illustrated in the previous figure. Using a provider component gives an application more control over the
interface. All data is passed between the client application and the application server through the interface.
The interface receives data from and sends updates to conventional datasets, and these components
communicate with a database server.
Usually, several client applications communicate with a single application server in the multi-tiered model.
The application server provides a gateway to your databases for all your client applications, and it lets
you provide enterprise-wide database tasks in a central location, accessible to all your clients. For more
information about creating and using a multi-tiered database application, see Creating Multi-tiered
Applications - Overview in the Delphi Developer's Guide.
Data-aware controls get data from and send data to a data source component, TDataSource. A data source
component acts as a conduit between the user interface and a dataset component that represents a set of
information from the tables in a database. Several data-aware controls on a form can share a single data
source, in which case the display in each control is synchronized so that as the user scrolls through records,
the corresponding value in the fields for the current record is displayed in each control. An application’s
data source components usually reside in a data module, separate from the data-aware controls on forms.
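As a minimal sketch of this wiring, with illustrative component names (IBQuery1, DataSource1, DBGrid1):

DataSource1.DataSet := IBQuery1;   // the data source points at the dataset
DBGrid1.DataSource := DataSource1; // the data-aware control points at the data source
IBQuery1.Open;                     // rows from the dataset appear in the grid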
The data-aware controls you add to your user interface depend on what type of data you are displaying
(plain text, formatted text, graphics, multimedia elements, and so on). Also, your choice of controls is
determined by how you want to organize the information and how (or if) you want to let users navigate
through the records of datasets and add or edit data.
The following sections introduce the components you can use for various types of user interface:
Applications that display a single record are usually easy to read and understand because all database
information is about the same thing (in the previous case, the same order). The data-aware controls in
these user interfaces represent a single field from a database record. The Data Controls page of the Tool
palette provides a wide selection of controls to represent different kinds of fields. For more information
about specific data-aware controls, see the Using Data Controls chapter of the Delphi Developer's Guide.
To display multiple records, use a grid control. Grid controls provide a multi-field, multi-record view of
data that can make your application’s user interface more compelling and effective. They are discussed in
Viewing and Editing Data with TDBGrid and Creating a Grid That Contains Other Data-aware Controls in
the Using Data Controls chapter of the Delphi Developer's Guide.
You may want to design a user interface that displays both fields from a single record and grids that
represent multiple records. There are two models that combine these two approaches:
• Master-detail forms: You can represent information from both a master table and a detail table by
including both controls that display a single field and grid controls. For example, you could display
information about a single customer with a detail grid that displays the orders for that customer (see
the sketch after the note below). For information about linking the underlying tables in a master-detail
form, see Creating master/detail forms.
• Drill-down forms: In a form that displays multiple records, you can include single field controls that
display detailed information from the current record only. This approach is particularly useful when
the records include long memos or graphic information. As the user scrolls through the records of
the grid, the memo or graphic updates to represent the value of the current record. Setting this up
is very easy. The synchronization between the two displays is automatic if the grid and the memo or
image control share a common data source.
NOTE
It is generally not a good idea to combine these two approaches on a single form. While the result can sometimes be
effective, it is usually confusing for users to understand the data relationships.
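A minimal master-detail sketch, assuming illustrative component and field names (CustomerTable, OrdersTable, CustomerSource, CUST_NO):

CustomerSource.DataSet := CustomerTable;    // data source for the master table
OrdersTable.MasterSource := CustomerSource; // link the detail table to the master
OrdersTable.MasterFields := 'CUST_NO';      // field the two tables share

As the user scrolls through customers, the detail dataset automatically shows only the orders whose CUST_NO matches the current customer.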
3.3. Analyzing Data
Some database applications do not present database information directly to the user. Instead, they analyze
and summarize information from databases so that users can draw conclusions from the data.
The TDBChart component on the Data Controls page of the Tool Palette lets you present database infor-
mation in a graphical format that enables users to quickly grasp the import of database information.
Also, some versions of Delphi include a Decision Cube page on the Tool Palette. It contains six components
that let you perform data analysis and cross-tabulations on data when building decision support appli-
cations. For more information about using the Decision Cube components, see “Using decision support
components” in the Delphi Developer’s Guide.
If you want to build your own components that display data summaries based on various grouping criteria,
you can use maintained aggregates with a client dataset. For more information about using maintained
aggregates, see Using Maintained Aggregates in the Using Client Datasets - Overview chapter of the
Delphi Developer's Guide.
The data available to your database application is controlled by your choice of dataset component.
Datasets abstract the properties and methods of a database table, so that you do not need to make major
alterations depending on whether the data is stored in a database table or derived from one or more
tables in the database. For more information on the common properties and methods of datasets, see
Understanding Datasets.
Your application can contain more than one dataset. Each dataset represents a logical table. By using
datasets, your application logic is buffered from restructuring of the physical tables in your databases. You
might need to alter the type of dataset component, or the way it specifies the data it contains, but the rest
of your user interface can continue to work without alteration.
• Table components: Tables (TIBTable) correspond directly to the underlying tables in the database.
You can adjust which fields appear (including adding lookup fields and calculated fields) by using
persistent field components. You can limit the records that appear using ranges or filters. Tables are
described in more detail in Working with Tables. Persistent fields are described in “Persistent field
components” in the Delphi Developer’s Guide. Ranges and filters are described in Working with a
subset of data.
• Query components: Queries (TIBQuery, TIBDataSet, and TIBSQL) provide the most general mechanism
for specifying what appears in a dataset. You can combine the data from multiple tables using
joins, and limit the fields and records that appear based on any criteria you can express in SQL (see
the sketch at the end of this list). For more information on queries, see Working with Queries.
• Stored procedures: Stored procedures (TIBStoredProc) are sets of SQL statements that are named and
stored on an SQL server. If your database server defines a remote procedure that returns the dataset
you want, you can use a stored procedure component. For more information on stored procedures,
see Working with Stored Procedures.
• Client datasets: Client datasets cache the records of the logical dataset in memory. Because of that,
they can only hold a limited number of records. Client datasets are populated with data in one of
two ways: from an application server or flat-file data stored on disk. When using a client dataset to
represent flat-file data, you must create the underlying table programmatically. For more information
about client datasets, see Creating and using a client dataset in the Delphi Developer's Guide.
• Custom datasets: You can create your custom descendants of TDataSet to represent a body of data that
you create or access in code you write. Writing custom datasets allows you the flexibility of managing
the data using any method you choose while still letting you use the VCL data controls to build your
user interface. For more information about creating custom components, see Overview of Component
Creation in the Delphi Developer's Guide.
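A brief sketch of the query approach mentioned above, assuming illustrative component, table, and column names:

IBQuery1.Database := IBDatabase1;
IBQuery1.SQL.Text :=
  'SELECT c.CUST_NO, c.CUSTOMER, o.PO_NUMBER ' +
  'FROM CUSTOMER c JOIN SALES o ON o.CUST_NO = c.CUST_NO';
IBQuery1.Open;  // the joined result set becomes the dataset's rows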
Building Multi-tiered Applications
One- and two-tiered applications include the logic that manipulates database information in the same
application that implements the user interface. Because the data manipulation logic is not isolated in a
separate tier, these types of applications are most appropriate when there are no other applications sharing
the same database information. Even when other applications share the database information, these types
of applications are appropriate if the database is very simple, and there are no data semantics that must
be duplicated by all applications that use the data.
You may want to start by writing a one- or two-tiered application, even when you intend to eventually
scale up to a multi-tiered model as your needs increase. This approach lets you avoid having to develop
data manipulation logic up front so that the application server can be available while you are writing
the user interface. It also allows you to develop a simpler, cheaper prototype before investing in a large,
multi-system development project. If you intend to eventually scale up to a multi-tiered application, you
can isolate the data manipulation logic so that it is easy to move it to a middle tier at a later date.
The InterBase page of the Tool Palette contains various dataset components that represent the tables
contained in a database or logical tables constructed out of data stored in those database tables. See
Selecting What Data to Show for more information about these dataset components. You must include a
dataset component in your application to work with database information.
Each dataset component on the InterBase page has a published Database property that specifies the
database which contains the table or tables that hold the information in that dataset. When setting up
your application, you must use this property to specify the database before you can bind the dataset to
specific information contained in that database. What value you specify depends on whether or not you
are using explicit database components. Database components (TIBDatabase) represent a database in
your application. If you do not add a database component explicitly, a temporary one is created for you
automatically, based on the value of the Database property. If you are using explicit database components,
Database is the value of the Database property of the database component. See Persistent and Temporary
Database Components for more information about using database components.
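For example, with an explicit database component (the component names and file path are illustrative):

IBDatabase1.DatabaseName := 'C:\data\employee.ib'; // the database the component represents
IBTable1.Database := IBDatabase1;                  // bind the dataset to that database
IBTable1.TableName := 'CUSTOMER';
IBTable1.Open;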
1.1. Using Transactions
A transaction is a group of actions that must all be carried out successfully on one or more tables in a
database before they are committed (made permanent). If one of the actions in the group fails, then all
actions are rolled back (undone). By using transactions, you ensure that the database is not left in an
inconsistent state when a problem occurs completing one of the actions that make up the transaction.
For example, in a banking application, transferring funds from one account to another is an operation
you would want to protect with a transaction. If, after decrementing the balance in one account, an error
occurred incrementing the balance in the other, you want to roll back the transaction so that the database
still reflects the correct total balance.
By default, implicit transaction control is provided for your applications. When an application is under
implicit transaction control, a separate transaction is used for each record in a dataset that is written to
the underlying database. Implicit transactions guarantee both a minimum of record update conflicts and
a consistent view of the database. On the other hand, because each row of data written to a database
takes place in its own transaction, implicit transaction control can lead to excessive network traffic and
slower application performance. Also, implicit transaction control will not protect logical operations that
span more than one record, such as the transfer of funds described previously.
If you explicitly control transactions, you can choose the most effective times to start, commit, and roll
back your transactions. When you develop applications in a multi-user environment, particularly when your
applications run against a remote SQL server, you should control transactions explicitly.
NOTE
Ideally, a transaction should only last as long as necessary. The longer a transaction is active, the more
simultaneous users that access the database, and the more concurrent, simultaneous transactions that
start and end during the lifetime of your transaction, the greater the likelihood that your transaction will
conflict with another when you attempt to commit your changes.
1. Start the transaction by calling the transaction component's StartTransaction method:
IBTransaction.StartTransaction;
2. Once the transaction is started, all subsequent database actions are considered part of the transaction
until the transaction is explicitly terminated. You can determine whether a transaction is in process by
checking the transaction component’s InTransaction property.
3. When the actions that make up the transaction have all succeeded, you can make the database
changes permanent by using the transaction component’s Commit method:
IBTransaction.Commit;
Alternately, you can commit the transaction while retaining the current transaction context using the Com-
mitRetaining method:
IBTransaction.CommitRetaining;
Commit is usually attempted in a try...except statement. That way, if a transaction cannot commit suc-
cessfully, you can use the except block to handle the error and retry the operation or to roll back the
transaction.
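A minimal sketch of this pattern, assuming a transaction component named IBTransaction:

IBTransaction.StartTransaction;
try
  { ... post dataset changes here ... }
  IBTransaction.Commit;   // make the changes permanent
except
  IBTransaction.Rollback; // undo everything if the commit fails
  raise;                  // re-raise so callers can handle the error
end;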
If an error occurs when making the changes that are part of the transaction, or when trying to commit the
transaction, you will want to discard all changes that make up the transaction. To discard these changes,
use the database component’s Rollback method:
IBTransaction.Rollback;
You can also roll back the transaction while retaining the current transaction context using the
RollbackRetaining method:
IBTransaction.RollbackRetaining;
You typically call Rollback in:
• Exception handling code when you cannot recover from a database error.
• Button or menu event code, such as when a user clicks a Cancel button.
1.2. Caching Updates
InterBase Express (IBX) provides support for caching updates. When you cache updates, your application
retrieves data from a database, makes all changes to a local, cached copy of the data, and applies the
cached changes to the dataset as a unit. Cached updates are applied to the database in a single transaction.
Caching updates can minimize transaction times and reduce network traffic. However, cached data is local
to your application and is not under transaction control. This means that while you are working on your
local, in-memory copy of the data, other applications can be changing the data in the underlying database
table. They also cannot see any changes you make until you apply the cached updates. Because of this,
cached updates may not be appropriate for applications that work with volatile data, as you may create
or encounter too many conflicts when trying to merge your changes into the database.
You can tell datasets to cache updates using the CachedUpdates property. When the changes are complete,
they can be applied by the dataset component, by the database component, or by a special update
object. When changes cannot be applied to the database without additional processing (for example,
when working with a joined query), you must use the OnUpdateRecord event to write changes to each
table that makes up the joined view.
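A sketch of the basic flow, assuming an IBX dataset and transaction with illustrative names:

IBTable1.CachedUpdates := True;  // edits now accumulate locally
{ ... the user inserts, updates, and deletes records ... }
try
  IBTable1.ApplyUpdates;           // write all cached changes to the database
  IBTransaction1.CommitRetaining;  // make them permanent, keep the context
except
  IBTable1.CancelUpdates;          // discard the cached changes on failure
  raise;
end;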
For more information on caching updates, see Working with Cached Updates.
NOTE
If you are caching updates, you may want to consider moving to a multi-tiered model to have greater control over the
application of updates. For more information about the multi-tiered model, see Creating Multi-tiered Applications in
the Delphi Developer's Guide.
You can create tables either at design time, in the Forms Designer, or at runtime. To create a table, you
must specify the fields in the table using the FieldDefs property, add any indexes using the IndexDefs
property, and call the CreateTable method (or select the Create Table command from the table’s context
menu). For more detailed instructions on creating tables, see Creating a table.
You can add indexes to an existing table using the AddIndex method of TIBTable.
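A runtime sketch of table creation, assuming illustrative component, table, and field names:

IBTable1.Database := IBDatabase1;
IBTable1.TableName := 'CONTACTS';
IBTable1.FieldDefs.Clear;
IBTable1.FieldDefs.Add('NAME', ftString, 40, True);   // field name, type, size, required
IBTable1.FieldDefs.Add('PHONE', ftString, 20, False);
IBTable1.IndexDefs.Add('', 'NAME', [ixPrimary]);      // primary index on NAME
IBTable1.CreateTable;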
NOTE
To create and restructure tables on remote servers at design time, use the SQL Explorer and restructure the table using
SQL.
NOTE
The briefcase model is sometimes called the disconnected model, or mobile computing.
Suppose, however, that your on-site company database contains valuable customer contact data that your
sales representatives can use and update in the field. In this case, it would be useful if your sales reps could
download some or all of the data from the company database, work with it on their laptops as they fly
across the country, and even update records at existing or new customer sites. When the sales reps return
on-site, they could upload their data changes to the company database for everyone to use. The ability
to work with data offline and then apply updates online at a later date is known as the “briefcase” model.
By using the briefcase model, you can take advantage of the client dataset component’s ability to read
and write data to flat files to create client applications that can be used both online with an application
server, and off-line, as temporary one-tiered applications.
1. Create a multi-tiered server application as described in Creating Multi-tiered Applications in the Delphi
Developer's Guide.
2. Create a flat-file database application as your client application. Add a connection component and
set the RemoteServer property of your client datasets to specify this connection component. This
allows them to talk to the application server created in step 1. For more information about connection
components, see Connecting to the Application Server in the Delphi Developer's Guide.
3. In the client application, try on start-up to connect to the application server. If the connection fails,
prompt the user for a file and read in the local copy of the data.
4. In the client application, add code to apply updates to the application server. For more information
on sending updates from a client application to an application server, see Updating Records in the
Delphi Developer's Guide.
• Split your existing application into an application server that handles the database connection, and
into a client application that contains the user interface.
• Add an interface between the client and the application server.
There are a number of ways to proceed, but the following sequential steps may best keep your translation
work to a minimum:
1. Create a new project for the application server, duplicate the relevant database connection portions of
your former two-tiered application, and for each dataset, add a provider component that will act as a data
conduit between the application server and the client.
2. Copy your existing two-tiered project, remove its direct database connections, add an appropriate
connection component to it.
3. Substitute a client dataset for each dataset component in the original project.
4. In the client application, add code to apply updates to the application server.
5. Move the dataset components to the application server’s data modules. Set the DataSet property of
each provider to specify the corresponding datasets.
For more information, see Connecting to the Application Server and Creating and using a client dataset
in the Delphi Developer's Guide.
Connecting to Databases
When an InterBase Express (IBX) application connects to a database, that connection is encapsulated by a
TIBDatabase component. A database component encapsulates the connection to a single database in an
application. This chapter describes database components and how to manipulate database connections.
Another use for database components is applying cached updates for related tables. For more informa-
tion about using a database component to apply cached updates, see Applying Cached Updates with a
Database Component Method.
Temporary database components are created as necessary for any datasets in a data module or form for
which you do not create one yourself. Temporary database components provide broad support for many
typical desktop database applications without requiring you to handle the details of the database connection.
For most client/server applications, however, you should create your own database components instead
of relying on temporary ones. You gain greater control over your databases, including the ability to
The default properties created for temporary database components provide reasonable, general behaviors
meant to cover a wide variety of situations. For complex, mission-critical client/server applications with
many users and different requirements for database connections, however, you should create your own
database components to tune each database connection to your application’s needs.
2. Controlling Connections
Whether you create a database component at design time or runtime, you can use the properties, events,
and methods of TIBDatabase to control and change its behavior in your applications. The following sections
describe how to manipulate database components. For details about all TIBDatabase properties, events,
and methods, see TIBDatabase in the InterBase Express Reference.
At design time, a standard Login dialog box prompts for a user name and password when you first attempt
to connect to the database.
At runtime, there are three ways you can handle a request of a server for a login:
• Set the LoginPrompt property of a database component to True (the default). Your application displays
the standard Login dialog box when the server requests a user name and password.
• Set the LoginPrompt to False, and include hard-coded USER_NAME and PASSWORD parameters in
the Params property for the database component. For example:
USER_NAME=SYSDBA
PASSWORD=masterkey
Note that because the Params property is easy to view, this method compromises server security, so it
is not recommended.
• Write an OnLogin event for the database component, and use it to set login parameters at runtime.
OnLogin gets a copy of the Params property of the database component, which you can modify. The
name of the copy in OnLogin is LoginParams. Use the Values property to set or change login parameters
as follows:
LoginParams.Values['USER_NAME'] := UserName;
LoginParams.Values['PASSWORD'] := PasswordSearch(UserName);
On exit, OnLogin passes its LoginParams values back to Params, which is used to establish a connection.
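Put together, a handler might look like this (the event signature is IBX's login event; PasswordSearch is the hypothetical lookup function from the snippet above):

procedure TForm1.IBDatabase1Login(Database: TIBDatabase; LoginParams: TStrings);
begin
  LoginParams.Values['USER_NAME'] := 'SYSDBA';
  LoginParams.Values['PASSWORD'] := PasswordSearch('SYSDBA'); // hypothetical lookup
end;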
Setting Connected to True executes the Open method. Open verifies that the database specified by the
Database or Directory properties exists, and if an OnLogin event exists for the database component, it is
executed. Otherwise, the default Login dialog box appears.
NOTE
When a database component is not connected to a server and an application attempts to open a dataset associated
with the database component, the database component’s Open method is first called to establish the connection. If
the dataset is not associated with an existing database component, a temporary database component is created and
used to establish the connection.
Once a database connection is established, the connection is maintained as long as there is at least one
active dataset. When the last dataset closes, the connection is dropped; if a dataset that uses the database
is opened later, the connection must be reestablished and initialized. An event notifier procedure can be
constructed to indicate whenever a connection to the database is made or broken.
Establishing an initial connection between client and server can be problematic. The following trou-
bleshooting checklist should be helpful if you encounter difficulties:
2.4. Using ODBC
An application can use ODBC data sources (for example, Btrieve). An ODBC driver connection requires:
Setting Connected to False calls Close. Close closes all open datasets and disconnects from the server. For
example, the following code closes all active datasets for a database component and drops its connections:
IBDatabase1.Connected := False;
DataSets is an indexed array of all active datasets (TIBDataSet, TIBSQL, TIBTable, TIBQuery, and TIBStored-
Proc) for a database component. An active dataset is one that is currently open. DataSetCount is a read-
only integer value specifying the number of currently active datasets.
You can use DataSets with DataSetCount to cycle through all currently active datasets in code. For example,
the following code cycles through all active datasets to set the CachedUpdates property for each dataset
of type TIBTable to True:
var
  I: Integer;
begin
  with IBDatabase1 do  { assuming a database component named IBDatabase1 }
    for I := 0 to DataSetCount - 1 do
      if DataSets[I] is TIBTable then
        TIBTable(DataSets[I]).CachedUpdates := True;
end;
After attaching to a database, you can use the TIBDatabaseInfo properties to return information on:
• Database characteristics.
• Environmental characteristics.
• Performance statistics.
• Database operation counts.
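For example, a sketch that reads a few of these properties, assuming a TIBDatabaseInfo component named IBDatabaseInfo1:

IBDatabaseInfo1.Database := IBDatabase1;
Memo1.Lines.Add('ODS major version: ' + IntToStr(IBDatabaseInfo1.ODSMajorVersion));
Memo1.Lines.Add('Page size: ' + IntToStr(IBDatabaseInfo1.PageSize));
Memo1.Lines.Add('Reads: ' + IntToStr(IBDatabaseInfo1.Reads));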
3.1. Database Characteristics
Several properties are available for determining database characteristics, such as size and major and minor
ODS numbers. The following table lists the properties that can be passed, and the information returned
in the result buffer for each property type:
3.2. Environmental Characteristics
Several properties are provided for determining environmental characteristics, such as the amount of mem-
ory currently in use, or the number of database cache buffers currently allocated. These properties are
described in the following table:
3.3. Performance Statistics
There are four properties that provide performance statistics of a database. The statistics accumulate for
a database from the moment the first process attaches to the database until the last remaining process
detaches from the database. For example, the value that the Reads property returns is the number of reads
since the current database was first attached by any process. That number is an aggregate of all reads
done by all attached processes, not the number of reads done by the process of the current program.
The following table describes the properties that return count values for operations on the database:
Descendants of this class can specify a file name (for input or output) and a TIBXSQLDA object representing
a record or parameters. The ReadyFile method is called right before performing the batch input or output.
NOTE
For information on exporting InterBase tables to XML using special API calls, see Exporting XML.
Use a SQL SELECT statement to export the data to the raw file, and an INSERT statement to import the raw
data into another database.
Raw files are probably the fastest way, aside from external tables, to get data in and out of an InterBase
database, although dealing with fixed-width files requires considerable attention to detail.
TIP
Use the Database Editor to set up the database component. To start the Database Editor, right click the database
component with the mouse and select Database Editor from the drop-down menu.
The following code snippet outputs selected data with a SQL SELECT statement from the SOURCE table
to the file source_raw.
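A sketch of such an export, modeled on the delimited examples later in this chapter and assuming IBX's TIBOutputRawFile class with illustrative component names:

IBSQL1.Database.Open;
IBSQL1.Transaction.StartTransaction;
IBSQL1.SQL.Text := 'Select Name, Number, Hired from Source';
RawOutput := TIBOutputRawFile.Create;
try
  RawOutput.Filename := 'source_raw';
  IBSQL1.BatchOutput(RawOutput);  // write the result set to the raw file
finally
  RawOutput.Free;
  IBSQL1.Transaction.Commit;
end;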
TIP
Use the Database Editor to set up the database component. To start the Database Editor, right click the database
component with the mouse and select Database Editor from the drop-down menu.
It is important to note that you must import data into a table with the same column definitions and data
types, and in the same order; otherwise, all sorts of unpredictable and undesirable results may occur.
The following code snippet inputs selected data with an SQL INSERT statement from the source_raw file
created in the last example into the DESTINATION table.
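A sketch of the import, again assuming IBX's TIBInputRawFile class and illustrative component names:

IBSQL2.Database.Open;
IBSQL2.Transaction.StartTransaction;
IBSQL2.SQL.Text := 'Insert into Destination values(:name, :number, :hired)';
RawInput := TIBInputRawFile.Create;
try
  RawInput.Filename := 'source_raw';
  IBSQL2.BatchInput(RawInput);  // feed the raw file into the INSERT parameters
finally
  RawInput.Free;
  IBSQL2.Transaction.Commit;
end;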
Use a SQL SELECT statement to export the data to the delimited file, and an INSERT statement to import
the delimited data into another database.
By default, the column delimiter is a tab and the row delimiter is a new line. Use the
ColDelimiter and RowDelimiter properties to change the column delimiter and row delimiter, respectively.
For example, to set the column delimiter to a comma, you could use the following line of code:
DelimOutput.ColDelimiter := ',';
NOTE
Columns may contain spaces before the delimiter. For example, if you have a column called NAME which is defined as a
CHAR(10), and the name “Joe” is in that column, then “Joe” will be followed by 7 spaces before the column is delimited.
TIP
Use the Database Editor to set up the database component. To start the Database Editor, right click the database
component with the mouse and select Database Editor from the drop-down menu.
The following code snippet outputs selected data with a SQL SELECT statement from the SOURCE table
to the file source_delim.
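A sketch of the export, assuming IBX's TIBOutputDelimitedFile class with illustrative component names:

IBSQL3.Database.Open;
IBSQL3.Transaction.StartTransaction;
IBSQL3.SQL.Text := 'Select Name, Number, Hired from Source';
DelimOutput := TIBOutputDelimitedFile.Create;
try
  DelimOutput.Filename := 'source_delim';
  DelimOutput.ColDelimiter := ',';  // comma-separated columns
  IBSQL3.BatchOutput(DelimOutput);
finally
  DelimOutput.Free;
  IBSQL3.Transaction.Commit;
end;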
TIP
Use the Database Editor to set up the database component. To start the Database Editor, right click the database
component with the mouse and select Database Editor from the drop-down menu.
It is important to note that you must import data into a table with the same column definitions and data
types, and in the same order; otherwise, all sorts of unpredictable and undesirable results may occur.
The following code snippet inputs selected data with a SQL INSERT statement from the source_delim file
created in the last example into the DESTINATION table.
IBSQL4.Database.Open;
IBSQL4.Transaction.StartTransaction;
IBSQL4.SQL.Text := 'Insert into Destination values(:name, :number, :hired)';
DelimInput := TIBInputDelimitedFile.Create;
try
  DelimInput.Filename := 'source_delim';
  IBSQL4.BatchInput(DelimInput);
finally
  DelimInput.Free;
  IBSQL4.Transaction.Commit;
end;
end;
This chapter shows you how to build the following InterBase database services into your applications:
• Configuration
• Backup and Restore
• Licensing
• Security
• Validation
• Statistics
• Log
• Server properties
2. Set the ServerName property for that component to the name of the server on which the services are
to be run.
3. Use the Protocol property to set the network protocol with which to connect to the server.
4. Set the Active property to <True>. A login dialog is displayed. If you do not wish to display the login
dialog, set the user name and password in the Params string editor, and set LoginPrompt to <False>.
NOTE
TIBLicensingService and TIBSecurityService do not require that you start the service using the ServiceStart
method. For example, to add a license you could either set the action and start the service:

Action := LicenseAdd;
ServiceStart;

or call the equivalent method directly:

AddLicense;
For example, you could associate the BringDatabaseOnline method to a menu item:
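A sketch of such a handler, assuming a TIBConfigService component named IBConfigService1:

procedure TForm1.OnlineItemClick(Sender: TObject);
begin
  IBConfigService1.BringDatabaseOnline;
end;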
For example, you could use radio buttons to select the shut down mode and an Edit component to specify
the number of seconds before shutting down a database:
if RadioButton1.Checked then
  ShutdownDatabase(Forced, StrToInt(Edit4.Text));
if RadioButton2.Checked then
  ShutdownDatabase(DenyTransaction, StrToInt(Edit4.Text));
if RadioButton3.Checked then
  ShutdownDatabase(DenyAttachment, StrToInt(Edit4.Text));
For more information, refer to “Database shutdown and restart” in the Operations Guide.
For example, you could set up an application that allows a user to set the sweep interval in an Edit com-
ponent:
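A one-line sketch, assuming a TIBConfigService component and an Edit control with illustrative names:

IBConfigService1.SetSweepInterval(StrToInt(Edit1.Text));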
For more information, refer to “Sweep interval and automated housekeeping” in the Operations Guide.
Call the SetAsyncMode method of the IBConfigService component with <True> to set the database write
mode to asynchronous.
For more information, refer to “Forced writes vs. buffered writes” in the Operations Guide.
For more information on page buffers, refer to “Default cache size per database” in the Operations Guide.
NOTE
Once you set the database to read-only, you will be unable to change any of the other database options until
you call the SetReadOnly method with <False> again.
For more information on access mode, refer to “Read-only databases” in the Operations Guide.
You can use the following methods with the JournalInformation property:
AlterJournal - alters a pre-existing journal system. Not all properties can be altered. See the Journaling
chapter of the Update Guide for limitations.
GetJournalInformation - retrieves journaling information for this database and stores it in the Journal
Information property.
For more information on backup and restore, refer to Database Backup and Restore in the Operations
Guide.
3.2. Backing Up Databases
TIBBackupService contains many properties and methods to allow you to build a backup component into
your application. Only the SYSDBA user or the database owner will be able to perform backup operations
on a database.
When backing up a database under normal circumstances, the backup file will always be on the local server
since the backup service cannot open a file over a network connection. However, TIBBackupService can
create a remote file if one of the following is true:
• The server is running on Windows, the path to the backup file is specified as a UNC name, and the
destination for the file is another Windows machine (or a machine which can be connected to via
UNC naming conventions).
• The destination drive is mounted via NFS (or some equivalent) on the machine running the InterBase
server.
TIBBackupService options

Option             Meaning
IgnoreChecksums    Ignore checksums during backup
IgnoreLimbo        Ignore limbo transactions during backup
MetadataOnly       Output backup file for metadata only with empty tables
NoGarbageCollect   Suppress normal garbage collection during backup; improves performance on some databases
OldMetadataDesc    Output metadata in pre-4.0 format
NonTransportable   Output backup file with non-XDR data format; improves space and performance by a negligible amount
ConvertExtTables   Convert external table data to internal tables
4. Attach to the service manager as described in Attaching to a Service Manager, or set the properties
in code:
with IBBackupService1 do
begin
  ServerName := 'Poulet';
  LoginPrompt := False;
  Params.Add('user_name=sysdba');
  Params.Add('password=masterkey');
  Active := True;
end;
5. Set any other options in the Object Inspector (or set them in code), and then start the service with
the ServiceStart method.
The final code for a backup application that displays verbose backup output in a Memo component might
look like this:
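For example, a routine along these lines (modeled on the restore example in the next section; file names are illustrative):

with IBBackupService1 do
begin
  ServerName := 'Poulet';
  LoginPrompt := False;
  Params.Add('user_name=sysdba');
  Params.Add('password=masterkey');
  Active := True;
  try
    Verbose := True;
    DatabaseName := 'c:\temp\employee.ib';
    BackupFile.Add('c:\temp\employee.gbk');
    ServiceStart;
    while not Eof do
      Memo1.Lines.Add(GetNextLine);  // show the verbose backup output
  finally
    Active := False;
  end;
end;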
3.3. Restoring Databases
TIBRestoreService contains many properties and methods to allow you to build a restore component
into your application. Only the SYSDBA user or the database owner may use the TIBRestoreService to
overwrite an existing database.
The username and password used to connect to the TIBRestoreService will be used to connect to the
database for restore.
PageBuffers := 3000;
PageSize := 4096;
Changing the page size can improve database performance, depending on the data type size, row length,
and so forth. For a discussion of how page size affects performance, see Page size in the Operations Guide.
TIBRestoreService options
Option Meaning
DeactivateIndex Do not build indexes during restore
NoShadow Do not recreate shadow files during restore
NoValidity Do not enforce validity conditions (for example, NOT NULL) during restore
OneRelationATime Commit after completing a restore of each table
Replace Replace database if one exists
Create Restore but do not overwrite an existing database
UseAllSpace Do not reserve 20% of each datapage for future record versions; useful for read-only databases
4. Attach to the service manager as described in Attaching to a Service Manager, or set the properties
in code:
with IBRestoreService1 do
begin
  ServerName := 'Poulet';
  LoginPrompt := False;
  Params.Add('user_name=sysdba');
  Params.Add('password=masterkey');
  Active := True;
end;
5. Set any other options in the Object Inspector (or set them in code), and then start the restore service
with the ServiceStart method. The final code for a restore application that displays verbose restore
output in a Memo component might look like this:
with IBRestoreService1 do  { the opening lines repeat the connection settings shown in step 4 }
begin
  ServerName := 'Poulet';
  LoginPrompt := False;
  Params.Add('user_name=sysdba');
  Params.Add('password=masterkey');
  Active := True;
  try
    Verbose := True;
    Options := [Replace, UseAllSpace];
    PageBuffers := 3000;
    PageSize := 4096;
    BackupFile.Add('c:\temp\employee1.gbk');
    DatabaseName.Add('c:\temp\employee2.ib = 2048');
    DatabaseName.Add('c:\temp\employee3.ib = 2048');
    DatabaseName.Add('c:\temp\employee4.ib');
    ServiceStart;
    while not Eof do
      Memo1.Lines.Add(GetNextLine);
  finally
    Active := False;
  end;
end;
4.1. Validating a Database
Use the Options property of TIBValidationService component to invoke a database validation. Set any of
the following options of type TValidateOption to <True> to perform the appropriate validation:
TIBValidationService options
Option Meaning
LimboTransactions Returns limbo transaction information, including:
• Transaction ID
• Host site
• Remote site
• Remote database path
• Transaction state
• Suggested transaction action
• Transaction action
• Multiple database information
CheckDB Request a read-only validation of the database without correcting any problems
IgnoreChecksum Ignore all checksum errors when validating or sweeping
KillShadows Remove references to unavailable shadow files
MendDB Mark corrupted records as unavailable so that subsequent operations skip them
SweepDB Request database sweep to mark outdated records as free space
ValidateDB Locate and release pages that are allocated but unassigned to any data structures
ValidateFull Check record and page structures, releasing unassigned record fragments; use with ValidateDB
NOTE
Not all combinations of validation options work together. For example, you cannot mend and validate
the database at the same time. Conversely, some options are intended to be used with other options, such as
IgnoreChecksum with SweepDB or ValidateDB, or ValidateFull with ValidateDB.
The following snippet fetches and displays limbo transaction information:

with IBValidationService1 do  { assuming a TIBValidationService component }
try
  Options := [LimboTransactions];
  FetchLimboTransactionInfo;
  for I := 0 to LimboTransactionInfoCount - 1 do
  begin
    with LimboTransactionInfo[I] do
    begin
      Memo1.Lines.Add('Transaction ID: ' + IntToStr(ID));
      Memo1.Lines.Add('Host Site: ' + HostSite);
      Memo1.Lines.Add('Remote Site: ' + RemoteSite);
      Memo1.Lines.Add('Remote Database Path: ' + RemoteDatabasePath);
      //Memo1.Lines.Add('Transaction State: ' + TransactionState);
      Memo1.Lines.Add('-----------------------------------');
    end;
  end;
finally
  Active := False;
end;
TIBValidationService actions
Action Meaning
CommitGlobal Commits the limbo transaction specified by ID or commits all limbo transactions
RollbackGlobal Rolls back the limbo transaction specified by ID or rolls back all limbo transactions
RecoverTwoPhaseGlobal Performs automated two-phase recovery, either for a limbo transaction specified by ID
or for all limbo transactions
NoGlobalAction Takes no action
The following snippet applies the action chosen with radio buttons:

with IBValidationService1 do
try
  if RadioButton1.Checked then GlobalAction := CommitGlobal;
  if RadioButton2.Checked then GlobalAction := RollbackGlobal;
  if RadioButton3.Checked then GlobalAction := RecoverTwoPhaseGlobal;
  if RadioButton4.Checked then GlobalAction := NoGlobalAction;
  FixLimboTransactionErrors;  { assumed IBX method that applies the chosen action }
finally
  Active := False;
end;
TIBStatisticalService options
Option Meaning
HeaderPages Stop reporting statistics after reporting the information on the header page
DbLog Stop reporting statistics after reporting information on the log pages
IndexPages Request statistics for the user indexes in the database
DataPages Request statistics for data tables in the database
SystemRelations Request statistics for system tables and indexes in addition to user tables and indexes
The following example displays the statistics for a database. With a button click, HeaderPages and DBLog
statistics are returned until the end of the file is reached.
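A sketch of such a handler, assuming a TIBStatisticalService component and an illustrative database path:

with IBStatisticalService1 do
begin
  DatabaseName := 'c:\data\employee.ib';
  Active := True;
  try
    Options := [HeaderPages, DbLog];
    ServiceStart;
    while not Eof do
      Memo1.Lines.Add(GetNextLine);  // show the statistics output
  finally
    Active := False;
  end;
end;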
The following example displays the contents of the interbase.log file. With a click of the button, the log
file is displayed until the end of the file is reached.
with IBLogService1 do  { assuming a TIBLogService component named IBLogService1 }
begin
  Active := True;
  try
    ServiceStart;
    while not Eof do
      Memo1.Lines.Add(GetNextLine);
  finally
    Active := False;
  end;
end;
7. Configuring Users
Security for InterBase relies on a central security database for each server host. This database contains the
legitimate users who have permission to connect to databases and InterBase services on that host. The
database also contains an encrypted password for each user. A user name and password apply to any
database on that server host.
You can use the TIBSecurityService component to list, add, delete, and modify users. These are discussed
in the following sections.
For more information on InterBase database security, refer to “Database Security” in the Operations Guide.
TIBSecurityService properties
Property Purpose
UserName User name to create; maximum 31 characters
Password Password for the user; maximum 31 characters, only first 8 characters are significant
FirstName Optional first name of person using this user name
MiddleName Optional middle name of person using this user name
LastName Optional last name of person using this user name
UserID Optional user ID number, defined in /etc/passwd, to assign to the user
GroupID Optional groupID number, defined in /etc/group, to assign to the user
SQLRole Optional role to use when attaching to the security database; for more information on roles in InterBase,
refer to “ANSI SQL 3 roles” in the Operations Guide.
The following code snippet allows you to set user information in Edit components, and then adds the
user with the AddUser method.
with IBSecurityService1 do
try
  Active := True;
  UserName := Edit1.Text;
  FirstName := Edit2.Text;
  MiddleName := Edit3.Text;
  LastName := Edit4.Text;
  UserID := StrToInt(Edit5.Text);
  GroupID := StrToInt(Edit6.Text);
  Password := Edit7.Text;
  AddUser;
finally
  Active := False;
end;
The following snippet calls DisplayUser to retrieve information for the user named in Edit1 and shows it
in the remaining Edit components:

with IBSecurityService1 do
try
  Active := True;
  UserName := Edit1.Text;
  DisplayUser(UserName);
  Edit2.Text := UserInfo[0].FirstName;
  Edit3.Text := UserInfo[0].MiddleName;
  Edit4.Text := UserInfo[0].LastName;
  Edit5.Text := IntToStr(UserInfo[0].UserID);
  Edit6.Text := IntToStr(UserInfo[0].GroupID);
finally
  Active := False;
end;
The following snippet calls DisplayUsers to list every user in the security database in a Memo component:

with IBSecurityService1 do
try
  Active := True;
  DisplayUsers;
  for I := 0 to UserInfoCount - 1 do
  begin
    with UserInfo[I] do
    begin
      Memo1.Lines.Add('User Name: ' + UserName);
      Memo1.Lines.Add('Name: ' + FirstName + ' ' + MiddleName
        + ' ' + LastName);
      Memo1.Lines.Add('UID: ' + IntToStr(UserID));
      Memo1.Lines.Add('GID: ' + IntToStr(GroupID));
      Memo1.Lines.Add('-----------------------------------');
    end;
  end;
finally
  Active := False;
end;
The following code snippet calls the DeleteUser method to delete the user indicated by the UserName
property:
with IBSecurityService1 do
try
  Active := True;
  UserName := Edit1.Text;
  DeleteUser;
finally
  Edit1.Clear;
  Active := False;
end;
If you remove a user entry from the InterBase security database (admin.ib by default), no one can log
into any database on that server using that name. You must create a new entry for that name using the
AddUser method.
To modify user information, first display the user information as shown in the example in Displaying Infor-
mation for a Single User; the TUserInfo record is displayed in the Edit boxes. Then call ModifyUser in the
same way the AddUser example calls AddUser.
You can set the Database option to True in the Object Inspector, or set the Options property in code.
The following code displays the elements of the TDatabaseInfo record. NoOfAttachments and NoOfDatabases
are integers displayed in Label components, while DbName is an array of strings displayed in a Memo
component.
with IBServerProperties1 do
begin
  Options := [Database];
  FetchDatabaseInfo;
  Label1.Caption := 'Number of Attachments = ' +
    IntToStr(DatabaseInfo.NoOfAttachments);
  Label2.Caption := 'Number of Databases = ' +
    IntToStr(DatabaseInfo.NoOfDatabases);
  for I := 0 to High(DatabaseInfo.DbName) do
    Memo1.Lines.Add(DatabaseInfo.DbName[I]);
end;
The following code snippet shows how you could display configuration parameters as label captions:

with IBServerProperties1 do
begin
  Options := [ConfigParameters];
  FetchConfigParams;
  Label1.Caption := 'Base File = ' + ConfigParams.BaseLocation;
  Label2.Caption := 'Lock File = ' + ConfigParams.LockFileLocation;
  Label3.Caption := 'Message File = ' + ConfigParams.MessageFileLocation;
  Label4.Caption := 'Security Database = ' +
    ConfigParams.SecurityDatabaseLocation;
end;
You could also read the ConfigFileData array to display server key values in a Memo component.

var
  I: Integer;
  st1: string;
.
.
.
with IBServerProperties1 do
  for I := 0 to High(ConfigParams.ConfigFileData.ConfigFileValue) do
  begin
    case ConfigParams.ConfigFileData.ConfigFileKey[I] of
      ISCCFG_IPCMAP_KEY: st1 := 'IPCMAP_KEY';
      ISCCFG_LOCKMEM_KEY: st1 := 'LOCKMEM_KEY';
.
.
.
      ISCCFG_DUMMY_INTRVL_KEY: st1 := 'DUMMY_INTRVL_KEY';
    end;
    Memo1.Lines.Add(st1 + ' = ' +
      IntToStr(ConfigParams.ConfigFileData.ConfigFileValue[I]));
  end;
You can set the Version option to True in the Object Inspector, or set the Options property in code.
The following code displays server properties in Label components when a button is clicked:
with IBServerProperties1 do
begin
  Options := [Version];
  FetchVersionInfo;
  Label1.Caption := 'Server Version = ' + VersionInfo.ServerVersion;
end;
Use the TIBEvents component in your application to register an event (or a list of events) with the event
manager. The event manager maintains a list of events posted to it by triggers and stored procedures.
It also maintains a list of applications that have registered an interest in events. Each time a new event is
posted to it, the event manager notifies interested applications that the event has occurred.
1. Create a trigger or stored procedure on the InterBase server which will post an event.
2. Add a TIBDatabase and a TIBEvents component to your form.
3. Add the events to the Events list and register them with the event manager.
4. Write an OnEventAlert event handler for each event.
Events are passed to the client by triggers or stored procedures only when the transaction under which they
occur is committed. Also, InterBase consolidates events before posting them. For example, if an InterBase trigger
posts 20 STOCK_LOW events within a transaction, when the transaction is committed these are consolidated
into a single STOCK_LOW event, and the client receives only one event notification.
For more information on events, refer to Working with Events in the Embedded SQL Guide.
IBEvents1.Events.Add('STOCK_LOW');
OnEventAlert is called every time an InterBase event is received by an IBEvents component. The EventName
parameter contains the name of the event that has just been received, and the EventCount parameter contains
the number of EventName events that have been received since OnEventAlert was last called.
OnEventAlert runs in a separate thread to allow for true asynchronous event processing; however, the
IBEvents component provides synchronization code to ensure that only one OnEventAlert event handler
executes at any one time.
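A minimal handler sketch; the signature matches the OnEventAlert event type, and Memo1 is an assumed control:

procedure TForm1.IBEvents1EventAlert(Sender: TObject; EventName: string;
  EventCount: LongInt; var CancelAlerts: Boolean);
begin
  { log each notification; EventCount reports how many times the event
    was posted since the last alert }
  Memo1.Lines.Add(Format('%s occurred %d time(s)', [EventName, EventCount]));
end;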
This chapter describes when and how to use cached updates. It also describes the TIBUpdateSQL component
that can be used in conjunction with cached updates to update virtually any dataset, particularly datasets
that are not normally updatable.
While cached updates can minimize transaction times and drastically reduce network traffic, they may not
be appropriate for all database client applications that work with remote servers. There are three areas of
consideration when deciding to use cached updates:
• Cached data is local to your application, and is not under transaction control. In a busy client/server
environment this has two implications for your application:
• Other applications can access and change the actual data on the server while your users edit their
local copies of the data.
• Other applications cannot see any data changes made by your application until it applies all its
changes.
• In master/detail relationships, managing the order of applying cached updates can be tricky. This is
particularly true when there are nested master/detail relationships where one detail table is the master
table for yet another detail table, and so on.
• Applying cached updates to read-only, query-based datasets requires the use of update objects.
The InterBase Express components provide cached update methods and transaction control methods you
can use in your application code to handle these situations, but you must take care that you cover all
possible scenarios your application is likely to encounter in your working environment.
To use cached updates, the following order of processes must occur in an application:
1. Enable cached updates. Enabling cached updates causes a read-only transaction that fetches as much
data from the server as is necessary for display purposes and then terminates. Local copies of the data
are stored in memory for display and editing. For more information about enabling and disabling
cached updates, see Enabling and Disabling Cached Updates.
2. Display and edit the local copies of records, permit insertion of new records, and support deletions
of existing records. Both the original copy of each record and any edits to it are stored in memory.
For more information about displaying and editing when cached updates are enabled, see Applying
Cached Updates.
3. Fetch additional records as necessary. As a user scrolls through records, additional records are fetched
as needed. Each fetch occurs within the context of another short duration, read-only transaction. (An
application can optionally fetch all records at once instead of fetching many small batches of records.)
For more information about fetching all records, see Fetching Records.
4. Continue to display and edit local copies of records until all desired changes are complete.
5. Apply the locally cached records to the database or cancel the updates. For each record written to the
database, an OnUpdateRecord event is triggered. If an error occurs when writing an individual record
to the database, an OnUpdateError event is triggered which enables the application to correct the
error, if possible, and continue updating. When updates are complete, all successfully applied updates
are cleared from the local cache. For more information about applying updates to the database, see
Applying Cached Updates.
If instead of applying updates, an application cancels updates, the locally cached copy of the records and
all changes to them are freed without writing the changes to the database. For more information about
canceling updates, see Canceling Pending Cached Updates.
NOTE
Client datasets always cache updates. They have no CachedUpdates property because you cannot disable cached up-
dates on a client dataset.
To use cached updates, set CachedUpdates to True, either at design time (through the Object Inspector),
or at runtime. When you set CachedUpdates to True, the dataset’s OnUpdateRecord event is triggered if you
provide it. For more information about the OnUpdateRecord event, see Creating an OnUpdateRecord Event
Handler.
For example, the following code enables cached updates for a dataset at runtime:
CustomersTable.CachedUpdates := True;
When you enable cached updates, a copy of all records necessary for display and editing purposes is
cached in local memory. Users view and edit this local copy of data. Changes, insertions, and deletions are
also cached in memory. They accumulate in memory until the current cache of local changes is applied
to the database. If changed records are successfully applied to the database, the records of those changes
are freed in the cache.
NOTE
Applying cached updates does not disable further cached updates; it only writes the current set of changes to the
database and clears them from memory.
To disable cached updates for a dataset, set CachedUpdates to False. If you disable cached updates when
there are pending changes that you have not yet applied, those changes are discarded without notification.
Your application can test the UpdatesPending property for this condition before disabling cached updates.
For example, the following code prompts for confirmation before disabling cached updates for a dataset:
if CustomersTable.UpdatesPending then
  if Application.MessageBox('Discard pending updates?',
      'Unposted changes',
      MB_YESNO) = IDYES then
    CustomersTable.CachedUpdates := False;
2.2. Fetching Records
By default, when you enable cached updates, datasets automatically handle fetching of data from the
database when necessary. Datasets fetch enough records for display. During the course of processing,
many such record fetches may occur. If your application has specific needs, it can fetch all records at one
time. You can fetch all records by calling the dataset’s FetchAll method. FetchAll creates an in-memory,
local copy of all records from the dataset. If a dataset contains many records or records with large Blob
fields, you may not want to use FetchAll.
Client datasets use the PacketRecords property to indicate the number of records that should be fetched
at any time. If you set the FetchOnDemand property to True, the client dataset automatically handles fetching
of data when necessary. Otherwise, you can use the GetNextPacket method to fetch records from the data
server. For more information about fetching records using a client dataset, see Requesting Data from the
Source Dataset or Document in the Delphi Developer's Guide.
IMPORTANT
To apply updates to a set of records retrieved by a SQL query that does not return a live result set, you must use
a TIBUpdateSQL object to specify how to perform the updates. For updates to joins (queries involving two or more
tables), you must provide one TIBUpdateSQL object for each table involved, and you must use the OnUpdateRecord
event handler to invoke these objects to perform the updates. For more information, see Updating a Read-only Dataset.
For more information about creating and using an OnUpdateRecord event handler, see Creating an OnUpdateRecord
Event Handler.
Applying updates is a two-phase process that should occur in the context of a transaction component to
enable your application to recover gracefully from errors.
When applying updates under transaction control, the following events take place:
1. A database transaction starts.
2. Cached updates are written to the database (phase 1). If you provide it, an OnUpdateRecord event is
triggered once for each record written to the database. If an error occurs when a record is applied
to the database, the OnUpdateError event is triggered if you provide one.
3. If the database write succeeds, the transaction is committed and the cached updates are committed,
clearing the internal cache (phase 2). If the write fails, the transaction is rolled back and the cache
is left unchanged.
The two-phased approach to applying cached updates allows for effective error recovery, especially when
updating multiple datasets (for example, the datasets associated with a master/detail form). For more
information about handling update errors that occur when applying cached updates, see Handling Cached
Update Errors.
There are actually two ways to apply updates. To apply updates for a specified set of datasets associated
with a database component, call the database component’s ApplyUpdates method. To apply updates for
a single dataset, call the dataset’s ApplyUpdates and Commit methods. These choices, and their strengths,
are described in the following sections.
To apply cached updates to one or more datasets in the context of a database connection, call the database
component’s ApplyUpdates method. The following code applies updates to the CustomersQuery dataset in
response to a button click event:
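A minimal sketch, assuming an IBDatabase1 component and a button named ApplyButton:

procedure TForm1.ApplyButtonClick(Sender: TObject);
begin
  IBDatabase1.ApplyUpdates([CustomersQuery]);
end;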
The above sequence starts a transaction, and writes cached updates to the database. If successful, it also
commits the transaction, and then commits the cached updates. If unsuccessful, this method rolls back
the transaction, and does not change the status of the cached updates. In this latter case, your application
should handle cached update errors through a dataset’s OnUpdateError event. For more information about
handling update errors, see Handling Cached Update Errors.
The main advantage to calling a database component’s ApplyUpdates method is that you can update any
number of dataset components that are associated with the database. The parameter for the ApplyUpdates
method for a database is an array of TIBCustomDataSet. For example, the following code applies updates
for two queries used in a master/detail form:
IBDatabase1.ApplyUpdates([CustomerQuery, OrdersQuery]);
For more information about updating master/detail tables, see Applying Updates for Master/detail Tables.
The following code illustrates how you apply updates within a transaction for the CustomerQuery dataset
previously used to illustrate updates through a database method:
IBTransaction1.StartTransaction;
try
  CustomerQuery.ApplyUpdates; { try to write the updates to the database }
  IBTransaction1.Commit; { on success, commit the changes }
except
  IBTransaction1.Rollback; { on failure, undo any changes }
  raise; { raise the exception again to abort other operations }
end;
CustomerQuery.CommitUpdates; { on success, clear the internal cache }
If an exception is raised during the ApplyUpdates call, the database transaction is rolled back. Rolling back
the transaction ensures that the underlying database table is not changed. The raise statement inside the
try...except block re-raises the exception, thereby preventing the call to CommitUpdates. Because Commi-
tUpdates is not called, the internal cache of updates is not cleared so that you can handle error conditions
and possibly retry the update.
You can update master/detail tables at the database or dataset component levels. For purposes of control
(and of creating explicitly self-documented code), you should apply updates at the dataset level. The
following example illustrates how you should code cached updates to two tables, Master and Detail,
involved in a master/detail relationship:
IBTransaction1.StartTransaction;
try
Master.ApplyUpdates;
Detail.ApplyUpdates;
IBTransaction1.Commit;
except
IBTransaction1.Rollback;
raise;
end;
Master.CommitUpdates;
Detail.CommitUpdates;
If an error occurs during the application of updates, this code also leaves both the cache and the underlying
data in the database tables in the same state they were in before the calls to ApplyUpdates.
If an exception is raised during the call to Master.ApplyUpdates, it is handled like the single dataset case
previously described. Suppose, however, that the call to Master.ApplyUpdates succeeds, and the subse-
quent call to Detail.ApplyUpdates fails. In this case, the changes are already applied to the master table.
Because all data is updated inside a database transaction, however, even the changes to the master ta-
ble are rolled back when IBTransaction1.Rollback is called in the except block. Furthermore, Master.Com-
mitUpdates and Detail.CommitUpdates are not called because the re-raised exception causes that code to
be skipped, so the cache is also left in the state it was in before the attempt to update.
To appreciate the value of the two-phase update process, assume for a moment that ApplyUpdates is a
single-phase process which updates the data and the cache. If this were the case, and if there were an
error while applying the updates to the Detail table, then there would be no way to restore both the data
and the cache to their original states. Even though the call to IBTransaction1.Rollback would restore the
database, there would be no way to restore the cache.
• To cancel all pending updates and disable further cached updates, set the CachedUpdates property
to False.
• To discard all pending updates without disabling further cached updates, call the CancelUpdates
method.
• To cancel updates made to the current record, call RevertRecord.
For example, the following call discards all pending updates without disabling further cached updates:
CustomersTable.CancelUpdates;
From the update cache, deleted records are undeleted, modified records revert to original values, and
newly inserted records simply disappear.
NOTE
Calling CancelUpdates does not disable cached updating. It only cancels currently pending updates. To disable further
cached updates, set the CachedUpdates property to False.
To undo changes to the current record, call RevertRecord:
CustomersTable.RevertRecord;
Undoing cached changes to one record does not affect any other records. If only one record is in the cache
of updates and the change is undone using RevertRecord, the UpdatesPending property for the dataset
component is automatically changed from True to False.
If the record is not modified, this call has no effect. For more information about creating an OnUpdateError
handler, see Creating an OnUpdateRecord Event Handler.
TIBUpdateRecordType values
Value Meaning
cusModified Modified records
cusInserted Inserted records
cusDeleted Deleted records
cusUninserted Uninserted records
cusUnmodified Unmodified records
The default value for UpdateRecordTypes includes cusModified, cusInserted, cusUnmodified, and
cusUninserted; deleted records (cusDeleted) are not displayed.
The UpdateRecordTypes property is primarily useful in an OnUpdateError event handler for accessing deleted
records so they can be undeleted through a call to RevertRecord. This property is also useful if you wanted
to provide a way in your application for users to view only a subset of cached records, for example, all
newly inserted (cusInserted) records.
For example, you could have a set of four radio buttons (RadioButton1 through RadioButton4) with the
captions All, Modified, Inserted, and Deleted. With all four radio buttons assigned to the same OnClick event
handler, you could conditionally display all records (except deleted, the default), only modified records, only
newly inserted records, or only deleted records by appropriately setting the UpdateRecordTypes property.
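A sketch of that shared handler:

procedure TForm1.UpdateFilterClick(Sender: TObject);
begin
  with CustomersTable do
  begin
    if Sender = RadioButton1 then      { All (except deleted) }
      UpdateRecordTypes :=
        [cusModified, cusInserted, cusUnmodified, cusUninserted]
    else if Sender = RadioButton2 then { Modified }
      UpdateRecordTypes := [cusModified]
    else if Sender = RadioButton3 then { Inserted }
      UpdateRecordTypes := [cusInserted]
    else                               { Deleted }
      UpdateRecordTypes := [cusDeleted];
  end;
end;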
For more information about creating an OnUpdateError handler, see Creating an OnUpdateRecord Event
Handler.
As you iterate through a set of pending changes, UpdateStatus changes to reflect the update status of the
current record. UpdateStatus returns one of the following values for the current record:
UpdateStatus values
Value Meaning
usUnmodified Record is unchanged
usModified Record has been edited
usInserted Record has been newly inserted
usDeleted Record has been deleted
When a dataset is first opened, all records have an update status of usUnmodified. As records are
inserted, deleted, and so on, the status values change. Here is an example of the UpdateStatus property used
in a handler for a dataset’s AfterScroll event. The event handler displays the update status of each record
in a status bar.
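A minimal sketch, assuming a TStatusBar named StatusBar1 on the form:

procedure TForm1.CustomersTableAfterScroll(DataSet: TDataSet);
begin
  case CustomersTable.UpdateStatus of
    usUnmodified: StatusBar1.SimpleText := 'Unmodified';
    usModified: StatusBar1.SimpleText := 'Modified';
    usInserted: StatusBar1.SimpleText := 'Inserted';
    usDeleted: StatusBar1.SimpleText := 'Deleted';
  end;
end;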
NOTE
If a record’s UpdateStatus is usModified, you can examine the OldValue property for each field in the dataset to
determine its previous value. OldValue is meaningless for records with UpdateStatus values other than usModified.
For more information about examining and using OldValue, see Creating an OnUpdateRecord Event Handler in the
Delphi Developer's Guide.
NOTE
If you use more than one update component to perform an update operation, you must create an OnUpdateRecord
event to execute each update component.
An update component actually encapsulates four TIBQuery components. Each of these query components
performs a single update task. One query component provides a SQL UPDATE statement for modifying
existing records; a second query component provides an INSERT statement to add new records to a table;
a third component provides a DELETE statement to remove records from a table; and a fourth component
provides a SELECT statement to refresh the records in a table.
When you place an update component in a data module, you do not see the query components it en-
capsulates. They are created by the update component at runtime based on four update properties for
which you supply SQL statements: ModifySQL, InsertSQL, DeleteSQL, and RefreshSQL. When the update
object is executed for a record, it:
1. Selects the SQL statement to execute based on the UpdateKind parameter automatically generated on a
record update event. UpdateKind specifies whether the current record is modified, inserted, or deleted.
2. Provides parameter values to the SQL statement.
3. Prepares and executes the SQL statement to perform the specified update.
You must use one of these two means of associating update datasets with update objects. Without proper
association, the dynamic filling of parameters in the update object’s SQL statements cannot occur. Use
one association method or the other, but never both.
How an update object is associated with a dataset also determines how the update object is executed. An
update object might be executed automatically, without explicit intervention by the application, or it might
need to be explicitly executed. If the association is made using the dataset component’s UpdateObject
property, the update object will automatically be executed. If the association is made with the update
object’s DataSet property, you must execute the update object programmatically.
The following sections explain the process of associating update objects with update dataset components
in greater detail, along with suggestions about when each method should be used and effects on update
execution.
To associate an update object with a dataset so that it executes automatically, set the dataset component’s
UpdateObject property to the update object:
IBQuery1.UpdateObject := IBUpdateSQL1;
The update SQL statements in the update object are automatically executed when the ApplyUpdates
method of the update dataset is called. The update object is invoked for each record that requires updat-
ing. Do not call the ExecSQL method of the update object in a handler for the OnUpdateRecord event as this
will result in a second attempt to apply each record’s update.
If you supply a handler for the OnUpdateRecord event of the dataset, the minimum action that you need
to take in that handler is setting the UpdateAction parameter of the event handler to uaApplied. You may
optionally perform data validation, data modification, or other operations like setting parameter values.
IBUpdateSQL1.DataSet := IBQuery1;
The update SQL statements in the update object are not automatically executed when the ApplyUpdates
method of the update dataset is called. To update records, you must supply a handler for the OnUpdateRe-
cord event of the dataset component and call the ExecSQL or Apply methods of the update object. This
invokes the update object for each record that requires updating.
In the handler for the OnUpdateRecord event of the dataset, the minimal actions that you need to take in
that handler are:
• Calling the SetParams method of the update object (if you later call ExecSQL).
• Executing the update object for the current record with ExecSQL or Apply.
• Setting the UpdateAction parameter of the event handler to uaApplied.
You may optionally perform data validation, data modification, or other operations that depend on each
record’s update.
NOTE
It is also possible to have one update object associated with the dataset using the UpdateObject property of the dataset
component, and the second and subsequent update objects associated using their DataSet properties. The first update
object is executed automatically on calling the ApplyUpdates method of the dataset component. The rest need to be
manually executed.
As the update for each record is applied, one of the four SQL statements is executed against the base
table updated. Which SQL statement is executed depends on the UpdateKind parameter automatically
generated for each record’s update.
Creating the SQL statements for update objects can be done at design time or at runtime. The sections
that follow describe the process of creating update SQL statements in greater detail.
The Update SQL editor has two pages. The Options page is visible when you first invoke the editor. Use
the Table Name combo box to select the table to update. When you specify a table name, the Key Fields
and Update Fields list boxes are populated with available columns.
The Update Fields list box indicates which columns should be updated. When you first specify a table, all
columns in the Update Fields list box are selected for inclusion. You can multi-select fields as desired.
The Key Fields list box is used to specify the columns to use as keys during the update. Instead of setting
Key Fields you can click the Primary Keys button to choose key fields for the update based on the table’s
primary index. Click Dataset Defaults to return the selection lists to the original state: all fields selected as
keys and all selected for update.
Check the Quote Field Names check box if your server requires quotation marks around field names.
After you specify a table, select key columns, and select update columns, click Generate SQL to generate the
preliminary SQL statements to associate with the update component’s ModifySQL, InsertSQL, RefreshSQL,
and DeleteSQL properties. In most cases you may want or need to fine tune the automatically generated
SQL statements.
To view and modify the generated SQL statements, select the SQL page. If you have generated SQL state-
ments, then when you select this page, the statement for the ModifySQL property is already displayed in
the SQL Text memo box. You can edit the statement in the box as desired.
IMPORTANT
Keep in mind that generated SQL statements are starting points for creating update statements. You may need to modify
these statements to make them execute correctly. For example, when working with data that contains NULL values, you
need to modify the WHERE clause to test with the IS NULL predicate rather than using the generated field variable.
Test each of the statements directly yourself before accepting them.
Use the Statement Type radio buttons to switch among generated SQL statements and edit them as de-
sired.
To accept the statements and associate them with the update component’s SQL properties, click OK.
When the parameter name matches a column name in the table, the new value in the field in the cached
update for the record is automatically used as the value for the parameter. When the parameter name
matches a column name prefixed by the string “OLD_”, then the old value for the field will be used. For
example, in the update SQL statement below, the parameter :LastName is automatically filled with the new
field value in the cached update for the inserted record.
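For instance, an InsertSQL statement of the following form (the Customers table and its columns are
illustrative):

INSERT INTO Customers (LastName, FirstName)
VALUES (:LastName, :FirstName)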
New field values are typically used in the InsertSQL and ModifySQL statements. In an update for a modified
record, the new field value from the update cache is used by the UPDATE statement to replace the old field
value in the base table updated.
In the case of a deleted record, there are no new values, so the DeleteSQL property uses the “:OLD_Field-
Name” syntax. Old field values are also normally used in the WHERE clause of the SQL statement for a mod-
ified or deletion update to determine which record to update or delete.
In the WHERE clause of an UPDATE or DELETE update SQL statement, supply at least the minimal number of
parameters to uniquely identify the record in the base table that is updated with the cached data. For
instance, in a list of customers, using just a customer’s last name may not be sufficient to uniquely identify
the correct record in the base table; there may be a number of records with “Smith” as the last name. But the
combination of last name, first name, and phone number parameters might be distinctive enough.
Even better would be a unique field value like a customer number.
For more information about old and new value parameter substitution, see Creating an OnUpdateRecord
Event Handler in the Delphi Developer's Guide.
The DeleteSQL property should contain only a SQL statement with the DELETE command. The base table to
be updated must be named in the FROM clause. So that the SQL statement only deletes the record in the base
table that corresponds to the record deleted in the update cache, use a WHERE clause. In the WHERE clause,
use a parameter for one or more fields to uniquely identify the record in the base table that corresponds
to the cached update record. If the parameters are named the same as the fields and prefixed with “OLD_”,
the parameters are automatically given the values from the corresponding fields of the cached update
record. If the parameters are named in any other manner, you must supply the parameter values.
Some table types might not be able to find the record in the base table when fields used to identify the
record contain NULL values. In these cases, the delete update fails for those records. To accommodate this,
add a condition for those fields that might contain NULLs using the IS NULL predicate (in addition to a
condition for a non-NULL value). For example, when a FirstName field may contain a NULL value, the
DeleteSQL statement might read as in the following sketch (the Names table is illustrative):
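DELETE FROM Names
WHERE (LastName = :OLD_LastName) AND
  ((FirstName = :OLD_FirstName) OR (FirstName IS NULL))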
The InsertSQL statement should contain only a SQL statement with the INSERT command. The base table
to be updated must be named in the INTO clause. In the VALUES clause, supply a comma-separated list of
parameters. If the parameters are named the same as the fields, the parameters are automatically given the
values from the cached update record. If the parameters are named in any other manner, you must supply
the parameter values. The list of parameters supplies the values for fields in the newly inserted record.
There must be as many value parameters as there are fields listed in the statement.
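For example, a sketch reusing the Inventory table from the ModifySQL example later in this section:

INSERT INTO Inventory (ItemNo, Amount)
VALUES (:ItemNo, :Amount)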
The RefreshSQL statement should contain only a SQL statement with the SELECT command. The base table
to be updated must be named in the FROM clause. If the parameters are named the same as the fields, the
parameters are automatically given the values from the cached update record. If the parameters are named
in any other manner, you must supply the parameter values.
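For example, a sketch:

SELECT ItemNo, Amount
FROM Inventory
WHERE ItemNo = :ItemNo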
The ModifySQL statement should contain only a SQL statement with the UPDATE command. The base table
to be updated must be named in the UPDATE clause. Include one or more value assignments in the SET clause.
If values in the SET clause assignments are parameters named the same as fields, the parameters are
automatically given values from the fields of the same name in the updated record in the cache. You can
assign additional field values using other parameters, as long as the parameters are not named the same
as any fields and you manually supply the values. As with the DeleteSQL statement, supply a WHERE clause
to uniquely identify the record in the base table to be updated using parameters named the same as the
fields and prefixed with “OLD_”. In the update statement below, the parameter :ItemNo is automatically
given a value and :Price is not.
UPDATE Inventory I
SET I.ItemNo = :ItemNo, Amount = :Price
WHERE (I.ItemNo = :OLD_ItemNo)
Considering the above update SQL, take an example case where the application end-user modifies an
existing record. The original value for the ItemNo field is 999. In a grid connected to the cached dataset,
the end-user changes the ItemNo field value to 123 and Amount to 20. When the ApplyUpdates method
is invoked, this SQL statement affects all records in the base table where the ItemNo field is 999, using the
old field value in the parameter :OLD_ItemNo. In those records, it changes the ItemNo field value to 123
(using the parameter :ItemNo, the value coming from the grid) and Amount to 20.
The statement below uses the UpdateKind constant ukDelete with the Query property to access the
DeleteSQL property.
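A sketch of such an access; here the DELETE text is simply copied into a Memo:

with IBUpdateSQL1 do
  Memo1.Lines.Assign(Query[ukDelete].SQL); { same statements as DeleteSQL }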
Normally, the properties indexed by the Query property are set at design time using the Update SQL editor.
You might, however, need to access these values at runtime if you are generating a unique update SQL
statement for each record and not using parameter binding. The following example generates a unique
Query property value for each row updated:
ukDelete:
{...}
end;
end;
UpdateAction := uaApplied;
end;
NOTE
UpdateObject returns a value of type TIBDataSetUpdateObject. To treat this return value as a TIBUpdateSQL component,
and use properties and methods specific to TIBUpdateSQL, typecast the UpdateObject property. For example:
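(IBQuery1.UpdateObject as TIBUpdateSQL).Apply(ukModify); { a sketch; assumes IBQuery1 has an update object assigned }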
For an example of using this property, see Calling the SetParams Method.
Below, the third line of a SQL statement is altered using an index of 2 with the ModifySQL property.
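A sketch; the replacement WHERE text is illustrative:

IBUpdateSQL1.ModifySQL[2] := 'WHERE ItemNo = :OLD_ItemNo';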
When it executes, the update object performs two steps:
1. Values for the record are bound to the parameters in the appropriate update SQL statement.
2. The SQL statement is executed.
Call the Apply method to apply the update for the current record in the update cache. Only use Apply when
the update object is not associated with the dataset using the dataset component’s UpdateObject property,
in which case the update object is not automatically executed. Apply automatically calls the SetParams
method to bind old and new field values to specially named parameters in the update SQL statement.
Do not call SetParams yourself when using Apply. The Apply method is most often called from within a
handler for the dataset’s OnUpdateRecord event.
If you use the dataset component’s UpdateObject property to associate dataset and update object, this
method is called automatically. Do not call Apply in a handler for the dataset component’s OnUpdateRecord
event as this will result in a second attempt to apply the current record’s update.
In a handler for the OnUpdateRecord event, the UpdateKind parameter is used to determine which update
SQL statement to use. If invoked by the associated dataset, the UpdateKind is set automatically. If you invoke
the method in an OnUpdateRecord event, pass an UpdateKind constant as the parameter of Apply.
If an exception is raised during the execution of the update program, execution continues in the OnUpda-
teError event, if it is defined.
NOTE
The operations performed by Apply are analogous to the SetParams and ExecSQL methods described in the following
sections.
SetParams sets the parameters of the SQL statement indicated by the UpdateKind parameter. Only those
parameters that use a special naming convention automatically have a value assigned. If the parameter has
the same name as a field, or the same name as a field prefixed with “OLD_”, the parameter is automatically
assigned a value. Parameters named in other ways must be manually assigned values. For more information, see the
section Understanding Parameter Substitution in Update SQL Statements.
{ an OnUpdateRecord handler; IBUpdateSQL1 is the associated update object }
procedure TForm1.EmpAuditUpdateRecord(DataSet: TDataSet;
  UpdateKind: TUpdateKind; var UpdateAction: TIBUpdateAction);
begin
  with IBUpdateSQL1 do
  begin
    SetParams(UpdateKind);
    if UpdateKind = ukModify then
      Query[UpdateKind].ParamByName('DateChanged').Value := Now;
    ExecSQL(UpdateKind);
  end;
  UpdateAction := uaApplied;
end;
This example assumes that the ModifySQL property for the update component is as follows:
UPDATE EmpAudit
SET EmpNo = :EmpNo, Salary = :Salary, Changed = :DateChanged
WHERE EmpNo = :OLD_EmpNo
In this example, the call to SetParams supplies values to the EmpNo and Salary parameters. The DateChanged
parameter is not set because the name does not match the name of a field in the dataset, so the next
line of code sets this value explicitly.
1. Values for the record are bound to the parameters in the appropriate update SQL statement.
2. The SQL statement is executed.
Call the ExecSQL method to apply the update for the current record in the update cache. Only use ExecSQL
when the update object is not associated with the dataset using the dataset component’s UpdateObject
property, in which case the update object is not automatically executed. ExecSQL does not automatically
call the SetParams method to bind update SQL statement parameter values; call SetParams yourself before
invoking ExecSQL. The ExecSQL method is most often called from within a handler for the dataset’s OnUp-
dateRecord event.
If you use the dataset component’s UpdateObject property to associate dataset and update object, this
method is called automatically. Do not call ExecSQL in a handler for the dataset component’s OnUpdateRecord
event as this will result in a second attempt to apply the current record’s update.
In a handler for the OnUpdateRecord event, the UpdateKind parameter is used to determine which update
SQL statement to use. If invoked by the associated dataset, the UpdateKind is set automatically. If you invoke
the method in an OnUpdateRecord event, pass an UpdateKind constant as the parameter of ExecSQL.
If an exception is raised during the execution of the update program, execution continues in the OnUpda-
teError event, if it is defined.
NOTE
The operations performed by ExecSQL and SetParams are analogous to the Apply method described previously.
In a handler for the dataset component’s OnUpdateRecord event, use the properties and methods of
another dataset component to apply the cached updates for each record.
For example, the following code uses a table component to perform updates:
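A minimal sketch, assuming an UpdateTable component (a TIBTable pointed at the base table) and a
CustNo key column; all names here are hypothetical:

procedure TForm1.CustomersQueryUpdateRecord(DataSet: TDataSet;
  UpdateKind: TUpdateKind; var UpdateAction: TIBUpdateAction);
begin
  case UpdateKind of
    ukInsert:
      begin
        UpdateTable.Insert;
        UpdateTable.FieldByName('CustNo').Value :=
          DataSet.FieldByName('CustNo').NewValue;
        UpdateTable.Post;
      end;
    ukModify:
      if UpdateTable.Locate('CustNo',
           DataSet.FieldByName('CustNo').OldValue, []) then
      begin
        UpdateTable.Edit;
        UpdateTable.FieldByName('Company').Value :=
          DataSet.FieldByName('Company').NewValue;
        UpdateTable.Post;
      end;
    ukDelete:
      if UpdateTable.Locate('CustNo',
           DataSet.FieldByName('CustNo').OldValue, []) then
        UpdateTable.Delete;
  end;
  UpdateAction := uaApplied;
end;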
In many circumstances, you may also want to write an OnUpdateRecord event handler for the dataset.
Providing a handler for the OnUpdateRecord event allows you to perform actions just before the current
record’s update is actually applied. Such actions can include special data validation, updating other tables,
or executing multiple update objects. A handler for the OnUpdateRecord event affords you greater control
over the update process.
The sections that follow describe when you might need to provide a handler for the OnUpdateRecord event
and how to create a handler for this event.
For example, you might want to use the OnUpdateRecord event to provide validation routines that adjust data
before it is applied to the table, or you might want to use the OnUpdateRecord event to provide additional
processing for records in master and detail tables before writing them to the base tables.
In many cases you must provide additional processing. For example, if you access multiple tables using a
joined query, then you must provide one TIBUpdateSQL object for each table in the query, and you must
use the OnUpdateRecord event to make sure each update object is executed to write changes to the tables.
The following sections describe how to create and use an TIBUpdateSQL object and how to create and use
an OnUpdateRecord event.
The UpdateKind parameter indicates the type of update to perform. Values for UpdateKind are ukModify,
ukInsert, and ukDelete. When using an update component, you need to pass this parameter to its exe-
cution and parameter binding methods. For example, using ukModify with the Apply method executes the
update object’s ModifySQL statement. You may also need to inspect this parameter if your handler performs
any special processing based on the kind of update to perform.
The UpdateAction parameter indicates if you applied an update or not. Values for UpdateAction are uaFail
(the default), uaAbort, uaSkip, uaRetry, uaApplied. Unless you encounter a problem during updating, your
event handler should set this parameter to uaApplied before exiting. If you decide not to update a particular
record, set the value to uaSkip to preserve unapplied changes in the cache.
If you do not change the value for UpdateAction, the entire update operation for the dataset is aborted.
For more information about UpdateAction, see Specifying the Action to Take.
In addition to these parameters, you will typically want to make use of the OldValue and NewValue properties
for the field component associated with the current record. For more information about OldValue and
NewValue see “Accessing a field’s OldValue, NewValue, and CurValue properties” in the Delphi Developer’s
Guide.
IMPORTANT
The OnUpdateRecord event, like the OnUpdateError and OnCalcFields event handlers, should never call any methods
that change which record in a dataset is the current record.
Here is an OnUpdateRecord event handler that executes two update components using their Apply methods.
The UpdateKind parameter is passed to the Apply method to determine which update SQL statement in
each update object to execute.
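A sketch of such a handler; CustomerUpdateSQL and OrdersUpdateSQL are hypothetical update objects
associated through their DataSet properties:

procedure TForm1.QueryUpdateRecord(DataSet: TDataSet;
  UpdateKind: TUpdateKind; var UpdateAction: TIBUpdateAction);
begin
  CustomerUpdateSQL.Apply(UpdateKind);
  OrdersUpdateSQL.Apply(UpdateKind);
  UpdateAction := uaApplied;
end;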
In this example the DataSet parameter is not used. This is because the update components are not asso-
ciated with the dataset component using its UpdateObject property.
A dataset component’s OnUpdateError event enables you to catch and respond to errors. You should create
a handler for this event if you use cached updates. If you do not, and an error occurs, the entire update
operation fails.
IMPORTANT
Do not call any dataset methods that change the current record (such as Next and Prior) in an OnUpdateError event
handler. Doing so causes the event handler to enter an endless loop.
The following sections describe specific aspects of error handling using an OnUpdateError handler, and how
the event’s parameters are used.
UpdateKind values
Value Meaning
ukModify Editing an existing record caused an error
ukInsert Inserting a new record caused an error
ukDelete Deleting an existing record caused an error
The example below shows the decision construct to perform different operations based on the value of
the UpdateKind parameter.
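A sketch of that construct:

case UpdateKind of
  ukModify: ; { respond to an error on a modified record }
  ukInsert: ; { respond to an error on an inserted record }
  ukDelete: ; { respond to an error on a deleted record }
end;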
UpdateAction values
Value Meaning
uaAbort Aborts the update operation without displaying an error message
uaFail Aborts the update operation, and displays an error message; this is the default value for UpdateAction
when you enter an update error handler
uaSkip Skips updating the row, but leaves the update for the record in the cache
uaRetry Repeats the update operation; correct the error condition before setting UpdateAction to this value
uaApplied Not used in error handling routines
If your error handler can correct the error condition that caused the handler to be invoked, set UpdateAction
to the appropriate action to take on exit. For error conditions you correct, set UpdateAction to uaRetry to
apply the update for the record again.
When set to uaSkip, the update for the row that caused the error is skipped, and the update for the record
remains in the cache after all other updates are completed.
Both uaFail and uaAbort cause the entire update operation to end. uaFail raises an exception, and displays
an error message. uaAbort raises a silent exception (does not display an error message).
NOTE
If an error occurs during the application of cached updates, an exception is raised and an error message is displayed.
Unless ApplyUpdates is called from within a try...except construct, an error message displayed from inside
your OnUpdateError event handler may cause your application to display the same error message twice. To prevent
error message duplication, set UpdateAction to uaAbort to turn off the system-generated error message display.
The uaApplied value should only be used inside an OnUpdateRecord event. Do not set this value in an update
error handler. For more information about update record events, see Creating an OnUpdateRecord Event
Handler.
Understanding Datasets
In Delphi, the fundamental unit for accessing data is the dataset family of objects. Your application uses
datasets for all database access. Generally, a dataset object represents a specific table belonging to a
database, or it represents a query or stored procedure that accesses a database.
All dataset objects that you will use in your database applications descend from the virtualized dataset
object, TDataSet, and they inherit data fields, properties, events, and methods from TDataSet. This chapter
describes the functionality of TDataSet that is inherited by the dataset objects you will use in your database
applications. You need to understand this shared functionality to use any dataset object.
The following figure illustrates the hierarchical relationship of all the dataset components:
1. What is TDataSet?
TDataSet is the ancestor for all the dataset objects that you use in your applications. It defines a set of
data fields, properties, events, and methods that are shared by all dataset objects. TDataSet is a virtualized
dataset, meaning that many of its properties and methods are virtual or abstract. A virtual method is a
function or procedure declaration where the implementation of that method can be (and usually is) over-
ridden in descendant objects. An abstract method is a function or procedure declaration without an actual
implementation. The declaration is a prototype that describes the method (and its parameters and return
type, if any) that must be implemented in all descendant dataset objects, but that might be implemented
differently by each of them.
Because TDataSet contains abstract methods, you cannot use it directly in an application without generating
a runtime error. Instead, you either create instances of TDataSet’s descendants, such as TIBCustomDataSet,
TIBDataSet, TIBTable, TIBQuery, TIBStoredProc, and TClientDataSet, and use them in your application, or
you derive your own dataset object from TDataSet or its descendants and write implementations for all
its abstract methods.
Nevertheless, TDataSet defines much that is common to all dataset objects. For example, TDataSet defines
the basic structure of all datasets: an array of TField components that correspond to actual columns in
one or more database tables, lookup fields provided by your application, or calculated fields provided by
your application. For more information about TField components, see “Working with field components”
in the Delphi Developer’s Guide.
There are two ways to open a dataset:
• Set the Active property of the dataset to True, either at design time in the Object Inspector, or in
code at runtime:
IBTable.Active := True;
• Call the Open method for the dataset at runtime:
IBQuery.Open;
There are two ways to close a dataset:
• Set the Active property of the dataset to False, either at design time in the Object Inspector, or in
code at runtime:
IBQuery.Active := False;
• Call the Close method for the dataset at runtime:
IBTable.Close;
You may need to close a dataset when you want to change certain of its properties, such as TableName
on a TIBTable component. At runtime, you may also want to close a dataset for other reasons specific
to your application.
When an application opens a dataset, it appears by default in dsBrowse mode. The state of a dataset
changes as an application processes data. An open dataset changes from one state to another based on
either the code in your application, or the built-in behavior of data-related components.
To put a dataset into dsBrowse, dsEdit, or dsInsert states, call the method corresponding to the name of
the state. For example, the following code puts IBTable into dsInsert state, accepts user input for a new
record, and writes the new record to the database:
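A sketch, assuming a modal input dialog named NewRecordDlg:

IBTable.Insert;                       { enter dsInsert state }
if NewRecordDlg.ShowModal = mrOk then
  IBTable.Post                        { write the record; back to dsBrowse }
else
  IBTable.Cancel;                     { discard the record; back to dsBrowse }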
This example also illustrates that the state of a dataset automatically changes to dsBrowse when
• The Post method successfully writes a record to the database. (If Post fails, the dataset state remains
unchanged.)
• The Cancel method is called.
Some states cannot be set directly. For example, to put a dataset into dsInactive state, set its Active
property to False, or call the Close method for the dataset. The following statements are equivalent:
IBTable.Active := False;
IBTable.Close;
The remaining states (dsCalcFields, dsCurValue, dsNewValue, dsOldValue, and dsFilter) cannot be set by
your application. Instead, the state of the dataset changes automatically to these values as necessary. For
example, dsCalcFields is set when a dataset’s OnCalcFields event is called. When the OnCalcFields event
finishes, the dataset is restored to its previous state.
Whenever a dataset state changes, the OnStateChange event is called for any data source components
associated with the dataset. For more information about data source components and OnStateChange, see
the Using Data Controls chapter of the Delphi Developer's Guide.
The following sections provide overviews of each state, how and when states are set, how states relate to
one another, and where to go for related information, if applicable.
3.1. Deactivating a Dataset
A dataset is inactive when it is closed. You cannot access records in a closed dataset. At design time, a
dataset is closed until you set its Active property to True. At runtime, a dataset is initially closed until an
application opens it by calling the Open method, or by setting the Active property to True.
When you open an inactive dataset, its state automatically changes to the dsBrowse state. The following
figure illustrates the relationship between these states and the methods that set them.
To make a dataset inactive, call its Close method. You can write BeforeClose and AfterClose event handlers
that respond to the Close method for a dataset. For example, if a dataset is in dsEdit or dsInsert modes
when an application calls Close, you should prompt the user to post pending changes or cancel them
before closing the dataset. The following code illustrates such a handler:
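A sketch of such a handler:

procedure TForm1.IBTableBeforeClose(DataSet: TDataSet);
begin
  if DataSet.State in [dsEdit, dsInsert] then
    if MessageDlg('Post pending changes?', mtConfirmation,
        [mbYes, mbNo], 0) = mrYes then
      DataSet.Post
    else
      DataSet.Cancel;
end;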
To associate a procedure with the BeforeClose event for a dataset at design time, select the dataset component,
click the Events page of the Object Inspector, and double-click the BeforeClose entry (or choose an existing
handler from its drop-down list).
3.2. Browsing a Dataset
When an application opens a dataset, the dataset automatically enters dsBrowse state. Browsing enables
you to view records in a dataset, but you cannot edit records or insert new records. You mainly use dsBrowse
to scroll from record to record in a dataset. For more information about scrolling from record to record,
see Navigating Datasets.
From dsBrowse all other dataset states can be set. For example, calling the Insert or Append methods for a
dataset changes its state from dsBrowse to dsInsert (note that other factors and dataset properties, such
as CanModify, may prevent this change). For more information about inserting and appending records in
a dataset, see Modifying Dataset Data.
Two methods associated with all datasets can return a dataset to dsBrowse state. Cancel ends the current
edit, insert, or search task, and always returns a dataset to dsBrowse state. Post attempts to write changes
to the database, and if successful, also returns a dataset to dsBrowse state. If Post fails, the current state
remains unchanged.
The following figure illustrates the relationship of dsBrowse both to the other dataset modes you can set
in your applications, and the methods that set those modes.
On forms in your application, some data-aware controls can automatically put a dataset into dsEdit state if the
dataset’s CanModify property is True and the AutoEdit property of the associated data source is True.
IMPORTANT
For TIBTable components, if the ReadOnly property is True, CanModify is False, preventing editing of records.
NOTE
Even if a dataset is in dsEdit state, editing records will not succeed for InterBase databases if your application user
does not have proper SQL access privileges.
You can return a dataset from dsEdit state to dsBrowse state in code by calling the Cancel, Post, or Delete
methods. Cancel discards edits to the current field or record. Post attempts to write a modified record to
the dataset, and if it succeeds, returns the dataset to dsBrowse. If Post cannot write changes, the dataset
remains in dsEdit state. Delete attempts to remove the current record from the dataset, and if it succeeds,
returns the dataset to dsBrowse state. If Delete fails, the dataset remains in dsEdit state.
Data-aware controls for which editing is enabled automatically call Post when a user executes any action
that changes the current record (such as moving to a different record in a grid) or that causes the control
to lose focus (such as moving to a different control on the form).
For a complete discussion of editing fields and records in a dataset, see Modifying Dataset Data.
On forms in your application, the data-aware grid and navigator controls can put a dataset into dsInsert
state if the dataset’s CanModify property is True, for example when the user presses the Insert key in a grid
or clicks the Insert button on a navigator.
IMPORTANT
For TIBTable components, if the ReadOnly property is True, CanModify is False, preventing editing of records.
NOTE
Even if a dataset is in dsInsert state, inserting records will not succeed for InterBase databases if your application user
does not have proper SQL access privileges.
You can return a dataset from dsInsert state to dsBrowse state in code by calling the Cancel, Post, or Delete
methods. Delete and Cancel discard the new record. Post attempts to write the new record to the dataset,
and if it succeeds, returns the dataset to dsBrowse. If Post cannot write the record, the dataset remains
in dsInsert state.
Data-aware controls for which inserting is enabled automatically call Post when a user executes any action
that changes the current record (such as moving to a different record in a grid).
For more discussion of inserting and appending records in a dataset, see Modifying Dataset Data.
3.5. Calculating Fields
Delphi puts a dataset into dsCalcFields mode whenever an application calls the dataset’s OnCalcFields
event handler. This state prevents modifications or additions to the records in a dataset except for the
calculated fields the handler is designed to modify. The reason all other modifications are prevented is
because OnCalcFields uses the values in other fields to derive values for calculated fields. Changes to those
other fields might otherwise invalidate the values assigned to calculated fields.
When the OnCalcFields handler finishes, the dataset is returned to dsBrowse state.
For more information about creating calculated fields and OnCalcFields event handlers, see Using OnCal-
cFields.
3.6. Updating Records
When performing cached update operations, Delphi may put the dataset into dsNewValue, dsOldValue,
or dsCurValue states temporarily. These states indicate that the corresponding properties of a field com-
ponent (NewValue, OldValue, and CurValue, respectively) are being accessed, usually in an OnUpdateError
event handler. Your applications cannot see or set these states. For more information about using cached
updates, see Working with Cached Updates.
4. Navigating Datasets
For information on navigating datasets, refer to Navigating Datasets in the Delphi Developer's Guide.
5. Searching Datasets
For information on searching datasets, refer to Searching Datasets in the Delphi Developer's Guide.
Dataset events
Event Description
BeforeOpen, AfterOpen Called before/after a dataset is opened.
BeforeClose, AfterClose Called before/after a dataset is closed.
BeforeInsert, AfterInsert Called before/after a dataset enters Insert state.
BeforeEdit, AfterEdit Called before/after a dataset enters Edit state.
BeforePost, AfterPost Called before/after changes to a table are posted.
BeforeCancel, AfterCancel Called before/after the previous state is canceled.
BeforeDelete, AfterDelete Called before/after a record is deleted.
OnNewRecord Called when a new record is created; used to set default values.
OnCalcFields Called when calculated fields are calculated.
For more information about events for the TIBCustomDataSet component, see the online VCL Reference.
7.1. Aborting a Method
To abort a method such as an Open or Insert, call the Abort procedure in any of the Before event handlers
(BeforeOpen, BeforeInsert, and so on). For example, the following code requests a user to confirm a delete
operation or else it aborts the call to Delete:
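A minimal sketch of such a BeforeDelete handler (the form, dataset, and message text are illustrative names,
not taken from the original example):
procedure TForm1.CustomersTableBeforeDelete(DataSet: TDataSet);
begin
if MessageDlg('Delete this record?', mtConfirmation, [mbYes, mbNo], 0) <> mrYes then
Abort; { silently cancels the pending call to Delete }
end;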
7.2. Using OnCalcFields
The OnCalcFields event is used to set the values of calculated fields. The AutoCalcFields property deter-
mines when OnCalcFields is called. If AutoCalcFields is True, then OnCalcFields is called when
• A dataset is opened.
• Focus moves from one visual component to another, or from one column to another in a data-aware
grid control and the current record has been modified.
• A record is retrieved from the database.
When AutoCalcFields is True, OnCalcFields is also called whenever a value in a non-calculated field
changes.
IMPORTANT
OnCalcFields is called frequently, so the code you write for it should be kept short. Also, if AutoCalcFields is True,
OnCalcFields should not perform any actions that modify the dataset (or the linked dataset if it is part of a master-detail
relationship), because this can lead to recursion. For example, if OnCalcFields performs a Post, and AutoCalcFields is
True, then OnCalcFields is called again, leading to another Post, and so on.
If AutoCalcFields is False, then OnCalcFields is not called when individual fields within a single record are
modified.
When OnCalcFields executes, a dataset is in dsCalcFields mode, so you cannot set the values of any fields
other than calculated fields. After OnCalcFields is completed, the dataset returns to dsBrowse state.
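As an illustration, the following sketch derives a calculated TOTAL_PRICE field from two stored fields; all
field and component names here are assumptions:
procedure TForm1.OrdersTableCalcFields(DataSet: TDataSet);
begin
{ TOTAL_PRICE is defined as a calculated field; QUANTITY and UNIT_PRICE are stored fields }
DataSet.FieldByName('TOTAL_PRICE').AsFloat :=
DataSet.FieldByName('QUANTITY').AsInteger * DataSet.FieldByName('UNIT_PRICE').AsFloat;
end;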
Cached updating is implemented in TIBCustomDataSet, which provides the properties, events, and methods
that control cached updating.
Using cached updates and coordinating them with other applications that access data in a multi-user
environment is an advanced topic that is fully covered in Working with Cached Updates.
A query component encapsulates an SQL statement that is used in a client application to retrieve, insert,
update, and delete data from one or more database tables. Query components can be used with remote
database servers and with ODBC-compliant databases.
Chances are you have also used range methods and a filter property of TIBTable to limit the number of
records available at any given time in your applications. Applying a range temporarily limits data access to
a block of contiguously indexed records that fall within prescribed boundary conditions, such as returning
all records for employees whose last names are greater than or equal to “Jones” and less than or equal to
“Smith.” Setting a filter temporarily restricts data access to a set of records that is usually non-contiguous
and that meets filter criteria, such as returning only those customer records that have a California mailing
address.
A query behaves in many ways very much like a table filter, except that you use the SQL property of the
query component (and sometimes the Params property) to identify the records in a dataset to retrieve,
insert, delete, or update. In some ways a query is even more powerful than a filter because it lets you access:
• Multiple tables simultaneously (a “join” in SQL).
• A specified subset of rows and columns in its underlying table(s), rather than always returning all of them.
Queries can be verbatim, or they can contain replaceable parameters. Queries that use parameters are
called parameterized queries. When you use parameterized queries, the actual values assigned to the
parameters are inserted into the query before you execute, or run, the query. Using parameterized queries
is very flexible, because you can change a user’s view of and access to data on the fly at runtime without
having to alter the SQL statement.
Most often you use queries to select the data that a user should see in your application, just as you do when
you use a table component. Queries, however, can also perform update, insert, and delete operations
as well as retrieving records for display. When you use a query to perform insert, update, and delete
operations, the query ordinarily does not return records for viewing.
To learn more about using the SQL property to write a SQL statement, see Specifying the SQL statement to
execute. To learn more about using parameters in your SQL statements, see Setting parameters. To learn
about executing a query, see Executing a query.
The SQL statement and its parameters are the most important parts of a query component. The query
component’s SQL property is used to provide the SQL statement to use for data access, and the compo-
nent’s Params property is an optional array of parameters to bind into the query. However, a query com-
ponent is much more than a SQL statement and its parameters. A query component is also the interface
between your client application and the server.
A client application uses the properties and methods of a query component to manipulate a SQL statement
and its parameters, to specify the database to query, to prepare and unprepare queries with parameters,
and to execute the query. A query component’s methods communicate with the database server.
To learn more about using the SQL property to write a SQL statement, see Specifying the SQL statement
to execute. To learn more about using parameters in your SQL statements, see Setting parameters. To
learn about preparing a query, see Preparing a query, and to learn more about executing a query, see
Executing a query.
Use TIBDataSet or TIBQuery when you require use of data-aware components or a scrollable result set. In
any other case, it is probably best to use TIBSQL, which requires much less overhead.
To use a query component at design time, follow these steps:
1. Place a query component from the InterBase tab of the Tool Palette in a data module, and set its
Name property appropriately for your application.
2. Set the Database property of the component to the name of the TIBDatabase component to query.
3. Set the Transaction property of the component to the name of the TIBTransaction component to
query.
4. Specify a SQL statement in the SQL property of the component, and optionally specify any parameters
for the statement in the Params property. For more information, see Specifying the SQL property at
design time.
5. If the query data is to be used with visual data controls, place a data source component from the
Data Access tab of the Tool Palette in the data module, and set its DataSet property to the name of
the query component. The data source component is used to return the results of the query (called
a result set) from the query to data-aware components for display. Connect data-aware components
to the data source using their DataSource and DataField properties.
6. Activate the query component. For queries that return a result set, use the Active property or the
Open method. For queries that only perform an action on a table and return no result set, use the
ExecSQL method.
To execute a query for the first time at runtime, set the query’s SQL statement and any parameter values,
and then call Open (for queries that return a result set) or ExecSQL (for queries that do not).
After you execute a query for the first time, then as long as you do not modify the SQL statement, an
application can repeatedly close and reopen or re-execute a query without preparing it again. For more
information about reusing a query, see Executing a query.
The SQL property is a TStrings object, which is an array of text strings and a set of properties, events, and
methods that manipulate them. The strings in SQL are automatically concatenated to produce the SQL
statement to execute. You can provide a statement in as few or as many separate strings as you desire.
One advantage to using a series of strings is that you can divide the SQL statement into logical units (for
example, putting the WHERE clause for a SELECT statement into its own string), so that it is easier to modify
and debug a query.
The SQL statement can be a query that contains hard-coded field names and values, or it can be a pa-
rameterized query that contains replaceable parameters that represent field values that must be bound
into the statement before it is executed. For example, this statement is hard-coded:
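For instance, a hard-coded statement against the example employee.gdb database might read (the table
and customer number here are illustrative):
SELECT * FROM CUSTOMER WHERE CUST_NO = 1234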
Hard-coded statements are useful when applications execute exact, known queries each time they run.
At design time or runtime you can easily replace one hard-code query with another hard-coded or pa-
rameterized query as needed. Whenever the SQL property is changed the query is automatically closed
and unprepared.
NOTE
In queries using local SQL, when column names in a query contain spaces or special characters, the column name must
be enclosed in quotes and must be preceded by a table reference and a period. For example, BIOLIFE."Species Name".
A parameterized query contains one or more placeholder parameters, application variables that stand in
for comparison values such as those found in the WHERE clause of a SELECT statement. Using parameterized
queries enables you to change the value without rewriting the application. Parameter values must be bound
into the SQL statement before it is executed for the first time. Query components do this automatically for
you even if you do not explicitly call the Prepare method before executing a query.
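A parameterized version of the same statement might read (again illustrative):
SELECT * FROM CUSTOMER WHERE CUST_NO = :Number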
The variable Number, indicated by the leading colon, is a parameter that fills in for a comparison value
that must be provided at runtime and that may vary each time the statement is executed. The actual value
for Number is provided in the query component’s Params property.
TIP
It is a good programming practice to provide variable names for parameters that correspond to the actual name of
the column with which it is associated. For example, if a column name is “Number,” then its corresponding parameter
would be “:Number”. Using matching names ensures that if a query uses its DataSource property to provide values for
parameters, it can match the variable name to valid field names.
At design time, you can enter a SQL statement in the String List editor, which you invoke by clicking the
ellipsis button for the SQL property in the Object Inspector. You can enter the statement in as many or as
few lines as you want; entering a statement on multiple lines, however, makes it easier to read, change,
and debug. Choose OK to assign the text you enter to the SQL property.
Normally, the SQL property can contain only one complete SQL statement at a time, although these
statements can be as complex as necessary (for example, a SELECT statement with a WHERE clause that uses
several nested logical operators such as AND and OR). InterBase supports “batch” syntax so you can enter
multiple statements in the SQL property.
NOTE
You can use the SQL Builder to construct a query based on a visible representation of tables and fields in a database. To
use the SQL Builder, select a query component, right-click it to invoke the context menu, and choose Graphical Query
Editor. To learn how to use the SQL Builder, open it and use its online help.
To modify the SQL property at runtime, follow these steps:
1. Call Close to deactivate the query. Even though an attempt to modify the SQL property automatically
deactivates the query, it is a good safety measure to do so explicitly.
2. If you are replacing the whole SQL statement, call the Clear method for the SQL property to delete
its current SQL statement.
3. If you are building the whole SQL statement from nothing or adding a line to an existing statement,
call the Add method for the SQL property to insert and append one or more strings to the SQL property
to create a new SQL statement. If you are modifying an existing line use the SQL property with an
index to indicate the line affected, and assign the new value.
4. Call Open or ExecSQL to execute the query.
The following code illustrates building an entire SQL statement from nothing.
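A sketch of such code, assuming a query component named CustomerQuery and the illustrative CUSTOMER
table used above:
CustomerQuery.Close;
CustomerQuery.SQL.Clear;
CustomerQuery.SQL.Add('SELECT * FROM CUSTOMER');
CustomerQuery.SQL.Add('WHERE CUST_NO = 1234');
CustomerQuery.Open;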
The code below demonstrates modifying only a single line in an existing SQL statement. In this case, the
WHERE clause already exists on the second line of the statement. It is referenced via the SQL property using
an index of 1.
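A sketch, under the same assumptions as the previous example:
CustomerQuery.Close;
CustomerQuery.SQL[1] := 'WHERE CUST_NO = 5678'; { replace the WHERE clause held in line index 1 }
CustomerQuery.Open;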
NOTE
If a query uses parameters, you should also set their initial values and call the Prepare method before opening or
executing a query. Explicitly calling Prepare is most useful if the same SQL statement is used repeatedly; otherwise it
is called automatically by the query component.
The following code loads a SQL statement into the SQL property from a text file and executes the query:
CustomerQuery.Close;
CustomerQuery.SQL.LoadFromFile('c:\orders.txt');
CustomerQuery.Open;
NOTE
If the SQL statement contained in the file is a parameterized query, set the initial values for the parameters and call
Prepare before opening or executing the query. Explicitly calling Prepare is most useful if the same SQL statement is
used repeatedly; otherwise it is called automatically by the query component.
The following code assigns the contents of a memo component to the SQL property and executes the query:
CustomerQuery.Close;
CustomerQuery.SQL.Assign(Memo1.Lines);
CustomerQuery.Open;
NOTE
If the SQL statement is a parameterized query, set the initial values for the parameters and call Prepare before opening
or executing the query. Explicitly calling Prepare is most useful if the same SQL statement is used repeatedly; otherwise
it is called automatically by the query component.
6. Setting parameters
A parameterized SQL statement contains parameters, or variables, the values of which can be varied at
design time or runtime. Parameters can replace data values, such as those used in a WHERE clause for
comparisons, that appear in a SQL statement. Ordinarily, parameters stand in for data values passed to
the statement. For example, in the following INSERT statement, values to insert are passed as parameters:
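An illustrative statement of this kind (the COUNTRY table and column names are assumptions matching
the parameter names discussed below):
INSERT INTO COUNTRY (NAME, CAPITAL, POPULATION)
VALUES (:Name, :Capital, :Population)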
In this SQL statement, <:name>, <:capital>, and <:population> are placeholders for actual values supplied
to the statement at runtime by your application. Before a parameterized query is executed for the first
time, your application should call the Prepare method to bind the current values for the parameters to the
SQL statement. Binding means that the server allocates resources for the statement and its parameters
that improve the execution speed of the query.
with IBQuery1 do
begin
ParamByName('Name').AsString := 'Belize';
ParamByName('Capital').AsString := 'Belmopan';
ParamByName('Population').AsInteger := 240000;
Prepare;
Open;
end;
To access parameters at design time, select the query component and click the ellipsis button for its Params
property in the Object Inspector to open the parameter collection editor.
For queries that do not already contain parameters (the SQL property is empty or the existing SQL statement
has no parameters), the list of parameters in the collection editor dialog is empty. If parameters are already
defined for a query, then the parameter editor lists all existing parameters.
NOTE
The TIBQuery component shares the TParam object and its collection editor with a number of different components.
While the right-click context menu of the collection editor always contains the Add and Delete options, they are never
enabled for TIBQuery parameters. The only way to add or delete TIBQuery parameters is in the SQL statement itself.
As each parameter in the collection editor is selected, the Object Inspector displays the properties and
events for that parameter. Set the values for parameter properties and methods in the Object Inspector.
The DataType property lists the data type for the parameter selected in the editing dialog. Initially the type
will be ftUnknown. You must set a data type for each parameter.
The ParamType property lists the type of parameter selected in the editing dialog. Initially the type will be
ptUnknown. You must set a type for each parameter.
Use the Value property to specify a value for the selected parameter at design time. This is not mandatory
when parameter values are supplied at runtime. In these cases, leave Value blank.
At runtime, use the following to assign parameter values:
• The ParamByName method to assign a value to a parameter based on its name.
• The Params property to assign values to a parameter based on the parameter’s ordinal position within
the SQL statement.
• The Params.ParamValues property to assign values to one or more parameters in a single command line,
based on the name of each parameter set. This method uses variants and avoids the need to cast values.
NOTE
For all of the examples below, assume the SQL property contains the SQL statement below. All three pa-
rameters used are of data type ftString.
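A statement consistent with these parameter names might be (illustrative):
INSERT INTO COUNTRY (NAME, CAPITAL, CONTINENT)
VALUES (:Country, :Capital, :Continent)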
The following code uses ParamByName to assign the text of an edit box to the Capital parameter:
IBQuery1.ParamByName(‘Capital’).AsString := Edit1.Text;
The same code can be rewritten using the Params property, using an index of 1 (the Capital parameter is
the second parameter in the SQL statement):
IBQuery1.Params[1].AsString := Edit1.Text;
The command line below sets all three parameters at once, using the Params.ParamValues property:
IBQuery1.Params.ParamValues[‘Country;Capital;Continent’] :=
VarArrayOf([Edit1.Text, Edit2.Text, Edit3.Text]);
You can create a simple application to understand how to use the DataSource property to link a query in a
master-detail form. Suppose the data module for this application is called LinkModule, and that it contains
a query component called SalesQuery that has the following SQL property:
• A TIBDatabase component named SalesDatabase linked to the employee.gdb database, and a TIBTransac-
tion component named SalesTransaction; SalesQuery uses both components.
• A TDataSource component named SalesSource. The DataSet property of SalesSource points to Sa-
lesQuery.
Suppose, too, that this application has a form, named LinkedQuery, that contains two data grids, a
CustomersTable grid linked to CustomersSource, and a SalesQuery grid linked to SalesSource.
The following figure illustrates how this application appears at design time:
NOTE
If you build this application, create the table component and its data source before creating the query component.
If you compile this application, at runtime the :Cust_No parameter in the SQL statement for SalesQuery is
not assigned a value, so SalesQuery tries to match the parameter by name against a column in the table
pointed to by CustomersSource. CustomersSource gets its data from CustomersTable, which, in turn, derives
its data from the CUSTOMER table. Because CUSTOMER contains a column called “Cust_No,” the value
from the Cust_No field in the current record of the CustomersTable dataset is assigned to the :Cust_No
parameter for the SalesQuery SQL statement. The grids are linked in a master-detail relationship. At runtime,
each time you select a different record in the Customers Table grid, the SalesQuery SELECT statement
executes to retrieve all orders based on the current customer number.
7. Executing a query
After you specify a SQL statement in the SQL property and set any parameters for the query, you can
execute the query. When a query is executed, the server receives and processes SQL statements from your
application. If the query is against local tables, the SQL engine processes the SQL statement and, for a
SELECT query, returns data to the application.
NOTE
Before you execute a query for the first time, you may want to call the Prepare method to improve query performance.
Prepare initializes the database server, which allocates system resources for the query. For more information
about preparing a query, see Preparing a query.
The following sections describe executing both static and dynamic SQL statements at design time and
at runtime.
The results of the query, if any, are displayed in any data-aware controls associated with the query com-
ponent.
NOTE
The Active property can be used only with queries that return a result set, such as those produced by a SELECT statement.
• Open executes a query that returns a result set, such as with the SELECT statement.
• ExecSQL executes a query that does not return a result set, such as with the INSERT, UPDATE, or DELETE
statements.
NOTE
If you do not know at design time whether a query will return a result set at runtime, code both types of query execution
statements in a try...except block. Put a call to the Open method in the try clause. This allows you to suppress the error
message that would occur due to using an activate method not applicable to the type of SQL statement used. Check
the type of exception that occurs. If it is other than an ENoResultSet exception, the exception occurred for another reason
and must be processed. This works because an action query is still executed when the query is activated with the Open
method; the exception is raised in addition to the execution.
try
IBQuery2.Open;
except
on E: Exception do
if not (E is ENoResultSet) then
raise;
end;
1. Call Close to ensure that the query is not already open. If a query is already open you cannot open
it again without first closing it. Closing a query and reopening it fetches a new version of data from
the server.
2. Call Open to execute the query.
For example:
IBQuery.Close;
IBQuery.Open; { query returns a result set }
For information on navigating within a result set, see Disabling bi-directional cursors. For information on
editing and updating a result set, see Working with result sets.
To execute a query that does not return a result set, call the ExecSQL method instead of Open.
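A minimal sketch, assuming IBQuery now holds an action statement such as a DELETE:
IBQuery.Close;
IBQuery.ExecSQL; { query does not return a result set }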
8. Preparing a query
Preparing a query is an optional step that precedes query execution. Preparing a query submits the SQL
statement and its parameters, if any, for parsing, resource allocation, and optimization. The server, too, may
allocate resources for the query. These operations improve query performance, making your application
faster, especially when working with updatable queries.
An application can prepare a query by calling the Prepare method. If you do not prepare a query before
executing it, then Delphi automatically prepares it for you each time you call Open or ExecSQL. Even though
Delphi prepares queries for you, it is better programming practice to prepare a query explicitly. That way
your code is self-documenting, and your intentions are clear. For example:
CustomerQuery.Close;
if not (CustomerQuery.Prepared) then
CustomerQuery.Prepare;
CustomerQuery.Open;
This example checks the query component’s Prepared property to determine if a query is already prepared.
Prepared is a Boolean value that is True if a query is already prepared. If the query is not already prepared,
the example calls the Prepare method before calling Open.
To free the resources allocated for a previously prepared query, call the UnPrepare method:
CustomerQuery.UnPrepare;
When you change the text of the SQL property for a query, the query component automatically closes
and unprepares the query.
To improve query performance:
• Set the TIBQuery component’s UniDirectional property to True if you do not need to navigate backward
through a result set (SQL-92 does not, itself, permit backward navigation through a result set).
• Prepare the query before execution. This is especially helpful when you plan to execute a single query
several times. You need only prepare the query once, before its first use. For more information about
query preparation, see Preparing a query.
UniDirectional is False by default, meaning that the cursor for a result set can navigate both forward and
backward through its records. Bi-directional cursor support requires some additional processing overhead,
and can slow some queries. To improve query performance, you may be able to set UniDirectional to
True, restricting a cursor to forward movement through a result set.
If you do not need to be able to navigate backward through a result set, you can set UniDirectional to
True for a query. Set UniDirectional before preparing and executing a query. The following code illustrates
setting UniDirectional prior to preparing and executing a query:
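A sketch, assuming a query component named CustomerQuery:
CustomerQuery.Close;
CustomerQuery.UniDirectional := True; { forward-only cursor }
CustomerQuery.Prepare;
CustomerQuery.Open;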
Applications can update data returned in a read-only result set if they are using cached updates. To update
a read-only result set associated with a query component:
1. Add a TIBUpdateSQL component to the data module in your application; this provides the ability to
post updates to a read-only dataset.
2. Enter the SQL update statement for the result set to the ModifySQL, InsertSQL, or DeleteSQL properties
of the update component. To do this more easily, right-click on the TIBUpdateSQL component to access
the UpdateSQL Editor.
NOTE
A table component always references a single database table. If you need to access multiple tables with a single com-
ponent, or if you are only interested in a subset of rows and columns in one or more tables, you should use a TIBQuery
or TIBDataSet component instead of a TIBTable component. For more information about TIBQuery and TIBDataSet
components, see Working with Queries.
1. Place a table component from the InterBase page of the Tool Palette in a data module or on a form,
and set its Name property to a unique value appropriate to your application.
2. Set the Database property to the name of the database component to access.
3. Set the Transaction property to the name of the transaction component.
4. Set the DatabaseName property in the Database component to the name of the database containing
the table.
5. Set the TableName property to the name of the table in the database. You can select tables from the
drop-down list if the Database and Transaction properties are already specified, and if the Database
and Transaction components are connected to the server.
6. Place a data source component in the data module or on the form, and set its DataSet property to
the name of the table component. The data source component is used to pass a result set from the
table to data-aware components for display.
1. Place a data source component from the Data Access page of the Tool Palette in the data module or
form, and set its DataSet property to the name of the table component.
2. Place a data-aware control, such as TDBGrid, on a form, and set the control’s DataSource property to
the name of the data source component placed in the previous step.
NOTE
You can use the Database Editor to set the database location, login name, password, SQL role, and switch the login
prompt on and off. To access the Database Component Editor, right click on the database component and choose
Database Editor from the drop-down menu.
3. Set the TableName property to the table to access. You are prompted to log in to the database. At
design time you can choose from valid table names in the drop-down list for the TableName property
in the Object Inspector. At runtime, you must specify a valid name in code.
Once you specify a valid table name, you can set the table component’s Active property to True to connect
to the database, open the table, and display and edit data.
At runtime, you can set or change the table associated with a table component by:
For example, the following code changes the table name for the OrderOrCustTable table component based
on its current table name:
with OrderOrCustTable do
begin
Active := False; {Close the table}
if TableName = 'CUSTOMER.DB' then
TableName := 'ORDERS.DB'
else
TableName := 'CUSTOMER.DB';
Active := True; {Reopen with a new table}
end;
To end display and editing of data, or to change the values for a table component’s fundamental properties
(for example: Database, TableName, and TableType), first post or discard any pending changes. If cached
updates are enabled, call the ApplyUpdates method to write the posted changes to the database. Finally,
close the table.
There are two ways to close a table. You can set its Active property to False, or you can call its Close
method. Closing a table puts the table into dsInactive state. Active controls associated with the table’s
data source are cleared.
The ReadOnly property for table components is the only property that can affect an application’s read
and write access to a table.
ReadOnly determines whether or not a user can both view and edit data. When ReadOnly is False (the
default), a user can both view and edit data. To restrict a user to viewing data, set ReadOnly to True before
opening a table.
You can search for records in a dataset with the Locate and Lookup methods:
• Locate finds the first row matching a specified set of criteria and moves the cursor to that row.
• Lookup returns values from the first row that matches a specified set of criteria, but does not move
the cursor to that row.
You can use Locate and Lookup with any kind of dataset, not just TIBTable. For a complete discussion of
Locate and Lookup, see Understanding Datasets.
5. Sorting records
An index determines the display order of records in a table. In general, records appear in ascending order
based on a primary index. This default behavior does not require application intervention. If you want a
different sort order, however, you must specify either
• An alternate index.
• A list of columns on which to sort.
For example, the following code fragment retrieves the names of the available indexes for CustomersTable
and then selects one by name:
var
IndexList: TStringList;
{...}
CustomersTable.GetIndexNames(IndexList); { IndexList must be created before this call }
CustomersTable.IndexName := 'CustDescending';
IndexName and IndexFieldNames are mutually exclusive. Setting one property clears values set for the other.
The following code sets the sort order for PhoneTable based on LastName, then FirstName:
PhoneTable.IndexFieldNames := 'LastName;FirstName';
IndexFields is a string list containing the column names for the index. The following code fragment illus-
trates how you might use IndexFieldCount and IndexFields to iterate through a list of column names in
an application:
var
I: Integer;
ListOfIndexFields: array[0..20] of string;
begin
with CustomersTable do
begin
for I := 0 to IndexFieldCount - 1 do
ListOfIndexFields[I] := IndexFields[I];
end;
end;
8. Emptying a table
To delete all records from a table at runtime, call the table component’s EmptyTable method. For example:
PhoneTable.EmptyTable;
IMPORTANT
When you empty a table with EmptyTable, the records you delete are gone forever.
9. Deleting a table
At design time, to delete a table from a database, right-click the table component and select Delete Table
from the context menu. The Delete Table menu pick will only be present if the table component represents
an existing database table (the Database and TableName properties specify an existing table).
To delete a table at runtime, call the table component’s DeleteTable method. For example, the following
statement removes the table underlying a dataset:
CustomersTable.DeleteTable;
IMPORTANT
When you delete a table with DeleteTable, the table and all its data are gone forever.
10. Renaming a table
You can rename a table by typing over the name of an existing table next to the TableName property in the
Object Inspector. When you change the TableName property, a dialog appears asking you if you want to
rename the table. At this point, you can either choose to rename the table, or you can cancel the operation,
changing the TableName property (for example, to create a new table) without changing the name of the
table represented by the old value of TableName.
11. Creating a table
You can create new database tables at design time or at runtime. The Create Table command (at design
time) or the CreateTable method (at runtime) provides a way to create tables without requiring SQL knowl-
edge. They do, however, require you to be intimately familiar with the properties, events, and methods
common to dataset components, TIBTable in particular. This is so that you can first define the table you
want to create by doing the following:
• Set the Database property to the database that will contain the new table.
• Set the TableName property to the name of the new table.
• Add field definitions to describe the fields in the new table. At design time, you can add the field
definitions by double-clicking the FieldDefs property in the Object Inspector to bring up the collection
editor. Use the collection editor to add, remove, or change the properties of the field definitions. At
runtime, clear any existing field definitions and then use the AddFieldDef method to add each new
field definition. For each new field definition, set the properties of the TFieldDef object to specify the
desired attributes of the field.
• Optionally, add index definitions that describe the desired indexes of the new table. At design time,
you can add index definitions by double-clicking the IndexDefs property in the Object Inspector to
bring up the collection editor. Use the collection editor to add, remove, or change the properties of
the index definitions. At runtime, clear any existing index definitions, and then use the AddIndexDef
method to add each new index definition. For each new index definition, set the properties of the
TIndexDef object to specify the desired attributes of the index.
NOTE
At design time, you can preload the field definitions and index definitions of an existing table into the FieldDefs and
IndexDefs properties, respectively. Set the Database and TableName properties to specify the existing table. Right
click the table component and choose Update Table Definition. This automatically sets the values of the FieldDefs and
IndexDefs properties to describe the fields and indexes of the existing table. Next, reset the Database and TableName
to specify the table you want to create, cancelling any prompts to rename the existing table. If you want to store these
definitions with the table component (for example, if your application will be using them to create tables on user’s
systems), set the StoreDefs property to True.
Once the table is fully described, you are ready to create it. At design time, right-click the table component
and choose Create Table. At runtime, call the CreateTable method to generate the specified table.
IMPORTANT
If you create a table that duplicates the name of an existing table, the existing table and all its data are overwritten by
the newly created table. The old table and its data cannot be recovered.
The following code creates a new table at runtime and associates it with the employee.gdb database. Before
it creates the new table, it verifies that the table name provided does not match the name of an existing
table:
var
NewTable: TIBTable;
NewIndexOptions: TIndexOptions;
TableFound: Boolean;
begin
NewTable := TIBTable.Create(Self); { owned by the form or data module }
NewIndexOptions := [ixPrimary, ixUnique];
with NewTable do
begin
Active := False;
Database := EmployeeDatabase; { a TIBDatabase component (name assumed) connected to employee.gdb }
TableName := Edit1.Text;
TableType := ttDefault;
FieldDefs.Clear;
FieldDefs.Add(Edit2.Text, ftInteger, 0, False);
FieldDefs.Add(Edit3.Text, ftInteger, 0, False);
IndexDefs.Clear;
IndexDefs.Add('PrimaryIndex', Edit2.Text, NewIndexOptions);
end;
{Now check for prior existence of this table}
TableFound := FindTable(Edit1.Text); {code for FindTable not shown}
if TableFound then
if MessageDlg('Overwrite existing table ' + Edit1.Text + '?',
mtConfirmation, [mbYes, mbNo], 0) = mrYes then
TableFound := False;
if not TableFound then
NewTable.CreateTable; { create the table }
end;
You can force the current record for each of these table components to be the same with the GotoCurrent
method. GotoCurrent sets its own table’s current record to the current record of another table component.
For example, the following code sets the current record of CustomerTableOne to be the same as the current
record of CustomerTableTwo:
CustomerTableOne.GotoCurrent(CustomerTableTwo);
TIP
If your application needs to synchronize table components in this manner, put the components in a data module and
include the header for the data module in each unit that accesses the tables.
If you must synchronize table components on separate forms, you must include one form’s header file in
the source unit of the other form, and you must qualify at least one of the table names with its form name.
For example:
CustomerTableOne.GotoCurrent(Form2.CustomerTableTwo);
The MasterSource property is used to specify a data source from which the table will get data for the
master table. For instance, if you link two tables in a master/detail relationship, then the detail table can
track the events occurring in the master table by specifying the master table’s data source component
in this property.
The MasterFields property specifies the column(s) common to both tables used to establish the link. To
link tables based on multiple column names, use a semicolon delimited list:
Table1.MasterFields := 'OrderNo;ItemNo';
To help create meaningful links between two tables, you can use the Field Link Designer. For more infor-
mation about the Field Link Designer, see the Delphi Developer's Guide.
1. To create a VCL project, select File > New > VCL Forms Application - Delphi.
2. Add a data module to the project:
In the Project Manager, right-click the project and select Add New... > Other... > Data Module.
3. Drop two of each of these components in the data module: TIBTable, TIBDatabase, TIBTransaction,
and TDataSource.
4. Set the Name of the first TIBDatabase component to CustDatabase.
5. Double-click the CustDatabase component to open the Database Component Editor:
a. Set the Database parameter to:
C:\Users\Public\Documents\Embarcadero\Studio\17.0\Samples\Data\employee.gdb (location of the
database)
b. Set the User Name to sysdba.
c. Set the Password to masterkey.
d. Uncheck the Login Prompt box.
If you run the application now, you will see that the tables are linked together, and that when you move
to a new record in the CUSTOMER table, you see only those records in the SALES table that belong to
the current customer.
InterBase servers return all data (datasets and individual pieces of information) exclusively with output
parameters.
In InterBase Express applications, access to stored procedures is provided by the TIBStoredProc and TIB-
Query components. The choice of which to use for the access is predicated on how the stored procedure
is coded, how data is returned (if any), and the database system used. The TIBStoredProc and TIBQuery
components are both descendants of TIBCustomDataSet and inherit behaviors from TIBCustomDataSet. For
more information about TIBCustomDataSet, see Understanding Datasets.
A stored procedure component is used to execute stored procedures that do not return any data, to
retrieve individual pieces of information in the form of output parameters, and to relay a returned dataset
to an associated data source component. The stored procedure component allows values to be passed
to and return from the stored procedure through parameters, each parameter defined in the Params
property. The stored procedure component is the preferred means for using stored procedures that either
do not return any data or only return data through output parameters.
A query component is primarily used to run InterBase stored procedures that only return datasets via
output parameters. The query component can also be used to execute a stored procedure that does not
return a dataset or output parameter values.
Use parameters to pass distinct values to or return values from a stored procedure. Input parameter val-
ues are used in such places as the WHERE clause of a SELECT statement in a stored procedure. An output
parameter allows a stored procedure to pass a single value to the calling application. Some stored proce-
dures return a result parameter. See “Input parameters” and “Output parameters” in the “Procedures and
Triggers” chapter of the Language Reference Guide, and “Working with Stored Procedures” in the Data
Definition Guide for more information.
Using a stored procedure to do the work on the server offers these advantages:
• Taking advantage of the server’s usually greater processing power and speed.
• Reducing the amount of network traffic since the processing takes place on the server where the data
resides.
For example, consider an application that needs to compute a single value: the standard deviation of values
over a large number of records. To perform this function in your application, all the values used in the
computation must be fetched from the server, resulting in increased network traffic. Then your application
must perform the computation. Because all you want in your application is the end result—a single value
representing the standard deviation—it would be far more efficient for a stored procedure on the server to
read the data stored there, perform the calculation, and pass your application the single value it requires.
See “Working with Stored Procedures” in the Data Definition Guide for more information.
In general, to use a stored procedure in an application:
1. Instantiate a TIBStoredProc component and optionally associate it with a stored procedure on the
server. Or instantiate a TIBQuery component and compose the contents of its SQL property to perform
either a SELECT query against the stored procedure or an EXECUTE command, depending on whether
the stored procedure returns a result set. For more information about creating a TIBStoredProc, see
Creating a Stored Procedure Component. For more information about creating a TIBQuery compo-
nent, see Working with Queries.
2. Provide input parameter values to the stored procedure component, if necessary. When a stored
procedure component is not associated with stored procedure on a server, you must provide addi-
tional input parameter information, such as parameter names and data types. For more information
about providing input parameter information, see Setting Parameter Information at Design Time.
3. Execute the stored procedure.
4. Process any result and output parameters. As with any other dataset component, you can also ex-
amine the result dataset returned from the server. For more information about output and result
parameters, see Using Output Parameters and Using the Result Parameter. For information about
viewing records in a dataset, see Using Stored Procedures that Return Result Sets.
To create a stored procedure component:
1. Place stored procedure, database, and transaction components from the InterBase page of the Tool
Palette in a data module.
2. Set the Database and Transaction properties of the stored procedure component to the names of
the database and transaction components.
3. Set the DatabaseName property in the Database component.
Normally you should specify the DatabaseName property, but if the server database against which your
application runs is currently unavailable, you can still create and set up a stored procedure component
by omitting the DatabaseName and supplying a stored procedure name and input, output, and result
parameters at design time. For more information about input parameters, see Using Input Parameters. For
more information about output parameters, see Using Output Parameters. For more information about
result parameters, see Using the Result Parameter.
4. Optionally set the StoredProcName property to the name of the stored procedure to use. If you pro-
vided a value for the Database property, and the Database component is connected to the database,
then you can select a stored procedure name from the drop-down list for the property. A single
TIBStoredProc component can be used to execute any number of stored procedures by setting the
StoredProcName property to a valid name in the application. It may not always be desirable to set the
StoredProcName at design time.
5. Double-click the Params property value box to invoke the StoredProc Parameters editor to examine
input and output parameters for the stored procedure. If you did not specify a name for the stored
procedure in Step 4, or you specified a name for the stored procedure that does not exist on the
server specified in the DatabaseName property in Step 3, then when you invoke the parameters editor,
it is empty.
See “Working with Stored Procedures” in the Data Definition Guide for more information.
NOTE
If you do not specify the Database property in Step 2, then you must use the StoredProc Parameters editor to set up
parameters at design time. For information about setting parameters at design time, see Setting Parameter Information
at Design Time.
A stored procedure can be created by an application at runtime using a SQL statement issued from a
TIBQuery component, typically with a CREATE PROCEDURE statement. If parameters are used in the stored
procedure, set the ParamCheck property of the TIBQuery to False. This prevents the TIBQuery from mistaking
the parameter in the new stored procedure from a parameter for the TIBQuery itself.
NOTE
You can also use the SQL Explorer to examine, edit, and create stored procedures on the server.
After the SQL property has been populated with the statement to create the stored procedure, execute
it by invoking the ExecSQL method.
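A sketch of creating a procedure this way (the procedure body is illustrative, and IBQuery1 is assumed to
be connected to a database and an active transaction):
IBQuery1.ParamCheck := False; { so :EMP_NO below is not mistaken for a query parameter }
IBQuery1.SQL.Text :=
'CREATE PROCEDURE GET_EMP_NAME (EMP_NO SMALLINT) ' +
'RETURNS (FIRST_NAME VARCHAR(15)) AS ' +
'BEGIN ' +
'  SELECT FIRST_NAME FROM EMPLOYEE WHERE EMP_NO = :EMP_NO INTO :FIRST_NAME; ' +
'END';
IBQuery1.ExecSQL;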
For example, the following code prepares a stored procedure for execution:
IBStoredProc1.Prepare;
NOTE
If your application changes parameter information at runtime, you should prepare the procedure again.
To execute a prepared stored procedure, call the ExecProc method for the stored procedure component.
The following code prepares and executes a stored procedure:
IBStoredProc1.Params[0].AsString := Edit1.Text;
IBStoredProc1.Prepare;
IBStoredProc1.ExecProc;
NOTE
If you attempt to execute a stored procedure before preparing it, the stored procedure component automatically pre-
pares it for you, and then unprepares it after it executes. If you plan to execute a stored procedure a number of times,
it is more efficient to call Prepare yourself, and then only call UnPrepare once, when you no longer need to execute
the procedure.
When you execute a stored procedure, it can return all or some of these items:
• A dataset consisting of one or more records that can be viewed in data-aware controls associated
with the stored procedure through a data source component.
• Output parameters.
• A result parameter that contains status information about the stored procedure’s execution.
To retrieve a result set from a stored procedure with a TIBQuery component:
1. Place a TIBQuery component in a data module and set its Database and Transaction properties.
2. In the SQL property, write a SELECT statement that names the stored procedure in its FROM clause.
3. If the stored procedure requires input parameters, express the parameter values as a comma-separated
list, enclosed in parentheses, following the procedure name.
4. Set the Active property to True or invoke the Open method.
For example, the InterBase stored procedure GET_EMP_PROJ, below, accepts a value using the input pa-
rameter EMP_NO and returns a dataset through the output parameter PROJ_ID.
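For reference, a sketch of the procedure as defined in the example employee.gdb database:
CREATE PROCEDURE GET_EMP_PROJ (EMP_NO SMALLINT)
RETURNS (PROJ_ID CHAR(5)) AS
BEGIN
FOR SELECT PROJ_ID FROM EMPLOYEE_PROJECT
WHERE EMP_NO = :EMP_NO
INTO :PROJ_ID
DO
SUSPEND;
END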
The SQL statement issued from a TIBQuery to use this stored procedure would be:
SELECT *
FROM GET_EMP_PROJ(52)
For example, the InterBase stored procedure GET_HIGH_EMP_NAME, below, retrieves the alphabetically
last value in the LAST_NAME column of a table named EMPLOYEE. The stored procedure returns this value
in the output parameter High_Last_Name.
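A sketch of such a procedure (an illustrative reconstruction):
CREATE PROCEDURE GET_HIGH_EMP_NAME
RETURNS (High_Last_Name CHAR(15)) AS
BEGIN
SELECT MAX(LAST_NAME) FROM EMPLOYEE
INTO :High_Last_Name;
SUSPEND;
END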
The SQL statement issued from a TIBQuery to use this stored procedure would be:
SELECT High_Last_Name
FROM GET_HIGH_EMP_NAME
For example, the InterBase stored procedure GET_HIGH_EMP_NAME, shown above, can also be executed
with a stored procedure component. The Delphi code to get the value in the High_Last_Name output
parameter and store it to the Text property of a TEdit component might be (a sketch; IBStoredProc1 is
an assumed component name):
with IBStoredProc1 do
begin
StoredProcName := 'GET_HIGH_EMP_NAME';
Prepare;
ExecProc;
Edit1.Text := ParamByName('High_Last_Name').AsString;
end;
For example, the InterBase stored procedure ADD_EMP_PROJ, below, adds a new row to the table EM-
PLOYEE_PROJECT. No dataset is returned and no individual values are returned in output parameters.
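For reference, a sketch of the procedure as defined in the example employee.gdb database (exception
handling omitted):
CREATE PROCEDURE ADD_EMP_PROJ (EMP_NO SMALLINT, PROJ_ID CHAR(5)) AS
BEGIN
INSERT INTO EMPLOYEE_PROJECT (EMP_NO, PROJ_ID)
VALUES (:EMP_NO, :PROJ_ID);
END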
The SQL statement issued from a TIBQuery to execute this stored procedure would be:
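An illustrative call (the parameter values are assumptions):
EXECUTE PROCEDURE ADD_EMP_PROJ(20, 'GUIDE')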
3. If the stored procedure requires input parameters, supply values for the parameters using the Params
property or ParamByName method.
4. Invoke the ExecProc method.
For example, the InterBase stored procedure ADD_EMP_PROJ, shown earlier, adds a new row to the table
EMPLOYEE_PROJECT. No dataset is returned, and no individual values are returned in output parameters;
the sketch below illustrates executing it.
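A sketch using a TIBStoredProc component named StoredProc1 (the parameter values are illustrative):
with StoredProc1 do
begin
StoredProcName := 'ADD_EMP_PROJ';
ParamByName('EMP_NO').AsSmallInt := 20;
ParamByName('PROJ_ID').AsString := 'GUIDE';
Prepare;
ExecProc;
end;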
Whether a stored procedure uses a particular type of parameter depends both on the general language
implementation of stored procedures on your database server and on a specific instance of a stored
procedure. For example, individual stored procedures on any server may either be implemented using
input parameters, or may not be. On the other hand, some uses of parameters are server-specific. For
example, the InterBase implementation of a stored procedure never returns a result parameter.
Access to stored procedure parameters is provided by TParam objects in the TIBStoredProc.Params prop-
erty. If the name of the stored procedure is specified at design time in the StoredProcName property, a
TParam object is automatically created for each parameter and added to the Params property. If the stored
procedure name is not specified until runtime, the TParam objects need to be programmatically created at
that time. Not specifying the stored procedure and manually creating the TParam objects allows a single
TIBStoredProc component to be used with any number of available stored procedures.
NOTE
Some stored procedures return a dataset in addition to output and result parameters. Applications can display dataset
records in data-aware controls, but must separately process output and result parameters. For more information about
displaying records in data-aware controls, see Using Stored Procedures that Return Result Sets.
If a stored procedure returns a dataset and is used through a SELECT query in a TIBQuery component,
supply input parameter values as a comma-separated list, enclosed in parentheses, following the stored
procedure name. For example, the SQL statement below retrieves data from a stored procedure named
GET_EMP_PROJ and supplies an input parameter value of 52.
SELECT PROJ_ID
FROM GET_EMP_PROJ(52)
If a stored procedure is executed with a TIBStoredProc component, use the Params property or the Param-
ByName method to set each input parameter. Use the TParam property appropriate for the data type
of the parameter, such as the TParam.AsString property for a CHAR type parameter. Set input parameter
values prior to executing or activating the TIBStoredProc component. In the example below, the EMP_NO
parameter (type SMALLINT) for the stored procedure GET_EMP_PROJ is assigned the value 52.
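For example (IBStoredProc1 is assumed to reference GET_EMP_PROJ):
IBStoredProc1.ParamByName('EMP_NO').AsSmallInt := 52;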
Most stored procedures return one or more output parameters. Output parameters may represent the
sole return values for a stored procedure that does not also return a dataset, they may represent one
set of values returned by a procedure that also returns a dataset, or they may represent values that have
no direct correspondence to an individual record in the dataset returned by the stored procedure. Each
server’s implementation of stored procedures differs in this regard.
Consider an Oracle stored procedure with an input/output parameter named IN_OUTVAR. In the Delphi
code below, IN_OUTVAR is assigned an input value, the stored procedure is executed, and then the output
value in IN_OUTVAR is inspected and stored to a memory variable.
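A sketch of such code (the parameter data type and values are assumptions):
StoredProc1.ParamByName('IN_OUTVAR').AsInteger := 103; { assign the input value }
StoredProc1.ExecProc; { execute the stored procedure }
IntegerVar := StoredProc1.ParamByName('IN_OUTVAR').AsInteger; { read the output value }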
Similarly, a single statement can retrieve the value of an output parameter into a variable:
DateVar := StoredProc1.ParamByName('MyOutputParam').AsDate;
At design time, the parameter collection editor lists each parameter, and you can set the values for the
input parameters to pass to the server when you execute the stored procedure.
IMPORTANT
Do not change the names or data types for input parameters reported by the server; otherwise, an exception is raised
when you execute the stored procedure.
Some servers do not report parameter names or data types. In these cases, use the SQL Explorer or
IBConsole to look at the source code of the stored procedure on the server to determine input parameters
and data types. See the SQL Explorer online help for more information.
At design time, if you do not receive a parameter list from a stored procedure on a remote server (for
example because you are not connected to a server), then you must invoke the StoredProc Parameters
editor, list each required input parameter, and assign each a data type and a value. For more information
about using the StoredProc Parameters editor to create parameters, see Setting Parameter Information
at Design Time.
The parameter collection editor allows you to set up stored procedure parameters. If you set the Database
and StoredProcName properties of the TIBStoredProc component at design time, all existing parameters are
listed in the collection editor. If you do not set both of these properties, no parameters are listed and you
must add them manually. Additionally, some database systems do not return all parameter information, such as
parameter types. For these database systems, use the SQL Explorer utility to inspect the stored procedures, determine
types, and then configure parameters through the collection editor and the Object Inspector. The steps
to set up stored procedure parameters at design time are:
1. Select the stored procedure component and set its Database and Transaction properties.
2. Set the StoredProcName property to the name of the stored procedure.
3. Double-click the Params property value box to open the parameter collection editor.
4. Select a parameter in the editor; the Object Inspector displays its properties and events.
5. Set the ParamType property to the parameter’s type (input, output, input/output, or result).
6. If a data type is not automatically specified for the DataType property, select a data type from the
property’s drop-down list.
7. Use the Value property to optionally specify a starting value for an input or input/output parameter.
Right-clicking in the parameter collection editor invokes a context menu for operating on parameter def-
initions. Depending on whether any parameters are listed or selected, enabled options include: adding
new parameters, deleting existing parameters, moving parameters up and down in the list, and selecting
all listed parameters.
You can edit the definition for any TParam you add, but the attributes of the TParam objects you add must
match the attributes of the parameters for the stored procedure on the server. To edit the TParam for a
parameter, select it in the parameter collection editor and edit its property values in the Object Inspector.
NOTE
You can never set values for output and result parameters. These types of parameters have values set by the execution
of the stored procedure.
For example, the InterBase stored procedure GET_EMP_PROJ, shown earlier, requires one input parameter
(EMP_NO) and one output parameter (PROJ_ID).
The Delphi code to associate this stored procedure with a TIBStoredProc named StoredProc1 and create
TParam objects for the two parameters using the TParam.Create method is:
var
P1, P2: TParam;
begin
{...}
with StoredProc1 do begin
StoredProcName := 'GET_EMP_PROJ';
Params.Clear;
P1 := TParam.Create(Params, ptInput);
P2 := TParam.Create(Params, ptOutput);
try
Params[0].Name := 'EMP_NO';
Params[1].Name := 'PROJ_ID';
ParamByName('EMP_NO').AsSmallInt := 52;
ExecProc;
Edit1.Text := ParamByName('PROJ_ID').AsString;
finally
P1.Free;
P2.Free;
end;
end;
{...}
end;
• Invoke the SQL Explorer to view the source code for a stored procedure on a remote server. The source
code includes parameter declarations that identify the data types and names for each parameter.
• Use the Object Inspector to view the property settings for individual TParam objects.
You can use the SQL Explorer to examine stored procedures on your database servers. If you are using
ODBC drivers, you cannot examine stored procedures with the SQL Explorer. While using the SQL Explorer
is not always an option, it can sometimes provide more information than the Object Inspector viewing
TParam objects. The amount of returned information about a stored procedure in the Object Inspector
depends on your database server.
For some servers, some or all parameter information may not be accessible.
In the Object Inspector, when viewing individual TParam objects, the ParamType property indicates whether
the selected parameter is an input, output, input/output, or result parameter. The DataType property indi-
cates the data type of the value the parameter contains, such as string, integer, or date. The Value edit box
enables you to enter a value for a selected input parameter.
For more about setting parameter values, see Setting Parameter Information at Design Time.
NOTE
You can never set values for output and result parameters. These types of parameters have values set by the execution
of the stored procedure.
Use the TIBSQLMonitor component to watch dynamic SQL taking place in all InterBase data access appli-
cations both before and after they have been compiled.
SQL monitoring involves a bit of overhead, so enable it only while you are actively tracing SQL activity.
To display the monitored statements, write a handler for the monitor’s OnSQL event; for example, the
following handler (a sketch; the component name and handler signature shown are assumptions) adds
each traced statement to a memo:
procedure TForm1.IBSQLMonitor1SQL(EventText: String; EventTime: TDateTime);
begin
Memo1.Lines.Add(EventText);
end;
You can now start another IBX application and monitor the code.
1. Installing
TIBInstall and its ancestor, TIBSetup, provide properties that allow you to build an InterBase install compo-
nent into your application. TIBInstall allows you to set the installation source and destination, display your
own installation messages, and set the individual InterBase components to be installed.
The following sections describe how to set up an installation application, select the installation options, set
the source and destination installation directories, and track the installation progress. Once the installation
component is set up, you execute it using the InstallExecute method.
TIBInstall properties
Property Purpose
DestinationDirectory Sets or returns the installation target path; if not set, defaults to what is in the Windows reg-
istry.
InstallOptions Sets what InterBase components are to be installed; see below.
MsgFilePath Sets or returns the directory path where the ibinstall.msg file can be found.
Progress Returns an integer from 0 to 100 indicating the percentage of installation completed; if unset,
no progress is displayed.
RebootToComplete If set to True, returns a message instructing the user to reboot after installation is complete.
SourceDirectory Sets or returns the path of the installation source files; in most cases, this will be a path on the
InterBase CD.
UnInstallFile Returns the name and path of the uninstall file, which contains information on the installed op-
tions.
TIBInstall options
Option Installs:
CmdLineTools the InterBase command line tools, including isql, gbak, and gsec.
ConnectivityClients the InterBase connectivity clients, including ODBC, OLE DB, and JDBC.
Examples the InterBase database and API examples.
MainComponents the main InterBase components, including the client, server, documentation, GUI tools, and de-
velopment tools.
The following code snippet shows how you could set the installation source and destination directories
and then validate them by calling the InstallCheck method:
try
IBInstall1.SourceDirectory := SrcDir.Directory;
IBInstall1.DestinationDirectory := DestDir.Directory;
IBInstall1.InstallCheck;
except
on E:EIBInstallError do
begin
Application.MessageBox(PChar(E.Message), PChar('Precheck Error'), MB_OK);
Label1.Caption := '';
Cancel.Visible := False;
Execute.Visible := True;
ProgressBar1.Visible := False;
Exit;
end;
end;
The following code shows how you might perform an uninstall: point the uninstall component at the
uninstall file written during installation, call UnInstallCheck to validate it, and then call UnInstallExecute:
IBUninstall1.UnInstallFile := 'C:\Program Files\InterBase Corp\InterBase\ibuninst.000';
bUninstall.Visible := False;
ProgressBar1.Visible := True;
try
IBUninstall1.UnInstallCheck;
except
on E:EIBInstallError do
begin
Application.MessageBox(PChar(E.Message), PChar('Precheck Error'), MB_OK);
Label1.Caption := '';
bUninstall.Visible := True;
ProgressBar1.Visible := False;
Exit;
end;
end;
try
IBUninstall1.UnInstallExecute;
except
on E:EIBInstallError do
begin
Application.MessageBox(PChar(E.Message), PChar( 'Install Error'), MB_OK );
Label1.Caption := '';
bUninstall.Visible := True;
ProgressBar1.Visible := False;
Exit;
end;
end;