OPENEDGE Managing Performance
The potential for improving performance depends on the type of system you run Progress on.
Some options might not be available on your hardware or operating system platform. This
chapter discusses options for managing database performance.
Performance is diminished if the database cannot use system resources such as CPU, memory, and
disks efficiently. Performance bottlenecks occur when a resource performs inadequately (or is overloaded) and prevents other
resources from accomplishing work. The key to improving performance is determining which
resource is creating a bottleneck. Once you understand your resource limitation, you can take
steps to eliminate bottlenecks.
Performance management is a continual process of measuring and adjusting resource use.
Because system resources depend on each other in complex relationships, you might fix one
problem only to create another. You should measure resource use regularly and adjust as
required.
To effectively manage performance, you must have solid knowledge of your system, users, and
applications. Because system and application performance can vary greatly depending on the
configuration, treat the guidelines in this chapter as starting points and measure their effect on
your own system.
PROMON Utility
The Progress Monitor (PROMON) utility helps you monitor database activity and performance.
NT Performance Monitor
The NT Performance Monitor is a graphical tool, supplied with Windows NT, that lets you
monitor the performance of a local or remote Windows NT workstation. The Performance
Monitor measures the performance of workstation objects like processes and memory using a
defined set of counters. The Progress Version 9 database provides a comprehensive set of
counters ranging from measurements about the number of clients running to the number of
database records read or written. These counters are derived from the PROMON utility,
Summary of Activity option, and they report on the state and performance of a particular
database. Progress database support for the native NT performance monitor does not replace the
PROMON utility; it simply provides another mechanism that system administrators can use to
monitor performance-related data.
Increasing the BI Cluster Size
The BI file is organized into clusters on disk. As the database engine writes data to the BI file,
these clusters fill up. When a cluster fills, the engine must ensure that all modified database
buffer blocks referenced by notes in that cluster are written to disk. This is known as a
checkpoint. Checkpointing reduces recovery time and lets the engine reuse BI disk space.
Raising the BI cluster size increases the interval between checkpoints.
Raising the BI cluster size can reduce the I/O overhead of writing modified database buffers to
disk. It also lets you defer writes and collect more changes before writing a block; this lets you
write multiple changes with the same write.
Larger cluster sizes generally increase performance. However, they also have significant
drawbacks:
Increased disk space usage for the BI file.
Longer crash recovery periods.
Longer checkpoint times. (Run APWs to eliminate this drawback.)
For size, specify the new cluster size in kilobytes. The number must be a multiple of 16
in the range 16 to 262128 (16K–256MB). The default cluster size is 512K. Cluster sizes
from 512 to 16384 are common.
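As a sketch, the cluster size is changed with the PROUTIL TRUNCATE BI command; the database name `mydb` and the 1024 KB value below are placeholders, and the database must be offline, so the command is only printed here:

```shell
# Sketch only: change the BI cluster size to 1 MB (1024 KB).
# "mydb" is a placeholder database name; the database must be offline,
# so the command is printed here rather than executed.
db=mydb
cluster_kb=1024   # must be a multiple of 16, in the range 16-262128
echo "proutil $db -C truncate bi -bi $cluster_kb"
```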
The BI clusters typically grow to their optimal number over time. You can calculate the current
number of BI clusters for a database by dividing the BI physical file size by the BI cluster size.
For example, a database BI file with a BI cluster size of 128K and a physical size of 917,504 bytes
has 7 BI clusters.
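The cluster count in the example above can be checked with simple arithmetic:

```shell
# A 917,504-byte BI file with a 128 KB (131,072-byte) cluster size:
bi_bytes=917504
cluster_bytes=$((128 * 1024))
echo $((bi_bytes / cluster_bytes))   # prints 7
```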
Whenever the BI file is truncated, you should consider growing the number of BI clusters to its
optimal size before restarting the database, thus preventing the database engine from adding
clusters on an as-needed basis. The BI file is truncated:
Automatically by the database engine when you start after-imaging (RFUTIL AIMAGE
BEGIN)
Automatically by the database engine when you perform an index rebuild (PROUTIL
IDXBUILD)
Manually (PROUTIL TRUNCATE BI)
Follow this step to increase the number of BI clusters:
Enter the following command:
For n, specify the number of BI clusters that you want to create for the specified database.
Progress creates four BI clusters by default. If you specify a BIGROW value of 9, Progress
creates an additional 9 BI file clusters for a total of 13 clusters.
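As a sketch (with a placeholder database name), the BIGROW command and the resulting cluster count look like this:

```shell
# Sketch: add 9 BI clusters to a freshly truncated BI file.
# "mydb" is a placeholder database name; printed rather than executed.
db=mydb
echo "proutil $db -C bigrow 9"
# Progress creates 4 clusters by default, so the total becomes:
echo $((4 + 9))
```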
For size, specify the new BI block size in kilobytes. Valid values are 0, 1, 2, 4, 8, and 16.
If you have a single AI file and after-imaging is enabled when you enter this command,
you must use the After-image Filename (-a) parameter to specify the AI filename.
You can also change the BI cluster size with this command.
During a database quiet processing point, all file write activity to the database is stopped.
Any processes that attempt to start a transaction while the quiet point is enabled must wait
until you disable the database quiet processing point.
db-name
Specifies the name of the database for which you want to adjust the BI threshold.
n
Specifies the new value for the threshold.
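As a sketch of the documented PROQUIET syntax (the database name and threshold value are placeholders; verify the exact syntax for your OpenEdge release), the threshold is raised during a quiet point like this:

```shell
# Sketch: raise the BI threshold to 1000 MB during a quiet point.
# "mydb" and the value 1000 are placeholders; printed rather than executed.
db=mydb
echo "proquiet $db bithreshold 1000"
```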
Truncate the BI file to bring the database and BI files up to date and eliminate any need
for database recovery. To do this, enter the following command:
Typically, if you change the AI block size, you should also change the BI block size. If
you have not already, you might want to use this command to do so.
For size, specify the size of the AI read and write block in kilobytes. The minimum value
allowed is the size of the database block. Valid values are 0, 1, 2, 4, 8, and 16. If you
specify 0, Progress uses the default size (8K) for your operating system platform.
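The AI block size is changed with the RFUTIL AIMAGE TRUNCATE command; as a sketch (placeholder database name, printed rather than executed):

```shell
# Sketch: truncate the AI file and set a 16 KB AI block size.
# "mydb" is a placeholder; add the -a parameter to name the AI file
# if after-imaging is enabled with a single AI file.
db=mydb
echo "rfutil $db -C aimage truncate -aiblocksize 16"
```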
Database Fragmentation
Over time, as records are deleted from a database and new records are added, gaps can occur on
the disk where the data is stored. This fragmentation can cause inefficient disk space usage and
poor performance with sequential reads. You can eliminate fragmentation by dumping and
reloading the database.
You can run PROUTIL with the TABANALYS qualifier while the database is in use; however,
PROUTIL generates only approximate information.
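As a sketch, the TABANALYS report is produced like this (placeholder database name, printed rather than executed):

```shell
# Sketch: report table storage statistics, including record counts
# and fragmentation. "mydb" is a placeholder database name.
db=mydb
echo "proutil $db -C tabanalys"
```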
Performing index compaction reduces the number of blocks in the B-tree and possibly the
number of B-tree levels, which improves query performance.
The index compacting utility operates in phases:
Phase 1: If the index is a unique index, the delete chain is scanned and the index blocks
are cleaned up by removing deleted entries.
Phase 2: The nonleaf levels of the B-tree are compacted, starting at the root and working
toward the leaf level.
Phase 3: The leaf level is compacted.
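As a sketch, index compaction is run with the IDXCOMPACT qualifier; the database, table, and index names and the target utilization percentage below are all placeholders:

```shell
# Sketch: compact one index to roughly 80% block utilization.
# "mydb", "customer", and "cust-num" are placeholder names;
# printed rather than executed.
db=mydb
echo "proutil $db -C idxcompact customer.cust-num 80"
```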
Rebuilding Indexes
Use the IDXBUILD (Index Rebuild) qualifier of the PROUTIL utility to:
Compress index blocks to minimize space usage.
Activate all deactivated indexes in the database.
Repair corrupted indexes in the database. (Index corruption is normally signaled by error
messages.)
NOTE: When you run the Index Rebuild qualifier, the database must not be in use.
To run the IDXBUILD qualifier with PROUTIL, enter the following command:
db-name
Specifies the name of the database whose indexes you want to build.
To improve performance, use the Merge Number (-TM) and Speed Sort (-TB) startup parameters.
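As a sketch, an offline rebuild of all indexes with sort/merge tuning looks like this; the database name and both parameter values are placeholders:

```shell
# Sketch: rebuild all indexes offline with sort/merge tuning.
# "mydb" and both values are placeholders; printed rather than executed.
#   -TB  speed-sort block size in KB
#   -TM  number of sort blocks merged per pass
db=mydb
echo "proutil $db -C idxbuild all -TB 24 -TM 32"
```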
Use the Some option to rebuild only specific indexes. Use the All option to rebuild all indexes.
After you enter a selection and name the indexes you want to rebuild, the utility asks whether
you have enough disk space for index sorting. If you enter yes, the utility sorts the indexes
you are rebuilding, generating the indexes in order by their keys. Sorting results in a faster
index rebuild and better space use in the index blocks.
To estimate whether you have enough free space to sort the indexes, use the following
formulas:
If you rebuild all the indexes in your database, sorting the indexes requires up to 75 percent
of the total database size in free space.
If you rebuild an individual index, sorting that index requires as much as the following
amount of free space:
(size of one index entry) * (number of records in file) * 3
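For instance, assuming a 64-byte index entry and 500,000 records (both placeholder figures), the single-index estimate works out as:

```shell
# (size of one index entry) * (number of records in file) * 3, in bytes.
# Both figures below are placeholder assumptions.
entry_bytes=64
records=500000
echo $((entry_bytes * records * 3))   # 96000000 bytes, about 92 MB
```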
The Index Rebuild qualifier with PROUTIL rebuilds an index or set of indexes in a series of
three phases:
1. The utility scans the database file, clearing all index blocks that belong to the indexes you
are rebuilding and adding those blocks to the free block list.
2. The utility scans the database file and rebuilds all the index entries for every data record.
If you chose to sort the index, the utility writes the index entries to the sort file. Otherwise,
the utility writes the index entries to the appropriate index at this point.
3. This phase occurs only if you chose to sort the indexes. In this phase, the utility sorts the
index entries in the sort file into groups and enters those entries into their respective indexes
in order, one index at a time.
The Index Rebuild qualifier accomplishes most of its work without displaying messages, unless
it encounters an error condition.
Another way is to create data extents that each contain a piece of the total database. For example, if
you have four drives, create 16 extents and put four on each drive. Put extent 1 on the first drive,
extent 2 on the second, extent 3 on the third, extent 4 on the fourth, extent 5 on the first, and so on.
A drawback to this "manual striping" is that as the database grows, the balance is disturbed when you
add extents.
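The round-robin layout above can be sketched as a simple mapping of extent number to drive:

```shell
# Round-robin placement of 16 extents across 4 drives:
# extent i goes to drive ((i - 1) % 4) + 1.
for i in $(seq 1 16); do
  echo "extent $i -> drive $(( (i - 1) % 4 + 1 ))"
done
```

Extents 1 through 4 land on drives 1 through 4, extent 5 wraps back to drive 1, and so on.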
On Linux, use a 4 KB block size if the kernel version is older than 2.6, which came out in December
2003; the virtual memory architecture in kernels prior to 2.6 did not allow for larger page sizes. An
8 KB block size may be used with kernel versions 2.6 and newer.
If you have the Workgroup database, keep the cluster size small. 512 KB or less is better than a
large cluster size, because there will not be any page writers to write modified database blocks to
disk.
5. Set the BI block size to 8 KB (usually the default value).
This allows more efficient writing of the BI file, which is always done using synchronous writes. The default
BI block size (8K) is sufficient for applications with low transaction rates. However, if performance
monitoring indicates that BI writes are a performance bottleneck and your platform's I/O subsystem
can take advantage of larger writes, increasing the BI block size might improve performance.
When using OpenEdge Replication, it is advisable to set the BI block size to 16 KB to keep it in sync
with the AI block size, which improves replication performance.
9. If you use after-image journaling (you should, but not for performance reasons), run the
after-image writer (AIW).
The after-image writer's job is to write filled AI buffers so the server does not have to do it. This gives
the server more time to do useful work. With self-service clients, the server and the client are in the
same process, but you still want the server to do useful work.
14. Use two drives for AI extents, with extents alternating between them.
The purpose of after-image journaling is to provide a way to recover if the drive(s) holding your
database fail. Therefore you MUST NOT store any AI extents on the same drives as the data
extents. Alternating between two drives allows filled extents to be archived without slowing down
writing of the current extent. If you do not have enough drives, put all the AI extents on the same
drive. If you are using a RAID array, there is no separate drive unless there are multiple RAID sets
on the system; in that case, put the AI files on a different RAID set.