EDB Postgres Advanced Server Guide v10
November 6, 2017
EDB Postgres Advanced Server Guide
by EnterpriseDB Corporation
Copyright 2014 - 2017 EnterpriseDB Corporation
Table of Contents
1 Introduction
    1.1 What's New
    1.2 Typographical Conventions Used in this Guide
    1.3 Other Conventions Used in this Guide
    1.4 About the Examples Used in this Guide
        1.4.1 Sample Database Description
2 Enhanced Compatibility Features
    2.1 Enabling Compatibility Features
    2.2 Stored Procedural Language
    2.3 Optimizer Hints
    2.4 Data Dictionary Views
    2.5 dblink_ora
    2.6 Profile Management
    2.7 Built-In Packages
    2.8 Open Client Library
    2.9 Utilities
    2.10 ECPGPlus
    2.11 Table Partitioning
3 Database Administration
    3.1 Configuration Parameters
        3.1.1 Setting Configuration Parameters
        3.1.2 Summary of Configuration Parameters
        3.1.3 Configuration Parameters by Functionality
            3.1.3.1 Top Performance Related Parameters
            3.1.3.2 Resource Usage / Memory
            3.1.3.3 Resource Usage / EDB Resource Manager
            3.1.3.4 Query Tuning
            3.1.3.5 Query Tuning / Planner Method Configuration
            3.1.3.6 Reporting and Logging / What to Log
            3.1.3.7 Auditing Settings
            3.1.3.8 Client Connection Defaults / Locale and Formatting
            3.1.3.9 Client Connection Defaults / Statement Behavior
            3.1.3.10 Client Connection Defaults / Other Defaults
            3.1.3.11 Compatibility Options
            3.1.3.12 Customized Options
            3.1.3.13 Ungrouped
    3.2 Index Advisor
        3.2.1 Index Advisor Components
        3.2.2 Index Advisor Configuration
        3.2.3 Using Index Advisor
            3.2.3.1 Using the pg_advise_index Utility
            3.2.3.2 Using Index Advisor at the psql Command Line
        3.2.4 Reviewing the Index Advisor Recommendations
1 Introduction
This guide describes the features of EDB Postgres Advanced Server (Advanced Server).
- Index Advisor, described in Section 3.2, helps to determine the additional indexes needed on tables to improve application performance.
- SQL Profiler, described in Section 3.3, locates and diagnoses poorly running SQL queries in applications.
- Resource Groups, described in Section 5.1, shows how to create and maintain the groups on which resource limits can be defined and to which Advanced Server processes can be assigned.
- CPU Usage Throttling, described in Section 5.2, provides a method to control CPU usage by Advanced Server processes.
- Dirty Buffer Throttling, described in Section 5.3, provides a method to control the dirty rate of shared buffers by Advanced Server processes.
- Performance Analysis and Tuning, covered in Chapter 8, describes the various tools for analyzing and improving application and database server performance.
- Dynatune, described in Section 8.1, provides a quick and easy means of configuring Advanced Server depending upon the type of application usage.
- EDB Clone Schema, covered in Chapter 9, provides the capability to copy a schema and its database objects within a single database or from one database to another database.
For information about the features that are shared by Advanced Server and PostgreSQL,
see the PostgreSQL core documentation, available at:
https://www.postgresql.org/docs/10/static/index.html
1.1 What's New

Advanced Server now includes the following enhancements for EDB Audit Logging:

1. Separate auditing for each DDL type allows the specification of individual DDL commands (CREATE, ALTER, or DROP), with or without their object types (for example, TABLE, VIEW, SEQUENCE), for which audit logging is to occur. This refines the types of SQL statements recorded in the audit log file (see Section 3.5.2.1).

2. Separate auditing for each DML type allows the specification of individual DML commands (INSERT, UPDATE, DELETE, or TRUNCATE) for which audit logging is to occur. This provides the same benefit as separate DDL auditing (see Section 3.5.2.2).

3. Statements to be audited can now be determined by: a) the database in which auditing is to occur, b) the role running the session, or c) the combination of the role and the database. This is accomplished by setting the edb_audit_statement configuration parameter with the ALTER DATABASE, ALTER ROLE, or ALTER ROLE IN DATABASE command (see Section 3.5.3 and the sketch following this list).

4. Command tags identify the SQL command that was executed, and thus aid in scanning the audit log to find entries related to certain SQL commands (see Section 3.5.6).

5. The new edb_audit_destination parameter specifies whether audit logging is to be recorded in the default location under the database cluster or to be handled by the syslog process (see Section 3.5.1).
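As an illustration of item 3 above, the following is a minimal sketch of setting edb_audit_statement at the database and role level; the database name (hr) and role name (auditor) are hypothetical, and the full set of accepted parameter values is documented in Section 3.5.3.

ALTER DATABASE hr SET edb_audit_statement TO 'ddl, insert, update';   -- audit DDL plus two DML types in database hr
ALTER ROLE auditor SET edb_audit_statement TO 'all';                  -- audit everything done by role auditor
ALTER ROLE auditor IN DATABASE hr SET edb_audit_statement TO 'error'; -- audit only failed statements for auditor in hr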
Advanced Server now includes the EDB Clone Schema extension, which allows
you to make a copy of a schema with its database objects from a given database,
and to insert it into another database, or into the same database (with a different
schema name). The cloning functionality can be run in either an online mode or a
non-blocking, background job mode. Using a background job frees up access to
your terminal while the cloning operation is in progress. Multiple background
worker processes can be used for the cloning process to shorten the amount of
time to complete the operation. For more information, see Chapter 9.
Advanced Server now provides the pg_prewarm module, which implements the
autoprewarm background worker. The autoprewarm worker process
automatically dumps shared buffers to disk before a shutdown. It then prewarms
the shared buffers the next time the server is started, meaning it loads blocks from
the disk back into the buffer pool. This shortens the warm up times after the
server has been restarted. For more information, see Section 3.1.3.1.16.
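A minimal postgresql.conf sketch for enabling this behavior, based on the pg_prewarm parameters listed in Section 3.1.2 (the library path given for shared_preload_libraries is an assumption):

# Load the pg_prewarm module at server start
shared_preload_libraries = '$libdir/pg_prewarm'
# Start the autoprewarm background worker
pg_prewarm.autoprewarm = on
# Dump the list of buffered blocks every 300 seconds
pg_prewarm.autoprewarm_interval = 300s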
Advanced Server now provides the --wal-segsize option for the initdb
utility program. This provides the capability to specify the WAL segment file size
when creating a database cluster instead of using the default size of 16 MB. For
more information, see Section 3.7.
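For example, assuming the option takes the segment size in megabytes and using an illustrative data directory path, a cluster with 64 MB WAL segments could be created with:

initdb --wal-segsize=64 -D /var/lib/edb/as10/data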
1.2 Typographical Conventions Used in this Guide

In the following descriptions, a term refers to any word or group of words that may be language keywords, user-supplied values, literals, etc. A term's exact meaning depends upon the context in which it is used.
- Italic font introduces a new term, typically in the sentence that defines it for the first time.
- Fixed-width (mono-spaced) font is used for terms that must be given literally, such as SQL commands, specific table and column names used in the examples, programming language keywords, directory paths and file names, parameter values, etc. For example: postgresql.conf, SELECT * FROM emp;
- Italic fixed-width font is used for terms for which the user must substitute values in actual usage. For example: DELETE FROM table_name;
- A vertical pipe | denotes a choice between the terms on either side of the pipe. A vertical pipe is used to separate two or more alternative terms within square brackets (optional choices) or braces (one mandatory choice).
- Square brackets [ ] denote that one or none of the enclosed terms may be substituted. For example, [ a | b ] means choose a, b, or neither of the two.
- Braces {} denote that exactly one of the enclosed alternatives must be specified. For example, { a | b } means that exactly one of a or b must be specified.
- Ellipses ... denote that the preceding term may be repeated. For example, [ a | b ] ... means that you may specify a sequence such as b a a b a.
1.3 Other Conventions Used in this Guide

- This guide applies to both Linux and Windows systems. Directory paths are presented in the Linux format with forward slashes. When working on Windows systems, start the directory path with the drive letter followed by a colon and substitute back slashes for forward slashes.
- Some of the information in this document applies interchangeably to the PostgreSQL and EDB Postgres Advanced Server database systems. The term Advanced Server is used to refer to EDB Postgres Advanced Server. The term Postgres is used to refer generically to both PostgreSQL and Advanced Server. When a distinction needs to be made between these two database systems, the specific name, PostgreSQL or Advanced Server, is used.
- The installation directory path of the PostgreSQL or Advanced Server products is referred to as POSTGRES_INSTALL_HOME. For PostgreSQL Linux installations, this defaults to /opt/PostgreSQL/x.x. For PostgreSQL Windows installations, this defaults to C:\Program Files\PostgreSQL\x.x. For Advanced Server Linux installations performed with the interactive installer, this defaults to /opt/edb/asx.x. For Advanced Server Linux installations performed with an RPM package, this defaults to /usr/edb/asx.x. For Advanced Server Windows installations, this defaults to C:\Program Files\edb\asx.x. The product version number is represented by x.x.

1.4 About the Examples Used in this Guide

Examples and output from examples are shown in fixed-width, blue font on a light blue background.
1.4.1 Sample Database Description

The examples use the sample tables dept, emp, and jobhist, which are created and loaded when Advanced Server is installed.

The tables and programs in the sample database can be re-created at any time by executing the following script:

/opt/edb/as10/installer/server/pg-sample.sql

In addition, there is a script in the same directory containing the database objects created using syntax compatible with Oracle databases. This script file is edb-sample.sql.
The script:

- Creates the sample tables and programs in the currently connected database.
- Grants all permissions on the tables to the PUBLIC group.
The tables and programs will be created in the first schema of the search path in which
the current user has permission to create tables and procedures. You can display the
search path by issuing the command:
SHOW SEARCH_PATH;
Each employee has an identification number, name, hire date, salary, and manager. Some
employees earn a commission in addition to their salary. All employee-related
information is stored in the emp table.
The sample company is regionally diverse, so it tracks the locations of its departments.
Each company employee is assigned to a department. Each department is identified by a
unique department number and a short name. Each department is associated with one
location. All department-related information is stored in the dept table.
The company also tracks information about jobs held by the employees. Some employees
have been with the company for a long time and have held different positions, received
raises, switched departments, etc. When a change in employee status occurs, the company
records the end date of the former position. A new job record is added with the start date
and the new job title, department, salary, and the reason for the status change. All
employee history is maintained in the jobhist table.
--
-- Script that creates the 'sample' tables, views
-- functions, triggers, etc.
--
-- Start new transaction - commit all or nothing
--
BEGIN;
--
-- Create and load tables used in the documentation examples.
--
-- Create the 'dept' table
--
CREATE TABLE dept (
deptno NUMERIC(2) NOT NULL CONSTRAINT dept_pk PRIMARY KEY,
dname VARCHAR(14) CONSTRAINT dept_dname_uq UNIQUE,
loc VARCHAR(13)
);
--
-- Create the 'emp' table
--
CREATE TABLE emp (
empno NUMERIC(4) NOT NULL CONSTRAINT emp_pk PRIMARY KEY,
ename VARCHAR(10),
job VARCHAR(9),
mgr NUMERIC(4),
hiredate DATE,
sal NUMERIC(7,2) CONSTRAINT emp_sal_ck CHECK (sal > 0),
comm NUMERIC(7,2),
deptno NUMERIC(2) CONSTRAINT emp_ref_dept_fk
REFERENCES dept(deptno)
);
--
-- Create the 'jobhist' table
--
CREATE TABLE jobhist (
empno NUMERIC(4) NOT NULL,
startdate TIMESTAMP(0) NOT NULL,
enddate TIMESTAMP(0),
job VARCHAR(9),
sal NUMERIC(7,2),
comm NUMERIC(7,2),
deptno NUMERIC(2),
chgdesc VARCHAR(80),
CONSTRAINT jobhist_pk PRIMARY KEY (empno, startdate),
CONSTRAINT jobhist_ref_emp_fk FOREIGN KEY (empno)
REFERENCES emp(empno) ON DELETE CASCADE,
CONSTRAINT jobhist_ref_dept_fk FOREIGN KEY (deptno)
        REFERENCES dept (deptno) ON DELETE SET NULL
);
--
-- (Intervening portions of the sample script are omitted from this
-- excerpt; it resumes in the declaration section of a function that
-- displays the information for a given employee.)
--
DECLARE
v_ename emp.ename%TYPE;
v_hiredate emp.hiredate%TYPE;
v_sal emp.sal%TYPE;
v_comm emp.comm%TYPE;
v_dname dept.dname%TYPE;
v_disp_date VARCHAR(10);
BEGIN
SELECT INTO
v_ename, v_hiredate, v_sal, v_comm, v_dname
ename, hiredate, sal, COALESCE(comm, 0), dname
FROM emp e, dept d
WHERE empno = p_empno
AND e.deptno = d.deptno;
IF NOT FOUND THEN
RAISE INFO 'Employee % not found', p_empno;
RETURN;
END IF;
v_disp_date := TO_CHAR(v_hiredate, 'MM/DD/YYYY');
RAISE INFO 'Number : %', p_empno;
RAISE INFO 'Name : %', v_ename;
RAISE INFO 'Hire Date : %', v_disp_date;
RAISE INFO 'Salary : %', v_sal;
RAISE INFO 'Commission: %', v_comm;
RAISE INFO 'Department: %', v_dname;
RETURN;
EXCEPTION
WHEN OTHERS THEN
RAISE INFO 'The following is SQLERRM : %', SQLERRM;
RAISE INFO 'The following is SQLSTATE: %', SQLSTATE;
RETURN;
END;
$$ LANGUAGE 'plpgsql';
--
-- A RECORD type used to format the return value of
-- function, 'emp_query'.
--
CREATE TYPE emp_query_type AS (
empno NUMERIC,
ename VARCHAR(10),
job VARCHAR(9),
hiredate DATE,
sal NUMERIC
);
--
-- Function that queries the 'emp' table based on
-- department number and employee number or name. Returns
-- employee number and name as INOUT parameters and job,
-- hire date, and salary as OUT parameters. These are
-- returned in the form of a record defined by
-- RECORD type, 'emp_query_type'.
--
CREATE OR REPLACE FUNCTION emp_query (
IN p_deptno NUMERIC,
INOUT p_empno NUMERIC,
INOUT p_ename VARCHAR,
OUT p_job VARCHAR,
OUT p_hiredate DATE,
OUT p_sal NUMERIC
)
AS $$
BEGIN
SELECT INTO
p_empno, p_ename, p_job, p_hiredate, p_sal
--
-- (Intervening portions of the sample script are omitted from this
-- excerpt; it resumes inside a function that hires a new salesman.)
--
v_comm NUMERIC(7,2);
v_deptno NUMERIC(2);
BEGIN
v_empno := new_empno();
INSERT INTO emp VALUES (v_empno, p_ename, 'SALESMAN', 7698,
CURRENT_DATE, p_sal, p_comm, 30);
SELECT INTO
v_empno, v_ename, v_job, v_mgr, v_hiredate, v_sal, v_comm, v_deptno
empno, ename, job, mgr, hiredate, sal, comm, deptno
FROM emp WHERE empno = v_empno;
RAISE INFO 'Department : %', v_deptno;
RAISE INFO 'Employee No: %', v_empno;
RAISE INFO 'Name : %', v_ename;
RAISE INFO 'Job : %', v_job;
RAISE INFO 'Manager : %', v_mgr;
RAISE INFO 'Hire Date : %', v_hiredate;
RAISE INFO 'Salary : %', v_sal;
RAISE INFO 'Commission : %', v_comm;
RETURN v_empno;
EXCEPTION
WHEN OTHERS THEN
RAISE INFO 'The following is SQLERRM : %', SQLERRM;
RAISE INFO 'The following is SQLSTATE: %', SQLSTATE;
RETURN -1;
END;
$$ LANGUAGE 'plpgsql';
--
-- Rule to INSERT into view 'salesemp'
--
CREATE OR REPLACE RULE salesemp_i AS ON INSERT TO salesemp
DO INSTEAD
INSERT INTO emp VALUES (NEW.empno, NEW.ename, 'SALESMAN', 7698,
NEW.hiredate, NEW.sal, NEW.comm, 30);
--
-- Rule to UPDATE view 'salesemp'
--
CREATE OR REPLACE RULE salesemp_u AS ON UPDATE TO salesemp
DO INSTEAD
UPDATE emp SET empno = NEW.empno,
ename = NEW.ename,
hiredate = NEW.hiredate,
sal = NEW.sal,
comm = NEW.comm
WHERE empno = OLD.empno;
--
-- Rule to DELETE from view 'salesemp'
--
CREATE OR REPLACE RULE salesemp_d AS ON DELETE TO salesemp
DO INSTEAD
DELETE FROM emp WHERE empno = OLD.empno;
--
-- After statement-level trigger that displays a message after
-- an insert, update, or deletion to the 'emp' table. One message
-- per SQL command is displayed.
--
CREATE OR REPLACE FUNCTION user_audit_trig() RETURNS TRIGGER
AS $$
DECLARE
v_action VARCHAR(24);
v_text TEXT;
BEGIN
IF TG_OP = 'INSERT' THEN
v_action := ' added employee(s) on ';
--
-- (The remainder of the sample script is omitted from this excerpt.)
--

2 Enhanced Compatibility Features
The Database Compatibility for Oracle Developers Tools and Utilities Guide
provides information about the compatible tools supported by Advanced Server:
EDB*Plus, EDB*Loader, EDB*Wrap, and DRITA.
https://www.enterprisedb.com/resources/product-documentation
2.1 Enabling Compatibility Features

You can enable the compatibility features in either of the following ways:

- Use the interactive graphical installer; on the Advanced Server Dialect dialog, use the drop-down menu to select Compatible with Oracle.
- Use the INITDBOPTS variable (in the Advanced Server service configuration file) to specify --redwood-like before initializing your cluster.
For more information about the installation options supported by the Advanced Server
installers, please see the EDB Postgres Advanced Server Installation Guide, available
from the EDB website at:
https://www.enterprisedb.com/resources/product-documentation
2.2 Stored Procedural Language

Advanced Server supports SPL, a highly productive stored procedural language that allows you to write custom procedures, functions, triggers, and packages.
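As a small, hedged illustration (the procedure name is hypothetical), an SPL procedure can use Oracle-style block syntax and built-in packages such as DBMS_OUTPUT, and can then be invoked from an anonymous block:

CREATE OR REPLACE PROCEDURE simple_procedure
IS
BEGIN
    -- DBMS_OUTPUT writes a line of text to the message output
    DBMS_OUTPUT.PUT_LINE('Hello from SPL');
END;

BEGIN
    simple_procedure;
END;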
For information about using the Stored Procedural Language, see the Database
Compatibility for Oracle Developers Guide, available at:
https://www.enterprisedb.com/resources/product-documentation
2.3 Optimizer Hints

When you invoke a DELETE, INSERT, SELECT, or UPDATE command, the server
generates a set of execution plans; after analyzing those execution plans, the server
selects a plan that will (generally) return the result set in the least amount of time. The
server's choice of plan is dependent upon several factors.
As a rule, the query planner will select the least expensive plan. You can use an optimizer
hint to influence the server as it selects a query plan.
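For instance, a hint embedded in a comment immediately after the command keyword can direct the planner toward a particular access method. The following hedged sketch uses the sample emp table; FULL and INDEX are among the hints supported by Advanced Server, and emp_pk is the index created by the sample table's primary key constraint:

SELECT /*+ FULL(e) */ * FROM emp e WHERE deptno = 10;          -- request a full (sequential) scan
SELECT /*+ INDEX(e emp_pk) */ * FROM emp e WHERE empno = 7900; -- request use of the emp_pk index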
For information about using optimizer hints, see the Database Compatibility for Oracle Developers Guide, available at:

https://www.enterprisedb.com/resources/product-documentation

2.4 Data Dictionary Views

Advanced Server provides a set of views that present information about database objects in a manner compatible with Oracle data dictionary views. For detailed information, see the Database Compatibility for Oracle Developers Guide, available at:

https://www.enterprisedb.com/resources/product-documentation
2.5 dblink_ora
dblink_ora provides an OCI-based database link that allows you to SELECT, INSERT,
UPDATE or DELETE data stored on an Oracle system from within Advanced Server. For
detailed information about using dblink_ora, and the supported functions and
procedures, see the Database Compatibility for Oracle Developers Guide, available at:
https://www.enterprisedb.com/resources/product-documentation
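The following is a heavily hedged sketch of a dblink_ora session (the connection name, Oracle connection details, and column definition list are hypothetical; consult the guide above for the exact function signatures):

-- Open an OCI connection to an Oracle server
SELECT dblink_ora_connect('acctg_conn', 'localhost', 'xe', 'scott', 'password', 1521);

-- Query the remote emp table through the connection
SELECT * FROM dblink_ora_record('acctg_conn', 'SELECT empno, ename FROM emp')
    AS t(empno NUMBER, ename VARCHAR2(10));

-- Close the connection
SELECT dblink_ora_disconnect('acctg_conn');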
2.6 Profile Management

Advanced Server supports compatible SQL syntax for profile management. Profile
management commands allow a database superuser to create and manage named
profiles. Each profile defines rules for password management that augment password
and md5 authentication. The rules in a profile can, for example, limit the number of failed login attempts, lock an account, and control password lifetime, reuse, and complexity.
A profile is a named set of attributes that allow you to easily manage a group of roles that
share comparable authentication requirements. If password requirements change, you
can modify the profile to have the new requirements applied to each user that is
associated with that profile.
After creating the profile, you can associate the profile with one or more users. When a
user connects to the server, the server enforces the profile that is associated with their
login role. Profiles are shared by all databases within a cluster, but each cluster may have
multiple profiles. A single user with access to multiple databases will use the same
profile when connecting to each database within the cluster.
For information about using profile management commands, see the Database
Compatibility for Oracle Developers Guide, available at:
https://www.enterprisedb.com/resources/product-documentation
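As an illustrative sketch (the profile and role names are hypothetical, and the full set of profile parameters is covered in the guide referenced above), a database superuser might create and assign a profile as follows:

-- Lock an account for one day after five consecutive failed login attempts
CREATE PROFILE acctg_profile LIMIT
    FAILED_LOGIN_ATTEMPTS 5
    PASSWORD_LOCK_TIME 1;

-- Associate the profile with an existing login role
ALTER ROLE john PROFILE acctg_profile;

-- Return the role to the default profile
ALTER ROLE john PROFILE DEFAULT;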
2.7 Built-In Packages

For detailed information about the procedures and functions available within each
package, please see the Database Compatibility for Oracle Developers Built-In Package
Guide, available at:
https://www.enterprisedb.com/resources/product-documentation
2.8 Open Client Library

The following diagram compares the Open Client Library and Oracle Call Interface
application stacks.
For detailed information about the functions supported by the Open Client Library, see
the EDB Postgres Advanced Server OCI Connector Guide, available at:
https://www.enterprisedb.com/resources/product-documentation
2.9 Utilities
For detailed information about the compatible syntax supported by the utilities listed
below, see the Database Compatibility for Oracle Developers Tools and Utilities Guide,
available at:
https://www.enterprisedb.com/resources/product-documentation
EDB*Plus
EDB*Plus is a utility program that provides a command line user interface to Advanced Server that will be familiar to Oracle developers and users. EDB*Plus accepts SQL commands, SPL anonymous blocks, and EDB*Plus commands.
For detailed information about EDB*Plus, please see the EDB*Plus User's Guide.
EDB*Loader

EDB*Loader is a high-performance bulk data loader that provides an interface compatible with Oracle SQL*Loader. EDB*Loader features include:

- Support for the Oracle SQL*Loader data loading methods: conventional path load, direct path load, and parallel direct path load
- Oracle SQL*Loader compatible syntax for control file directives
- Input data with delimiter-separated or fixed-width fields
- Bad file for collecting rejected records
- Loading of multiple target tables
- Discard file for collecting records that do not meet the selection criteria of any target table
- Log file for recording the EDB*Loader session and any error messages
- Data loading from standard input and remote loading
EDB*Wrap
The EDB*Wrap utility protects proprietary source code and programs (functions, stored
procedures, triggers, and packages) from unauthorized scrutiny. The EDB*Wrap
program translates a file that contains SPL or PL/pgSQL source code (the plaintext) into
a file that contains the same code in a form that is nearly impossible to read. Once you
have the obfuscated form of the code, you can send that code to Advanced Server and it
will store those programs in obfuscated form. While EDB*Wrap does obscure code,
table definitions are still exposed.
Everything you wrap is stored in obfuscated form. If you wrap an entire package, the
package body source, as well as the prototypes contained in the package header and the
functions and procedures contained in the package body are stored in obfuscated form.
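A hedged sketch of a typical invocation follows; the file names are hypothetical, and the iname/oname argument style is an assumption based on the Oracle wrap utility that EDB*Wrap emulates (see the Tools and Utilities Guide for the exact syntax):

# Obfuscate the SPL source in list_emp.sql, writing list_emp.plb
edbwrap iname=list_emp.sql oname=list_emp.plb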
2.10 ECPGPlus
EnterpriseDB has enhanced ECPG (the PostgreSQL pre-compiler) to create ECPGPlus.
ECPGPlus allows you to include embedded SQL commands in C applications; when you
use ECPGPlus to compile an application that contains embedded SQL commands, the
SQL code is syntax-checked and translated into C.
For information about using ECPGPlus, please see the EDB Postgres Advanced Server
ECPG Connector Guide, available from the EnterpriseDB website at:
https://www.enterprisedb.com/resources/product-documentation
2.11 Table Partitioning

Table partitioning is worthwhile only when a table would otherwise be very large. The
exact point at which a table will benefit from partitioning depends on the application; a
good rule of thumb is that the size of the table should exceed the physical memory of the
database server.
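As a brief illustration of the Oracle-compatible declarative syntax (the table, column, and partition names are illustrative; see the guide referenced below for the complete syntax), a list-partitioned table might be created like this:

CREATE TABLE sales
(
    dept_no   NUMBER,
    part_no   VARCHAR2(20),
    country   VARCHAR2(20),
    sale_date DATE,
    amount    NUMBER
)
PARTITION BY LIST(country)
(
    PARTITION europe VALUES('FRANCE', 'ITALY'),
    PARTITION asia VALUES('INDIA', 'PAKISTAN'),
    PARTITION americas VALUES('US', 'CANADA')
);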
For information about database compatibility features supported by Advanced Server, see
the Database Compatibility for Oracle Developer's Guide, available at:
https://www.enterprisedb.com/resources/product-documentation
3 Database Administration
This chapter describes the features that aid in the management and administration of
Advanced Server databases.
3.1 Configuration Parameters

For general information about the configuration parameters supported by PostgreSQL, see the PostgreSQL core documentation, available at:

https://www.postgresql.org/docs/10/static/runtime-config.html

3.1.1 Setting Configuration Parameters

This section provides an overview of how configuration parameters are specified and set. Each configuration parameter is set using a name/value pair. Parameter names are case-insensitive. In the configuration file, the parameter name is separated from its value by an optional equals sign (=). The following are some examples of parameter settings as they might appear in the postgresql.conf file:
# This is a comment
log_connections = yes
log_destination = 'syslog'
search_path = '"$user", public'
shared_buffers = 128MB
Parameter values are specified as one of the following data types:

- Boolean: Acceptable values can be written as on, off, true, false, yes, no, 1, 0, or any unambiguous prefix of these.
- Integer: Number without a fractional part.
- Floating Point: Number with an optional fractional part separated by a decimal point.
- String: Text value. Enclose in single quotes if the value is not a simple identifier or number (that is, the value contains special characters such as spaces or other punctuation marks).
- Enum: Specific set of string values. The allowed values can be found in the system view pg_settings.enumvals. Enum values are case-insensitive.
Some settings specify a memory or time value. Each of these has an implicit unit, which
is kilobytes, blocks (typically 8 kilobytes), milliseconds, seconds, or minutes. Default
units can be found by referencing the system view pg_settings.unit. A different unit
can be specified explicitly.
Valid memory units are kB (kilobytes), MB (megabytes), and GB (gigabytes). Valid time
units are ms (milliseconds), s (seconds), min (minutes), h (hours), and d (days). The
multiplier for memory units is 1024.
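For example, the default unit and the permitted enum values for a parameter can be read directly from the pg_settings system view:

SELECT name, unit, enumvals
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'wal_sync_method');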
- A number of parameter settings are established when the Advanced Server database product is built. These are read-only parameters, and their values cannot be changed. There are also a couple of parameters that are permanently set for each database when the database is created. These parameters are read-only as well and cannot subsequently be changed for the database.

- The initial settings for almost all configurable parameters across the entire database cluster are listed in the configuration file, postgresql.conf. These settings are put into effect upon database server start or restart. Some of these initial parameter settings can be overridden as discussed in the following bullet points. All configuration parameters have built-in default settings that are in effect if not explicitly overridden.

- Configuration parameters in the postgresql.conf file are overridden when the same parameters are included in the postgresql.auto.conf file. The ALTER SYSTEM command is used to manage the configuration parameters in the postgresql.auto.conf file (see the first sketch following this list).

- Parameter settings can be modified in the configuration file while the database server is running. If the configuration file is then reloaded (meaning a SIGHUP signal is issued), for certain parameter types the changed parameter settings immediately take effect. For some of these parameter types, the new settings are available in a currently running session immediately after the reload. For others, a new session must be started to use the new settings. And for yet other parameter types, modified settings do not take effect until the database server is stopped and restarted. See Section 18.1, "Setting Parameters," in the PostgreSQL core documentation for information on how to reload the configuration file.

- The SQL commands ALTER DATABASE, ALTER ROLE, and ALTER ROLE IN DATABASE can be used to modify certain parameter settings. The modified parameter settings take effect for new sessions after the command is executed. ALTER DATABASE affects new sessions connecting to the specified database. ALTER ROLE affects new sessions started by the specified role. ALTER ROLE IN DATABASE affects new sessions started by the specified role connecting to the specified database. Parameter settings established by these SQL commands remain in effect indefinitely, across database server restarts, overriding settings established by the methods discussed in the second and third bullet points. (See the second sketch following this list.)

- Parameter settings established using the ALTER DATABASE, ALTER ROLE, or ALTER ROLE IN DATABASE commands can only be changed by: a) re-issuing these commands with a different parameter value, or b) issuing these commands using either the SET parameter TO DEFAULT clause or the RESET parameter clause. These clauses change the parameter back to using the setting established by the methods set forth in the prior bullet points. See Section I, "SQL Commands," of Chapter VI, "Reference," in the PostgreSQL core documentation for the exact syntax of these SQL commands.

- Changes can be made for certain parameter settings for the duration of individual sessions using the PGOPTIONS environment variable or by using the SET command within the EDB-PSQL or PSQL command line terminal programs. Parameter settings made in this manner override settings established using any of the methods described by the second, third, and fourth bullet points, but only for the duration of the session. (See the third sketch following this list.)
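The following sketches illustrate three of the mechanisms described above; the parameter values, database name (hr), and role name (alice) are illustrative only. First, a cluster-wide override written to postgresql.auto.conf, followed by per-database and per-role settings:

-- 1. Cluster-wide override via ALTER SYSTEM (stored in postgresql.auto.conf)
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();   -- ask the server to re-read its configuration

-- 2. Per-database and per-role settings, applied to new sessions
ALTER DATABASE hr SET search_path TO hr, public;
ALTER ROLE alice SET work_mem = '32MB';
ALTER ROLE alice IN DATABASE hr SET statement_timeout = '60s';
ALTER ROLE alice RESET work_mem;   -- revert to the setting established by other methods

Third, a session-duration setting, made either through the PGOPTIONS environment variable when the session is started, or with SET from within the session:

PGOPTIONS="-c statement_timeout=60s" psql edb enterprisedb

SET work_mem TO '8MB';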
3.1.2 Summary of Configuration Parameters
This section contains a summary table listing all Advanced Server configuration
parameters along with a number of key attributes of the parameters.
These attributes are described by the following columns of the summary table:
Note: A number of parameters should never be altered. These are designated as "Note: For internal use only" in the Description column.
An X in the EPAS Only column indicates a parameter that applies only to Advanced Server.

Parameter | Scope of Effect | When Takes Effect | Authorized User | EPAS Only | Description
autovacuum_vacuum_scale_factor | Cluster | Reload | EPAS service account |  | Number of tuple updates or deletes prior to vacuum as a fraction of reltuples.
client_min_messages | Session | Immediate | User |  | Sets the message levels that are sent to the client.
commit_delay | Session | Immediate | Superuser |  | Sets the delay in microseconds between transaction commit and flushing WAL to disk.
commit_siblings | Session | Immediate | User |  | Sets the minimum concurrent open transactions before performing commit_delay.
config_file | Cluster | Restart | EPAS service account |  | Sets the server's main configuration file.
constraint_exclusion | Session | Immediate | User |  | Enables the planner to use constraints to optimize queries.
cpu_index_tuple_cost | Session | Immediate | User |  | Sets the planner's estimate of the cost of processing each index entry during an index scan.
cpu_operator_cost | Session | Immediate | User |  | Sets the planner's estimate of the cost of processing each operator or function call.
cpu_tuple_cost | Session | Immediate | User |  | Sets the planner's estimate of the cost of processing each tuple (row).
cursor_tuple_fraction | Session | Immediate | User |  | Sets the planner's estimate of the fraction of a cursor's rows that will be retrieved.
custom_variable_classes | Cluster | Reload | EPAS service account | X | Deprecated in Advanced Server 9.2.
data_checksums | Cluster | Preset | n/a |  | Shows whether data checksums are turned on for this cluster.
data_directory | Cluster | Restart | EPAS service account |  | Sets the server's data directory.
DateStyle | Session | Immediate | User |  | Sets the display format for date and time values.
db_dialect | Session | Immediate | User | X | Sets the precedence of built-in namespaces.
dbms_alert.max_alerts | Cluster | Restart | EPAS service account | X | Sets maximum number of alerts.
dbms_pipe.total_message_buffer | Cluster | Restart | EPAS service account | X | Specifies the total size of the buffer used for the DBMS_PIPE package.
db_user_namespace | Cluster | Reload | EPAS service account |  | Enables per-database user names.
deadlock_timeout | Session | Immediate | Superuser |  | Sets the time to wait on a lock before checking for deadlock.
debug_assertions | Cluster | Preset | n/a |  | Turns on various assertion checks. (Not supported in EPAS builds.)
debug_pretty_print | Session | Immediate | User |  | Indents parse and plan tree displays.
debug_print_parse | Session | Immediate | User |  | Logs each query's parse tree.
debug_print_plan | Session | Immediate | User |  | Logs each query's execution plan.
debug_print_rewritten | Session | Immediate | User |  | Logs each query's rewritten parse tree.
default_heap_fillfactor | Session | Immediate | User | X | Create new tables with this heap fillfactor by default.
default_statistics_target | Session | Immediate | User |  | Sets the default statistics target.
edb_audit_disconnect | Cluster | Reload | EPAS service account | X | Audits end of a session.
edb_audit_filename | Cluster | Reload | EPAS service account | X | Sets the file name pattern for audit files.
edb_audit_rotation_day | Cluster | Reload | EPAS service account | X | Automatic rotation of log files based on day of week.
edb_audit_rotation_seconds | Cluster | Reload | EPAS service account | X | Automatic log file rotation will occur after N seconds.
edb_audit_rotation_size | Cluster | Reload | EPAS service account | X | Automatic log file rotation will occur after N megabytes.
edb_audit_statement | Cluster | Reload | EPAS service account | X | Sets the type of statements to audit.
edb_audit_tag | Session | Immediate | User | X | Specify a tag to be included in the audit log.
edb_connectby_order | Session | Immediate | User | X | Sort results of CONNECT BY queries with no ORDER BY to depth-first order. Note: For internal use only.
edb_custom_plan_tries | Session | Immediate | User | X | Specifies the number of custom execution plans considered by the planner before the planner selects a generic execution plan.
edb_dynatune | Cluster | Restart | EPAS service account | X | Sets the edb utilization percentage.
edb_dynatune_profile | Cluster | Restart | EPAS service account | X | Sets the workload profile for dynatune.
edb_enable_icache | Cluster | Restart | EPAS service account | X | Enable external shared buffer Infinite Cache mechanism.
edb_enable_pruning | Session | Immediate | User | X | Enables the planner to early-prune partitioned tables.
edb_icache_compression_level | Session | Immediate | Superuser | X | Sets compression level of Infinite Cache buffers.
edb_icache_servers | Cluster | Reload | EPAS service account | X | A list of comma-separated hostname:portnumber icache servers.
edb_log_every_bulk_value | Session | Immediate | Superuser | X | Sets the statements logged for bulk processing.
edb_max_resource_groups | Cluster | Restart | EPAS service account | X | Specifies the maximum number of resource groups for simultaneous use.
edb_max_spins_per_delay | Cluster | Restart | EPAS service account | X | Specifies the number of times a session will spin while waiting for a lock.
edb_redwood_date | Session | Immediate | User | X | Determines whether DATE should behave like a TIMESTAMP or not.
edb_redwood_greatest_least | Session | Immediate | User | X | Determines how GREATEST and LEAST functions should handle NULL parameters.
edb_redwood_raw_names | Session | Immediate | User | X | Return the unmodified name stored in the PostgreSQL system catalogs from Redwood interfaces.
edb_redwood_strings | Session | Immediate | User | X | Treat NULL as an empty string when concatenated with a text value.
edb_resource_group | Session | Immediate | User | X | Specifies the resource group to be used by the current process.
edb_sql_protect.enabled | Cluster | Reload | EPAS service account | X | Defines whether SQL/Protect should track queries or not.
edb_sql_protect.level | Cluster | Reload | EPAS service account | X | Defines the behavior of SQL/Protect when an event is found.
edb_sql_protect.max_protected_relations | Cluster | Restart | EPAS service account | X | Sets the maximum number of relations protected by SQL/Protect per role.
edb_sql_protect.max_protected_roles | Cluster | Restart | EPAS service account | X | Sets the maximum number of roles protected by SQL/Protect.
edb_sql_protect.max_queries_to_save | Cluster | Restart | EPAS service account | X | Sets the maximum number of offending queries to save by SQL/Protect.
edb_stmt_level_tx | Session | Immediate | User | X | Allows continuing on errors instead of requiring a transaction abort.
edbldr.empty_csv_field | Session | Immediate | Superuser | X | Specifies how EDB*Loader handles empty strings.
effective_cache_size | Session | Immediate | User |  | Sets the planner's assumption about the size of the disk cache.
effective_io_concurrency | Session | Immediate | User |  | Number of simultaneous requests that can be handled efficiently by the disk subsystem.
enable_bitmapscan | Session | Immediate | User |  | Enables the planner's use of bitmap-scan plans.
enable_hashagg | Session | Immediate | User |  | Enables the planner's use of hashed aggregation plans.
enable_hashjoin | Session | Immediate | User |  | Enables the planner's use of hash join plans.
enable_hints | Session | Immediate | User | X | Enable optimizer hints in SQL statements.
enable_indexonlyscan | Session | Immediate | User |  | Enables the planner's use of index-only-scan plans.
enable_indexscan | Session | Immediate | User |  | Enables the planner's use of index-scan plans.
enable_material | Session | Immediate | User |  | Enables the planner's use of materialization.
enable_mergejoin | Session | Immediate | User |  | Enables the planner's use of merge join plans.
enable_nestloop | Session | Immediate | User |  | Enables the planner's use of nested-loop join plans.
enable_seqscan | Session | Immediate | User |  | Enables the planner's use of sequential-scan plans.
enable_sort | Session | Immediate | User |  | Enables the planner's use of explicit sort steps.
enable_tidscan | Session | Immediate | User |  | Enables the planner's use of TID scan plans.
escape_string_warning | Session | Immediate | User |  | Warn about backslash escapes in ordinary string literals.
event_source | Cluster | Restart | EPAS service account |  | Sets the application name used to identify PostgreSQL messages in the event log.
exit_on_error | Session | Immediate | User |  | Terminate session on any error.
external_pid_file | Cluster | Restart | EPAS service account |  | Writes the postmaster PID to the specified file.
extra_float_digits | Session | Immediate | User |  | Sets the number of digits displayed for floating-point values.
from_collapse_limit | Session | Immediate | User |  | Sets the FROM-list size beyond which subqueries are not collapsed.
fsync | Cluster | Reload | EPAS service account |  | Forces synchronization of updates to disk.
full_page_writes | Cluster | Reload | EPAS service account |  | Writes full pages to WAL when first modified after a checkpoint.
geqo | Session | Immediate | User |  | Enables genetic query optimization.
geqo_effort | Session | Immediate | User |  | GEQO: effort is used to set the default for other GEQO parameters.
geqo_generations | Session | Immediate | User |  | GEQO: number of iterations of the algorithm.
geqo_pool_size | Session | Immediate | User |  | GEQO: number of individuals in the population.
geqo_seed | Session | Immediate | User |  | GEQO: seed for random path selection.
geqo_selection_bias | Session | Immediate | User |  | GEQO: selective pressure within the population.
geqo_threshold | Session | Immediate | User |  | Sets the threshold of FROM items beyond which GEQO is used.
gin_fuzzy_search_limit | Session | Immediate | User |  | Sets the maximum allowed result for exact search by GIN.
hba_file | Cluster | Restart | EPAS service account |  | Sets the server's "hba" configuration file.
hot_standby | Cluster | Restart | EPAS service account |  | Allows connections and queries during recovery.
hot_standby_feedback | Cluster | Reload | EPAS service account |  | Allows feedback from a hot standby to the primary that will avoid query conflicts.
huge_pages | Cluster | Restart | EPAS service account |  | Use of huge pages on Linux.
icu_short_form | Database | Preset | n/a | X | Shows the ICU collation order configuration.
ident_file | Cluster | Restart | EPAS service account |  | Sets the server's "ident" configuration file.
ignore_checksum_failure | Session | Immediate | Superuser |  | Continues processing after a checksum failure.
ignore_system_indexes | Cluster/Session | Reload/Immediate | EPAS service account/User |  | Disables reading from system indexes. (Can also be set with PGOPTIONS at session start.)
index_advisor.enabled | Session | Immediate | User | X | Enable Index Advisor plugin.
integer_datetimes | Cluster | Preset | n/a |  | Datetimes are integer based.
IntervalStyle | Session | Immediate | User |  | Sets the display format for interval values.
join_collapse_limit | Session | Immediate | User |  | Sets the FROM-list size beyond which JOIN constructs are not flattened.
krb_caseins_users | Cluster | Reload | EPAS service account |  | Sets whether Kerberos and GSSAPI user names should be treated as case-insensitive.
krb_server_keyfile | Cluster | Reload | EPAS service account |  | Sets the location of the Kerberos server key file.
lc_collate | Database | Preset | n/a |  | Shows the collation order locale.
lc_ctype | Database | Preset | n/a |  | Shows the character classification and case conversion locale.
lc_messages | Session | Immediate | Superuser |  | Sets the language in which messages are displayed.
lc_monetary | Session | Immediate | User |  | Sets the locale for formatting monetary amounts.
lc_numeric | Session | Immediate | User |  | Sets the locale for formatting numbers.
lc_time | Session | Immediate | User |  | Sets the locale for formatting date and time values.
listen_addresses | Cluster | Restart | EPAS service account |  | Sets the host name or IP address(es) to listen to.
local_preload_libraries | Cluster/Session | Reload/Immediate | EPAS service account/User |  | Lists shared libraries to preload into each backend. (Can also be set with PGOPTIONS at session start.)
lock_timeout | Session | Immediate | User |  | Sets the maximum time allowed that a statement may wait for a lock.
lo_compat_privileges | Session | Immediate | Superuser |  | Enables backward compatibility mode for privilege checks on large objects.
log_autovacuum_min_duration | Cluster | Reload | EPAS service account |  | Sets the minimum execution time above which autovacuum actions will be logged.
log_checkpoints | Cluster | Reload | EPAS service account |  | Logs each checkpoint.
log_connections | Cluster/Session | Reload/Immediate | EPAS service account/User |  | Logs each successful connection. (Can also be set with PGOPTIONS at session start.)
log_destination | Cluster | Reload | EPAS service account |  | Sets the destination for server log output.
log_directory | Cluster | Reload | EPAS service account |  | Sets the destination directory for log files.
log_disconnections | Cluster/Session | Reload/Immediate | EPAS service account/User |  | Logs end of a session, including duration. (Can also be set with PGOPTIONS at session start.)
log_duration | Session | Immediate | Superuser |  | Logs the duration of each completed SQL statement.
log_error_verbosity | Session | Immediate | Superuser |  | Sets the verbosity of logged messages.
log_executor_stats | Session | Immediate | Superuser |  | Writes executor performance statistics to the server log.
log_file_mode | Cluster | Reload | EPAS service account |  | Sets the file permissions for log files.
log_filename | Cluster | Reload | EPAS service account |  | Sets the file name pattern for log files.
log_hostname | Cluster | Reload | EPAS service account |  | Logs the host name in the connection logs.
log_line_prefix | Cluster | Reload | EPAS service account |  | Controls information prefixed to each log line.
log_lock_waits | Session | Immediate | Superuser |  | Logs long lock waits.
log_min_duration_statement | Session | Immediate | Superuser |  | Sets the minimum execution time above which statements will be logged.
log_min_error_statement | Session | Immediate | Superuser |  | Causes all statements generating error at or above this level to be logged.
log_min_messages | Session | Immediate | Superuser |  | Sets the message levels that are logged.
log_parser_stats | Session | Immediate | Superuser |  | Writes parser performance statistics to the server log.
log_planner_stats | Session | Immediate | Superuser |  | Writes planner performance statistics to the server log.
log_rotation_age | Cluster | Reload | EPAS service account |  | Automatic log file rotation will occur after N minutes.
log_rotation_size | Cluster | Reload | EPAS service account |  | Automatic log file rotation will occur after N kilobytes.
log_statement | Session | Immediate | Superuser |  | Sets the type of statements logged.
log_statement_stats | Session | Immediate | Superuser |  | Writes cumulative performance statistics to the server log.
log_temp_files | Session | Immediate | Superuser |  | Log the use of temporary files larger than this number of kilobytes.
log_timezone | Cluster | Reload | EPAS service account |  | Sets the time zone to use in log messages.
log_truncate_on_rotation | Cluster | Reload | EPAS service account |  | Truncate existing log files of same name during log rotation.
logging_collector | Cluster | Restart | EPAS service account |  | Start a subprocess to capture stderr output and/or csvlogs into log files.
maintenance_work_mem | Session | Immediate | User |  | Sets the maximum memory to be used for maintenance operations.
max_connections | Cluster | Restart | EPAS service account |  | Sets the maximum number of concurrent connections.
max_files_per_process | Cluster | Restart | EPAS service account |  | Sets the maximum number of simultaneously open files for each server process.
max_function_args | Cluster | Preset | n/a |  | Shows the maximum number of function arguments.
max_identifier_length | Cluster | Preset | n/a |  | Shows the maximum identifier length.
max_index_keys | Cluster | Preset | n/a |  | Shows the maximum number of index keys.
max_locks_per_transaction | Cluster | Restart | EPAS service account |  | Sets the maximum number of locks per transaction.
max_pred_locks_per_transaction | Cluster | Restart | EPAS service account |  | Sets the maximum number of predicate locks per transaction.
max_prepared_transactions | Cluster | Restart | EPAS service account |  | Sets the maximum number of simultaneously prepared transactions.
max_replication_slots | Cluster | Restart | EPAS service account |  | Sets the maximum number of simultaneously defined replication slots.
max_stack_depth | Session | Immediate | Superuser |  | Sets the maximum stack depth, in kilobytes.
max_standby_archive_delay | Cluster | Reload | EPAS service account |  | Sets the maximum delay before canceling queries when a hot standby server is processing archived WAL data.
max_standby_streaming_delay | Cluster | Reload | EPAS service account |  | Sets the maximum delay before canceling queries when a hot standby server is processing streamed WAL data.
max_wal_senders | Cluster | Restart | EPAS service account |  | Sets the maximum number of simultaneously running WAL sender processes.
max_wal_size | Cluster | Reload | EPAS service account |  | Sets the maximum size to which the WAL will grow between automatic WAL checkpoints. The default is 1GB.
max_worker_processes | Cluster | Restart | EPAS service account |  | Maximum number of concurrent worker processes.
min_wal_size | Cluster | Reload | EPAS service account |  | Sets the threshold at which WAL logs will be recycled rather than removed. The default is 80 MB.
nls_length_semantics | Session | Immediate | Superuser | X | Sets the semantics to use for char, varchar, varchar2 and long columns.
odbc_lib_path | Cluster | Restart | EPAS service account | X | Sets the path for ODBC library.
optimizer_mode | Session | Immediate | User | X | Default optimizer mode.
oracle_home | Cluster | Restart | EPAS service account | X | Sets the path for the Oracle home directory.
password_encryption | Session | Immediate | User |  | Encrypt passwords.
pg_prewarm.autoprewarm | Cluster | Restart | EPAS service account | X | Enables the autoprewarm background worker.
pg_prewarm.autoprewarm_interval | Cluster | Reload | EPAS service account | X | Sets the minimum number of seconds after which autoprewarm dumps shared buffers.
port | Cluster | Restart | EPAS service account |  | Sets the TCP port on which the server listens.
post_auth_delay | Cluster/Session | Reload/Immediate | EPAS service account/User |  | Waits N seconds on connection startup after authentication. (Can also be set with PGOPTIONS at session start.)
pre_auth_delay | Cluster | Reload | EPAS service account |  | Waits N seconds on connection startup before authentication.
qreplace_function | Session | Immediate | Superuser | X | The function to be used by Query Replace feature. Note: For internal use only.
query_rewrite_enabled | Session | Immediate | User | X | Child table scans will be skipped if their constraints guarantee that no rows match the query.
query_rewrite_integrity | Session | Immediate | Superuser | X | Sets the degree to which query rewriting must be enforced.
quote_all_identifiers | Session | Immediate | User |  | When generating SQL fragments, quote all identifiers.
random_page_cost | Session | Immediate | User |  | Sets the planner's estimate of the cost of a nonsequentially fetched disk page.
restart_after_crash | Cluster | Reload | EPAS service account |  | Reinitialize server after backend crash.
search_path | Session | Immediate | User |  | Sets the schema search order for names that are not schema-qualified.
segment_size | Cluster | Preset | n/a |  | Shows the number of pages per disk file.
seq_page_cost | Session | Immediate | User |  | Sets the planner's estimate of the cost of a sequentially fetched disk page.
server_encoding | Database | Preset | n/a |  | Sets the server (database) character set encoding.
server_version | Cluster | Preset | n/a |  | Shows the server version.
server_version_num | Cluster | Preset | n/a |  | Shows the server version as an integer.
session_preload_libraries | Session | Immediate, but only at connection start | Superuser |  | Lists shared libraries to preload into each backend.
session_replication_role | Session | Immediate | Superuser |  | Sets the session's behavior for triggers and rewrite rules.
shared_buffers | Cluster | Restart | EPAS service account |  | Sets the number of shared memory buffers used by the server.
shared_preload_libraries | Cluster | Restart | EPAS service account |  | Lists shared libraries to preload into server.
sql_inheritance | Session | Immediate | User |  | Causes subtables to be included by default in various commands.
ssl | Cluster | Restart | EPAS service account |  | Enables SSL connections.
ssl_ca_file | Cluster | Restart | EPAS service account |  | Location of the SSL certificate authority file.
ssl_cert_file | Cluster | Restart | EPAS service account |  | Location of the SSL server certificate file.
ssl_ciphers | Cluster | Restart | EPAS service account |  | Sets the list of allowed SSL ciphers.
ssl_crl_file | Cluster | Restart | EPAS service account |  | Location of the SSL certificate revocation list file.
ssl_ecdh_curve | Cluster | Restart | EPAS service account |  | Sets the curve to use for ECDH.
ssl_key_file | Cluster | Restart | EPAS service account |  | Location of the SSL server private key file.
ssl_prefer_server_ciphers | Cluster | Restart | EPAS service account |  | Give priority to server ciphersuite order.
ssl_renegotiation_limit | Session | Immediate | User |  | Set the amount of traffic to send and receive before renegotiating the encryption keys.
standard_conforming_strings | Session | Immediate | User |  | Causes '...' strings to treat backslashes literally.
statement_timeout | Session | Immediate | User |  | Sets the maximum allowed duration of any statement.
stats_temp_directory | Cluster | Reload | EPAS service account |  | Writes temporary statistics files to the specified directory.
superuser_reserved_connections | Cluster | Restart | EPAS service account |  | Sets the number of connection slots reserved for superusers.
synchronize_seqscans | Session | Immediate | User |  | Enable synchronized sequential scans.
synchronous_commit | Session | Immediate | User |  | Sets immediate fsync at commit.
synchronous_standby_names | Cluster | Reload | EPAS service account |  | List of names of potential synchronous standbys.
syslog_facility | Cluster | Reload | EPAS service account |  | Sets the syslog "facility" to be used when syslog enabled.
syslog_ident | Cluster | Reload | EPAS service account |  | Sets the program name used to identify PostgreSQL messages in syslog.
tcp_keepalives_count | Session | Immediate | User |  | Maximum number of TCP keepalive retransmits.
tcp_keepalives_idle | Session | Immediate | User |  | Time between issuing TCP keepalives.
tcp_keepalives_interval | Session | Immediate | User |  | Time between TCP keepalive retransmits.
temp_buffers | Session | Immediate | User |  | Sets the maximum number of temporary buffers used by each session.
temp_file_limit | Session | Immediate | Superuser |  | Limits the total size of all temporary files used by each session.
temp_tablespaces | Session | Immediate | User |  | Sets the tablespace(s) to use for temporary tables and sort files.
timed_statistics | Session | Immediate | User | X | Enables the recording of timed statistics.
timezone | Session | Immediate | User |  | Sets the time zone for displaying and interpreting time stamps.
timezone_abbreviations | Session | Immediate | User |  | Selects a file of time zone abbreviations.
trace_hints | Session | Immediate | User | X | Emit debug info about hints being honored.
trace_notify | Session | Immediate | User |  | Generates debugging output for LISTEN and NOTIFY.
trace_recovery_messages | Cluster | Reload | EPAS service account |  | Enables logging of recovery-related debugging information.
trace_sort | Session | Immediate | User |  | Emit information about resource usage in sorting.
track_activities | Session | Immediate | Superuser |  | Collects information about executing commands.
track_activity_query_size | Cluster | Restart | EPAS service account |  | Sets the size reserved for pg_stat_activity.current_query, in bytes.
track_counts | Session | Immediate | Superuser |  | Collects statistics on database activity.
track_functions | Session | Immediate | Superuser |  | Collects function-level statistics on database activity.
track_io_timing | Session | Immediate | Superuser |  | Collects timing statistics for database I/O activity.
transaction_deferrable | Session | Immediate | User |  | Whether to defer a read-only serializable transaction until it can be executed with no possible serialization failures.
transaction_isolation | Session | Immediate | User |  | Sets the current transaction's isolation level.
transaction_read_only | Session | Immediate | User |  | Sets the current transaction's read-only status.
transform_null_equals | Session | Immediate | User |  | Treats "expr=NULL" as "expr IS NULL".
unix_socket_directories | Cluster | Restart | EPAS service account |  | Sets the directory where the Unix-domain socket will be created.
unix_socket_group | Cluster | Restart | EPAS service account |  | Sets the owning group of the Unix-domain socket.
unix_socket_permissions | Cluster | Restart | EPAS service account |  | Sets the access permissions of the Unix-domain socket.
update_process_title | Session | Immediate | Superuser |  | Updates the process title to show the active SQL command.
utl_encode.uudecode_redwood | Session | Immediate | User | X | Allows decoding of Oracle-created uuencoded data.
utl_file.umask | Session | Immediate | User | X | Umask used for files created through the UTL_FILE package.
vacuum_cost_delay | Session | Immediate | User |  | Vacuum cost delay in milliseconds.
vacuum_cost_limit | Session | Immediate | User |  | Vacuum cost amount available before napping.
vacuum_cost_page_dirty | Session | Immediate | User |  | Vacuum cost for a page dirtied by vacuum.
vacuum_cost_page_hit | Session | Immediate | User |  | Vacuum cost for a page found in the buffer cache.
vacuum_cost_page_miss | Session | Immediate | User |  | Vacuum cost for a page not found in the buffer cache.
vacuum_defer_cleanup_age | Cluster | Reload | EPAS service account |  | Number of transactions by which VACUUM and HOT cleanup should be deferred, if any.
vacuum_freeze_min_age | Session | Immediate | User |  | Minimum age at which VACUUM should freeze a table row.
vacuum_freeze_table_age | Session | Immediate | User |  | Age at which VACUUM should scan whole table to freeze tuples.
vacuum_multixact_freeze_min_age | Session | Immediate | User |  | Minimum age at which VACUUM should freeze a MultiXactId in a table row.
vacuum_multixact_freeze_table_age | Session | Immediate | User |  | Multixact age at which VACUUM should scan whole table to freeze tuples.
wal_block_size | Cluster | Preset | n/a |  | Shows the block size in the write ahead log.
wal_buffers | Cluster | Restart | EPAS service account |  | Sets the number of disk-page buffers in shared memory for WAL.
wal_keep_segments | Cluster | Reload | EPAS service account |  | Sets the number of WAL files held for standby servers.
wal_level | Cluster | Restart | EPAS service account |  | Set the level of information written to the WAL.
wal_log_hints | Cluster | Restart | EPAS service account |  | Writes full pages to WAL when first modified after a checkpoint, even for non-critical modifications.
wal_receiver_status_interval | Cluster | Reload | EPAS service account |  | Sets the maximum interval between WAL receiver status reports to the primary.
wal_receiver_timeout | Cluster | Reload | EPAS service account |  | Sets the maximum wait time to receive data from the primary.
wal_segment_size | Cluster | Preset | n/a |  | Shows the number of pages per write ahead log segment.
wal_sender_timeout | Cluster | Reload | EPAS service account |  | Sets the maximum time to wait for WAL replication.
wal_sync_method | Cluster | Reload | EPAS service account |  | Selects the method used for forcing WAL updates to disk.
wal_writer_delay | Cluster | Reload | EPAS service account |  | WAL writer sleep time between WAL flushes.
work_mem | Session | Immediate | User |  | Sets the maximum memory to be used for query workspaces.
xloginsert_locks | Cluster | Restart | EPAS service account |  | Sets the number of locks used for concurrent xlog insertions.
xmlbinary | Session | Immediate | User |  | Sets how binary values are to be encoded in XML.
xmloption | Session | Immediate | User |  | Sets whether XML data in implicit parsing and serialization operations is to be considered as documents or content fragments.
zero_damaged_pages | Session | Immediate | Superuser |  | Continues processing past damaged page headers.
This section provides more detail for certain groups of configuration parameters.
Parameter Type. Type of values the parameter can accept. See Section 3.1.1 for
a discussion of parameter type values.
Default Value. Default setting if a value is not explicitly set in the configuration
file.
Range. Permitted range of values.
Minimum Scope of Effect. Smallest scope for which a distinct setting can be
made. Generally, the minimal scope of a distinct setting is either the entire cluster
(the setting is the same for all databases and sessions thereof, in the cluster), or
per session (the setting may vary by role, database, or individual session). (This
attribute has the same meaning as the Scope of Effect column in the table of
Section 3.1.2.)
When Value Changes Take Effect. Least invasive action required to activate a
change to a parameter's value. Any parameter setting change made in the
configuration file can be put into effect with a restart of the database server, but
only certain parameters actually require a restart. Some parameter setting
changes can be put into effect with a reload of the configuration file, without
stopping the database server. Finally, other parameter setting changes can
be put into effect with some client-side action whose result is immediate. (This
attribute has the same meaning as the When Takes Effect column in the table of
Section 3.1.2.)
Required Authorization to Activate. The type of user authorization required to
activate a change to a parameter's setting. If a database server restart or a
configuration file reload is required, then the user must be an EPAS service
account (enterprisedb or postgres, depending upon the Advanced Server
compatibility installation mode). This attribute has the same meaning as the
Authorized User column in the table of Section 3.1.2.
3.1.3.1.1 shared_buffers
Sets the amount of memory the database server uses for shared memory buffers. The
default is typically 32 megabytes (32MB), but might be less if your kernel settings will not
support it (as determined during initdb). This setting must be at least 128 kilobytes.
(Non-default values of BLCKSZ change the minimum.) However, settings significantly
higher than the minimum are usually needed for good performance.
If you have a dedicated database server with 1GB or more of RAM, a reasonable starting
value for shared_buffers is 25% of the memory in your system. There are some
workloads where even large settings for shared_buffers are effective, but because
Advanced Server also relies on the operating system cache, it is unlikely that an
allocation of more than 40% of RAM to shared_buffers will work better than a
smaller amount.
On systems with less than 1GB of RAM, a smaller percentage of RAM is appropriate, so
as to leave adequate space for the operating system (15% of memory is more typical in
these situations). Also, on Windows, large values for shared_buffers aren't as
effective. You may find better results keeping the setting relatively low and using the
operating system cache more instead. The useful range for shared_buffers on
Windows systems is generally from 64MB to 512MB.
Increasing this parameter might cause Advanced Server to request more System V shared
memory than your operating system's default configuration allows. See Section 17.4.1,
Shared Memory and Semaphores in the PostgreSQL Core Documentation for
information on how to adjust those parameters, if necessary.
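For instance, on a hypothetical dedicated server with 8GB of RAM, the 25% guideline above would translate into a postgresql.conf entry along these lines (the value is illustrative, not a recommendation for every system):

shared_buffers = 2GB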
3.1.3.1.2 work_mem
Specifies the amount of memory to be used by internal sort operations and hash tables
before writing to temporary disk files. The value defaults to one megabyte (1MB). Note
that for a complex query, several sort or hash operations might be running in parallel;
each operation will be allowed to use as much memory as this value specifies before it
starts to write data into temporary files. Also, several running sessions could be doing
such operations concurrently. Therefore, the total memory used could be many times the
value of work_mem; it is necessary to keep this fact in mind when choosing the value.
Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are
used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.
Reasonable values are typically between 4MB and 64MB, depending on the size of your
machine, how many concurrent connections you expect (determined by
max_connections), and the complexity of your queries.
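Because work_mem has session scope, it can also be raised temporarily around a single memory-hungry operation rather than globally; a minimal sketch (the value is illustrative):

SET work_mem = '64MB';
-- run the large sort or hash-heavy query here
RESET work_mem;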
3.1.3.1.3 maintenance_work_mem
Specifies the maximum amount of memory to be used by maintenance operations, such as
VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. Since only one of these
operations can be executed at a time by a database session, and an installation normally
doesn't have many of them running
concurrently, it's safe to set this value significantly larger than work_mem. Larger settings
might improve performance for vacuuming and for restoring database dumps.
A good rule of thumb is to set this to about 5% of system memory, but not more than
about 512MB. Larger values won't necessarily improve performance.
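As an illustration of the rule of thumb above, a host with 16GB of RAM would suggest roughly 800MB, which the 512MB ceiling then caps; the corresponding postgresql.conf entry would read:

maintenance_work_mem = 512MB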
3.1.3.1.4 wal_buffers
The amount of memory used in shared memory for WAL data. The default is 64
kilobytes (64kB). The setting need only be large enough to hold the amount of WAL data
generated by one typical transaction, since the data is written out to disk at every
transaction commit.
Increasing this parameter might cause Advanced Server to request more System V shared
memory than your operating system's default configuration allows. See Section 17.4.1,
Shared Memory and Semaphores in the PostgreSQL Core Documentation for
information on how to adjust those parameters, if necessary.
Although even this very small setting does not always cause a problem, there are
situations where it can result in extra fsync calls, and degrade overall system
throughput. Increasing this value to 1MB or so can alleviate this problem. On very busy
systems, an even higher value may be needed, up to a maximum of about 16MB. Like
shared_buffers, this parameter increases Advanced Server's initial shared memory
allocation, so if increasing it causes an Advanced Server start failure, you will need to
increase the operating system limit.
3.1.3.1.5 checkpoint_segments
Now deprecated; this parameter is not supported by server versions 9.5 or later.
3.1.3.1.6 checkpoint_completion_target
Range: 0 to 1
The default of 0.5 means that checkpoint writes aim to finish by the time 50% of the
interval before the next checkpoint has elapsed. A value of 0.9 spreads the writes over
90% of that interval, throttling checkpoint I/O over a longer period and avoiding bursts
that can bottleneck performance.
3.1.3.1.7 checkpoint_timeout
Maximum time between automatic WAL checkpoints, in seconds. The default is five
minutes (5min). Increasing this parameter can increase the amount of time needed for
crash recovery.
3.1.3.1.8 max_wal_size
Default Value: 1 GB
Range: 2 to 2147483647
max_wal_size specifies the maximum size that the WAL will reach between automatic
WAL checkpoints. This is a soft limit; WAL size can exceed max_wal_size under
special circumstances (when under a heavy load, a failing archive_command, or a high
wal_keep_segments setting).
Increasing this parameter can increase the amount of time needed for crash recovery. This
parameter can only be set in the postgresql.conf file or on the server command line.
3.1.3.1.9 min_wal_size
Default Value: 80 MB
Range: 2 to 2147483647
If WAL disk usage stays below the value specified by min_wal_size, old WAL files
are recycled for future use at a checkpoint, rather than removed. This ensures that
enough WAL space is reserved to handle spikes in WAL usage (like when running large
batch jobs). This parameter can only be set in the postgresql.conf file or on the server
command line.
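Taken together with the checkpoint parameters described above, a write-heavy server might use postgresql.conf settings along these lines (all values are illustrative):

max_wal_size = 2GB
min_wal_size = 512MB
checkpoint_timeout = 10min
checkpoint_completion_target = 0.9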
3.1.3.1.10 bgwriter_delay
Specifies the delay between activity rounds for the background writer. In each round the
writer issues writes for some number of dirty buffers (controllable by the
bgwriter_lru_maxpages and bgwriter_lru_multiplier parameters). It then
sleeps for bgwriter_delay milliseconds, and repeats.
The default value is 200 milliseconds (200ms). Note that on many systems, the effective
resolution of sleep delays is 10 milliseconds; setting bgwriter_delay to a value that is
not a multiple of 10 might have the same results as setting it to the next higher multiple of
10.
Typically, when tuning bgwriter_delay, it should be reduced from its default value.
This parameter is rarely increased, except perhaps to save on power consumption on a
system with very low utilization.
3.1.3.1.11 seq_page_cost
Default Value: 1
Range: 0 to 1.79769e+308
Sets the planner's estimate of the cost of a disk page fetch that is part of a series of
sequential fetches. The default is 1.0. This value can be overridden for a particular
tablespace by setting the tablespace parameter of the same name. (Refer to the ALTER
TABLESPACE command in the PostgreSQL Core Documentation.)
The default value assumes very little caching, so it's frequently a good idea to reduce it.
Even if your database is significantly larger than physical memory, you might want to try
setting this parameter to less than 1 (rather than its default value of 1) to see whether you
get better query plans that way. If your database fits entirely within memory, you can
lower this value much more, perhaps to 0.1.
3.1.3.1.12 random_page_cost
Default Value: 4
Range: 0 to 1.79769e+308
Sets the planner's estimate of the cost of a non-sequentially-fetched disk page. The
default is 4.0. This value can be overridden for a particular tablespace by setting the
tablespace parameter of the same name. (Refer to the ALTER TABLESPACE command in
the PostgreSQL Core Documentation.)
Reducing this value relative to seq_page_cost will cause the system to prefer index
scans; raising it will make index scans look relatively more expensive. You can raise or
lower both values together to change the importance of disk I/O costs relative to CPU
costs, which are described by the cpu_tuple_cost and cpu_index_tuple_cost
parameters.
The default value assumes very little caching, so it's frequently a good idea to reduce it.
Even if your database is significantly larger than physical memory, you might want to try
setting this parameter to 2 (rather than its default of 4) to see whether you get better query
plans that way. If your database fits entirely within memory, you can lower this value
much more, perhaps to 0.1.
Although the system will let you do so, never set random_page_cost less than
seq_page_cost. However, setting them equal (or very close to equal) makes sense if
the database fits mostly or entirely within memory, since in that case there is no penalty
for touching pages out of sequence. Also, in a heavily-cached database you should lower
both values relative to the CPU parameters, since the cost of fetching a page already in
RAM is much smaller than it would normally be.
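For example, to tell the planner that random fetches from a tablespace backed by solid-state storage cost little more than sequential ones (the tablespace name fast_ssd is hypothetical):

ALTER TABLESPACE fast_ssd SET (random_page_cost = 1.1);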
3.1.3.1.13 effective_cache_size
Sets the planner's assumption about the effective size of the disk cache that is available to
a single query. This is factored into estimates of the cost of using an index; a higher value
makes it more likely index scans will be used, a lower value makes it more likely
sequential scans will be used. When setting this parameter you should consider both
Advanced Server's shared buffers and the portion of the kernel's disk cache that will be
used for Advanced Server data files. Also, take into account the expected number of
concurrent queries on different tables, since they will have to share the available space.
This parameter has no effect on the size of shared memory allocated by Advanced Server,
nor does it reserve kernel disk cache; it is used only for estimation purposes. The default
is 128 megabytes (128MB).
If this parameter is set too low, the planner may decide not to use an index even when it
would be beneficial to do so. Setting effective_cache_size to 50% of physical
memory is a normal, conservative setting. A more aggressive setting would be
approximately 75% of physical memory.
3.1.3.1.14 synchronous_commit
Specifies whether transaction commit will wait for WAL records to be written to disk
before the command returns a "success" indication to the client. The default, and safe,
setting is on. When off, there can be a delay between when success is reported to the
client and when the transaction is really guaranteed to be safe against a server crash. (The
maximum delay is three times wal_writer_delay.)
Unlike fsync, setting this parameter to off does not create any risk of database
inconsistency: an operating system or database crash might result in some recent
allegedly-committed transactions being lost, but the database state will be just the same
as if those transactions had been aborted cleanly.
This parameter can be changed at any time; the behavior for any one transaction is
determined by the setting in effect when it commits. It is therefore possible, and useful, to
have some transactions commit synchronously and others asynchronously. For example,
to make a single multistatement transaction commit asynchronously when the default is
the opposite, issue SET LOCAL synchronous_commit TO OFF within the
transaction.
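For example, a transaction whose durability guarantee can safely be relaxed might be written as follows (app_log is a hypothetical table):

BEGIN;
SET LOCAL synchronous_commit TO OFF;
INSERT INTO app_log VALUES ('page viewed');
COMMIT;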
3.1.3.1.15 edb_max_spins_per_delay
This may be useful for systems that use NUMA (non-uniform memory access)
architecture.
3.1.3.1.16 pg_prewarm.autoprewarm
This parameter controls whether or not the database server should run autoprewarm,
which is a background worker process that automatically dumps shared buffers to disk
before a shutdown. It then prewarms the shared buffers the next time the server is started,
meaning it loads blocks from the disk back into the buffer pool.
The advantage is that it shortens the warm up times after the server has been restarted by
loading the data that has been dumped to disk before shutdown.
To run autoprewarm, the pg_prewarm library must be included in the
shared_preload_libraries configuration parameter, for example:

shared_preload_libraries =
'$libdir/dbms_pipe,$libdir/edb_gen,$libdir/dbms_aq,$libdir/pg_prewarm'
The autoprewarm process will start loading blocks that were previously recorded in
$PGDATA/autoprewarm.blocks until there is no free buffer space left in the buffer
pool. In this manner, any new blocks that were loaded either by the recovery process or
by the querying clients, are not replaced.
Once the autoprewarm process has finished loading buffers from disk, it will
periodically dump shared buffers to disk at the interval specified by the
pg_prewarm.autoprewarm_interval parameter (see Section 3.1.3.1.17). Upon the
next server restart, the autoprewarm process will prewarm shared buffers with the
blocks that were last dumped to disk.
3.1.3.1.17 pg_prewarm.autoprewarm_interval
Range: 0s to 2147483s
This is the minimum number of seconds after which the autoprewarm background
worker dumps shared buffers to disk. The default is 300 seconds. If set to 0, shared
buffers are not dumped at regular intervals, but only when the server is shut down.
3.1.3.2.1 edb_dynatune
Default Value: 0
Range: 0 to 100
Determines how much of the host system's resources the database server uses, based
upon the host machine's total available resources and the intended usage of the host
machine.
The edb_dynatune parameter can be set to any integer value between 0 and 100,
inclusive. A value of 0 turns off the dynamic tuning feature, leaving database server
resource usage totally under the control of the other configuration parameters in the
postgresql.conf file.
A low non-zero value (e.g., 1 - 33) dedicates the least amount of the host machine's
resources to the database server. This setting suits a development machine where many
other applications are in use.
The highest values (e.g., 67 - 100) dedicate most of the host machine's resources to the
database server. This setting suits a host machine that is totally dedicated to running
Advanced Server.
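For example, to dedicate roughly two-thirds of the machine's resources to the database server, the postgresql.conf entry might read (the value is illustrative):

edb_dynatune = 66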
3.1.3.2.2 edb_dynatune_profile
This parameter is used to control tuning aspects based upon the expected workload
profile on the database server.
3.1.3.2.3 edb_enable_icache
If you set edb_enable_icache to on, you must also specify a list of cache servers by
setting the edb_icache_servers parameter.
3.1.3.2.4 edb_icache_servers
Range: n/a
The edb_icache_servers parameter specifies a list of one or more servers with active
edb-icache daemons. edb_icache_servers is a string value that takes the form of a
comma-separated list of hostname:port pairs. You can specify each pair in any of the
following forms:
hostname
IP_address
hostname:portnumber
IP_address:portnumber
If you do not specify a port number, Infinite Cache assumes that the cache server is
listening at port 11211. This configuration parameter will take effect only if
edb_enable_icache is set to on. Use the edb_icache_servers parameter to specify
a maximum of 128 cache nodes.
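For example, a three-node cache configuration in postgresql.conf might look like this (host names, addresses, and ports are illustrative):

edb_enable_icache = on
edb_icache_servers = 'localhost,192.168.2.1:11200,cache3.example.com'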
You can dynamically modify the Infinite Cache server nodes. To change the Infinite
Cache server configuration, use the edb_icache_servers parameter in the
postgresql.conf file to perform the following:
3.1.3.2.5 edb_icache_compression_level
Default Value: 6
Range: 0 to 9
When Advanced Server reads data from disk, it typically reads the data in 8kB
increments. If edb_icache_compression_level is set to 0, each time Advanced
Server sends an 8kB page to the Infinite Cache server that page is stored (uncompressed)
in 8kB of cache memory. If the edb_icache_compression_level parameter is set to
9, Advanced Server applies the maximum compression possible before sending it to the
Infinite Cache server, so a page that previously took 8kB of cached memory might take
2kB of cached memory. Exact compression numbers are difficult to predict, as they are
dependent on the nature of the data on each page.
The compression level must be set by the superuser and can be changed for the current
session while the server is running. The following command disables the compression
mechanism for the currently active session:
SET edb_icache_compression_level TO 0;
3.1.3.3.1 edb_max_resource_groups
Default Value: 16
Range: 0 to 65536
This parameter controls the maximum number of resource groups that can be used
simultaneously by EDB Resource Manager. More resource groups can be created than
the value specified by edb_max_resource_groups; however, the number of resource
groups in active use by processes in these groups cannot exceed this value.
3.1.3.3.2 edb_resource_group
Range: n/a
Set the edb_resource_group parameter to the name of the resource group under which
EDB Resource Manager is to control the current session, according to the group's
resource type settings.
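For example, to place the current session under a resource group named resgrp_a (a hypothetical group created beforehand with CREATE RESOURCE GROUP):

SET edb_resource_group TO resgrp_a;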
If the parameter is not set, then the current session does not utilize EDB Resource
Manager.
3.1.3.4.1 enable_hints
Optimizer hints embedded in SQL commands are utilized when enable_hints is on.
Optimizer hints are ignored when this parameter is off.
3.1.3.5.1 edb_custom_plan_tries
Default Value: 5
Range: 0 to 100
This configuration parameter controls the number of custom execution plans considered
by the planner before the planner settles on a generic execution plan.
When a client application repeatedly executes a prepared statement, the server may
decide to evaluate several execution plans before deciding to choose a custom plan or a
generic plan.
A custom plan is a plan built for a specific set of parameter values.
A generic plan is a plan that will work with any set of parameter values supplied
by the client application.
By default, the optimizer will generate five custom plans before evaluating a generic
plan. That means that if you execute a prepared statement six times, the optimizer will
generate five custom plans, then one generic plan, and then decide whether to stick with
the generic plan.
In certain workloads, this extra planning can have a negative impact on performance.
You can adjust the edb_custom_plan_tries configuration parameter to decrease the
number of custom plans considered before evaluating a generic plan. Setting
edb_custom_plan_tries to 0 will effectively disable custom plan generation.
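For example, consider a prepared statement along these lines (the customer table and cust_id column are illustrative):

PREPARE custQuery(int) AS SELECT * FROM customer WHERE cust_id = $1;
EXECUTE custQuery(100);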
The $1 token in this query is a parameter marker - the client application must provide a
value for each parameter marker each time the statement executes.
When the client application repeatedly executes the custQuery prepared statement, the
optimizer will generate some number of parameter-value-specific execution plans
(custom plans), followed by a generic plan (a plan that ignores the parameter values), and
then decide whether to stick with the generic plan or to continue to generate custom plans
for each execution. The decision process takes into account not only the cost of executing
the plans, but the cost of generating custom plans as well.
3.1.3.5.2 edb_enable_pruning
When set to TRUE, edb_enable_pruning permits the query planner to early-prune
partitioned tables; that is, to discard any partitions not needed by the query before
generating query plans. Conversely, late-pruning means that the query planner prunes
partitions after generating query plans for each partition. (The
constraint_exclusion configuration parameter controls late-pruning.)
The ability to early-prune depends upon the nature of the query in the WHERE clause.
Early-pruning can be utilized in only simple queries with constraints of the type WHERE
column = literal (e.g., WHERE deptno = 10).
Early-pruning is not used for more complex queries such as WHERE column =
expression (e.g., WHERE deptno = 10 + 5).
3.1.3.6.1 trace_hints
Use with the optimizer hints feature to provide more detailed information regarding
whether or not a hint was used by the planner. Set the client_min_messages and
trace_hints configuration parameters as follows:
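A representative pair of settings (trace_hints reports through INFO-level messages, so client_min_messages must admit them):

SET client_min_messages TO info;
SET trace_hints TO on;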
The SELECT command with the NO_INDEX hint shown below illustrates the additional
information produced when the aforementioned configuration parameters are set.
3.1.3.6.2 edb_log_every_bulk_value
Bulk processing logs the resulting statements into both the Advanced Server log file and
the EDB Audit log file. However, logging each and every statement in bulk processing is
costly. This can be controlled by the edb_log_every_bulk_value configuration
parameter. When set to true, each and every statement in bulk processing is logged.
When set to false, a log message is recorded only once per bulk operation; likewise, the
duration is emitted once per bulk operation. The default is false.
3.1.3.7.1 edb_audit
Enables or disables database auditing. The values xml or csv will enable database
auditing. These values represent the file format in which auditing information will be
captured. none will disable database auditing and is also the default.
3.1.3.7.2 edb_audit_directory
Range: n/a
Specifies the directory where the audit log files will be created. The path of the directory
can be absolute or relative to the Advanced Server data directory.
3.1.3.7.3 edb_audit_filename
Range: n/a
Specifies the file name of the audit file where the auditing information will be stored. The
default file name is audit-%Y%m%d_%H%M%S. The escape sequences %Y, %m, etc.,
are replaced by the appropriate current values according to the system date and time.
3.1.3.7.4 edb_audit_rotation_day
Range: {none | every | sun | mon | tue | wed | thu | fri | sat} ...
Specifies the day of the week on which to rotate the audit files. Valid values are sun,
mon, tue, wed, thu, fri, sat, every, and none. To disable rotation, set the value to
none. To rotate the file every day, set the edb_audit_rotation_day value to every.
To rotate the file on a specific day of the week, set the value to the desired day of the
week.
3.1.3.7.5 edb_audit_rotation_size
Specifies a file size threshold in megabytes when file rotation will be forced to occur. The
default value is 0MB. If the parameter is commented out or set to 0, rotation of the file on
a size basis will not occur.
3.1.3.7.6 edb_audit_rotation_seconds
Default Value: 0s
Range: 0s to 2147483647s
Specifies the rotation time in seconds when a new log file should be created. To disable
this feature, set this parameter to 0.
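Pulling the auditing parameters together, a minimal csv-format configuration in postgresql.conf might read (values are examples only):

edb_audit = 'csv'
edb_audit_directory = 'edb_audit'
edb_audit_filename = 'audit-%Y%m%d_%H%M%S'
edb_audit_rotation_day = 'every'
edb_audit_rotation_size = 0
edb_audit_rotation_seconds = 0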
3.1.3.7.7 edb_audit_connect
3.1.3.7.8 edb_audit_disconnect
3.1.3.7.9 edb_audit_statement
Range: {none | ddl | dml | insert | update | delete | truncate | select | error |
create | drop | alter | grant | revoke | rollback | all} ...
3.1.3.7.10 edb_audit_tag
Use edb_audit_tag to specify a string value that will be included in the audit log when
the edb_audit parameter is set to csv or xml.
3.1.3.7.11 edb_audit_destination
Specifies whether the audit log information is to be recorded in the directory as given by
the edb_audit_directory parameter or to the directory and file managed by the
syslog process. Set to file to use the directory specified by edb_audit_directory
(the default setting). Set to syslog to use the syslog process and its location as
configured in the /etc/syslog.conf file. Note: In recent Linux versions, syslog has
been replaced by rsyslog and the configuration file is in /etc/rsyslog.conf.
3.1.3.7.12 edb_log_every_bulk_value
3.1.3.8.1 icu_short_form
Range: n/a
This configuration parameter is set in either of two circumstances. First, when the
CREATE DATABASE command is used with the ICU_SHORT_FORM parameter (see Section
3.6.3.2), the specified short form name appears in the icu_short_form configuration
parameter when connected to that database. Second, when an Advanced Server instance
is created with the initdb command used with the --icu_short_form option (see
Section 3.6.3.3), the specified short form name appears in the icu_short_form
configuration parameter when connected to any database in that Advanced Server
instance, unless the database overrides it with its own ICU_SHORT_FORM parameter
specifying a different short form.
3.1.3.9.1 default_heap_fillfactor
Range: 10 to 100
Sets the fillfactor for a table when the FILLFACTOR storage parameter is omitted from a
CREATE TABLE command.
The fillfactor for a table is a percentage between 10 and 100. 100 (complete packing) is
the default. When a smaller fillfactor is specified, INSERT operations pack table pages
only to the indicated percentage; the remaining space on each page is reserved for
updating rows on that page. This gives UPDATE a chance to place the updated copy of a
row on the same page as the original, which is more efficient than placing it on a different
page. For a table whose entries are never updated, complete packing is the best choice,
but in heavily updated tables smaller fillfactors are appropriate.
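For a heavily updated table, the storage parameter can be given explicitly, overriding default_heap_fillfactor (the table definition is illustrative):

CREATE TABLE account (id INT, balance NUMERIC) WITH (FILLFACTOR = 70);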
3.1.3.10.1 oracle_home
Range: n/a
Before creating an Oracle Call Interface (OCI) database link to an Oracle server, you
must direct Advanced Server to the correct Oracle home directory. Set the
LD_LIBRARY_PATH environment variable on Linux (or PATH on Windows) to the lib
directory of the Oracle client installation directory.
For Windows only, you can instead set the value of the oracle_home configuration
parameter in the postgresql.conf file. The value specified in the oracle_home
configuration parameter will override the Windows PATH environment variable.
oracle_home = 'lib_directory'
Substitute the name of the Windows directory that contains oci.dll for
lib_directory.
After setting the oracle_home configuration parameter, you must restart the server for
the changes to take effect. Restart the server from the Windows Services console.
3.1.3.10.2 odbc_lib_path
Range: n/a
The configuration file must include the complete pathname to the driver manager shared
library (typically libodbc.so):

odbc_lib_path = 'complete_path_to_libodbc.so'
3.1.3.11.1 edb_redwood_date
When DATE appears as the data type of a column in the commands, it is translated to
TIMESTAMP(0) at the time the table definition is stored in the database if the
configuration parameter edb_redwood_date is set to TRUE. Thus, a time component
will also be stored in the column along with the date.
3.1.3.11.2 edb_redwood_greatest_least
The GREATEST function returns the parameter with the greatest value from its list of
parameters; the LEAST function returns the parameter with the least value. The
edb_redwood_greatest_least parameter controls how these functions handle null
parameters: when set to TRUE, a null parameter makes the result null (the Oracle-
compatible behavior); when set to FALSE, null parameters are ignored (the PostgreSQL
behavior). For example, a query such as SELECT GREATEST(1, 2, NULL, 3) returns
null when the parameter is TRUE:

 greatest
----------

(1 row)

and returns the greatest non-null value when the parameter is FALSE:

 greatest
----------
        3
(1 row)

When all parameters are null, the result is null in either case:

 greatest
----------

(1 row)
3.1.3.11.3 edb_redwood_raw_names
When edb_redwood_raw_names is set to its default value of FALSE, database object
names are displayed in uppercase when viewed through Oracle-compatible catalog views.
When set to TRUE, the names appear exactly as they are stored in the PostgreSQL system
catalogs, typically in lowercase. For example, suppose a user named reduser is created
and a session is then started with that user. When connected to the database as reduser
with edb_redwood_raw_names set to TRUE, the tables created by reduser are
displayed with names that match the case shown in the PostgreSQL pg_tables catalog.
3.1.3.11.4 edb_redwood_strings
The sample application contains a table of employees. This table has a column named
comm that is null for most employees. The following query is run with
edb_redwood_strings set to FALSE. The concatenation of a null column with non-
empty strings produces a final result of null, so only employees that have a commission
appear in the query result. The output line for all other employees is null.
EMPLOYEE COMPENSATION
----------------------------------

 ALLEN 1,600.00 300.00
 WARD 1,250.00 500.00

 MARTIN 1,250.00 1,400.00




 TURNER 1,500.00 .00




(14 rows)
The following is the same query executed when edb_redwood_strings is set to TRUE.
Here, the value of a null column is treated as an empty string. The concatenation of an
empty string with a non-empty string produces the non-empty string.
EMPLOYEE COMPENSATION
----------------------------------
SMITH 800.00
ALLEN 1,600.00 300.00
WARD 1,250.00 500.00
JONES 2,975.00
MARTIN 1,250.00 1,400.00
BLAKE 2,850.00
CLARK 2,450.00
SCOTT 3,000.00
KING 5,000.00
TURNER 1,500.00 .00
ADAMS 1,100.00
JAMES 950.00
FORD 3,000.00
MILLER 1,300.00
(14 rows)
3.1.3.11.5 edb_stmt_level_tx
The term statement level transaction isolation describes the behavior whereby when a
runtime error occurs in a SQL command, all the updates on the database caused by that
single command are rolled back. For example, if a single UPDATE command successfully
updates five rows, but an attempt to update a sixth row results in an exception, the
updates to all six rows made by this UPDATE command are rolled back. The effects of
prior SQL commands that have not yet been committed or rolled back are pending until a
COMMIT or ROLLBACK command is executed.
In Advanced Server, if an exception occurs while executing a SQL command, all the
updates on the database since the start of the transaction are rolled back. In addition, the
transaction is left in an aborted state and either a COMMIT or ROLLBACK command must
be issued before another transaction can be started.
Note: Use edb_stmt_level_tx set to TRUE only when absolutely necessary, as this
may cause a negative performance impact.
The following example run in PSQL shows that when edb_stmt_level_tx is FALSE,
the abort of the second INSERT command also rolls back the first INSERT command.
Note that in PSQL, the command \set AUTOCOMMIT off must be issued, otherwise
every statement commits automatically defeating the purpose of this demonstration of the
effect of edb_stmt_level_tx.
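The statements involved are of roughly the following form (a sketch assuming the emp and dept sample tables; the second INSERT fails because no department 00 exists, violating the foreign key on deptno):

\set AUTOCOMMIT off
SET edb_stmt_level_tx TO off;
INSERT INTO emp (empno, ename, deptno) VALUES (9001, 'JONES', 40);
INSERT INTO emp (empno, ename, deptno) VALUES (9002, 'JONES', 00);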
COMMIT;
SELECT empno, ename, deptno FROM emp WHERE empno > 9000;
In the following example, with edb_stmt_level_tx set to TRUE, the first INSERT
command has not been rolled back after the error on the second INSERT command. At
this point, the first INSERT command can either be committed or rolled back.
SELECT empno, ename, deptno FROM emp WHERE empno > 9000;
COMMIT;
A ROLLBACK command could have been issued instead of the COMMIT command in
which case the insert of employee number 9001 would have been rolled back as well.
3.1.3.11.6 db_dialect
When set to postgres, the namespace precedence is pg_catalog, sys, then dbo,
giving the PostgreSQL catalog the highest precedence. When set to redwood, the
namespace precedence is sys, dbo, then pg_catalog, giving the expanded catalog
views the highest precedence.
3.1.3.11.7 default_with_rowids
When set to on, CREATE TABLE includes a ROWID column in newly created tables,
which can then be referenced in SQL commands.
3.1.3.11.8 optimizer_mode
These optimization modes are based upon the assumption that the client submitting the
SQL command is interested in viewing only the first n rows of the result set and will
then abandon the remainder of the result set. Resources allocated to the query are
adjusted as such.
3.1.3.12.1 custom_variable_classes
3.1.3.12.2 dbms_alert.max_alerts
Range: 0 to 500
Specifies the maximum number of concurrent alerts allowed on a system using the
DBMS_ALERT package.
3.1.3.12.3 dbms_pipe.total_message_buffer
Default Value: 30 Kb
Range: 30 Kb to 256 Kb
Specifies the total size of the buffer used for the DBMS_PIPE package.
3.1.3.12.4 index_advisor.enabled
The Index Advisor plugin must first be loaded, as shown in the following example; you
can then use the SET command to change the parameter setting and control whether or
not Index Advisor generates an alternative query plan:
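A representative sequence (the LOAD path form can vary by installation):

LOAD 'index_advisor';
SET index_advisor.enabled TO off;
SET index_advisor.enabled TO on;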
3.1.3.12.5 edb_sql_protect.enabled
3.1.3.12.6 edb_sql_protect.level
Sets the action taken by SQL/Protect when a SQL statement is issued by a protected role.
learn. Tracks the activities of protected roles and records the relations used by the
roles. This is used when initially configuring SQL/Protect so the expected
behaviors of the protected applications are learned.
passive. Issues warnings if protected roles are breaking the defined rules, but does
not stop any SQL statements from executing. This is the next step after
SQL/Protect has learned the expected behavior of the protected roles, permitting
the application to keep running while suspicious activity is logged for investigation.
active. Stops all SQL statements that violate the rules defined for a protected role.
If you are using SQL/Protect for the first time, set edb_sql_protect.level to
learn.
3.1.3.12.7 edb_sql_protect.max_protected_relations
Range: 1 to 2147483647
Sets the maximum number of relations that can be protected per role. Please note the
total number of protected relations for the server will be the number of protected relations
times the number of protected roles. Every protected relation consumes space in shared
memory. The space for the maximum possible protected relations is reserved during
database server startup.
Though the upper range for the parameter is listed as the maximum value for an integer
data type, the practical setting depends on how much shared memory is available and the
parameter value used during database server startup.
As long as the space required can be reserved in shared memory, the value will be
acceptable. If the value is such that the space in shared memory cannot be reserved, the
database server startup fails with an error message such as the following:
2014-07-18 15:22:17 EDT FATAL: could not map anonymous shared memory: Cannot
allocate memory
2014-07-18 15:22:17 EDT HINT: This error usually means that PostgreSQL's
request for a shared memory segment exceeded available memory, swap space or
huge pages. To reduce the request size (currently 2070118400 bytes), reduce
PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or
max_connections.
In such cases, reduce the parameter value until the database server can be started
successfully.
3.1.3.12.8 edb_sql_protect.max_protected_roles
Default Value: 64
Range: 1 to 2147483647
Every protected role consumes space in shared memory. Please note that the server will
reserve space for the number of protected roles times the number of protected relations
(edb_sql_protect.max_protected_relations). The space for the maximum
possible protected roles is reserved during database server startup.
Though the upper range for the parameter is listed as the maximum value for an integer
data type, the practical setting depends on how much shared memory is available and the
parameter value used during database server startup.
As long as the space required can be reserved in shared memory, the value will be
acceptable. If the value is such that the space in shared memory cannot be reserved, the
database server startup fails with an error message such as the following:
2014-07-18 15:22:17 EDT FATAL: could not map anonymous shared memory: Cannot
allocate memory
2014-07-18 15:22:17 EDT HINT: This error usually means that PostgreSQL's
request for a shared memory segment exceeded available memory, swap space or
huge pages. To reduce the request size (currently 2070118400 bytes), reduce
PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or
max_connections.
In such cases, reduce the parameter value until the database server can be started
successfully.
3.1.3.12.9 edb_sql_protect.max_queries_to_save
Every query that is saved consumes space in shared memory. The space for the maximum
possible queries that can be saved is reserved during database server startup.
If a value outside the valid range is specified (for example, 10), the setting is rejected
with a warning such as the following:

2014-07-18 13:05:31 EDT WARNING: 10 is outside the valid range for parameter
"edb_sql_protect.max_queries_to_save" (100 .. 2147483647)
Though the upper range for the parameter is listed as the maximum value for an integer
data type, the practical setting depends on how much shared memory is available and the
parameter value used during database server startup.
As long as the space required can be reserved in shared memory, the value will be
acceptable. If the value is such that the space in shared memory cannot be reserved, the
database server startup fails with an error message such as the following:
2014-07-18 15:22:17 EDT FATAL: could not map anonymous shared memory: Cannot
allocate memory
2014-07-18 15:22:17 EDT HINT: This error usually means that PostgreSQL's
request for a shared memory segment exceeded available memory, swap space or
huge pages. To reduce the request size (currently 2070118400 bytes), reduce
PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or
max_connections.
In such cases, reduce the parameter value until the database server can be started
successfully.
3.1.3.12.10 edbldr.empty_csv_field
3.1.3.12.11 utl_encode.uudecode_redwood
3.1.3.12.12 utl_file.umask
The utl_file.umask parameter sets the file mode creation mask or simply, the mask,
in a manner similar to the Linux umask command. This is for usage only within the
Advanced Server UTL_FILE package.
The value specified for utl_file.umask is a 3 or 4-character octal string that would be
valid for the Linux umask command. The setting determines the permissions on files
created by the UTL_FILE functions and procedures. (Refer to any information source
regarding Linux or Unix systems for information on file permissions and the usage of the
umask command.)
The following shows the results of the default utl_file.umask setting of 0077. Note
that all permissions are denied to users belonging to the enterprisedb group as well
as to all other users. Only the user enterprisedb has read and write permissions on the file.
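For instance, a file created through UTL_FILE.FOPEN under the default mask would appear in a directory listing with owner-only permissions, along these lines (the file name is hypothetical):

-rw------- 1 enterprisedb enterprisedb 21 Aug 22 09:58 empfile.csv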
3.1.3.13 Ungrouped
Configuration parameters in this section apply to Advanced Server only and are for a
specific, limited purpose.
3.1.3.13.1 nls_length_semantics
For example, the following form of the ALTER SESSION command is accepted in
Advanced Server without throwing a syntax error, but does not alter the session
environment:
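ALTER SESSION SET nls_length_semantics = 'CHAR';  -- accepted but has no effect; 'CHAR' is one representative value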
Note: Since the setting of this parameter has no effect on the server environment, it does
not appear in the system view pg_settings.
3.1.3.13.2 query_rewrite_enabled
For example, the following form of the ALTER SESSION command is accepted in
Advanced Server without throwing a syntax error, but does not alter the session
environment:
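ALTER SESSION SET query_rewrite_enabled = 'true';  -- accepted but has no effect; 'true' is one representative value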
Note: Since the setting of this parameter has no effect on the server environment, it does
not appear in the system view pg_settings.
3.1.3.13.3 query_rewrite_integrity
For example, the following form of the ALTER SESSION command is accepted in
Advanced Server without throwing a syntax error, but does not alter the session
environment:
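ALTER SESSION SET query_rewrite_integrity = 'trusted';  -- accepted but has no effect; 'trusted' is one representative value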
Note: Since the setting of this parameter has no effect on the server environment, it does
not appear in the system view pg_settings.
3.1.3.13.4 timed_statistics
Controls the collection of timing data for the Dynamic Runtime Instrumentation Tools
Architecture (DRITA) feature. When set to on, timing data is collected.
3.2 Index Advisor
Index Advisor works with Advanced Server's query planner by creating hypothetical
indexes that the query planner uses to calculate execution costs as if such indexes were
available. Index Advisor identifies the indexes by analyzing SQL queries supplied in the
workload.
There are three ways to use Index Advisor to analyze SQL queries:
Invoke the Index Advisor utility program, supplying a text file containing the
SQL queries that you wish to analyze; Index Advisor will generate a text file with
CREATE INDEX statements for the recommended indexes.
Provide queries at the EDB-PSQL command line that you want Index Advisor to
analyze.
Access Index Advisor through the Postgres Enterprise Manager client. When
accessed via the PEM client, Index Advisor works with SQL Profiler, providing
indexing recommendations on code captured in SQL traces. For more
information about using SQL Profiler and Index Advisor with PEM, please see
the PEM Getting Started Guide available from the EnterpriseDB website at:
http://www.enterprisedb.com/products-services-training/products/postgres-enterprise-manager
During the analysis, Index Advisor compares the query execution costs with and without
hypothetical indexes. If the execution cost using a hypothetical index is less than the
execution cost without it, both plans are reported in the EXPLAIN statement output,
metrics that quantify the improvement are calculated, and Index Advisor generates the
CREATE INDEX statement needed to create the index.
If no hypothetical index can be found that reduces the execution cost, Index Advisor
displays only the original query plan output of the EXPLAIN statement.
Index Advisor does not actually create indexes on the tables. Use the CREATE INDEX
statements supplied by Index Advisor to add any recommended indexes to your tables.
A script supplied with Advanced Server creates the table in which Index Advisor stores
the indexing recommendations generated by the analysis; the script also creates a
function and a view of the table to simplify the retrieval and interpretation of the results.
If you choose to forego running the script, Index Advisor will log recommendations in a
temporary table that is available only for the duration of the Index Advisor session.
The Index Advisor shared library interacts with the query planner to make indexing
recommendations. The Advanced Server installer creates the following shared library in
the libdir subdirectory of your Advanced Server home directory:
On Linux:
index_advisor.so
On Windows:
index_advisor.dll
Please note that libraries in the libdir directory can be loaded only by a superuser. A
database administrator can allow a non-superuser to use Index Advisor by manually
copying the Index Advisor file from the libdir directory into the libdir/plugins
directory (under your Advanced Server home directory). Only a trusted non-superuser
should be given access to the plugin in this way, and even then bear in mind that this is
an unsafe practice in a production environment.
The installer also creates the Index Advisor utility program and setup script:
pg_advise_index
index_advisor.sql
index_advisor_log
show_index_recommendations()
index_recommendations
Index Advisor does not require any configuration to generate recommendations that are
available only for the duration of the current session; to store the results of multiple
sessions, you must create the index_advisor_log table (where Advanced Server will
store Index Advisor recommendations). To create the index_advisor_log table, you
must run the index_advisor.sql script.
When selecting a storage schema for the Index Advisor table, function and view, keep in
mind that all users that invoke Index Advisor (and query the result set) must have USAGE
privileges on the schema. The schema must be in the search path of all users that are
interacting with the Index Advisor.
1. Place the selected schema at the start of your search_path parameter. For
example, if your search path is currently:
search_path=public, accounting
and you want the Index Advisor objects to be created in a schema named
advisor, use the command:
SET search_path = advisor, public, accounting;
\i full_pathname/index_advisor.sql
Specify the pathname to the index_advisor.sql script in place of
full_pathname.
The following example demonstrates the creation of the Index Advisor database objects
in a schema named ia, which will then be accessible to an Index Advisor user with user
name ia_user:
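The setup commands are of roughly this form (the ia schema name is taken from the example; full_pathname stands for the location of the script):

edb=# CREATE SCHEMA ia;
CREATE SCHEMA
edb=# SET search_path = ia;
SET
edb=# \i full_pathname/index_advisor.sql
edb=# GRANT USAGE ON SCHEMA ia TO ia_user;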
GRANT
edb=# GRANT SELECT, INSERT, DELETE ON index_advisor_log TO ia_user;
GRANT
edb=# GRANT SELECT ON index_recommendations TO ia_user;
GRANT
While using Index Advisor, the specified schema (ia) must be included in ia_user's
search_path parameter.
When you invoke Index Advisor, you must supply a workload; the workload is either a
query (specified at the command line), or a file that contains a set of queries (executed by
the pg_advise_index() function). After analyzing the workload, Index Advisor will
either store the result set in a temporary table, or in a permanent table. You can review
the indexing recommendations generated by Index Advisor and use the CREATE INDEX
statements generated by Index Advisor to create the recommended indexes.
The following examples assume that superuser enterprisedb is the Index Advisor
user, and the Index Advisor database objects have been created in a schema in the
search_path of superuser enterprisedb.
The examples in the following sections use the table created with the statement shown
below:
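A reconstruction consistent with the output shown (the exact statement may differ):

CREATE TABLE t (a INT, b INT);
INSERT INTO t SELECT s, 99999 - s FROM generate_series(0, 99999) s;
SELECT * FROM t;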
a | b
-------+-------
0 | 99999
1 | 99998
2 | 99997
3 | 99996
.
.
.
99997 | 2
99998 | 1
99999 | 0
When invoking the pg_advise_index utility, you must include the name of a file that
contains the queries that will be executed by pg_advise_index; the queries may be on
the same line, or on separate lines, but each query must be terminated by a semicolon.
Queries within the file should not begin with the EXPLAIN keyword.
In the code sample, the -d, -h, and -U options are psql connection options.
-s
-o
The recommended indexes are written to the file specified after the -o option.
You can create the recommended indexes at the psql command line with the CREATE
INDEX statements in the file, or create the indexes by executing the advisory.sql
script.
1. Connect to the server with the edb-psql command line utility, and load the Index
Advisor plugin:
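A representative invocation, assuming the default edb maintenance database:

$ edb-psql -d edb -U enterprisedb
edb=# LOAD 'index_advisor';
LOAD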
2. Use the edb-psql command line to invoke each SQL command that you would like
Index Advisor to analyze. Index Advisor stores any recommendations for the queries
in the index_advisor_log table. If the index_advisor_log table does not exist
in the user's search_path, a temporary table is created with the same name. This
temporary table exists only for the duration of the user's session.
After loading the Index Advisor plugin, Index Advisor will analyze all SQL statements
and log any indexing recommendations for the duration of the session.
If you would like Index Advisor to analyze a query (and make indexing
recommendations) without actually executing the query, preface the SQL
statement with the EXPLAIN keyword.
If you do not preface the statement with the EXPLAIN keyword, Index Advisor
will analyze the statement while the statement executes, writing the indexing
recommendations to the index_advisor_log table for later review.
In the example that follows, the EXPLAIN statement displays the normal query plan,
followed by the query plan of the same query, if the query were using the recommended
hypothetical index:
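A plausible form of the statement, assuming the sample table t:

edb=# EXPLAIN SELECT * FROM t WHERE a = 100;
                                  QUERY PLAN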
-----------------------------------------------------------------------------
Seq Scan on t (cost=0.00..1693.00 rows=1 width=8)
Filter: (a = 100)
Result (cost=0.00..8.28 rows=1 width=8)
One-Time Filter: '===[ HYPOTHETICAL PLAN ]==='::text
-> Index Scan using "<hypothetical-index>:3" on t
(cost=0.00..8.28 rows=1 width=8)
Index Cond: (a = 100)
(6 rows)
After loading the Index Advisor plugin, the default value of index_advisor.enabled
is on. The Index Advisor plugin must be loaded to use a SET or SHOW command to
display the current value of index_advisor.enabled.
There are several ways to review the index recommendations generated by Index
Advisor. You can:
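One of those ways is to call the show_index_recommendations() function created by the index_advisor.sql script, for example:

edb=# SELECT show_index_recommendations(pid);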
Where pid is the process ID of the current session. If you do not know the process ID of
your current session, passing a value of NULL will also return a result set for the current
session.
In the example, create index idx_t_a on t(a) is the SQL statement needed to create
the index suggested by Index Advisor. Each row in the result set shows:
You can display the results of all Index Advisor sessions from the following view:
You can query the index_advisor_log table at the psql command line. The following
example shows the index_advisor_log table entries resulting from two Index
Advisor sessions. Each session contains two queries, and can be identified (in the table
below) by a different backend_pid value. For each session, Index Advisor generated
two index recommendations.
Index Advisor added the first two rows to the table after analyzing the following two
queries executed by the pg_advise_index utility:
The value of 3442 in column backend_pid identifies these results as coming from the
session with process ID 3442.
The value of 1 in column attrs in the first row indicates that the hypothetical index is
on the first column of the table (column a of table t).
The value of 2 in column attrs in the second row indicates that the hypothetical index
is on the second column of the table (column b of table t).
Index Advisor added the last two rows to the table after analyzing the following two
queries (executed at the psql command line):
The values in the benefit column of the index_advisor_log table are calculated using
the following formula:
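The reported metrics are consistent with the following reconstruction:

benefit = (execution cost of the query without the hypothetical index)
          - (execution cost of the query using the hypothetical index)

The gain metric shown in the index_recommendations view then relates this benefit to the estimated size of the index: gain = benefit / size of the index in KB. (The figures below bear this out: 3040.62 / 2184 ≈ 1.3922.)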
The value of the benefit column for the last row of the index_advisor_log table
(shown in the example) is calculated using the query plan for the following SQL
statement:
The execution costs of the different execution plans are evaluated and compared.
You can delete rows from the index_advisor_log table when you no longer have the
need to review the results of the queries stored in the row.
The index_recommendations view contains the calculated metrics and the CREATE
INDEX statements to create the recommended indexes for all sessions whose results are
currently in the index_advisor_log table. You can display the results of all stored
Index Advisor sessions by querying the index_recommendations view as shown
below:
Using the example shown in the previous section (Querying the index_advisor_log
Table), the index_recommendations view displays the following:
Within each session, the results of all queries that benefit from the same recommended
index are combined to produce one set of metrics per recommended index, reflected in
the fields named benefit and gain.
So for example, using the following query results from the process with a backend_pid
of 3506:
 backend_pid |                  show_index_recommendations
-------------+-----------------------------------------------------------------
        3506 | create index idx_t_a on t(a);/* size: 2184 KB, benefit: 3040.62, gain: 1.39222666981456 */
The gain metric is useful when comparing the relative advantage of the different
recommended indexes derived during a given session. The larger the gain value, the
better the cost effectiveness derived from the index weighed against the possible disk
space consumption of the index.
3.2.5 Limitations
Index Advisor does not consider Index Only scans; it does consider Index scans when
making recommendations.
Index Advisor ignores any computations found in the WHERE clause. Effectively, the
index field in the recommendations will not be any kind of expression; the field will be a
simple column name.
Index Advisor does not consider inheritance when recommending hypothetical indexes.
If a query references a parent table, Index Advisor does not make any index
recommendations on child tables.
Restoring a database from a backup assigns new OIDs to the restored tables, while the
reloid column of the index_advisor_log table still holds the old ones. If it is
necessary to display the recommendations made prior to the backup, you can replace the
old OIDs in the reloid column of the index_advisor_log table with the new OIDs of
the referenced tables using a SQL UPDATE statement.
3.3 SQL Profiler
SQL Profiler helps you locate and optimize poorly running SQL code.
On-Demand Traces. You can capture SQL traces at any time by manually
setting up your parameters and starting the trace.
Scheduled Traces. For times when starting a trace manually would be
inconvenient, you can specify your trace parameters and schedule the trace to run
at some later time.
Save Traces. Execute your traces and save them for later review.
Trace Filters. Selectively filter SQL captures by database and by user, or capture
every SQL statement sent by all users against all databases.
Trace Output Analyzer. A graphical table lets you quickly sort and filter queries
by duration or statement, and a graphical or text based EXPLAIN plan lays out
your query paths and joins.
Index Advisor Integration. Once you have found your slow queries and
optimized them, you can also let the Index Advisor recommend the creation of
underlying table indices to further improve performance.
SQL Profiler is installed by the Advanced Server Installer, or you can download and
install SQL Profiler into a managed database instance.
Step 2: Modify the postgresql.conf parameter file for the instance to include the SQL
Profiler library in the shared_preload_libraries configuration parameter.

On Linux, the library name is:
$libdir/sql-profiler

On Windows, it is:
$libdir\sql-profiler.dll
The SQL Profiler installation program places a SQL script (named sql-
profiler.sql) in:
On Linux:
/opt/edb/as10/share/contrib/
On Windows:
C:\Program Files\edb\as10\share\contrib\
Step 3: Use the psql command line interface to run the sql-profiler.sql script in the
database specified as the Maintenance Database on the server you wish to profile. If you
are using Advanced Server, the default maintenance database is named edb. If you are
using a PostgreSQL instance, the default maintenance database is named postgres.
The following command uses the psql command line to invoke the sql-
profiler.sql script on a Linux system:
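A representative command (connection options vary by installation):

$ /opt/edb/as10/bin/psql -d edb -U enterprisedb -f /opt/edb/as10/share/contrib/sql-profiler.sql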
Step 4: Stop and restart the server for the changes to take effect.
After configuring SQL Profiler, it is ready to use with all databases that reside on the
server. You can take advantage of SQL Profiler functionality with EDB Postgres
Enterprise Manager; for more information about Postgres Enterprise Manager, visit the
EnterpriseDB website at:
http://www.enterprisedb.com/products/postgres-enterprise-manager
Troubleshooting
To correct this error, you must replace the existing query set with a new query set. First,
uninstall SQL Profiler by invoking the uninstall-sql-profiler.sql script, and
then reinstall SQL Profiler by invoking the sql-profiler.sql script.
3.4 pgsnmpd
pgsnmpd is an SNMP agent that can return hierarchical information about the current
state of Advanced Server on a Linux system. pgsnmpd is distributed with and installed
by the Advanced Server installer as part of the database server component. The pgsnmpd
agent can operate as a stand-alone SNMP agent, as a pass-through sub-agent, or as an
AgentX sub-agent.
After installing Advanced Server, you will need to update the LD_LIBRARY_PATH
variable. Use the command:
$ export LD_LIBRARY_PATH=/opt/edb/as10/lib:$LD_LIBRARY_PATH
This command does not persistently alter the value of LD_LIBRARY_PATH. Consult the
documentation for your distribution of Linux for information about persistently setting
the value of LD_LIBRARY_PATH.
The examples that follow demonstrate the simplest usage of pgsnmpd, implementing
read only access. pgsnmpd is based on the net-snmp library; for more information about
net-snmp, visit:
http://net-snmp.sourceforge.net/
The pgsnmpd configuration file is named snmpd.conf. For information about the
directives that you can specify in the configuration file, please review the snmpd.conf
man page (man snmpd.conf).
You can create the configuration file by hand, or you can use the snmpconf perl script to
create the configuration file. The perl script is distributed with net-snmp package.
http://www.net-snmp.org/
To use the snmpconf configuration file wizard, download and install net-snmp. When the
installation completes, open a command line and enter:
snmpconf
When the configuration file wizard opens, it may prompt you to read in an existing
configuration file. Enter none to generate a new configuration file (not based on a
previously existing configuration file).
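A minimal read-only sketch of an snmpd.conf entry; the community string and source
address are assumptions to adapt to your site:
rocommunity public 127.0.0.1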
/opt/edb/as10/share/
By default, pgsnmpd listens on port 161. If the listener port is already being used by
another service, you may receive the following error:
You can specify an alternate listener port by adding the following line to your
snmpd.conf file:
agentaddress $host_address:2000
The example instructs pgsnmpd to listen on UDP port 2000, where $host_address is
the IP address of the server (e.g., 127.0.0.1).
Ensure that an instance of Advanced Server is up and running (pgsnmpd will connect to
this server). Open a command line and assume superuser privileges, before invoking
pgsnmpd with a command that takes the following form:
POSTGRES_INSTALL_HOME/bin/pgsnmpd -b
-c POSTGRES_INSTALL_HOME/share/snmpd.conf
-C "user=enterprisedb dbname=edb password=safe_password
port=5444"
Include the -b option to specify that pgsnmpd should run in the background.
Include the -c option, specifying the path and name of the pgsnmpd configuration file.
Include connection information for your installation of Advanced Server (in the form of a
libpq connection string) after the -C option.
Include the --help option when invoking the pgsnmpd utility to view other pgsnmpd
command line options:
pgsnmpd --help
Version PGSQL-SNMP-Ver1.0
usage: pgsnmpd [-s] [-b] [-c FILE ] [-x address ] [-g] [-C "Connect String"]
-s : run as AgentX sub-agent of an existing snmpd process
-b : run in the background
-c : configuration file name
-g : use syslog
-C : libpq connection string
-x : address:port of a network interface
-V : display version strings
You can use net-snmp commands to query the pgsnmpd service. For example:
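A minimal sketch, assuming the agent is listening on the default port with the public
community string (substitute values appropriate for your site):
$ snmpwalk -v 2c -c public localhost .1.3.6.1.2.1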
The encodings required to query any given object are defined in the MIB (Management
Information Base). An SNMP client can monitor a variety of servers; the server type
determines the information exposed by a given server. Each SNMP server describes the
exposed data in the form of a MIB. By default, pgsnmpd searches for MIBs in the
following locations:
/usr/share/snmp/mibs
$HOME/.snmp/mibs
Use the following configuration parameters to control database auditing. See Section
3.1.2 to determine if a change to the configuration parameter takes effect immediately, or
if the configuration needs to be reloaded, or if the database server needs to be restarted.
edb_audit
Enables or disables database auditing. The values xml or csv will enable
database auditing. These values represent the file format in which auditing
information will be captured. none will disable database auditing and is also the
default.
edb_audit_directory
Specifies the directory where the log files will be created. The path can be either
absolute, or relative to the data directory. The default is the PGDATA/edb_audit
directory.
edb_audit_filename
Specifies the file name of the audit file where the auditing information will be
stored. The default file name will be audit-%Y%m%d_%H%M%S. The escape
sequences, %Y, %m etc., will be replaced by the appropriate current values
according to the system date and time.
edb_audit_rotation_day
Specifies the day of the week on which to rotate the audit files. Valid values are
sun, mon, tue, wed, thu, fri, sat, every, and none. To disable rotation, set
the value to none. To rotate the file every day, set the
edb_audit_rotation_day value to every. To rotate the file on a specific day
of the week, set the value to the desired day of the week. every is the default
value.
edb_audit_rotation_size
Specifies a file size threshold in megabytes when file rotation will be forced to
occur. The default value is 0 MB. If the parameter is commented out or set to 0,
rotation of the file on a size basis will not occur.
edb_audit_rotation_seconds
Specifies the rotation time in seconds when a new log file should be created. To
disable this feature, set this parameter to 0, which is the default.
edb_audit_connect
Enables auditing of database connections. Supported values are all (audit every
connection attempt), failed (audit only failed connection attempts), and none
(disable connection auditing).
edb_audit_disconnect
Enables auditing of database disconnections. Supported values are all and none.
edb_audit_statement
Specifies the SQL statements to be audited as a comma-separated list of values;
the supported values are described later in this section.
edb_audit_tag
Use this configuration parameter to specify a string value that will be included in
the audit log file for each entry as a tracking tag.
edb_log_every_bulk_value
Bulk processing logs the resulting statements into both the Advanced Server log
file and the EDB Audit log file. However, logging each and every statement in
bulk processing is costly. This can be controlled by the
edb_log_every_bulk_value configuration parameter. When set to true,
each and every statement in bulk processing is logged. When set to false, a log
message is recorded once per bulk processing. In addition, the duration is emitted
once per bulk processing. Default is false.
edb_audit_destination
Specifies whether the audit log information is recorded in the directory given by
the edb_audit_directory parameter (file, the default) or sent to the syslog
process (syslog, supported on Linux only).
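Taken together, a postgresql.conf sketch that enables CSV auditing might look like the
following; the values shown are illustrative only:
edb_audit = 'csv'
edb_audit_directory = 'edb_audit'
edb_audit_rotation_day = 'every'
edb_audit_rotation_size = 100
edb_audit_connect = 'all'
edb_audit_statement = 'ddl, error'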
The following section describes selection of specific SQL statements for auditing using
the edb_audit_statement parameter.
The comma-separated values may include or omit space characters following the comma.
The values can be specified in any combination of lowercase or uppercase letters.
all Results in the auditing and logging of every statement including any error
messages on statements.
none Disables all auditing and logging. A value of none overrides any other
value included in the list.
ddl Results in the auditing of all data definition language (DDL) statements
(CREATE, ALTER, and DROP) as well as GRANT and REVOKE data control language
(DCL) statements.
dml Results in the auditing of all data manipulation language (DML) statements
(INSERT, UPDATE, DELETE, and TRUNCATE).
select Results in the auditing of SELECT statements.
rollback Results in the auditing of ROLLBACK statements.
error Results in the logging of all error messages that occur. Unless the error
value is included, no error messages are logged for errors that occur on SQL
statements selected by any of the other preceding parameter values, except when
all is used.
Section 3.5.2.1 describes additional parameter values for selecting particular DDL or
DCL statements for auditing.
Section 3.5.2.2 describes additional parameter values for selecting particular DML
statements for auditing.
The following sections describe the values for the SQL language types DDL, DCL, and
DML.
ACCESS METHOD
AGGREGATE
CAST
COLLATION
CONVERSION
DATABASE
EVENT TRIGGER
EXTENSION
FOREIGN TABLE
FUNCTION
INDEX
LANGUAGE
LARGE OBJECT
MATERIALIZED VIEW
OPERATOR
OPERATOR CLASS
OPERATOR FAMILY
POLICY
PUBLICATION
ROLE
RULE
SCHEMA
SEQUENCE
SERVER
SUBSCRIPTION
TABLE
TABLESPACE
TEXT SEARCH CONFIGURATION
TEXT SEARCH DICTIONARY
TEXT SEARCH PARSER
TEXT SEARCH TEMPLATE
TRANSFORM
TRIGGER
TYPE
USER MAPPING
VIEW
Descriptions of object types as used in SQL commands can be found in the PostgreSQL
core documentation available at:
https://www.postgresql.org/docs/10/static/sql-commands.html
If object_type is omitted from the parameter value, then all of the specified command
statements (either create, alter, or drop) are audited.
{ grant | revoke }
Example 1
edb_audit_connect = 'all'
edb_audit_statement = 'create, alter, error'
Thus, only SQL statements invoked by the CREATE and ALTER commands are audited.
Error messages are also included in the audit log.
Each audit log entry has been split and displayed across multiple lines, and a blank line
has been inserted between the audit log entries for more clarity in the appearance of the
results.
2017-07-16 12:59:42.125 EDT,"enterprisedb","edb",3356,"[local]",
596b9b7e.d1c,1,"authentication",2017-07-16 12:59:42 EDT,6/2,0,AUDIT,00000,
"connection authorized: user=enterprisedb database=edb",,,,,,,,,"","",""
The CREATE and ALTER statements for the adminuser role and auditdb database are
audited. The error for the ALTER ROLE adminuser statement is also logged since error
is included in the edb_audit_statement parameter.
Similarly, the CREATE statements for schema edb and tables department and dept are
audited.
Note that the DROP TABLE department statement is not in the audit log since there is
no edb_audit_statement setting that would result in the auditing of successfully
processed DROP statements such as ddl, all, or drop.
Example 2
edb_audit_connect = 'all'
edb_audit_statement = 'create view, create materialized view, create sequence, grant'
Thus, only SQL statements invoked by the CREATE VIEW, CREATE MATERIALIZED
VIEW, CREATE SEQUENCE, and GRANT commands are audited.
Each audit log entry has been split and displayed across multiple lines, and a blank line
has been inserted between the audit log entries for more clarity in the appearance of the
results.
2017-07-16 13:20:09.836 EDT,"adminuser","auditdb",4143,"[local]",
596ba049.102f,1,"authentication",2017-07-16 13:20:09 EDT,4/10,0,AUDIT,00000,
"connection authorized: user=adminuser database=auditdb",,,,,,,,,"","",""
The CREATE VIEW and CREATE MATERIALIZED VIEW statements are audited. Note
that the prior CREATE TABLE emp statement is not audited since none of the values
create, create table, ddl, nor all are included in the edb_audit_statement
parameter.
The CREATE SEQUENCE and GRANT statements are audited since those values are
included in the edb_audit_statement parameter.
This section describes the values that can be included in the edb_audit_statement
parameter to audit DML statements.
Example
edb_audit_connect = 'all'
edb_audit_statement = 'UPDATE, DELETE, error'
Thus, only SQL statements invoked by the UPDATE and DELETE commands are audited.
All errors are also included in the audit log (even errors not related to the UPDATE and
DELETE commands).
Each audit log entry has been split and displayed across multiple lines, and a blank line
has been inserted between the audit log entries for more clarity in the appearance of the
results.
2017-07-16 13:43:26.638 EDT,"adminuser","auditdb",4574,"[local]",
596ba5be.11de,1,"authentication",2017-07-16 13:43:26 EDT,4/11,0,AUDIT,00000,
"connection authorized: user=adminuser database=auditdb",,,,,,,,,"","",""
The UPDATE dept and DELETE FROM emp statements are audited. Note that none of the
prior INSERT statements are audited since none of the values insert, dml, or all is
included in the edb_audit_statement parameter.
The SELECT * FROM dept statement is likewise not audited since neither select nor
all is included in the edb_audit_statement parameter.
The following steps describe how to configure Advanced Server to log all connections,
disconnections, DDL statements, DCL statements, DML statements, and any statements
resulting in an error.
A database and role are established with the following settings for the
edb_audit_statement parameter:
Creation and alteration of the database and role are shown by the following:
$ psql edb enterprisedb
Password for user enterprisedb:
psql.bin (10.0.1)
Type "help" for help.
The following demonstrates the changes made and the resulting audit log file for three
cases.
The following audit log file shows entries only for the CREATE TABLE, INSERT INTO
audit_tbl, and UPDATE audit_tbl statements. The SELECT * FROM audit_tbl
and TRUNCATE audit_tbl statements were not audited.
Each audit log entry has been split and displayed across multiple lines, and a blank line
has been inserted between the audit log entries for more clarity in the appearance of the
results.
2017-07-13 15:26:17.426 EDT,"enterprisedb","auditdb",4024,"[local]",
5967c947.fb8,1,"idle",2017-07-13 15:25:59 EDT,7/4,0,AUDIT,00000,
"statement: CREATE TABLE audit_tbl (f1 INTEGER PRIMARY KEY, f2 TEXT);",,,,,,,,,
"psql.bin","CREATE TABLE",""
Case 2: Changes made in database edb by role admin. Only select and truncate
statements are audited:
$ psql edb admin
Password for user admin:
psql.bin (10.0.1)
Type "help" for help.
edb=# CREATE TABLE edb_tbl (f1 INTEGER PRIMARY KEY, f2 TEXT); <== Should not be audited
CREATE TABLE
edb=# INSERT INTO edb_tbl VALUES (1, 'Row 1'); <== Should not be audited
INSERT 0 1
edb=# SELECT * FROM edb_tbl;
f1 | f2
----+-------
1 | Row 1
(1 row)
Continuation of the audit log file now appears as follows. The last two entries
representing the second case show only the SELECT * FROM edb_tbl and TRUNCATE
edb_tbl statements. The CREATE TABLE edb_tbl and INSERT INTO edb_tbl
statements were not audited.
2017-07-13 15:26:17.426 EDT,"enterprisedb","auditdb",4024,"[local]",
5967c947.fb8,1,"idle",2017-07-13 15:25:59 EDT,7/4,0,AUDIT,00000,
"statement: CREATE TABLE audit_tbl (f1 INTEGER PRIMARY KEY, f2 TEXT);",,,,,,,,,
"psql.bin","CREATE TABLE",""
Case 3: Changes made in database auditdb by role admin. Only create table,
insert, and update statements are audited:
Continuation of the audit log file now appears as follows. The next-to-last two entries,
representing the third case, show only the CREATE TABLE audit_tbl_2 and INSERT
INTO audit_tbl_2 statements. The SELECT * FROM audit_tbl_2 and TRUNCATE
audit_tbl_2 statements were not audited.
The audit log file can be generated in either CSV or XML format depending upon the
setting of the edb_audit configuration parameter. The XML format contains less
information than the CSV format.
The information in the audit log is based on the logging performed by PostgreSQL as
described in Section 19.8.4 Using CSV-Format Log Output within Section 19.8 Error
Reporting and Logging in the PostgreSQL core documentation, available at:
https://www.postgresql.org/docs/10/static/runtime-config-logging.html
The following table lists the fields in the order they appear in the CSV audit log format.
The table contains the following information:
Field. Name of the field as shown in the sample table definition in the
PostgreSQL documentation as previously referenced.
XML Element/Attribute. For the XML format, name of the XML element and
its attribute (if used), referencing the value. Note: n/a indicates that there is no
XML representation for this field.
Data Type. Data type of the field as given by the PostgreSQL sample table
definition.
The fields with the Description of Not supported appear as consecutive commas
(,,) in the CSV format.
The following examples are generated in the CSV and XML formats.
The edb_audit parameter is changed to xml when generating the XML format.
edb=# \q
Each audit log entry has been split and displayed across multiple lines, and a blank line
has been inserted between the audit log entries for more clarity in the appearance of the
results.
2017-07-17 13:28:44.235 EDT,"enterprisedb","edb",4068,"[local]",
596cf3cc.fe4,1,"authentication",2017-07-17 13:28:44 EDT,6/2,0,AUDIT,00000,
"connection authorized: user=enterprisedb database=edb",,,,,,,,,"","","edbaudit"
The following is the XML format of the audit log file. The output has been formatted for
more clarity in the appearance in the example.
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="596cf5b7.12a8" process_id="4776" time="2017-07-17 13:36:55 EDT"
transaction_id="0" type="connect" audit_tag="edbaudit">
<message>AUDIT: connection authorized: user=enterprisedb database=edb</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="596cf5b7.12a8" process_id="4776" time="2017-07-17 13:37:02 EDT"
transaction_id="0" type="create" command_tag="CREATE SCHEMA" audit_tag="edbaudit">
<message>AUDIT: statement: CREATE SCHEMA edb;</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="596cf5b7.12a8" process_id="4776" time="2017-07-17 13:37:19 EDT"
transaction_id="0" type="create" command_tag="CREATE TABLE" audit_tag="edbaudit">
<message>AUDIT: statement: CREATE TABLE dept (
deptno NUMBER(2) NOT NULL CONSTRAINT dept_pk PRIMARY KEY,
dname VARCHAR2(14) CONSTRAINT dept_dname_uq UNIQUE,
loc VARCHAR2(13));
</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="596cf5b7.12a8" process_id="4776" time="2017-07-17 13:37:29 EDT"
transaction_id="0" type="insert" command_tag="INSERT" audit_tag="edbaudit">
<message>AUDIT: statement: INSERT INTO dept VALUES
(10,'ACCOUNTING','NEW YORK');
</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="596cf5b7.12a8" process_id="4776" time="2017-07-17 13:37:40 EDT"
transaction_id="0" type="update" command_tag="UPDATE" audit_tag="edbaudit">
<message>AUDIT: statement: UPDATE department SET
loc = 'BOSTON' WHERE deptno = 10;
</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="596cf5b7.12a8" process_id="4776" time="2017-07-17 13:37:40 EDT"
transaction_id="0" type="error" audit_tag="edbaudit">
<message>ERROR: relation "department" does not exist at character 8
</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="596cf5b7.12a8" process_id="4776" time="2017-07-17 13:37:51 EDT"
transaction_id="0" type="update" command_tag="UPDATE" audit_tag="edbaudit">
<message>AUDIT: statement: UPDATE dept SET loc = 'BOSTON' WHERE deptno = 10;
</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="596cf5b7.12a8" process_id="4776" time="2017-07-17 13:37:59 EDT"
transaction_id="0" type="select" command_tag="SELECT" audit_tag="edbaudit">
<message>AUDIT: statement: SELECT * FROM dept;</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="596cf5b7.12a8" process_id="4776" time="2017-07-17 13:38:01 EDT"
transaction_id="0" type="disconnect" command_tag="SELECT" audit_tag="edbaudit">
<message>AUDIT: disconnection: session time: 0:01:05.814
user=enterprisedb database=edb host=[local]
</message>
</event>
<event process_id="4696" time="2017-07-17 13:38:08 EDT"
transaction_id="0" type="shutdown" audit_tag="edbaudit">
<message>LOG: database system is shut down</message>
</event>
Advanced Server includes an extension that you can use to exclude log file entries that
include a user-specified error code from the Advanced Server log files. To filter audit log
entries, you must first enable the extension by modifying the postgresql.conf file,
adding the following value to the values specified in the shared_preload_libraries
parameter:
$libdir/edb_filter_log
Then, use the edb_filter_log.errcode parameter to specify any error codes you
wish to omit from the log files:
edb_filter_log.errcode = 'error_code'
Where error_code specifies one or more error codes that you wish to omit from the log
file. Provide multiple error codes in a comma-delimited list.
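For example, a hypothetical setting that omits division-by-zero (SQLSTATE 22012) and
unique-violation (SQLSTATE 23505) entries from the log files:
edb_filter_log.errcode = '22012,23505'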
For a complete list of the error codes supported by Advanced Server audit log filtering,
please see the core documentation at:
https://www.postgresql.org/docs/10/static/errcodes-appendix.html
Each entry in the log file except for those displaying an error message contains a
command tag, which is the SQL command executed for that particular log entry.
The command tag makes it possible to use subsequent tools to scan the log file to find
entries related to a particular SQL command.
The following is an example in XML form. The output has been formatted for easier
appearance in the example.
The command tag is displayed as the command_tag attribute of the event element with
values CREATE ROLE, ALTER ROLE, and DROP ROLE in the example.
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="595e8537.10f1" process_id="4337" time="2017-07-06 14:45:18 EDT"
transaction_id="0" type="create"
command_tag="CREATE ROLE">
<message>AUDIT: statement: CREATE ROLE newuser WITH LOGIN;</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="595e8537.10f1" process_id="4337" time="2017-07-06 14:45:31 EDT"
transaction_id="0" type="error">
<message>ERROR: unrecognized role option "super" at character 25
STATEMENT: ALTER ROLE newuser WITH SUPER USER;</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="595e8537.10f1" process_id="4337" time="2017-07-06 14:45:38 EDT"
transaction_id="0" type="alter" command_tag="ALTER ROLE">
<message>AUDIT: statement: ALTER ROLE newuser WITH SUPERUSER;</message>
</event>
<event user="enterprisedb" database="edb" remote_host="[local]"
session_id="595e8537.10f1" process_id="4337" time="2017-07-06 14:45:46 EDT"
transaction_id="0" type="drop" command_tag="DROP ROLE">
<message>AUDIT: statement: DROP ROLE newuser;</message>
</event>
The following is the same example in CSV form. The command tag is the next to last
column of each entry. (The last column appears empty as "", which would be the value
provided by the edb_audit_tag parameter.)
Each audit log entry has been split and displayed across multiple lines, and a blank line
has been inserted between the audit log entries for more clarity in the appearance of the
results.
2017-07-06 14:47:22.294 EDT,"enterprisedb","edb",4720,"[local]",
595e85b2.1270,1,"idle",2017-07-06 14:47:14 EDT,6/4,0,AUDIT,00000,
"statement: CREATE ROLE newuser WITH LOGIN;",,,,,,,,,"psql.bin","CREATE ROLE",""
Given all of these variations with the vast number of languages supported by Unicode,
there is a necessity for a method to select the specific criteria for determining a collating
sequence. This is what the Unicode Collation Algorithm defines.
Note: An additional advantage of using ICU collations (an implementation of the
Unicode Collation Algorithm) is performance. Sorting tasks, including B-tree index
creation, can complete in less than half the time required with a non-ICU collation. The
exact performance gain depends on your operating system version, the language of your
text data, and other factors.
The following sections provide a brief, simplified explanation of the Unicode Collation
Algorithm concepts. As the algorithm and its usage are quite complex with numerous
variations, refer to the official documents cited in these sections for complete details.
The official information for the Unicode Collation Algorithm is specified in Unicode
Technical Report #10, which can be found on The Unicode Consortium website at:
http://www.unicode.org/reports/tr10/
The ICU International Components for Unicode also provides much useful information.
An explanation of the collation concepts can be found on their website located at:
http://userguide.icu-project.org/collation/concepts
The basic concept behind the Unicode Collation Algorithm is the use of multilevel
comparison. This means that a number of levels are defined, which are listed as level 1
through level 5 in the following bullet points. Each level defines a type of comparison.
Strings are first compared using the primary level, also called level 1.
If the order can be determined based on the primary level, then the algorithm is done. If
the order cannot be determined based on the primary level, then the secondary level, level
2, is applied. If the order can be determined based on the secondary level, then the
algorithm is done; otherwise the tertiary level is applied, and so on. There is typically a
final tie-breaking level to determine the order if it cannot be resolved by the prior levels.
Level 1 Primary Level for Base Characters. The order of basic characters
such as letters and digits determines the difference such as A < B.
Level 2 Secondary Level for Accents. If there are no primary level differences,
then the presence or absence of accents and other such characters determines the
order, such as a < á.
Level 3 Tertiary Level for Case. If there are no primary level or secondary
level differences, then a difference in case determines the order such as a < A.
Level 4 Quaternary Level for Punctuation. If there are no primary,
secondary, or tertiary level differences, then the presence or absence of white
space characters, control characters, and punctuation determine the order such as
-A < A.
Level 5 Identical Level for Tie-Breaking. If there are no primary, secondary,
tertiary, or quaternary level differences, then some other difference such as the
code point values determines the order.
When Advanced Server is used to create a collation that invokes the ICU components to
produce the collation, the result is referred to as an ICU collation.
When creating a collation for a locale, a predefined ICU short form name for the given
locale is typically provided.
An ICU short form is a method of specifying collation attributes, which are the
properties of a collation. Section 3.6.2.2 provides additional information on collation
attributes.
There are predefined ICU short forms for locales. The ICU short form for a locale
incorporates the collation attribute settings typically used for the given locale. This
simplifies the collation creation process by eliminating the need to specify the entire list
of collation attributes for that locale.
LROOT | en
LROOT | en_US
LEN_RUS_VPOSIX | en_US_POSIX
LEO | eo
LES | es
LET | et
LFA | fa
LFA_RAF | fa_AF
.
.
.
If needed, the default characteristics of an ICU short form for a given locale can be
overridden by specifying the collation attributes to override that property. This is
discussed in the next section.
Collation attributes define the rules of how characters are to be compared for determining
the collation sequence of text strings. As Unicode covers a vast set of languages in
numerous variations according to country, territory and culture, these collation attributes
are quite complex.
For the complete, precise meaning and usage of collation attributes, see Section 13
Collator Naming Scheme on the ICU International Components for Unicode website
at:
http://userguide.icu-project.org/collation/concepts
The following is a brief summary of the collation attributes and how they are specified
using the ICU short form method.
Each collation attribute is represented by an uppercase letter; the attributes are listed in
the following bullet points. The possible valid values for each attribute are given by
codes shown within the parentheses. Some codes have general meanings for all
attributes: X means to set the attribute off, O means to set the attribute on, and D means
to set the attribute to its default value.
A Alternate (N, S, D). Controls the handling of variable characters such as white
space characters and punctuation marks. When set to non-ignorable (N), variable
characters are treated as base characters during comparison. When set to shifted
(S), variable characters are ignored when comparing base characters and are
considered only at the quaternary level.
C Case First (X, L, U, D). Controls whether a lowercase letter sorts before the
same uppercase letter (L), or the uppercase letter sorts before the same lowercase
letter (U). Off (X) is typically specified when lowercase first (L) is desired.
E Case Level (X, O, D). Set in combination with the Strength attribute, the
Case Level attribute is used when accents are to be ignored, but not case.
F French Collation (X, O, D). When set to on, secondary differences (presence
of accents) are sorted from the back of the string as done in the French Canadian
locale.
H Hiragana Quaternary (X, O, D). Introduces an additional level to
distinguish between the Hiragana and Katakana characters for compatibility with
the JIS X 4061 collation of Japanese character strings.
N Normalization Checking (X, O, D). Controls whether or not text is
thoroughly normalized for comparison. Normalization deals with the issue of
canonical equivalence of text whereby different code point sequences represent
the same character, which then present issues when sorting or comparing such
characters. Languages such as Arabic, ancient Greek, Hebrew, Hindi, Thai, or
Vietnamese should be used with Normalization Checking set to on.
S Strength (1, 2, 3, 4, I, D). Maximum collation level used for comparison.
Influences whether accents or case are taken into account when collating or
comparing strings. Each number represents a level. A setting of I represents
identical strength (that is, level 5).
T Variable Top (hexadecimal digits). Applicable only when the Alternate
attribute is not set to non-ignorable (N). The hexadecimal digits specify the
highest character sequence that is to be considered ignorable. For example, if
white space is to be ignorable, but visible variable characters are not to be
ignorable, then Variable Top set to 0020 would be specified along with the
Alternate attribute set to S and the Strength attribute set to 3. (The space character
is hexadecimal 0020. Other non-visible variable characters such as backspace,
tab, line feed, carriage return, etc. have values less than 0020. All visible
punctuation marks have values greater than 0020.)
A set of collation attributes and their values is represented by a text string consisting of
the collation attribute letter concatenated with the desired attribute value. Each
attribute/value pair is joined to the next pair with an underscore character as shown by the
following example.
AN_CX_EX_FX_HX_NO_S3
Collation attributes can be specified along with a locale's ICU short form name to
override the default attribute settings of the locale.
The following is an example where the ICU short form named LROOT is modified with a
number of other collation attribute/value pairs.
AN_CX_EX_LROOT_NO_S3
In the preceding example, the Alternate attribute (A) is set to non-ignorable (N). The Case
First attribute (C) is set to off (X). The Case Level attribute (E) is set to off (X). The
Normalization attribute (N) is set to on (O). The Strength attribute (S) is set to the tertiary
level 3. LROOT is the ICU short form to which the other attributes apply modifications.
When creating a new database cluster with the initdb command, the
--icu-short-form option can be specified to define the ICU collation to be used by
default by all databases in the cluster.
When creating a new database with the CREATE DATABASE command, the
ICU_SHORT_FORM parameter can be specified to define the ICU collation to be
used by default in that database.
In an existing database, the CREATE COLLATION command can be used with the
ICU_SHORT_FORM parameter to define an ICU collation to be used under specific
circumstances such as when assigned with the COLLATE clause onto selected
columns of certain tables or when appended with the COLLATE clause onto an
expression such as ORDER BY expr COLLATE "collation_name".
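For example, assuming the icu_collate_a collation created later in this section and the
sample emp table, a query might apply the collation as follows:
SELECT ename FROM emp ORDER BY ename COLLATE "icu_collate_a";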
Use the ICU_SHORT_FORM parameter with the CREATE COLLATION command to create
an ICU collation:
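The general form is similar to the following, where square brackets denote optional
clauses and the parameters are described below:
CREATE COLLATION collation_name (
  [ LOCALE = locale, ]
  [ LC_COLLATE = lc_collate, ]
  [ LC_CTYPE = lc_ctype, ]
  [ ICU_SHORT_FORM = icu_short_form ]
);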
To be able to create a collation, you must have CREATE privilege on the destination
schema where the collation is to reside.
For information about the general usage of the CREATE COLLATION command, please
refer to the PostgreSQL core documentation available at:
https://www.postgresql.org/docs/10/static/sql-createcollation.html
UTF-8 character encoding of the database is required. Any LOCALE, or LC_COLLATE and
LC_CTYPE settings that are accepted with UTF-8 encoding can be used.
Parameters
collation_name
The name of the collation to be created, which may be schema-qualified.
locale
The locale to be used. Shortcut for setting LC_COLLATE and LC_CTYPE. If
LOCALE is specified, then LC_COLLATE and LC_CTYPE must be omitted.
lc_collate
Sets the string sort order for the collation.
lc_ctype
Sets the character classification behavior for the collation (for example, which
characters are considered letters and what their uppercase and lowercase
equivalents are).
icu_short_form
The text string specifying the collation attributes and their settings. This typically
consists of an ICU short form name, possibly appended with additional collation
attribute/value pairs. A list of ICU short form names is available from column
icu_short_form in system catalog pg_catalog.pg_icu_collate_names.
Example
The following creates a collation using the LROOT ICU short form.
edb=# CREATE COLLATION icu_collate_a (LOCALE = 'en_US.UTF8', ICU_SHORT_FORM = 'LROOT');
CREATE COLLATION
The definition of the new collation can be seen with the following psql command.
edb=# \dO
List of collations
Schema | Name | Collate | Ctype | ICU
--------------+---------------+------------+------------+-------
enterprisedb | icu_collate_a | en_US.UTF8 | en_US.UTF8 | LROOT
(1 row)
For complete information about the general usage, syntax, and parameters of the CREATE
DATABASE command, please refer to the PostgreSQL core documentation available at:
https://www.postgresql.org/docs/10/static/sql-createdatabase.html
When using the CREATE DATABASE command to create a database using an ICU
collation, the TEMPLATE template0 clause must be specified and the database
encoding must be UTF-8.
The following is an example of creating a database using the LROOT ICU short form
collation, but one that sorts an uppercase form of a letter before its lowercase counterpart
(CU) and treats variable characters as non-ignorable (AN).
edb=# CREATE DATABASE collation_db TEMPLATE template0 ENCODING 'UTF8'
edb-# LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8'
edb-# ICU_SHORT_FORM 'AN_CU_EX_NX_LROOT';
CREATE DATABASE
edb=# \l collation_db
List of databases
Name | Owner | Encoding | Collate | Ctype | ICU |
Access privileges
--------------+--------------+----------+-------------+-------------+-------------------+-----
--------------
collation_db | enterprisedb | UTF8 | en_US.UTF-8 | en_US.UTF-8 | AN_CU_EX_NX_LROOT |
(1 row)
The following table is created and populated with rows in the database.
The following query shows that the uppercase form of a letter sorts before the lowercase
form of the same base letter, and in addition, variable characters are taken into account
when sorted as they appear at the beginning of the sort list. (The default behavior for
en_US.UTF-8 is to sort the lowercase form of a letter before the uppercase form of the
same base letter, and to ignore variable characters.)
3.6.3.3 initdb
A database cluster can be created with a default ICU collation for all databases in the
cluster by using the --icu-short-form option with the initdb command.
For complete information about the general usage, syntax, and parameters of the initdb
command, please refer to the PostgreSQL core documentation available at:
https://www.postgresql.org/docs/10/static/app-initdb.html
$ su enterprisedb
Password:
$ /opt/edb/as10/bin/initdb -U enterprisedb -D /tmp/collation_data --encoding
UTF8 --icu-short-form 'AN_CU_EX_NX_LROOT'
The files belonging to this database system will be owned by user
"enterprisedb".
This user must also own the server process.
/opt/edb/as10/bin/edb-postgres -D /tmp/collation_data
or
/opt/edb/as10/bin/pg_ctl -D /tmp/collation_data -l logfile start
The following shows the databases created in the cluster which all have an ICU collation
of AN_CU_EX_NX_LROOT.
edb=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | ICU |
Access privileges
-----------+--------------+----------+-------------+-------------+-------------------+--------
-----------------------
edb | enterprisedb | UTF8 | en_US.UTF-8 | en_US.UTF-8 | AN_CU_EX_NX_LROOT |
postgres | enterprisedb | UTF8 | en_US.UTF-8 | en_US.UTF-8 | AN_CU_EX_NX_LROOT |
template0 | enterprisedb | UTF8 | en_US.UTF-8 | en_US.UTF-8 | AN_CU_EX_NX_LROOT |
=c/enterprisedb +
| | | | | |
enterprisedb=CTc/enterprisedb
template1 | enterprisedb | UTF8 | en_US.UTF-8 | en_US.UTF-8 | AN_CU_EX_NX_LROOT |
=c/enterprisedb +
| | | | | |
enterprisedb=CTc/enterprisedb
(4 rows)
The following are some examples of the creation and usage of ICU collations based on
the English language in the United States (en_US.UTF8).
In these examples, ICU collations are created with the following characteristics.
Note: When creating collations, ICU may generate notice and warning messages when
attributes are given to modify the LROOT collation.
edb=# \dO
List of collations
Schema | Name | Collate | Ctype | ICU
--------------+-----------------------------+------------+------------+-----------------------
-----
enterprisedb | icu_collate_ignore_punct | en_US.UTF8 | en_US.UTF8 | AS_CX_EX_NX_LROOT_L3
enterprisedb | icu_collate_ignore_white_sp | en_US.UTF8 | en_US.UTF8 |
AS_CX_EX_NX_LROOT_L3_T0020
enterprisedb | icu_collate_lowercase | en_US.UTF8 | en_US.UTF8 | AN_CL_EX_NX_LROOT
enterprisedb | icu_collate_uppercase | en_US.UTF8 | en_US.UTF8 | AN_CU_EX_NX_LROOT
(4 rows)
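For example, the icu_collate_uppercase collation in the listing might be created as
follows; this is a sketch, and the other collations differ only in their names and ICU
short forms:
edb=# CREATE COLLATION icu_collate_uppercase (LOCALE = 'en_US.UTF8', ICU_SHORT_FORM = 'AN_CU_EX_NX_LROOT');
CREATE COLLATION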
The following query sorts on column c2 using the default collation. Note that variable
characters (white space and punctuation marks) with id column values of 9, 10, and 11
are ignored and sort with the letter B.
With a collation in which variable characters are included in the sort order at the same
level when comparing base characters, rows with id values of 9, 10, and 11 instead
appear at the beginning of the sort list, before all letters and numbers.
The row with id value of 11, which starts with a space character (hexadecimal 0020)
sorts with the letter B. The rows with id values of 9 and 10, which start with visible
punctuation marks greater than hexadecimal 0020, appear at the beginning of the sort list
as these particular variable characters are included in the sort order at the same level
when comparing base characters.
When a database cluster is created with the initdb utility program, the default size of
each WAL segment file is 16 MB. A different size can be specified with the
--wal-segsize option:
initdb --wal-segsize=size directory
size is the WAL segment file size in megabytes, which must be a power of 2 (for
example, 1, 2, 4, 8, 16, 32, etc.). The minimum permitted value of size is 1 and the
maximum permitted value is 1024. The database cluster is to be created in directory.
For more information on the initdb utility and its other options, see the PostgreSQL
core documentation available at:
https://www.postgresql.org/docs/10/static/app-initdb.html
The following example shows the creation of a database cluster where the WAL segment
file size is specified as 1024 MB (equivalent to 1 GB):
$ /opt/edb/as10/bin/initdb -D /tmp/wal_data --wal-segsize=1024
After the database server is started, the display of the wal_segment_size parameter
also confirms the file size:
edb=# SHOW wal_segment_size;
 wal_segment_size
------------------
 1GB
(1 row)
4 Security
This chapter describes various features that provide added security.
SQL/Protect gives control back to the database administrator by alerting the
administrator to potentially dangerous queries and by blocking those queries.
This section contains an introduction to the different types of SQL injection attacks and
describes how SQL/Protect guards against them.
Unauthorized Relations
When SQL/Protect is switched to either passive or active mode, the incoming queries are
checked against the list of learned relations.
Utility Commands
A common technique used in SQL injection attacks is to run utility commands, which are
typically SQL Data Definition Language (DDL) statements. An example is creating a
user-defined function that has the ability to access other system resources.
SQL/Protect can block the running of all utility commands, which are not normally
needed during standard application processing.
SQL Tautology
The most frequent technique used in SQL injection attacks is issuing a tautological
WHERE clause condition (that is, using a condition that is always true).
Attackers will usually start identifying security weaknesses using this technique.
SQL/Protect can block queries that use a tautological conditional clause.
Unbounded DML Statements
A dangerous action taken during SQL injection attacks is the running of unbounded DML
statements. These are UPDATE and DELETE statements with no WHERE clause. For
example, an attacker may update all users' passwords to a known value or initiate a
denial of service attack by deleting all of the data in a key table.
Monitoring for SQL injection attacks involves analyzing SQL statements originating in
database sessions where the current user of the session is a protected role. A protected
role is an Advanced Server user or group that the database administrator has chosen to
monitor using SQL/Protect. (In Advanced Server, users and groups are collectively
referred to as roles.)
Each protected role can be customized for the types of SQL injection attacks for which it
is to be monitored, thus providing different levels of protection by role and significantly
reducing the user maintenance load for DBAs.
Note: A role with the superuser privilege cannot be made a protected role. If a protected
non-superuser role is subsequently altered to become a superuser, certain behaviors are
exhibited whenever an attempt is made by that superuser to issue any command:
A protected role that has the superuser privilege should either be altered so that it is no
longer a superuser, or it should be reverted back to an unprotected role.
These statistics are accessible from view edb_sql_protect_stats that can be easily
monitored to identify the start of a potential attack.
This gives database administrators the opportunity to react proactively in preventing theft
of valuable data or other malicious actions.
If a role is protected in more than one database, the role's statistics for attacks in each
database are maintained separately and are viewable only when connected to the
respective database.
Note: SQL/Protect statistics are maintained in memory while the database server is
running. When the database server is shut down, the statistics are saved to a binary file
named edb_sqlprotect.stat in the data/global subdirectory of the Advanced
Server home directory.
username. Database user name of the attacker used to log into the database
server.
ip_address. IP address of the machine from which the attack was initiated.
port. Port number from which the attack originated.
machine_name. Name of the machine, if known, from which the attack
originated.
date_time. Date and time at which the query was received by the database server.
The time is stored to the precision of a minute.
If a role is protected in more than one database, the role's queries for attacks in each
database are maintained separately and are viewable only when connected to the
respective database.
You will also need the SQL script file sqlprotect.sql located in the
share/contrib subdirectory of your Advanced Server home directory.
You must configure the database server to use SQL/Protect, and you must configure each
database that you want SQL/Protect to monitor:
The following example shows the settings of these parameters in the postgresql.conf
file:
shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen,$libdir/sqlprotect'
# (change requires restart)
.
.
.
edb_sql_protect.enabled = off
edb_sql_protect.level = learn
edb_sql_protect.max_protected_roles = 64
edb_sql_protect.max_protected_relations = 1024
edb_sql_protect.max_queries_to_save = 5000
Step 2: Restart the database server after you have modified the postgresql.conf file.
On Linux: Invoke the Advanced Server service script with the restart option:
/etc/init.d/edb-as-10 restart
On Windows: Use the Windows Services applet to restart the service named edb-as-10.
Step 3: For each database that you want to protect from SQL injection attacks, connect to
the database as a superuser (either enterprisedb or postgres, depending upon your
installation options) and run the script sqlprotect.sql located in the
share/contrib subdirectory of your Advanced Server home directory. The script
creates the SQL/Protect database objects in a schema named sqlprotect.
The following example shows this process to set up protection for a database named edb:
edb=# \i /opt/edb/as10/share/contrib/sqlprotect.sql
CREATE SCHEMA
GRANT
SET
CREATE TABLE
GRANT
CREATE TABLE
GRANT
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
DO
CREATE FUNCTION
CREATE FUNCTION
DO
CREATE VIEW
GRANT
DO
CREATE VIEW
GRANT
CREATE VIEW
GRANT
CREATE FUNCTION
CREATE FUNCTION
SET
For each database that you want to protect, you must determine the roles you want to
monitor and then add those roles to the protected roles list of that database.
Step 1: Connect as a superuser to a database that you wish to protect using either psql or
Postgres Enterprise Manager Client.
edb=#
Step 2: Since the SQL/Protect tables, functions, and views are built under the
sqlprotect schema, use the SET search_path command to include the
sqlprotect schema in your search path. This eliminates the need to schema-qualify
any operation or query involving SQL/Protect database objects.
edb=# SET search_path TO sqlprotect;
SET
Step 3: Each role that you wish to protect must be added to the protected roles list. This
list is maintained in the table edb_sql_protect. To add a role such as appuser, use
the protect_role function as follows:
edb=# SELECT protect_role('appuser');
 protect_role
--------------

(1 row)
You can list the roles that have been added to the protected roles list by issuing the
following query:
edb=# SELECT * FROM edb_sql_protect;
dbid | roleid | protect_relations | allow_utility_cmds | allow_tautology | allow_empty_dml
-------+--------+-------------------+--------------------+-----------------+-----------------
13917 | 16671 | t | f | f | f
(1 row)
A view is also provided that gives the same information using the object names instead of
the Object Identification numbers (OIDs).
edb=# \x
Expanded display is on.
edb=# SELECT * FROM list_protected_users;
-[ RECORD 1 ]------+--------
dbname | edb
username | appuser
protect_relations | t
allow_utility_cmds | f
allow_tautology | f
allow_empty_dml | f
learn. Tracks the activities of protected roles and records the relations used by the
roles. This is used when initially configuring SQL/Protect so the expected
behaviors of the protected applications are learned.
passive. Issues warnings if protected roles are breaking the defined rules, but does
not stop any SQL statements from executing. This is the next step after
SQL/Protect has learned the expected behavior of the protected roles. This
essentially behaves in intrusion detection mode and can be run in production
when properly monitored.
active. Stops all invalid statements for a protected role. This behaves as a SQL
firewall preventing dangerous queries from running. This is particularly effective
against early penetration testing when the attacker is trying to determine the
vulnerability point and the type of database behind the application. Not only does
SQL/Protect close those vulnerability points, but it tracks the blocked queries
allowing administrators to be alerted before the attacker finds an alternate method
of penetrating the system.
If you are using SQL/Protect for the first time, set edb_sql_protect.level to
learn.
With a new SQL/Protect installation, the first step is to determine the relations that
protected roles should be permitted to access during normal operation. Learn mode
allows a role to run applications during which time SQL/Protect is recording the relations
that are accessed. These are added to the role's protected relations list stored in the table
edb_sql_protect_rel.
Monitoring for protection against attack begins when SQL/Protect is run in passive or
active mode. In passive and active modes, the role is permitted to access the relations in
its protected relations list as these were determined to be the relations the role should be
able to access during typical usage.
However, if a role attempts to access a relation that is not in its protected relations list, a
WARNING or ERROR severity level message is returned by SQL/Protect. The role's
attempted action on the relation may or may not be carried out depending upon whether
the mode is passive or active.
Step 1: To activate SQL/Protect in learn mode, set the following parameters in the
postgresql.conf file as shown below:
edb_sql_protect.enabled = on
edb_sql_protect.level = learn
Choose Expert Configuration, then Reload Configuration from the Advanced Server
application menu.
Note: For an alternative method of reloading the configuration file, use the
pg_reload_conf function. Be sure you are connected to a database as a superuser and
execute the pg_reload_conf function as shown by the following example:
edb=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)
As an example, the following queries are issued in the psql application by the protected
role appuser:
edb=> SELECT empno, ename, job FROM emp WHERE deptno = 10;
NOTICE: SQLPROTECT: Learned relation: 16391
empno | ename | job
-------+--------+-----------
7782 | CLARK | MANAGER
7839 | KING | PRESIDENT
7934 | MILLER | CLERK
(3 rows)
SQL/Protect generates a NOTICE severity level message indicating the relation has been
added to the role's protected relations list.
In SQL/Protect learn mode, SQL statements that are cause for suspicion are not prevented
from executing, but a message is issued to alert the user to potentially dangerous
statements as shown by the following example:
Step 4: As a protected role runs applications, the SQL/Protect tables can be queried to
observe the addition of relations to the role's protected relations list.
Connect as a superuser to the database you are monitoring and set the search path to
include the sqlprotect schema.
Query the edb_sql_protect_rel table to see the relations added to the protected
relations list:
edb=# SELECT * FROM edb_sql_protect_rel;
 dbid  | roleid | relid
-------+--------+-------
 13917 |  16671 | 16391
(1 row)
Once you have determined that a role's applications have accessed all relations they will
need, you can change the protection level so that SQL/Protect can actively monitor
the incoming SQL queries and protect against SQL injection attacks.
Passive mode is the less restrictive of the two protection modes, passive and active.
Step 1: To activate SQL/Protect in passive mode, set the following parameters in the
postgresql.conf file as shown below:
edb_sql_protect.enabled = on
edb_sql_protect.level = passive
Now SQL/Protect is in passive mode. For relations that have been learned such as the
dept and emp tables of the prior examples, SQL statements are permitted with no special
notification to the client by SQL/Protect as shown by the following queries run by user
appuser:
edb=> SELECT empno, ename, job FROM emp WHERE deptno = 10;
empno | ename | job
-------+--------+-----------
7782 | CLARK | MANAGER
7839 | KING | PRESIDENT
7934 | MILLER | CLERK
(3 rows)
SQL/Protect does not prevent any SQL statement from executing, but issues a message of
WARNING severity level for SQL statements executed against relations that were not
learned, or for SQL statements that contain a prohibited signature as shown in the
following example:
By querying the view edb_sql_protect_stats, you can see the number of times
SQL statements were executed that referenced relations that were not in a role's
protected relations list, or that contained SQL injection attack signatures. See Section
4.1.1.2.2 for more information on the view edb_sql_protect_stats.
By querying the view edb_sql_protect_queries, you can see the SQL statements
that were executed that referenced relations that were not in a role's protected relations
list, or that contained SQL injection attack signatures. See Section 4.1.1.2.3 for more
information on the view edb_sql_protect_queries.
Note: The ip_address and port columns do not return any information if the attack
originated on the same host as the database server using the Unix-domain socket (that is,
pg_hba.conf connection type local).
In active mode, disallowed SQL statements are prevented from executing. Also, the
message issued by SQL/Protect has a higher severity level of ERROR instead of WARNING.
Step 1: To activate SQL/Protect in active mode, set the following parameters in the
postgresql.conf file as shown below:
edb_sql_protect.enabled = on
edb_sql_protect.level = active
The following example illustrates SQL statements similar to those given in the examples
of Step 2 in Section 4.1.2.2.2, but executed by user appuser when
edb_sql_protect.level is set to active:
-[ RECORD 3 ]+---------------------------------------------
username | appuser
ip_address | 192.168.2.6
port | 50098
machine_name |
date_time | 20-JUN-14 13:39:00 -04:00
query | CREATE TABLE appuser_tab_3 (f1 INTEGER);
-[ RECORD 4 ]+---------------------------------------------
username | appuser
ip_address | 192.168.2.6
port | 50098
machine_name |
date_time | 20-JUN-14 13:39:00 -04:00
query | INSERT INTO appuser_tab_2 VALUES (1);
-[ RECORD 5 ]+---------------------------------------------
username | appuser
ip_address | 192.168.2.6
port | 50098
machine_name |
date_time | 20-JUN-14 13:39:00 -04:00
query | SELECT * FROM appuser_tab_2 WHERE 'x' = 'x';
You must be connected as a superuser to perform these operations and have included
schema sqlprotect in your search path.
To add a role to the protected roles list, use the following function:
protect_role('rolename')
To remove a role from the protected roles list, use either of the following functions:
unprotect_role('rolename')
unprotect_role(roleoid)
Note: The variation of the function using the OID is useful if you remove the role using
the DROP ROLE or DROP USER SQL statement before removing the role from the
protected roles list. If a query on a SQL/Protect relation returns a value such as unknown
(OID=16458) for the user name, use the unprotect_role(roleoid) form of the
function to remove the entry for the deleted role from the protected roles list.
Removing a role using these functions also removes the role's protected relations list.
The statistics for a role that has been removed are not deleted until you use the
drop_stats function as described in Section 4.1.3.5.
The offending queries for a role that has been removed are not deleted until you use the
drop_queries function as described in Section 4.1.3.6.
Change the Boolean value for the column in edb_sql_protect corresponding to the
type of SQL injection attack for which protection of a role is to be disabled or enabled.
Be sure to qualify the following columns in your WHERE clause of the statement that
updates edb_sql_protect:
dbid. OID of the database for which you are making the change
roleid. OID of the role for which you are changing the Boolean settings
For example, to allow a given role to issue utility commands, update the
allow_utility_cmds column as follows:
edb=# UPDATE edb_sql_protect SET allow_utility_cmds = TRUE
edb-# WHERE dbid = 13917 AND roleid = 16671;
UPDATE 1
The updated rules take effect in new sessions started by the role after the change is
made.
Delete its entry from the edb_sql_protect_rel table using any of the following
functions:
unprotect_rel('rolename', 'relname')
unprotect_rel('rolename', 'schema', 'relname')
unprotect_rel(roleoid, reloid)
If the relation given by relname is not in your current search path, specify the relation's
schema using the second function format.
The third function format allows you to specify the OIDs of the role and relation,
respectively, instead of their text names.
The following example illustrates the removal of the public.emp relation from the
protected relations list of the role appuser.
edb=# SELECT unprotect_rel('appuser', 'public', 'emp');
 unprotect_rel
---------------

(1 row)
The following query shows there is no longer an entry for the emp relation.
SQL/Protect will now issue a warning or completely block access (depending upon the
setting of edb_sql_protect.level) whenever the role attempts to utilize that
relation.
You can delete statistics from view edb_sql_protect_stats using either of the two
following functions:
drop_stats('rolename')
drop_stats(roleoid)
Note: The variation of the function using the OID is useful if you remove the role using
the DROP ROLE or DROP USER SQL statement before deleting the role's statistics using
drop_stats('rolename'). If a query on edb_sql_protect_stats returns a value
such as unknown (OID=16458) for the user name, use the drop_stats(roleoid)
form of the function to remove the deleted role's statistics from
edb_sql_protect_stats.
You can delete offending queries from view edb_sql_protect_queries using either
of the two following functions:
drop_queries('rolename')
drop_queries(roleoid)
Note: The variation of the function using the OID is useful if you remove the role using
the DROP ROLE or DROP USER SQL statement before deleting the role's offending
queries using drop_queries('rolename'). If a query on
edb_sql_protect_queries returns a value such as unknown (OID=16458) for the
user name, use the drop_queries(roleoid) form of the function to remove the
deleted role's queries from edb_sql_protect_queries.
To temporarily disable SQL/Protect monitoring, set the following parameter in the
postgresql.conf file and reload the configuration:
edb_sql_protect.enabled = off
To re-enable monitoring, set the parameter back to on and reload the configuration:
edb_sql_protect.enabled = on
Backing up a database that is configured with SQL/Protect, and then restoring the backup
file to a new database, requires additional considerations beyond what is normally
associated with backup and restore procedures. This is primarily due to the use of Object
Identification numbers (OIDs) in the SQL/Protect tables, as explained in this section.
Note: This section is applicable if your backup and restore procedures result in the re-
creation of database objects in the new database with new OIDs such as is the case when
using the pg_dump backup program.
If you are backing up your Advanced Server database server by simply using the
operating system's copy utility to create a binary image of the Advanced Server data files
(the file system backup method), then this section does not apply.
When a database object is created, Advanced Server assigns an OID to the object, which
is then used whenever a reference is needed to the object in the database catalogs. If you
create the same database object in two databases, such as a table with the same CREATE
TABLE statement, each table is assigned a different OID in each database.
In a backup and restore operation that results in the re-creation of the backed up database
objects, the restored objects end up with different OIDs in the new database than what
they were assigned in the original database. As a result, the OIDs referencing databases,
roles, and relations stored in the edb_sql_protect and edb_sql_protect_rel
tables are no longer valid when these tables are simply dumped to a backup file and then
restored to a new database.
Step 1: Create a backup file of the original database. The following example shows a
plain-text backup file named /tmp/edb.dmp created from database edb using the
pg_dump utility program:
$ cd /opt/edb/as10/bin
$ ./pg_dump -U enterprisedb -Fp -f /tmp/edb.dmp edb
Password:
$
Step 2: Connect to the database as a superuser and export the SQL/Protect data using the
export_sqlprotect('sqlprotect_file') function where sqlprotect_file is
the fully qualified path to a file where the SQL/Protect data is to be saved.
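A sketch of the call; the file path shown is illustrative:

edb=# SELECT sqlprotect.export_sqlprotect('/tmp/sqlprotect.dmp');
 export_sqlprotect
-------------------

(1 row)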
Step 1: Restore the backup file to the new database. The following example uses the
psql utility program to restore the plain-text backup file /tmp/edb.dmp to a newly
created database named newdb:
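A sketch mirroring the pg_dump example above; the exact invocation may vary with
your installation:

$ cd /opt/edb/as10/bin
$ ./psql -d newdb -U enterprisedb -f /tmp/edb.dmp
Password: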
Step 2: Connect to the new database as a superuser and delete all rows from the
edb_sql_protect_rel table.
This step removes any existing rows in the edb_sql_protect_rel table that were
backed up from the original database. These rows do not contain the correct OIDs
relative to the database where the backup file has been restored.
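A sketch of the deletion (the reported row count will vary):

newdb=# DELETE FROM sqlprotect.edb_sql_protect_rel;
DELETE 2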
Step 3: Delete all rows from the edb_sql_protect table.
This step removes any existing rows in the edb_sql_protect table that were backed
up from the original database. These rows do not contain the correct OIDs relative to the
database where the backup file has been restored.
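A sketch of the deletion (the reported row count will vary):

newdb=# DELETE FROM sqlprotect.edb_sql_protect;
DELETE 1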
Step 4: Delete any statistics that may exist for the database.
This step removes any existing statistics for the database to which you are restoring the
backup. The following query displays any existing statistics:
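For example (any rows returned list the role names whose statistics should be dropped):

newdb=# SELECT * FROM sqlprotect.edb_sql_protect_stats;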
For each row that appears in the preceding query, use the drop_stats function
specifying the role name of the entry.
For example, if a row appeared with appuser in the username column, issue the
following command to remove it:
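A sketch of the call:

newdb=# SELECT sqlprotect.drop_stats('appuser');
 drop_stats
------------

(1 row)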
Step 5: Delete any offending queries that may exist for the database.
This step removes any existing offending queries for the database to which you are
restoring the backup. The following query displays any existing queries:
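For example (any rows returned list the role names whose queries should be dropped):

newdb=# SELECT * FROM sqlprotect.edb_sql_protect_queries;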
For each row that appears in the preceding query, use the drop_queries function
specifying the role name of the entry.
For example, if a row appeared with appuser in the username column, issue the
following command to remove it:
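A sketch of the call:

newdb=# SELECT sqlprotect.drop_queries('appuser');
 drop_queries
--------------

(1 row)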
Step 6: Make sure the role names that were protected by SQL/Protect in the original
database exist in the database server where the new database resides.
If the original and new databases reside in the same database server, then nothing needs
to be done assuming you have not deleted any of these roles from the database server.
Step 7: Connect to the new database as a superuser and import the SQL/Protect data
using the import_sqlprotect('sqlprotect_file') function, where
sqlprotect_file is the fully qualified path to the file created in Step 2 of the backup
procedure.
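A sketch of the call, assuming the export file path used earlier:

newdb=# SELECT sqlprotect.import_sqlprotect('/tmp/sqlprotect.dmp');
 import_sqlprotect
-------------------

(1 row)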
The SQL/Protect tables and statistics are now properly restored for this database. This is
verified by the following queries on the Advanced Server system catalogs:
newdb=# SELECT datname, oid FROM pg_database;
datname | oid
-----------+-------
template1 | 1
template0 | 13909
edb | 13917
newdb | 16679
(4 rows)
newdb=# \x
Expanded display is on.
newdb=# SELECT * FROM sqlprotect.edb_sql_protect_queries;
-[ RECORD 1 ]+---------------------------------------------
username | appuser
ip_address |
port |
machine_name |
date_time | 20-JUN-14 13:21:00 -04:00
query | CREATE TABLE appuser_tab_2 (f1 INTEGER);
-[ RECORD 2 ]+---------------------------------------------
username | appuser
ip_address |
port |
machine_name |
date_time | 20-JUN-14 13:22:00 -04:00
query | INSERT INTO appuser_tab_2 VALUES (2);
-[ RECORD 3 ]+---------------------------------------------
username | appuser
ip_address | 192.168.2.6
port | 50098
machine_name |
date_time | 20-JUN-14 13:39:00 -04:00
query | CREATE TABLE appuser_tab_3 (f1 INTEGER);
-[ RECORD 4 ]+---------------------------------------------
username | appuser
ip_address | 192.168.2.6
port | 50098
machine_name |
date_time | 20-JUN-14 13:39:00 -04:00
query | INSERT INTO appuser_tab_2 VALUES (1);
-[ RECORD 5 ]+---------------------------------------------
username | appuser
ip_address | 192.168.2.6
port | 50098
machine_name |
date_time | 20-JUN-14 13:39:00 -04:00
query | SELECT * FROM appuser_tab_2 WHERE 'x' = 'x';
Note the following values in table edb_sql_protect:
dbid. Matches the value in the oid column from pg_database for newdb
roleid. Matches the value in the oid column from pg_roles for appuser
Also note that in table edb_sql_protect_rel, the values in the relid column match
the values in the oid column of pg_class for relations dept and appuser_tab.
Step 8: Verify that the SQL/Protect configuration parameters are set as desired in the
postgresql.conf file for the database server running the new database. Restart the
database server or reload the configuration file as appropriate.
4.2 Virtual Private Database
Virtual Private Database is a form of fine-grained access control that uses security
policies to control which rows of a table particular users can access. The rules that
encode a security policy are defined in a policy function, which is an SPL function with
certain input parameters and return value. The security policy is the named association
of the policy function to a particular database object, typically a table.
Note: In Advanced Server, the policy function can be written in any language supported
by Advanced Server such as SQL and PL/pgSQL in addition to SPL.
Note: The database objects currently supported by Advanced Server Virtual Private
Database are tables. Policies cannot be applied to views or synonyms.
Note: The only way security policies can be circumvented is if the EXEMPT ACCESS
POLICY system privilege has been granted to a user. The EXEMPT ACCESS POLICY
privilege should be granted with extreme care as a user with this privilege is exempted
from all policies in the database.
The DBMS_RLS package provides procedures to create policies, remove policies, enable
policies, and disable policies.
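A minimal sketch of how these pieces fit together, using the sample emp table; the
function name, policy name, and predicate are illustrative, not taken from the original
text:

CREATE OR REPLACE FUNCTION verify_session_user (
    p_schema        VARCHAR2,
    p_object        VARCHAR2
)
RETURN VARCHAR2
IS
BEGIN
    -- The returned string is appended as a WHERE predicate to statements
    -- issued against the protected table.
    RETURN 'ename = SYS_CONTEXT(''USERENV'', ''SESSION_USER'')';
END;

EXEC DBMS_RLS.ADD_POLICY('public', 'emp', 'emp_policy', 'public',
    'verify_session_user', 'select');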
4.3 sslutils
The sslutils package provides the functions shown in the following sections.
4.3.1 openssl_rsa_generate_key
When invoking the function, pass the number of bits as an integer value; the function
returns the generated key.
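For example, the following call might generate a 2048-bit RSA key pair and return it as
text (the key size shown is illustrative):

SELECT openssl_rsa_generate_key(2048);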
4.3.2 openssl_rsa_key_to_csr
Parameters
parameter 1
parameter 2
The common name (e.g., agentN) of the agent that will use the signing request.
parameter 3
parameter 4
parameter 5
The location (city) within the state in which the server resides.
parameter 6
parameter 7
4.3.3 openssl_csr_to_crt
Parameters
parameter 1
parameter 2
parameter 3
The path to the certificate authority's private key or (if argument 2 is NULL) the
path to a private key.
4.3.4 openssl_rsa_generate_crl
Parameters
parameter 1
parameter 2
5 EDB Resource Manager
EDB Resource Manager is an Advanced Server feature that provides the capability to
control the use of operating system resources by Advanced Server processes. This
capability allows you to protect the system from processes that may uncontrollably
overuse and monopolize certain system resources.
The following are some key points about using EDB Resource Manager.
A default resource group can be assigned to a role using the ALTER ROLE ...
SET command, or to a database by the ALTER DATABASE ... SET command.
The entire database server instance can be assigned a default resource group by
setting the edb_resource_group parameter in the postgresql.conf file.
In order to include resource groups in a backup file of the database server
instance, use the pg_dumpall backup utility with default settings (that is, do not
specify any of the --globals-only, --roles-only, or --tablespaces-
only options).
Use the CREATE RESOURCE GROUP command to create a new resource group.
Description
The CREATE RESOURCE GROUP command creates a resource group with the specified
name. Resource limits can then be defined on the group with the ALTER RESOURCE
GROUP command. The resource group is accessible from all databases in the Advanced
Server instance.
To use the CREATE RESOURCE GROUP command you must have superuser privileges.
Parameters
group_name
The name of the resource group.
Example
The following example results in the creation of three resource groups named resgrp_a,
resgrp_b, and resgrp_c.
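A sketch of the commands:

edb=# CREATE RESOURCE GROUP resgrp_a;
CREATE RESOURCE GROUP
edb=# CREATE RESOURCE GROUP resgrp_b;
CREATE RESOURCE GROUP
edb=# CREATE RESOURCE GROUP resgrp_c;
CREATE RESOURCE GROUP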
The following query shows the entries for the resource groups in the
edb_resource_group catalog.
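A sketch of the query; the column names are assumed from the edb_resource_group
catalog, and newly created groups have no limits set:

edb=# SELECT * FROM edb_resource_group;
 rgrpname | rgrpcpuratelimit | rgrpdirtyratelimit
----------+------------------+--------------------
 resgrp_a |                0 |                  0
 resgrp_b |                0 |                  0
 resgrp_c |                0 |                  0
(3 rows)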
Use the ALTER RESOURCE GROUP command to change the attributes of an existing
resource group. The command syntax comes in three forms: one form renames the
group, one form sets a resource type to a specified value or to DEFAULT, and one form
resets the assignment of a resource type to its default within the group.
Description
The first form with the RENAME TO clause assigns a new name to an existing resource
group.
The second form with the SET resource_type TO clause either assigns the specified
literal value to a resource type, or resets the resource type when DEFAULT is specified.
Resetting or setting a resource type to DEFAULT means that the resource group has no
defined limit on that resource type.
The third form with the RESET resource_type clause resets the resource type for the
group as described previously.
To use the ALTER RESOURCE GROUP command you must have superuser privileges.
Parameters
group_name
The name of the resource group to be altered.
new_name
The new name to be assigned to the resource group.
resource_type
The resource type parameter specifying the type of resource to which a usage
value is to be set.
value | DEFAULT
The value to be assigned to the resource type. Specify DEFAULT to reset the
resource type so that the group has no defined limit on it.
Example
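A sketch of the three forms; the group and value choices are illustrative:

edb=# ALTER RESOURCE GROUP resgrp_a RENAME TO newgrp;
ALTER RESOURCE GROUP
edb=# ALTER RESOURCE GROUP resgrp_b SET cpu_rate_limit TO .5;
ALTER RESOURCE GROUP
edb=# ALTER RESOURCE GROUP resgrp_b SET dirty_rate_limit TO 6144;
ALTER RESOURCE GROUP
edb=# ALTER RESOURCE GROUP resgrp_c RESET cpu_rate_limit;
ALTER RESOURCE GROUP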
The following query shows the effect of the ALTER RESOURCE GROUP commands on
the entries in the edb_resource_group catalog.
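A sketch of the catalog query after those commands (column names assumed from the
edb_resource_group catalog):

edb=# SELECT * FROM edb_resource_group;
 rgrpname | rgrpcpuratelimit | rgrpdirtyratelimit
----------+------------------+--------------------
 newgrp   |                0 |                  0
 resgrp_b |              0.5 |               6144
 resgrp_c |                0 |                  0
(3 rows)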
Description
The DROP RESOURCE GROUP command removes a resource group with the specified
name.
To use the DROP RESOURCE GROUP command you must have superuser privileges.
Parameters
group_name
The name of the resource group to be removed.
IF EXISTS
Do not throw an error if the resource group does not exist. A notice is issued in
this case.
Example
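A sketch, using the group renamed in the previous example:

edb=# DROP RESOURCE GROUP newgrp;
DROP RESOURCE GROUP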
Use the SET edb_resource_group TO group_name command to assign the current
process to a specified resource group. The resource type settings of the group
immediately take effect on the current process. If the command is used to change the
resource group assigned to the current process, the resource type settings of the newly
assigned group immediately take effect.
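For example:

edb=# SET edb_resource_group TO resgrp_b;
SET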
A default resource group can be assigned to a role using the ALTER ROLE ... SET
command. For more information about the ALTER ROLE command, please refer to the
PostgreSQL core documentation available at:
https://www.postgresql.org/docs/10/static/sql-alterrole.html
A default resource group can be assigned to a database by the ALTER DATABASE ...
SET command. For more information about the ALTER DATABASE command, please
refer to the PostgreSQL core documentation available at:
https://www.postgresql.org/docs/10/static/sql-alterdatabase.html
The entire database server instance can be assigned a default resource group by setting
the edb_resource_group configuration parameter in the postgresql.conf file as
shown by the following.
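A sketch of the setting and its verification, assuming the resgrp_b group from the
earlier examples:

edb_resource_group = 'resgrp_b'

After reloading the configuration file, the setting can be confirmed from a new session:

edb=# SHOW edb_resource_group;
 edb_resource_group
--------------------
 resgrp_b
(1 row)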
For removing a default resource group from a role, use the ALTER ROLE ... RESET
form of the ALTER ROLE command.
For removing a default resource group from a database, use the ALTER DATABASE ...
RESET form of the ALTER DATABASE command.
For removing a default resource group from the database server instance, set the
edb_resource_group configuration parameter to an empty string in the
postgresql.conf file and reload the configuration file.
After resource groups have been created, the number of processes actively using these
resource groups can be obtained from the view edb_all_resource_groups.
edb=# SELECT * FROM edb_all_resource_groups;
-[ RECORD 2 ]----------------+------------------
group_name                   | resgrp_b
active_processes             | 2
cpu_rate_limit               | 0.4
per_process_cpu_rate_limit   | 0.195694289022895
dirty_rate_limit             | 6144
per_process_dirty_rate_limit | 3785.92924684337
-[ RECORD 3 ]----------------+------------------
group_name                   | resgrp_c
active_processes             | 1
cpu_rate_limit               | 0.3
per_process_cpu_rate_limit   | 0.292342129631091
dirty_rate_limit             | 3072
per_process_dirty_rate_limit | 3072
The CPU rate limit and dirty rate limit settings that are assigned to these resource groups
are as follows.
Set the cpu_rate_limit parameter to the fraction of CPU time over wall-clock time
that the combined, simultaneous CPU usage of all processes in the group should not
exceed. Thus, the value assigned to cpu_rate_limit should typically be less than or
equal to 1.
When multiplied by 100, the cpu_rate_limit can also be interpreted as the CPU usage
percentage for a resource group.
EDB Resource Manager utilizes CPU throttling to keep the aggregate CPU usage of all
processes in the group within the limit specified by the cpu_rate_limit parameter. A
process in the group may be interrupted and put into sleep mode for a short interval of
time to maintain the defined limit. When and how such interruptions occur is defined by a
proprietary algorithm used by EDB Resource Manager.
The ALTER RESOURCE GROUP command with the SET cpu_rate_limit clause is
used to set the CPU rate limit for a resource group.
In the following example the CPU usage limit is set to 50% for resgrp_a, 40% for
resgrp_b and 30% for resgrp_c. This means that the combined CPU usage of all
processes assigned to resgrp_a is maintained at approximately 50%. Similarly, for all
processes in resgrp_b, the combined CPU usage is kept to approximately 40%, etc.
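A sketch of the commands that produce these settings:

edb=# ALTER RESOURCE GROUP resgrp_a SET cpu_rate_limit TO .5;
ALTER RESOURCE GROUP
edb=# ALTER RESOURCE GROUP resgrp_b SET cpu_rate_limit TO .4;
ALTER RESOURCE GROUP
edb=# ALTER RESOURCE GROUP resgrp_c SET cpu_rate_limit TO .3;
ALTER RESOURCE GROUP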
edb=# SELECT rgrpname, rgrpcpuratelimit FROM edb_resource_group;
 rgrpname | rgrpcpuratelimit
----------+------------------
 resgrp_a |              0.5
 resgrp_b |              0.4
 resgrp_c |              0.3
(3 rows)
Changing the cpu_rate_limit of a resource group not only affects new processes that
are assigned to the group, but any currently running processes that are members of the
group are immediately affected by the change. That is, if the cpu_rate_limit is
changed from .5 to .3, currently running processes in the group would be throttled
downward so that the aggregate group CPU usage would be near 30% instead of 50%.
To illustrate the effect of setting the CPU rate limit for resource groups, the following
examples use a CPU-intensive calculation of 20000 factorial (multiplication of 20000 *
19999 * 19998, etc.) performed by the query SELECT 20000!; run in the psql
command line utility.
The resource groups with the CPU rate limit settings shown in the previous query are
used in these examples.
The following shows that the current process is set to use resource group resgrp_b. The
factorial calculation is then started.
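A sketch of the session, assuming the resgrp_b group defined earlier:

edb=# SET edb_resource_group TO resgrp_b;
SET
edb=# SHOW edb_resource_group;
 edb_resource_group
--------------------
 resgrp_b
(1 row)
edb=# SELECT 20000!;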
In a second session, the Linux top command is used to display the CPU usage as shown
under the %CPU column. The following is a snapshot at an arbitrary point in time as the
top command output periodically changes.
$ top
top - 16:37:03 up 4:15, 7 users, load average: 0.49, 0.20, 0.38
Tasks: 202 total, 1 running, 201 sleeping, 0 stopped, 0 zombie
Cpu(s): 42.7%us, 2.3%sy, 0.0%ni, 55.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0
Mem: 1025624k total, 791160k used, 234464k free, 23400k buffers
Swap: 103420k total, 13404k used, 90016k free, 373504k cached
The psql session performing the factorial calculation is shown by the row where edb-
postgres appears under the COMMAND column. The CPU usage of that session, shown
under the %CPU column, is 39.9, which is close to the 40% CPU limit set for resource
group resgrp_b.
By contrast, if the psql session is removed from the resource group and the factorial
calculation is performed again, the CPU usage is much higher.
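A sketch of detaching the session from its resource group before rerunning the
calculation (SET ... TO DEFAULT clears the session-level setting):

edb=# SET edb_resource_group TO DEFAULT;
SET
edb=# SELECT 20000!;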
Under the %CPU column for edb-postgres, the CPU usage is now 93.6, which is
significantly higher than the 39.9 when the process was part of the resource group.
$ top
top - 16:43:03 up 4:21, 7 users, load average: 0.66, 0.33, 0.37
Tasks: 202 total, 5 running, 197 sleeping, 0 stopped, 0 zombie
Cpu(s): 96.7%us, 3.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0
Mem: 1025624k total, 791228k used, 234396k free, 23560k buffers
Swap: 103420k total, 13404k used, 90016k free, 373508k cached
As stated previously, the CPU rate limit applies to the aggregate of all processes in the
resource group. This concept is illustrated in the following example.
Session 1:
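A sketch of this session's commands:

edb=# SET edb_resource_group TO resgrp_b;
SET
edb=# SELECT 20000!;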
Session 2:
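A sketch of this session's commands:

edb=# SET edb_resource_group TO resgrp_b;
SET
edb=# SELECT 20000!;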
$ top
top - 16:53:03 up 4:31, 7 users, load average: 0.31, 0.19, 0.27
Tasks: 202 total, 1 running, 201 sleeping, 0 stopped, 0 zombie
Cpu(s): 41.2%us, 3.0%sy, 0.0%ni, 55.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0
Mem: 1025624k total, 792020k used, 233604k free, 23844k buffers
Swap: 103420k total, 13404k used, 90016k free, 373508k cached
There are now two processes named edb-postgres with %CPU values of 19.9 and 19.6,
whose sum is close to the 40% CPU usage set for resource group resgrp_b.
The following command sequence displays the sum of all edb-postgres processes
sampled over half second time intervals. This shows how the total CPU usage of the
processes in the resource group changes over time as EDB Resource Manager throttles
the processes to keep the total resource group CPU usage near 40%.
$ while [[ 1 -eq 1 ]]; do top -d0.5 -b -n2 | grep edb-postgres | awk '{ SUM
+= $9} END { print SUM / 2 }'; done
37.2
39.1
38.9
38.3
44.7
39.2
42.5
39.1
39.2
39.2
41
42.85
46.1
.
.
.
In this example, two additional psql sessions are used along with the previous two
sessions. The third and fourth sessions perform the same factorial calculation within
resource group resgrp_c with a cpu_rate_limit of .3 (30% CPU usage).
Session 3:
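A sketch of this session's commands, assuming resgrp_c:

edb=# SET edb_resource_group TO resgrp_c;
SET
edb=# SELECT 20000!;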
Session 4:
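A sketch of this session's commands, assuming resgrp_c:

edb=# SET edb_resource_group TO resgrp_c;
SET
edb=# SELECT 20000!;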
$ top
top - 17:45:09 up 5:23, 8 users, load average: 0.47, 0.17, 0.26
Tasks: 203 total, 4 running, 199 sleeping, 0 stopped, 0 zombie
Cpu(s): 70.2%us, 0.0%sy, 0.0%ni, 29.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0
Mem: 1025624k total, 806140k used, 219484k free, 25296k buffers
Swap: 103420k total, 13404k used, 90016k free, 374092k cached
The two resource groups in use have CPU usage limits of 40% and 30%. The sum of the
%CPU column for the first two edb-postgres processes is 39.5 (approximately 40%,
which is the limit for resgrp_b) and the sum of the %CPU column for the third and
fourth edb-postgres processes is 31.6 (approximately 30%, which is the limit for
resgrp_c).
The sum of the CPU usage limits of the two resource groups to which these processes
belong is 70%. The following output shows that the sum for the four processes hovers
around 70%.
$ while [[ 1 -eq 1 ]]; do top -d0.5 -b -n2 | grep edb-postgres | awk '{ SUM
+= $9} END { print SUM / 2 }'; done
61.8
76.4
72.6
69.55
64.55
79.95
68.55
71.25
74.85
62
74.85
76.9
72.4
65.9
74.9
68.25
By contrast, if three sessions are processing where two sessions remain in resgrp_b, but
the third session does not belong to any resource group, the top command shows the
following output.
$ top
top - 17:24:55 up 5:03, 7 users, load average: 1.00, 0.41, 0.38
Tasks: 199 total, 3 running, 196 sleeping, 0 stopped, 0 zombie
Cpu(s): 99.7%us, 0.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0
Mem: 1025624k total, 797692k used, 227932k free, 24724k buffers
Swap: 103420k total, 13404k used, 90016k free, 374068k cached
The second and third edb-postgres processes, belonging to the resource group where
the CPU usage is limited to 40%, have a total CPU usage of 37.8. However, the first
edb-postgres process has a 58.6% CPU usage as it is not within a resource group, and
essentially utilizes the remaining available CPU resources on the system.
Likewise, the following output shows the sum of all three sessions is around 95% since
one of the sessions has no set limit on its CPU usage.
$ while [[ 1 -eq 1 ]]; do top -d0.5 -b -n2 | grep edb-postgres | awk '{ SUM
+= $9} END { print SUM / 2 }'; done
96
90.35
92.55
96.4
94.1
90.7
95.7
95.45
93.65
87.95
96.75
94.25
95.45
97.35
92.9
96.05
96.25
94.95
.
.
.
Set the dirty_rate_limit parameter to the number of kilobytes per second for the
combined rate at which all the processes in the group should write to, or dirty, the shared
buffers. An example setting would be 3072 kilobytes per second.
EDB Resource Manager utilizes dirty buffer throttling to keep the aggregate, shared
buffer writing rate of all processes in the group near the limit specified by the
dirty_rate_limit parameter. A process in the group may be interrupted and put into
sleep mode for a short interval of time to maintain the defined limit. When and how such
interruptions occur is defined by a proprietary algorithm used by EDB Resource
Manager.
The ALTER RESOURCE GROUP command with the SET dirty_rate_limit clause is
used to set the dirty rate limit for a resource group.
In the following example the dirty rate limit is set to 12288 kilobytes per second for
resgrp_a, 6144 kilobytes per second for resgrp_b and 3072 kilobytes per second for
resgrp_c. This means that the combined writing rate to the shared buffer of all
processes assigned to resgrp_a is maintained at approximately 12288 kilobytes per
second. Similarly, for all processes in resgrp_b, the combined writing rate to the shared
buffer is kept to approximately 6144 kilobytes per second, etc.
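A sketch of the commands that produce these settings:

edb=# ALTER RESOURCE GROUP resgrp_a SET dirty_rate_limit TO 12288;
ALTER RESOURCE GROUP
edb=# ALTER RESOURCE GROUP resgrp_b SET dirty_rate_limit TO 6144;
ALTER RESOURCE GROUP
edb=# ALTER RESOURCE GROUP resgrp_c SET dirty_rate_limit TO 3072;
ALTER RESOURCE GROUP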
Changing the dirty_rate_limit of a resource group not only affects new processes
that are assigned to the group, but any currently running processes that are members of
the group are immediately affected by the change. That is, if the dirty_rate_limit is
changed from 12288 to 3072, currently running processes in the group would be throttled
downward so that the aggregate group dirty rate would be near 3072 kilobytes per second
instead of 12288 kilobytes per second.
To illustrate the effect of setting the dirty rate limit for resource groups, the following
examples use a table named t1 for intensive I/O operations, and the
pg_stat_statements module to measure the elapsed time and the number of shared
buffer blocks dirtied by each command. Enable the module by adding
$libdir/pg_stat_statements to the shared_preload_libraries parameter in
the postgresql.conf file and restarting the database server:
shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen,$libdir/pg_stat_statements'
Then use the CREATE EXTENSION command to complete the creation of the
pg_stat_statements module.
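For example, installing the extension in the public schema:

edb=# CREATE EXTENSION pg_stat_statements SCHEMA public;
CREATE EXTENSION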
The resource groups with the dirty rate limit settings shown previously are used in these
examples.
The following sequence of commands shows the creation of table t1. The current process
is set to use resource group resgrp_b. The pg_stat_statements view is cleared out
by running the pg_stat_statements_reset() function.
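A sketch of the sequence; the table definition (including the FILLFACTOR setting) is
illustrative:

edb=# CREATE TABLE t1 (c1 INTEGER, c2 CHARACTER(500)) WITH (FILLFACTOR = 10);
CREATE TABLE
edb=# SET edb_resource_group TO resgrp_b;
SET
edb=# SELECT pg_stat_statements_reset();
 pg_stat_statements_reset
--------------------------

(1 row)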
Finally, the INSERT command generates a series of integers from 1 to 10,000 to populate
the table, dirtying approximately 10,000 blocks.
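A sketch of the INSERT and the pg_stat_statements lookup; the normalized query
text varies by version, and the timing and block counts shown are those cited in the
discussion below:

edb=# INSERT INTO t1 VALUES (generate_series(1,10000), 'aaa');
INSERT 0 10000
edb=# SELECT query, rows, total_time, shared_blks_dirtied FROM pg_stat_statements;
-[ RECORD 1 ]-------+---------------------------------------------------
query               | INSERT INTO t1 VALUES (generate_series($1,$2), $3)
rows                | 10000
total_time          | 13496.184
shared_blks_dirtied | 10003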
The actual dirty rate is calculated as follows.
The number of blocks dirtied per millisecond (ms) is 10003 blocks / 13496.184
ms, which yields 0.74117247 blocks per millisecond.
Multiply the result by 1000 to give the number of shared blocks dirtied per second
(1 second = 1000 ms), which yields 741.17247 blocks per second.
Multiply the result by 8.192 to give the number of kilobytes dirtied per second (1
block = 8.192 kilobytes), which yields approximately 6072 kilobytes per second.
Note that the actual dirty rate of 6072 kilobytes per second is close to the dirty rate limit
for the resource group, which is 6144 kilobytes per second.
By contrast, if the steps are repeated again without the process belonging to any resource
group, the dirty buffer rate is much higher.
The following shows the results from the INSERT command without the usage of a
resource group.
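A sketch of the lookup for the unthrottled run, with the values cited in the discussion
below:

edb=# SELECT query, rows, total_time, shared_blks_dirtied FROM pg_stat_statements;
-[ RECORD 1 ]-------+---------------------------------------------------
query               | INSERT INTO t1 VALUES (generate_series($1,$2), $3)
rows                | 10000
total_time          | 2432.165
shared_blks_dirtied | 10003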
First, note the total time was only 2432.165 milliseconds as compared to 13496.184
milliseconds when a resource group with a dirty rate limit set to 6144 kilobytes per
second was used.
The actual dirty rate without the use of a resource group is calculated as follows.
The number of blocks dirtied per millisecond (ms) is 10003 blocks / 2432.165 ms,
which yields 4.112797 blocks per millisecond.
Multiply the result by 1000 to give the number of shared blocks dirtied per second
(1 second = 1000 ms), which yields 4112.797 blocks per second.
Multiply the result by 8.192 to give the number of kilobytes dirtied per second (1
block = 8.192 kilobytes), which yields approximately 33692 kilobytes per second.
Note that the actual dirty rate of 33692 kilobytes per second is significantly higher than
when the resource group with a dirty rate limit of 6144 kilobytes per second was used.
As stated previously, the dirty rate limit applies to the aggregate of all processes in the
resource group. This concept is illustrated in the following example.
For this example the inserts are performed simultaneously on two different tables in two
separate psql sessions, each of which has been added to resource group resgrp_b that
has a dirty_rate_limit set to 6144 kilobytes per second.
Session 1:
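A sketch of session 1's commands:

edb=# SET edb_resource_group TO resgrp_b;
SET
edb=# INSERT INTO t1 VALUES (generate_series(1,10000), 'aaa');
INSERT 0 10000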
Session 2:
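A sketch of session 2's commands; the second table name, t2, is an assumption:

edb=# SET edb_resource_group TO resgrp_b;
SET
edb=# SELECT pg_stat_statements_reset();
 pg_stat_statements_reset
--------------------------

(1 row)
edb=# INSERT INTO t2 VALUES (generate_series(1,10000), 'aaa');
INSERT 0 10000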
Note: The INSERT commands in session 1 and session 2 were started after the SELECT
pg_stat_statements_reset() command in session 2 was run.
The following shows the results from the INSERT commands in the two sessions.
RECORD 3 shows the results from session 1. RECORD 2 shows the results from session 2.
First, note the total time was 33215.334 milliseconds for session 1 and 30591.551
milliseconds for session 2. When only one session was active in the same resource group
as shown in the first example, the time was 13496.184 milliseconds. Thus more active
processes in the resource group result in a slower dirty rate for each active process in the
group. This is shown in the following calculations.
The number of blocks dirtied per millisecond (ms) is 10003 blocks / 33215.334
ms, which yields 0.30115609 blocks per millisecond.
Multiply the result by 1000 to give the number of shared blocks dirtied per second
(1 second = 1000 ms), which yields 301.15609 blocks per second.
Multiply the result by 8.192 to give the number of kilobytes dirtied per second (1
block = 8.192 kilobytes), which yields approximately 2467 kilobytes per second.
The number of blocks dirtied per millisecond (ms) is 10003 blocks / 30591.551
ms, which yields 0.32698571 blocks per millisecond.
Multiply the result by 1000 to give the number of shared blocks dirtied per second
(1 second = 1000 ms), which yields 326.98571 blocks per second.
Multiply the result by 8.192 to give the number of kilobytes dirtied per second (1
block = 8.192 kilobytes), which yields approximately 2679 kilobytes per second.
The combined dirty rate from session 1 (2467 kilobytes per second) and from session 2
(2679 kilobytes per second) yields 5146 kilobytes per second, which is below the set
dirty rate limit of the resource group (6144 kilobytes per second).
In this example, two additional psql sessions are used along with the previous two
sessions. The third and fourth sessions perform the same INSERT command in resource
group resgrp_c with a dirty_rate_limit of 3072 kilobytes per second.
Sessions 1 and 2 are repeated as illustrated in the prior example using resource group
resgrp_b with a dirty_rate_limit of 6144 kilobytes per second.
Session 3:
edb=# SET edb_resource_group TO resgrp_c;
SET
edb=# SHOW edb_resource_group;
 edb_resource_group
--------------------
 resgrp_c
(1 row)
Session 4:
edb=# SET edb_resource_group TO resgrp_c;
SET
edb=# SHOW edb_resource_group;
 edb_resource_group
--------------------
 resgrp_c
(1 row)
Note: The INSERT commands in all four sessions were started after the SELECT
pg_stat_statements_reset() command in session 4 was run.
The following shows the results from the INSERT commands in the four sessions.
RECORD 3 shows the results from session 1. RECORD 2 shows the results from session 2.
RECORD 4 shows the results from session 3. RECORD 5 shows the results from session 4.
First note that the times of session 1 (28407.435) and session 2 (31343.458) are close to
each other as they are both in the same resource group with dirty_rate_limit set to
6144, as compared to the times of session 3 (52727.846) and session 4 (56063.697),
which are in the resource group with dirty_rate_limit set to 3072. The latter group
has a slower dirty rate limit so the expected processing time is longer as is the case for
sessions 3 and 4.
The number of blocks dirtied per millisecond (ms) is 10003 blocks / 28407.435
ms, which yields 0.35212612 blocks per millisecond.
Multiply the result by 1000 to give the number of shared blocks dirtied per second
(1 second = 1000 ms), which yields 352.12612 blocks per second.
Multiply the result by 8.192 to give the number of kilobytes dirtied per second (1
block = 8.192 kilobytes), which yields approximately 2885 kilobytes per second.
The number of blocks dirtied per millisecond (ms) is 10003 blocks / 31343.458
ms, which yields 0.31914156 blocks per millisecond.
Multiply the result by 1000 to give the number of shared blocks dirtied per second
(1 second = 1000 ms), which yields 319.14156 blocks per second.
Multiply the result by 8.192 to give the number of kilobytes dirtied per second (1
block = 8.192 kilobytes), which yields approximately 2614 kilobytes per second.
The combined dirty rate from session 1 (2885 kilobytes per second) and from session 2
(2614 kilobytes per second) yields 5499 kilobytes per second, which is near the set dirty
rate limit of the resource group (6144 kilobytes per second).
The number of blocks dirtied per millisecond (ms) is 10003 blocks / 52727.846
ms, which yields 0.18971001 blocks per millisecond.
Multiply the result by 1000 to give the number of shared blocks dirtied per second
(1 second = 1000 ms), which yields 189.71001 blocks per second.
Multiply the result by 8.192 to give the number of kilobytes dirtied per second (1
block = 8.192 kilobytes), which yields approximately 1554 kilobytes per second.
The number of blocks dirtied per millisecond (ms) is 10003 blocks / 56063.697
ms, which yields 0.17842205 blocks per millisecond.
Multiply the result by 1000 to give the number of shared blocks dirtied per second
(1 second = 1000 ms), which yields 178.42205 blocks per second.
Multiply the result by 8.192 to give the number of kilobytes dirtied per second (1
block = 8.192 kilobytes), which yields approximately 1462 kilobytes per second.
The combined dirty rate from session 3 (1554 kilobytes per second) and from session 4
(1462 kilobytes per second) yields 3016 kilobytes per second, which is near the set dirty
rate limit of the resource group (3072 kilobytes per second).
Thus, this demonstrates how EDB Resource Manager keeps the aggregate dirty rate of
the active processes in its groups close to the dirty rate limit set for each group.
5.4.1 edb_all_resource_groups
The edb_all_resource_groups view displays, for each resource group, the number
of currently active processes in the group along with the group's CPU rate limit and dirty
rate limit settings.
5.4.2 edb_resource_group
The edb_resource_group catalog stores the name and the resource type settings of
each resource group created with the CREATE RESOURCE GROUP command.
6 libpq C Library
libpq is the C application programmer's interface to Advanced Server. libpq is a set of
library functions that allow client programs to pass queries to Advanced Server and to
receive the results of these queries.
libpq is also the underlying engine for several other EnterpriseDB application interfaces,
including those written for C++, Perl, Python, Tcl, and ECPG. Consequently, some
aspects of libpq's behavior will be important to you if you use one of those packages.
Client programs that use libpq must include the header file libpq-fe.h and must link
with the libpq library.
The EnterpriseDB SPL language can be used with the libpq interface library, providing
support for features such as the REFCURSOR handling described below.
In earlier releases, Advanced Server provided support for REFCURSORs through the
following libpq functions; these functions should now be considered deprecated:
PQCursorResult()
PQgetCursorResult()
PQnCursor()
You may now use PQexec() and PQgetvalue() to retrieve a REFCURSOR returned by
an SPL (or PL/pgSQL) function. A REFCURSOR is returned in the form of a null-
terminated string indicating the name of the cursor. Once you have the name of the
cursor, you can execute one or more FETCH statements to retrieve the values exposed
through the cursor.
Please note that the samples that follow do not include error-handling code that would be
required in a real-world client application.
The following example shows an SPL function that returns a value of type REFCURSOR:

CREATE OR REPLACE FUNCTION getEmployees(p_deptno NUMERIC)
RETURN REFCURSOR
IS
    result REFCURSOR;
BEGIN
    OPEN result FOR SELECT * FROM emp WHERE deptno = p_deptno;
    RETURN result;
END;
This function expects a single parameter, p_deptno, and returns a REFCURSOR that
holds the result set for the SELECT query shown in the OPEN statement. The OPEN
statement executes the query and stores the result set in a cursor. The server constructs a
name for that cursor and stores the name in a variable (named result). The function
then returns the name of the cursor to the caller.
To call this function from a C client using libpq, you can use PQexec() and
PQgetvalue():
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "libpq-fe.h"

static void fetchAllRows(PGconn *conn, const char *cursorName,
                         const char *description);
static void fail(PGconn *conn, const char *msg);

int
main(int argc, char *argv[])
{
    PGconn   *conn = PQconnectdb(argv[1]);
    PGresult *result;

    if (PQstatus(conn) != CONNECTION_OK)
        fail(conn, PQerrorMessage(conn));

    /* A REFCURSOR is only valid within a transaction */
    result = PQexec(conn, "BEGIN TRANSACTION");

    if (PQresultStatus(result) != PGRES_COMMAND_OK)
        fail(conn, PQerrorMessage(conn));

    PQclear(result);

    /* The result set contains a single value: the name of the cursor */
    result = PQexec(conn, "SELECT * FROM getEmployees(10)");

    if (PQresultStatus(result) != PGRES_TUPLES_OK)
        fail(conn, PQerrorMessage(conn));

    fetchAllRows(conn, PQgetvalue(result, 0, 0), "employees");

    PQclear(result);

    PQexec(conn, "COMMIT");
    PQfinish(conn);
    exit(0);
}

static void
fetchAllRows(PGconn *conn,
             const char *cursorName,
             const char *description)
{
    /* Room for the command text, the quoted cursor name, and the NUL */
    size_t     commandLength = strlen("FETCH ALL FROM ") +
                               strlen(cursorName) + 3;
    char      *commandText = malloc(commandLength);
    PGresult  *result;
    int        row;

    snprintf(commandText, commandLength, "FETCH ALL FROM \"%s\"", cursorName);

    result = PQexec(conn, commandText);

    if (PQresultStatus(result) != PGRES_TUPLES_OK)
        fail(conn, PQerrorMessage(conn));

    printf("-- %s --\n", description);

    for (row = 0; row < PQntuples(result); row++)
    {
        const char *delimiter = "";
        int         col;

        for (col = 0; col < PQnfields(result); col++)
        {
            printf("%s%s", delimiter, PQgetvalue(result, row, col));
            delimiter = ",";
        }
        printf("\n");
    }

    printf("\n");

    PQclear(result);
    free(commandText);
}

static void
fail(PGconn *conn, const char *msg)
{
    fprintf(stderr, "%s\n", msg);

    if (conn != NULL)
        PQfinish(conn);

    exit(-1);
}
The code sample contains a line of code that calls the getEmployees() function and
returns a result set that contains all of the employees in department 10:

result = PQexec(conn, "SELECT * FROM getEmployees(10)");
The PQexec() function returns a result set handle to the C program. The result set will
contain exactly one value; that value is the name of the cursor as returned by
getEmployees().
Once you have the name of the cursor, you can use the SQL FETCH statement to retrieve
the rows in that cursor. The function fetchAllRows() builds a FETCH ALL statement,
executes that statement, and then prints the result set of the FETCH ALL statement.
-- employees --
7782,CLARK,MANAGER,7839,09-JUN-81 00:00:00,2450.00,,10
7839,KING,PRESIDENT,,17-NOV-81 00:00:00,5000.00,,10
7934,MILLER,CLERK,7782,23-JAN-82 00:00:00,1300.00,,10
In the next example, the getEmpsAndDepts() function returns two cursor names:
The first REFCURSOR contains the name of a cursor (employees) that contains
all employees who work in a department within the range specified by the caller.
The second REFCURSOR contains the name of a cursor (departments) that
contains all departments in the range specified by the caller.
In this example, instead of returning a single REFCURSOR, the function returns a SETOF
REFCURSOR (which means 0 or more REFCURSORS). One other important difference is
that the libpq program should not expect a single REFCURSOR in the result set, but should
expect two rows, each of which will contain a single value (the first row contains the
name of the employees cursor, and the second row contains the name of the
departments cursor).
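A minimal sketch of an SPL function with this shape, using the sample emp and dept
tables; the parameter names are illustrative:

CREATE OR REPLACE FUNCTION getEmpsAndDepts(p_min NUMERIC, p_max NUMERIC)
RETURN SETOF REFCURSOR
IS
    emp_refcur  REFCURSOR;
    dept_refcur REFCURSOR;
BEGIN
    OPEN emp_refcur FOR SELECT * FROM emp
        WHERE deptno BETWEEN p_min AND p_max;
    RETURN NEXT emp_refcur;

    OPEN dept_refcur FOR SELECT * FROM dept
        WHERE deptno BETWEEN p_min AND p_max;
    RETURN NEXT dept_refcur;
END;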
As in the previous example, you can use PQexec() and PQgetvalue() to call the SPL
function:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "libpq-fe.h"

static void fetchAllRows(PGconn *conn, const char *cursorName,
                         const char *description);
static void fail(PGconn *conn, const char *msg);

int
main(int argc, char *argv[])
{
    PGconn   *conn = PQconnectdb(argv[1]);
    PGresult *result;

    if (PQstatus(conn) != CONNECTION_OK)
        fail(conn, PQerrorMessage(conn));

    /* The cursors are only valid within a transaction */
    result = PQexec(conn, "BEGIN TRANSACTION");

    if (PQresultStatus(result) != PGRES_COMMAND_OK)
        fail(conn, PQerrorMessage(conn));

    PQclear(result);

    /* The result set holds two rows, each naming one cursor */
    result = PQexec(conn, "SELECT * FROM getEmpsAndDepts(20, 30)");

    if (PQresultStatus(result) != PGRES_TUPLES_OK)
        fail(conn, PQerrorMessage(conn));

    fetchAllRows(conn, PQgetvalue(result, 0, 0), "employees");
    fetchAllRows(conn, PQgetvalue(result, 1, 0), "departments");

    PQclear(result);

    PQexec(conn, "COMMIT");
    PQfinish(conn);
    exit(0);
}

static void
fetchAllRows(PGconn *conn,
             const char *cursorName,
             const char *description)
{
    /* Room for the command text, the quoted cursor name, and the NUL */
    size_t     commandLength = strlen("FETCH ALL FROM ") +
                               strlen(cursorName) + 3;
    char      *commandText = malloc(commandLength);
    PGresult  *result;
    int        row;

    snprintf(commandText, commandLength, "FETCH ALL FROM \"%s\"", cursorName);

    result = PQexec(conn, commandText);

    if (PQresultStatus(result) != PGRES_TUPLES_OK)
        fail(conn, PQerrorMessage(conn));

    printf("-- %s --\n", description);

    for (row = 0; row < PQntuples(result); row++)
    {
        const char *delimiter = "";
        int         col;

        for (col = 0; col < PQnfields(result); col++)
        {
            printf("%s%s", delimiter, PQgetvalue(result, row, col));
            delimiter = ",";
        }
        printf("\n");
    }

    printf("\n");

    PQclear(result);
    free(commandText);
}

static void
fail(PGconn *conn, const char *msg)
{
    fprintf(stderr, "%s\n", msg);

    if (conn != NULL)
        PQfinish(conn);

    exit(-1);
}
If you call getEmpsAndDepts(20, 30), the server will return a cursor that contains all
employees who work in department 20 or 30, and a second cursor containing the
description of departments 20 and 30.
-- employees --
7369,SMITH,CLERK,7902,17-DEC-80 00:00:00,800.00,,20
7499,ALLEN,SALESMAN,7698,20-FEB-81 00:00:00,1600.00,300.00,30
7521,WARD,SALESMAN,7698,22-FEB-81 00:00:00,1250.00,500.00,30
7566,JONES,MANAGER,7839,02-APR-81 00:00:00,2975.00,,20
7654,MARTIN,SALESMAN,7698,28-SEP-81 00:00:00,1250.00,1400.00,30
7698,BLAKE,MANAGER,7839,01-MAY-81 00:00:00,2850.00,,30
7788,SCOTT,ANALYST,7566,19-APR-87 00:00:00,3000.00,,20
7844,TURNER,SALESMAN,7698,08-SEP-81 00:00:00,1500.00,0.00,30
7876,ADAMS,CLERK,7788,23-MAY-87 00:00:00,1100.00,,20
7900,JAMES,CLERK,7698,03-DEC-81 00:00:00,950.00,,30
7902,FORD,ANALYST,7566,03-DEC-81 00:00:00,3000.00,,20
-- departments --
20,RESEARCH,DALLAS
30,SALES,CHICAGO
Advanced Server's array binding functionality allows you to send an array of data across
the network in a single round-trip. When the back end receives the bulk data, it can use
the data to perform insert or update operations.
Perform bulk operations with a prepared statement, using the following functions:
PQBulkStart
PQexecBulk
PQBulkFinish
PQexecBulkPrepared
6.1.3.1 PQBulkStart
PQBulkStart() initializes bulk operations on the server. You must call this function
before sending bulk data to the server. PQBulkStart() initializes the prepared
statement specified in stmtName to receive data in a format specified by paramFmts.
API Definition
6.1.3.2 PQexecBulk
This function can be used more than once after PQBulkStart() to send multiple blocks
of data. See the example for more details.
API Definition
6.1.3.3 PQBulkFinish
This function completes the current bulk operation. You can use the prepared statement
again without re-preparing it.
API Definition
6.1.3.4 PQexecBulkPrepared
Specify a previously prepared statement in the place of stmtName. Commands that will
be used repeatedly will be parsed and planned just once, rather than each time they are
executed.
API Definition
paramTypes[0] = 23;
paramTypes[1] = 1043;
res = PQprepare( conn, "stmt_1", "INSERT INTO testtable1 values( $1, $2
)", 2, paramTypes );
PQclear( res );
paramLens[i][0] = 4;
paramLens[i][1] = strlen( b[i] );
}
res = PQBulkFinish(conn);
PQclear( res );
printf( "< -- PQBulkFinish -- >\n" );
}
paramTypes[0] = 23;
paramTypes[1] = 1043;
res = PQprepare( conn, "stmt_2", "INSERT INTO testtable1 values( $1, $2
)", 2, paramTypes );
PQclear( res );
paramLens[i][0] = 4;
paramLens[i][1] = strlen( b[i] );
}
7 Debugger
The Debugger is a tool that gives developers and DBAs the ability to test and debug
server-side programs using a graphical, dynamic environment. The types of programs
that can be debugged are SPL stored procedures, functions, triggers, and packages as well
as PL/pgSQL functions and triggers.
The Debugger is integrated with and invoked from the Postgres Enterprise Manager
client. There are two basic ways the Debugger can be used to test programs:
Standalone debugging. The Debugger is used to start the program to be tested.
You supply any input parameter values required by the program and can
immediately observe and step through the code of the program.
In-context debugging. The program to be tested is initiated by an application
other than the Debugger. You set a global breakpoint on the program; when the
application invokes it, control transfers to the Debugger.
The debugging tools and operations are the same whether using standalone or in-context
debugging. The difference is in how the program to be debugged is invoked.
The following sections discuss the features and functionality of the Debugger using the
standalone debugging method. The directions for starting the Debugger for in-context
debugging are discussed in Section 7.1.5.3.
Before using the Debugger, edit the postgresql.conf file (located in the data
subdirectory of your Advanced Server home directory), adding
$libdir/plugin_debugger to the libraries listed in the
shared_preload_libraries configuration parameter:
shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen,$libdir/plugin_debugger'
You can use the Postgres Enterprise Manager (PEM) client to access the Debugger for
standalone debugging. To open the Debugger, highlight the name of the stored procedure
or function you wish to debug in the PEM Object browser panel. Then, navigate
through the Tools menu to the Debugging menu and select Debug from the submenu
as shown in Figure 7.1.
You can also right-click on the name of the stored procedure or function in the PEM
client Object Browser, and select Debugging, and then Debug from the context menu
as shown in Figure 7.2.
Figure 7.2 - Starting the Debugger from the object's context menu
Note that triggers cannot be debugged using standalone debugging. Triggers must be
debugged using in-context debugging. See Section 7.1.5.3 for information on setting a
global breakpoint for in-context debugging.
To debug a package, highlight the specific procedure or function under the package node
of the package you wish to debug and follow the same directions as for stored procedures
and functions.
You can use the View Data Options window to pass parameter values when you are
standalone-debugging a program that expects parameters. When you start the debugger,
the View Data Options window opens automatically to display any IN or IN OUT
parameters expected by the program. If the program declares no IN or IN OUT
parameters, the View Data Options window does not open.
Use the fields on the View Data Options window (shown in Figure 7.3) to provide a
value for each parameter:
Check the Null? checkbox to indicate that the parameter is a NULL value.
The Value field contains the parameter value that will be passed to the program.
Check the Use default? checkbox to indicate that the program should use the
value in the Default Value field.
The Default Value field contains the default value of the parameter.
Press the Enter key to select the next parameter in the list for data entry, or click on a
Value field to select the parameter for data entry.
If you are debugging a procedure or function that is a member of a package that has an
initialization section, check the Debug Package Initializer check box to instruct the
Debugger to step into the package initialization section, allowing you to debug the
initialization section code before debugging the procedure or function. If you do not
select the check box, the Debugger executes the package initialization section without
allowing you to see or step through the individual lines of code as they are executed.
After entering the desired parameter values, click the OK button to start the debugging
process. Click the Cancel button to terminate the Debugger and return control to the
PEM client.
Note: The View Data Options window does not open during in-context debugging.
Instead, the application calling the program to be debugged must supply any required
input parameter values.
When you have completed a full debugging cycle by stepping through the program code,
the View Data Options window re-opens, allowing you to enter new parameter values
and repeat the debugging cycle, or end the debugging session.
The Main Debugger window (see Figure 7.4) contains three panes: the Program Body
pane, the Stack pane, and the Output pane.
You can use the debugger menu bar or tool bar icons (located at the top of the debugger
window) to access debugging functions.
Status and error information is displayed in the status bar at the bottom of the Debugger
window.
The Program Body pane in the upper-left corner of the Debugger window displays the
source code of the program that is being debugged.
Figure 7.5 shows that the Debugger is about to execute the SELECT statement. The green
indicator in the program body highlights the next statement to execute.
The Stack pane displays a list of programs that are currently on the call stack (programs
that have been invoked but which have not yet completed). When a program is called,
the name of the program is added to the top of the list displayed in the Stack pane;
when the program ends, its name is removed from the list.
The Stack pane also displays information about each program call.
Reviewing the call stack can help you trace the course of execution through a series of
nested programs.
After the call to emp_query executes, emp_query is displayed at the top of the Stack
pane, and its code is displayed in the Program Body frame (see Figure 7.7).
Upon completion of execution of the subprogram, control returns to the calling program
(public.emp_query_caller), now displayed at the top of the Stack pane in Figure
7.8.
Highlight an entry in the call stack to review detailed information about the selected entry
on the tabs in the Output pane. Using the call stack to navigate to another entry in the
call stack will not alter the line that is currently executing.
You can use tabs in the Output pane (see Figure 7.9) to view or modify parameter
values or local variables, or to view messages generated by RAISE INFO and function
results.
The Local Variables tab displays the value of any variables declared within
the program.
The DBMS Messages tab displays any results returned by the program as it
executes.
Use the tool bar icons to step through a program with the Debugger:
Use the Step Into icon to execute the line of code currently highlighted
by the green bar in the Program Body pane, and then pause execution. If the
executed code line is a call to a subprogram, the called subprogram is brought into the
Program Body pane, and the first executable line of code of the subprogram is
highlighted as the Debugger waits for you to perform an operation on the
subprogram.
Use the Step Over icon to execute a line of code, stepping over
any subprograms invoked by that line of code. The subprogram is executed, but not
debugged. If the subprogram contains a breakpoint, the debugger will stop at that
breakpoint.
Use the Continue icon to execute the line of code highlighted by the
green bar, and continue execution until either a breakpoint is encountered or the last
line of the program has been executed.
Figure 7.11 shows the locations of the Step Into, Step Over, and Continue icons on
the tool bar:
Figure 7.11 - The Step Into, Step Over, and Continue icons
The debugging operations are also accessible through the Debug menu, as shown in
Figure 7.12.
Local Breakpoint - A local breakpoint can be set at any executable line of code within a
program. The Debugger pauses execution when it reaches a line where a local breakpoint
has been set.
Global Breakpoint - A global breakpoint will trigger when any session reaches that
breakpoint. Set a global breakpoint if you want to perform in-context debugging of a
program. When a global breakpoint is set on a program, the debugging session that set
the global breakpoint waits until that program is invoked in another session. A global
breakpoint can only be set by a superuser.
To create a local breakpoint, left-click in the grey shaded margin to the left of the line of
code where you want the local breakpoint set. The Debugger displays a red dot in the
margin, indicating a breakpoint has been set at the selected line of code (see Figure 7.13).
You can also set a breakpoint by left-clicking in the Program Body to place your
cursor, and selecting Toggle Breakpoint from the Debug menu or by clicking the
Toggle Breakpoint icon (see Figure 7.14). A red dot appears in the left-hand margin,
indicating a breakpoint has been set at the line of code.
You can set as many local breakpoints as desired. Local breakpoints remain in effect for
the duration of a debugging session until they are removed.
Left click the mouse on the red breakpoint indicator in the left margin of the
Program Body pane. The red dot disappears, indicating that the breakpoint has
been removed.
Use your mouse to select the location of the breakpoint in the code body, and
select Toggle Breakpoint from Debug menu, or click the Toggle
Breakpoint icon.
You can remove all of the breakpoints from the program that currently appears in the
Program Body frame by selecting Clear all breakpoints from the Debug menu
(see Figure 7.15) or by clicking the Clear All Breakpoints icon.
Note: When you perform any of the preceding actions, only the breakpoints in the
program that currently appears in the Program Body frame are removed. Breakpoints in
called subprograms or breakpoints in programs that call the program currently appearing
in the Program Body frame are not removed.
To set a global breakpoint for in-context debugging, right-click on the name of the stored
procedure, function, or trigger on which you wish to set the breakpoint and select
Debugging, then Set Breakpoint from the context menu as shown in Figure 7.17.
Figure 7.17 - Setting a global breakpoint from the object's context menu
To set a global breakpoint on a trigger, expand the table node that contains the trigger,
highlight the specific trigger you wish to debug, and follow the same directions as for
stored procedures and functions.
After you choose Set Breakpoint, the Debugger window opens and waits for an
application to call the program to be debugged (see Figure 7.18).
In Figure 7.19, the EDB-PSQL client invokes the select_emp function (on which a
global breakpoint has been set).
The select_emp function does not complete until you step through the program in the
Debugger, which now appears as shown in Figure 7.20.
You can now debug the program using any of the previously discussed operations such as
step into, step over, and continue, or set local breakpoints. When you have stepped
through execution of the program, the calling application (EDB-PSQL) regains control as
shown in Figure 7.21.
At this point, you can end the Debugger session by choosing Exit from the File menu.
If you do not end the Debugger session, the next application that invokes the program
will encounter the global breakpoint and the debugging cycle will begin again.
To end a Debugger session and exit the Debugger, select Exit from the File menu or
press Alt-F4.
8 Performance Analysis and Tuning
8.1 Dynatune
Advanced Server supports dynamic tuning of the database server to make the optimal
usage of the system resources available on the host machine on which it is installed. The
two parameters that control this functionality are located in the postgresql.conf file.
These parameters are:
edb_dynatune
edb_dynatune_profile
8.1.1 edb_dynatune
edb_dynatune determines how much of the host system's resources are to be used by
the database server based upon the host machine's total available resources and the
intended usage of the host machine.
You can change the value of the edb_dynatune parameter after the initial installation of
Advanced Server by editing the postgresql.conf file. The postmaster must be
restarted in order for the new configuration to take effect.
The edb_dynatune parameter can be set to any integer value between 0 and 100,
inclusive. A value of 0 turns off the dynamic tuning feature, leaving the database
server resource usage totally under the control of the other configuration parameters in
the postgresql.conf file.
A low non-zero value (e.g., 1 - 33) dedicates the least amount of the host machine's
resources to the database server. This setting is appropriate for a development machine
where many other applications are being used.
The highest values (e.g., 67 - 100) dedicate most of the host machine's resources to the
database server. This setting is appropriate for a host machine that is totally dedicated to
running Advanced Server.
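For example, to dedicate roughly two-thirds of the host's resources to the database
server, you might set the following in the postgresql.conf file (the value shown is
illustrative):

edb_dynatune = 66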
8.1.2 edb_dynatune_profile
The edb_dynatune_profile parameter is used to control tuning aspects based upon
the expected workload profile on the database server. The possible values are:

Value      Usage
oltp       Recommended when the database server is processing heavy online
           transaction processing workloads.
reporting  Recommended for database servers used for heavy data reporting.
mixed      Recommended for servers that provide a mix of transaction processing
           and data reporting.
Advanced Server tries very hard to minimize disk I/O by keeping frequently used data in
memory. When the first server process starts, it creates an in-memory data structure
known as the buffer cache. The buffer cache is organized as a collection of 8K (8192
byte) pages: each page in the buffer cache corresponds to a page in some table or index.
The buffer cache is shared between all processes servicing a given database.
When you select a row from a table, Advanced Server reads the page that contains the
row into the shared buffer cache. If there isn't enough free space in the cache, Advanced
Server evicts some other page from the cache. If Advanced Server evicts a page that has
been modified, that data is written back out to disk; otherwise, it is simply discarded.
Index pages are cached in the shared buffer cache as well.
Figure 7.1 demonstrates the flow of data in a typical Advanced Server session:
A client application sends a query to the Postgres server and the server searches the
shared buffer cache for the required data. If the requested data is found in the cache, the
server immediately sends the data back to the client. If not, the server reads the page that
holds the data into the shared buffer cache, evicting one or more pages if necessary. If
the server decides to evict a page that has been modified, that page is written to disk.
As you can see, a query will execute much faster if the required data is found in the
shared buffer cache.
One way to improve performance is to increase the amount of memory that you can
devote to the shared buffer cache. However, most computers impose a strict limit on the
amount of RAM that you can install. To help circumvent this limit, Infinite Cache lets
you utilize memory from other computers connected to your network.
With Infinite Cache properly configured, Advanced Server will dedicate a portion of the
memory installed on each cache server as a secondary memory cache. When a client
application sends a query to the server, the server first searches the shared buffer cache
for the required data; if the requested data is not found in the cache, the server searches
for the necessary page in one of the cache servers.
Figure 7.2 shows the flow of data in an Advanced Server session with Infinite Cache:
When a client application sends a query to the server, the server searches the shared
buffer cache for the required data. If the requested data is found in the cache, the server
immediately sends the data back to the client. If not, the server sends a request for the
page to a specific cache server; if the cache server holds a copy of the page it sends the
data back to the server and the server copies the page into the shared buffer cache. If the
required page is not found in the primary cache (the shared buffer cache) or in the
secondary cache (the cloud of cache servers), Advanced Server must read the page from
disk. Infinite Cache improves performance by utilizing RAM from other computers on
your network in order to avoid reading frequently accessed data from disk.
You can add or remove cache servers without restarting the database server by adding or
deleting cache nodes from the list defined in the edb_icache_servers configuration
parameter. For more information about changing the configuration parameter, see
Section 8.2.2.2.
When you add one or more cache nodes, the server re-allocates the cache, dividing the
cache evenly amongst the servers; each of the existing cache servers loses a percentage of
the information that it has cached. You can calculate the percentage of the cache that
remains valid with the following formula:

(number of existing cache servers / new number of cache servers) * 100
For example, if an Advanced Server installation with three existing cache nodes adds an
additional cache node, 75% of the existing cache remains valid after the reconfiguration.
If cache nodes are removed from a server, the data that has been stored on the remaining
cache nodes is preserved. If one cache server is removed from a set of five cache servers,
Advanced Server preserves the 80% of the distributed cache that is stored on the four
remaining cache nodes.
When you change the cache server configuration (by adding or removing cache servers),
the portion of the cache configuration that is preserved is not re-written unless the cache
is completely re-warmed using the edb_icache_warm() function or
edb_icache_warm utility. If you do not re-warm the cache servers, new cache servers
will accrue cache data as queries are performed on the server.
Without Infinite Cache, Advanced Server will read each page from disk as an 8K chunk;
when a page resides in the shared buffer cache, it consumes 8K of RAM. With Infinite
Cache, Postgres can compress each page before sending it to a cache server. A
compressed page can take significantly less room in the secondary cache, making more
space available for other data and effectively increasing the size of the cache. A
compressed page consumes less network bandwidth as well, decreasing the amount of
time required to retrieve a page from the secondary cache.
The fact that Infinite Cache can compress each page may make it attractive to configure a
secondary cache server on the same computer that runs your Postgres server. If, for
example, your computer is configured with 6GB of RAM, you may want to allocate a
smaller amount (say 1GB) for the primary cache (the shared buffer cache) and a larger
amount (4GB) to the secondary cache (Infinite Cache), reserving 1GB for the operating
system. Since the secondary cache resides on the same computer, there is very little
overhead involved in moving data between the primary and secondary cache. All data
stored in the Infinite Cache is compressed so the secondary cache can hold many more
pages than would fit into the (uncompressed) shared buffer cache. If you had allocated
5GB to the shared buffer cache, the cache could hold no more than approximately
655,000 pages (5GB divided by the 8K page size). By assigning 4GB of memory to
Infinite Cache, the cache may be able to hold roughly 1,050,000 pages (at 2x
compression), 1,570,000 pages (at 3x compression), or more.
The compression factor that you achieve is determined by the amount of redundancy in
the data itself and the edb_icache_compression_level parameter.
To use Infinite Cache, you must specify a list of one or more cache servers (computers on
your network) and start the edb_icache daemon on each of those servers.
For information about using the RPM packages to install Infinite Cache, please see the
EDB Postgres Advanced Server Installation Guide available at:
http://www.enterprisedb.com/products-services-training/products/documentation/enterpriseedition
To use the graphical installer to install Advanced Server with Infinite Cache
functionality, confirm that the box next to the Database Server option (located on the
Select Components dialog, shown in Figure 7.3) is selected when running the
installation wizard.
The Database Server option installs the following Infinite Cache components:
The graphical installation wizard can selectively install only the Infinite Cache daemon
on a cache server. To install the edb-icache daemon on a cache server, deploy the
installation wizard on the machine hosting the cache; when the Setup: Select
Components window opens, de-select all options except Infinite Cache (as shown
in Figure 7.4).
Specify Infinite Cache server settings in the Infinite Cache configuration file.
Modify the Advanced Server postgresql.conf file, enabling Infinite Cache,
and specifying connection and compression settings.
Start the Infinite Cache service.
The Infinite Cache configuration file is named edb-icache, and contains two
parameters and their associated values:
PORT=11211
CACHESIZE=500
PORT
Use the PORT variable to specify the port where Infinite Cache will listen for
connections from Advanced Server.
CACHESIZE
Use the CACHESIZE variable to specify the size of the cache (in MB).
The postgresql.conf file includes three configuration parameters that control the
behavior of Infinite Cache. The postgresql.conf file is read each time you start the
Advanced Server database server. To modify a parameter, open the postgresql.conf
file (located in the $PGDATA directory) with your editor of choice, and edit the section of
the configuration file shown below:
# - Infinite Cache
#edb_enable_icache = off
#edb_icache_servers = '' #'host1:port1,host2,ip3:port3,ip4'
#edb_icache_compression_level = 6
Lines that begin with a pound sign (#) are treated as comments; to enable a given
parameter, remove the pound sign and specify a value for the parameter. When you've
updated and saved the configuration file, restart the database server for the changes to
take effect.
edb_enable_icache
Set edb_enable_icache to on to enable Infinite Cache. If you set
edb_enable_icache to on, you must also specify a list of cache servers by
setting the edb_icache_servers parameter (described below).
edb_icache_servers
The edb_icache_servers parameter specifies a comma-separated list of cache
servers, where each entry in the list takes one of the following forms:
hostname
IP-address
hostname:portnumber
IP-address:portnumber
If you do not specify a port number, Infinite Cache assumes that the cache server
is listening at port 11211. This configuration parameter will take effect only if
edb_enable_icache is set to on. Use the edb_icache_servers parameter
to specify a maximum of 128 cache nodes.
edb_icache_compression_level
When Advanced Server reads data from disk, it typically reads the data in 8K
increments. If edb_icache_compression_level is set to 0, each time
Advanced Server sends an 8K page to the Infinite Cache server that page is stored
(uncompressed) in 8K of cache memory. If the
edb_icache_compression_level parameter is set to 9, Advanced Server
applies the maximum compression possible to each page before sending it to the Infinite Cache
server, so a page that previously took 8K of cached memory might take 2K of
cached memory. Exact compression numbers are difficult to predict, as they are
dependent on the nature of the data on each page.
The compression level must be set by the superuser and can be changed for the
current session while the server is running. The following command disables the
compression mechanism for the currently active session:
SET edb_icache_compression_level = 0
For example, the following settings enable Infinite Cache with three cache servers:
edb_enable_icache = on
edb_icache_servers = 'localhost,192.168.2.1:11200,192.168.2.2'
edb_icache_compression_level = 6
Linux
On Linux, the Infinite Cache service script is named edb-icache. The service script
resides in the /etc/init.d directory. You can control the Infinite Cache service, or
check the status of the service with the following command:
/etc/init.d/edb-icache action
Where action specifies start, stop, restart, or status.
You can dynamically modify the Infinite Cache server nodes; to change the Infinite
Cache server configuration, use the edb_icache_servers parameter in the
postgresql.conf file to add, remove, or modify cache node entries, and then reload
the server configuration.
Alternatively, you can use the pg_ctl reload command to update the server's
configuration parameters at the command line:
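For example (the data directory path is illustrative):
pg_ctl reload -D /opt/edb/as10/data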
Please Note: If the server detects a problem with the value specified for the
edb_icache_servers parameter during a server reload, it will ignore changes to the
parameter and use the last valid parameter value. If you are performing a server
restart, and the parameter contains an invalid value, the server will return an error.
Before starting the database server, the edb-icache daemon must be running on each
server node. Log into each server and start the edb-icache server (on that host) by
issuing the following command:
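For example (the user name and memory allotment are illustrative; the options are
described below):
# /opt/edb/icache/bin/edb-icache -u enterprisedb -d -m 1024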
Where:
-u specifies the user identity that the daemon assumes when it is started by the root user.
-m specifies the maximum amount of memory (in megabytes) that the daemon may use for cached items.
-d instructs edb-icache to run as a daemon.
To gracefully kill an edb-icache daemon (close any in-use files, flush buffers, and
exit), execute the command:
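For example, assuming the daemon's process ID is 7226:
# kill 7226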
If the edb-icache daemon refuses to die, you may need to use the following command:
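For example (again assuming process ID 7226):
# kill -9 7226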
To view the command line options for the edb-icache daemon, use the following
command:
# /opt/edb/icache/bin/edb-icache -h
Parameter Description
-p <port_number> The TCP port number the Infinite Cache daemon is listening on. The default is
11211.
-U <UDP_number> The UDP port number the Infinite Cache daemon is listening on. The default is
0 (off).
-s <pathname> The Unix socket pathname the Infinite Cache daemon is listening on. If
included, the server limits access to the host on which the Infinite Cache
daemon is running, and disables network support for Infinite Cache.
-a <mask> The access mask for the Unix socket, in octal form. The default value is 0700.
-l <ip_addr> Specifies the IP address that the daemon is listening on. If an individual address
is not specified, the default value is INADDR_ANY; all IP addresses assigned to
the resource are available to the daemon.
-d Run as a daemon.
-r Maximize core file limit.
-u <username> Assume the identity of the specified user (when run as root).
-m <numeric> Max memory to use for items in megabytes. Default is 64 MB.
-M Return error on memory exhausted (rather than removing items).
-c <numeric> Max simultaneous connections. Default is 1024.
-k Lock down all paged memory. Note that there is a limit on how much memory
you may lock. Trying to allocate more than that would fail, so be sure you set
the limit correctly for the user you started the daemon with (not for -u
<username> user; under sh this is done with 'ulimit -S -l NUM_KB').
-v Verbose (print errors/warnings while in event loop).
-vv Very verbose (include client commands and responses).
-vvv Extremely verbose (also print internal state transitions).
-h Print the help text and exit.
-i Print memcached and libevent licenses.
-P <file> Save PID in <file>, only used with -d option.
-f <factor> Chunk size growth factor. Default value is 1.25.
-n <bytes> Minimum space allocated for key+value+flags. Default is 48.
-L Use large memory pages (if available). Increasing the memory page size could
reduce the number of translation look-aside buffer misses and improve the
performance. To get large pages from the OS, Infinite Cache will allocate the
total item-cache in one large chunk.
-D <char> Use <char> as the delimiter between key prefixes and IDs. This is used for per-
prefix stats reporting. The default is ":" (colon).
If this option is specified, stats collection is enabled automatically; if not, then it
may be enabled by sending the stats detail on command to the server.
-t <num> Specifies the number of threads to use. Default is 4.
-R Maximum number of requests per event; this parameter limits the number of
requests processed for a given connection to prevent starvation. The default is 20.
-C Disable use of CAS (check and set).
-b Specifies the backlog queue limit, default is 1024.
-B Specifies the binding protocol. Possible values are ascii, binary or auto;
default value is auto.
-I Override the size of each slab page; specifies the maximum item size. The default is
1 MB; the minimum size is 1 KB, and the maximum is 128 MB.
8.2.4.2 edb-icache-tool
edb-icache-tool provides a command line interface that queries the edb-icache daemon
to retrieve statistical information about a specific cache node, given a host and port, where:
host specifies the address of the host that you are querying.
port specifies the port that the daemon is listening on.
Statistic Description
accepting_conns Will this server accept new connection(s)? 1 if yes, otherwise 0.
auth_cmds Number of authentication commands handled by this server, success or
failure.
auth_errors Number of failed authentications.
bytes Total number of bytes in use.
bytes_read Total number of bytes received by this server (from the network).
bytes_written Total number of bytes sent by this server (to the network).
cas_badval Number of keys that have been compared and swapped by this server but
the comparison (original) value did not match the supplied value.
cas_hits Number of keys that have been compared and swapped by this server and
found present.
cas_misses Number of keys that have been compared and swapped by this server and
not found.
cmd_flush Cumulative number of flush requests sent to this server.
cmd_get Cumulative number of read requests sent to this server.
cmd_set Cumulative number of write requests sent to this server.
conn_yields Number of times any connection yielded to another due to hitting the edb-
icache -R limit.
connection_structures Number of connection structures allocated by the server.
curr_connections Number of open connections.
curr_items Number of items currently stored by the server.
decr_hits Number of decrement requests satisfied by this server.
decr_misses Number of decrement requests not satisfied by this server.
delete_hits Number of delete requests satisfied by this server.
delete_misses Number of delete requests not satisfied by this server.
evictions Number of valid items removed from cache to free memory for new items.
get_hits Number of read requests satisfied by this server.
get_misses Number of read requests not satisfied by this server.
incr_hits Number of increment requests satisfied by this server.
incr_misses Number of increment requests not satisfied by this server.
limit_maxbytes Number of bytes allocated on this server for storage.
listen_disabled_num Cumulative number of times this server has hit its connection limit.
pid Process ID (on cache server).
pointer_size Default pointer size on host OS (usually 32 or 64).
reclaimed Number of times an entry was stored using memory from an expired entry.
rusage_user Accumulated user time for this process (seconds.microseconds).
rusage_system Accumulated system time for this process (seconds.microseconds).
threads Number of worker threads requested.
total_time Number of seconds since this server's base date (usually midnight, January
1, 1970, UTC).
total_connections Total number of connections opened since the server started running.
total_items Total number of items stored by this server (cumulative).
uptime Amount of time that server has been active.
version edb-icache version.
The following sample output shows the statistics reported for one cache node:
Field Value
accepting_conns 1
auth_cmds 0
auth_errors 0
bytes 52901223
bytes_read 188383848
bytes_written 60510385
cas_badval 0
cas_hits 0
cas_misses 0
cmd_flush 1
cmd_get 53139
cmd_set 229120
conn_yields 0
connection_structures 34
curr_connections 13
curr_items 54953
decr_hits 0
decr_misses 0
delete_hits 0
delete_misses 0
evictions 0
get_hits 52784
get_misses 355
incr_hits 0
incr_misses 0
limit_maxbytes 314572800
listen_disabled_num 0
pid 7226
pointer_size 32
reclaimed 0
rusage_system 10.676667
rusage_user 3.068191
threads 4
time 1320919080
total_connections 111
total_items 229120
uptime 7649
version 1.4.5
When the server starts, the primary and secondary caches are empty. When Advanced
Server processes a client request, the server reads the required data from disk and stores a
copy in each cache. You can improve server performance by warming (or pre-loading)
the data into the memory cache before a client asks for it.
There are two advantages to warming the cache. Advanced Server will find data in the
cache the first time it is requested by a client application, instead of waiting for it to be
read from disk. Also, manually warming the cache with the data that your applications
are most likely to need saves time by avoiding future random disk reads. If you don't
warm the cache at startup, Advanced Server performance may not reach full speed until
the client applications happen to load commonly used data into the cache.
There are several ways to load pages to warm the Infinite Cache server nodes. You can:
Use the edb_icache_warm() function to warm the cache from within a SQL session.
Use the edb_icache_warm utility to warm the caches from the command line.
While it is not necessary to re-warm the cache after making changes to an existing cache
configuration, re-warming the cache can improve performance by bringing the new
configuration of cache servers up-to-date.
The edb_icache_warm() function comes in two variations; the first variation warms
not only the table, but any indexes associated with the table. If you use the second
variation, you must make additional calls to warm any associated indexes.
The first form of the edb_icache_warm() function warms the given table and any
associated indexes into the cache. The signature is:
edb_icache_warm(table_name)
When you call the first form of edb_icache_warm(), Advanced Server reads each
page in the given table, compresses the page (if configured to do so), and then sends the
compressed page to a cache server.
The second form of the edb_icache_warm() function warms the pages that contain the
specified range of bytes into the cache. The signature of the second form is:
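Based on the description, the second form takes the table name plus the starting and
ending byte offsets (the parameter names here are assumed):
edb_icache_warm(table_name, start_byte, end_byte)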
You must make subsequent calls to specify indexes separately when using this form of
the edb_icache_warm() function.
You can use the edb_icache_warm command-line utility to load the cache servers with
specified tables, allowing fast access to relevant data from the cache.
You can view Infinite Cache statistics by using the edb_icache_stats() function at
the edb-psql command line (or any other query tool). The edb_icache_stats()
function returns a result set that reflects the state of an Infinite Cache node or nodes and
the related usage statistics. The result set includes:
Statistic Description
hostname Host name (or IP address) of server
Port Port number at which edb-icache daemon is listening
State Health of this server
write_failures Number of write failures
Bytes Total number of bytes in use
bytes_read Total number of bytes received by this server (from the network)
bytes_written Total number of bytes sent by this server (to the network)
cmd_get Cumulative number of read requests sent to this server
cmd_set Cumulative number of write requests sent to this server
connection_structures Number of connection structures allocated by the server
curr_connections Number of open connections
curr_items Number of items currently stored by the server
Evictions Number of valid items removed from cache to free memory for new items
get_hits Number of read requests satisfied by this server
get_misses Number of read requests not satisfied by this server
limit_maxbytes Number of bytes allocated on this server for storage
Pid Process ID (on cache server)
pointer_size Default pointer size on host OS (usually 32 or 64)
rusage_user Accumulated user time for this process (seconds.microseconds)
rusage_system Accumulated system time for this process (seconds.microseconds)
Threads Number of worker threads requested
total_time Number of seconds since this server's base date (usually midnight, January
1, 1970, UTC)
total_connections Total number of connections opened since the server started running
total_items Total number of items stored by this server (cumulative)
Uptime Amount of time that server has been active
Version edb-icache version
You can use SQL queries to view Infinite Cache statistics. To view the server status of
all Infinite Cache nodes:
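For example (the column choice is illustrative):
SELECT hostname, port, state FROM edb_icache_stats();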
Use the following command to view complete statistics (shown here using edb-psql's
expanded display mode, \x) for a specified node:
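For example (using the node address from the sample output below):
edb=# SELECT * FROM edb_icache_stats() WHERE hostname = '192.168.23.85';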
-[RECORD 1]-----------+--------------
hostname | 192.168.23.85
port | 11211
state | ACTIVE
write_failures | 0
bytes | 225029460
bytes_read | 225728252
bytes_written | 192806774
cmd_get | 23313
cmd_set | 27088
connection_structures | 53
curr_connections | 3
curr_items | 27088
evictions | 0
get_hits | 23266
get_misses | 47
limit_maxbytes | 805306368
pid | 4240
pointer_size | 32
rusage_user | 0.481926
rusage_system | 1.583759
threads | 1
total_time | 1242199782
total_connections | 66
total_items | 27088
uptime | 714
version | 1.2.6
8.2.6.2 edb_icache_server_list
The edb_icache_server_list view exposes information about the status and health
of all Infinite Cache servers listed in the edb_icache_servers GUC. The
edb_icache_server_list view is created using the edb_icache_stats() API.
The view exposes the following information for each server:
Statistic Description
Hostname Host name (or IP address) of server
Port Port number at which edb-icache daemon is listening
State Health of this server
write_failures Number of write failures
total_memory Number of bytes allocated to the cache on this server
memory_used Number of bytes currently used by the cache
memory_free Number of unused bytes remaining in the cache
hit_ratio Percentage of cache hits
The state column will contain one of the following four values, reflecting the health of
the given server:
Use the following SELECT statement to return the health of each node in the Infinite
Cache server farm:
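For example (the column choice is illustrative):
SELECT hostname, port, state FROM edb_icache_server_list;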
Use the following command to view complete details about a specific Infinite Cache
node (shown here using edb-psql's \x expanded-view option):
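For example (using the node address from the sample output below):
edb=# SELECT * FROM edb_icache_server_list WHERE hostname = '192.168.23.85';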
-[RECORD 1]-----------+--------------
hostname | 192.168.23.85
port | 11211
state | ACTIVE
write_failures | 0
total_memory | 805306368
memory_used | 225029460
memory_free | 580276908
hit_ratio | 99.79
Advanced Server provides six system views that contain statistical information on a per-
table basis. The views are:
pg_statio_all_tables
pg_statio_sys_tables
pg_statio_user_tables
pg_statio_all_indexes
pg_statio_sys_indexes
pg_statio_user_indexes
You can use standard SQL queries to view and compare the information stored in the
views. The views contain information that will allow you to observe the effectiveness of
the Advanced Server buffer cache and the icache servers.
8.2.7.1 pg_statio_all_tables
The pg_statio_all_tables view contains one row for each table in the database.
The view contains the following information:
You can execute a simple query to view performance statistics for a specific table:
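For example, the following query (shown with its output in expanded display mode)
retrieves the statistics for the jobhist table:
edb=# SELECT * FROM pg_statio_all_tables WHERE relname = 'jobhist';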
-[ RECORD 1 ]---------+---------
relid | 16402
schemaname | public
relname | jobhist
heap_blks_read | 1
heap_blks_hit | 51
heap_blks_icache_hit | 0
idx_blks_read | 2
idx_blks_hit | 17
idx_blks_icache_hit | 0
toast_blks_read |
toast_blks_hit |
toast_blks_icache_hit |
tidx_blks_read |
tidx_blks_hit |
tidx_blks_icache_hit |
Or, you can view the statistics by activity level. The following example displays the
statistics for the ten tables that have the greatest heap_blks_icache_hit activity:
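A representative query (the column choice is illustrative):
SELECT relname, heap_blks_read, heap_blks_hit, heap_blks_icache_hit
FROM pg_statio_all_tables
ORDER BY heap_blks_icache_hit DESC LIMIT 10;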
8.2.7.2 pg_statio_sys_tables
The pg_statio_sys_tables view contains one row for each table in a system-defined
schema. The statistical information included in this view is the same as for
pg_statio_all_tables.
8.2.7.3 pg_statio_user_tables
The pg_statio_user_tables view contains one row for each table in a user-defined
schema. The statistical information in this view is the same as for
pg_statio_all_tables.
8.2.7.4 pg_statio_all_indexes
The pg_statio_all_indexes view contains one row for each index in the current
database. The view contains the following information:
You can execute a simple query to view performance statistics for the indexes on a
specific table:
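For example, the following query retrieves the statistics for the indexes on the
pg_attribute table:
edb=# SELECT * FROM pg_statio_all_indexes WHERE relname = 'pg_attribute';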
-[ RECORD 1 ]---------+---------
relid | 1249
indexrelid | 2658
schemaname | pg_catalog
relname | pg_attribute
indexrelname | pg_attribute_relid_attnam_index
idx_blks_read | 10
idx_blks_hit | 1200
idx_blks_icache_hit | 0
-[ RECORD 2 ]---------+---------
relid | 1249
indexrelid | 2659
schemaname | pg_catalog
relname | pg_attribute
indexrelname | pg_attribute_relid_attnum_index
idx_blks_read | 12
idx_blks_hit | 3917
idx_blks_icache_hit | 0
The result set from the query includes the statistical information for two indexes; the
pg_attribute table has two indexes.
You can also view the statistics by activity level. The following example displays the
statistics for the ten indexes that have the greatest idx_blks_icache_hit activity:
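A representative query (the column choice is illustrative):
SELECT indexrelname, idx_blks_read, idx_blks_hit, idx_blks_icache_hit
FROM pg_statio_all_indexes
ORDER BY idx_blks_icache_hit DESC LIMIT 10;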
8.2.7.5 pg_statio_sys_indexes
The pg_statio_sys_indexes view contains one row for each index on the system
tables. The statistical information in this view is the same as in
pg_statio_all_indexes.
8.2.7.6 pg_statio_user_indexes
The pg_statio_user_indexes view contains one row for each index on a table that
resides in a user-defined schema. The statistical information in this view is the same as
in pg_statio_all_indexes.
8.2.8 edb_icache_server_enable()
You can use the edb_icache_server_enable() function to take the Infinite Cache
server offline for maintenance or other planned downtime. The syntax is:
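A sketch of the signature, based on the parameter descriptions below (the data types are
assumed):
edb_icache_server_enable(host TEXT, port INTEGER, online BOOLEAN)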
host specifies the host that you want to disable. The host name may be specified by
name or numeric address.
port specifies the port number that the Infinite Cache server is listening on.
online specifies the state of the Infinite Cache server. The value of online must be true
or false.
To take a server offline, specify the host that you want to disable, the port number that
the Infinite Cache server is listening on, and false. To bring the Infinite Cache server
back online, specify the host name and port number, and pass a value of true.
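For example (the node address is illustrative):
SELECT edb_icache_server_enable('192.168.2.1', 11211, false); -- take the node offline
SELECT edb_icache_server_enable('192.168.2.1', 11211, true);  -- bring the node back online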
When you start Advanced Server, a message that includes Infinite Cache status, cache
node count and cache node size is written to the server log. The following example
shows the server log for an active Infinite Cache installation with two 750 MB cache
servers:
As mentioned earlier in this document, each computer imposes a limit on the amount of
physical memory that you can install. However, modern operating systems typically
simulate a larger address space so that programs can transparently access more memory
than is actually installed. This "virtual memory" allows a computer to run multiple
programs that may simultaneously require more memory than is physically available.
For example, you may run an e-mail client, a web browser, and a database server which
each require 1GB of memory on a machine that contains only 2GB of physical RAM.
When the operating system runs out of physical memory, it starts swapping bits and
pieces of the currently running programs to disk to make room to satisfy your current
demand for memory.
Since the primary goal of Infinite Cache is to improve performance by limiting disk I/O,
you should avoid dedicating so much memory to Infinite Cache that the operating system
must start swapping data to disk. If the operating system begins to swap to disk, you lose
the benefits offered by Infinite Cache.
The overall demand for physical memory can vary throughout the day; if the server is
frequently idle, you may never encounter swapping. If you have dedicated a large
portion of physical memory to the cache, and system usage increases, the operating
system may start swapping. To get the best performance and avoid disk swapping,
dedicate a server node to Infinite Cache so other applications on that computer will not
compete for physical memory.
The source and target databases can be the same physical database, or different databases
within the same database cluster, or separate databases running under different database
clusters on separate database server hosts.
The database objects that can be cloned from one schema to another are the following:
Data types
Tables including partitioned tables, but not foreign tables
Indexes
Constraints
Sequences
View definitions
Materialized views
Private synonyms
Table triggers, but not event triggers
Rules
Functions
Procedures
Packages
Comments for all supported object types
Access control lists (ACLs) for all supported object types
The source code within functions, procedures, triggers, packages, etc., is not
modified after being copied to the target schema. If such programs contain
hard-coded references to objects qualified with schema names, the programs may
fail upon invocation in the target schema if those schema names are no longer
appropriate in the target schema.
Cross schema object dependencies are not resolved. If an object in the target
schema depends upon an object in another schema, this dependency is not
resolved by the cloning functions.
At most, 16 copy jobs can run in parallel to clone schemas, whereas each job can
have at most 16 worker processes to copy table data in parallel.
The following section describes how to set up EDB Clone Schema on the databases.
The following describes the steps to install the required extensions and the PL/Perl
language.
These steps must be performed on any database to be used as the source or target
database by an EDB Clone Schema function.
Install the following extensions on the database:
postgres_fdw
dblink
adminpack
pgagent
Ensure that pgAgent is installed before creating the pgagent extension. You can use
StackBuilder Plus to download and install pgAgent.
The previously listed extensions can be installed by the following commands if they do
not already exist:
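For example:
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE EXTENSION IF NOT EXISTS dblink;
CREATE EXTENSION IF NOT EXISTS adminpack;
CREATE EXTENSION IF NOT EXISTS pgagent;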
For more information about using the CREATE EXTENSION command, see the
PostgreSQL core documentation at:
https://www.postgresql.org/docs/10/static/sql-createextension.html
Step 3: The Perl Procedural Language (PL/Perl) must be installed on the database and the
CREATE TRUSTED LANGUAGE plperl command must be run.
Run StackBuilder Plus, select and download the EDB Language Pack installer, and
proceed with the installation.
Step 4: Once the installation has been completed, edit the configuration file
plLanguages.config located in the Advanced Server installation directory under
subdirectory etc/sysconfig to point to the directory where PL/Perl was installed.
INSTALLER_DIR/etc/sysconfig/plLanguages.config
EDB_PERL_VERSION=5.24
EDB_PYTHON_VERSION=3.4
EDB_TCL_VERSION=8.6
EDB_PERL_PATH=PERL_INSTALL_PATH
EDB_PYTHON_PATH=PYTHON_INSTALL_PATH
EDB_TCL_PATH=TCL_INSTALL_PATH
For example, the following shows the file after the Perl path has been set:
EDB_PERL_VERSION=5.24
EDB_PYTHON_VERSION=3.4
EDB_TCL_VERSION=8.6
EDB_PERL_PATH=/opt/edb/languagepack-10/Perl-5.24
EDB_PYTHON_PATH=PYTHON_INSTALL_PATH
EDB_TCL_PATH=TCL_INSTALL_PATH
Step 6: Connect to the database as a superuser where PL/Perl was installed and run the
following command:
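The command is the one named in Step 3:
CREATE TRUSTED LANGUAGE plperl;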
For more information about using the CREATE LANGUAGE command, see the
PostgreSQL core documentation at:
https://www.postgresql.org/docs/10/static/sql-createlanguage.html
The following sections describe certain configuration parameters that may need to be
altered in the postgresql.conf file.
The configuration parameters in the postgresql.conf file that may need to be tuned
include the following:
For information about the configuration parameters, see the PostgreSQL core
documentation at:
https://www.postgresql.org/docs/10/static/runtime-config.html
The name of the log file is determined by what you specify in the parameter list when
invoking the cloning function.
To display the status from a log file, use the process_status_from_log function as
described in Section 9.2.5.
The following are the directions for installing EDB Clone Schema.
These steps must be performed on any database to be used as the source or target
database by an EDB Clone Schema function.
Make sure you create the parallel_clone extension before creating the
edb_cloneschema extension.
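For example:
CREATE EXTENSION parallel_clone;
CREATE EXTENSION edb_cloneschema;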
For each foreign server, a user mapping must be created. When a selected database
superuser invokes a cloning function, that database superuser who invokes the function
must have been mapped to a database user name and password that has access to the
foreign server that is specified as a parameter in the cloning function.
For general information about foreign data, foreign servers, and user mappings, see the
PostgreSQL core documentation at:
https://www.postgresql.org/docs/10/static/ddl-foreign-data.html
The following two sections describe how these foreign servers and user mappings are
defined.
9.1.4.1 Foreign Server and User Mapping for Local Cloning Functions
For the localcopyschema and localcopyschema_nb functions, the source and target
schemas are both within the same database of the same database server. Thus, only one
foreign server must be defined and specified for these functions. This foreign server is
also referred to as the local server.
This server is referred to as the local server because this server is the one to which you
must be connected when invoking the localcopyschema or localcopyschema_nb
function.
The user mapping defines the connection and authentication information for the foreign
server.
This foreign server and user mapping must be created within the database of the
local server in which the cloning is to occur.
The database user for whom the user mapping is defined must be a superuser and
the user connected to the local server when invoking an EDB Clone Schema
function.
The following example creates the foreign server for the database containing the schema
to be cloned, and to receive the cloned schema as well.
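The server definition matches the foreign server shown in the psql output below:
CREATE SERVER local_server FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'localhost', port '5444', dbname 'edb');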
For more information about using the CREATE SERVER command, see the PostgreSQL
core documentation at:
https://www.postgresql.org/docs/10/static/sql-createserver.html
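The following example creates the user mapping shown in the psql output below:
CREATE USER MAPPING FOR enterprisedb SERVER local_server
OPTIONS (user 'enterprisedb', password 'password');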
For more information about using the CREATE USER MAPPING command, see the
PostgreSQL core documentation at:
https://www.postgresql.org/docs/10/static/sql-createusermapping.html
The following psql commands show the foreign server and user mapping:
edb=# \des+
List of foreign servers
-[ RECORD 1 ]--------+----------------------------------------------
Name | local_server
Owner | enterprisedb
Foreign-data wrapper | postgres_fdw
Access privileges |
Type |
Version |
FDW options | (host 'localhost', port '5444', dbname 'edb')
Description |
edb=# \deu+
List of user mappings
Server | User name | FDW options
--------------+--------------+----------------------------------------------
local_server | enterprisedb | ("user" 'enterprisedb', password 'password')
(1 row)
When database superuser enterprisedb invokes a cloning function, the database user
enterprisedb with its password is used to connect to local_server on the
localhost with port 5444 to database edb.
In this case, the mapped database user and the database user used to connect to the
local edb database happen to be the same (enterprisedb), but that is not a
requirement.
For specific usage of these foreign server and user mapping examples, see the example
given in Section 9.2.1.
The foreign server defining the originating database server and its database containing the
source schema to be cloned is referred to as the source server or the remote server.
The foreign server defining the database server and its database to receive the schema to
be cloned is referred to as the target server or the local server.
The target server is also referred to as the local server because this server is the one to
which you must be connected when invoking the remotecopyschema or
remotecopyschema_nb function.
The user mappings define the connection and authentication information for the foreign
servers.
All of these foreign servers and user mappings must be created within the target
database of the target/local server.
The database user for whom the user mappings are defined must be a superuser
and the user connected to the local server when invoking an EDB Clone Schema
function.
The following example creates the foreign server for the local, target database that is to
receive the cloned schema.
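The definition matches the tgt_server entry shown in the psql output below:
CREATE SERVER tgt_server FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'localhost', port '5444', dbname 'tgtdb');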
The following example creates the foreign server for the remote, source database that is
to be the source for the cloned schema.
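The definitions below match the src_server entry and the user mappings shown in the
psql output that follows:
CREATE SERVER src_server FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host '192.168.2.28', port '5444', dbname 'srcdb');
CREATE USER MAPPING FOR enterprisedb SERVER tgt_server
OPTIONS (user 'tgtuser', password 'tgtpassword');
CREATE USER MAPPING FOR enterprisedb SERVER src_server
OPTIONS (user 'srcuser', password 'srcpassword');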
The following psql commands show the foreign servers and user mappings:
tgtdb=# \des+
List of foreign servers
-[ RECORD 1 ]--------+---------------------------------------------------
Name | src_server
Owner | tgtuser
Foreign-data wrapper | postgres_fdw
Access privileges |
Type |
Version |
FDW options | (host '192.168.2.28', port '5444', dbname 'srcdb')
Description |
-[ RECORD 2 ]--------+---------------------------------------------------
Name | tgt_server
Owner | tgtuser
Foreign-data wrapper | postgres_fdw
Access privileges |
Type |
Version |
FDW options | (host 'localhost', port '5444', dbname 'tgtdb')
Description |
tgtdb=# \deu+
List of user mappings
Server | User name | FDW options
------------+--------------+--------------------------------------------
src_server | enterprisedb | ("user" 'srcuser', password 'srcpassword')
tgt_server | enterprisedb | ("user" 'tgtuser', password 'tgtpassword')
(2 rows)
When database superuser enterprisedb invokes a cloning function, the database user
tgtuser with password tgtpassword is used to connect to tgt_server on the
localhost with port 5444 to database tgtdb.
Note: Be sure the pg_hba.conf file of the database server running the source database
srcdb has an appropriate entry permitting connection from the target server location
(address 192.168.2.27 in the following example) connecting with the database user
srcuser that was included in the user mapping for the foreign server src_server
defining the source server and database.
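A representative pg_hba.conf entry (the authentication method is illustrative):
host    srcdb    srcuser    192.168.2.27/32    md5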
For specific usage of these foreign server and user mapping examples, see the example
given in Section 9.2.3.
The EDB Clone Schema functions are created in the edb_util schema when the
parallel_clone and edb_cloneschema extensions are installed.
Verify the following conditions before using an EDB Clone Schema function:
You are connected to the target or local database as the database superuser
defined in the CREATE USER MAPPING command for the foreign server of the
target or local database. See Section 9.1.4.1 for information on the user mapping
for the localcopyschema or localcopyschema_nb function. See Section
9.1.4.2 for information on the user mapping for the remotecopyschema or
remotecopyschema_nb function.
The edb_util schema is in the search path, or the cloning function is to be
invoked with the edb_util prefix.
The target schema does not exist in the target database.
When using the remote copy functions, if the on_tblspace parameter is to be
set to true, then the target database cluster must contain all tablespaces that are
referenced by objects in the source schema; otherwise, creation of the DDL
statements for those database objects will fail in the target schema, causing the
cloning process to fail.
When using the remote copy functions, if the copy_acls parameter is to be set
to true, then all roles that have GRANT privileges on objects in the source schema
must exist in the target database cluster; otherwise, granting privileges to those
roles will fail in the target schema, causing the cloning process to fail.
pgAgent is running against the target database if the non-blocking form of the
function is to be used.
For information about pgAgent, see the following section of the pgAdmin documentation
available at:
https://www.pgadmin.org/docs/pgadmin4/dev/pgagent.html
9.2.1 localcopyschema
The localcopyschema function copies a schema and its database objects within a local
database specified within the source_fdw foreign server from the source schema to the
specified target schema within the same database.
localcopyschema(
source_fdw TEXT,
source_schema TEXT,
target_schema TEXT,
log_filename TEXT
[, on_tblspace BOOLEAN
[, verbose_on BOOLEAN
[, copy_acls BOOLEAN
[, worker_count INTEGER ]]]]
)
A BOOLEAN value is returned by the function. If the function succeeds, then true is
returned. If the function fails, then false is returned.
Parameters
source_fdw
Name of the foreign server managed by the postgres_fdw foreign data wrapper
from which database objects are to be cloned.
source_schema
Name of the schema from which database objects are to be cloned.
target_schema
Name of the schema into which database objects are to be cloned from the source
schema.
log_filename
Name of the log file in which information from the function is recorded. The log
file is created under the directory specified by the log_directory configuration
parameter in the postgresql.conf file.
on_tblspace
BOOLEAN value to specify whether or not database objects are to be created within
their tablespaces. If false is specified, then the TABLESPACE clause is not
included in the applicable CREATE DDL statement when added to the target
schema. If true is specified, then the TABLESPACE clause is included in the
CREATE DDL statement when added to the target schema. If the on_tblspace
parameter is omitted, the default value is false.
verbose_on
BOOLEAN value to specify whether or not the DDL statements used to create the
database objects in the target schema are recorded in the log file. If true is
specified, the DDL statements are logged. If the verbose_on parameter is
omitted, the default value is false.
copy_acls
BOOLEAN value to specify whether or not the access control list (ACL) is to be
included while creating objects in the target schema. The access control list is the
set of GRANT privilege statements. If false is specified, then the access control
list is not included for the target schema. If true is specified, then the access
control list is included for the target schema. If the copy_acls parameter is
omitted, the default value is false.
worker_count
Number of background workers used to copy table data in parallel. If the
worker_count parameter is omitted, the default value is 1.
Example
The following example shows the cloning of schema edb containing a set of database
objects to target schema edbcopy, both within database edb as defined by
local_server.
Before invoking the function, the connection is made by database user enterprisedb
to database edb.
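A representative invocation (the log file name is illustrative):
edb=# SELECT edb_util.localcopyschema ('local_server','edb','edbcopy','clone_edb_edbcopy');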
After the clone has completed, the following shows some of the database objects copied
to the edbcopy schema:
edb=# \dv
List of relations
Schema | Name | Type | Owner
---------+----------+------+--------------
edbcopy | salesemp | view | enterprisedb
(1 row)
edb=# \di
List of relations
Schema | Name | Type | Owner | Table
---------+---------------+-------+--------------+---------
edbcopy | dept_dname_uq | index | enterprisedb | dept
edbcopy | dept_pk | index | enterprisedb | dept
edbcopy | emp_pk | index | enterprisedb | emp
edbcopy | jobhist_pk | index | enterprisedb | jobhist
(4 rows)
edb=# \ds
List of relations
Schema | Name | Type | Owner
---------+------------+----------+--------------
edbcopy | next_empno | sequence | enterprisedb
(1 row)
9.2.2 localcopyschema_nb
The localcopyschema_nb function copies a schema and its database objects within a
local database specified within the source_fdw foreign server from the source schema
to the specified target schema within the same database, but in a non-blocking manner as
a job submitted to pgAgent.
localcopyschema_nb(
source_fdw TEXT,
source TEXT,
target TEXT,
log_filename TEXT
[, on_tblspace BOOLEAN
[, verbose_on BOOLEAN
[, copy_acls BOOLEAN
[, worker_count INTEGER ]]]]
)
An INTEGER value job ID is returned by the function for the job submitted to pgAgent. If
the function fails, then null is returned.
The source_fdw, source, target, and log_filename are required parameters while
all other parameters are optional.
After completion of the pgAgent job, remove the job with the
remove_log_file_and_job function (see Section 9.2.6).
Parameters
source_fdw
Name of the foreign server managed by the postgres_fdw foreign data wrapper
from which database objects are to be cloned.
source
Name of the schema from which database objects are to be cloned.
target
Name of the schema into which database objects are to be cloned from the source
schema.
log_filename
Name of the log file in which information from the function is recorded. The log
file is created under the directory specified by the log_directory configuration
parameter in the postgresql.conf file.
on_tblspace
BOOLEAN value to specify whether or not database objects are to be created within
their tablespaces. If false is specified, then the TABLESPACE clause is not
included in the applicable CREATE DDL statement when added to the target
schema. If true is specified, then the TABLESPACE clause is included in the
CREATE DDL statement when added to the target schema. If the on_tblspace
parameter is omitted, the default value is false.
verbose_on
BOOLEAN value to specify whether or not the DDL statements used to create the
database objects in the target schema are recorded in the log file. If true is
specified, the DDL statements are logged. If the verbose_on parameter is
omitted, the default value is false.
copy_acls
BOOLEAN value to specify whether or not the access control list (ACL) is to be
included while creating objects in the target schema. The access control list is the
set of GRANT privilege statements. If false is specified, then the access control
list is not included for the target schema. If true is specified, then the access
control list is included for the target schema. If the copy_acls parameter is
omitted, the default value is false.
worker_count
Number of background workers used to copy table data in parallel. If the
worker_count parameter is omitted, the default value is 1.
Example
The same cloning operation is performed as the example in Section 9.2.1, but using the
non-blocking function localcopyschema_nb.
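A representative invocation (the log file name is illustrative; the function returns the
pgAgent job ID):
edb=# SELECT edb_util.localcopyschema_nb ('local_server','edb','edbcopy','clone_edb_edbcopy');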
The following command can be used to observe if pgAgent is running on the appropriate
local database:
[root@localhost ~]# ps -ef | grep pgagent
root 4518 1 0 11:35 pts/1 00:00:00 pgagent -s /tmp/pgagent_edb_log
hostaddr=127.0.0.1 port=5444 dbname=edb user=enterprisedb password=password
root 4525 4399 0 11:35 pts/1 00:00:00 grep --color=auto pgagent
If pgAgent is not running, it can be started as shown by the following. The pgagent
program file is located in the bin subdirectory of the Advanced Server installation
directory.
[root@localhost bin]# ./pgagent -l 2 -s /tmp/pgagent_edb_log hostaddr=127.0.0.1 port=5444
dbname=edb user=enterprisedb password=password
Note: The pgagent -l 2 option starts pgAgent in DEBUG mode, which logs continuous
debugging information into the log file specified with the -s option. Use a lower value
for the -l option, or omit it entirely to record less information.
9.2.3 remotecopyschema
The remotecopyschema function copies a schema and its database objects from a
source schema in the remote source database specified within the source_fdw foreign
server to a target schema in the local target database specified within the target_fdw
foreign server.
remotecopyschema(
source_fdw TEXT,
target_fdw TEXT,
source_schema TEXT,
target_schema TEXT,
log_filename TEXT
[, on_tblspace BOOLEAN
[, verbose_on BOOLEAN
[, copy_acls BOOLEAN
[, worker_count INTEGER ]]]]
)
A BOOLEAN value is returned by the function. If the function succeeds, then true is
returned. If the function fails, then false is returned.
Parameters
source_fdw
Name of the foreign server managed by the postgres_fdw foreign data wrapper
from which database objects are to be cloned.
target_fdw
Name of the foreign server managed by the postgres_fdw foreign data wrapper
to which database objects are to be cloned.
source_schema
Name of the schema from which database objects are to be cloned.
target_schema
Name of the schema into which database objects are to be cloned from the source
schema.
log_filename
Name of the log file in which information from the function is recorded. The log
file is created under the directory specified by the log_directory configuration
parameter in the postgresql.conf file.
on_tblspace
BOOLEAN value to specify whether or not database objects are to be created within
their tablespaces. If false is specified, then the TABLESPACE clause is not
included in the applicable CREATE DDL statement when added to the target
schema. If true is specified, then the TABLESPACE clause is included in the
CREATE DDL statement when added to the target schema. If the on_tblspace
parameter is omitted, the default value is false.
Note: If true is specified and a database object has a TABLESPACE clause, but
that tablespace does not exist in the target database cluster, then the cloning
function fails.
verbose_on
BOOLEAN value to specify whether or not the DDL statements used to create the
database objects in the target schema are recorded in the log file. If true is
specified, the DDL statements are logged. If the verbose_on parameter is
omitted, the default value is false.
copy_acls
BOOLEAN value to specify whether or not the access control list (ACL) is to be
included while creating objects in the target schema. The access control list is the
set of GRANT privilege statements. If false is specified, then the access control
list is not included for the target schema. If true is specified, then the access
control list is included for the target schema. If the copy_acls parameter is
omitted, the default value is false.
Note: If true is specified and a role with GRANT privilege does not exist in the
target database cluster, then the cloning function fails.
worker_count
Number of background workers used to copy table data in parallel. If the
worker_count parameter is omitted, the default value is 1.
Example
The following example shows the cloning of schema srcschema within database srcdb
as defined by src_server to target schema tgtschema within database tgtdb as
defined by tgt_server.
Before invoking the function, the connection is made by database user enterprisedb
to database tgtdb. A worker_count of 4 is specified for this function.
tgtdb=# SELECT edb_util.remotecopyschema
('src_server','tgt_server','srcschema','tgtschema','clone_rmt_src_tgt',worker_count => 4);
remotecopyschema
------------------
t
(1 row)
The following displays the status from the log file during various points in the cloning
process:
tgtdb=# SELECT edb_util.process_status_from_log('clone_rmt_src_tgt');
process_status_from_log
----------------------------------------------------------------------------------------------
-------------------------------------------
---
(RUNNING,"28-JUN-17 13:18:05.299953 -04:00",4021,INFO,"STAGE: DATA-COPY","[0][0] successfully
copied data in [tgtschema.pgbench_tellers]
")
(1 row)
tgtdb=# SELECT edb_util.process_status_from_log('clone_rmt_src_tgt');
process_status_from_log
----------------------------------------------------------------------------------------------
(RUNNING,"28-JUN-17 13:18:10.550393 -04:00",4039,INFO,"STAGE: POST-DATA","CREATE PRIMARY KEY
CONSTRAINT pgbench_tellers_pkey successful"
)
(1 row)
When the remotecopyschema function was invoked, four background workers were
specified.
The following portion of the log file clone_rmt_src_tgt shows the status of the
parallel data copying operation using four background workers:
Wed Jun 28 13:18:05.232949 2017 EDT: [4019] INFO: [STAGE: DATA-COPY] [0] table count [4]
Wed Jun 28 13:18:05.233321 2017 EDT: [4019] INFO: [STAGE: DATA-COPY] [0][0] worker started to
copy data
Wed Jun 28 13:18:05.233640 2017 EDT: [4019] INFO: [STAGE: DATA-COPY] [0][1] worker started to
copy data
Wed Jun 28 13:18:05.233919 2017 EDT: [4019] INFO: [STAGE: DATA-COPY] [0][2] worker started to
copy data
Wed Jun 28 13:18:05.234231 2017 EDT: [4019] INFO: [STAGE: DATA-COPY] [0][3] worker started to
copy data
Wed Jun 28 13:18:05.298174 2017 EDT: [4024] INFO: [STAGE: DATA-COPY] [0][3] successfully
copied data in [tgtschema.pgbench_branches]
Wed Jun 28 13:18:05.299913 2017 EDT: [4021] INFO: [STAGE: DATA-COPY] [0][0] successfully
copied data in [tgtschema.pgbench_tellers]
Wed Jun 28 13:18:06.634310 2017 EDT: [4022] INFO: [STAGE: DATA-COPY] [0][1] successfully
copied data in [tgtschema.pgbench_history]
Wed Jun 28 13:18:10.477333 2017 EDT: [4023] INFO: [STAGE: DATA-COPY] [0][2] successfully
copied data in [tgtschema.pgbench_accounts]
Wed Jun 28 13:18:10.477609 2017 EDT: [4019] INFO: [STAGE: DATA-COPY] [0] all workers finished
[4]
Wed Jun 28 13:18:10.477654 2017 EDT: [4019] INFO: [STAGE: DATA-COPY] [0] copy done [4] tables
Wed Jun 28 13:18:10.493938 2017 EDT: [4019] INFO: [STAGE: DATA-COPY] successfully copied data
into tgtschema
Note that the DATA-COPY log message includes two square-bracketed numbers (for
example, [0][3]). The first number is the job index and the second number is the worker
index. The worker index values range from 0 to 3 for the four background workers.
In case two clone schema jobs are running in parallel, the first log file will have 0 as the
job index whereas the second will have 1 as the job index.
9.2.4 remotecopyschema_nb
The remotecopyschema_nb function copies a schema and its database objects from a
source schema in the remote source database specified within the source_fdw foreign
server to a target schema in the local target database specified within the target_fdw
foreign server, but in a non-blocking manner as a job submitted to pgAgent.
remotecopyschema_nb(
source_fdw TEXT,
target_fdw TEXT,
source TEXT,
target TEXT,
log_filename TEXT
[, on_tblspace BOOLEAN
[, verbose_on BOOLEAN
[, copy_acls BOOLEAN
[, worker_count INTEGER ]]]]
)
An INTEGER value job ID is returned by the function for the job submitted to pgAgent. If
the function fails, then null is returned.
After completion of the pgAgent job, remove the job with the
remove_log_file_and_job function (see Section 9.2.6).
Parameters
source_fdw
Name of the foreign server managed by the postgres_fdw foreign data wrapper
from which database objects are to be cloned.
target_fdw
Name of the foreign server managed by the postgres_fdw foreign data wrapper
to which database objects are to be cloned.
source
Name of the schema from which database objects are to be cloned.
target
Name of the schema into which database objects are to be cloned from the source
schema.
log_filename
Name of the log file in which information from the function is recorded. The log
file is created under the directory specified by the log_directory configuration
parameter in the postgresql.conf file.
on_tblspace
BOOLEAN value to specify whether or not database objects are to be created within
their tablespaces. If false is specified, then the TABLESPACE clause is not
included in the applicable CREATE DDL statement when added to the target
schema. If true is specified, then the TABLESPACE clause is included in the
CREATE DDL statement when added to the target schema. If the on_tblspace
parameter is omitted, the default value is false.
Note: If true is specified and a database object has a TABLESPACE clause, but
that tablespace does not exist in the target database cluster, then the cloning
function fails.
verbose_on
BOOLEAN value to specify whether or not the DDL statements used to create the
database objects in the target schema are recorded in the log file. If true is
specified, the DDL statements are logged. If the verbose_on parameter is
omitted, the default value is false.
copy_acls
BOOLEAN value to specify whether or not the access control list (ACL) is to be
included while creating objects in the target schema. The access control list is the
set of GRANT privilege statements. If false is specified, then the access control
list is not included for the target schema. If true is specified, then the access
control list is included for the target schema. If the copy_acls parameter is
omitted, the default value is false.
Note: If true is specified and a role with GRANT privilege does not exist in the
target database cluster, then the cloning function fails.
worker_count
Number of background workers used to copy table data in parallel. If the
worker_count parameter is omitted, the default value is 1.
Example
The same cloning operation is performed as the example in Section 9.2.3, but using the
non-blocking function remotecopyschema_nb.
The following command starts pgAgent on the target database tgtdb. The pgagent
program file is located in the bin subdirectory of the Advanced Server installation
directory.
[root@localhost bin]# ./pgagent -l 1 -s /tmp/pgagent_tgtdb_log hostaddr=127.0.0.1 port=5444
user=enterprisedb dbname=tgtdb password=password
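With pgAgent running, the clone can be submitted using the same parameters as in
Section 9.2.3 (the call returns the pgAgent job ID):
tgtdb=# SELECT edb_util.remotecopyschema_nb
('src_server','tgt_server','srcschema','tgtschema','clone_rmt_src_tgt',worker_count => 4);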
The following removes the log file and the pgAgent job:
tgtdb=# SELECT edb_util.remove_log_file_and_job ('clone_rmt_src_tgt',2);
remove_log_file_and_job
-------------------------
t
(1 row)
9.2.5 process_status_from_log
process_status_from_log (
log_file TEXT
)
The function returns the following fields from the log file: the status of the cloning
process (for example, RUNNING), a timestamp, the process ID, the message level (for
example, INFO), the stage (for example, DATA-COPY or POST-DATA), and the message text.
Parameters
log_file
Name of the log file recording the cloning of a schema as specified when the
cloning function was invoked.
Example
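The calls shown in Section 9.2.3 illustrate the usage:
tgtdb=# SELECT edb_util.process_status_from_log('clone_rmt_src_tgt');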
9.2.6 remove_log_file_and_job
remove_log_file_and_job (
{ log_file TEXT |
job_id INTEGER |
log_file TEXT, job_id INTEGER
}
)
Values for any or both of the two parameters may be specified when invoking the
remove_log_file_and_job function:
If only log_file is specified, then the function will only remove the log file.
If only job_id is specified, then the function will only remove the job.
If both are specified, then the function will remove the log file and the job.
Parameters
log_file
Name of the log file to be removed.
job_id
Job ID of the pgAgent job to be removed.
Example
The following example removes only the log file, given the log filename.
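For instance (using the log file name from the earlier examples):
tgtdb=# SELECT edb_util.remove_log_file_and_job ('clone_rmt_src_tgt');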
The following example removes only the job, given the job ID.
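For instance (the named-parameter form is assumed):
tgtdb=# SELECT edb_util.remove_log_file_and_job (job_id => 2);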
The following example removes the log file and the job, given both values:
tgtdb=# SELECT edb_util.remove_log_file_and_job ('clone_rmt_src_tgt',2);
remove_log_file_and_job
-------------------------
t
(1 row)
10 PL/Java
The PL/Java package provides access to Java stored procedures, triggers, and functions
via the JDBC interface. Unless otherwise noted, the commands and paths noted in the
following section assume that you have performed an installation with the interactive
installer.
Before installing PL/Java for use with a standard Java virtual machine (JVM) on a Linux
system, you must first confirm that a Java runtime environment (version 1.5 or later) is
installed on your system. Installation of a Java development kit also provides a Java
runtime environment.
Step 1: Edit the postgresql.conf file located under the data directory of your
Advanced Server installation and add (or modify) the following settings:
pljava.classpath = 'path_to_pljava.jar'
pljava.libjvm_location = 'path_to_libjvm.so'
For example, the following lists the paths for a default installation with Java version 1.8:
pljava.classpath = '/opt/edb/as10/share/pljava/pljava-1.5.0.jar'
pljava.libjvm_location = '/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.91-1.b14.el6.x86_64/jre/lib/amd64/server/libjvm.so'
Step 3: You can use the CREATE EXTENSION command to install PL/Java. To install the
PL/Java extension, login to the database in which you want to install PL/Java with the
psql or pgAdmin client, and invoke the following command:
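For example (the extension name pljava is assumed):
edb=# CREATE EXTENSION pljava;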
The edb-psql client displays two rows indicating that java and javau (Java
Untrusted) have been installed in the database.
edb=# SELECT * FROM pg_language WHERE lanname LIKE 'java%';
lanname | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator |
lanacl
---------+----------+---------+--------------+---------------+-----------+--------------+-----
--------------------------
java | 10 | t | t | 16462 | 0 | 0 |
{enterprisedb=U/enterprisedb}
javau | 10 | t | f | 16463 | 0 | 0 |
(2 rows)
Step 1: Edit the postgresql.conf file and add (or modify) the following settings:
pljava.classpath = 'POSTGRES_INSTALL_HOME\lib\pljava.jar'
pljava.libjvm_location = 'path_to_jvm.dll'
Step 3: Modify the PATH setting used by the server, adding the following two entries:
%JRE_HOME%\bin;%JRE_HOME%\bin\client
Where JRE_HOME specifies the installation directory of your Java runtime environment.
If you have a Java development kit, substitute the location of $JDK_HOME/jre for
JRE_HOME.
Step 4: Use the Postgres CREATE EXTENSION command to install PL/Java. To run the
installation script, use the psql or pgAdmin client to connect to the database in which you
wish to install PL/Java and invoke the following command:
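For example (as on Linux):
edb=# CREATE EXTENSION pljava;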
The client will return a result set that includes java and javau (Java Untrusted).
After PL/Java is installed, you can create functions that execute static Java methods. For
example, the following CREATE FUNCTION statement creates a function named
getsysprop:
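A representative definition (the single-VARCHAR signature is inferred from the usage example below) maps the SQL function to the static Java method named in the AS clause:
CREATE FUNCTION getsysprop(VARCHAR)
RETURNS VARCHAR
AS 'java.lang.System.getProperty'
LANGUAGE java;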
When invoked, getsysprop will execute the getProperty (static) method defined
within the java.lang.System class.
SELECT getsysprop('user.home');
getsysprop
---------------
/opt/edb/as10
(1 row)
The example that follows demonstrates the procedure used to create and install a simple
HelloWorld program.
Step 1: Save the following code in a file named HelloWorld.java, in a directory path
that matches the package name (com/mycompany/helloworld):
package com.mycompany.helloworld;
public class HelloWorld
{
public static String helloWorld()
{
return "Hello World";
}
}
Step 2: Compile the file:
$ javac HelloWorld.java
Step 3: Bundle the compiled class into a jar file (the class file must reside in the path
that matches its package):
$ jar cf helloworld.jar com/mycompany/helloworld/HelloWorld.class
Step 4: Open the edb-psql client, and install the jar file with the following command:
SELECT sqlj.install_jar('file:///file_path/helloworld.jar', 'helloworld', true);
Where file_path is the directory containing the helloworld.jar file. For example,
if the /tmp directory is the file_path:
SELECT sqlj.install_jar('file:///tmp/helloworld.jar', 'helloworld', true);
To confirm that the jar file has been loaded correctly, perform a SELECT statement on the
sqlj.jar_entry and sqlj.jar_repository tables.
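For example, the following query lists the jar files currently installed in the database:
SELECT * FROM sqlj.jar_repository;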
Step 5: Use the sqlj.set_classpath function to add the jar file to the class path of a
schema:
SELECT sqlj.set_classpath('public', 'helloworld');
The sqlj.classpath_entry table will now include an entry for the helloworld
class file.
Step 6: Create a function that uses Java to call the static function declared in the jar file:
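A minimal definition might look like the following; the AS clause names the fully qualified static method, which PL/Java resolves through the class path set in Step 5:
CREATE FUNCTION helloworld()
RETURNS VARCHAR
AS 'com.mycompany.helloworld.HelloWorld.helloWorld'
LANGUAGE java;
Then execute the function:
SELECT helloworld();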
helloworld
-------------
Hello World
(1 row)
The official PL/Java distribution includes examples and documentation. For
more information about using PL/Java, see the project page at:
https://github.com/tada/pljava/wiki
11.1 COMMENT
COMMENT ON
{
AGGREGATE aggregate_name ( aggregate_signature ) |
CAST (source_type AS target_type) |
COLLATION object_name |
COLUMN relation_name.column_name |
CONSTRAINT constraint_name ON table_name |
CONSTRAINT constraint_name ON DOMAIN domain_name |
CONVERSION object_name |
DATABASE object_name |
DOMAIN object_name |
EXTENSION object_name |
EVENT TRIGGER object_name |
FOREIGN DATA WRAPPER object_name |
FOREIGN TABLE object_name |
FUNCTION func_name ([[argmode] [argname] argtype [, ...]])|
INDEX object_name |
LARGE OBJECT large_object_oid |
MATERIALIZED VIEW object_name |
OPERATOR operator_name (left_type, right_type) |
OPERATOR CLASS object_name USING index_method |
OPERATOR FAMILY object_name USING index_method |
PACKAGE object_name |
POLICY policy_name ON table_name |
[ PROCEDURAL ] LANGUAGE object_name |
PROCEDURE proc_name [([[argmode] [argname] argtype [, ...]])] |
PUBLIC SYNONYM object_name |
ROLE object_name |
RULE rule_name ON table_name |
SCHEMA object_name |
SEQUENCE object_name |
SERVER object_name |
TABLE object_name |
TABLESPACE object_name |
TEXT SEARCH CONFIGURATION object_name |
TEXT SEARCH DICTIONARY object_name |
TEXT SEARCH PARSER object_name |
TRIGGER trigger_name ON table_name |
TYPE object_name |
VIEW object_name
} IS 'text'
Parameters
object_name
The name (optionally schema-qualified) of the object on which you are commenting.
aggregate_signature
The signature of the aggregate, in one of the following forms:
* |
[ argmode ] [ argname ] argtype [ , ... ] |
[ [ argmode ] [ argname ] argtype [ , ... ] ]
ORDER BY [ argmode ] [ argname ] argtype [ , ... ]
Include the CAST clause to create a comment about a cast. When creating a
comment about a cast, source_type specifies the source data type of the cast,
and target_type specifies the target data type of the cast.
COLUMN relation_name.column_name
Include the COLUMN clause to create a comment about a column. relation_name
specifies the (optionally schema-qualified) name of the table, view, composite
type, or foreign table in which the column resides; column_name specifies the
name of the column.
large_object_oid
The OID of the large object on which you are commenting.
Include the OPERATOR CLASS clause to add a comment about an operator class.
object_name specifies the (optionally schema-qualified) name of the operator
class on which you are commenting. index_method specifies the associated index
method of the operator class.
text
The comment, written as a string literal; or NULL to drop the comment.
Notes:
Example:
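The following illustrative example adds a comment on the emp table of the sample database, and then removes it by setting the comment to NULL:
COMMENT ON TABLE emp IS 'Contains employee information.';
COMMENT ON TABLE emp IS NULL;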
For more information about using the COMMENT command, please see the PostgreSQL
core documentation at:
https://www.postgresql.org/docs/10/static/sql-comment.html
12.1 edb_dir
The edb_dir table contains one row for each alias that points to a directory created with
the CREATE DIRECTORY command. A directory is an alias for a pathname that allows a
user limited access to the host file system.
You can use a directory to fence a user into a specific directory tree within the file
system. For example, the UTL_FILE package offers functions that permit a user to read
and write files and directories in the host file system, but only within paths for which the
database administrator has granted access via a CREATE DIRECTORY command.
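For example, an alias created with the CREATE DIRECTORY command (the alias name and path below are illustrative) is recorded as one row in edb_dir:
CREATE DIRECTORY empdir AS '/export/empdir';
SELECT * FROM edb_dir;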
12.2 edb_all_resource_groups
The edb_all_resource_groups table contains one row for each resource group
created with the CREATE RESOURCE GROUP command and displays the number of
active processes in each resource group.
12.3 edb_password_history
The edb_password_history table contains one row for each password change. The
table is shared across all databases within a cluster.
12.4 edb_policy
The edb_policy table contains one row for each policy.
12.5 edb_profile
The edb_profile table stores information about the available profiles. edb_profile
is shared across all databases within a cluster.
12.6 edb_resource_group
The edb_resource_group table contains one row for each resource group created with
the CREATE RESOURCE GROUP command.
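For example, after creating a resource group (the group name below is illustrative), the group appears as one row in the table:
CREATE RESOURCE GROUP resgrp_a;
SELECT * FROM edb_resource_group;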
12.7 edb_variable
The edb_variable table contains one row for each package-level variable (each
variable declared within a package).
12.8 pg_synonym
The pg_synonym table contains one row for each synonym created with the CREATE
SYNONYM command or CREATE PUBLIC SYNONYM command.
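For example, a public synonym created for the emp table of the sample database (an illustrative example) is recorded as one row in pg_synonym:
CREATE PUBLIC SYNONYM personnel FOR enterprisedb.emp;
SELECT * FROM pg_synonym;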
12.9 product_component_version
The product_component_version table contains information about the version of
the product that is installed.
You can retrieve a list of the words that Advanced Server recognizes as keywords, along
with their category, by querying the pg_get_keywords() function:
acctg=# SELECT * FROM pg_get_keywords();
word | catcode | catdesc
---------------------+---------+---------------------------------
abort | U | unreserved
absolute | U | unreserved
access | U | unreserved
...
Note that any character can be used in an identifier if the name is enclosed in double
quotes. You can selectively query the pg_get_keywords() function to retrieve an up-
to-date list of the Advanced Server keywords that belong to a specific category:
U - The word is unreserved; it may be used freely as a name for an object.
C - The word is used internally, and may not be used as a name for a function or
type.
T - The word is used internally, but may be used as a name for a function or type.
R - The word is reserved, and may not be used as a name for an object.
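For example, the following query retrieves only the fully reserved keywords:
acctg=# SELECT word FROM pg_get_keywords() WHERE catcode = 'R';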
For more information about Advanced Server identifiers and keywords, please refer to
the PostgreSQL core documentation at:
https://www.postgresql.org/docs/10/static/sql-syntax-lexical.html