Tutorial All PP Slony Replication
Introduction
Slony is a master-to-multiple-slaves replication system with cascading and failover
capabilities. When used with Postgres Plus, it becomes a powerful tool for
safeguarding data, providing failover/standby capabilities, optimizing online transaction
processing, offloading reporting/BI queries, backup and restore, and migrating systems.
This tutorial shows how to efficiently set up a basic master-slave system that can be
applied to a variety of applications.
This EnterpriseDB Quick Tutorial helps you get started with the Postgres Plus Standard
Server or Postgres Plus Advanced Server database products in a Linux or Windows
environment. It is assumed that you have already downloaded and installed Postgres Plus
Standard Server or Postgres Plus Advanced Server on your desktop or laptop computer.
This Quick Tutorial is designed to help you expedite your Technical Evaluation of
Postgres Plus Standard Server or Postgres Plus Advanced Server. For more informational
assets on conducting your evaluation of Postgres Plus, visit the self-service web site,
Postgres Plus Open Source Adoption.
Feature Description
Slony-I is a pre-bundled enterprise module installed by default with Postgres Plus
Standard Server and Postgres Plus Advanced Server.
Slony replication is very versatile and can be used for a number of different applications:
Controlled Switchover
At times, it may be necessary to bring the master node offline for a variety of reasons,
such as to perform system maintenance or a system upgrade. In that case, you would
want one of the slave nodes to temporarily take over the role of the master node while the
original master becomes a slave node and can then be taken offline.
This exchange of roles is called controlled switchover. When you perform a controlled
switchover, the old master becomes a slave node and the slave node becomes the master
node. All other slave nodes are notified of the change and become subscribers of the new
master node.
Failover
In the event of a catastrophic failure of the master node, Slony supports the failover of the
master node to a slave node. Failover is an irreversible action so it should only be done if
the master node is not recoverable. Once the failover process is completed, the old master
can be removed from the configuration. The new master takes over replication to the
other slaves in the cluster.
Tutorial Steps
Slony Concepts
The following terms are used to describe the components of a Slony replication system:
Cluster: the named set of Postgres Plus databases participating in replication.
Node: an individual Postgres Plus database within the cluster.
Replication set: the collection of tables to be replicated.
Origin (master): the node where applications modify the data and from which changes are replicated.
Subscriber (slave): a node that receives the changes for a replication set.
slon: the daemon that propagates replication events for a node.
slonik: the command-line utility that processes the commands used to configure and administer the cluster.
Additional information about Slony and the Slony project can be found on the Postgres
Community Projects page of the EnterpriseDB web site.
The remainder of this Quick Tutorial describes how to set up a basic Slony replication
cluster (pictured below) with a master node and one slave node. The example is presented
using Standard Server on Linux, though the same steps apply to Advanced Server as well.
Differences in procedures for Microsoft Windows® systems are noted throughout the
instructions.
Note: When a distinction must be made between a Postgres Plus database that
participates in Slony replication and the operating environment in which the Postgres
Plus database resides, the Postgres Plus database will be referred to as the node and the
surrounding operating environment will be referred to as the host.
Note: For Advanced Server, substitute enterprisedb for postgres as the database
superuser.
Step 1: Verify that Slony is installed on the master host and on the slave host. The Slony
files are located under the bin subdirectory (dbserver/bin for Advanced Server) of
the Postgres Plus home directory. You should see files named slon and slonik in this
subdirectory.
Note: For Advanced Server on Windows, you should see files named edb-replication
and slonik. The file edb-replication takes the place of slon as the Slony executable.
Note: When installing Standard Server, if you de-selected the Slony component, the
slon and slonik programs are not installed. If you did not install Slony, you can use
StackBuilder Plus to add Slony to your Standard Server configuration.
Step 2: Create a working directory on the master host and on the slave host where you
will create and store the scripts to configure and run Slony.
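For example, on a Linux host you could create the /home/user/testcluster directory used
throughout this tutorial (the location is only an illustration; use any directory you prefer).
On Windows, the corresponding working directory in these instructions is C:\testcluster.
mkdir /home/user/testcluster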
Step 3: Create a superuser with catalog modification privileges on the master node and
on the slave node that will be used for Slony configuration and replication. In this
example, the superuser is named slony on both the master and on the slave.
This can be done in pgAdmin in Standard Server (Postgres Studio in Advanced Server).
Click the secondary mouse button on the Login Roles node in the Object Browser.
Choose New Login Role and fill out the new Login Role dialog box.
Click the Role Privileges tab and check the Superuser and Can Modify Catalog Directly
check boxes. Click the OK button.
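If you prefer the command line, a statement along the following lines creates an
equivalent role. This is only a sketch; the password shown is a placeholder, and the Can
Modify Catalog Directly attribute is set through the dialog box described above.
CREATE ROLE slony WITH LOGIN SUPERUSER PASSWORD 'password';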
Step 4: Configure and reload the pg_hba.conf file on the master host and on the slave
host.
You will need to make sure that it is configured properly on each host to allow
connections from every other host in the Slony cluster.
On the master host in a 2-node replication system, you add an entry that permits
connection to the master node from the slave host. On the slave host, you add the
corresponding entry that permits connection to the slave node from the master host.
For Linux only: Be sure there is an entry for the local node, which is used by the slon
daemon to communicate with its own node.
Be sure to reload the configuration file on each host after making the file modifications.
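A sketch of what these pg_hba.conf entries might look like for this example, assuming
the md5 authentication method and the addresses used throughout this tutorial; adjust
databases, addresses, and methods to match your environment. On the master host
(192.168.10.102):
host    reptest_node1    slony    192.168.10.103/32    md5
host    reptest_node1    slony    127.0.0.1/32         md5
On the slave host (192.168.10.103):
host    reptest_node2    slony    192.168.10.102/32    md5
host    reptest_node2    slony    127.0.0.1/32         md5
After editing the file, reload the configuration on each host (for example, with pg_ctl reload).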
Step 5: Define a pg_service.conf file on the master host and on the slave host.
For this example, the service name assigned to the master node is 192.168.10.102-slonik.
The service name assigned to the slave node is 192.168.10.103-slonik.
The service entries in the pg_service.conf file on the master host appear as follows:
[192.168.10.102-slonik]
dbname=reptest_node1
user=slony
password=password
[192.168.10.103-slonik]
dbname=reptest_node2
host=192.168.10.103
user=slony
password=password
The service entries in the pg_service.conf file on the slave host appear as follows:
[192.168.10.102-slonik]
dbname=reptest_node1
host=192.168.10.102
user=slony
password=password
[192.168.10.103-slonik]
dbname=reptest_node2
user=slony
password=password
Before running the slon or slonik programs, set the environment variable
PGSYSCONFDIR to the directory containing the pg_service.conf file. (For Windows
hosts, add a system environment variable named PGSYSCONFDIR.) This process is
described in more detail in the next section.
Step 6: Use the pg_dump utility program to create a backup file of the schema and table
definitions of the master tables that you wish to replicate. Do not include the table data
in your backup file.
The following example creates a backup file named sample.backup containing the
sample schema with the dept and emp tables from database reptest_node1:
cd /opt/PostgresPlus/8.4SS/bin
./pg_dump -U postgres -n sample -s -f /home/user/sample.backup reptest_node1
Step 7: Copy the backup file to the slave host. On the slave host, restore the backup file
to the database to which you want to replicate the master tables. Use the psql utility
program to restore the backup file.
The psql program is located in the bin subdirectory (dbserver/bin for Advanced Server)
of the Postgres Plus home directory.
In the following example, the createdb program is used to create the reptest_node2
database, and then the psql program is used to restore the sample schema from the
sample.backup file into the reptest_node2 database.
cd /opt/PostgresPlus/8.4SS/bin
./createdb -U postgres reptest_node2
./psql -U postgres -f /home/user/sample.backup reptest_node2
There are now table definitions, but no data, for sample.dept and sample.emp in the
reptest_node2 database.
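You can confirm this from psql; this sketch lists the restored table definitions and checks
that one of them is still empty:
cd /opt/PostgresPlus/8.4SS/bin
./psql -U postgres -c "\dt sample.*" reptest_node2
./psql -U postgres -c "SELECT count(*) FROM sample.emp" reptest_node2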
Configuration of the Slony cluster is done by supplying commands to the slonik utility
program. Separate script files are constructed for each step of the configuration process.
This will help ensure that you successfully complete each step of the process before
proceeding to the next one and will make troubleshooting much easier.
Step 1: Log on to the master host using any valid account on the computer. (For
Windows, use a computer administrator account.)
For Linux only: Set the environment variable PGSYSCONFDIR to the directory
containing the pg_service.conf file. Change to your working directory where you
will create and run the slonik scripts.
export PGSYSCONFDIR=/home/user/testcluster
cd /home/user/testcluster
For Windows only: Add the environment variable PGSYSCONFDIR to the system. In the
Control Panel, open System, select the Advanced tab, and click the Environment
Variables button. Add PGSYSCONFDIR with value C:\testcluster to System
Variables.
Note: Other applications using libpq may also be affected by the use of this
environment variable.
For Windows XP only: Restart your computer at this point. (The Slony service that you
will register in Step 8 will not have access to the new PGSYSCONFDIR system
environment variable unless you restart the computer.)
For Windows Vista only: If you are using User Account Control, run the Command
Prompt window as an administrator. (Click the secondary mouse button on Command
Prompt, and then in the Command Prompt submenu, click the primary mouse button on
Run as Administrator.)
In the Command Prompt window, verify PGSYSCONFDIR is set to the working directory.
Change to your working directory where you will create and run the slonik scripts.
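For example:
echo %PGSYSCONFDIR%
cd C:\testcluster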
Step 2: Repeat Step 1 on the slave host. You should now have two terminal sessions
running – one on the master host and one on the slave host.
Step 3: Continue with the following steps on the master host. Create a preamble file to
provide connection information for each node in the cluster. This connection information
is used by the slonik program to set up and control administration of the cluster.
# file preamble.sk
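# A sketch of the preamble contents assumed for this example; the cluster name
# and service names match those defined earlier in this tutorial:
cluster name = testcluster;
node 1 admin conninfo = 'service=192.168.10.102-slonik';
node 2 admin conninfo = 'service=192.168.10.103-slonik';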
The name assigned to the cluster is testcluster. The CONNINFO parameters reference
the service names defined in the pg_service.conf file.
Use the INCLUDE statement in other slonik scripts to reference the preamble file:
include <preamble.sk>;
Step 4: Create a script to define the Slony replication cluster. The replication cluster is
defined using the INIT CLUSTER command.
#!/opt/PostgresPlus/8.4SS/bin/slonik
#file initcluster.sk
include <preamble.sk>;
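# A sketch of the INIT CLUSTER command for this example (the comment text
# is illustrative):
init cluster (id = 1, comment = 'Master node - reptest_node1');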
In the INIT CLUSTER command, the master node is assigned a numeric identifier
(typically, 1) using the ID parameter.
For Linux only: Be sure you add execute permission to this script (and all other scripts
created in subsequent steps) before running it.
For Windows only: Create a batch file to run script initcluster.sk and all
subsequent slonik scripts created in these instructions.
@ECHO OFF
REM file runslonik.bat
REM
REM Batch file to run slonik
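REM The command below is a sketch; the installation path is an assumption and
REM should match your Postgres Plus installation. slonik reads the script passed
REM as the first batch-file argument from standard input.
"C:\Program Files\PostgresPlus\8.4SS\bin\slonik.exe" < %1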
Run the batch file with the initcluster.sk script as a parameter as follows:
C:\testcluster>runslonik initcluster.sk
<stdin>:7: Possible unsupported PostgreSQL version (80401) 8.4, defaulting to
8.3 support
Step 5: Create a script to add the slave node to the replication cluster. The node is added
using the STORE NODE command.
#!/opt/PostgresPlus/8.4SS/bin/slonik
#file addnode.sk
include <preamble.sk>;
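# A sketch of the STORE NODE command adding the slave as node 2 (the comment
# text is illustrative):
store node (id = 2, comment = 'Slave node - reptest_node2', event node = 1);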
Note: Specification of the EVENT NODE parameter is required in the STORE NODE
command for Slony version 2.x. Prior 1.x versions of Slony allowed this parameter to be
omitted and supplied a default value.
$ ./addnode.sk
./addnode.sk:7: Possible unsupported PostgreSQL version (80401) 8.4,
defaulting to 8.3 support
Step 6: Create a script to define the communication paths between the nodes in the
cluster. Paths are defined using the STORE PATH command.
#!/opt/PostgresPlus/8.4SS/bin/slonik
#file addpaths.sk
include <preamble.sk>;
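# A sketch of the STORE PATH commands for this 2-node cluster, reusing the
# service names defined in pg_service.conf:
store path (server = 1, client = 2, conninfo = 'service=192.168.10.102-slonik');
store path (server = 2, client = 1, conninfo = 'service=192.168.10.103-slonik');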
There should be a STORE PATH command from each node to every other node.
(Connections are not established unless they are actually used.)
For this 2-node example, the first STORE PATH command provides the communication
path for the Slony daemon running on the slave host to connect to the master node. The
second STORE PATH command provides the communication path for the Slony daemon
running on the master host to connect to the slave node.
$ ./addpaths.sk
Step 7: Create a Slony daemon configuration file on the master host and on the slave host
for the parameters needed to start the Slony daemon for each respective node. The
parameters define how the daemon connects to the named cluster.
#file 192.168.10.102.slon
cluster_name='testcluster'
conn_info='service=192.168.10.102-slonik'
#file 192.168.10.103.slon
cluster_name='testcluster'
conn_info='service=192.168.10.103-slonik'
Step 8: Start the Slony daemon on the master host.
For Linux only: The Slony daemon executable, named slon, is found in the bin
subdirectory (dbserver/bin for Advanced Server) of the Postgres Plus home directory.
The command to start the Slony daemon using the configuration file created in the prior
step is the following:
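This sketch assumes the working directory and configuration file names used in this
example; output is redirected to a slon.log file in the working directory so it can be
checked for errors:
cd /opt/PostgresPlus/8.4SS/bin
./slon -f /home/user/testcluster/192.168.10.102.slon > /home/user/testcluster/slon.log 2>&1 &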
You can verify that the Slony daemon is running by using the following command:
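For example, the following lists any running slon processes:
ps -ef | grep slon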
Check the log file, slon.log, to verify that there are no error messages.
For Windows only: Register the Slony service, and then register a Slony replication
engine with the Slony service.
A Slony replication engine is registered to the service after the service itself has been
created. The following shows the creation of a service named Slony along with a
replication engine:
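This sketch assumes the C:\testcluster working directory and the configuration file
created in Step 7; for Advanced Server, substitute edb-replication for slon.
slon -regservice Slony
slon -addengine Slony C:\testcluster\192.168.10.102.slon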
Start the Slony service by opening Control Panel, Administrative Tools, and then
Services. Select the Slony service and click the Start link.
Use the Windows Event Viewer for applications to check for any Slony errors. In the
same Administrative Tools window of the Control Panel that you used to start the Slony
service, open Event Viewer, (then open Windows Logs if you are using Windows Vista),
then open Application.
Step 9: For Linux only: On the slave host, repeat Step 8 using the Slony daemon
configuration file you created in Step 7 for the slave node:
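This sketch mirrors the Step 8 command, substituting the slave configuration file:
cd /opt/PostgresPlus/8.4SS/bin
./slon -f /home/user/testcluster/192.168.10.103.slon > /home/user/testcluster/slon.log 2>&1 &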
For Windows only: On the slave host repeat Step 8 using the Slony daemon
configuration file you created in Step 7 for the slave node.
Step 10: Continue with this step on the master host. Create a script to add a replication
set to the replication cluster. A replication set is added using the CREATE SET command.
A replication set contains the database objects that you wish to replicate from the master
node to the slave node.
#!/opt/PostgresPlus/8.4SS/bin/slonik
#file buildset.sk
include <preamble.sk>;
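# A sketch of the CREATE SET command defining replication set 1 with the
# master (node 1) as its origin (the comment text is illustrative):
create set (id = 1, origin = 1, comment = 'sample schema tables');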
$ ./buildset.sk
Step 11: Create a script to add tables to the replication set. Tables are added using the
SET ADD TABLE command.
Slony requires a primary key or a unique, non-null index on each replicated table;
otherwise, the SET ADD TABLE command will fail and an error message will be
displayed.
Note: Tables that are dependent upon each other by foreign key constraints must be
added to the same replication set.
#!/opt/PostgresPlus/8.4SS/bin/slonik
#file addtables.sk
include <preamble.sk>;
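# A sketch of the SET ADD TABLE commands for the two sample tables:
set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'sample.dept');
set add table (set id = 1, origin = 1, id = 2, fully qualified name = 'sample.emp');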
$ ./addtables.sk
Step 12: Create a script to subscribe the slave node to the replication set. Nodes are
subscribed to a replication set using the SUBSCRIBE SET command.
For each slave joining the replication set, provide the following parameters: the ID of the
replication set, the provider node supplying the data, the receiver node subscribing to the
set, and whether the receiver should forward the data on to other subscribers (the
FORWARD parameter).
#!/opt/PostgresPlus/8.4SS/bin/slonik
#file subscribeset.sk
include <preamble.sk>;
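# A sketch of the SUBSCRIBE SET command subscribing the slave (node 2) to
# replication set 1, with the master (node 1) as provider:
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);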
$ ./subscribeset.sk
The Slony daemons synchronize with each other and initialization of the slave database
begins. The tables in the slave are truncated and the data in the master node is copied to
the slave node. If data fails to replicate to the slave node, look for error messages in the
slon.log file (for Windows, use Event Viewer for applications) on the master host and
on the slave host.
Conclusion
In this Quick Tutorial you learned how to set up Slony-I replication on a Postgres Plus
database.
EnterpriseDB has the expertise and services to assist you in setting up a Slony cluster,
fully testing it, and providing management scripts. The Postgres Plus Replication Setup
Service is described on the Packaged Services page of the EnterpriseDB web site.
You should now be able to proceed confidently with a Technical Evaluation of Postgres
Plus.
The following resources should help you move on with this step: