To use `multimaster`, you need to install Postgres Pro Enterprise on all nodes of your cluster.

## Setting up a Multi-Master Cluster

You must have superuser rights to set up a multi-master cluster.
After installing Postgres Pro Enterprise on all nodes, you need to configure the cluster with `multimaster`. Suppose you are setting up a cluster of three nodes, with the `node1`, `node2`, and `node3` domain names.

To configure your cluster with `multimaster`, complete these steps on each cluster node:
1. Set up the database to be replicated with `multimaster`:

    * If you are starting from scratch, initialize a cluster, create an empty database `mydb` and a new user `myuser`, as usual:

        ```
        initdb -D ./datadir
        pg_ctl -D ./datadir -l ./pg.log start
        createuser myuser -h localhost
        createdb mydb -O myuser -h localhost
        pg_ctl -D ./datadir -l ./pg.log stop
        ```
    * If you already have a database `mydb` running on the `node1` server, initialize new nodes from the working node using `pg_basebackup`. On each cluster node you are going to add, run:

        ```
        pg_basebackup -D ./datadir -h node1 mydb
        ```

        For details, see [pg_basebackup](https://www.postgresql.org/docs/9.6/static/app-pgbasebackup.html).
2. Modify the `postgresql.conf` configuration file, as follows:

    * Change the isolation level for transactions to `repeatable read`:

        ```
        default_transaction_isolation = "repeatable read"
        ```

        `multimaster` supports only the `repeatable read` isolation level. You cannot set up `multimaster` with the default `read committed` level.
    * Set up the PostgreSQL parameters related to replication:

        ```
        wal_level = logical
        max_connections = 100
        max_prepared_transactions = 300
        max_wal_senders = 10          # at least the number of nodes
        max_replication_slots = 10    # at least the number of nodes
        ```

        You must change the replication level to `logical` because `multimaster` relies on logical replication. For a cluster of N nodes, enable at least N WAL sender processes and replication slots. Since `multimaster` implicitly adds a `PREPARE` phase to each `COMMIT` transaction, make sure to set the number of prepared transactions to at least N * `max_connections`. Otherwise, prepared transactions may be queued.
    * Make sure you have enough background workers allocated for each node:

        ```
        max_worker_processes = 250
        ```

        For example, for a three-node cluster with `max_connections` = 100, `multimaster` may need up to 206 background workers at peak times: 200 workers for connections from the neighbor nodes, two workers for walsender processes, two workers for walreceiver processes, and two workers for the arbiter sender and receiver processes. When setting this parameter, remember that other modules may also use background workers at the same time.
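The arithmetic behind these two settings can be checked with a small script. This is only a sketch; the node count and `max_connections` value are the ones used in the example above:

```shell
#!/bin/sh
# Sketch: recompute the recommended settings for a cluster of
# NODES nodes with MAX_CONNECTIONS client connections per node.
NODES=3
MAX_CONNECTIONS=100

# multimaster adds an implicit PREPARE phase to every COMMIT,
# so allow N * max_connections prepared transactions.
MAX_PREPARED=$((NODES * MAX_CONNECTIONS))

# Peak background workers: connections arriving from the other
# N-1 nodes, plus two walsender, two walreceiver, and two
# arbiter (sender/receiver) workers.
PEAK_WORKERS=$(( (NODES - 1) * MAX_CONNECTIONS + 2 + 2 + 2 ))

echo "max_prepared_transactions = $MAX_PREPARED"   # 300
echo "peak background workers   = $PEAK_WORKERS"   # 206
```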
    * Add `multimaster`-specific options:

        ```postgres
        multimaster.max_nodes = 3   # cluster size
        multimaster.node_id = 1     # the 1-based index of the node in the cluster
        # comma-separated list of connection strings to neighbor nodes
        multimaster.conn_strings = 'dbname=mydb user=myuser host=node1, dbname=mydb user=myuser host=node2, dbname=mydb user=myuser host=node3'
        ```

        > **Important:** The `node_id` variable takes natural numbers starting from 1, without any gaps in numbering. For example, for a cluster of five nodes, set node IDs to 1, 2, 3, 4, and 5. In the `conn_strings` variable, make sure to list the nodes in the order of their IDs. The `conn_strings` variable must be the same on all nodes.
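For larger clusters, the `conn_strings` value can be generated rather than typed by hand. A minimal sketch, assuming the `mydb`/`myuser` names and node host names used in this guide:

```shell
#!/bin/sh
# Sketch: build multimaster.conn_strings from an ordered list of
# node host names. The list order must match the node IDs.
DB=mydb
DBUSER=myuser
CONN_STRINGS=""
for host in node1 node2 node3; do
  entry="dbname=$DB user=$DBUSER host=$host"
  if [ -z "$CONN_STRINGS" ]; then
    CONN_STRINGS="$entry"
  else
    CONN_STRINGS="$CONN_STRINGS, $entry"
  fi
done
echo "multimaster.conn_strings = '$CONN_STRINGS'"
```

The same generated line must then be placed in `postgresql.conf` on every node, since `conn_strings` has to be identical across the cluster.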
    Depending on your network environment and usage patterns, you may want to tune other `multimaster` parameters. For details on all configuration parameters available, see [Tuning Configuration Parameters](#tuning-configuration-parameters).
3. Allow replication in `pg_hba.conf`:

    ```
    host myuser all node1 trust
    host myuser all node2 trust
    host myuser all node3 trust
    host replication all node1 trust
    host replication all node2 trust
    host replication all node3 trust
    ```
4. Start PostgreSQL:

    ```
    pg_ctl -D ./datadir -l ./pg.log start
    ```
    When PostgreSQL is started on all nodes, connect to any node and create the `multimaster` extension:

    ```
    psql -h node1
    > CREATE EXTENSION multimaster;
    ```

    The `CREATE EXTENSION` query is replicated to all the cluster nodes.
To ensure that `multimaster` is enabled, check the `mtm.get_cluster_state()` view:

```
SELECT * FROM mtm.get_cluster_state();
```
Suppose we have a working cluster of three nodes, with the `node1`, `node2`, and `node3` domain names.

* Make sure the `pg_hba.conf` file allows replication to the new node.

**See Also**

[Setting up a Multi-Master Cluster](#setting-up-a-multi-master-cluster)