
Commit 663c11a

Update the README in 'contrib/multimaster'.

1 parent 261038c commit 663c11a

2 files changed: +28 −33 lines


contrib/mmts/README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 # `mmts`
 
-An implementation of synchronous multi-master replication based on **logical decoding** and **xtm**.
+An implementation of synchronous **multi-master replication** based on **commit timestamps**.
 
 ## Usage
 
```
contrib/multimaster/README.md

Lines changed: 27 additions & 32 deletions

```diff
@@ -1,38 +1,33 @@
 # `multimaster`
 
-A synchronous multi-master replication based on **snapshot sharing**.
+An implementation of synchronous **multi-master replication** based on **snapshot sharing**.
 
-## Installing
+## Usage
 
-1. Build and install postgres from this repo on all machines in cluster.
-1. Install contrib/raftable and contrib/mmts extensions.
-1. Right now we need clean postgres installation to spin up multimaster cluster.
-1. Create required database inside postgres before enabling multimaster extension.
-1. We are requiring following postgres configuration:
-  * 'max_prepared_transactions' > 0 -- in multimaster all writing transaction along with ddl are wrapped as two-phase transaction, so this number will limit maximum number of writing transactions in this cluster node.
-  * 'synchronous_commit - off' -- right now we do not support async commit. (one can enable it, but that will not bring desired effect)
-  * 'wal_level = logical' -- multimaster built on top of logical replication so this is mandatory.
-  * 'max_wal_senders' -- this should be at least number of nodes - 1
-  * 'max_replication_slots' -- this should be at least number of nodes - 1
-  * 'max_worker_processes' -- at least 2*N + 1 + P, where N is number of nodes in cluster, P size of pool of workers(see below) (1 raftable, n-1 receiver, n-1 sender, mtm-sender, mtm-receiver, + number of pool worker).
-  * 'default_transaction_isolation = 'repeatable read'' -- multimaster isn't supporting default read commited level.
-1. Multimaster have following configuration parameters:
-  * 'multimaster.conn_strings' -- connstrings for all nodes in cluster, separated by comma.
-  * 'multimaster.node_id' -- id of current node, number starting from one.
-  * 'multimaster.workers' -- number of workers that can apply transactions from neighbouring nodes.
-  * 'multimaster.use_raftable = true' -- just set this to true. Deprecated.
-  * 'multimaster.queue_size = 52857600' -- queue size for applying transactions from neighbouring nodes.
-  * 'multimaster.ignore_tables_without_pk = 1' -- do not replicate tables without primary key
-  * 'multimaster.heartbeat_send_timeout = 250' -- heartbeat period (ms).
-  * 'multimaster.heartbeat_recv_timeout = 1000' -- disconnect node if we miss heartbeats all that time (ms).
-  * 'multimaster.twopc_min_timeout = 40000' -- rollback stalled transaction after this period (ms).
-  * 'raftable.id' -- id of current node, number starting from one.
-  * 'raftable.peers' -- id of current node, number starting from one.
-1. Allow replication in pg_hba.conf.
+1. Install `contrib/arbiter` and `contrib/multimaster` on each instance.
+1. Add these required options to the `postgresql.conf` of each instance in the cluster.
 
-## Status functions
+```sh
+multimaster.workers = 8
+multimaster.queue_size = 10485760 # 10MB
+multimaster.local_xid_reserve = 100 # number of xids reserved for local transactions
+multimaster.buffer_size = 0 # sockhub buffer size; if 0, a direct connection is used
+multimaster.arbiters = '127.0.0.1:5431,127.0.0.1:5430'
+# comma-separated host:port pairs where arbiters reside
+multimaster.conn_strings = 'replication=database dbname=postgres ...'
+# comma-separated list of connection strings
+multimaster.node_id = 1 # the 1-based index of the node in the cluster
+shared_preload_libraries = 'multimaster'
+max_connections = 200
+max_replication_slots = 10 # at least the number of nodes
+wal_level = logical # multimaster is built on top of
+# logical replication and will not work otherwise
+max_worker_processes = 100 # at least (FIXME: need an estimation here)
+```
 
-* mtm.get_nodes_state() -- show status of nodes on cluster
-* mtm.get_cluster_state() -- show whole cluster status
-* mtm.get_cluster_info() -- print some debug info
-* mtm.make_table_local(relation regclass) -- stop replication for a given table
+## Testing
+
+1. `cd contrib/multimaster`
+1. Deploy the cluster somewhere. You can use `tests/daemons.go` or one of `tests/*.sh` for that.
+1. `make -C tests`
+1. `tests/dtmbench ...`. See `tests/run.sh` for an example.
```
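The `postgresql.conf` options the commit documents can be scripted per node. Below is a minimal sketch, assuming a single node on localhost; the node id, arbiter addresses, and output file name are illustrative, and `multimaster.conn_strings` is left out since the README itself only gives an elided example:

```shell
#!/bin/sh
# Sketch: write the multimaster options for one node into a conf fragment.
# NODE_ID, the arbiter addresses, and the file name are illustrative assumptions.
set -e
NODE_ID=1
CONF="node${NODE_ID}.conf"

cat > "$CONF" <<EOF
shared_preload_libraries = 'multimaster'
wal_level = logical               # multimaster requires logical replication
max_connections = 200
max_replication_slots = 10        # at least the number of nodes
max_worker_processes = 100
multimaster.node_id = ${NODE_ID}  # 1-based index of this node in the cluster
multimaster.workers = 8
multimaster.queue_size = 10485760 # 10MB
multimaster.arbiters = '127.0.0.1:5431,127.0.0.1:5430'
EOF

echo "wrote $CONF"
```

The fragment can then be appended to each instance's `postgresql.conf` (or pulled in via `include`) before the node is started.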
