@@ -58,7 +58,7 @@
typically need more than five cluster nodes. Three cluster nodes are
enough to ensure high availability in most cases.
There is also a special 2+1 (referee) mode in which two nodes hold data and
- an additional one called <filename>referee</filename> only participates in voting. Compared to traditional three
+ an additional one called <firstterm>referee</firstterm> only participates in voting. Compared to a traditional three
node setup, this is cheaper (the referee's resource demands are low) but availability
is decreased. For details, see <xref linkend="setting-up-a-referee"/>.
</para>
@@ -200,7 +200,7 @@
<filename>multimaster</filename> uses
<ulink url="https://postgrespro.com/docs/postgresql/current/logicaldecoding-synchronous">logical replication</ulink>
and the two-phase commit protocol, with the transaction outcome determined by the
- <link linkend="multimaster-credits">Paxos consensus algorithm.</link>
+ <link linkend="multimaster-credits">Paxos consensus algorithm</link>.
</para>
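<para>
In practice, this means that a commit returns to the client only after the
transaction outcome has been agreed on by a quorum of nodes. A minimal sketch
of the visible behavior (the table <literal>t</literal> is a hypothetical
example, not part of the extension):
</para>
<programlisting>
-- On node1: COMMIT completes only once a quorum of nodes has agreed
-- on the transaction outcome (two-phase commit plus Paxos).
CREATE TABLE t (id int PRIMARY KEY);
INSERT INTO t VALUES (1);

-- On node2: the committed row is already visible.
SELECT id FROM t;
</programlisting>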
<para>
When <productname>PostgreSQL</productname> loads the <filename>multimaster</filename> shared
@@ -318,9 +318,9 @@
integrity, the decision to exclude or add back node(s) must be taken
coherently. Generations, which represent a subset of the
nodes currently believed to be live, serve this
- purpose. Technically, generation is a pair <filename>&lt;n, members&gt;</filename>
- where <filename>n</filename> is unique number and
- <filename>members</filename> is subset of configured nodes. A node always
+ purpose. Technically, a generation is a pair <literal>&lt;n, members&gt;</literal>
+ where <replaceable>n</replaceable> is a unique number and
+ <replaceable>members</replaceable> is a subset of the configured nodes. A node always
lives in some generation and switches to the one with a higher number as soon
as it learns of its existence; generation numbers act as logical
clocks/terms/epochs here. Each transaction is stamped during commit with
@@ -331,15 +331,15 @@
resides in a generation in one of three states (can be shown with <literal>mtm.status()</literal>):
<orderedlist>
<listitem>
- <para><filename>ONLINE</filename>: node is member of the generation and
- making transactions normally; </para>
+ <para><literal>ONLINE</literal>: the node is a member of the generation and
+ processes transactions normally;</para>
</listitem>
<listitem>
- <para><filename>RECOVERY</filename>: node is member of the generation, but it
- must apply in recovery mode transactions from previous generations to become <filename>ONLINE;</filename> </para>
+ <para><literal>RECOVERY</literal>: the node is a member of the generation, but it
+ must apply, in recovery mode, transactions from previous generations before it can become <literal>ONLINE</literal>;</para>
</listitem>
<listitem>
- <para><filename>DEAD</filename>: node will never be <filename>ONLINE</filename> in this generation;</para>
+ <para><literal>DEAD</literal>: the node will never be <literal>ONLINE</literal> in this generation.</para>
</listitem>
</orderedlist>
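<para>
For example, if node 3 of a three-node cluster is suspected to be dead, the
surviving nodes can switch to a generation such as <literal>&lt;7, {1, 2}&gt;</literal>
(the number 7 is arbitrary here; it only has to be higher than that of the previous
generation). The current generation and the node's state within it can then be
inspected as shown below (the exact set of output columns depends on the
<filename>multimaster</filename> version):
</para>
<programlisting>
SELECT * FROM mtm.status();
</programlisting>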
@@ -374,7 +374,7 @@
<listitem>
<para>
The reconnected node selects a cluster node which is
- <filename>ONLINE</filename> in the highest generation and starts
+ <literal>ONLINE</literal> in the highest generation and starts
catching up with the current state of the cluster based on the
Write-Ahead Log (WAL).
</para>
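<para>
To get a rough idea of how far a node lags while catching up, you can inspect
the walsender statistics on the donor node. This is only an illustration; the
catch-up itself is driven by logical replication:
</para>
<programlisting>
-- On the donor node: WAL positions sent to and replayed by each peer.
SELECT application_name, sent_lsn, replay_lsn
FROM pg_stat_replication;
</programlisting>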
@@ -480,7 +480,7 @@
<para>
Performs Paxos to resolve unfinished transactions.
This worker is only active during recovery or when the connection to other nodes has been lost.
- There is a single worker per PostgreSQL instance.
+ There is a single worker per <productname>PostgreSQL</productname> instance.
</para>
</listitem>
</varlistentry>
@@ -489,7 +489,7 @@
<listitem>
<para>
Initiates voting for new generations to exclude unresponsive node(s) or to add this node back.
- There is a single worker per PostgreSQL instance.
+ There is a single worker per <productname>PostgreSQL</productname> instance.
</para>
</listitem>
</varlistentry>
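<para>
These workers can be observed like any other <productname>PostgreSQL</productname>
background worker, for example with the query below (worker naming varies between
versions):
</para>
<programlisting>
-- List the non-client processes of this instance.
SELECT pid, backend_type, application_name
FROM pg_stat_activity
WHERE backend_type &lt;&gt; 'client backend';
</programlisting>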
@@ -745,9 +745,9 @@ SELECT * FROM mtm.nodes();
algorithm to determine whether the cluster nodes have a quorum: a cluster
can only continue working if the majority of its nodes are alive and can
access each other. The majority-based approach is pointless for a two-node
- cluster: if one of them fails, another one becomes unaccessible. There is
- a special 2+1 or referee mode which trades less harware resources by
- decreasing availabilty: two nodes hold full copy of data, and separate
+ cluster: if one of the nodes fails, the other one becomes inaccessible. There is
+ a special 2+1, or referee, mode which saves hardware resources at the cost of
+ decreased availability: two nodes hold a full copy of the data, and a separate
referee node participates only in voting, acting as a tie-breaker.
</para>
<para>
@@ -758,7 +758,7 @@ SELECT * FROM mtm.nodes();
grant - this allows the node to get it in its turn later. While the grant is
issued, it cannot be given to another node until a full generation is elected
and the excluded node recovers. This ensures that data loss cannot happen, at the
- price of availabilty: in this setup two nodes (one normal and one referee)
+ price of availability: in this setup, two nodes (one normal and one referee)
can be alive and yet the cluster may still be unavailable if the referee grant winner
is down, which is impossible with the classic three-node configuration.
</para>
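<para>
Whether the cluster currently has a quorum can be checked from any live node by
inspecting the node list, for example (the exact column set depends on the
<filename>multimaster</filename> version):
</para>
<programlisting>
SELECT * FROM mtm.nodes();
</programlisting>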
@@ -902,8 +902,7 @@ SELECT * FROM mtm.nodes();
<title>Adding New Nodes to the Cluster</title>
<para>With the <filename>multimaster</filename> extension, you can add or
drop cluster nodes. Before adding a node, stop the load and ensure (with
- <literal>mtm.status()</literal> that all nodes (except the ones to be
- dropped) are <literal>online</literal>.
+ <literal>mtm.status()</literal>) that all nodes are <literal>online</literal>.
When adding a new node, you need to load all the data onto this node using
<application>pg_basebackup</application> from any cluster node, and then start this node.
</para>
@@ -955,7 +954,7 @@ pg_basebackup -D <replaceable>datadir</replaceable> -h node1 -U mtmuser -c fast
<listitem>
<para>
Configure the new node to boot with <literal>recovery_target=immediate</literal> to prevent redo
- past the point where replication will begin. Add to <literal>postgresql.conf</literal>
+ past the point where replication will begin. Add to <filename>postgresql.conf</filename>:
</para>
<programlisting>
restore_command = 'false'
@@ -990,7 +989,7 @@ SELECT mtm.join_node(4, '0/12D357F0');
<title>Removing Nodes from the Cluster</title>
<para>
Before removing a node, stop the load and ensure (with
- <literal>mtm.status()</literal> that all nodes (except the ones to be
+ <literal>mtm.status()</literal>) that all nodes (except the ones to be
dropped) are <literal>online</literal>. Shut down the nodes you are going to remove.
To remove the node from the cluster:
</para>
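<para>
A minimal sketch of the removal step itself, assuming a
<literal>mtm.drop_node()</literal> function that takes the node ID; refer to the
procedure that follows in the full document for the exact call in your version:
</para>
<programlisting>
-- Hypothetical: drop node 4 from the cluster metadata.
SELECT mtm.drop_node(4);
</programlisting>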