Commit c4f2154

Fix typos in doc
1 parent 568b13e commit c4f2154

1 file changed: +20 -21

doc/multimaster.xml (+20 -21)
@@ -58,7 +58,7 @@
 typically need more than five cluster nodes. Three cluster nodes are
 enough to ensure high availability in most cases.
 There is also a special 2+1 (referee) mode in which 2 nodes hold data and
-an additional one called <filename>referee</filename> only participates in voting. Compared to traditional three
+an additional one called <firstterm>referee</firstterm> only participates in voting. Compared to traditional three
 nodes setup, this is cheaper (referee resources demands are low) but availability
 is decreased. For details, see <xref linkend="setting-up-a-referee"/>.
 </para>
@@ -200,7 +200,7 @@
 <filename>multimaster</filename> uses
 <ulink url="https://postgrespro.com/docs/postgresql/current/logicaldecoding-synchronous">logical replication</ulink>
 and the two phase commit protocol with transaction outcome determined by
-<link linkend="multimaster-credits">Paxos consensus algorithm.</link>
+<link linkend="multimaster-credits">Paxos consensus algorithm</link>.
 </para>
 <para>
 When <productname>PostgreSQL</productname> loads the <filename>multimaster</filename> shared
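
For context on the protocol this hunk references: two-phase commit is ordinary PostgreSQL machinery that multimaster drives internally, with Paxos supplying the commit-or-abort decision. A minimal hand-run sketch, assuming max_prepared_transactions is set above zero; the table and transaction identifier are illustrative:

-- Minimal two-phase commit sketch; multimaster invokes this machinery
-- internally, with the commit/abort decision supplied by Paxos.
-- Requires max_prepared_transactions > 0 on the server.
BEGIN;
CREATE TABLE IF NOT EXISTS t_demo (id int);   -- illustrative table
INSERT INTO t_demo VALUES (1);
PREPARE TRANSACTION 'demo_gid';               -- phase 1: durably prepared
COMMIT PREPARED 'demo_gid';                   -- phase 2: finalize
-- ROLLBACK PREPARED 'demo_gid' would abort it instead.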
@@ -318,9 +318,9 @@
 integrity, the decision to exclude or add back node(s) must be taken
 coherently. Generations which represent a subset of
 currently supposedly live nodes serve this
-purpose. Technically, generation is a pair <filename>&lt;n, members&gt;</filename>
-where <filename>n</filename> is unique number and
-<filename>members</filename> is subset of configured nodes. A node always
+purpose. Technically, generation is a pair <literal>&lt;n, members&gt;</literal>
+where <replaceable>n</replaceable> is unique number and
+<replaceable>members</replaceable> is subset of configured nodes. A node always
 lives in some generation and switches to the one with higher number as soon
 as it learns about its existence; generation numbers act as logical
 clocks/terms/epochs here. Each transaction is stamped during commit with
@@ -331,15 +331,15 @@
 resides in generation in one of three states (can be shown with <literal>mtm.status()</literal>):
 <orderedlist>
 <listitem>
-<para><filename>ONLINE</filename>: node is member of the generation and
-making transactions normally; </para>
+<para><literal>ONLINE</literal>: node is member of the generation and
+making transactions normally;</para>
 </listitem>
 <listitem>
-<para><filename>RECOVERY</filename>: node is member of the generation, but it
-must apply in recovery mode transactions from previous generations to become <filename>ONLINE;</filename> </para>
+<para><literal>RECOVERY</literal>: node is member of the generation, but it
+must apply in recovery mode transactions from previous generations to become <literal>ONLINE</literal>;</para>
 </listitem>
 <listitem>
-<para><filename>DEAD</filename>: node will never be <filename>ONLINE</filename> in this generation;</para>
+<para><literal>DEAD</literal>: node will never be <filename>ONLINE</filename> in this generation;</para>
 </listitem>
 </orderedlist>
 
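The generation pair and the three states above are all surfaced through mtm.status(), which the hunk itself points to. A minimal sketch; the exact columns returned are version-dependent, so this selects everything rather than assuming names:

-- Show this node's current generation and state; the exact set of
-- columns returned by mtm.status() is version-dependent (assumption),
-- so select everything and look for the generation and status fields.
SELECT * FROM mtm.status();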
@@ -374,7 +374,7 @@
 <listitem>
 <para>
 The reconnected node selects a cluster node which is
-<filename>ONLINE</filename> in the highest generation and starts
+<literal>ONLINE</literal> in the highest generation and starts
 catching up with the current state of the cluster based on the
 Write-Ahead Log (WAL).
 </para>
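
A side note on the WAL-based catch-up described here: WAL positions are written as LSNs, the same notation that appears later as the second argument of mtm.join_node(). The current position on any node can be read with a stock function:

-- Current WAL write position on this node, in the same LSN notation
-- used by mtm.join_node() later in this file (PostgreSQL 10+).
SELECT pg_current_wal_lsn();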
@@ -480,7 +480,7 @@
 <para>
 Performs Paxos to resolve unfinished transactions.
 This worker is only active during recovery or when connection with other nodes was lost.
-There is a single worker per PostgreSQL instance.
+There is a single worker per <productname>PostgreSQL</productname> instance.
 </para>
 </listitem>
 </varlistentry>
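
The resolver is a regular background worker, so it should be visible in pg_stat_activity. A sketch for spotting it; the name patterns are assumptions, not the extension's documented backend_type values:

-- List candidate multimaster background workers on this instance.
-- The name patterns are assumptions; inspect backend_type values first.
SELECT pid, backend_type, state
FROM pg_stat_activity
WHERE backend_type ILIKE '%mtm%'
   OR backend_type ILIKE '%multimaster%';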
@@ -489,7 +489,7 @@
 <listitem>
 <para>
 Ballots for new generations to exclude some node(s) or add myself.
-There is a single worker per PostgreSQL instance.
+There is a single worker per <productname>PostgreSQL</productname> instance.
 </para>
 </listitem>
 </varlistentry>
@@ -745,9 +745,9 @@ SELECT * FROM mtm.nodes();
 algorithm to determine whether the cluster nodes have a quorum: a cluster
 can only continue working if the majority of its nodes are alive and can
 access each other. Majority-based approach is pointless for two nodes
-cluster: if one of them fails, another one becomes unaccessible. There is
-a special 2+1 or referee mode which trades less harware resources by
-decreasing availabilty: two nodes hold full copy of data, and separate
+cluster: if one of them fails, another one becomes inaccessible. There is
+a special 2+1 or referee mode which trades less hardware resources by
+decreasing availability: two nodes hold full copy of data, and separate
 referee node participates only in voting, acting as a tie-breaker.
 </para>
 <para>
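
Concretely, the majority rule means a three-node cluster survives one failure (majority 2 of 3) and a five-node cluster survives two, while a two-node cluster needs both nodes up, which is the gap the referee fills. A quorum-check sketch, assuming mtm.nodes() exposes a boolean connectivity column; the name connected is a guess:

-- Majority check sketch: n=3 tolerates 1 failure, n=5 tolerates 2;
-- n=2 needs both nodes, which is why the referee exists.
-- The boolean column name 'connected' is an assumption about mtm.nodes().
SELECT count(*) FILTER (WHERE connected) > count(*) / 2 AS have_quorum
FROM mtm.nodes();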
@@ -758,7 +758,7 @@ SELECT * FROM mtm.nodes();
 grant - this allows the node to get it in its turn later. While the grant is
 issued, it can't be given to another node until full generation is elected
 and excluded node recovers. This ensures data loss doesn't happen by the
-price of availabilty: in this setup two nodes (one normal and one referee)
+price of availability: in this setup two nodes (one normal and one referee)
 can be alive but cluster might be still unavailable if the referee winner
 is down, which is impossible with classic three nodes configuration.
 </para>
@@ -902,8 +902,7 @@ SELECT * FROM mtm.nodes();
 <title>Adding New Nodes to the Cluster</title>
 <para>With the <filename>multimaster</filename> extension, you can add or
 drop cluster nodes. Before adding node, stop the load and ensure (with
-<literal>mtm.status()</literal> that all nodes (except the ones to be
-dropped) are <literal>online</literal>.
+<literal>mtm.status()</literal>) that all nodes are <literal>online</literal>.
 When adding a new node, you need to load all the data to this node using
 <application>pg_basebackup</application> from any cluster node, and then start this node.
 </para>
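
The add-node procedure described in this hunk reduces to a base backup plus a join call; both commands appear verbatim in the hunk headers of this file. Condensed, with the node id and LSN copied from the document's own examples:

-- From an existing node, after pg_basebackup has seeded the new node
-- and it has been started (values copied from this document's examples):
SELECT * FROM mtm.status();              -- all current nodes should be online
SELECT mtm.join_node(4, '0/12D357F0');   -- node id and recovery LSN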
@@ -955,7 +954,7 @@ pg_basebackup -D <replaceable>datadir</replaceable> -h node1 -U mtmuser -c fast
 <listitem>
 <para>
 Configure the new node to boot with <literal>recovery_target=immediate</literal> to prevent redo
-past the point where replication will begin. Add to <literal>postgresql.conf</literal>
+past the point where replication will begin. Add to <filename>postgresql.conf</filename>:
 </para>
 <programlisting>
 restore_command = 'false'
@@ -990,7 +989,7 @@ SELECT mtm.join_node(4, '0/12D357F0');
 <title>Removing Nodes from the Cluster</title>
 <para>
 Before removing node, stop the load and ensure (with
-<literal>mtm.status()</literal> that all nodes (except the ones to be
+<literal>mtm.status()</literal>) that all nodes (except the ones to be
 dropped) are <literal>online</literal>. Shut down the nodes you are going to remove.
 To remove the node from the cluster:
 </para>
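
The removal path mirrors the join path. A minimal sketch, assuming the extension exposes a drop counterpart to mtm.join_node(); the function name and argument below are unverified assumptions:

-- mtm.drop_node() is an assumed counterpart to mtm.join_node();
-- verify the actual name and signature with \df mtm.* before use.
SELECT * FROM mtm.status();   -- confirm remaining nodes are online
SELECT mtm.drop_node(3);      -- hypothetical id of the shut-down node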
