@@ -968,7 +968,7 @@ multimaster.conn_strings = 'dbname=mydb user=myuser host=node1,
</listitem>
<listitem>
<para>
- Start <productname>PostgreSQL </productname> on the new node:
+ Start <productname>&project;</productname> on the new node:
</para>
<programlisting>
pg_ctl -D <replaceable>datadir</replaceable> -l <replaceable>pg.log</replaceable> start
@@ -1017,10 +1017,16 @@ SELECT mtm.stop_node(3);
</programlisting>
<para>
This command excludes node 3 from the cluster and stops replication to
- this node. While the WAL lag between the node and the current cluster state
+ this node.
+ </para>
+ <para>While the WAL lag between the node and the current cluster state
is less than the <varname>multimaster.max_recovery_lag</varname> value,
- you can restore the node using the <function>mtm.recover_node</function> function.
- For details, see <xref linkend="multimaster-restoring-a-node-manually">.
+ you can restore the node by running the following command:
+ <programlisting>
+ SELECT mtm.recover_node(3);
+ </programlisting>
+ Otherwise, follow the procedure described in
+ <xref linkend="multimaster-restoring-a-node-manually">.
</para>
<note>
<para>
@@ -1031,21 +1037,40 @@ SELECT mtm.stop_node(3);
</para>
</note>
<para>
- To permanently drop the node from the cluster, run the
- <literal>mtm.stop_node()</literal> function with the <literal>drop_slot</literal> parameter
- set to <literal>true</literal>:
+ To permanently drop the node from the cluster:
+ </para>
+ <orderedlist>
+ <listitem>
+ <para>Run the <literal>mtm.stop_node()</literal> function with
+ the <literal>drop_slot</literal> parameter set to <literal>true</literal>:
</para>
<programlisting>
SELECT mtm.stop_node(3, true);
</programlisting>
- <para>
+ <para>
This disables replication slots for node 3 on all cluster nodes and stops replication to
- this node. If you would like to return the node to the cluster, you will have to add it
- as a new node. For details, see <xref linkend="multimaster-adding-new-nodes-to-the-cluster">.
+ this node.
+ </para>
+ </listitem>
+ <listitem>
+ <para>Adjust the <xref linkend="multimaster-node-id"> and
+ <xref linkend="multimaster-conn-strings"> settings in
+ <filename>postgresql.conf</filename> on the remaining
+ cluster nodes to reflect the new state of the cluster.
+ </para>
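+ <para>For example, after node 3 is dropped from a three-node
+ cluster, the remaining nodes might keep their
+ <varname>multimaster.node_id</varname> values and list only each
+ other in <varname>multimaster.conn_strings</varname>. The host
+ names and connection strings below are illustrative only:
+ </para>
+ <programlisting>
+ multimaster.node_id = 1    # 2 on the second node
+ multimaster.conn_strings = 'dbname=mydb user=myuser host=node1,dbname=mydb user=myuser host=node2'
+ </programlisting>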
+ </listitem>
+ <listitem>
+ <para>Edit the <filename>pg_hba.conf</filename> file on the remaining cluster
+ nodes to disallow replication connections from the removed node, if required.
+ </para>
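+ <para>A minimal sketch of such rules, assuming the removed node
+ connects from 10.0.0.3 as <literal>myuser</literal> (adjust both
+ to your setup). Place them before any broader rules that would
+ otherwise match, since the first matching line wins:
+ </para>
+ <programlisting>
+ host    mydb           myuser    10.0.0.3/32    reject
+ host    replication    myuser    10.0.0.3/32    reject
+ </programlisting>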
+ </listitem>
+ </orderedlist>
+ <para>If you would like to return the node to the cluster later, you will have to add it
+ as a new node, as explained in <xref linkend="multimaster-adding-new-nodes-to-the-cluster">.
</para>
</sect3>
<sect3 id="multimaster-restoring-a-node-manually">
- <title>Restoring a Cluster Node</title>
+ <title>Restoring a Cluster Node Manually</title>
<para>
The <filename>multimaster</filename> extension can <link linkend="multimaster-failure-detection-and-recovery">automatically restore</link> a failed node if the WAL is available for the time when the node was disconnected from the cluster. However, if the data updates on the alive nodes exceed the allowed WAL size specified in the <literal>multimaster.max_recovery_lag</literal> variable, automatic recovery is impossible. In this case, you can manually restore the failed node.
</para>
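+ <para>For instance, if you expect a node to stay offline for a
+ long time, you can raise this threshold in advance in
+ <filename>postgresql.conf</filename>. The value below is purely
+ illustrative; check the description of
+ <varname>multimaster.max_recovery_lag</varname> for the units it
+ accepts:
+ </para>
+ <programlisting>
+ multimaster.max_recovery_lag = 1048576
+ </programlisting>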
@@ -1077,7 +1102,19 @@ pg_basebackup -D <replaceable>datadir</replaceable> -h node1 -x
</listitem>
<listitem>
<para>
- Start <productname>PostgreSQL</productname> on the restored node:
+ On the restored node, update the <varname>multimaster.node_id</varname>
+ setting to the value this node used to have before the failure.
+ </para>
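+ <para>For example, if the failed node was node 3, set the
+ following in <filename>postgresql.conf</filename> on the restored
+ node:
+ </para>
+ <programlisting>
+ multimaster.node_id = 3
+ </programlisting>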
+ </listitem>
+ <listitem>
+ <para>
+ Make sure replication is enabled between the restored node
+ and the rest of the cluster.
+ </para>
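+ <para>A hypothetical <filename>pg_hba.conf</filename> entry on the
+ other cluster nodes that lets the restored node at 10.0.0.3
+ reconnect; the address, database, user, and authentication method
+ are placeholders:
+ </para>
+ <programlisting>
+ host    mydb    myuser    10.0.0.3/32    trust
+ </programlisting>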
+ </listitem>
+ <listitem>
+ <para>
+ Start <productname>&project;</productname> on the restored node:
</para>
<programlisting>
pg_ctl -D <replaceable>datadir</replaceable> -l <replaceable>pg.log</replaceable> start