
Commit 301cc01

Author: Liudmila Mantrova (committed)

DOC: pg-shardman doc improvement

1 parent d5b2146 · commit 301cc01

File tree

1 file changed (+19, -21 lines)


doc/src/sgml/pg_shardman.sgml

Lines changed: 19 additions & 21 deletions
@@ -221,7 +221,7 @@
 replication, which can cause data inconsistencies if a node fails
 permanently. A part of a distributed transaction might get lost and cause
 a non-atomic result if the coordinator has prepared the transaction
-everywhere, started commiting it, but one of the nodes failed before
+everywhere, started committing it, but one of the nodes failed before
 committing the transaction on a replica.
 </para>
 
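The passage above concerns a distributed transaction that was prepared on every node but not committed everywhere. As an illustrative sketch (not part of this patch), any prepared transactions left behind on a node can be listed through the standard pg_prepared_xacts view:

SELECT gid, prepared, database
FROM pg_prepared_xacts;   -- transactions still awaiting COMMIT PREPARED or ROLLBACK PREPARED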
@@ -973,7 +973,7 @@ manually excluded from the cluster, as follows:
 <para>
 Run the following command to to exclude the node from the cluster:
 <programlisting>
-select shardman.rm_node(${failed_node_id}, force => true);
+SELECT shardman.rm_node(${failed_node_id}, force => true);
 </programlisting>
 If redundancy level is greater than zero, <filename>pg_shardman</filename> tries to replace
 primary shards stored on the excluded node with their replicas. The most advanced replica
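For illustration, a concrete call of the command shown in this hunk; the node id 3 is hypothetical, and force => true excludes the node even though it cannot be reached:

SELECT shardman.rm_node(3, force => true);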
@@ -1073,11 +1073,11 @@ iterations are the same, the deadlock is confirmed.
 </para>
 
 <para>
-<filename>pg_shardman</filename> tries to resolve reported deadlocks
-by canceling one or more backends involved in the deadlock loop using
-the <function>pg_cancel_backend</function> function. This function tries
-to cancel the current query, without terminating the backend. The affected backend is
-randomly chosen within the deadlock loop.
+<filename>pg_shardman</filename> tries to resolve reported deadlocks by
+canceling one or more backends involved in the deadlock loop. It invokes the
+<function>pg_cancel_backend</function> function that tries to
+cancel the current query, without terminating the backend. The affected
+backend is randomly chosen within the deadlock loop.
 </para>
 
 <para>
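As a hedged sketch of the mechanism described above: pg_cancel_backend cancels the current query of the backend identified by the given pid without terminating that backend (the pid below is illustrative):

SELECT pg_cancel_backend(12345);   -- returns true if the cancel signal was sent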
@@ -1152,10 +1152,9 @@ excludes this node from the cluster, as follows:
 </term>
 <listitem>
 <para>
-When set to <literal>on</literal>, <filename>pg_shardman</filename>
+When this parameter is set to <literal>on</literal>, <filename>pg_shardman</filename>
 adds replicas to the list of <xref linkend="guc-synchronous-standby-names">,
-enabling synchronous replication. This parameter should not be changed
-if any replicas exist.
+enabling synchronous replication.
 </para>
 <para>
 Default: <literal>off</literal>
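A minimal sketch, not from the patch: once the parameter is on, the replica list that pg_shardman maintains can be inspected on a worker node through the standard GUC it populates:

SHOW synchronous_standby_names;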
@@ -1827,9 +1826,8 @@ excludes this node from the cluster, as follows:
 <para>
 Monitors the cluster state to detect distributed deadlocks and node failures.
 If a distributed deadlock is detected, <filename>pg_shardman</filename>
-tries to resolve the deadlock by canceling one or more queries on the affected backend
-using the <function>pg_cancel_backend</function> function. For details,
-see <xref linkend="shardman-detecting-deadlocks-and-failed-nodes">.
+tries to resolve the deadlock by canceling one or more queries on the affected
+backend. For details, see <xref linkend="shardman-detecting-deadlocks-and-failed-nodes">.
 This function is redirected to the shardlord if launched on a worker node.
 </para>
 <para>Arguments:
@@ -1898,7 +1896,8 @@ excludes this node from the cluster, as follows:
 Since <filename>pg_shardman</filename> does not control WAL recycling,
 <function>shardman.recover_xacts</function> uses clog to check
 the transaction status. Though unlikely, <function>shardman.recover_xacts</function>
-may fail to get the transaction status and resolve the transaction.
+may fail to get the transaction status and resolve the transaction
+and it has to be resolved manually.
 </para>
 </listitem>
 </varlistentry>
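A minimal sketch of the manual resolution mentioned above, assuming shardman.recover_xacts is called without arguments and using a placeholder gid; COMMIT PREPARED and ROLLBACK PREPARED are the standard commands for finishing a prepared transaction by hand:

SELECT shardman.recover_xacts();
-- for a transaction it could not resolve, on the node that still holds it:
COMMIT PREPARED 'prepared_xact_gid';     -- if the transaction committed on some node
-- or
ROLLBACK PREPARED 'prepared_xact_gid';   -- if it committed nowhere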
@@ -1917,17 +1916,16 @@ excludes this node from the cluster, as follows:
 Removes all publications, subscriptions, replication slots, foreign
 servers, and user mappings created on the worker node by
 <filename>pg_shardman</filename>. <productname>&project;</productname>
-forbids dropping replication slots with active connection. If
+forbids dropping replication slots with active connections. If
 <parameter>force</parameter> is <literal>true</literal>,
 <filename>pg_shardman</filename> tries to kill <acronym>WAL</acronym> senders before
-dropping the slots. This command does not affect the data
-stored on this node.
+dropping the slots, without affecting the data stored on this node. Once this transaction commits, the
+<varname>synchronous_standby_names</varname> variable is set to an empty string. It is a
+non-transactional action, so there is a very small chance it won't be
+completed.
 </para>
 <para>
-Also, immediately after transaction commit
-set <varname>synchronous_standby_names</varname> variable to an empty string. It is a
-non-transactional action and there is a very small chance it won't be
-completed. You probably want to run it before <command>DROP EXTENSION pg_shardman</command>.
+You may want to run this function before <command>DROP EXTENSION pg_shardman</command>.
 </para>
 </listitem>
 </varlistentry>
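A minimal sketch of the suggested order, assuming the function described in this entry is shardman.wipe_state (its actual name appears in the surrounding <term>, which is not part of this hunk):

SELECT shardman.wipe_state(force => true);   -- drop slots, publications, subscriptions, servers
DROP EXTENSION pg_shardman;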
