
Commit 455fa46

Update high availability documentation with comments from Markus Schiltknecht.
1 parent d2d52bb commit 455fa46

File tree

1 file changed: +49 -40 lines changed


doc/src/sgml/high-availability.sgml

+49 -40
@@ -1,4 +1,4 @@
-<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.22 2007/11/09 16:36:04 momjian Exp $ -->
+<!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.23 2007/11/10 19:14:02 momjian Exp $ -->
 
 <chapter id="high-availability">
 <title>High Availability, Load Balancing, and Replication</title>
@@ -94,7 +94,7 @@
 <para>
 Shared hardware functionality is common in network storage devices.
 Using a network file system is also possible, though care must be
-taken that the file system has full POSIX behavior (see <xref
+taken that the file system has full <acronym>POSIX</> behavior (see <xref
 linkend="creating-cluster-nfs">). One significant limitation of this
 method is that if the shared disk array fails or becomes corrupt, the
 primary and standby servers are both nonfunctional. Another issue is
@@ -116,7 +116,8 @@
 the mirroring must be done in a way that ensures the standby server
 has a consistent copy of the file system &mdash; specifically, writes
 to the standby must be done in the same order as those on the master.
-DRBD is a popular file system replication solution for Linux.
+<productname>DRBD</> is a popular file system replication solution
+for Linux.
 </para>
 
 <!--
@@ -137,7 +138,7 @@ protocol to make nodes agree on a serializable transactional order.
 
 <para>
 A warm standby server (see <xref linkend="warm-standby">) can
-be kept current by reading a stream of write-ahead log (WAL)
+be kept current by reading a stream of write-ahead log (<acronym>WAL</>)
 records. If the main server fails, the warm standby contains
 almost all of the data of the main server, and can be quickly
 made the new master database server. This is asynchronous and
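The warm-standby paragraph in the hunk above relies on continuous WAL archiving. As a minimal sketch only, assuming a shared archive directory (the path below is a placeholder, and a production setup would use a waiting restore script such as contrib/pg_standby rather than plain cp), the two ends of that WAL stream are wired up roughly like this:

    # postgresql.conf on the primary
    archive_mode = on
    archive_command = 'cp %p /mnt/archive/%f'    # %p = path of the WAL segment, %f = its file name

    # recovery.conf on the warm standby
    restore_command = 'cp /mnt/archive/%f %p'    # segments are fetched and replayed continuously

Failover then amounts to letting the standby finish recovery and start accepting connections as the new master.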
@@ -159,7 +160,7 @@ protocol to make nodes agree on a serializable transactional order.
 </para>
 
 <para>
-Slony-I is an example of this type of replication, with per-table
+<productname>Slony-I</> is an example of this type of replication, with per-table
 granularity, and support for multiple slaves. Because it
 updates the slave server asynchronously (in batches), there is
 possible data loss during fail over.
@@ -192,7 +193,8 @@ protocol to make nodes agree on a serializable transactional order.
 using two-phase commit (<xref linkend="sql-prepare-transaction"
 endterm="sql-prepare-transaction-title"> and <xref
 linkend="sql-commit-prepared" endterm="sql-commit-prepared-title">.
-Pgpool and Sequoia are an example of this type of replication.
+<productname>Pgpool</> and <productname>Sequoia</> are examples of
+this type of replication.
 </para>
 </listitem>
 </varlistentry>
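The hunk above points readers at PREPARE TRANSACTION and COMMIT PREPARED, so a bare sketch of the two-phase sequence such middleware drives on each server may help (the table name and transaction identifier are hypothetical, and max_prepared_transactions must be set above zero):

    BEGIN;
    UPDATE accounts SET balance = balance - 100.00 WHERE id = 1;  -- hypothetical table
    PREPARE TRANSACTION 'xfer_42';  -- phase one: changes are durable but not yet visible

    -- phase two, issued only after every participating server has prepared:
    COMMIT PREPARED 'xfer_42';
    -- had any server failed to prepare, the middleware would instead issue:
    -- ROLLBACK PREPARED 'xfer_42';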
@@ -244,22 +246,6 @@ protocol to make nodes agree on a serializable transactional order.
 </listitem>
 </varlistentry>
 
-<varlistentry>
-<term>Data Partitioning</term>
-<listitem>
-
-<para>
-Data partitioning splits tables into data sets. Each set can
-be modified by only one server. For example, data can be
-partitioned by offices, e.g. London and Paris, with a server
-in each office. If queries combining London and Paris data
-are necessary, an application can query both servers, or
-master/slave replication can be used to keep a read-only copy
-of the other office's data on each server.
-</para>
-</listitem>
-</varlistentry>
-
 <varlistentry>
 <term>Commercial Solutions</term>
 <listitem>
@@ -293,7 +279,6 @@ protocol to make nodes agree on a serializable transactional order.
 <entry>Statement-Based Replication Middleware</entry>
 <entry>Asynchronous Multi-Master Replication</entry>
 <entry>Synchronous Multi-Master Replication</entry>
-<entry>Data Partitioning</entry>
 </row>
 </thead>
 
@@ -308,7 +293,6 @@ protocol to make nodes agree on a serializable transactional order.
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
-<entry align="center">&bull;</entry>
 </row>
 
 <row>
@@ -320,7 +304,6 @@ protocol to make nodes agree on a serializable transactional order.
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
-<entry align="center"></entry>
 </row>
 
 <row>
@@ -332,19 +315,17 @@ protocol to make nodes agree on a serializable transactional order.
 <entry align="center"></entry>
 <entry align="center"></entry>
 <entry align="center"></entry>
-<entry align="center"></entry>
 </row>
 
 <row>
-<entry>Master server never locks others</entry>
+<entry>No inter-server locking delay</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center"></entry>
-<entry align="center">&bull;</entry>
 </row>
 
 <row>
@@ -356,7 +337,6 @@ protocol to make nodes agree on a serializable transactional order.
 <entry align="center">&bull;</entry>
 <entry align="center"></entry>
 <entry align="center">&bull;</entry>
-<entry align="center"></entry>
 </row>
 
 <row>
@@ -368,7 +348,6 @@ protocol to make nodes agree on a serializable transactional order.
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
-<entry align="center">&bull;</entry>
 </row>
 
 <row>
@@ -380,7 +359,6 @@ protocol to make nodes agree on a serializable transactional order.
 <entry align="center"></entry>
 <entry align="center">&bull;</entry>
 <entry align="center">&bull;</entry>
-<entry align="center">&bull;</entry>
 </row>
 
 <row>
@@ -392,22 +370,53 @@ protocol to make nodes agree on a serializable transactional order.
 <entry align="center"></entry>
 <entry align="center"></entry>
 <entry align="center">&bull;</entry>
-<entry align="center">&bull;</entry>
 </row>
 
 </tbody>
 </tgroup>
 </table>
 
 <para>
-Many of the above solutions allow multiple servers to handle multiple
-queries, but none allow a single query to use multiple servers to
-complete faster. Multi-server parallel query execution allows multiple
-servers to work concurrently on a single query. This is usually
-accomplished by splitting the data among servers and having each server
-execute its part of the query and return results to a central server
-where they are combined and returned to the user. Pgpool-II has this
-capability. Also, this can be implemented using the PL/Proxy toolset.
+There are a few solutions that do not fit into the above categories:
 </para>
 
+<variablelist>
+
+<varlistentry>
+<term>Data Partitioning</term>
+<listitem>
+
+<para>
+Data partitioning splits tables into data sets. Each set can
+be modified by only one server. For example, data can be
+partitioned by offices, e.g. London and Paris, with a server
+in each office. If queries combining London and Paris data
+are necessary, an application can query both servers, or
+master/slave replication can be used to keep a read-only copy
+of the other office's data on each server.
+</para>
+</listitem>
+</varlistentry>
+
+<varlistentry>
+<term>Multi-Server Parallel Query Execution</term>
+<listitem>
+
+<para>
+Many of the above solutions allow multiple servers to handle multiple
+queries, but none allow a single query to use multiple servers to
+complete faster. This allows multiple servers to work concurrently
+on a single query. This is usually accomplished by splitting the
+data among servers and having each server execute its part of the
+query and return results to a central server where they are combined
+and returned to the user. <productname>Pgpool-II</> has this
+capability. Also, this can be implemented using the
+<productname>PL/Proxy</> toolset.
+</para>
+
+</listitem>
+</varlistentry>
+
+</variablelist>
+
 </chapter>
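As an aside on the Data Partitioning entry added in the final hunk: the "query both servers" option it mentions can be sketched with contrib/dblink, run here from the London server (the connection string, table, and column list are all hypothetical):

    -- local London partition plus the Paris partition fetched on demand
    SELECT order_id, amount FROM orders
    UNION ALL
    SELECT order_id, amount
      FROM dblink('host=paris.example.com dbname=sales',
                  'SELECT order_id, amount FROM orders')
           AS paris(order_id integer, amount numeric);

With the master/slave alternative the entry also describes, the same query would instead read a local read-only copy of the Paris tables.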
