- <!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.22 2007/11/09 16:36:04 momjian Exp $ -->
+ <!-- $PostgreSQL: pgsql/doc/src/sgml/high-availability.sgml,v 1.23 2007/11/10 19:14:02 momjian Exp $ -->

<chapter id="high-availability">
<title>High Availability, Load Balancing, and Replication</title>

<para>
Shared hardware functionality is common in network storage devices.
Using a network file system is also possible, though care must be
- taken that the file system has full POSIX behavior (see <xref
+ taken that the file system has full <acronym>POSIX</> behavior (see <xref
linkend="creating-cluster-nfs">). One significant limitation of this
method is that if the shared disk array fails or becomes corrupt, the
primary and standby servers are both nonfunctional. Another issue is

the mirroring must be done in a way that ensures the standby server
has a consistent copy of the file system — specifically, writes
to the standby must be done in the same order as those on the master.
- DRBD is a popular file system replication solution for Linux.
+ <productname>DRBD</> is a popular file system replication solution
+ for Linux.
</para>

<!--
@@ -137,7 +138,7 @@ protocol to make nodes agree on a serializable transactional order.
<para>
A warm standby server (see <xref linkend="warm-standby">) can
- be kept current by reading a stream of write-ahead log (WAL)
+ be kept current by reading a stream of write-ahead log (<acronym>WAL</>)
records. If the main server fails, the warm standby contains
almost all of the data of the main server, and can be quickly
made the new master database server. This is asynchronous and
@@ -159,7 +160,7 @@ protocol to make nodes agree on a serializable transactional order.
</para>

<para>
- Slony-I is an example of this type of replication, with per-table
+ <productname>Slony-I</> is an example of this type of replication, with per-table
granularity, and support for multiple slaves. Because it
updates the slave server asynchronously (in batches), there is
possible data loss during fail over.
@@ -192,7 +193,8 @@ protocol to make nodes agree on a serializable transactional order.
using two-phase commit (<xref linkend="sql-prepare-transaction"
endterm="sql-prepare-transaction-title"> and <xref
linkend="sql-commit-prepared" endterm="sql-commit-prepared-title">.
- Pgpool and Sequoia are an example of this type of replication.
+ <productname>Pgpool</> and <productname>Sequoia</> are examples of
+ this type of replication.
</para>
</listitem>
</varlistentry>
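The two-phase commit flow referenced in this hunk (every node must succeed at the equivalent of <command>PREPARE TRANSACTION</> before any node runs <command>COMMIT PREPARED</>) can be sketched abstractly. This is a hypothetical illustration of the protocol, not PostgreSQL, Pgpool, or Sequoia code; the `Participant` class and its states are invented for the example:

```python
# Hypothetical sketch of two-phase commit as used by synchronous
# multi-master middleware: every participant must prepare successfully
# before any participant is told to commit.

class Participant:
    def __init__(self, name, fail_on_prepare=False):
        self.name = name
        self.fail_on_prepare = fail_on_prepare
        self.state = "idle"

    def prepare(self):
        # Analogous to PREPARE TRANSACTION: make the transaction
        # durable on this node without making it visible yet.
        if self.fail_on_prepare:
            self.state = "aborted"
            return False
        self.state = "prepared"
        return True

    def commit(self):
        # Analogous to COMMIT PREPARED.
        self.state = "committed"

    def rollback(self):
        # Analogous to ROLLBACK PREPARED.
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1: ask every node to prepare.
    if all(p.prepare() for p in participants):
        # Phase 2a: all nodes prepared, so commit is safe everywhere.
        for p in participants:
            p.commit()
        return True
    # Phase 2b: at least one node failed; roll back any node that
    # had already prepared, so no node commits.
    for p in participants:
        if p.state == "prepared":
            p.rollback()
    return False
```

The key property is that a node never commits unless every node has first promised (via prepare) that it can commit, which is why each server either gets the transaction or none do.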
@@ -244,22 +246,6 @@ protocol to make nodes agree on a serializable transactional order.
</listitem>
</varlistentry>

- <varlistentry>
- <term>Data Partitioning</term>
- <listitem>
-
- <para>
- Data partitioning splits tables into data sets. Each set can
- be modified by only one server. For example, data can be
- partitioned by offices, e.g. London and Paris, with a server
- in each office. If queries combining London and Paris data
- are necessary, an application can query both servers, or
- master/slave replication can be used to keep a read-only copy
- of the other office's data on each server.
- </para>
- </listitem>
- </varlistentry>
-
<varlistentry>
<term>Commercial Solutions</term>
<listitem>
@@ -293,7 +279,6 @@ protocol to make nodes agree on a serializable transactional order.
<entry>Statement-Based Replication Middleware</entry>
<entry>Asynchronous Multi-Master Replication</entry>
<entry>Synchronous Multi-Master Replication</entry>
- <entry>Data Partitioning</entry>
</row>
</thead>
@@ -308,7 +293,6 @@ protocol to make nodes agree on a serializable transactional order.
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
- <entry align="center">•</entry>
</row>

<row>
@@ -320,7 +304,6 @@ protocol to make nodes agree on a serializable transactional order.
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
- <entry align="center"></entry>
</row>

<row>
@@ -332,19 +315,17 @@ protocol to make nodes agree on a serializable transactional order.
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center"></entry>
- <entry align="center"></entry>
</row>

<row>
- <entry>Master server never locks others</entry>
+ <entry>No inter-server locking delay</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center"></entry>
- <entry align="center">•</entry>
</row>

<row>
@@ -356,7 +337,6 @@ protocol to make nodes agree on a serializable transactional order.
<entry align="center">•</entry>
<entry align="center"></entry>
<entry align="center">•</entry>
- <entry align="center"></entry>
</row>

<row>
@@ -368,7 +348,6 @@ protocol to make nodes agree on a serializable transactional order.
<entry align="center">•</entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
- <entry align="center">•</entry>
</row>

<row>
@@ -380,7 +359,6 @@ protocol to make nodes agree on a serializable transactional order.
<entry align="center"></entry>
<entry align="center">•</entry>
<entry align="center">•</entry>
- <entry align="center">•</entry>
</row>

<row>
@@ -392,22 +370,53 @@ protocol to make nodes agree on a serializable transactional order.
<entry align="center"></entry>
<entry align="center"></entry>
<entry align="center">•</entry>
- <entry align="center">•</entry>
</row>

</tbody>
</tgroup>
</table>

<para>
- Many of the above solutions allow multiple servers to handle multiple
- queries, but none allow a single query to use multiple servers to
- complete faster. Multi-server parallel query execution allows multiple
- servers to work concurrently on a single query. This is usually
- accomplished by splitting the data among servers and having each server
- execute its part of the query and return results to a central server
- where they are combined and returned to the user. Pgpool-II has this
- capability. Also, this can be implemented using the PL/Proxy toolset.
+ There are a few solutions that do not fit into the above categories:
</para>

+ <variablelist>
+
+ <varlistentry>
+ <term>Data Partitioning</term>
+ <listitem>
+
+ <para>
+ Data partitioning splits tables into data sets. Each set can
+ be modified by only one server. For example, data can be
+ partitioned by offices, e.g. London and Paris, with a server
+ in each office. If queries combining London and Paris data
+ are necessary, an application can query both servers, or
+ master/slave replication can be used to keep a read-only copy
+ of the other office's data on each server.
+ </para>
+ </listitem>
+ </varlistentry>
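The partitioning scheme this moved entry describes amounts to an application-level routing table: each office's data is writable on exactly one server, and cross-office queries must contact every partition. A minimal sketch of that routing, with host names and the `PARTITION_MAP` structure invented purely for illustration:

```python
# Hypothetical routing for office-based data partitioning: each
# partition key maps to the single server allowed to modify that data.
PARTITION_MAP = {
    "london": "db-london.example.com",
    "paris": "db-paris.example.com",
}

def server_for(office):
    # All writes (and local reads) for an office go to its own server.
    return PARTITION_MAP[office]

def servers_for_combined_query():
    # A query combining London and Paris data must contact every
    # partition -- or read a replicated copy maintained by
    # master/slave replication instead.
    return sorted(PARTITION_MAP.values())
```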
+
+ <varlistentry>
+ <term>Multi-Server Parallel Query Execution</term>
+ <listitem>
+
+ <para>
+ Many of the above solutions allow multiple servers to handle multiple
+ queries, but none allow a single query to use multiple servers to
+ complete faster. This allows multiple servers to work concurrently
+ on a single query. This is usually accomplished by splitting the
+ data among servers and having each server execute its part of the
+ query and return results to a central server where they are combined
+ and returned to the user. <productname>Pgpool-II</> has this
+ capability. Also, this can be implemented using the
+ <productname>PL/Proxy</> toolset.
+ </para>
+
+ </listitem>
+ </varlistentry>
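The split-execute-combine pattern this new entry describes can be sketched with local threads standing in for remote servers. This is a hedged illustration of the general scatter/gather idea only; `parallel_sum` and `run_fragment` are made-up names, not Pgpool-II or PL/Proxy APIs:

```python
# Hypothetical scatter/gather sketch of multi-server parallel query
# execution: split the data, let each "server" compute its part
# concurrently, then combine the partial results centrally.
from concurrent.futures import ThreadPoolExecutor

def run_fragment(rows):
    # Each server executes its part of the query; here the query is
    # a simple SUM over that server's slice of the data.
    return sum(rows)

def parallel_sum(rows, n_servers=3):
    # The central server splits the data among the servers...
    slices = [rows[i::n_servers] for i in range(n_servers)]
    with ThreadPoolExecutor(max_workers=n_servers) as pool:
        partials = pool.map(run_fragment, slices)
    # ...then combines the partial results and returns them to the user.
    return sum(partials)
```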
+
+ </variablelist>
+
</chapter>