
Commit 63be3b7

Thom Brown committed:
Fix typos in docs and comments.

1 parent: 9abed7d

5 files changed (+10 -10 lines)


doc/src/sgml/high-availability.sgml (+1 -1)

@@ -756,7 +756,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r'
     has received them. If this occurs, the standby will need to be
     reinitialized from a new base backup. You can avoid this by setting
     <varname>wal_keep_segments</> to a value large enough to ensure that
-    WAL segments are not recycled too early, or by configuration a replication
+    WAL segments are not recycled too early, or by configuring a replication
     slot for the standby. If you set up a WAL archive that's accessible from
     the standby, these solutions are not required, since the standby can
     always use the archive to catch up provided it retains enough segments.
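The hunk above concerns avoiding premature WAL recycling by configuring a replication slot for the standby. As a minimal sketch (the host, user, and slot name "standby1" are assumed examples, not from the commit), a physical slot can be created on the primary with the built-in pg_create_physical_replication_slot() function and then referenced from the standby's recovery.conf:

```
# Illustrative only: create a physical replication slot on the primary.
psql -h primary.example.com -U postgres \
     -c "SELECT pg_create_physical_replication_slot('standby1');"

# On the standby, point recovery.conf at the slot:
#   primary_slot_name = 'standby1'
```

While the slot exists, the primary retains any WAL the standby has not yet consumed, so wal_keep_segments no longer has to be guessed large enough.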

doc/src/sgml/ref/pg_receivexlog.sgml (+1 -1)

@@ -235,7 +235,7 @@ PostgreSQL documentation
    When this option is used, <application>pg_receivexlog</> will report
    a flush position to the server, indicating when each segment has been
    synchronized to disk so that the server can remove that segment if it
-   is not otherwise needed. When using this paramter, it is important
+   is not otherwise needed. When using this parameter, it is important
    to make sure that <application>pg_receivexlog</> cannot become the
    synchronous standby through an incautious setting of
    <xref linkend="guc-synchronous-standby-names">; it does not flush
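The pg_receivexlog hunk above discusses the flush position reported when a replication slot is used. A hedged example invocation (host, user, directory, and slot name are all assumed placeholders):

```
# Illustrative only: stream WAL into a local directory, attached to the
# replication slot "standby1" via -S/--slot.
pg_receivexlog -h primary.example.com -U replicator \
    -D /var/lib/pg_wal_archive -S standby1
```

As the corrected paragraph warns, such a client should not be matched (even by a generic name) in synchronous_standby_names, since it reports a flush position and could otherwise be chosen as the synchronous standby.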

src/backend/replication/slot.c (+5 -5)

@@ -15,7 +15,7 @@
  * Replication slots are used to keep state about replication streams
  * originating from this cluster. Their primary purpose is to prevent the
  * premature removal of WAL or of old tuple versions in a manner that would
- * interfere with replication; they also useful for monitoring purposes.
+ * interfere with replication; they are also useful for monitoring purposes.
  * Slots need to be permanent (to allow restarts), crash-safe, and allocatable
  * on standbys (to support cascading setups). The requirement that slots be
  * usable on standbys precludes storing them in the system catalogs.

@@ -142,7 +142,7 @@ ReplicationSlotsShmemInit(void)
  * Check whether the passed slot name is valid and report errors at elevel.
  *
  * Slot names may consist out of [a-z0-9_]{1,NAMEDATALEN-1} which should allow
- * the name to be uses as a directory name on every supported OS.
+ * the name to be used as a directory name on every supported OS.
  *
  * Returns whether the directory name is valid or not if elevel < ERROR.
  */

@@ -290,7 +290,7 @@ ReplicationSlotCreate(const char *name, bool db_specific)
 }

 /*
- * Find an previously created slot and mark it as used by this backend.
+ * Find a previously created slot and mark it as used by this backend.
  */
 void
 ReplicationSlotAcquire(const char *name)

@@ -743,7 +743,7 @@ CreateSlotOnDisk(ReplicationSlot *slot)

 	/*
 	 * No need to take out the io_in_progress_lock, nobody else can see this
-	 * slot yet, so nobody else wil write. We're reusing SaveSlotToPath which
+	 * slot yet, so nobody else will write. We're reusing SaveSlotToPath which
 	 * takes out the lock, if we'd take the lock here, we'd deadlock.
 	 */

@@ -780,7 +780,7 @@ CreateSlotOnDisk(ReplicationSlot *slot)
 			tmppath, path)));

 	/*
-	 * If we'd now fail - really unlikely - we wouldn't know wether this slot
+	 * If we'd now fail - really unlikely - we wouldn't know whether this slot
 	 * would persist after an OS crash or not - so, force a restart. The
 	 * restart would try to fysnc this again till it works.
 	 */
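The slot.c comment corrected above documents the slot-name rule: names must match [a-z0-9_]{1,NAMEDATALEN-1}, so they are safe as directory names on every supported OS. A rough shell sketch of that check (NAMEDATALEN is 64 by default, hence the 63-character cap; this mirrors the documented pattern, not the actual C implementation):

```shell
# Validate a candidate replication-slot name against the documented
# pattern [a-z0-9_]{1,63} (NAMEDATALEN - 1, with the default NAMEDATALEN
# of 64). Returns success (0) for valid names, failure otherwise.
valid_slot_name() {
    printf '%s' "$1" | grep -Eq '^[a-z0-9_]{1,63}$'
}

valid_slot_name "standby_1" && echo "standby_1: ok"
valid_slot_name "My-Slot"   || echo "My-Slot: rejected"
```

Restricting names to lowercase alphanumerics and underscore sidesteps case-insensitive filesystems and shell-metacharacter problems when the name becomes an on-disk directory.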

src/backend/replication/walsender.c (+2 -2)

@@ -957,7 +957,7 @@ PhysicalConfirmReceivedLocation(XLogRecPtr lsn)
 	}

 	/*
-	 * One could argue that the slot should saved to disk now, but that'd be
+	 * One could argue that the slot should be saved to disk now, but that'd be
 	 * energy wasted - the worst lost information can do here is give us wrong
 	 * information in a statistics view - we'll just potentially be more
 	 * conservative in removing files.

@@ -1032,7 +1032,7 @@ PhysicalReplicationSlotNewXmin(TransactionId feedbackXmin)
 		SpinLockAcquire(&slot->mutex);
 		MyPgXact->xmin = InvalidTransactionId;
 		/*
-		 * For physical replication we don't need the the interlock provided
+		 * For physical replication we don't need the interlock provided
 		 * by xmin and effective_xmin since the consequences of a missed increase
 		 * are limited to query cancellations, so set both at once.
 		 */

src/bin/pg_basebackup/receivelog.c (+1 -1)

@@ -535,7 +535,7 @@ ReceiveXlogStream(PGconn *conn, XLogRecPtr startpos, uint32 timeline,
 	 * possibly re-request, and remove older WAL safely.
 	 *
 	 * We only report it when a slot has explicitly been used, because
-	 * reporting the flush position makes one elegible as a synchronous
+	 * reporting the flush position makes one eligible as a synchronous
 	 * replica. People shouldn't include generic names in
 	 * synchronous_standby_names, but we've protected them against it so
 	 * far, so let's continue to do so in the situations when possible.
