
Commit a6fea12

Author: Amit Kapila (committed)

Comments and doc fixes for commit 40d964e.

Reported-by: Justin Pryzby
Author: Justin Pryzby, with few changes by me
Reviewed-by: Amit Kapila and Sawada Masahiko
Discussion: https://postgr.es/m/20200322021801.GB2563@telsasoft.com

1 parent 826ee1a, commit a6fea12

File tree: 5 files changed, +49 −49 lines

doc/src/sgml/ref/vacuum.sgml (+11 −11)

@@ -232,23 +232,23 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
     <term><literal>PARALLEL</literal></term>
     <listitem>
      <para>
-      Perform vacuum index and cleanup index phases of <command>VACUUM</command>
+      Perform index vacuum and index cleanup phases of <command>VACUUM</command>
       in parallel using <replaceable class="parameter">integer</replaceable>
-      background workers (for the detail of each vacuum phases, please
+      background workers (for the details of each vacuum phase, please
       refer to <xref linkend="vacuum-phases"/>).  If the
-      <literal>PARALLEL</literal> option is omitted, then
-      <command>VACUUM</command> decides the number of workers based on number
-      of indexes that support parallel vacuum operation on the relation which
-      is further limited by <xref linkend="guc-max-parallel-workers-maintenance"/>.
-      The index can participate in a parallel vacuum if and only if the size
+      <literal>PARALLEL</literal> option is omitted, then the number of workers
+      is determined based on the number of indexes that support parallel vacuum
+      operation on the relation, and is further limited by <xref
+      linkend="guc-max-parallel-workers-maintenance"/>.
+      An index can participate in parallel vacuum if and only if the size
       of the index is more than <xref linkend="guc-min-parallel-index-scan-size"/>.
       Please note that it is not guaranteed that the number of parallel workers
       specified in <replaceable class="parameter">integer</replaceable> will
       be used during execution.  It is possible for a vacuum to run with fewer
       workers than specified, or even with no workers at all.  Only one worker
       can be used per index.  So parallel workers are launched only when there
       are at least <literal>2</literal> indexes in the table.  Workers for
-      vacuum launches before starting each phase and exit at the end of
+      vacuum are launched before the start of each phase and exit at the end of
       the phase.  These behaviors might change in a future release.  This
       option can't be used with the <literal>FULL</literal> option.
      </para>

@@ -358,16 +358,16 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
    </para>

    <para>
-    The <option>PARALLEL</option> option is used only for vacuum purpose.
-    Even if this option is specified with <option>ANALYZE</option> option
+    The <option>PARALLEL</option> option is used only for vacuum purposes.
+    If this option is specified with the <option>ANALYZE</option> option,
     it does not affect <option>ANALYZE</option>.
    </para>

    <para>
     <command>VACUUM</command> causes a substantial increase in I/O traffic,
     which might cause poor performance for other active sessions.  Therefore,
     it is sometimes advisable to use the cost-based vacuum delay feature.  For
-    parallel vacuum, each worker sleeps proportional to the work done by that
+    parallel vacuum, each worker sleeps in proportion to the work done by that
     worker.  See <xref linkend="runtime-config-resource-vacuum-cost"/> for
     details.
    </para>
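The documentation change above can be exercised directly from SQL. A brief sketch (the table name is hypothetical; the `VACUUM (PARALLEL integer)` syntax is what this page documents):

```sql
-- Request up to 4 background workers for the index vacuum and index
-- cleanup phases of a hypothetical table.  The actual worker count may
-- be lower: one worker per eligible index, further capped by
-- max_parallel_maintenance_workers.
VACUUM (PARALLEL 4, VERBOSE) pgbench_accounts;

-- Omitting PARALLEL lets VACUUM choose the worker count itself;
-- per the docs above, PARALLEL cannot be combined with FULL.
VACUUM (VERBOSE) pgbench_accounts;
```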

src/backend/access/heap/vacuumlazy.c (+26 −26)

@@ -208,7 +208,7 @@ typedef struct LVShared
 	 * live tuples in the index vacuum case or the new live tuples in the
 	 * index cleanup case.
 	 *
-	 * estimated_count is true if the reltuples is an estimated value.
+	 * estimated_count is true if reltuples is an estimated value.
 	 */
 	double		reltuples;
 	bool		estimated_count;

@@ -232,8 +232,8 @@ typedef struct LVShared

 	/*
 	 * Number of active parallel workers.  This is used for computing the
-	 * minimum threshold of the vacuum cost balance for a worker to go for the
-	 * delay.
+	 * minimum threshold of the vacuum cost balance before a worker sleeps for
+	 * cost-based delay.
 	 */
 	pg_atomic_uint32 active_nworkers;

@@ -732,7 +732,7 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
 * to reclaim dead line pointers.
 *
 * If the table has at least two indexes, we execute both index vacuum
- * and index cleanup with parallel workers unless the parallel vacuum is
+ * and index cleanup with parallel workers unless parallel vacuum is
 * disabled.  In a parallel vacuum, we enter parallel mode and then
 * create both the parallel context and the DSM segment before starting
 * heap scan so that we can record dead tuples to the DSM segment.  All

@@ -809,8 +809,8 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 	vacrelstats->latestRemovedXid = InvalidTransactionId;

 	/*
-	 * Initialize the state for a parallel vacuum.  As of now, only one worker
-	 * can be used for an index, so we invoke parallelism only if there are at
+	 * Initialize state for a parallel vacuum.  As of now, only one worker can
+	 * be used for an index, so we invoke parallelism only if there are at
 	 * least two indexes on a table.
 	 */
 	if (params->nworkers >= 0 && vacrelstats->useindex && nindexes > 1)

@@ -837,7 +837,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 	}

 	/*
-	 * Allocate the space for dead tuples in case the parallel vacuum is not
+	 * Allocate the space for dead tuples in case parallel vacuum is not
 	 * initialized.
 	 */
 	if (!ParallelVacuumIsActive(lps))

@@ -2215,7 +2215,7 @@ parallel_vacuum_index(Relation *Irel, IndexBulkDeleteResult **stats,
 		shared_indstats = get_indstats(lvshared, idx);

 		/*
-		 * Skip processing indexes that doesn't participate in parallel
+		 * Skip processing indexes that don't participate in parallel
 		 * operation
 		 */
 		if (shared_indstats == NULL ||

@@ -2312,12 +2312,12 @@ vacuum_one_index(Relation indrel, IndexBulkDeleteResult **stats,

 	/*
 	 * Copy the index bulk-deletion result returned from ambulkdelete and
-	 * amvacuumcleanup to the DSM segment if it's the first time to get it
-	 * from them, because they allocate it locally and it's possible that an
-	 * index will be vacuumed by the different vacuum process at the next
-	 * time.  The copying of the result normally happens only after the first
-	 * time of index vacuuming.  From the second time, we pass the result on
-	 * the DSM segment so that they then update it directly.
+	 * amvacuumcleanup to the DSM segment if it's the first cycle because they
+	 * allocate locally and it's possible that an index will be vacuumed by a
+	 * different vacuum process the next cycle.  Copying the result normally
+	 * happens only the first time an index is vacuumed.  For any additional
+	 * vacuum pass, we directly point to the result on the DSM segment and
+	 * pass it to vacuum index APIs so that workers can update it directly.
 	 *
 	 * Since all vacuum workers write the bulk-deletion result at different
 	 * slots we can write them without locking.

@@ -2328,8 +2328,8 @@ vacuum_one_index(Relation indrel, IndexBulkDeleteResult **stats,
 		shared_indstats->updated = true;

 		/*
-		 * Now that the stats[idx] points to the DSM segment, we don't need
-		 * the locally allocated results.
+		 * Now that stats[idx] points to the DSM segment, we don't need the
+		 * locally allocated results.
 		 */
 		pfree(*stats);
 		*stats = bulkdelete_res;

@@ -2449,7 +2449,7 @@ lazy_vacuum_index(Relation indrel, IndexBulkDeleteResult **stats,
 *	lazy_cleanup_index() -- do post-vacuum cleanup for one index relation.
 *
 *		reltuples is the number of heap tuples and estimated_count is true
- *		if the reltuples is an estimated value.
+ *		if reltuples is an estimated value.
 */
static void
lazy_cleanup_index(Relation indrel,

@@ -3050,9 +3050,9 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
/*
 * Compute the number of parallel worker processes to request.  Both index
 * vacuum and index cleanup can be executed with parallel workers.  The index
- * is eligible for parallel vacuum iff it's size is greater than
+ * is eligible for parallel vacuum iff its size is greater than
 * min_parallel_index_scan_size as invoking workers for very small indexes
- * can hurt the performance.
+ * can hurt performance.
 *
 * nrequested is the number of parallel workers that user requested.  If
 * nrequested is 0, we compute the parallel degree based on nindexes, that is

@@ -3071,7 +3071,7 @@ compute_parallel_vacuum_workers(Relation *Irel, int nindexes, int nrequested,
 	int			i;

 	/*
-	 * We don't allow to perform parallel operation in standalone backend or
+	 * We don't allow performing parallel operation in standalone backend or
 	 * when parallelism is disabled.
 	 */
 	if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)

@@ -3138,13 +3138,13 @@ prepare_index_statistics(LVShared *lvshared, bool *can_parallel_vacuum,
 		if (!can_parallel_vacuum[i])
 			continue;

-		/* Set NOT NULL as this index do support parallelism */
+		/* Set NOT NULL as this index does support parallelism */
 		lvshared->bitmap[i >> 3] |= 1 << (i & 0x07);
 	}
 }

 /*
- * Update index statistics in pg_class if the statistics is accurate.
+ * Update index statistics in pg_class if the statistics are accurate.
 */
static void
update_index_statistics(Relation *Irel, IndexBulkDeleteResult **stats,

@@ -3174,7 +3174,7 @@ update_index_statistics(Relation *Irel, IndexBulkDeleteResult **stats,

/*
 * This function prepares and returns parallel vacuum state if we can launch
- * even one worker.  This function is responsible to enter parallel mode,
+ * even one worker.  This function is responsible for entering parallel mode,
 * create a parallel context, and then initialize the DSM segment.
 */
static LVParallelState *

@@ -3345,8 +3345,8 @@ begin_parallel_vacuum(Oid relid, Relation *Irel, LVRelStats *vacrelstats,
/*
 * Destroy the parallel context, and end parallel mode.
 *
- * Since writes are not allowed during the parallel mode, so we copy the
- * updated index statistics from DSM in local memory and then later use that
+ * Since writes are not allowed during parallel mode, copy the
+ * updated index statistics from DSM into local memory and then later use that
 * to update the index statistics.  One might think that we can exit from
 * parallel mode, update the index statistics and then destroy parallel
 * context, but that won't be safe (see ExitParallelMode).

@@ -3452,7 +3452,7 @@ skip_parallel_vacuum_index(Relation indrel, LVShared *lvshared)
 * Perform work within a launched parallel process.
 *
 * Since parallel vacuum workers perform only index vacuum or index cleanup,
- * we don't need to report the progress information.
+ * we don't need to report progress information.
 */
void
parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
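The worker-count rules these comment fixes describe (one worker per index, eligibility via min_parallel_index_scan_size, a further cap from max_parallel_maintenance_workers) can be sketched in standalone C. This is an illustrative model with invented names and plain int sizes, not the backend's compute_parallel_vacuum_workers():

```c
#include <stddef.h>

/*
 * Illustrative model of the worker-count decision described in the
 * vacuumlazy.c comments above.  index_sizes holds per-index sizes;
 * min_index_scan_size and max_maintenance_workers stand in for the
 * min_parallel_index_scan_size and max_parallel_maintenance_workers
 * GUCs.  This is a sketch, not the real backend function.
 */
int
compute_parallel_vacuum_workers_sketch(const int *index_sizes, int nindexes,
                                       int nrequested,
                                       int min_index_scan_size,
                                       int max_maintenance_workers)
{
    int eligible = 0;
    int workers;

    /* Parallelism disabled system-wide. */
    if (max_maintenance_workers == 0)
        return 0;

    /* An index is eligible iff its size is greater than the threshold. */
    for (int i = 0; i < nindexes; i++)
        if (index_sizes[i] > min_index_scan_size)
            eligible++;

    /* Only one worker per index, so we need at least two eligible indexes. */
    if (eligible < 2)
        return 0;

    /* nrequested == 0 means: choose based on the number of indexes. */
    workers = (nrequested > 0 && nrequested < eligible) ? nrequested : eligible;

    /* Further limited by max_parallel_maintenance_workers. */
    if (workers > max_maintenance_workers)
        workers = max_maintenance_workers;
    return workers;
}
```

With index sizes {1024, 2048, 8} against a threshold of 512, two indexes are eligible: an unspecified degree (nrequested = 0) yields 2 workers, while an explicit request of 1 yields 1.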

src/backend/access/transam/parallel.c (+1 −1)

@@ -505,7 +505,7 @@ ReinitializeParallelDSM(ParallelContext *pcxt)

/*
 * Reinitialize parallel workers for a parallel context such that we could
- * launch the different number of workers.  This is required for cases where
+ * launch a different number of workers.  This is required for cases where
 * we need to reuse the same DSM segment, but the number of workers can
 * vary from run-to-run.
 */

src/backend/commands/vacuum.c (+10 −10)

@@ -2036,23 +2036,23 @@ vacuum_delay_point(void)
/*
 * Computes the vacuum delay for parallel workers.
 *
- * The basic idea of a cost-based vacuum delay for parallel vacuum is to allow
- * each worker to sleep proportional to the work done by it.  We achieve this
+ * The basic idea of a cost-based delay for parallel vacuum is to allow each
+ * worker to sleep in proportion to the share of work it's done.  We achieve this
 * by allowing all parallel vacuum workers including the leader process to
 * have a shared view of cost related parameters (mainly VacuumCostBalance).
 * We allow each worker to update it as and when it has incurred any cost and
 * then based on that decide whether it needs to sleep.  We compute the time
 * to sleep for a worker based on the cost it has incurred
 * (VacuumCostBalanceLocal) and then reduce the VacuumSharedCostBalance by
- * that amount.  This avoids letting the workers sleep who have done less or
- * no I/O as compared to other workers and therefore can ensure that workers
- * who are doing more I/O got throttled more.
+ * that amount.  This avoids putting to sleep those workers which have done less
+ * I/O than other workers and therefore ensures that workers
+ * which are doing more I/O get throttled more.
 *
- * We allow any worker to sleep only if it has performed the I/O above a
- * certain threshold, which is calculated based on the number of active
- * workers (VacuumActiveNWorkers), and the overall cost balance is more than
- * VacuumCostLimit set by the system.  The testing reveals that we achieve
- * the required throttling if we allow a worker that has done more than 50%
+ * We allow a worker to sleep only if it has performed I/O above a certain
+ * threshold, which is calculated based on the number of active workers
+ * (VacuumActiveNWorkers), and the overall cost balance is more than
+ * VacuumCostLimit set by the system.  Testing reveals that we achieve
+ * the required throttling if we force a worker that has done more than 50%
 * of its share of work to sleep.
 */
static double
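The throttling rule the rewritten comment describes, sleep only when the shared balance exceeds VacuumCostLimit and the worker has done more than 50% of its share of work, can be modeled in a few lines of C. Parameter names echo the backend globals (VacuumSharedCostBalance, VacuumCostBalanceLocal, VacuumCostLimit, VacuumActiveNWorkers), but the function itself is an invented sketch of the decision, not the real compute_parallel_delay():

```c
#include <stdbool.h>

/*
 * Illustrative model of the sleep decision described above.  A worker
 * sleeps only if (a) the shared cost balance is over the limit and
 * (b) its own accumulated cost exceeds half of its share of the limit,
 * i.e. 0.5 * (limit / active workers).  This is a sketch, not the
 * backend implementation.
 */
bool
parallel_worker_should_sleep(int shared_cost_balance, int local_cost_balance,
                             int vacuum_cost_limit, int active_nworkers)
{
    /* Per-worker threshold: half of this worker's share of the limit. */
    double share_threshold = 0.5 * vacuum_cost_limit / active_nworkers;

    if (shared_cost_balance < vacuum_cost_limit)
        return false;           /* overall balance not over the limit yet */
    return local_cost_balance > share_threshold;
}
```

With a limit of 200 and four active workers the per-worker threshold is 25, so a worker whose local balance is 30 sleeps while one at 10 keeps working; lightly loaded workers are thus never throttled on behalf of busy ones.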

src/include/commands/vacuum.h (+1 −1)

@@ -225,7 +225,7 @@ typedef struct VacuumParams

 	/*
 	 * The number of parallel vacuum workers.  0 by default which means choose
-	 * based on the number of indexes.  -1 indicates a parallel vacuum is
+	 * based on the number of indexes.  -1 indicates parallel vacuum is
 	 * disabled.
 	 */
 	int			nworkers;
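The three-valued convention this comment fix clarifies (negative disables, zero means automatic, positive requests a degree) can be made concrete with a tiny helper; the function and its strings are invented for illustration and are not part of VacuumParams:

```c
#include <string.h>

/*
 * Illustrative decoding of the VacuumParams.nworkers convention described
 * above: -1 disables parallel vacuum, 0 chooses the degree from the number
 * of indexes, and a positive value requests that many workers (still
 * subject to the usual limits).  This helper is a sketch, not backend code.
 */
const char *
describe_vacuum_nworkers(int nworkers)
{
    if (nworkers < 0)
        return "parallel vacuum disabled";
    if (nworkers == 0)
        return "degree chosen from the number of indexes";
    return "explicitly requested degree";
}
```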
