
Commit c79f6df

Do index FSM vacuuming sooner.
In btree and SP-GiST indexes, move the responsibility for calling IndexFreeSpaceMapVacuum from the vacuumcleanup phase to the bulkdelete phase, and do it if and only if we found some pages that could be put into FSM. As in commit 851a26e, the idea is to make free pages visible to FSM searchers sooner when vacuuming very large tables (large enough to need multiple bulkdelete scans).

This adds more redundant work than that commit did, since we have to scan the entire index FSM each time rather than being able to localize what needs to be updated; but it still seems worthwhile. However, we can buy something back by not touching the FSM at all when there are no pages that can be put in it. That will result in slower recovery from corrupt upper FSM pages in such a scenario, but it doesn't seem like that's a case we need to optimize for.

Hash indexes don't use FSM at all. GIN, GiST, and bloom indexes update FSM during the vacuumcleanup phase not bulkdelete, so that doing something comparable to this would be a much more invasive change, and it's not clear it's worth it. BRIN indexes do things sufficiently differently that this change doesn't apply to them, either.

Claudio Freire, reviewed by Masahiko Sawada and Jing Wang, some additional tweaks by me

Discussion: https://postgr.es/m/CAGTBQpYR0uJCNTt3M5GOzBRHo+-GccNO1nCaQ8yEJmZKSW5q1A@mail.gmail.com
1 parent 96030f9 commit c79f6df

2 files changed: +32 −9 lines changed

src/backend/access/nbtree/nbtree.c (+15, −3)
@@ -832,9 +832,6 @@ btvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
 		btvacuumscan(info, stats, NULL, NULL, 0);
 	}
 
-	/* Finally, vacuum the FSM */
-	IndexFreeSpaceMapVacuum(info->index);
-
 	/*
 	 * It's quite possible for us to be fooled by concurrent page splits into
 	 * double-counting some index tuples, so disbelieve any total that exceeds
@@ -976,6 +973,21 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 
 	MemoryContextDelete(vstate.pagedelcontext);
 
+	/*
+	 * If we found any recyclable pages (and recorded them in the FSM), then
+	 * forcibly update the upper-level FSM pages to ensure that searchers can
+	 * find them.  It's possible that the pages were also found during
+	 * previous scans and so this is a waste of time, but it's cheap enough
+	 * relative to scanning the index that it shouldn't matter much, and
+	 * making sure that free pages are available sooner not later seems
+	 * worthwhile.
+	 *
+	 * Note that if no recyclable pages exist, we don't bother vacuuming the
+	 * FSM at all.
+	 */
+	if (vstate.totFreePages > 0)
+		IndexFreeSpaceMapVacuum(rel);
+
 	/* update statistics */
 	stats->num_pages = num_pages;
 	stats->pages_free = vstate.totFreePages;

src/backend/access/spgist/spgvacuum.c (+17, −6)
@@ -845,6 +845,21 @@ spgvacuumscan(spgBulkDeleteState *bds)
 	/* Propagate local lastUsedPage cache to metablock */
 	SpGistUpdateMetaPage(index);
 
+	/*
+	 * If we found any empty pages (and recorded them in the FSM), then
+	 * forcibly update the upper-level FSM pages to ensure that searchers can
+	 * find them.  It's possible that the pages were also found during
+	 * previous scans and so this is a waste of time, but it's cheap enough
+	 * relative to scanning the index that it shouldn't matter much, and
+	 * making sure that free pages are available sooner not later seems
+	 * worthwhile.
+	 *
+	 * Note that if no empty pages exist, we don't bother vacuuming the FSM at
+	 * all.
+	 */
+	if (bds->stats->pages_deleted > 0)
+		IndexFreeSpaceMapVacuum(index);
+
 	/*
 	 * Truncate index if possible
 	 *
@@ -916,7 +931,6 @@ dummy_callback(ItemPointer itemptr, void *state)
 IndexBulkDeleteResult *
 spgvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
 {
-	Relation	index = info->index;
 	spgBulkDeleteState bds;
 
 	/* No-op in ANALYZE ONLY mode */
@@ -926,8 +940,8 @@ spgvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
 	/*
 	 * We don't need to scan the index if there was a preceding bulkdelete
 	 * pass.  Otherwise, make a pass that won't delete any live tuples, but
-	 * might still accomplish useful stuff with redirect/placeholder cleanup,
-	 * and in any case will provide stats.
+	 * might still accomplish useful stuff with redirect/placeholder cleanup
+	 * and/or FSM housekeeping, and in any case will provide stats.
 	 */
 	if (stats == NULL)
 	{
@@ -940,9 +954,6 @@ spgvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
 		spgvacuumscan(&bds);
 	}
 
-	/* Finally, vacuum the FSM */
-	IndexFreeSpaceMapVacuum(index);
-
 	/*
 	 * It's quite possible for us to be fooled by concurrent tuple moves into
 	 * double-counting some index tuples, so disbelieve any total that exceeds
