
Commit 6dbb490

Combine freezing and pruning steps in VACUUM
Execute both freezing and pruning of tuples in the same heap_page_prune() function, now called heap_page_prune_and_freeze(), and emit a single WAL record containing all changes. That reduces the overall amount of WAL generated.

This moves the freezing logic from vacuumlazy.c to the heap_page_prune_and_freeze() function. The main difference in the coding is that in vacuumlazy.c, we looked at the tuples after the pruning had already happened, but in heap_page_prune_and_freeze() we operate on the tuples before pruning. The heap_prepare_freeze_tuple() function is now invoked after we have determined that a tuple is not going to be pruned away.

VACUUM no longer needs to loop through the items on the page after pruning. heap_page_prune_and_freeze() does all the work. It now returns the list of dead offsets, including existing LP_DEAD items, to the caller. Similarly, it's now responsible for tracking 'all_visible', 'all_frozen', and 'hastup' on the caller's behalf.

Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://www.postgresql.org/message-id/20240330055710.kqg6ii2cdojsxgje@liskov
1 parent 26d138f commit 6dbb490

File tree

7 files changed, +813 −545 lines changed


src/backend/access/heap/heapam.c

+23-44
@@ -6447,9 +6447,9 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
  * XIDs or MultiXactIds that will need to be processed by a future VACUUM.
  *
  * VACUUM caller must assemble HeapTupleFreeze freeze plan entries for every
- * tuple that we returned true for, and call heap_freeze_execute_prepared to
- * execute freezing.  Caller must initialize pagefrz fields for page as a
- * whole before first call here for each heap page.
+ * tuple that we returned true for, and then execute freezing.  Caller must
+ * initialize pagefrz fields for page as a whole before first call here for
+ * each heap page.
  *
  * VACUUM caller decides on whether or not to freeze the page as a whole.
  * We'll often prepare freeze plans for a page that caller just discards.
@@ -6765,35 +6765,19 @@ heap_execute_freeze_tuple(HeapTupleHeader tuple, HeapTupleFreeze *frz)
 }
 
 /*
- * heap_freeze_execute_prepared
- *
- * Executes freezing of one or more heap tuples on a page on behalf of caller.
- * Caller passes an array of tuple plans from heap_prepare_freeze_tuple.
- * Caller must set 'offset' in each plan for us.  Note that we destructively
- * sort caller's tuples array in-place, so caller had better be done with it.
- *
- * WAL-logs the changes so that VACUUM can advance the rel's relfrozenxid
- * later on without any risk of unsafe pg_xact lookups, even following a hard
- * crash (or when querying from a standby).  We represent freezing by setting
- * infomask bits in tuple headers, but this shouldn't be thought of as a hint.
- * See section on buffer access rules in src/backend/storage/buffer/README.
+ * Perform xmin/xmax XID status sanity checks before actually executing freeze
+ * plans.
+ *
+ * heap_prepare_freeze_tuple doesn't perform these checks directly because
+ * pg_xact lookups are relatively expensive.  They shouldn't be repeated by
+ * successive VACUUMs that each decide against freezing the same page.
  */
 void
-heap_freeze_execute_prepared(Relation rel, Buffer buffer,
-                             TransactionId snapshotConflictHorizon,
-                             HeapTupleFreeze *tuples, int ntuples)
+heap_pre_freeze_checks(Buffer buffer,
+                       HeapTupleFreeze *tuples, int ntuples)
 {
 	Page		page = BufferGetPage(buffer);
 
-	Assert(ntuples > 0);
-
-	/*
-	 * Perform xmin/xmax XID status sanity checks before critical section.
-	 *
-	 * heap_prepare_freeze_tuple doesn't perform these checks directly because
-	 * pg_xact lookups are relatively expensive.  They shouldn't be repeated
-	 * by successive VACUUMs that each decide against freezing the same page.
-	 */
 	for (int i = 0; i < ntuples; i++)
 	{
 		HeapTupleFreeze *frz = tuples + i;
@@ -6832,8 +6816,19 @@ heap_freeze_execute_prepared(Relation rel, Buffer buffer,
 								 xmax)));
 		}
 	}
+}
 
-	START_CRIT_SECTION();
+/*
+ * Helper which executes freezing of one or more heap tuples on a page on
+ * behalf of caller.  Caller passes an array of tuple plans from
+ * heap_prepare_freeze_tuple.  Caller must set 'offset' in each plan for us.
+ * Must be called in a critical section that also marks the buffer dirty and,
+ * if needed, emits WAL.
+ */
+void
+heap_freeze_prepared_tuples(Buffer buffer, HeapTupleFreeze *tuples, int ntuples)
+{
+	Page		page = BufferGetPage(buffer);
 
 	for (int i = 0; i < ntuples; i++)
 	{
@@ -6844,22 +6839,6 @@ heap_freeze_execute_prepared(Relation rel, Buffer buffer,
 		htup = (HeapTupleHeader) PageGetItem(page, itemid);
 		heap_execute_freeze_tuple(htup, frz);
 	}
-
-	MarkBufferDirty(buffer);
-
-	/* Now WAL-log freezing if necessary */
-	if (RelationNeedsWAL(rel))
-	{
-		log_heap_prune_and_freeze(rel, buffer, snapshotConflictHorizon,
-								  false,	/* no cleanup lock required */
-								  PRUNE_VACUUM_SCAN,
-								  tuples, ntuples,
-								  NULL, 0,	/* redirected */
-								  NULL, 0,	/* dead */
-								  NULL, 0);	/* unused */
-	}
-
-	END_CRIT_SECTION();
 }
 
 /*

src/backend/access/heap/heapam_handler.c

+1-1
@@ -1122,7 +1122,7 @@ heapam_scan_analyze_next_tuple(TableScanDesc scan, TransactionId OldestXmin,
 		 * We ignore unused and redirect line pointers.  DEAD line pointers
 		 * should be counted as dead, because we need vacuum to run to get rid
 		 * of them.  Note that this rule agrees with the way that
-		 * heap_page_prune() counts things.
+		 * heap_page_prune_and_freeze() counts things.
 		 */
 		if (!ItemIdIsNormal(itemid))
 		{
