
Commit ae7291a

Standardize ItemIdData terminology.
The term "item pointer" should not be used to refer to ItemIdData variables, since that is needlessly ambiguous. Only ItemPointerData/ItemPointer variables should be called item pointers. To fix, establish the convention that ItemIdData variables should always be referred to either as "item identifiers" or "line pointers". The term "item identifier" already predominates in docs and translatable messages, and so should be the preferred alternative there. Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com
1 parent 08ca9d7 commit ae7291a

File tree

14 files changed: +52 -73 lines changed


contrib/amcheck/verify_nbtree.c  (+2 -2)

@@ -2164,7 +2164,7 @@ invariant_l_offset(BtreeCheckState *state, BTScanInsert key,
  * Does the invariant hold that the key is less than or equal to a given upper
  * bound offset item?
  *
- * Caller should have verified that upperbound's item pointer is consistent
+ * Caller should have verified that upperbound's line pointer is consistent
  * using PageGetItemIdCareful() call.
  *
  * If this function returns false, convention is that caller throws error due
@@ -2187,7 +2187,7 @@ invariant_leq_offset(BtreeCheckState *state, BTScanInsert key,
  * Does the invariant hold that the key is strictly greater than a given lower
  * bound offset item?
  *
- * Caller should have verified that lowerbound's item pointer is consistent
+ * Caller should have verified that lowerbound's line pointer is consistent
  * using PageGetItemIdCareful() call.
  *
  * If this function returns false, convention is that caller throws error due

src/backend/access/heap/README.HOT  (+5 -5)

@@ -149,8 +149,8 @@ the descendant heap-only tuple. It is conceivable that someone prunes
 the heap-only tuple before that, and even conceivable that the line pointer
 is re-used for another purpose. Therefore, when following a HOT chain,
 it is always necessary to be prepared for the possibility that the
-linked-to item pointer is unused, dead, or redirected; and if it is a
-normal item pointer, we still have to check that XMIN of the tuple matches
+linked-to line pointer is unused, dead, or redirected; and if it is a
+normal line pointer, we still have to check that XMIN of the tuple matches
 the XMAX of the tuple we left. Otherwise we should assume that we have
 come to the end of the HOT chain. Note that this sort of XMIN/XMAX
 matching is required when following ordinary update chains anyway.
@@ -171,14 +171,14 @@ bit: there can be at most one visible tuple in the chain, so we can stop
 when we find it. This rule does not work for non-MVCC snapshots, though.)
 
 Sequential scans do not need to pay attention to the HOT links because
-they scan every item pointer on the page anyway. The same goes for a
+they scan every line pointer on the page anyway. The same goes for a
 bitmap heap scan with a lossy bitmap.
 
 
 Pruning
 -------
 
-HOT pruning means updating item pointers so that HOT chains are
+HOT pruning means updating line pointers so that HOT chains are
 reduced in length, by collapsing out line pointers for intermediate dead
 tuples. Although this makes those line pointers available for re-use,
 it does not immediately make the space occupied by their tuples available.
@@ -271,7 +271,7 @@ physical tuple by eliminating an intermediate heap-only tuple or
 replacing a physical root tuple by a redirect pointer, a decrement in
 the table's number of dead tuples is reported to pgstats, which may
 postpone autovacuuming. Note that we do not count replacing a root tuple
-by a DEAD item pointer as decrementing n_dead_tuples; we still want
+by a DEAD line pointer as decrementing n_dead_tuples; we still want
 autovacuum to run to clean up the index entries and DEAD item.
 
 This area probably needs further work ...
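
The README paragraph about following a HOT chain boils down to two checks: the linked-to line pointer must still be normal, and the tuple's XMIN must match the XMAX we followed. Below is a minimal sketch of those checks using the standard page and itemid macros; the function and variable names are invented for illustration, and this is not the actual heap_hot_search_buffer() code (redirect handling, locking, and visibility tests are omitted).

#include "postgres.h"
#include "access/htup_details.h"
#include "access/transam.h"
#include "storage/bufpage.h"
#include "storage/itemid.h"

/*
 * Hypothetical helper: decide whether the line pointer at 'offnum' still
 * continues a HOT chain whose previous tuple had XMAX 'priorXmax'.
 */
static bool
hot_chain_continues(Page page, OffsetNumber offnum, TransactionId priorXmax,
                    HeapTupleHeader *tuple)
{
    ItemId      lp = PageGetItemId(page, offnum);

    /* the linked-to line pointer may have been pruned or recycled */
    if (!ItemIdIsNormal(lp))
        return false;           /* unused, dead, or redirected: chain ends */

    *tuple = (HeapTupleHeader) PageGetItem(page, lp);

    /* XMIN must match the XMAX of the tuple we arrived from */
    if (!TransactionIdEquals(HeapTupleHeaderGetXmin(*tuple), priorXmax))
        return false;

    return true;
}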

src/backend/access/heap/heapam.c  (+2 -2)

@@ -7163,7 +7163,7 @@ log_heap_clean(Relation reln, Buffer buffer,
 	 * arrays need not be stored too. Note that even if all three arrays are
 	 * empty, we want to expose the buffer as a candidate for whole-page
 	 * storage, since this record type implies a defragmentation operation
-	 * even if no item pointers changed state.
+	 * even if no line pointers changed state.
 	 */
 	if (nredirected > 0)
 		XLogRegisterBufData(0, (char *) redirected,
@@ -7724,7 +7724,7 @@ heap_xlog_clean(XLogReaderState *record)
 		nunused = (end - nowunused);
 		Assert(nunused >= 0);
 
-		/* Update all item pointers per the record, and repair fragmentation */
+		/* Update all line pointers per the record, and repair fragmentation */
 		heap_page_prune_execute(buffer,
 								redirected, nredirected,
 								nowdead, ndead,

src/backend/access/heap/heapam_handler.c  (+1 -1)

@@ -2162,7 +2162,7 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,
 	else
 	{
 		/*
-		 * Bitmap is lossy, so we must examine each item pointer on the page.
+		 * Bitmap is lossy, so we must examine each line pointer on the page.
 		 * But we can ignore HOT chains, since we'll check each tuple anyway.
 		 */
 		Page		dp = (Page) BufferGetPage(buffer);
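
For reference, the lossy-bitmap case described in this comment means walking every line pointer on the page rather than only the offsets recorded in the bitmap. A rough sketch of such a loop with the standard bufpage macros follows; the helper name and the output array are assumptions, not the actual heapam_scan_bitmap_next_block() code.

#include "postgres.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "storage/itemid.h"
#include "storage/off.h"

/* Hypothetical helper: collect the offsets of all normal line pointers. */
static int
collect_normal_offsets(Buffer buffer, OffsetNumber *offsets)
{
    Page         dp = BufferGetPage(buffer);
    OffsetNumber maxoff = PageGetMaxOffsetNumber(dp);
    OffsetNumber offnum;
    int          ntup = 0;

    for (offnum = FirstOffsetNumber;
         offnum <= maxoff;
         offnum = OffsetNumberNext(offnum))
    {
        ItemId      lp = PageGetItemId(dp, offnum);

        /* skip unused, dead, and redirect line pointers */
        if (!ItemIdIsNormal(lp))
            continue;
        offsets[ntup++] = offnum;
    }
    return ntup;
}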

src/backend/access/heap/pruneheap.c  (+5 -5)

@@ -324,7 +324,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 
 
 /*
- * Prune specified item pointer or a HOT chain originating at that item.
+ * Prune specified line pointer or a HOT chain originating at line pointer.
  *
  * If the item is an index-referenced tuple (i.e. not a heap-only tuple),
  * the HOT chain is pruned by removing all DEAD tuples at the start of the HOT
@@ -454,7 +454,7 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		}
 
 		/*
-		 * Likewise, a dead item pointer can't be part of the chain. (We
+		 * Likewise, a dead line pointer can't be part of the chain. (We
 		 * already eliminated the case of dead root tuple outside this
 		 * function.)
 		 */
@@ -630,7 +630,7 @@ heap_prune_record_prunable(PruneState *prstate, TransactionId xid)
 		prstate->new_prune_xid = xid;
 }
 
-/* Record item pointer to be redirected */
+/* Record line pointer to be redirected */
 static void
 heap_prune_record_redirect(PruneState *prstate,
 						   OffsetNumber offnum, OffsetNumber rdoffnum)
@@ -645,7 +645,7 @@ heap_prune_record_redirect(PruneState *prstate,
 	prstate->marked[rdoffnum] = true;
 }
 
-/* Record item pointer to be marked dead */
+/* Record line pointer to be marked dead */
 static void
 heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum)
 {
@@ -656,7 +656,7 @@ heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum)
 	prstate->marked[offnum] = true;
 }
 
-/* Record item pointer to be marked unused */
+/* Record line pointer to be marked unused */
 static void
 heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 {
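
The three record functions renamed above correspond to the three end states a pruned line pointer can reach: redirected, dead, or unused. The fragment below only illustrates those states with the itemid.h setter macros; in the real code the record functions merely note offsets, and heap_page_prune_execute() later applies the changes while a cleanup lock is held.

#include "postgres.h"
#include "storage/bufpage.h"
#include "storage/itemid.h"

/*
 * Illustration only: the possible end states of a pruned line pointer.
 * Exactly one of these would be applied to a given line pointer.
 */
static void
prune_states_example(Page page, OffsetNumber offnum, OffsetNumber rdoffnum)
{
    ItemId      lp = PageGetItemId(page, offnum);

    /* redirect: keep the root line pointer, point it at a later chain member */
    ItemIdSetRedirect(lp, rdoffnum);

    /* dead: tuple storage is gone, but index entries may still reference it */
    /* ItemIdSetDead(lp); */

    /* unused: the line pointer itself is free for immediate re-use */
    /* ItemIdSetUnused(lp); */
}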

src/backend/access/heap/vacuumlazy.c  (+3 -3)

@@ -509,7 +509,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 				live_tuples,	/* live tuples (reltuples estimate) */
 				tups_vacuumed,	/* tuples cleaned up by vacuum */
 				nkeep,			/* dead-but-not-removable tuples */
-				nunused;		/* unused item pointers */
+				nunused;		/* unused line pointers */
 	IndexBulkDeleteResult **indstats;
 	int			i;
 	PGRUsage	ru0;
@@ -1017,7 +1017,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 			ItemPointerSet(&(tuple.t_self), blkno, offnum);
 
 			/*
-			 * DEAD item pointers are to be vacuumed normally; but we don't
+			 * DEAD line pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
 			 * least in the common case where heap_page_prune() just freed up
 			 * a non-HOT tuple).
@@ -1483,7 +1483,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 	appendStringInfo(&buf,
 					 _("%.0f dead row versions cannot be removed yet, oldest xmin: %u\n"),
 					 nkeep, OldestXmin);
-	appendStringInfo(&buf, _("There were %.0f unused item pointers.\n"),
+	appendStringInfo(&buf, _("There were %.0f unused item identifiers.\n"),
 					 nunused);
 	appendStringInfo(&buf, ngettext("Skipped %u page due to buffer pins, ",
 									"Skipped %u pages due to buffer pins, ",

src/backend/access/index/indexam.c  (-26)

@@ -38,32 +38,6 @@
  * This file contains the index_ routines which used
  * to be a scattered collection of stuff in access/genam.
  *
- *
- * old comments
- * Scans are implemented as follows:
- *
- * `0' represents an invalid item pointer.
- * `-' represents an unknown item pointer.
- * `X' represents a known item pointers.
- * `+' represents known or invalid item pointers.
- * `*' represents any item pointers.
- *
- * State is represented by a triple of these symbols in the order of
- * previous, current, next. Note that the case of reverse scans works
- * identically.
- *
- *      State     Result
- * (1)  + + -     + 0 0     (if the next item pointer is invalid)
- * (2)            + X -     (otherwise)
- * (3)  * 0 0     * 0 0     (no change)
- * (4)  + X 0     X 0 0     (shift)
- * (5)  * + X     + X -     (shift, add unknown)
- *
- * All other states cannot occur.
- *
- * Note: It would be possible to cache the status of the previous and
- * next item pointer using the flags.
- *
 *-------------------------------------------------------------------------
 */

src/backend/access/nbtree/nbtinsert.c  (+1 -1)

@@ -1689,7 +1689,7 @@ _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Buffer cbuf,
 	 * Direct access to page is not good but faster - we should implement
 	 * some new func in page API. Note we only store the tuples
 	 * themselves, knowing that they were inserted in item-number order
-	 * and so the item pointers can be reconstructed. See comments for
+	 * and so the line pointers can be reconstructed. See comments for
 	 * _bt_restore_page().
 	 */
 	XLogRegisterBufData(1,

src/backend/access/spgist/spgvacuum.c  (+1 -1)

@@ -337,7 +337,7 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 							InvalidBlockNumber, InvalidOffsetNumber);
 
 	/*
-	 * We implement the move step by swapping the item pointers of the source
+	 * We implement the move step by swapping the line pointers of the source
 	 * and target tuples, then replacing the newly-source tuples with
 	 * placeholders. This is perhaps unduly friendly with the page data
 	 * representation, but it's fast and doesn't risk page overflow when a

src/backend/storage/page/bufpage.c  (+13 -13)

@@ -65,7 +65,7 @@ PageInit(Page page, Size pageSize, Size specialSize)
  * Check that the page header and checksum (if any) appear valid.
  *
  * This is called when a page has just been read in from disk. The idea is
- * to cheaply detect trashed pages before we go nuts following bogus item
+ * to cheaply detect trashed pages before we go nuts following bogus line
  * pointers, testing invalid transaction identifiers, etc.
  *
  * It turns out to be necessary to allow zeroed pages here too. Even though
@@ -170,12 +170,12 @@ PageIsVerified(Page page, BlockNumber blkno)
  * reason. A WARNING is issued indicating the reason for the refusal.
  *
  * offsetNumber must be either InvalidOffsetNumber to specify finding a
- * free item pointer, or a value between FirstOffsetNumber and one past
- * the last existing item, to specify using that particular item pointer.
+ * free line pointer, or a value between FirstOffsetNumber and one past
+ * the last existing item, to specify using that particular line pointer.
  *
  * If offsetNumber is valid and flag PAI_OVERWRITE is set, we just store
  * the item at the specified offsetNumber, which must be either a
- * currently-unused item pointer, or one past the last existing item.
+ * currently-unused line pointer, or one past the last existing item.
  *
  * If offsetNumber is valid and flag PAI_OVERWRITE is not set, insert
  * the item at the specified offsetNumber, moving existing items later
@@ -314,7 +314,7 @@ PageAddItemExtended(Page page,
 		memmove(itemId + 1, itemId,
 				(limit - offsetNumber) * sizeof(ItemIdData));
 
-	/* set the item pointer */
+	/* set the line pointer */
 	ItemIdSetNormal(itemId, upper, size);
 
 	/*
@@ -529,7 +529,7 @@ PageRepairFragmentation(Page page)
 					 itemidptr->itemoff >= (int) pd_special))
 				ereport(ERROR,
 						(errcode(ERRCODE_DATA_CORRUPTED),
-						 errmsg("corrupted item pointer: %u",
+						 errmsg("corrupted line pointer: %u",
 								itemidptr->itemoff)));
 			itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
 			totallen += itemidptr->alignedlen;
@@ -763,7 +763,7 @@ PageIndexTupleDelete(Page page, OffsetNumber offnum)
 		offset != MAXALIGN(offset))
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("corrupted item pointer: offset = %u, size = %u",
+				 errmsg("corrupted line pointer: offset = %u, size = %u",
 						offset, (unsigned int) size)));
 
 	/* Amount of space to actually be deleted */
@@ -881,7 +881,7 @@ PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems)
 						pd_lower, pd_upper, pd_special)));
 
 	/*
-	 * Scan the item pointer array and build a list of just the ones we are
+	 * Scan the line pointer array and build a list of just the ones we are
 	 * going to keep. Notice we do not modify the page yet, since we are
 	 * still validity-checking.
 	 */
@@ -901,7 +901,7 @@ PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems)
 			offset != MAXALIGN(offset))
 			ereport(ERROR,
 					(errcode(ERRCODE_DATA_CORRUPTED),
-					 errmsg("corrupted item pointer: offset = %u, length = %u",
+					 errmsg("corrupted line pointer: offset = %u, length = %u",
 							offset, (unsigned int) size)));
 
 		if (nextitm < nitems && offnum == itemnos[nextitm])
@@ -989,14 +989,14 @@ PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offnum)
 		offset != MAXALIGN(offset))
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("corrupted item pointer: offset = %u, size = %u",
+				 errmsg("corrupted line pointer: offset = %u, size = %u",
 						offset, (unsigned int) size)));
 
 	/* Amount of space to actually be deleted */
 	size = MAXALIGN(size);
 
 	/*
-	 * Either set the item pointer to "unused", or zap it if it's the last
+	 * Either set the line pointer to "unused", or zap it if it's the last
 	 * one. (Note: it's possible that the next-to-last one(s) are already
 	 * unused, but we do not trouble to try to compact them out if so.)
 	 */
@@ -1054,7 +1054,7 @@ PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offnum)
  * other tuples' data up or down as needed to keep the page compacted.
  * This is better than deleting and reinserting the tuple, because it
  * avoids any data shifting when the tuple size doesn't change; and
- * even when it does, we avoid moving the item pointers around.
+ * even when it does, we avoid moving the line pointers around.
  * Conceivably this could also be of use to an index AM that cares about
 * the physical order of tuples as well as their ItemId order.
 *
@@ -1099,7 +1099,7 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 		offset != MAXALIGN(offset))
 		ereport(ERROR,
 				(errcode(ERRCODE_DATA_CORRUPTED),
-				 errmsg("corrupted item pointer: offset = %u, size = %u",
+				 errmsg("corrupted line pointer: offset = %u, size = %u",
 						offset, (unsigned int) oldsize)));
 
 	/*

src/include/access/htup_details.h  (+1 -1)

@@ -564,7 +564,7 @@ do { \
  * MaxHeapTuplesPerPage is an upper bound on the number of tuples that can
  * fit on one heap page. (Note that indexes could have more, because they
  * use a smaller tuple header.) We arrive at the divisor because each tuple
- * must be maxaligned, and it must have an associated item pointer.
+ * must be maxaligned, and it must have an associated line pointer.
  *
  * Note: with HOT, there could theoretically be more line pointers (not actual
 * tuples) than this on a heap page. However we constrain the number of line
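
The divisor this comment refers to is the per-tuple cost on a heap page: a maxaligned tuple header plus one 4-byte line pointer (ItemIdData) in the page's line pointer array. The definition below is quoted from memory and should be treated as approximate; the worked number assumes the default 8 kB block size and 8-byte maxalignment.

/* htup_details.h (approximate): each tuple needs a maxaligned header
 * plus one ItemIdData entry in the line pointer array. */
#define MaxHeapTuplesPerPage \
	((int) ((BLCKSZ - SizeOfPageHeaderData) / \
			(MAXALIGN(SizeofHeapTupleHeader) + sizeof(ItemIdData))))

/* With BLCKSZ = 8192, SizeOfPageHeaderData = 24, MAXALIGN(23) = 24 and
 * sizeof(ItemIdData) = 4:  (8192 - 24) / (24 + 4) = 291 tuples at most. */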

src/include/access/itup.h  (+1 -1)

@@ -131,7 +131,7 @@ typedef IndexAttributeBitMapData * IndexAttributeBitMap;
  * fit on one index page. An index tuple must have either data or a null
  * bitmap, so we can safely assume it's at least 1 byte bigger than a bare
  * IndexTupleData struct. We arrive at the divisor because each tuple
- * must be maxaligned, and it must have an associated item pointer.
+ * must be maxaligned, and it must have an associated line pointer.
  *
  * To be index-type-independent, this does not account for any special space
 * on the page, and is thus conservative.

src/include/storage/bufpage.h  (+10 -6)

@@ -53,14 +53,18 @@
  *
  * NOTES:
  *
- * linp1..N form an ItemId array. ItemPointers point into this array
- * rather than pointing directly to a tuple. Note that OffsetNumbers
+ * linp1..N form an ItemId (line pointer) array. ItemPointers point
+ * to a physical block number and a logical offset (line pointer
+ * number) within that block/page. Note that OffsetNumbers
  * conventionally start at 1, not 0.
  *
- * tuple1..N are added "backwards" on the page. because a tuple's
- * ItemPointer points to its ItemId entry rather than its actual
+ * tuple1..N are added "backwards" on the page. Since an ItemPointer
+ * offset is used to access an ItemId entry rather than an actual
  * byte-offset position, tuples can be physically shuffled on a page
- * whenever the need arises.
+ * whenever the need arises. This indirection also keeps crash recovery
+ * relatively simple, because the low-level details of page space
+ * management can be controlled by standard buffer page code during
+ * logging, and during recovery.
  *
  * AM-generic per-page information is kept in PageHeaderData.
  *
@@ -233,7 +237,7 @@ typedef PageHeaderData *PageHeader;
 
 /*
  * PageGetContents
- *		To be used in case the page does not contain item pointers.
+ *		To be used in cases where the page does not contain line pointers.
  *
  * Note: prior to 8.3 this was not guaranteed to yield a MAXALIGN'd result.
 * Now it is. Beware of old code that might think the offset to the contents
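
The rewritten NOTES spell out the indirection: an ItemPointer names a block and an offset, the offset selects a line pointer in the linp array, and only that line pointer records where the tuple currently sits on the page, which is why tuples can be shuffled freely. The sketch below walks that chain once; it is a hypothetical helper for illustration (no locking, visibility checks, or error handling), not PostgreSQL API.

#include "postgres.h"
#include "access/htup_details.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "storage/itemptr.h"
#include "utils/rel.h"

/* Hypothetical helper: resolve an ItemPointer to a tuple header. */
static HeapTupleHeader
fetch_via_item_pointer(Relation rel, ItemPointer tid, Buffer *buf)
{
    BlockNumber  blkno = ItemPointerGetBlockNumber(tid);
    OffsetNumber offnum = ItemPointerGetOffsetNumber(tid);
    Page         page;
    ItemId       lp;

    *buf = ReadBuffer(rel, blkno);      /* caller must ReleaseBuffer() */
    page = BufferGetPage(*buf);
    lp = PageGetItemId(page, offnum);   /* the stable, indexable entry */

    /* tuples can be shuffled on the page; only the line pointer is fixed */
    Assert(ItemIdIsNormal(lp));
    return (HeapTupleHeader) PageGetItem(page, lp);
}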
