
Commit e7caacf

Fix hard to hit race condition in heapam's tuple locking code.

As mentioned in its commit message, eca0f1d left open a race condition,
where a page could be marked all-visible after the code checked
PageIsAllVisible() to pin the VM, but before the page is locked. Plug
that hole.

Reviewed-By: Robert Haas, Andres Freund
Author: Amit Kapila
Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com
Backpatch: -
1 parent 4eb4b3f commit e7caacf
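
To make the window concrete, here is a standalone C/pthreads model of the race, not PostgreSQL code: all_visible stands in for the page's all-visible bit, the mutex for the buffer content lock, and pin_vm() for visibilitymap_pin(); every name in it is invented for illustration.

#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_mutex_t content_lock = PTHREAD_MUTEX_INITIALIZER;
static bool all_visible = false;    /* the bit another thread may set */
static bool vm_pinned = false;

static void pin_vm(void)            /* stands in for visibilitymap_pin() */
{
    vm_pinned = true;               /* in reality this may do I/O */
}

/* The old behaviour: check once, before taking the lock. */
static void *locker(void *arg)
{
    (void) arg;
    if (all_visible)                /* (1) bit not set yet, so no pin */
        pin_vm();
    sleep(1);                       /* widen the race window */
    pthread_mutex_lock(&content_lock);
    /* (3) all_visible may now be true while vm_pinned is still false,
     * and pinning here would risk blocking while the lock is held. */
    pthread_mutex_unlock(&content_lock);
    return NULL;
}

/* Another thread marking the page all-visible in the meantime. */
static void *setter(void *arg)
{
    (void) arg;
    pthread_mutex_lock(&content_lock);
    all_visible = true;             /* (2) flips between (1) and (3) */
    pthread_mutex_unlock(&content_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, locker, NULL);
    pthread_create(&b, NULL, setter, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}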

File tree

1 file changed, +38 -6 lines changed


src/backend/access/heap/heapam.c

+38 -6
@@ -4585,9 +4585,10 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	block = ItemPointerGetBlockNumber(tid);
 
 	/*
-	 * Before locking the buffer, pin the visibility map page if it may be
-	 * necessary. XXX: It might be possible for this to change after acquiring
-	 * the lock below. We don't yet deal with that case.
+	 * Before locking the buffer, pin the visibility map page if it appears to
+	 * be necessary.  Since we haven't got the lock yet, someone else might be
+	 * in the middle of changing this, so we'll need to recheck after we have
+	 * the lock.
 	 */
 	if (PageIsAllVisible(BufferGetPage(*buffer)))
 		visibilitymap_pin(relation, block, &vmbuffer);
@@ -5075,6 +5076,23 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 		goto out_locked;
 	}
 
+	/*
+	 * If we didn't pin the visibility map page and the page has become all
+	 * visible while we were busy locking the buffer, or during some
+	 * subsequent window during which we had it unlocked, we'll have to unlock
+	 * and re-lock, to avoid holding the buffer lock across I/O.  That's a bit
+	 * unfortunate, especially since we'll now have to recheck whether the
+	 * tuple has been locked or updated under us, but hopefully it won't
+	 * happen very often.
+	 */
+	if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))
+	{
+		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
+		visibilitymap_pin(relation, block, &vmbuffer);
+		LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE);
+		goto l3;
+	}
+
 	xmax = HeapTupleHeaderGetRawXmax(tuple->t_data);
 	old_infomask = tuple->t_data->t_infomask;
 
@@ -5665,9 +5683,10 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		CHECK_FOR_INTERRUPTS();
 
 		/*
-		 * Before locking the buffer, pin the visibility map page if it may be
-		 * necessary. XXX: It might be possible for this to change after
-		 * acquiring the lock below. We don't yet deal with that case.
+		 * Before locking the buffer, pin the visibility map page if it
+		 * appears to be necessary.  Since we haven't got the lock yet,
+		 * someone else might be in the middle of changing this, so we'll need
+		 * to recheck after we have the lock.
 		 */
 		if (PageIsAllVisible(BufferGetPage(buf)))
 			visibilitymap_pin(rel, block, &vmbuffer);
@@ -5676,6 +5695,19 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 
 		LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
 
+		/*
+		 * If we didn't pin the visibility map page and the page has become
+		 * all visible while we were busy locking the buffer, we'll have to
+		 * unlock and re-lock, to avoid holding the buffer lock across I/O.
+		 * That's a bit unfortunate, but hopefully shouldn't happen often.
+		 */
+		if (vmbuffer == InvalidBuffer && PageIsAllVisible(BufferGetPage(buf)))
+		{
+			LockBuffer(buf, BUFFER_LOCK_UNLOCK);
+			visibilitymap_pin(rel, block, &vmbuffer);
+			LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
+		}
+
 		/*
 		 * Check the tuple XMIN against prior XMAX, if any.  If we reached the
 		 * end of the chain, we're done, so return success.
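
The hunks above close that window with a recheck after the lock is taken, plus an unlock/pin/re-lock dance because visibilitymap_pin() may do I/O and so must not run while the buffer lock is held; in heap_lock_tuple() control then jumps back to l3 so every check made under the old lock is repeated. Below is a minimal standalone sketch of that pattern, again not PostgreSQL code and with invented names:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t content_lock = PTHREAD_MUTEX_INITIALIZER;
static bool all_visible = false;   /* may be set concurrently */
static bool vm_pinned = false;

static void pin_vm(void)           /* may block on I/O: call it unlocked */
{
    vm_pinned = true;
}

static void lock_tuple_sketch(void)
{
    /* Pre-lock check: cheap, but only a hint. */
    if (all_visible)
        pin_vm();

    pthread_mutex_lock(&content_lock);

l3:                                /* restart point, as in the patch */
    /* ... checks that are only valid while the lock is held ... */

    if (!vm_pinned && all_visible)
    {
        /* The bit was set after the pre-lock check.  Drop the lock so
         * the pin can block safely, re-take it, and start over, since
         * the protected state may have changed while it was unlocked. */
        pthread_mutex_unlock(&content_lock);
        pin_vm();
        pthread_mutex_lock(&content_lock);
        goto l3;
    }

    /* ... proceed with both the lock and (if needed) the pin held ... */

    pthread_mutex_unlock(&content_lock);
}

int main(void)
{
    all_visible = true;            /* pretend the bit is already set */
    lock_tuple_sketch();
    return 0;
}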
