
Commit e49e191

Amit Kapila committed
Fix cleanup lock acquisition in SPLIT_ALLOCATE_PAGE replay.
During XLOG_HASH_SPLIT_ALLOCATE_PAGE replay, we were checking for a
cleanup lock on the new bucket page after acquiring an exclusive lock
on it, and raising a PANIC error on failure. However, it is quite
possible that the checkpointer acquires a pin on the same page before
replay acquires its lock, in which case the replay would fail. So
instead, directly acquire the cleanup lock on the new bucket page
during XLOG_HASH_SPLIT_ALLOCATE_PAGE replay.

Reported-by: Andres Freund
Author: Robert Haas
Reviewed-By: Amit Kapila, Andres Freund, Vignesh C
Backpatch-through: 11
Discussion: https://postgr.es/m/20220810022617.fvjkjiauaykwrbse@awork3.anarazel.de
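To make the failure mode above concrete, here is a small self-contained toy model in plain C with pthreads. It is not PostgreSQL code; the names pin_count, buf_lock, checkpointer and the timings are invented for this sketch. It contrasts the old pattern, which checks for a cleanup lock only after the fact and bails out, with the new pattern of waiting until the buffer has no other pins, which is what acquiring the cleanup lock directly amounts to.

/*
 * Toy model of the race (not PostgreSQL code).  A "cleanup lock" here means:
 * hold the buffer exclusively while being its only pinner.  The old replay
 * code locked the page and only then checked the pin count, so a pin taken
 * concurrently (e.g. by the checkpointer) made the check fail; waiting for
 * the extra pin to go away cannot fail that way.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t unpinned = PTHREAD_COND_INITIALIZER;
static int pin_count = 0;

/* Checkpointer-like thread: briefly pins the buffer, ignoring its contents. */
static void *checkpointer(void *arg)
{
    pthread_mutex_lock(&buf_lock);
    pin_count++;                /* pin acquired before replay locks the page */
    pthread_mutex_unlock(&buf_lock);

    usleep(100 * 1000);         /* pretend to write the buffer out */

    pthread_mutex_lock(&buf_lock);
    pin_count--;
    pthread_cond_signal(&unpinned);
    pthread_mutex_unlock(&buf_lock);
    return NULL;
}

int main(void)
{
    pthread_t cp;

    pthread_create(&cp, NULL, checkpointer, NULL);
    usleep(10 * 1000);          /* let the checkpointer win the race */

    pthread_mutex_lock(&buf_lock);
    pin_count++;                /* replay pins the buffer itself */

    /* Old approach: check after the fact and give up (PANIC in PostgreSQL). */
    if (pin_count != 1)
        printf("old approach: cleanup-lock check fails, would PANIC\n");

    /* New approach: wait until we are the only pinner, then proceed. */
    while (pin_count != 1)
        pthread_cond_wait(&unpinned, &buf_lock);
    printf("new approach: cleanup lock acquired, replay proceeds\n");

    pin_count--;
    pthread_mutex_unlock(&buf_lock);
    pthread_join(cp, NULL);
    return 0;
}

Built with something like "cc -pthread sketch.c" (hypothetical filename), the 10 ms head start means the checkpointer's pin makes the after-the-fact check fail, while the waiting version simply blocks until the pin is released and then proceeds.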
1 parent: f754e76

2 files changed: 9 additions, 6 deletions

src/backend/access/hash/hash_xlog.c

Lines changed: 2 additions & 3 deletions
@@ -352,11 +352,10 @@ hash_xlog_split_allocate_page(XLogReaderState *record)
 	}
 
 	/* replay the record for new bucket */
-	newbuf = XLogInitBufferForRedo(record, 1);
+	XLogReadBufferForRedoExtended(record, 1, RBM_ZERO_AND_CLEANUP_LOCK, true,
+								  &newbuf);
 	_hash_initbuf(newbuf, xlrec->new_bucket, xlrec->new_bucket,
 				  xlrec->new_bucket_flag, true);
-	if (!IsBufferCleanupOK(newbuf))
-		elog(PANIC, "hash_xlog_split_allocate_page: failed to acquire cleanup lock");
 	MarkBufferDirty(newbuf);
 	PageSetLSN(BufferGetPage(newbuf), lsn);

src/backend/access/hash/hashpage.c

Lines changed: 7 additions & 3 deletions
@@ -805,9 +805,13 @@ _hash_expandtable(Relation rel, Buffer metabuf)
 	/*
 	 * Physically allocate the new bucket's primary page.  We want to do this
 	 * before changing the metapage's mapping info, in case we can't get the
-	 * disk space.  Ideally, we don't need to check for cleanup lock on new
-	 * bucket as no other backend could find this bucket unless meta page is
-	 * updated.  However, it is good to be consistent with old bucket locking.
+	 * disk space.
+	 *
+	 * XXX It doesn't make sense to call _hash_getnewbuf first, zeroing the
+	 * buffer, and then only afterwards check whether we have a cleanup lock.
+	 * However, since no scan can be accessing the buffer yet, any concurrent
+	 * accesses will just be from processes like the bgwriter or checkpointer
+	 * which don't care about its contents, so it doesn't really matter.
 	 */
 	buf_nblkno = _hash_getnewbuf(rel, start_nblkno, MAIN_FORKNUM);
 	if (!IsBufferCleanupOK(buf_nblkno))
