Commit 98c7c71

Fix extreme skew detection in Parallel Hash Join.
After repartitioning the inner side of a hash join that would have exceeded the allowed size, we check if all the tuples from a parent partition moved to one child partition. That is evidence that it contains duplicate keys and later attempts to repartition will also fail, so we should give up trying to limit memory (for lack of a better fallback strategy).

A thinko prevented the check from working correctly in partition 0 (the one that is partially loaded into memory already). After repartitioning, we should check for extreme skew if the *parent* partition's space_exhausted flag was set, not the child partition's. The consequence was repeated futile repartitioning until per-partition data exceeded various limits including "ERROR: invalid DSA memory alloc request size 1811939328", OS allocation failure, or temporary disk space errors. (We could also do something about some of those symptoms, but that's material for separate patches.)

This problem only became likely when PostgreSQL 16 introduced support for Parallel Hash Right/Full Join, allowing NULL keys into the hash table. Repartitioning always leaves NULL in partition 0, no matter how many times you do it, because the hash value is all zero bits. That's unlikely for other hashed values, but they might still have caused wasted extra effort before giving up.

Back-patch to all supported releases.

Reported-by: Craig Milhiser <craig@milhiser.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CA%2BwnhO1OfgXbmXgC4fv_uu%3DOxcDQuHvfoQ4k0DFeB0Qqd-X-rQ%40mail.gmail.com
1 parent d893a29 commit 98c7c71
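To make the failure mode concrete, here is a minimal standalone C sketch, not PostgreSQL's actual batch-routing code: batch_for_hash and its masking scheme are made up for illustration. The point it demonstrates is that the batch number is a pure function of the hash value, so an all-zero hash value (which NULL keys receive in a Parallel Hash Right/Full Join) lands in batch 0 no matter how many times the batch count doubles.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for deriving a batch number from a hash value;
 * not the real routing function, but it shares the key property: the
 * result depends only on hashvalue and nbatch.
 */
static uint32_t
batch_for_hash(uint32_t hashvalue, uint32_t nbatch)
{
    return hashvalue & (nbatch - 1);    /* nbatch assumed a power of two */
}

int
main(void)
{
    uint32_t hashvalue = 0;     /* all-zero hash, e.g. from a NULL join key */

    for (uint32_t nbatch = 4; nbatch <= 64; nbatch *= 2)
        printf("nbatch=%2u -> batch %u\n",
               nbatch, batch_for_hash(hashvalue, nbatch));
    /*
     * Prints batch 0 for every nbatch: repartitioning can never spread
     * tuples with identical hash values across child batches.
     */
    return 0;
}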

src/backend/executor/nodeHash.c

Lines changed: 12 additions & 5 deletions

@@ -1228,32 +1228,39 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
 			if (BarrierArriveAndWait(&pstate->grow_batches_barrier,
 									 WAIT_EVENT_HASH_GROW_BATCHES_DECIDE))
 			{
+				ParallelHashJoinBatch *old_batches;
 				bool		space_exhausted = false;
 				bool		extreme_skew_detected = false;
 
 				/* Make sure that we have the current dimensions and buckets. */
 				ExecParallelHashEnsureBatchAccessors(hashtable);
 				ExecParallelHashTableSetCurrentBatch(hashtable, 0);
 
+				old_batches = dsa_get_address(hashtable->area, pstate->old_batches);
+
 				/* Are any of the new generation of batches exhausted? */
 				for (int i = 0; i < hashtable->nbatch; ++i)
 				{
-					ParallelHashJoinBatch *batch = hashtable->batches[i].shared;
+					ParallelHashJoinBatch *batch;
+					ParallelHashJoinBatch *old_batch;
+					int			parent;
 
+					batch = hashtable->batches[i].shared;
 					if (batch->space_exhausted ||
 						batch->estimated_size > pstate->space_allowed)
-					{
-						int			parent;
-
 						space_exhausted = true;
 
+					parent = i % pstate->old_nbatch;
+					old_batch = NthParallelHashJoinBatch(old_batches, parent);
+					if (old_batch->space_exhausted ||
+						batch->estimated_size > pstate->space_allowed)
+					{
 						/*
 						 * Did this batch receive ALL of the tuples from its
 						 * parent batch? That would indicate that further
 						 * repartitioning isn't going to help (the hash values
 						 * are probably all the same).
 						 */
-						parent = i % pstate->old_nbatch;
						if (batch->ntuples == hashtable->batches[parent].shared->old_ntuples)
							extreme_skew_detected = true;
					}
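For readers unfamiliar with the parent/child numbering the patch relies on (the "parent = i % pstate->old_nbatch" line above), here is a tiny illustrative C sketch, with made-up variable values, of how child batches map back to their parents when the batch count doubles.

#include <stdio.h>

int
main(void)
{
    int old_nbatch = 4;         /* batch count before repartitioning */
    int nbatch = 8;             /* batch count after doubling */

    for (int i = 0; i < nbatch; ++i)
        printf("child batch %d came from parent batch %d\n", i, i % old_nbatch);
    /*
     * Children 0 and 4 come from parent 0, children 1 and 5 from parent 1,
     * and so on; this is why the fix looks up the parent batch with
     * i % pstate->old_nbatch before checking its space_exhausted flag.
     */
    return 0;
}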
