Commit 72d5b27

Fix oversized memory allocation in Parallel Hash Join
During the calculation of the maximum number of buckets, take into account that we later round that number up to the next power of 2.

Reported-by: Karen Talarico
Bug: #16925
Discussion: https://postgr.es/m/16925-ec96d83529d0d629%40postgresql.org
Author: Thomas Munro, Andrei Lepikhov, Alexander Korotkov
Reviewed-by: Alena Rybakina
Backpatch-through: 12
1 parent 49fa183


src/backend/executor/nodeHash.c

Lines changed: 11 additions & 2 deletions
@@ -1141,6 +1141,7 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
                 double      dtuples;
                 double      dbuckets;
                 int         new_nbuckets;
+                uint32      max_buckets;
 
                 /*
                  * We probably also need a smaller bucket array.  How many
@@ -1153,9 +1154,17 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
                  * array.
                  */
                 dtuples = (old_batch0->ntuples * 2.0) / new_nbatch;
+                /*
+                 * We need to calculate the maximum number of buckets to
+                 * stay within the MaxAllocSize boundary.  Round the
+                 * maximum number to the previous power of 2 given that
+                 * later we round the number to the next power of 2.
+                 */
+                max_buckets = MaxAllocSize / sizeof(dsa_pointer_atomic);
+                if ((max_buckets & (max_buckets - 1)) != 0)
+                    max_buckets = 1 << (my_log2(max_buckets) - 1);
                 dbuckets = ceil(dtuples / NTUP_PER_BUCKET);
-                dbuckets = Min(dbuckets,
-                               MaxAllocSize / sizeof(dsa_pointer_atomic));
+                dbuckets = Min(dbuckets, max_buckets);
                 new_nbuckets = (int) dbuckets;
                 new_nbuckets = Max(new_nbuckets, 1024);
                 new_nbuckets = 1 << my_log2(new_nbuckets);
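
For context, a minimal standalone sketch of the arithmetic behind this fix follows. The constants are assumptions for illustration, not read from a build: MaxAllocSize is 0x3fffffff in PostgreSQL, and sizeof(dsa_pointer_atomic) is taken as 8 bytes (typical of 64-bit platforms). next_pow2() and prev_pow2() are hypothetical helpers standing in for PostgreSQL's 1 << my_log2(n) round-up and the round-down the fix adds. With these values the bucket cap works out to 2^27 - 1, which is not a power of 2, so rounding it up to 2^27 requests a bucket array one byte past MaxAllocSize.

#include <stdio.h>
#include <stdint.h>

#define MAX_ALLOC_SIZE  0x3fffffffULL   /* assumed: PostgreSQL's MaxAllocSize */
#define PTR_SIZE        8ULL            /* assumed sizeof(dsa_pointer_atomic) */

/* Smallest power of 2 >= n; mirrors the effect of 1 << my_log2(n). */
static uint64_t
next_pow2(uint64_t n)
{
    uint64_t    p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

/* Largest power of 2 <= n; mirrors the round-down added by the fix. */
static uint64_t
prev_pow2(uint64_t n)
{
    return next_pow2(n) == n ? n : next_pow2(n) >> 1;
}

int
main(void)
{
    uint64_t    cap = MAX_ALLOC_SIZE / PTR_SIZE;    /* 134217727 = 2^27 - 1 */

    /* Before the fix: clamp to cap, then round up to a power of 2. */
    uint64_t    unfixed = next_pow2(cap);           /* 2^27 */

    printf("unfixed: %llu buckets -> %llu bytes (limit %llu)\n",
           (unsigned long long) unfixed,
           (unsigned long long) (unfixed * PTR_SIZE),
           (unsigned long long) MAX_ALLOC_SIZE);

    /*
     * After the fix: round the cap down to a power of 2 first, so the
     * later round-up cannot push the allocation past MaxAllocSize.
     */
    uint64_t    fixed = next_pow2(prev_pow2(cap));  /* 2^26 */

    printf("fixed:   %llu buckets -> %llu bytes\n",
           (unsigned long long) fixed,
           (unsigned long long) (fixed * PTR_SIZE));
    return 0;
}

Compiled with any C99 compiler, the unfixed path requests 1073741824 bytes against a limit of 1073741823, one byte over, which the allocator would reject as an oversized request; rounding the cap down to 2^26 first keeps the bucket array at 536870912 bytes.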
