
Commit 714a987 (1 parent: 37c5516)

Fix oversized memory allocation in Parallel Hash Join

During the calculation of the maximum number of buckets, take into account
that we later round that number up to the next power of 2.

Reported-by: Karen Talarico
Bug: #16925
Discussion: https://postgr.es/m/16925-ec96d83529d0d629%40postgresql.org
Author: Thomas Munro, Andrei Lepikhov, Alexander Korotkov
Reviewed-by: Alena Rybakina
Backpatch-through: 12

File tree: 1 file changed, +10 −2 lines


src/backend/executor/nodeHash.c

Lines changed: 10 additions & 2 deletions
@@ -1160,6 +1160,7 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
 				double		dtuples;
 				double		dbuckets;
 				int			new_nbuckets;
+				uint32		max_buckets;
 
 				/*
 				 * We probably also need a smaller bucket array.  How many
@@ -1172,9 +1173,16 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
 				 * array.
 				 */
 				dtuples = (old_batch0->ntuples * 2.0) / new_nbatch;
+				/*
+				 * We need to calculate the maximum number of buckets to
+				 * stay within the MaxAllocSize boundary.  Round the
+				 * maximum number to the previous power of 2 given that
+				 * later we round the number to the next power of 2.
+				 */
+				max_buckets = pg_prevpower2_32((uint32)
+					(MaxAllocSize / sizeof(dsa_pointer_atomic)));
 				dbuckets = ceil(dtuples / NTUP_PER_BUCKET);
-				dbuckets = Min(dbuckets,
-							   MaxAllocSize / sizeof(dsa_pointer_atomic));
+				dbuckets = Min(dbuckets, max_buckets);
 				new_nbuckets = (int) dbuckets;
 				new_nbuckets = Max(new_nbuckets, 1024);
 				new_nbuckets = pg_nextpower2_32(new_nbuckets);
