
Commit 6fcde24

Fix some minor errors in new PHJ code.
Correct ExecParallelHashTuplePrealloc's estimate of whether the
space_allowed limit is exceeded.  Be more consistent about tuples that
are exactly HASH_CHUNK_THRESHOLD in size (they're "small", not "large").
Neither of these things explain the current buildfarm unhappiness,
but they're still bugs.

Thomas Munro, per gripe by me

Discussion: https://postgr.es/m/CAEepm=34PDuR69kfYVhmZPgMdy8pSA-MYbpesEN1SR+2oj3Y+w@mail.gmail.com
1 parent 3decd15 commit 6fcde24

File tree

1 file changed: +4 additions, -2 deletions


src/backend/executor/nodeHash.c

Lines changed: 4 additions & 2 deletions

@@ -2740,7 +2740,7 @@ ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size,
 	 */
 	chunk = hashtable->current_chunk;
 	if (chunk != NULL &&
-		size < HASH_CHUNK_THRESHOLD &&
+		size <= HASH_CHUNK_THRESHOLD &&
 		chunk->maxlen - chunk->used >= size)
 	{

@@ -3260,6 +3260,7 @@ ExecParallelHashTuplePrealloc(HashJoinTable hashtable, int batchno, size_t size)

 	Assert(batchno > 0);
 	Assert(batchno < hashtable->nbatch);
+	Assert(size == MAXALIGN(size));

 	LWLockAcquire(&pstate->lock, LW_EXCLUSIVE);

@@ -3280,7 +3281,8 @@ ExecParallelHashTuplePrealloc(HashJoinTable hashtable, int batchno, size_t size)

 	if (pstate->growth != PHJ_GROWTH_DISABLED &&
 		batch->at_least_one_chunk &&
-		(batch->shared->estimated_size + size > pstate->space_allowed))
+		(batch->shared->estimated_size + want + HASH_CHUNK_HEADER_SIZE
+		 > pstate->space_allowed))
 	{
 		/*
 		 * We have determined that this batch would exceed the space budget if
