
Commit b70ea4c

Fix crashes on plans with multiple Gather (Merge) nodes.
es_query_dsa turns out to be broken by design, because it supposes that there is only one DSA for the whole query, whereas there is actually one per Gather (Merge) node. For now, work around that problem by setting and clearing the pointer around the sections of code that might need it.

It's probably a better idea to get rid of es_query_dsa altogether in favor of having each node keep track individually of which DSA is relevant, but that seems like more than we would want to back-patch.

Thomas Munro, reviewed and tested by Andreas Seltenreich, Amit Kapila, and by me.

Discussion: http://postgr.es/m/CAEepm=1U6as=brnVvMNixEV2tpi8NuyQoTmO8Qef0-VV+=7MDA@mail.gmail.com
1 parent: ac93acb

3 files changed: +18 −6 lines

src/backend/executor/execParallel.c

Lines changed: 8 additions & 6 deletions
@@ -543,12 +543,6 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers)
 								  pcxt->seg);
 	}
 
-	/*
-	 * Make the area available to executor nodes running in the leader.  See
-	 * also ParallelQueryMain which makes it available to workers.
-	 */
-	estate->es_query_dsa = pei->area;
-
 	/*
 	 * Give parallel-aware nodes a chance to initialize their shared data.
 	 * This also initializes the elements of instrumentation->ps_instrument,
@@ -557,7 +551,11 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate, int nworkers)
 	d.pcxt = pcxt;
 	d.instrumentation = instrumentation;
 	d.nnodes = 0;
+
+	/* Install our DSA area while initializing the plan. */
+	estate->es_query_dsa = pei->area;
 	ExecParallelInitializeDSM(planstate, &d);
+	estate->es_query_dsa = NULL;
 
 	/*
 	 * Make sure that the world hasn't shifted under our feet.  This could
@@ -609,6 +607,8 @@ void
 ExecParallelReinitialize(PlanState *planstate,
 						 ParallelExecutorInfo *pei)
 {
+	EState	   *estate = planstate->state;
+
 	/* Old workers must already be shut down */
 	Assert(pei->finished);
 
@@ -618,7 +618,9 @@ ExecParallelReinitialize(PlanState *planstate,
 	pei->finished = false;
 
 	/* Traverse plan tree and let each child node reset associated state. */
+	estate->es_query_dsa = pei->area;
 	ExecParallelReInitializeDSM(planstate, pei->pcxt);
+	estate->es_query_dsa = NULL;
 }
 
 /*

src/backend/executor/nodeGather.c

Lines changed: 6 additions & 0 deletions
@@ -278,7 +278,13 @@ gather_getnext(GatherState *gatherstate)
 
 		if (gatherstate->need_to_scan_locally)
 		{
+			EState	   *estate = gatherstate->ps.state;
+
+			/* Install our DSA area while executing the plan. */
+			estate->es_query_dsa =
+				gatherstate->pei ? gatherstate->pei->area : NULL;
 			outerTupleSlot = ExecProcNode(outerPlan);
+			estate->es_query_dsa = NULL;
 
 			if (!TupIsNull(outerTupleSlot))
 				return outerTupleSlot;

src/backend/executor/nodeGatherMerge.c

Lines changed: 4 additions & 0 deletions
@@ -627,8 +627,12 @@ gather_merge_readnext(GatherMergeState *gm_state, int reader, bool nowait)
 	{
 		PlanState  *outerPlan = outerPlanState(gm_state);
 		TupleTableSlot *outerTupleSlot;
+		EState	   *estate = gm_state->ps.state;
 
+		/* Install our DSA area while executing the plan. */
+		estate->es_query_dsa = gm_state->pei ? gm_state->pei->area : NULL;
 		outerTupleSlot = ExecProcNode(outerPlan);
+		estate->es_query_dsa = NULL;
 
 		if (!TupIsNull(outerTupleSlot))
 		{
