
Commit cefb4b1

Tweak elog.c's logic for promoting errors into more severe errors.
Messages of less than ERROR severity should never be promoted (this fixes Gaetano Mendola's problem with a COMMERROR becoming a PANIC, and is obvious in hindsight anyway). Do all promotion in errstart not errfinish, to ensure that output decisions are made correctly; the former coding could suppress logging of promoted errors, which doesn't seem like a good idea. Eliminate some redundant code too.
1 parent 346900e commit cefb4b1
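For readers skimming the diff below, here is a condensed sketch of the promotion order that errstart now applies. This is not the committed code: promote_elevel is a hypothetical helper invented for illustration, though the globals it reads (CritSectionCount, PG_exception_stack, ExitOnAnyError, proc_exit_inprogress, errordata_stack_depth, errordata) mirror the real ones used in the diff.

/*
 * Hypothetical condensed sketch of the promotion logic now in errstart();
 * the real function also decides output routing and sets up the error stack.
 */
static int
promote_elevel(int elevel)
{
    int     i;

    if (elevel >= ERROR)            /* messages below ERROR are never promoted */
    {
        if (CritSectionCount > 0)   /* errors in critical sections become PANIC */
            elevel = PANIC;

        if (elevel == ERROR &&
            (PG_exception_stack == NULL ||  /* no handler to longjmp to */
             ExitOnAnyError ||              /* initdb-style "die on error" mode */
             proc_exit_inprogress))         /* error raised during proc_exit */
            elevel = FATAL;

        /* never let a new error downgrade a stacked FATAL or PANIC */
        for (i = 0; i <= errordata_stack_depth; i++)
            elevel = Max(elevel, errordata[i].elevel);
    }
    return elevel;
}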

1 file changed: +89 −94 lines


src/backend/utils/error/elog.c

@@ -24,20 +24,25 @@
  * the elog.c routines or something they call.  By far the most probable
  * scenario of this sort is "out of memory"; and it's also the nastiest
  * to handle because we'd likely also run out of memory while trying to
- * report this error!  Our escape hatch for this condition is to force any
- * such messages up to ERROR level if they aren't already (so that we will
- * not need to return to the outer elog.c call), and to reset the ErrorContext
- * to empty before trying to process the inner message.  Since ErrorContext
- * is guaranteed to have at least 8K of space in it (see mcxt.c), we should
- * be able to process an "out of memory" message successfully.
+ * report this error!  Our escape hatch for this case is to reset the
+ * ErrorContext to empty before trying to process the inner error.  Since
+ * ErrorContext is guaranteed to have at least 8K of space in it (see mcxt.c),
+ * we should be able to process an "out of memory" message successfully.
+ * Since we lose the prior error state due to the reset, we won't be able
+ * to return to processing the original error, but we wouldn't have anyway.
+ * (NOTE: the escape hatch is not used for recursive situations where the
+ * inner message is of less than ERROR severity; in that case we just
+ * try to process it and return normally.  Usually this will work, but if
+ * it ends up in infinite recursion, we will PANIC due to error stack
+ * overflow.)
  *
  *
  * Portions Copyright (c) 1996-2004, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
  * IDENTIFICATION
- *    $PostgreSQL: pgsql/src/backend/utils/error/elog.c,v 1.148 2004/08/29 05:06:50 momjian Exp $
+ *    $PostgreSQL: pgsql/src/backend/utils/error/elog.c,v 1.149 2004/09/05 02:01:41 tgl Exp $
  *
  *-------------------------------------------------------------------------
  */
@@ -132,30 +137,59 @@ errstart(int elevel, const char *filename, int lineno,
     ErrorData  *edata;
     bool        output_to_server = false;
     bool        output_to_client = false;
+    int         i;
 
     /*
-     * First decide whether we need to process this report at all; if it's
-     * warning or less and not enabled for logging, just return FALSE
-     * without starting up any error logging machinery.
-     */
-
-    /*
-     * Convert initialization errors into fatal errors.  This is probably
-     * redundant, because PG_exception_stack will still be null anyway.
-     */
-    if (elevel == ERROR && IsInitProcessingMode())
-        elevel = FATAL;
-
-    /*
-     * If we are inside a critical section, all errors become PANIC
-     * errors.  See miscadmin.h.
+     * Check some cases in which we want to promote an error into a more
+     * severe error.  None of this logic applies for non-error messages.
      */
     if (elevel >= ERROR)
     {
+        /*
+         * If we are inside a critical section, all errors become PANIC
+         * errors.  See miscadmin.h.
+         */
         if (CritSectionCount > 0)
             elevel = PANIC;
+
+        /*
+         * Check reasons for treating ERROR as FATAL:
+         *
+         * 1. we have no handler to pass the error to (implies we are in the
+         * postmaster or in backend startup).
+         *
+         * 2. ExitOnAnyError mode switch is set (initdb uses this).
+         *
+         * 3. the error occurred after proc_exit has begun to run.  (It's
+         * proc_exit's responsibility to see that this doesn't turn into
+         * infinite recursion!)
+         */
+        if (elevel == ERROR)
+        {
+            if (PG_exception_stack == NULL ||
+                ExitOnAnyError ||
+                proc_exit_inprogress)
+                elevel = FATAL;
+        }
+
+        /*
+         * If the error level is ERROR or more, errfinish is not going to
+         * return to caller; therefore, if there is any stacked error already
+         * in progress it will be lost.  This is more or less okay, except we
+         * do not want to have a FATAL or PANIC error downgraded because the
+         * reporting process was interrupted by a lower-grade error.  So check
+         * the stack and make sure we panic if panic is warranted.
+         */
+        for (i = 0; i <= errordata_stack_depth; i++)
+            elevel = Max(elevel, errordata[i].elevel);
     }
 
+    /*
+     * Now decide whether we need to process this report at all; if it's
+     * warning or less and not enabled for logging, just return FALSE
+     * without starting up any error logging machinery.
+     */
+
     /* Determine whether message is enabled for server log output */
     if (IsPostmasterEnvironment)
     {
@@ -210,18 +244,14 @@ errstart(int elevel, const char *filename, int lineno,
      * Okay, crank up a stack entry to store the info in.
      */
 
-    if (recursion_depth++ > 0)
+    if (recursion_depth++ > 0 && elevel >= ERROR)
     {
         /*
-         * Ooops, error during error processing.  Clear ErrorContext and
-         * force level up to ERROR or greater, as discussed at top of
-         * file.  Adjust output decisions too.
+         * Ooops, error during error processing.  Clear ErrorContext as
+         * discussed at top of file.  We will not return to the original
+         * error's reporter or handler, so we don't need it.
          */
         MemoryContextReset(ErrorContext);
-        output_to_server = true;
-        if (whereToSendOutput == Remote && elevel != COMMERROR)
-            output_to_client = true;
-        elevel = Max(elevel, ERROR);
 
         /*
          * If we recurse more than once, the problem might be something
@@ -300,74 +330,39 @@ errfinish(int dummy,...)
             (*econtext->callback) (econtext->arg);
 
     /*
-     * If the error level is ERROR or more, we are not going to return to
-     * caller; therefore, if there is any stacked error already in
-     * progress it will be lost.  This is more or less okay, except we do
-     * not want to have a FATAL or PANIC error downgraded because the
-     * reporting process was interrupted by a lower-grade error.  So check
-     * the stack and make sure we panic if panic is warranted.
+     * If ERROR (not more nor less) we pass it off to the current handler.
+     * Printing it and popping the stack is the responsibility of
+     * the handler.
      */
-    if (elevel >= ERROR)
+    if (elevel == ERROR)
     {
-        int         i;
+        /*
+         * We do some minimal cleanup before longjmp'ing so that handlers
+         * can execute in a reasonably sane state.
+         */
 
-        for (i = 0; i <= errordata_stack_depth; i++)
-            elevel = Max(elevel, errordata[i].elevel);
-    }
+        /* This is just in case the error came while waiting for input */
+        ImmediateInterruptOK = false;
 
-    /*
-     * Check some other reasons for treating ERROR as FATAL:
-     *
-     * 1. we have no handler to pass the error to (implies we are in the
-     * postmaster or in backend startup).
-     *
-     * 2. ExitOnAnyError mode switch is set (initdb uses this).
-     *
-     * 3. the error occurred after proc_exit has begun to run.  (It's
-     * proc_exit's responsibility to see that this doesn't turn into
-     * infinite recursion!)
-     */
-    if (elevel == ERROR)
-    {
-        if (PG_exception_stack == NULL ||
-            ExitOnAnyError ||
-            proc_exit_inprogress)
-            elevel = FATAL;
-        else
-        {
-            /*
-             * Otherwise we can pass the error off to the current handler.
-             * Printing it and popping the stack is the responsibility of
-             * the handler.
-             *
-             * We do some minimal cleanup before longjmp'ing so that handlers
-             * can execute in a reasonably sane state.
-             */
-
-            /* This is just in case the error came while waiting for input */
-            ImmediateInterruptOK = false;
-
-            /*
-             * Reset InterruptHoldoffCount in case we ereport'd from
-             * inside an interrupt holdoff section.  (We assume here that
-             * no handler will itself be inside a holdoff section.  If
-             * necessary, such a handler could save and restore
-             * InterruptHoldoffCount for itself, but this should make life
-             * easier for most.)
-             */
-            InterruptHoldoffCount = 0;
-
-            CritSectionCount = 0;   /* should be unnecessary, but... */
-
-            /*
-             * Note that we leave CurrentMemoryContext set to
-             * ErrorContext.  The handler should reset it to something else
-             * soon.
-             */
-
-            recursion_depth--;
-            PG_RE_THROW();
-        }
+        /*
+         * Reset InterruptHoldoffCount in case we ereport'd from
+         * inside an interrupt holdoff section.  (We assume here that
+         * no handler will itself be inside a holdoff section.  If
+         * necessary, such a handler could save and restore
+         * InterruptHoldoffCount for itself, but this should make life
+         * easier for most.)
+         */
+        InterruptHoldoffCount = 0;
+
+        CritSectionCount = 0;   /* should be unnecessary, but... */
+
+        /*
+         * Note that we leave CurrentMemoryContext set to ErrorContext.
+         * The handler should reset it to something else soon.
+         */
+
+        recursion_depth--;
+        PG_RE_THROW();
     }
 
     /*
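As a usage note, the PG_RE_THROW() at the end of the new errfinish path only works if some caller has pushed a handler onto PG_exception_stack; otherwise errstart now promotes the ERROR to FATAL (case 1 in the comment above). Below is a minimal caller-side sketch using the backend's PG_TRY() machinery from this era; risky_operation is a hypothetical placeholder, not a real function.

MemoryContext oldcontext = CurrentMemoryContext;

PG_TRY();
{
    risky_operation();          /* hypothetical; may ereport(ERROR, ...) */
}
PG_CATCH();
{
    /* control arrives here via the longjmp in PG_RE_THROW() */
    MemoryContextSwitchTo(oldcontext);  /* get out of ErrorContext promptly */
    FlushErrorState();                  /* pop the error stack, reset ErrorContext */
}
PG_END_TRY();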
