Age | Commit message | Author |
|
Running COPY within a pipeline can break protocol synchronization in
multiple ways. psql is limited in how it can process results when COPY
commands are mixed with normal queries while controlling a pipeline with
the new meta-commands, for the following reasons:
- In COPY mode, the backend ignores additional Sync messages and will
not send a matching ReadyForQuery expected by the frontend. Doing a
\syncpipeline just after COPY will leave the frontend waiting for a
ReadyForQuery message that won't be sent, leaving psql out-of-sync.
- libpq automatically sends a Sync with the Copy message, which is not
tracked in the command queue, creating an unexpected synchronization
point that psql cannot know about. While it is possible to track such
activity for a \copy, this cannot be done sanely with plain COPY
queries. A backend failure during a COPY would leave the pipeline in an
aborted state while the backend is in a clean state, ready to process
commands.
In the end, fixing those issues would require modifications in how libpq
handles pipelines and COPY. So, rather than implementing workarounds in
psql that bypass libpq's internals (command queue handling, for one),
and because pipeline meta-commands in psql are a new feature while COPY
in a pipeline has limited value compared to other queries, this commit
forbids the use of COPY within a pipeline to avoid possible breakage of
protocol synchronization within psql. If there is a use case for COPY
support within pipelines in libpq, it can always be added in the future,
if necessary.
Most of the changes in this commit affect the tests for psql pipelines,
removing the tests related to COPY. Some TAP tests still exist for COPY
TO/FROM and \copy to/from, to check that connections are aborted when
this operation is attempted.
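As a rough illustration (the table name is hypothetical and the exact
error wording is not shown), a sequence like the following is now
refused instead of being sent to the server:
    \startpipeline
    COPY some_table FROM STDIN;  -- now rejected while a pipeline is open
    \endpipeline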
Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/AC468509-06E8-4E2A-A4B1-63046A4AC6AB@postgrespro.ru
|
|
When the backend reads COPY data, it ignores all Sync messages, as per
c01641f8aed0. With psql pipelines, it is possible to manually send Sync
messages with \syncpipeline, which leaves the frontend in an
unrecoverable state as the backend will not send the ReadyForQuery
message expected by psql's result-consumption logic.
It would be possible to artificially reduce piped_syncs and
requested_results; however, libpq's state would still have queued Sync
messages in its command queue, and the only way to consume those without
directly calling pqCommandQueueAdvance() is to process ReadyForQuery
messages that will never be sent, since the backend ignores these.
Perhaps this could be improved in the future, but I am not excited about
introducing this much complication in libpq to manipulate the message
queues without a better use case to support it.
Hence, this patch aborts the connection if extra Sync messages are
detected after a COPY in a pipeline, to avoid staying in an inconsistent
protocol state, which is the best we can do with pipelines in psql for
now. Note that this change does not prevent wrapping a set of queries
inside a block made of \startpipeline and \endpipeline, only the use of
\syncpipeline for a COPY.
Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/18944-8a926c30f68387dd@postgresql.org
|
|
The right cocktail of COPY FROM, SELECT and/or DML queries and
\syncpipeline was able to break the logic in charge of discarding the
results of a pipeline, done in discardAbortedPipelineResults(). Such a
sequence makes the backend generate a FATAL error, due to a loss of
protocol synchronization.
This problem comes down to the fact that we did not consider the case of
libpq returning PGRES_FATAL_ERROR when discarding the results of an
aborted pipeline. The discarding code is changed so that this result
status is handled as a special case, with the caller of
discardAbortedPipelineResults() being responsible for consuming the
result.
A couple of tests are added to cover the problems reported, bringing an
interesting gain in coverage as there were no tests in the tree covering
the case of protocol synchronization loss.
Issue introduced by 41625ab8ea3d.
Reported-by: Alexander Kozhemyakin <a.kozhemyakin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/ebf6ce77-b180-4d6b-8eab-71f641499ddf@postgrespro.ru
|
|
The large majority of these have been introduced by recent commits done
in the v18 development cycle.
Author: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/9a7763ab-5252-429d-a943-b28941e0e28b@gmail.com
|
|
Use appendPQExpBufferStr when there are no parameters and
appendPQExpBufferChar when the string length is 1.
Unlike 3fae25cbb, which fixed this issue for code that was new to v18,
this one fixes up instances that exist in the back-branches. We've
historically tried to maintain this standard, and if we're going to
continue doing so, we shouldn't do it selectively based on when the code
was introduced. Now seems like a good time to flush out the existing
misuses; waiting until v19 would just prolong the number of released
versions in which they exist.
Author: David Rowley <drowleyml@gmail.com>
Discussion: https://postgr.es/m/CAApHDvoARMvPeXTTC0HnpARBHn-WgVstc8XFCyMGOzvgu_1HvQ@mail.gmail.com
|
|
Currently, the only way to pipe queries in an ongoing pipeline (in a
\startpipeline block) is to leverage the meta-commands able to create
extended queries such as \bind, \parse or \bind_named.
While this is good enough for testing the backend with pipelines, it has
been mentioned that it can also be very useful to allow queries
terminated by semicolons to be appended to a pipeline. For example, it
would be possible to migrate existing psql scripts to use pipelines by
just adding a set of \startpipeline and \endpipeline meta-commands,
making such scripts more efficient.
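For instance, an existing script could be wrapped like this (a minimal
sketch; the queries themselves are arbitrary):
    \startpipeline
    SELECT 1;
    SELECT 2;
    \endpipeline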
The change turns out to be simple in psql: when pipeline mode is active,
queries terminated by semicolons can be sent through
PQsendQueryParams() with no parameters set, instead of the default
PQsendQuery(), similarly to what pgbench does. \watch is still forbidden
while in a pipeline, as it expects its results to be processed
synchronously.
A large portion of this commit consists of additional test coverage,
mixing extended queries appended to a pipeline by \bind and friends with
queries terminated by semicolons.
This improvement has been suggested by Daniel Vérité.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/d67b9c19-d009-4a50-8020-1a0ea92366a1@manitou-mail.org
|
|
This commit adds %P to psql prompts, which reports the status of a
pipeline depending on PQpipelineStatus(): on, off or abort.
The following variables are added to report the state of an ongoing
pipeline:
- PIPELINE_SYNC_COUNT: reports the number of piped syncs.
- PIPELINE_COMMAND_COUNT: reports the number of piped commands, a
command being either \bind, \bind_named, \close or \parse.
- PIPELINE_RESULT_COUNT: reports the results available to read with
\getresults.
These variables can be used with \echo or in a prompt, using "%:name:"
in PROMPT1, PROMPT2 or PROMPT3. Some basic regression tests are added
for these. The suggestion to use variables to expose the status
counters comes from me; the patch as originally proposed was less
extensible, hardcoding the output in the prompt.
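A minimal sketch of how these could be used (the prompt string is just
an example):
    \set PROMPT1 '%/ [%P] %R%# '
    \startpipeline
    SELECT 1 \bind \g
    \echo pipeline commands queued: :PIPELINE_COMMAND_COUNT
    \echo pipeline syncs queued: :PIPELINE_SYNC_COUNT
    \endpipeline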
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/CAO6_XqroE7JuMEm1sWz55rp9fAYX2JwmcP_3m_v51vnOFdsLiQ@mail.gmail.com
|
|
With \bind, \parse, \bind_named and \close, it is possible to issue
queries from psql using the extended protocol. However, it was not
possible to send these queries using libpq's pipeline mode. This
feature has two advantages:
- Testing. Pipeline tests were previously only possible with pgbench,
using TAP tests. It now becomes possible to have more SQL tests able to
stress the backend with pipelines and extended queries. More tests,
discussed on other threads, will be added in a follow-up commit. Some
external projects in the community had to implement their own facilities
to work around this limitation.
- Emulation of custom workloads, with more control over the actions
taken by a client with libpq APIs. It is possible to emulate more
workload patterns to bottleneck the backend with the extended query
protocol.
This patch adds six new meta-commands to control pipelines:
* \startpipeline starts a new pipeline. All extended queries are queued
until the end of the pipeline is reached or a sync request is sent and
processed.
* \endpipeline ends an existing pipeline. All queued commands are sent
to the server and all responses are processed by psql.
* \syncpipeline queues a synchronization request, without flushing the
commands to the server, the equivalent of PQsendPipelineSync().
* \flush, the equivalent of PQflush().
* \flushrequest, the equivalent of PQsendFlushRequest().
* \getresults reads the server's results for the queries in a pipeline.
Unsent data is automatically pushed when \getresults is called. The
number of results read in a single meta-command execution can be
controlled with an optional parameter; 0 means that all the results
should be read.
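A short sketch of how these commands fit together (query and parameter
values are arbitrary):
    \startpipeline
    SELECT $1 \bind 'one' \g
    \flushrequest
    \getresults
    SELECT $1 \bind 'two' \g
    \syncpipeline
    \endpipeline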
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/CAO6_XqroE7JuMEm1sWz55rp9fAYX2JwmcP_3m_v51vnOFdsLiQ@mail.gmail.com
|
|
A patch touching this area of the code is under review, and this format
makes the code slightly harder to read.
Extracted from a larger patch by the same author.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/CAO6_XqroE7JuMEm1sWz55rp9fAYX2JwmcP_3m_v51vnOFdsLiQ@mail.gmail.com
|
|
Backpatch-through: 13
|
|
This fixes a couple of issues with the psql meta-commands mentioned
above when called repeatedly:
- The statement name is reset for each call. If a command errors out,
its send_mode would still be set, causing an incorrect path to be taken
when processing a query. For \bind_named, this could trigger an
assertion failure as a statement name is always expected for this
meta-command. This issue has been introduced by d55322b0da60.
- The memory allocated for bind parameters can be leaked. This bug was
made worse by d55322b0da60 but has existed since 5b66de3433e2, as it is
also possible to leak memory with \bind in v16 and v17. This requires a
separate fix on the affected branches; the issue is taken care of here
for HEAD.
This patch tightens the cleanup of the state used for the extended
protocol meta-commands (bind parameters, send mode, statement name): the
cleanup is now done before running each meta-command, on top of being
done once a query has been processed, which avoids the leaks and the
inconsistencies when mixing calls. The cleanup is refactored into a
single routine used in all the code paths where this step is required.
Reported-by: Alexander Lakhin
Author: Anthonin Bonnefoy
Discussion: https://postgr.es/m/2e5b89af-a351-ff0a-000c-037ac28314ab@gmail.com
|
|
Author: Alexander Lakhin
Discussion: https://postgr.es/m/f7e514cf-2446-21f1-a5d2-8c089a6e2168@gmail.com
|
|
Currently, psql supports only unnamed prepared statements, through the
meta-command \bind. With only this command, it is not possible to test
named statement creation, execution or closing through the extended
protocol.
This commit introduces three additional commands:
* \parse creates a prepared statement using the extended protocol,
acting as a wrapper for libpq's PQsendPrepare().
* \bind_named binds and executes an existing prepared statement using
the extended protocol, wrapping PQsendQueryPrepared().
* \close closes an existing prepared statement using the extended
protocol, wrapping PQsendClosePrepared().
This is going to be useful to add regression tests for the extended
query protocol, and I have some plans for that on separate threads.
Note that \bind relies on PQsendQueryParams().
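As a quick sketch of how the new meta-commands chain together (the
statement name and parameter are arbitrary):
    SELECT $1 AS value \parse stmt1
    \bind_named stmt1 42 \g
    \close stmt1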
The code of psql is refactored so that bind_flag is replaced by an enum
in _psqlSettings that tracks the type of libpq routine to execute, based
on the meta-command involved, with the default being PQsendQuery().
This refactoring piece has been written by me, while Anthonin has
implemented the rest.
Author: Anthonin Bonnefoy, Michael Paquier
Reviewed-by: Aleksander Alekseev, Jelte Fennema-Nio
Discussion: https://postgr.es/m/CAO6_XqpSq0Q0kQcVLCbtagY94V2GxNP3zCnR6WnOM8WqXPK4nw@mail.gmail.com
|
|
After a query cancel, the tail end of ExecQueryAndProcessResults
took care to clear any not-yet-read PGresults, but it forgot about
the one it had already read. There would only be such a result
when handling a multi-command string made with "\;", so that you'd
have to cancel an earlier command in such a string to reach the
bug at all. Even then, there would only be leakage of a single
PGresult per cancel, so it's not surprising nobody noticed this.
But a leak is a leak.
Noted while re-reviewing 90f517821, but this is independent of that:
it dates to 7844c9918. Back-patch to v15 where that came in.
|
|
Alexander Lakhin noted an obsolete comment, which led me to revisit
some other important comments in the patch, and that study turned up a
couple of unintended ways in which the chunked-fetch code path didn't
match the normal code path in ExecQueryAndProcessResults. The only
nontrivial problem is that it didn't call PrintQueryStatus, so that
we'd not print the final status result from DML ... RETURNING
commands. To avoid code duplication, move the filter for whether a
result is from RETURNING from PrintQueryResult to PrintQueryStatus.
Discussion: https://postgr.es/m/0023bea5-79c0-476e-96c8-dad599cc3ad8@gmail.com
|
|
Formerly this was done with a cursor, which is problematic since
not all result-set-returning query types can be put into a cursor.
The new implementation is better integrated into other psql
features, too.
Daniel Vérité, reviewed by Laurenz Albe and myself (and whacked
around a bit by me, so any remaining bugs are my fault)
Discussion: https://postgr.es/m/CAKZiRmxsVTkO928CM+-ADvsMyePmU3L9DQCa9NwqjvLPcEe5QA@mail.gmail.com
|
|
This allows a RETURNING clause to be appended to a MERGE query, to
return values based on each row inserted, updated, or deleted. As with
plain INSERT, UPDATE, and DELETE commands, the returned values are
based on the new contents of the target table for INSERT and UPDATE
actions, and on its old contents for DELETE actions. Values from the
source relation may also be returned.
As with INSERT/UPDATE/DELETE, the output of MERGE ... RETURNING may be
used as the source relation for other operations such as WITH queries
and COPY commands.
Additionally, a special function merge_action() is provided, which
returns 'INSERT', 'UPDATE', or 'DELETE', depending on the action
executed for each row. The merge_action() function can be used
anywhere in the RETURNING list, including in arbitrary expressions and
subqueries, but it is an error to use it anywhere outside of a MERGE
query's RETURNING list.
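As a small illustration (the inventory/incoming tables and columns are
made up for the example):
    MERGE INTO inventory AS t
      USING incoming AS s ON t.item_id = s.item_id
      WHEN MATCHED THEN UPDATE SET qty = t.qty + s.qty
      WHEN NOT MATCHED THEN INSERT (item_id, qty) VALUES (s.item_id, s.qty)
      RETURNING merge_action(), t.item_id, t.qty;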
Dean Rasheed, reviewed by Isaac Morland, Vik Fearing, Alvaro Herrera,
Gurjeet Singh, Jian He, Jeff Davis, Merlin Moncure, Peter Eisentraut,
and Wolfgang Walther.
Discussion: http://postgr.es/m/CAEZATCWePEGQR5LBn-vD6SfeLZafzEm2Qy_L_Oky2=qw2w3Pzg@mail.gmail.com
|
|
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
Backpatch-through: 12
|
|
SetResultVariables() was not getting called when "printing" a result
that failed (see around PrintQueryResult), which would cause some
variables not to be set, like ROW_COUNT, SQLSTATE or ERROR. This can be
confusing, as the values from a previous result would be retained.
This state could be reached when failing to process tuples in a few
commands, like \gset when it returns no tuples, or \crosstabview. A
test is added, based on \gset.
This is arguably a bug fix, but no backpatch is done as there is a risk
of breaking scripts that rely on the previous behavior, even if they do
so accidentally.
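For example (a rough sketch; with this change the variables now reflect
the failed \gset rather than the previous command):
    SELECT 1 AS x WHERE false \gset
    \echo ERROR=:ERROR ROW_COUNT=:ROW_COUNT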
Reported-by: amutu
Author: Japin Li
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/18134-87126d90cb4dd049@postgresql.org
|
|
When running a repeated query with \watch in psql, it can be
helpful to be able to stop the watch process when the query
no longer returns the expected number of rows. An example
would be to watch for the presence of a certain event in
pg_stat_activity and stop when the event is no longer
present, or to watch an index creation and stop when the
index has been created.
This adds a min_rows=MIN parameter to \watch which can be
set to a non-negative integer; the watch query will stop
executing when it returns fewer than MIN rows.
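A rough usage sketch (the wait_event value is just an example):
    SELECT pid FROM pg_stat_activity WHERE wait_event = 'WalReceiverMain'
    \watch interval=1 min_rows=1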
Author: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAKAnmmKStATuddYxP71L+p0DHtp9Rvjze3XRoy0Dyw67VQ45UA@mail.gmail.com
|
|
psql's --echo-hidden, --log-file, and --single-step options
generate extra lines to clearly separate queries from other output.
Presently, these extra lines are not valid SQL comments, which
makes them a hazard for anyone trying to copy/paste the decorated
queries into a client or query editor. This commit replaces the
starting and ending asterisks in these extra lines with forward
slashes so that they are valid SQL comments that can be copy/pasted
without incident.
Author: Kirk Wolak
Reviewed-by: Pavel Stehule, Laurenz Albe, Tom Lane, Alvaro Herrera, Andrey Borodin
Discussion: https://postgr.es/m/CACLU5mTFJRJYtbvmZ26txGgmXWQo0hkGhH2o3hEquUPmSbGtBw%40mail.gmail.com
|
|
Run pgindent, pgperltidy, and reformat-dat-files.
This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going
forward, that should make for fewer random-seeming changes to existing
code.
Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
|
|
Make the \g, \o, \w, and \copy commands set these variables
when closing a pipe. We missed doing this in commit b0d8f2d98,
but it seems like a good idea.
There are some remaining places in psql that intentionally don't
update these variables after running a child program:
* pager invocations
* backtick evaluation within a prompt
* \e (edit query buffer)
Corey Huinker and Tom Lane
Discussion: https://postgr.es/m/CADkLM=eSKwRGF-rnRqhtBORRtL49QsjcVUCa-kLxKTqxypsakw@mail.gmail.com
|
|
None of these are necessary from a correctness POV. However, in the near
future instr_time will be simplified to an int64, at which point gcc
would otherwise start to warn about the changed places.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20230116023639.rn36vf6ajqmfciua@awork3.anarazel.de
|
|
Author: Justin Pryzby
Discussion: https://postgr.es/m/20230110045722.GD9837@telsasoft.com
|
|
Backpatch-through: 11
|
|
This adds a new psql command \bind that sets query parameters and
causes the next query to be sent using the extended query protocol.
Example:
SELECT $1, $2 \bind 'foo' 'bar' \g
This may be useful for psql scripting, but one of the main purposes is
to make it possible to test various aspects of the extended query
protocol from psql and to write tests more easily.
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/e8dd1cd5-0e04-3598-0518-a605159fe314@enterprisedb.com
|
|
Because of a small thinko in 7844c9918a43b494adde3575891d217a37062378,
psql -c would exit successfully when a query is canceled. Fix this so
that it exits with a nonzero status, just like for all other errors.
|
|
The pre-v15 behavior was to discard all but the last result,
but with the new behavior of printing all results by default,
we will send each such result to the \g file. However,
we're still opening and closing the \g file for each result,
so you lose all but the last result anyway. Move the output-file
state up to ExecQueryAndProcessResults so that we open/close the
\g file only once per command string.
To support this without changing other behavior, we must
adjust PrintQueryResult to have separate FILE * arguments
for query and status output (since status output has never
gone to the \g file). That in turn makes it a good idea
to push the responsibility for fflush'ing output down to
PrintQueryTuples and PrintQueryStatus.
Also fix an infinite loop if COPY IN/OUT is attempted in \watch.
We used to reject that, but that error exit path got broken
somewhere along the line in v15. There seems no real reason
to reject it anyway as the code now stands, so just remove
the error exit and make sure that COPY OUT data goes to the
right place.
Also remove PrintQueryResult's unused is_watch parameter,
and make some other cosmetic cleanups (adjust obsolete
comments, break some overly-long lines).
Daniel Vérité and Tom Lane
Discussion: https://postgr.es/m/4333844c-2244-4d6e-a49a-1d483fbe304f@manitou-mail.org
|
|
This is a continuation of 78fdb1e, where this flag is set in the psql
callback handler used for SIGINT. This was previously a boolean but the
C standard recommends the use of sig_atomic_t. Note that this
influences PromptInterruptContext in string.h, where the same flag is
tracked.
Author: Hayato Kuroda
Discussion: https://postgr.es/m/TYAPR01MB58669A9EC96AA3078C2CD938F5549@TYAPR01MB5866.jpnprd01.prod.outlook.com
|
|
More than twenty years ago (79fcde48b), we hacked the postmaster
to avoid a core-dump on systems that didn't support fflush(NULL).
We've mostly, though not completely, hewed to that rule ever since.
But such systems are surely gone in the wild, so in the spirit of
cleaning out no-longer-needed portability hacks let's get rid of
multiple per-file fflush() calls in favor of using fflush(NULL).
Also, we were fairly inconsistent about whether to fflush() before
popen() and system() calls. While we've received no bug reports
about that, it seems likely that at least some of these call sites
are at risk of odd behavior, such as error messages appearing in
an unexpected order. Rather than expend a lot of brain cells
figuring out which places are at hazard, let's just establish a
uniform coding rule that we should fflush(NULL) before these calls.
A no-op fflush() is surely of trivial cost compared to launching
a sub-process via a shell; while if it's not a no-op then we likely
need it.
Discussion: https://postgr.es/m/2923412.1661722825@sss.pgh.pa.us
|
|
These functions already had the free()-like behavior of handling null
pointers as a no-op. But it wasn't documented, so add it explicitly
to the documentation, too.
Discussion: https://www.postgresql.org/message-id/flat/dac5d2d0-98f5-94d9-8e69-46da2413593d%40enterprisedb.com
|
|
The new show-all-results feature in psql (7844c9918) went out of its
way to show notices next to the results of the statements (in a
multi-statement string) that caused them. This also had the
consequence that notices for a single statement were not shown until
after the statement had executed, instead of right away. After some
discussion, it seems very difficult to satisfy both of these goals, so
here we are giving up on the first goal and just show the notices as
we get them. This restores the pre-7844c9918 behavior for notices.
Reported-by: Alastair McKinley <a.mckinley@analyticsengines.com>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/PAXPR02MB760039506C87A2083AD85575E3DA9%40PAXPR02MB7600.eurprd02.prod.outlook.com
|
|
The changes to show all query results (7844c9918) broke \timing output
in case of an error; it didn't update the timing result and showed
0.000 ms.
Fix by updating the timing result also in the error case. Also, for
robustness, update the timing result any time a result is obtained,
not only for the last, so a sensible value is always available.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Richard Guo <guofenglinux@gmail.com>
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/3813350.1652111765%40sss.pgh.pa.us
|
|
Run pgindent, pgperltidy, and reformat-dat-files.
I manually fixed a couple of comments that pgindent uglified.
|
|
Get rid of the separate "FATAL" log level, as it was applied
so inconsistently as to be meaningless. This mostly involves
s/pg_log_fatal/pg_log_error/g.
Create a macro pg_fatal() to handle the common use-case of
pg_log_error() immediately followed by exit(1). Various
modules had already invented either this or equivalent macros;
standardize on pg_fatal() and apply it where possible.
Invent the ability to add "detail" and "hint" messages to a
frontend message, much as we have long had in the backend.
Except where rewording was needed to convert existing coding
to detail/hint style, I have (mostly) resisted the temptation
to change existing message wording.
Patch by me. Design and patch reviewed at various stages by
Robert Haas, Kyotaro Horiguchi, Peter Eisentraut and
Daniel Gustafsson.
Discussion: https://postgr.es/m/1363732.1636496441@sss.pgh.pa.us
|
|
Previously, psql printed only the last result if a command string
returned multiple result sets. Now it prints all of them. The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.
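For example, with the default setting, both results of this combined
command string are now printed, while "\set SHOW_ALL_RESULTS off"
restores the previous behavior:
    SELECT 1 AS a \; SELECT 2 AS b;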
This is a significantly enhanced version of
3a5130672296ed4e682403a77a9a3ad3d21cef75 (that was later reverted).
There is also much more test coverage for various psql features now.
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com> (earlier version)
Reviewed-by: Daniel Verite <daniel@manitou-mail.org> (earlier version)
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com> (earlier version)
Reviewed-by: vignesh C <vignesh21@gmail.com> (earlier version)
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
|
|
Separate HandleCopyResult() from ProcessResult() in preparation for a
subsequent patch.
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
|
|
This breaks out the fetch-it-all-and-print case in SendQuery() into a
separate function. This makes the code more similar to the other cases,
\gdesc and running a query with FETCH_COUNT, and makes SendQuery()
itself a bit smaller.
Extracted from a larger patch with more changes in this area to
follow.
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
|
|
This makes the naming more consistent with the libpq API and the rest
of the code, and makes actually supporting multiple result sets in the
future less confusing.
Discussion: https://www.postgresql.org/message-id/flat/db72fb98-9b43-d776-7247-6ed38f28e7c6%40enterprisedb.com
|
|
The point of this patch is to reduce inclusion spam by not needing
to #include <netdb.h> or <pwd.h> in port.h (which is read by every
compile in our tree). To do that, we must remove port.h's
declarations of pqGetpwuid and pqGethostbyname.
pqGethostbyname is only used, and is only ever likely to be used,
in src/port/getaddrinfo.c --- which isn't even built on most
platforms, making pqGethostbyname dead code for most people.
Hence, deal with that by just moving it into getaddrinfo.c.
To clean up pqGetpwuid, invent a couple of simple wrapper
functions with less-messy APIs. This allows removing some
duplicate error-handling code, too.
In passing, remove thread.c from the MSVC build, since it
contains nothing we use on Windows.
Noted while working on 376ce3e40.
Discussion: https://postgr.es/m/1634252654444.90107@mit.edu
|
|
Backpatch-through: 10
|
|
Per discussion, we'll limit support for old servers to those branches
that can still be built easily on modern platforms, which as of now
is 9.2 and up.
Aside from removing code that is dead per the assumption of
server >= 9.2, I tweaked the startup warning for unsupported versions
to complain about too-old servers as well as too-new ones. The
warning that "Some psql features might not work" applies precisely
to both cases.
Discussion: https://postgr.es/m/2923349.1634942313@sss.pgh.pa.us
|
|
A repeated complaint was that scan-build thought that if the \timing
setting changes during processing of a query, the post-processing
might read garbage time values. This is probably not possible right
now, but it's not entirely inconceivable given the code structure. So
silence this warning with a small restructuring that makes this more
robust. The other warnings were a few dead stores that are easy to
remove.
Discussion: https://www.postgresql.org/message-id/2570e2ae-fa0f-aac9-f72f-bb59a9983a20@enterprisedb.com
|
|
Allow a pager to be used by the \watch command. This works but isn't
very useful with traditional pagers like "less", so use a different
environment variable. The popular open source tool "pspg" (also by
Pavel) knows how to display the output if you set PSQL_WATCH_PAGER="pspg
--stream".
To make \watch react quickly when the user quits the pager or presses
^C, and also to increase the accuracy of its timing and decrease the
rate of useless context switches, change the main loop of the \watch
command to use sigwait() rather than a sleeping/polling loop, on Unix.
Supported on Unix only for now (like pspg).
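A possible way to try it out, assuming pspg is installed (the variable
can also be exported from the shell before starting psql):
    \setenv PSQL_WATCH_PAGER 'pspg --stream'
    SELECT * FROM pg_stat_activity \watch 2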
Author: Pavel Stehule <pavel.stehule@gmail.com>
Author: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CAFj8pRBfzUUPz-3gN5oAzto9SDuRSq-TQPfXU_P6h0L7hO%2BEhg%40mail.gmail.com
|
|
An incorrectly-encoded multibyte character near the end of a string
could cause various processing loops to run past the string's
terminating NUL, with results ranging from no detectable issue to
a program crash, depending on what happens to be in the following
memory.
This isn't an issue in the server, because we take care to verify
the encoding of strings before doing any interesting processing
on them. However, that lack of care leaked into client-side code
which shouldn't assume that anyone has validated the encoding of
its input.
Although this is certainly a bug worth fixing, the PG security team
elected not to regard it as a security issue, primarily because
any untrusted text should be sanitized by PQescapeLiteral or
the like before being incorporated into a SQL or psql command.
(If an app fails to do so, the same technique can be used to
cause SQL injection, with probably much more dire consequences
than a mere client-program crash.) Those functions were already
made proof against this class of problem, cf CVE-2006-2313.
To fix, invent PQmblenBounded() which is like PQmblen() except it
won't return more than the number of bytes remaining in the string.
In HEAD we can make this a new libpq function, as PQmblen() is.
It seems imprudent to change libpq's API in stable branches though,
so in the back branches define PQmblenBounded as a macro in the files
that need it. (Note that just changing PQmblen's behavior would not
be a good idea; notably, it would completely break the escaping
functions' defense against this exact problem. So we just want a
version for those callers that don't have any better way of handling
this issue.)
Per private report from houjingyi. Back-patch to all supported branches.
|
|
This reverts commit 3a5130672296ed4e682403a77a9a3ad3d21cef75.
Per discussion, this patch had too many issues to resolve at this
point of the development cycle. We'll try again in the future.
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
|
|
Previously, psql printed only the last result if a command string
returned multiple result sets. Now it prints all of them. The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
|
|
Protocol version 3 was introduced in PostgreSQL 7.4. There shouldn't be
many clients or servers left out there without version 3 support. But as
a courtesy, I kept just enough of the old protocol support that we can
still send the "unsupported protocol version" error in v2 format, so that
old clients can display the message properly. Likewise, libpq still
understands v2 ErrorResponse messages when establishing a connection.
The impetus to do this now is that I'm working on a patch to COPY
FROM, to always prefetch some data. We cannot do that safely with the
old protocol, because it requires parsing the input one byte at a time
to detect the end-of-copy marker.
Reviewed-by: Tom Lane, Alvaro Herrera, John Naylor
Discussion: https://www.postgresql.org/message-id/9ec25819-0a8a-d51a-17dc-4150bb3cca3b%40iki.fi
|
|
When ON_ERROR_ROLLBACK is enabled, psql releases a temporary savepoint
if it's idle in a valid transaction block after executing a query. But
psql doesn't do that after RELEASE or ROLLBACK is executed, because the
temporary savepoint has already been destroyed in that case.
This commit changes psql's ON_ERROR_ROLLBACK so that it also doesn't
release the temporary savepoint when COMMIT AND CHAIN is executed. The
temporary savepoint doesn't need to be released in that case because
COMMIT AND CHAIN also destroys any savepoints defined within the
transaction being committed. Otherwise psql tries to release the
savepoint that COMMIT AND CHAIN has already destroyed, causing the error
"ERROR: savepoint "pg_psql_temporary_savepoint" does not exist".
Back-patch to v12 where transaction chaining was added.
Reported-by: Arthur Nascimento
Author: Arthur Nascimento
Reviewed-by: Fujii Masao, Vik Fearing
Discussion: https://postgr.es/m/16867-3475744069228158@postgresql.org
|