|
The new tests verify that logical and physical replication slots are still
valid after an immediate restart on checkpoint completion when the slot was
advanced during the checkpoint.
This commit introduces two new injection points to make these tests possible:
* checkpoint-before-old-wal-removal - triggered in the checkpointer process
just before the cleanup of old WAL segments;
* logical-replication-slot-advance-segment - triggered in
LogicalConfirmReceivedLocation() when restart_lsn was changed enough to
point to the next WAL segment.
Discussion: https://postgr.es/m/flat/1d12d2-67235980-35-19a406a0%4063439497
Author: Vitaly Davydov <v.davydov@postgrespro.ru>
Author: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 17
|
|
Running COPY within a pipeline can break protocol synchronization in
multiple ways. psql is limited in terms of result processing if mixing
COPY commands with normal queries while controlling a pipeline with the
new meta-commands, for the following reasons:
- In COPY mode, the backend ignores additional Sync messages and will
not send a matching ReadyForQuery expected by the frontend. Doing a
\syncpipeline just after COPY will leave the frontend waiting for a
ReadyForQuery message that won't be sent, leaving psql out-of-sync.
- libpq automatically sends a Sync with the Copy message which is not
tracked in the command queue, creating an unexpected synchronisation
point that psql cannot really know about. While it is possible to track
such activity for a \copy, this cannot really be done sanely with plain
COPY queries. Backend failures during a COPY would leave the pipeline
in an aborted state while the backend would be in a clean state, ready
to process commands.
In the end, fixing those issues would require modifications to how libpq
handles pipelines and COPY. So, rather than implementing workarounds in
psql to shortcut the libpq internals (with command queue handling for
one), and because meta-commands for pipelines in psql are a new feature
with COPY in a pipeline having a limited impact compared to other
queries, this commit forbids the use of COPY within a pipeline to avoid
possible breakage of protocol synchronisation within psql. If there is a
use-case for COPY support within pipelines in libpq, this could always
be added in the future, if necessary.
Most of the changes in this commit impact the tests for psql pipelines,
removing the tests related to COPY. Some TAP tests still exist for COPY
TO/FROM and \copy to/from, to check that connections are aborted
when this operation is attempted.
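For illustration, one plausible spelling of the now-rejected sequence with
the new meta-commands (the table name is invented and the exact way a query
is pushed into the pipeline may differ):
    \startpipeline
    COPY mytab FROM STDIN \bind \sendpipeline
    \syncpipeline
    \endpipeline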
Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/AC468509-06E8-4E2A-A4B1-63046A4AC6AB@postgrespro.ru
|
|
Oversight in commit 8b2bcf3f28.
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/aECi_gSD9JnVWQ8T%40nathan
Backpatch-through: 17
|
|
Similar to commit cc733ed164c5: when an unenforced foreign key that
references a partitioned table is altered to be enforced, we would scan
the constrained table once for each partition of the referenced
partitioned table. This is bogus and likely to cause the ALTER TABLE to
fail: we must scan the constrained table only against the top-level
partitioned table. Oversight in commit eec0040c4bcd. Fix by
eliding those scans.
Author: Amul Sul <sulamul@gmail.com>
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF1e_gPOLtsDoaE4VCgQPC8KZW_kPAjPR5Rvv4Ew=fb2A@mail.gmail.com
|
|
Validating an unvalidated foreign key that references a partitioned
table would try to queue validations for each individual partition of
the referenced table, but this is wrong: each individual partition would
not necessarily have all the referenced rows, so errors would be raised.
Avoid doing that. The pg_constraint rows that cause this to happen are
only there to support the action triggers that implement the DELETE/
UPDATE actions of the FK, so no validating scan is necessary.
This was an oversight in commit b663b9436e75.
An equivalent oversight exists for NOT ENFORCED constraints, which is
not fixed in this commit.
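For illustration of the fixed case (object names invented), the scenario
boils down to validating an FK that points at a partitioned table; the
validating scan must treat the referenced side as the partitioned table
as a whole, not as each partition in turn:
    CREATE TABLE ref (a int PRIMARY KEY) PARTITION BY RANGE (a);
    CREATE TABLE ref_1 PARTITION OF ref FOR VALUES FROM (1) TO (100);
    CREATE TABLE fkt (a int);
    ALTER TABLE fkt ADD CONSTRAINT fkt_fk
      FOREIGN KEY (a) REFERENCES ref NOT VALID;
    ALTER TABLE fkt VALIDATE CONSTRAINT fkt_fk;  -- no per-partition scans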
Author: Amul Sul <sulamul@gmail.com>
Reported-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/26983.1748418675@localhost
|
|
The choices made in commit 01463e1cc might pose copyright hazards,
and are more cutesy than informative anyway.
Reported-by: Noah Misch <noah@leadboat.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20250415155850.9b.nmisch@google.com
|
|
Commit 7406ab623fe added a gist support function that we internally
refer to by the symbol GIST_STRATNUM_PROC. This translated from
"well-known" strategy numbers to opfamily-specific strategy numbers.
However, we later (commit 630f9a43cec) changed this to fit into
index-AM-level compare type mapping, so this function actually now
maps from compare type to opfamily-specific strategy numbers. So this
name is no longer fitting.
Moreover, the index AM level also supports the opposite, a function to
map from strategy number to compare type. This is currently not
supported in gist, but one might wonder what this function is supposed
to be called when it is added.
This patch changes the naming of the gist-level functionality to be
more in line with the index-AM-level functionality. This makes sense
because these are essentially the same thing on different levels.
This also changes the names of the externally visible functions that
are provided for use as such a support function.
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://www.postgresql.org/message-id/37ebb1d9-9036-485f-a215-e55435689917%40eisentraut.org
|
|
When a MERGE's target table is the parent of an inheritance tree, any
INSERT actions insert into the parent table using ModifyTableState's
rootResultRelInfo. However, there are two bugs in the way it is
initialized:
1. ExecInitMerge() incorrectly uses a different ResultRelInfo entry
from ModifyTableState's resultRelInfo array to build the insert
projection, which may not be compatible with rootResultRelInfo.
2. ExecInitModifyTable() does not fully initialize rootResultRelInfo.
Specifically, ri_WithCheckOptions, ri_WithCheckOptionExprs,
ri_returningList, and ri_projectReturning are not initialized.
This can lead to crashes, or incorrect query results due to failing to
check WCO's or process the RETURNING list for INSERT actions.
Fix both these bugs in ExecInitMerge(), noting that it is only
necessary to fully initialize rootResultRelInfo if the MERGE has
INSERT actions and the target table is a plain inheritance parent.
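A sketch of the affected shape (hypothetical tables; the RETURNING clause
applies to branches that support MERGE ... RETURNING):
    CREATE TABLE parent (a int, b text);
    CREATE TABLE child (c int) INHERITS (parent);
    MERGE INTO parent p
      USING (VALUES (1, 'x')) AS s(a, b) ON p.a = s.a
      WHEN NOT MATCHED THEN INSERT (a, b) VALUES (s.a, s.b)
      RETURNING p.a, p.b;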
Backpatch to v15, where MERGE was introduced.
Reported-by: Andres Freund <andres@anarazel.de>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/4rlmjfniiyffp6b3kv4pfy4jw3pciy6mq72rdgnedsnbsx7qe5@j5hlpiwdguvc
Backpatch-through: 15
|
|
ParseFraction only expects to deal with fields that contain a decimal
point and digit(s). However it's possible in some edge cases for it
to be passed input that doesn't look like that. In particular the
input could look like a valid floating-point number, such as ".123e6".
strtod() will happily eat that, possibly producing a result that is
not within the expected range 0..1, which can result in integer
overflow in the callers. That doesn't have any security consequences,
but it's still not very desirable. Fix by checking that the input
has the expected form.
Similarly, DecodeNumberField only expects to deal with fields that
contain a decimal point and digit(s), but it's sometimes abused to
parse strings that might not look like that. This could result in
failure to reject bogus input, yielding silly results. Again, fix
by rejecting input that doesn't look as expected. That decision
also means that we can affirmatively answer the very old comment
questioning whether we couldn't save some duplicative code by
using ParseFractionalSecond here.
While these changes should only reject input that nobody would
consider valid, it still doesn't seem like a change to make in
stable branches. Apply to HEAD only.
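Purely to illustrate the shape of input involved (this literal is an
assumption, not taken from the report), a fractional-seconds field spelled
like a full float is the kind of token strtod() would happily consume even
though it is not of the expected ".<digits>" form:
    SELECT '12:34:56.123e6'::time;  -- fractional part is not ".<digits>"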
Reported-by: Evgeniy Gorbanev <gorbanev.es@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1328335.1748371099@sss.pgh.pa.us
|
|
As written, the test was throwing an error because of an unbalanced
parenthesis. The regex used in the test is adjusted to not fail and to
test the case of an opening parenthesis in a character class after some
nested square brackets.
Oversight in d46911e584d4.
Discussion: https://postgr.es/m/16ab039d1af455652bdf4173402ddda145f2c73b.camel@cybertec.at
|
|
The code that translates SIMILAR TO pattern matching expressions to
POSIX-style regular expressions did not consider that square brackets
can be nested. For example, in an expression like [[:alpha:]%_], the
logic replaced the placeholders '_' and '%', but it should not have.
This commit fixes the conversion logic by tracking the nesting level of
square brackets marking character class areas, while considering that
in expressions like []] or [^]] the first closing square bracket is a
regular character. Multiple tests are added to show how the conversions
should or should not be applied while in a character class area, with
specific cases added for all the characters converted outside character
classes like an opening parenthesis '(', dollar sign '$', etc.
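Using the class from the example above, the intended behavior after the
fix looks like this (strings chosen for illustration):
    SELECT 'a%bc' SIMILAR TO '[[:alpha:]%_]%';  -- true: first char is a letter
    SELECT '%abc' SIMILAR TO '[[:alpha:]%_]%';  -- true: '%' in the class is literal
    SELECT '+abc' SIMILAR TO '[[:alpha:]%_]%';  -- false: '+' is not in the class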
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/16ab039d1af455652bdf4173402ddda145f2c73b.camel@cybertec.at
Backpatch-through: 13
|
|
The test did not wait for all the subscriptions to have caught up when
dropping the subscription "tab_copy". In a slow environment, it could
be possible for the replay of the COMMIT PREPARED transaction "mygid"
to not be confirmed yet, causing one prepared transaction to be left
around before moving to the next steps of the test.
One failure noticed is a transaction found in pg_prepared_xacts for the
cases where copy_data = false and two_phase = true, but there should be
none after dropping the subscription.
As an extra safety measure, a check is added before dropping the
subscription, scanning pg_prepared_xacts to make sure that no prepared
transactions are left once both subscriptions have caught up.
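The added check amounts to something like this on the subscriber, once
both subscriptions have caught up:
    SELECT count(*) FROM pg_prepared_xacts;  -- expected to be 0 before the DROP SUBSCRIPTION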
Issue introduced by a8fd13cab0ba, fixing a problem similar to
eaf5321c3524.
Per buildfarm member kestrel.
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CALDaNm329QaZ+bwU--bW6GjbNSZ8-38cDE8QWofafub7NV67oA@mail.gmail.com
Backpatch-through: 15
|
|
Check the ctx->nested level as we go, to prevent a server from running
the client out of stack space.
The limit we choose when communicating with authorization servers can't
be overly strict, since those servers will continue to add extensions in
their JSON documents which we need to correctly ignore. For the SASL
communication, we can be more conservative, since there are no defined
extensions (and the peer is probably more Postgres code).
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/CAOYmi%2Bm71aRUEi0oQE9ciBnBS8xVtMn3CifaPu2kmJzUfhOZgA%40mail.gmail.com
|
|
Due to concerns raised about the approach, and memory leaks found
in sensitive contexts, the functionality is reverted. This reverts
commits 45e7e8ca9, f8c115a6c, d2a1ed172, 55ef7abf8 and 042a66291
for v18 with an intent to revisit this patch for v19.
Discussion: https://postgr.es/m/594293.1747708165@sss.pgh.pa.us
|
|
9219093cab2607f modularized log_connections output to allow more
granular control over which aspects of connection establishment are
logged. It converted the boolean log_connections GUC into a list of strings
and deprecated previously supported boolean-like values on, off, true,
false, 1, 0, yes, and no. Those values still work, but they are
supported mainly for backwards compatibility. As such, documented
examples of log_connections should not use these deprecated values.
Update references in the docs to deprecated log_connections values. Many
of the tests use log_connections. This commit also updates the tests to
use the new values of log_connections. In some of the tests, the updated
log_connections value covers a narrower set of aspects (e.g. the
'authentication' aspect in the tests in src/test/authentication and the
'receipt' aspect in src/test/postmaster). In other cases, the new value
for log_connections is a superset of the previously included aspects (e.g.
'all' in src/test/kerberos/t/001_auth.pl).
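For example, in postgresql.conf or via ALTER SYSTEM, the deprecated boolean
spelling and the new list values look like this (aspect names taken from
the description above):
    ALTER SYSTEM SET log_connections = 'on';                      -- deprecated, still accepted
    ALTER SYSTEM SET log_connections = 'receipt,authentication';  -- preferred: named aspects
    ALTER SYSTEM SET log_connections = 'all';                     -- log every aspect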
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/e1586594-3b69-4aea-87ce-73a7488cdc97%40eisentraut.org
|
|
The code carelessly modified mtstate->ps.plan->targetlist,
which it's not supposed to do. Fortunately, there's not
really any need to do that because the planner already
set up a perfectly acceptable targetlist for the plan node.
We just need to remove the erroneous assignments and update some
relevant comments.
As it happens, the erroneous assignments caused the targetlist to
point to a different part of the source plan tree, so that there
isn't really a risk of the pointer becoming dangling after executor
termination. The only visible effect of this change we can find is
that EXPLAIN will show upper references to the ModifyTable's output
expressions using different variables. Formerly it showed Vars from
the first target relation that survived executor-startup pruning.
Now it always shows such references using the first relation appearing
in the planner output, independently of what happens during executor
pruning. On the whole that seems like a good thing.
Also make a small tweak in ExplainPreScanNode to ensure that the first
relation will receive a refname assignment in set_rtable_names, even
if it got pruned at startup. Previously the Vars might be shown
without any table qualification, which is confusing in a multi-table
query.
I considered back-patching this, but since the bug doesn't seem to
have any really terrible consequences in existing branches, it
seems better to not change their EXPLAIN output. It's not too late
for v18 though, especially since v18 already made other changes in
the EXPLAIN output for these cases.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/213261.1747611093@sss.pgh.pa.us
|
|
Commit 29f7ce6fe7 added another view that needs adjustment in the
cross-version upgrade test. This should fix the XversionUpgrade
failures in the buildfarm.
Backpatch-through: 16
Discussion: https://www.postgresql.org/message-id/18929-077d6b7093b176e2@postgresql.org
|
|
In the grammar, <expr> is a c_expr, which accepts only a limited set
of integer literals and simple expressions without parens. The
deparsing logic didn't quite match the grammar rule, and failed to use
parens e.g. for "5::bigint".
To fix, always surround the expression with parens. Would be nice to
omit the parens in simple cases, but unfortunately it's non-trivial to
detect such simple cases. Even if the expression is a simple literal
123 in the original query, after parse analysis it becomes a FuncExpr
with COERCE_IMPLICIT_CAST rather than a simple Const.
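A sketch of the case in question, assuming the affected clause is the
FETCH FIRST ... WITH TIES row count (table name invented):
    CREATE TABLE t (a int);
    CREATE VIEW v AS
      SELECT a FROM t
      ORDER BY a
      FETCH FIRST (5::bigint) ROWS WITH TIES;
    -- the deparsed definition must keep the parentheses around the row
    -- count, otherwise the regenerated text would not re-parse as a c_expr
    SELECT pg_get_viewdef('v'::regclass);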
Reported-by: yonghao lee
Backpatch-through: 13
Discussion: https://www.postgresql.org/message-id/18929-077d6b7093b176e2@postgresql.org
|
|
6b94e7a6da adjusted generate_orderedappend_paths() to consider fractional
paths. However, it didn't manage to interpret the tuple_fraction value
correctly. According to the header comment of grouping_planner(), the
tuple_fraction >= 1 specifies the absolute number of expected tuples. That
number must be divided by the expected total number of tuples to get the
actual fraction.
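For example (illustrative numbers only): a query with LIMIT 50 over a path
expected to produce 1000 rows arrives here with tuple_fraction = 50, and the
fraction actually wanted for costing is 50 / 1000 = 0.05.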
Even though this is a bug fix, we don't backpatch it. The risks of the side
effects of plan changes on stable branches are too high.
Reported-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/3ca271fa-ca5c-458c-8934-eb148622b270%40gmail.com
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
|
|
In an XMLTABLE expression, columns can be marked NOT NULL, and the
parser internally fabricates an option named "is_not_null" to
represent this. However, the parser also allows users to specify
arbitrary option names. This creates a conflict: a user can
explicitly use "is_not_null" as an option name and assign it a
non-Boolean value, which violates internal assumptions and triggers an
assertion failure.
To fix, this patch checks whether a user-supplied name collides with
the internally reserved option name and raises an error if so.
Additionally, the internal name is renamed to "__pg__is_not_null" to
further reduce the risk of collision with user-defined names.
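A hypothetical way to hit the collision (the option value is chosen
arbitrarily); with this fix it fails with a regular error rather than an
assertion:
    SELECT *
      FROM XMLTABLE('/r/c' PASSING '<r><c>1</c></r>'::xml
                    COLUMNS a int PATH 'text()' is_not_null 1);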
Reported-by: Евгений Горбанев <gorbanyoves@basealt.ru>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/6bac9886-65bf-4cec-96bd-e304159f28db@basealt.ru
Backpatch-through: 15
|
|
The documentation for log_check() had the parameters in the wrong
order. Also while there, rename %parameters to %params to better match
the documentation of similar functions which use %params. Backpatch
down to v14 where this was introduced.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/9F503B5-32F2-45D7-A0AE-952879AD65F1@yesql.se
Backpatch-through: 14
|
|
After executing ALTER SUBSCRIPTION tap_sub SET PUBLICATION, we did not
wait for the new walsender process to restart. As a result, an INSERT
executed immediately after the ALTER could be decoded and skipped,
since it is not part of any subscribed publication. Moreover, the old
apply worker could also confirm the LSN of such an INSERT. This could
cause the replication to resume from a point after the INSERT. In such
cases, we miss the expected warning about the missing publication.
To fix this, ensure the walsender has restarted before continuing after
ALTER SUBSCRIPTION.
Reported-by: Tom Lane as per CI
Author: vignesh C <vignesh21@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/1230066.1745992333@sss.pgh.pa.us
|
|
This cleans up the code related to the testing infrastructure of AIO
that used injection points, switching the test code to use the new
facility for injection points added by 371f2db8b05e rather than relying on
tweaks to pass and reset arguments to the callbacks being run.
This removes all the dependencies to USE_INJECTION_POINTS in the AIO
code. pgaio_io_call_inj(), pgaio_inj_io_get() and pgaio_inj_cur_handle
are now gone.
Reviewed-by: Greg Burd <greg@burd.me>
Discussion: https://postgr.es/m/Z_y9TtnXubvYAApS@paquier.xyz
|
|
This commit provides some test coverage for the runtime arguments of
injection points, for both INJECTION_POINT_CACHED() and
INJECTION_POINT(), as extended in 371f2db8b05e.
The SQL functions injection_points_cached() and injection_points_run()
are extended so that it is possible to pass an optional string value to
them.
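A rough sketch of exercising the extended interface (the point name and
string value are placeholders; see the module's SQL tests for the real
usage):
    SELECT injection_points_attach('test-point', 'notice');
    SELECT injection_points_load('test-point');
    SELECT injection_points_run('test-point', 'runtime-value');     -- optional value
    SELECT injection_points_cached('test-point', 'runtime-value');  -- likewise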
Reviewed-by: Greg Burd <greg@burd.me>
Discussion: https://postgr.es/m/Z_y9TtnXubvYAApS@paquier.xyz
|
|
The macros INJECTION_POINT() and INJECTION_POINT_CACHED() are extended
with an optional argument that can be passed down to the callback
attached when an injection point is run, giving to callbacks the
possibility to manipulate a stack state given by the caller. The
existing callbacks in modules injection_points and test_aio have their
declarations adjusted based on that.
da7226993fd4 (core AIO infrastructure) and 93bc3d75d8e1 (test_aio) have
been relying on a set of workarounds where a static variable called
pgaio_inj_cur_handle is used as runtime argument in the injection point
callbacks used by the AIO tests, in combination with a TRY/CATCH block
to reset the argument value. The infrastructure introduced in this
commit will be reused for the AIO tests, simplifying them.
Reviewed-by: Greg Burd <greg@burd.me>
Discussion: https://postgr.es/m/Z_y9TtnXubvYAApS@paquier.xyz
|
|
Presently, LibreSSL does not have working support for RSA-PSS,
so disable that test. Per discussion at
https://marc.info/?l=libressl&m=174664225002441&w=2
they do intend to fix this, but it's a ways off yet.
Reported-by: Thomas Munro <thomas.munro@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CA+hUKG+fLqyweHqFSBcErueUVT0vDuSNWui-ySz3+d_APmq7dw@mail.gmail.com
Backpatch-through: 15
|
|
With LibreSSL, our test of error logging for cert chain depths > 0
reports the wrong certificate. This is almost certainly their bug
not ours, so just tweak the test to accept their answer.
No back-patch needed, since this test case wasn't enabled before
e0f373ee4.
Reported-by: Thomas Munro <thomas.munro@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CA+hUKG+fLqyweHqFSBcErueUVT0vDuSNWui-ySz3+d_APmq7dw@mail.gmail.com
|
|
Right now there's only one caller, so that this is merely
an exercise in shoving code from one module to another,
but there will shortly be another one. It seems better to
avoid having two copies of this highly-subject-to-change test.
Back-patch to v15, where we first introduced some tests that
don't work with LibreSSL.
Reported-by: Thomas Munro <thomas.munro@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CA+hUKG+fLqyweHqFSBcErueUVT0vDuSNWui-ySz3+d_APmq7dw@mail.gmail.com
Backpatch-through: 15
|
|
A few places that access this catalog don't set up an active
snapshot before potentially accessing its TOAST table. However,
roname (the replication origin name) is the only varlena column, so
this is only a problem if the name requires out-of-line storage.
This commit removes its TOAST table to avoid needing to set up a
snapshot. It also places a limit on replication origin names so
that attempts to set long names will fail with a more user-friendly
error. The chosen limit of 512 bytes should be sufficient to
avoid "row is too big" errors independent of BLCKSZ, but it should
also be lenient enough for all reasonable use-cases.
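For example (length chosen only to exceed the new limit):
    SELECT pg_replication_origin_create(repeat('x', 600));  -- rejected: name too long
    SELECT pg_replication_origin_create('my_origin');       -- fine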
Bumps catversion.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/ZvMSUPOqUU-VNADN%40nathan
|
|
With GB18030 as source encoding, applications could crash the server via
SQL functions convert() or convert_from(). Applications themselves
could crash after passing unterminated GB18030 input to libpq functions
PQescapeLiteral(), PQescapeIdentifier(), PQescapeStringConn(), or
PQescapeString(). Extension code could crash by passing unterminated
GB18030 input to jsonapi.h functions. All those functions have been
intended to handle untrusted, unterminated input safely.
A crash required allocating the input such that the last byte of the
allocation was the last byte of a virtual memory page. Some malloc()
implementations take measures against that, making the SIGSEGV hard to
reach. Back-patch to v13 (all supported versions).
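The server-side entry points involved look like the following (the bytes
shown are plain ASCII, purely to illustrate the call shape, not a crashing
input):
    SELECT convert_from('\x41'::bytea, 'GB18030');
    SELECT convert('\x41'::bytea, 'GB18030', 'UTF8');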
Author: Noah Misch <noah@leadboat.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Backpatch-through: 13
Security: CVE-2025-4207
|
|
Start the file with static functions not specific to pe_test_vectors
tests. This way, new tests can use them without disrupting the file's
layout. Change report_result() PQExpBuffer arguments to plain strings.
Back-patch to v13 (all supported versions), for the next commit.
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Backpatch-through: 13
Security: CVE-2025-4207
|
|
For self-referencing foreign keys in partitioned tables, we weren't
handling creation of pg_constraint rows during CREATE TABLE PARTITION AS
as well as ALTER TABLE ATTACH PARTITION. This is an old bug -- mostly,
we broke this in 614a406b4ff1 while trying to fix it (so 12.13, 13.9,
14.6 and 15.0 and up all behave incorrectly). This commit reverts part
of that with additional fixes for full correctness, and installs more
tests to verify the parts we broke, not just the catalog contents but
also the user-visible behavior.
Backpatch to all live branches. In branches 13 and 14, commit
46a8c27a7226 changed the behavior during DETACH to drop a FK
constraint rather than trying to repair it, because the complete fix of
repairing catalog constraints was problematic due to lack of previous
fixes. For this reason, the test behavior in those branches is a bit
different. However, as best as I can tell, the fix works correctly
there.
In release notes we have to recommend that all self-referencing foreign
keys on partitioned tables be recreated if partitions have been created
or attached after the FK was created, keeping in mind that violating
rows might already be present on the referencing side.
Reported-by: Guillaume Lelarge <guillaume@lelarge.info>
Reported-by: Matthew Gabeler-Lee <fastcat@gmail.com>
Reported-by: Luca Vallisa <luca.vallisa@gmail.com>
Discussion: https://postgr.es/m/CAECtzeWHCA+6tTcm2Oh2+g7fURUJpLZb-=pRXgeWJ-Pi+VU=_w@mail.gmail.com
Discussion: https://postgr.es/m/18156-a44bc7096f0683e6@postgresql.org
Discussion: https://postgr.es/m/CAAT=myvsiF-Attja5DcWoUWh21R12R-sfXECY2-3ynt8kaOqjw@mail.gmail.com
|
|
The extension_control_path setting (commit 4f7f7b03758) did not
support extensions that set a custom "directory" setting in their
control file. Very few extensions use that and during the discussion
on the previous commit it was suggested to maybe remove that
functionality. But a fix was easier than initially thought, so this
just adds that support. The fix is to use the control->control_dir as
a share dir to return the path of the extension script files.
To make this work more sensibly overall, the directory suffix
"extension" is no longer to be included in the extension_control_path
value. To quote the patch, it would be
-extension_control_path = '/usr/local/share/postgresql/extension:/home/my_project/share/extension:$system'
+extension_control_path = '/usr/local/share/postgresql:/home/my_project/share:$system'
During the initial patch, there was some discussion on which of these
two approaches would be better, and the committed patch was a 50/50
decision. But the support for the "directory" setting pushed it the
other way, and also it seems like many people didn't like the previous
behavior much.
Author: Matheus Alcantara <mths.dev@pm.me>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: David E. Wheeler <david@justatheory.com>
Discussion: https://www.postgresql.org/message-id/flat/aAi1VACxhjMhjFnb%40msg.df7cb.de#0cdf7b7d727cc593b029650daa3c4fbc
|
|
The additional packaging footprint of the OAuth Curl dependency, as well
as the existence of libcurl in the address space even if OAuth isn't
ever used by a client, has raised some concerns. Split off this
dependency into a separate loadable module called libpq-oauth.
When configured using --with-libcurl, libpq.so searches for this new
module via dlopen(). End users may choose not to install the libpq-oauth
module, in which case the default flow is disabled.
For static applications using libpq.a, the libpq-oauth staticlib is a
mandatory link-time dependency for --with-libcurl builds. libpq.pc has
been updated accordingly.
The default flow relies on some libpq internals. Some of these can be
safely duplicated (such as the SIGPIPE handlers), but others need to be
shared between libpq and libpq-oauth for thread-safety. To avoid
exporting these internals to all libpq clients forever, these
dependencies are instead injected from the libpq side via an
initialization function. This also lets libpq communicate the offsets of
PGconn struct members to libpq-oauth, so that we can function without
crashing if the module on the search path came from a different build of
Postgres. (A minor-version upgrade could swap the libpq-oauth module out
from under a long-running libpq client before it does its first load of
the OAuth flow.)
This ABI is considered "private". The module has no SONAME or version
symlinks, and it's named libpq-oauth-<major>.so to avoid mixing and
matching across Postgres versions. (Future improvements may promote this
"OAuth flow plugin" to a first-class concept, at which point we would
need a public API to replace this anyway.)
Additionally, NLS support for error messages in b3f0be788a was
incomplete, because the new error macros weren't being scanned by
xgettext. Fix that now.
Per request from Tom Lane and Bruce Momjian. Based on an initial patch
by Daniel Gustafsson, who also contributed docs changes. The "bare"
dlopen() concept came from Thomas Munro. Many people reviewed the design
and implementation; thank you!
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Wolfgang Walther <walther@technowledgy.de>
Discussion: https://postgr.es/m/641687.1742360249%40sss.pgh.pa.us
|
|
This commit fixes two bugs in the ChangeVarNodes_walker() function.
* When considering RestrictInfo, walk down to its clauses based on the
presence of the relid to be deleted not just in clause_relids but also in
required_relids.
* Incrementally adjust num_base_rels based on the change of clause_relids
instead of recalculating it using clause_relids, which could contain
outer-join relids.
Reported-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAMbWs49PE3CvnV8vrQ0Dr%3DHqgZZmX0tdNbzVNJxqc8yg-8kDQQ%40mail.gmail.com
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
|
|
Before commit a0ed19e0a9e there was a cast around these; the cast
inadvertently changed the signedness, but that happened to make the
format placeholder correct. Commit a0ed19e0a9e removed the casts, so now
the format placeholders had the wrong signedness.
|
|
When a child constraint is validated and the parent constraint it
derives from isn't, pg_dump must be coerced into printing the child
constraint; failing to do so would result in a dump that restores the
constraint as not valid, which would be incorrect.
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: jian he <jian.universality@gmail.com>
Message-id: https://postgr.es/m/CACJufxGHNNMc0E2JphUqJMzD3=bwRSuAEVBF5ekgkG8uY0Q3hg@mail.gmail.com
|
|
fc069a3a6319 implements Self-Join Elimination (SJE), which can remove base
relations when appropriate. However, regression tests for SJE only cover
the case when placeholder variables (PHVs) are evaluated and needed only
in a single base rel. If this baserel is removed due to SJE, its clauses,
including PHVs, will be transferred to the keeping relation. Removing these
PHVs may trigger an error on plan creation -- thanks to b3ff6c742f6c for
detecting that.
This commit skips removal of PHVs during SJE. It might also happen that
we skip the removal of some PHVs that could have been removed. However, the overhead
of extra PHVs is small compared to the complexity of analysis needed to remove
them.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Alena Rybakina <a.rybakina@postgrespro.ru>
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
|
|
RHEL8 ships a patched 3.6.8 as its base Python version, and I
accidentally let some newer Python-isms creep into oauth_server.py
during development.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Tested-by: Renan Alves Fonseca <renanfonseca@gmail.com>
Tested-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/16098.1745079444%40sss.pgh.pa.us
|
|
To estimate with extended statistics, we need to clear the varnullingrels
field in the expression, and duplicates are not allowed in the GroupVarInfo
list. We might re-use add_unique_group_var(), but we don't do so for two
reasons.
1) We must keep the origin_rinfos list ordered exactly the same way as
varinfos.
2) add_unique_group_var() is designed for estimate_num_groups(), where a
larger number of groups is worse. While estimating the number of hash
buckets, we have the opposite: a lesser number of groups is worse.
Therefore, we don't have to remove "known equal" vars: the removed var
may valuably contribute to the multivariate statistics to grow the number
of groups.
This commit adds custom code to estimate_multivariate_bucketsize() to
initialize varinfos properly.
Reported-by: Robins Tharakan <tharakan@gmail.com>
Discussion: https://postgr.es/m/18885-da51324078588253%40postgresql.org
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
|
|
When a foreign key constraint is placed on a partitioned table, we
actually make two pg_constraint entries associated with that table.
(I have my doubts about the wisdom of that, but it's been like that
since v12 and post-feature-freeze is no time to be messing with such
entrenched decisions.) The second "child" entry always had a name
generated according to the default rule, "table_column(s)_fkey[nnn]",
even if the primary entry had an unrelated user-specified name. The
trouble with doing that is that the default name could collide with
the user-specified name of some other constraint on the same table.
While we were willing to adjust the generated name to avoid
collisions, that only helps if it's made second; if it's made first
then creation of the other constraint would fail, potentially causing
dump/reload or pg_upgrade failures.
The core of the problem here is that we're infringing on user
namespace, so I doubt that there's any 100% solution other than to
find a way to not need the "child" entry. In the meantime, it seems
like it'd be an improvement to make the child's name be the name of
the parent constraint with an underscore and digit(s) appended as
necessary to make it unique. This rule can in theory fail in the same
way, but it seems much less probable; for one thing, this rule is
guaranteed not to match primary entries having auto-generated names.
(While an auto-generated primary name isn't user-specified to begin
with, it acts like that during dump/reload, so collisions against such
names are definitely possible.)
An additional bonus, visible in some of the regression test cases
that change here, arises from the fact that some error messages
cite the child constraint's name not the parent's. In the
previous approach the two names could be completely unrelated,
leading to user confusion --- the more so since psql's \d command
hides child constraints. With this approach it's hopefully much
clearer which constraint-the-user-knows-about is failing.
However, that does mean that there's user-visible behavior change
occurring here, making it seem like not something to back-patch.
I feel it's not too late for v18, though.
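Sketch of the naming outcome (object names invented; the exact generated
name may differ):
    CREATE TABLE ref (id int PRIMARY KEY);
    CREATE TABLE pt (id int, r int) PARTITION BY RANGE (id);
    ALTER TABLE pt ADD CONSTRAINT my_fk FOREIGN KEY (r) REFERENCES ref;
    -- the secondary pg_constraint entry now gets a name derived from my_fk
    -- (e.g. my_fk_1) instead of the generic pt_r_fkey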
Reported-by: Kirill Reshke <reshkekirill@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Alvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CALdSSPhGitjpTfzEMJN-Y2x+Q-5QChSxAsmSJ1-E8mQJLkHOqQ@mail.gmail.com
|
|
The stack-allocated JsonLexContexts, in combination with codepaths
using goto, were causing warnings when compiling with LTO enabled,
as the optimizer is unable to figure out that it is safe. Rather than
contorting the code with workarounds for this, simply heap-allocate the
structs instead, as these are not in any performance-critical paths.
Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2074634.1744839761@sss.pgh.pa.us
|
|
Commit 3f28b2fcac tried to ensure that the replication origin shouldn't be
advanced in case of an ERROR in the apply worker, so that it can request
the same data again after restart. However, it is possible that an ERROR
was caught and handled by a (say PL/pgSQL) function, and the apply worker
continues to apply further changes, in which case, we shouldn't reset the
replication origin.
Ensure that the origin is reset only when the apply worker exits after an
ERROR.
Commit 3f28b2fcac added a new function, geterrlevel, which we removed in
HEAD as part of this commit but kept in the backbranches to avoid breaking any
applications. A separate case can be made to have such a function even for
HEAD.
Reported-by: Shawn McCoy <shawn.the.mccoy@gmail.com>
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16, where it was introduced
Discussion: https://postgr.es/m/CALsgZNCGARa2mcYNVTSj9uoPcJo-tPuWUGECReKpNgTpo31_Pw@mail.gmail.com
|
|
Cluster.pm's connect_fails routine has long had the ability to
sniff the postmaster log file for expected messages after a
connection failure. However, that's always had a race condition:
on some platforms it's possible for psql to exit and the test
script to slurp up the postmaster log before the backend process
has been able to write out its final log messages. Back in
commit 55828a6b6 we disabled a bunch of tests after discovering
that, and the aim of this patch is to re-enable them.
(The sibling function connect_ok doesn't seem to have a similar
problem, mainly because the messages we look for come out during
the authentication handshake, so that if psql reports successful
connection they should certainly have been emitted already.)
The solution used here is borrowed from 002_connection_limits.pl's
connect_fails_wait routine: set the server's log_min_messages setting
to DEBUG2 so that the postmaster will log child-process exit, and then
wait till we see that log entry before checking for the messages we
are actually interested in.
If a TAP test uses connect_fails' log_like or log_unlike options, and
forgets to set log_min_messages, those connect_fails calls will now
hang until timeout. Fixing up the existing callers shows that we had
several other TAP tests that were in theory vulnerable to the same
problem. It's unclear whether the lack of failures is just luck, or
lack of buildfarm coverage, or perhaps there is some obscure timing
effect that only manifests in SSL connections. In any case, this
change should in principle make those other call sites more robust.
I'm not inclined to back-patch though, unless sometime we observe
an actual failure in one of them.
Reported-by: Andrew Dunstan <andrew@dunslane.net>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/984fca80-85a8-4c6f-a5cc-bb860950b435@dunslane.net
|
|
While heapam reproduces the insertion order of rows well, updates
can move rows to varying places depending on autovacuum activity.
In most regression tests we've guarded against getting variable
results due to that, but float4.sql and float8.sql had escaped
notice so far because they update tables that are too small for
autovacuum to pay attention to.
With increasing interest in non-heap table AMs, it seems worth
allowing for update behaviors that are not like heapam's. Hence,
add ORDER BY to stabilize the results in case the updates put
the rows in a different order. (We'll continue to assume that a
seqscan will reproduce original insertion order, though. Removing
that assumption would require vastly-more-invasive test changes.)
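The stabilization is of this general form (FLOAT8_TBL is the
regression-test table; the exact statements in the test files differ):
    UPDATE FLOAT8_TBL SET f1 = f1 * '-1';
    SELECT * FROM FLOAT8_TBL ORDER BY f1;  -- ORDER BY keeps the output stable across table AMs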
Author: Pavel Borisov <pashkin.elfe@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALT9ZEExHAnBoBVQzQuWPMKUbapF5-FBO3fdeYG3s2tuWQz1NQ@mail.gmail.com
|
|
This injection point was named "AtEOXact_Inval-with-transInvalInfo", not
respecting the implied naming convention that injection points should
use lower-case characters, with terms separated by dashes. All the
other points defined in the tree follow this style, so let's be more
consistent.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/OSCPR01MB14966E14C1378DEE51FB7B7C5F5B32@OSCPR01MB14966.jpnprd01.prod.outlook.com
Backpatch-through: 17
|
|
Word boundaries are based on whether a character is alphanumeric or
not. For the PG_UNICODE_FAST collation, alphanumeric includes
non-ASCII digits; whereas for the PG_C_UTF8 collation, it only
includes digits 0-9. Pass down the right information from the
pg_locale_t into initcap_wbnext to differentiate the behavior.
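An illustration of the difference, assuming the built-in collation names
as shipped ('٣' is an Arabic-Indic digit):
    SELECT initcap('٣abc' COLLATE pg_c_utf8);        -- digit not alphanumeric: 'a' starts a word
    SELECT initcap('٣abc' COLLATE pg_unicode_fast);  -- digit is alphanumeric: 'a' does not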
Reported-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20250417135841.33.nmisch@google.com
|
|
exec_replication_command created a cmd_context to work in and
then deleted it on exit. This is pretty dangerous because
some replication commands start/finish transactions. In the
wake of commit 1afe31f03, that could lead to re-selecting a
CurrentMemoryContext that's already been deleted, leading to
hilarity such as a memory context that is its own parent.
To fix, let's make the cmd_context persist across
exec_replication_command calls; instead of deleting it, we'll just
reset it each time. In this way it retains the same identity and
there's no problem if transaction abort restores it as the working
context. It probably even saves a few microseconds to do this.
This fix also ensures that exec_replication_command returns to the
caller (PostgresMain) with the same context active that had been
when it was called (probably MessageContext). The previous
coding could get that wrong too.
Reported-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAO6_XqoJA7-_G6t7Uqe5nWF3nj+QBGn4F6Ptp=rUGDr0zo+KvA@mail.gmail.com
|
|
v14 commit 1f95181b44c843729caaa688f74babe9403b5850 and its v13
equivalent caused timing-dependent failures in archive recovery, at
restartpoints. The symptom was "invalid magic number 0000 in log
segment X, offset 0", "unexpected pageaddr X in log segment Y, offset 0"
[X < Y], or an assertion failure. Commit
3635a0a35aafd3bfa80b7a809bc6e91ccd36606a and predecessors back-patched
v15 changes to fix that. This test reproduces the problem
probabilistically, typically in fewer than 1000 iterations.
Hence, buildfarm and CI runs would have surfaced enough failures to get
attention within a day.
Reported-by: Arun Thirupathi <arunth@google.com>
Discussion: https://postgr.es/m/20250306193013.36.nmisch@google.com
Backpatch-through: 13
|