git.postgresql.org Git - postgresql.git/rss log
http://git.postgresql.org/gitweb/?p=postgresql.git;a=summary
This is the main PostgreSQL git repository.
Last updated: Sun, 1 Jun 2025 08:30:00 +0000 (gitweb v.2.29.2/2.39.5)

postgres_fdw: Inherit the local transaction's access/deferrable modes.
Etsuro Fujita <efujita@postgresql.org>
Sun, 1 Jun 2025 08:30:00 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e5a3c9d9b5ce535151d3a7e3173e8d27d2d8cd58
Previously, postgres_fdw always 1) opened a remote transaction in READ
WRITE mode even when the local transaction was READ ONLY, so a READ ONLY
local transaction that referenced a foreign table mapped to a remote view
executing a volatile function could end up writing on the remote side,
and 2) opened the remote transaction in NOT DEFERRABLE mode even when
the local transaction was DEFERRABLE, so a SERIALIZABLE READ ONLY
DEFERRABLE transaction using it could abort due to a serialization
failure on the remote side.
To avoid these, modify postgres_fdw to open a remote transaction in the
same access/deferrable modes as the local transaction. This commit also
modifies it to open a remote subtransaction in the same access mode as
the local subtransaction.
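
As an illustrative sketch (the foreign table name is an assumption, not
part of the commit): with this change, a foreign table referenced inside a
READ ONLY or DEFERRABLE local transaction gets a remote transaction opened
in the matching modes.

    -- remote_view is a hypothetical postgres_fdw foreign table mapped to a
    -- remote view that executes a volatile function.
    BEGIN TRANSACTION READ ONLY;
    SELECT * FROM remote_view;        -- remote transaction now opened READ ONLY
    COMMIT;

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
    SELECT count(*) FROM remote_view; -- remote transaction now DEFERRABLE too
    COMMIT;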
Although these issues exist since the introduction of postgres_fdw,
there have been no reports from the field. So it seems fine to just fix
them in master only.
Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAPmGK16n_hcUUWuOdmeUS%2Bw4Q6dZvTEDHb%3DOP%3D5JBzo-M3QmpQ%40mail.gmail.com

Fix MERGE into a plain inheritance parent table.
Dean Rasheed <dean.a.rasheed@gmail.com>
Sat, 31 May 2025 11:12:58 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b006bcd5310eb2dad0828a286b79babce4953143

When a MERGE's target table is the parent of an inheritance tree, any
INSERT actions insert into the parent table using ModifyTableState's
rootResultRelInfo. However, there are two bugs in the way it is
initialized:
1. ExecInitMerge() incorrectly uses a different ResultRelInfo entry
from ModifyTableState's resultRelInfo array to build the insert
projection, which may not be compatible with rootResultRelInfo.
2. ExecInitModifyTable() does not fully initialize rootResultRelInfo.
Specifically, ri_WithCheckOptions, ri_WithCheckOptionExprs,
ri_returningList, and ri_projectReturning are not initialized.
This can lead to crashes, or incorrect query results due to failing to
check WCOs or process the RETURNING list for INSERT actions.
Fix both these bugs in ExecInitMerge(), noting that it is only
necessary to fully initialize rootResultRelInfo if the MERGE has
INSERT actions and the target table is a plain inheritance parent.
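
A minimal sketch of the affected shape, using hypothetical tables (any
INSERT action here routes tuples through the parent's rootResultRelInfo):

    CREATE TABLE parent (a int, b text);
    CREATE TABLE child (CHECK (a > 0)) INHERITS (parent);

    MERGE INTO parent p
      USING (VALUES (1, 'new')) AS s(a, b) ON p.a = s.a
      WHEN NOT MATCHED THEN INSERT (a, b) VALUES (s.a, s.b);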
Backpatch to v15, where MERGE was introduced.
Reported-by: Andres Freund <andres@anarazel.de>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/4rlmjfniiyffp6b3kv4pfy4jw3pciy6mq72rdgnedsnbsx7qe5@j5hlpiwdguvc
Backpatch-through: 15

Change internal plan ID type from uint64 to int64
Michael Paquier <michael@paquier.xyz>
Sat, 31 May 2025 00:40:45 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e050af28686e796bdf22cb53fe3fdf1c6655f315

uint64 was chosen to be consistent with the type used by the query ID,
but the conclusion of a recent discussion for the query ID is that int64
is a better fit as the signed form is shown to the user, for PGSS or
EXPLAIN outputs.
This commit changes the plan ID to use int64, following c3eda50b0648
that has done the same for the query ID.
The plan ID is new to v18, introduced in 2a0cd38da5cc.
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/aCvzJNwetyEI3Sgo@paquier.xyz

Ensure we have a snapshot when updating various system catalogs.
Nathan Bossart <nathan@postgresql.org>
Fri, 30 May 2025 20:17:28 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=706054b11b959c865c0c7935c34d92370d7168d4

A few places that access system catalogs don't set up an active
snapshot before potentially accessing their TOAST tables. To fix,
push an active snapshot just before each section of code that might
require accessing one of these TOAST tables, and pop it shortly
afterwards. While at it, this commit adds some rather strict
assertions in an attempt to prevent such issues in the future.
Commit 16bf24e0e4 recently removed pg_replication_origin's TOAST
table in order to fix the same problem for that catalog. On the
back-branches, those bugs are left in place. We cannot easily
remove a catalog's TOAST table on released major versions, and only
replication origins with extremely long names are affected. Given
the low severity of the issue, fixing older versions doesn't seem
worth the trouble of significantly modifying the patch.
Also, on v13 and v14, the aforementioned strict assertions have
been omitted because commit 2776922201, which added
HaveRegisteredOrActiveSnapshot(), was not back-patched. While we
could probably back-patch it now, I've opted against it because it
seems unlikely that new TOAST snapshot issues will be introduced in
the oldest supported versions.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18127-fe54b6a667f29658%40postgresql.org
Discussion: https://postgr.es/m/18309-c0bf914950c46692%40postgresql.org
Discussion: https://postgr.es/m/ZvMSUPOqUU-VNADN%40nathan
Backpatch-through: 13

Fix memory leakage in postgres_fdw's DirectModify code path.
Tom Lane <tgl@sss.pgh.pa.us>
Fri, 30 May 2025 17:45:41 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=232d8caeaaa6ef7e7dfdc1a349ac956690949076

postgres_fdw tries to use PG_TRY blocks to ensure that it will
eventually free the PGresult created by the remote modify command.
However, it's fundamentally impossible for this scheme to work
reliably when there's RETURNING data, because the query could fail
in between invocations of postgres_fdw's DirectModify methods.
There is at least one instance of exactly this situation in the
regression tests, and the ensuing session-lifespan leak is visible
under Valgrind.
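
For context, the leaking path is exercised by a direct-modify statement
with RETURNING against a postgres_fdw foreign table, along the lines of
this sketch (table name is an assumption):

    -- remote_tab is a hypothetical postgres_fdw foreign table; the UPDATE can
    -- be pushed down as a single remote statement whose PGresult carries the
    -- RETURNING data back.
    UPDATE remote_tab SET val = val + 1 WHERE id < 100 RETURNING id, val;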
We can improve matters by using a memory context reset callback
attached to the ExecutorState context. That ensures that the
PGresult will be freed when the ExecutorState context is torn
down, even if control never reaches postgresEndDirectModify.
I have little faith that there aren't other potential PGresult
leakages in the backend modules that use libpq. So I think it'd
be a good idea to apply this concept universally by creating
infrastructure that attaches a reset callback to every PGresult
generated in the backend. However, that seems too invasive for
v18 at this point, let alone the back branches. So for the
moment, apply this narrow fix that just makes DirectModify safe.
I have a patch in the queue for the more general idea, but it
will have to wait for v19.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
Backpatch-through: 13

Allow larger packets during GSSAPI authentication exchange.
Tom Lane <tgl@sss.pgh.pa.us>
Fri, 30 May 2025 16:55:15 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d98cefe1143eea010048dc1535a51b77e11b2935

Our GSSAPI code only allows packet sizes up to 16kB. However it
emerges that during authentication, larger packets might be needed;
various authorities suggest 48kB or 64kB as the maximum packet size.
This limitation caused login failure for AD users who belong to many
AD groups. To add insult to injury, we gave an unintelligible error
message, typically "GSSAPI context establishment error: The routine
must be called again to complete its function: Unknown error".
As noted in code comments, the 16kB packet limit is effectively a
protocol constant once we are doing normal data transmission: the
GSSAPI code splits the data stream at those points, and if we change
the limit then we will have cross-version compatibility problems
due to the receiver's buffer being too small in some combinations.
However, during the authentication exchange the packet sizes are
not determined by us, but by the underlying GSSAPI library. So we
might as well just try to send what the library tells us to.
An unpatched recipient will fail on a packet larger than 16kB,
but that's not worse than the sender failing without even trying.
So this doesn't introduce any meaningful compatibility problem.
We still need a buffer size limit, but we can easily make it be
64kB rather than 16kB until transport negotiation is complete.
(Larger values were discussed, but don't seem likely to add
anything.)
Reported-by: Chris Gooch <cgooch@bamfunds.com>
Fix-suggested-by: Jacob Champion <jacob.champion@enterprisedb.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/DS0PR22MB5971A9C8A3F44BCC6293C4DABE99A@DS0PR22MB5971.namprd22.prod.outlook.com
Backpatch-through: 13

Make XactLockTableWait() and ConditionalXactLockTableWait() interruptible.
Fujii Masao <fujii@postgresql.org>
Fri, 30 May 2025 15:08:40 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=961553daf5d6087b175aa98f3031a46a8666cecf

Previously, XactLockTableWait() and ConditionalXactLockTableWait() could enter
a non-interruptible loop when they successfully acquired a lock on a transaction
but the transaction still appeared to be running. Since this loop continued
until the transaction completed, it could result in long, uninterruptible waits.
Although this scenario is generally unlikely since XactLockTableWait() and
ConditionalXactLockTableWait() can basically acquire a transaction lock
only when the transaction is not running, it can occur in a hot standby.
In such cases, the transaction may still appear active due to
the KnownAssignedXids list, even while no lock on the transaction exists.
For example, this situation can happen when creating a logical replication
slot on a standby.
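
A sketch of that scenario (slot name and plugin are arbitrary): run on a
hot standby, this call may wait on transactions still listed in
KnownAssignedXids, and the wait can now be cancelled.

    -- Previously a cancel request (e.g. Ctrl+C) or statement_timeout could not
    -- reliably interrupt the internal wait loop behind this call on a standby.
    SELECT pg_create_logical_replication_slot('standby_slot', 'pgoutput');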
The cause of the non-interruptible loop was the absence of CHECK_FOR_INTERRUPTS()
within it. This commit adds CHECK_FOR_INTERRUPTS() to the loop in both functions,
ensuring they can be interrupted safely.
Back-patch to all supported branches.
Author: Kevin K Biju <kevinkbiju@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAM45KeELdjhS-rGuvN=ZLJ_asvZACucZ9LZWVzH7bGcD12DDwg@mail.gmail.com
Backpatch-through: 13

Change internal queryid type from uint64 to int64
David Rowley <drowley@postgresql.org>
Fri, 30 May 2025 10:59:39 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c3eda50b0648005281c2a3cf95375708f8ef97fc

uint64 was perhaps chosen in cff440d36 as the type was uint32 prior to
that widening work.
Having this as uint64 doesn't make much sense and just adds the overhead of
having to remember that we always output this in its signed form. Let's
remove that overhead.
The signed form output is seemingly required since we have no way to
represent the full range of uint64 in an SQL type. We use BIGINT in places
like pg_stat_statements, which maps directly to int64.
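
A quick illustration of the user-facing signed form (the column list is
just an example):

    -- queryid is exposed as a signed bigint, so negative values are expected:
    SELECT queryid, calls, left(query, 40) AS query
      FROM pg_stat_statements
     ORDER BY calls DESC
     LIMIT 5;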
The release notes "Source Code" section may want to mention this
adjustment as some extensions may wish to adjust their code.
Author: David Rowley <dgrowleyml@gmail.com>
Suggested-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/50cb0c8b-994b-48f9-a1c4-13039eb3536b@eisentraut.org

doc PG 18 relnotes: modify async I/O item for other improvements
Bruce Momjian <bruce@momjian.us>
Thu, 29 May 2025 16:37:05 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=03c53a73141aa0e0ee6b0c7642671c1e972bae32

Add "etc." to indicate other actions will also be improved by
asynchronous I/O.
Reported-by: Melanie Plageman
Discussion: https://postgr.es/m/CAAKRu_bqjgSYA+OdemL-X91Yv53OwsVARZy+-tRyj8YQ=kcj0A@mail.gmail.com

Avoid resource leaks when a dblink connection fails.
Tom Lane <tgl@sss.pgh.pa.us>
Thu, 29 May 2025 14:39:55 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=470273da0ff766d098c5bc4d0acf3991451b755b

If we hit out-of-memory between creating the PGconn and inserting
it into dblink's hashtable, we'd lose track of the PGconn, which
is quite bad since it represents a live connection to a remote DB.
Fix by rearranging things so that we create the hashtable entry
first.
Also reduce the number of states we have to deal with by getting rid
of the separately-allocated remoteConn object, instead allocating it
in-line in the hashtable entries. (That incidentally removes a
session-lifespan memory leak observed in the regression tests.)
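
For context, the PGconn being tracked is the one created by calls such as
the following sketch (connection name and connection string are
placeholders):

    -- If connection setup fails partway through, the PGconn is no longer
    -- lost track of, because the hashtable entry now exists first.
    SELECT dblink_connect('myconn', 'host=remote.example.com dbname=postgres');
    SELECT * FROM dblink('myconn', 'SELECT 42') AS t(x int);
    SELECT dblink_disconnect('myconn');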
There is an apparently-irreducible remaining OOM hazard, which
is that if the connection fails at the libpq level (ie it's
CONNECTION_BAD) then we have to pstrdup the PGconn's error message
before we can release it, and theoretically that could fail. However,
in such cases we're only leaking memory not a live remote connection,
so I'm not convinced that it's worth sweating over.
This is a pretty low-probability failure mode of course, but losing
a live connection seems bad enough to justify back-patching.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/1346940.1748381911@sss.pgh.pa.us
Backpatch-through: 13

Fix assertion failure in pg_prewarm() on objects without storage.
Fujii Masao <fujii@postgresql.org>
Thu, 29 May 2025 08:50:32 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3c4d7557e03ba1ca988a2d1a2518a4ad93976f86

An assertion test added in commit 049ef33 could fail when pg_prewarm()
was called on objects without storage, such as partitioned tables.
This resulted in the following failure in assert-enabled builds:
Failed Assert("RelFileNumberIsValid(rlocator.relNumber)")
Note that, in non-assert builds, pg_prewarm() just failed with an error
in that case, so there was no ill effect in practice.
This commit fixes the issue by having pg_prewarm() raise an error early
if the specified object has no storage. This approach is similar to
the fix in commit 4623d7144 for pg_freespacemap.
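
A sketch of the failing case (table name is arbitrary):

    CREATE TABLE measurements (ts timestamptz, v int) PARTITION BY RANGE (ts);
    -- Partitioned tables have no storage of their own; pg_prewarm() now raises
    -- a regular error here instead of tripping the assertion.
    SELECT pg_prewarm('measurements');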
Back-patched to v17, where the issue was introduced.
Author: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/e082e6027610fd0a4091ae6d033aa117@oss.nttdata.com
Backpatch-through: 17

Add AioUringCompletion in wait_event_names.txt
Michael Paquier <michael@paquier.xyz>
Thu, 29 May 2025 04:25:05 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c3623703f3630c7b89adc865bbec7cb55e87185a

Oversight in c325a7633fcb, where the LWLock tranche AioUringCompletion
has been added.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aDT5sBOxJTdulXnE@paquier.xyz

pg_stat_statements: Fix parameter number gaps in normalized queries
Michael Paquier <michael@paquier.xyz>
Thu, 29 May 2025 02:26:03 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=35a428f30b15a3ab0c9a0cc26ade3b4cc3e47d8e

pg_stat_statements anticipates that certain constant locations may be
recorded multiple times and attempts to avoid calculating a length for
these locations in fill_in_constant_lengths().
However, during generate_normalized_query() where normalized query
strings are generated, these locations are not excluded from
consideration. This could increment the parameter number counter for
every recorded occurrence at such a location, leading to an incorrect
normalization in certain cases with gaps in the numbers reported.
For example, take this query:

    SELECT WHERE '1' IN ('2'::int, '3'::int::text)

Before this commit, it would be normalized like this, with gaps in the
parameter numbers:

    SELECT WHERE $1 IN ($3::int, $4::int::text)

However, the correct and less confusing normalization is:

    SELECT WHERE $1 IN ($2::int, $3::int::text)
This commit fixes the computation of the parameter numbers to track the
number of constants replaced with an $n by a separate counter instead of
the iterator used to loop through the list of locations.
The underlying query IDs are not changed, neither are the normalized
strings for existing PGSS hash entries. New entries with fresh
normalized queries would automatically get reshaped based on the new
parameter numbering.
Issue discovered while discussing a separate problem for HEAD, but this
affects all the stable branches.
Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0tzxvWXsacGyxrixdhy3tTTDfJQqxyFBRFh31nNHBQ5qA@mail.gmail.com
Backpatch-through: 13

Tighten parsing of datetime input.
Tom Lane <tgl@sss.pgh.pa.us>
Wed, 28 May 2025 19:10:48 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e5d64fd6545d1339b58e604b812f1a1200b48839

ParseFraction only expects to deal with fields that contain a decimal
point and digit(s). However it's possible in some edge cases for it
to be passed input that doesn't look like that. In particular the
input could look like a valid floating-point number, such as ".123e6".
strtod() will happily eat that, possibly producing a result that is
not within the expected range 0..1, which can result in integer
overflow in the callers. That doesn't have any security consequences,
but it's still not very desirable. Fix by checking that the input
has the expected form.
Similarly, DecodeNumberField only expects to deal with fields that
contain a decimal point and digit(s), but it's sometimes abused to
parse strings that might not look like that. This could result in
failure to reject bogus input, yielding silly results. Again, fix
by rejecting input that doesn't look as-expected. That decision
also means that we can affirmatively answer the very old comment
questioning whether we couldn't save some duplicative code by
using ParseFractionalSecond here.
While these changes should only reject input that nobody would
consider valid, it still doesn't seem like a change to make in
stable branches. Apply to HEAD only.
Reported-by: Evgeniy Gorbanev <gorbanev.es@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1328335.1748371099@sss.pgh.pa.us

Fix memory leakage when function compilation fails.
Tom Lane <tgl@sss.pgh.pa.us>
Wed, 28 May 2025 17:29:32 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=be86ca103a41224e091a0d9aaf30605a935546ec

In pl_comp.c, initially create the plpgsql function's cache context
under the assumed-short-lived caller's context, and reparent it under
CacheMemoryContext only upon success. This avoids a process-lifespan
leak of 8kB or more if the function contains syntax errors. (This
leakage has existed for a long time without many complaints, but as
we move towards a possibly multi-threaded future, getting rid of
process-lifespan leaks grows more important.)
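
For instance, each failed compilation of a function like this hypothetical,
deliberately broken example previously left its cache context behind for
the life of the backend:

    CREATE FUNCTION broken_fn() RETURNS int LANGUAGE plpgsql AS $$
    BEGIN
      IF true THEN RETURN 1;
    END;   -- missing END IF: compilation fails with a syntax error
    $$;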
In funccache.c, arrange to reclaim the CachedFunction struct in case
the language-specific compile callback function throws an error;
previously, that resulted in an independent process-lifespan leak.
This is arguably a new bug in v18, since the leakage now occurred
for SQL-language functions as well as plpgsql.
Also, don't fill fn_xmin/fn_tid/dcallback until after successful
completion of the compile callback. This avoids a scenario where a
partially-built function cache might appear already valid upon later
inspection, and another scenario where dcallback might fail upon being
presented with an incomplete cache entry. We would have to reach such
a faulty cache entry via a pre-existing fn_extra pointer, so I'm not
sure these scenarios correspond to any live bug. (The predecessor
code in pl_comp.c never took any care about this, and we've heard no
complaints about that.) Still, it's better to be careful.
Given the lack of field complaints, I'm not very excited about
back-patching any of this; but it seems still in-scope for v18.
Discussion: https://postgr.es/m/999171.1748300004@sss.pgh.pa.us

Adjust regex for test with opening parenthesis in character classes
Michael Paquier <michael@paquier.xyz>
Wed, 28 May 2025 00:43:31 +0000
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=4fbb46f61271f4b7f46ecad3de608fc2f4d7d80f

As written, the test was throwing an error because of an unbalanced
parenthesis. The regex used in the test is adjusted to not fail and to
test the case of an opening parenthesis in a character class after some
nested square brackets.
Oversight in d46911e584d4.
Discussion: https://postgr.es/m/16ab039d1af455652bdf4173402ddda145f2c73b.camel@cybertec.at