Age | Commit message | Author
2 days | Revert "postgres_fdw: Inherit the local transaction's access/deferrable modes." | Etsuro Fujita

We concluded that commit e5a3c9d9b is a feature rather than a fix; since it was added after feature freeze, revert it.

Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/ed2296f1-1a6b-4932-b870-5bb18c2591ae%40oss.nttdata.com
4 days | pg_prewarm: Allow autoprewarm to use more than 1GB to dump blocks. | Robert Haas

Reported-by: Daria Shanina <vilensipkdm@gmail.com>
Author: Daria Shanina <vilensipkdm@gmail.com>
Author: Robert Haas <robertmhaas@gmail.com>
Backpatch-through: 13
8 days | Disallow "=" in names of reloptions and foreign-data options. | Tom Lane

We store values for these options as array elements with the syntax "name=value", hence a name containing "=" confuses matters when it's time to read the array back in. Since validation of the options is often done (long) after this conversion to array format, that leads to confusing and off-point error messages. We can improve matters by rejecting names containing "=" up-front.

(Probably a better design would have involved pairs of array elements, but it's too late now --- and anyway, there's no evident use-case for option names like this. We already reject such names in some other contexts such as GUCs.)

Reported-by: Chapman Flack <jcflack@acm.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chapman Flack <jcflack@acm.org>
Discussion: https://postgr.es/m/6830EB30.8090904@acm.org
Backpatch-through: 13
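For illustration, a minimal sketch of the new up-front rejection (the wrapper, server, and option names are hypothetical, and the exact error wording may differ):

    -- A wrapper with no validator accepts arbitrary option names...
    CREATE FOREIGN DATA WRAPPER dummy_fdw;
    -- ...but an option whose name contains "=" is now rejected before it is
    -- folded into the "name=value" array representation:
    CREATE SERVER bad_srv FOREIGN DATA WRAPPER dummy_fdw
        OPTIONS ("host=value" 'x');   -- fails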
8 days | Rename gist stratnum support function | Peter Eisentraut

Commit 7406ab623fe added a gist support function that we internally refer to by the symbol GIST_STRATNUM_PROC. This translated from "well-known" strategy numbers to opfamily-specific strategy numbers. However, we later (commit 630f9a43cec) changed this to fit into index-AM-level compare type mapping, so this function actually now maps from compare type to opfamily-specific strategy numbers. So this name is no longer fitting.

Moreover, the index AM level also supports the opposite, a function to map from strategy number to compare type. This is currently not supported in gist, but one might wonder what this function is supposed to be called when it is added.

This patch changes the naming of the gist-level functionality to be more in line with the index-AM-level functionality. This makes sense because these are essentially the same thing on different levels. This also changes the names of the externally visible functions that are provided for use as such a support function.

Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://www.postgresql.org/message-id/37ebb1d9-9036-485f-a215-e55435689917%40eisentraut.org
9 days | postgres_fdw: Inherit the local transaction's access/deferrable modes. | Etsuro Fujita

Previously, postgres_fdw always 1) opened a remote transaction in READ WRITE mode even when the local transaction was READ ONLY, allowing a READ ONLY local transaction that references a foreign table mapped to a remote view executing a volatile function to write on the remote side, and 2) opened the remote transaction in NOT DEFERRABLE mode even when the local transaction was DEFERRABLE, causing a SERIALIZABLE READ ONLY DEFERRABLE transaction using postgres_fdw to abort due to a serialization failure on the remote side.

To avoid these problems, modify postgres_fdw to open a remote transaction in the same access/deferrable modes as the local transaction. This commit also modifies it to open a remote subtransaction in the same access mode as the local subtransaction.

Although these issues have existed since the introduction of postgres_fdw, there have been no reports from the field, so it seems fine to just fix them in master only.

Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAPmGK16n_hcUUWuOdmeUS%2Bw4Q6dZvTEDHb%3DOP%3D5JBzo-M3QmpQ%40mail.gmail.com
11 days | Fix memory leakage in postgres_fdw's DirectModify code path. | Tom Lane

postgres_fdw tries to use PG_TRY blocks to ensure that it will eventually free the PGresult created by the remote modify command. However, it's fundamentally impossible for this scheme to work reliably when there's RETURNING data, because the query could fail in between invocations of postgres_fdw's DirectModify methods. There is at least one instance of exactly this situation in the regression tests, and the ensuing session-lifespan leak is visible under Valgrind.

We can improve matters by using a memory context reset callback attached to the ExecutorState context. That ensures that the PGresult will be freed when the ExecutorState context is torn down, even if control never reaches postgresEndDirectModify.

I have little faith that there aren't other potential PGresult leakages in the backend modules that use libpq. So I think it'd be a good idea to apply this concept universally by creating infrastructure that attaches a reset callback to every PGresult generated in the backend. However, that seems too invasive for v18 at this point, let alone the back branches. So for the moment, apply this narrow fix that just makes DirectModify safe. I have a patch in the queue for the more general idea, but it will have to wait for v19.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
Backpatch-through: 13
11 days | Change internal queryid type from uint64 to int64 | David Rowley

uint64 was perhaps chosen in cff440d36 as the type was uint32 prior to that widening work. Having this as uint64 doesn't make much sense and just adds the overhead of having to remember that we always output this in its signed form. Let's remove that overhead.

The signed form output is seemingly required since we have no way to represent the full range of uint64 in an SQL type. We use BIGINT in places like pg_stat_statements, which maps directly to int64.

The release notes "Source Code" section may want to mention this adjustment as some extensions may wish to adjust their code.

Author: David Rowley <dgrowleyml@gmail.com>
Suggested-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/50cb0c8b-994b-48f9-a1c4-13039eb3536b@eisentraut.org
12 days | Avoid resource leaks when a dblink connection fails. | Tom Lane

If we hit out-of-memory between creating the PGconn and inserting it into dblink's hashtable, we'd lose track of the PGconn, which is quite bad since it represents a live connection to a remote DB. Fix by rearranging things so that we create the hashtable entry first.

Also reduce the number of states we have to deal with by getting rid of the separately-allocated remoteConn object, instead allocating it in-line in the hashtable entries. (That incidentally removes a session-lifespan memory leak observed in the regression tests.)

There is an apparently-irreducible remaining OOM hazard, which is that if the connection fails at the libpq level (ie it's CONNECTION_BAD) then we have to pstrdup the PGconn's error message before we can release it, and theoretically that could fail. However, in such cases we're only leaking memory not a live remote connection, so I'm not convinced that it's worth sweating over.

This is a pretty low-probability failure mode of course, but losing a live connection seems bad enough to justify back-patching.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/1346940.1748381911@sss.pgh.pa.us
Backpatch-through: 13
12 days | Fix assertion failure in pg_prewarm() on objects without storage. | Fujii Masao

An assertion test added in commit 049ef33 could fail when pg_prewarm() was called on objects without storage, such as partitioned tables. This resulted in the following failure in assert-enabled builds:

    Failed Assert("RelFileNumberIsValid(rlocator.relNumber)")

Note that, in non-assert builds, pg_prewarm() just failed with an error in that case, so there was no ill effect in practice.

This commit fixes the issue by having pg_prewarm() raise an error early if the specified object has no storage. This approach is similar to the fix in commit 4623d7144 for pg_freespacemap.

Back-patched to v17, where the issue was introduced.

Author: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/e082e6027610fd0a4091ae6d033aa117@oss.nttdata.com
Backpatch-through: 17
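A sketch of the user-visible behavior (table name hypothetical; the exact error text may differ):

    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    CREATE TABLE measurements (ts timestamptz, v int) PARTITION BY RANGE (ts);
    -- Partitioned tables have no storage of their own, so this now fails with
    -- a clean error instead of tripping an assertion in assert-enabled builds:
    SELECT pg_prewarm('measurements');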
13 days | pg_stat_statements: Fix parameter number gaps in normalized queries | Michael Paquier

pg_stat_statements anticipates that certain constant locations may be recorded multiple times and attempts to avoid calculating a length for these locations in fill_in_constant_lengths(). However, during generate_normalized_query(), where normalized query strings are generated, these locations are not excluded from consideration. This could increment the parameter number counter for every recorded occurrence at such a location, leading to an incorrect normalization in certain cases, with gaps in the numbers reported.

For example, take this query:

    SELECT WHERE '1' IN ('2'::int, '3'::int::text)

Before this commit, it would be normalized with gaps in the parameter numbers:

    SELECT WHERE $1 IN ($3::int, $4::int::text)

However, the correct, less confusing normalization should be:

    SELECT WHERE $1 IN ($2::int, $3::int::text)

This commit fixes the computation of the parameter numbers to track the number of constants replaced with a $n by a separate counter instead of the iterator used to loop through the list of locations.

The underlying query IDs are not changed, and neither are the normalized strings for existing PGSS hash entries. New entries with fresh normalized queries would automatically get reshaped based on the new parameter numbering.

Issue discovered while discussing a separate problem for HEAD, but this affects all the stable branches.

Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0tzxvWXsacGyxrixdhy3tTTDfJQqxyFBRFh31nNHBQ5qA@mail.gmail.com
Backpatch-through: 13
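To observe the fix, one can run the example query and inspect the stored normalized string; this sketch assumes pg_stat_statements is in shared_preload_libraries and the extension has been created:

    SELECT WHERE '1' IN ('2'::int, '3'::int::text);
    SELECT query
      FROM pg_stat_statements
     WHERE query LIKE 'SELECT WHERE $1 IN%';
    -- expected after this commit: SELECT WHERE $1 IN ($2::int, $3::int::text)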
2025-05-22 | Revert "Don't lock partitions pruned by initial pruning" | Amit Langote

As pointed out by Tom Lane, the patch introduced a fragile and invasive design around plan invalidation handling when locking of prunable partitions was deferred from plancache.c to the executor. In particular, it violated assumptions about CachedPlan immutability and altered executor APIs in ways that are difficult to justify given the added complexity and overhead.

This also removes the firstResultRels field added to PlannedStmt in commit 28317de72, which was intended to support deferred locking of certain ModifyTable result relations.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/605328.1747710381@sss.pgh.pa.us
2025-05-21 | Fix regression with location calculation of nested statements | Michael Paquier

The statement location calculated for some nested query cases was wrong when multiple queries, separated by semicolons, are sent as a single string.

As pointed out by Sami Imseih, the location calculation was incorrect when the last query of a nested statement with multiple queries does NOT finish with a semicolon. In this case, the statement length tracked by RawStmt is 0, which is equivalent to saying that the string should be used until its end. The code previously discarded this case entirely, causing the location to remain at 0, the same as pointing at the beginning of the string. This caused pg_stat_statements to store incorrect query strings.

This issue was introduced in 499edb09741b. I had looked at the diffs generated by pgaudit back then and noticed the difference generated for this nested query case, but I missed the point that it was an actual regression in an existing case.

This restores the pre-17 behavior for the calculation of the query locations, and a test case is added in pg_stat_statements to provide some coverage. Special thanks to David Steele, who, through an analysis of the test diffs generated by pgaudit with the new v18 logic, has poked me about the fact that my original analysis of the matter was wrong.

The test output of pg_overexplain is updated to reflect the new logic, as the new locations refer to the beginning of the argument passed to the function explain_filter(). When the module was introduced in 8d5ceb113e3f, which was after 499edb09741b (for the new calculation method), the locations in the test were not actually right: the plan generated for the query string given in input of the function pointed to the top-level query, not the nested one.

Reported-by: David Steele <david@pgbackrest.org>
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: David Steele <david@pgbackrest.org>
Discussion: https://postgr.es/m/844a3b38-bbf1-4fb2-9fd6-f58c35c09917@pgbackrest.org
2025-05-19 | Fix incorrect year in some copyright notices | Michael Paquier

A couple of new files have been added in the tree with a copyright year of 2024 while we were already in 2025. These should be marked with 2025, so let's fix them.

Reported-by: Shaik Mohammad Mujeeb <mujeeb.sk.dev@gmail.com>
Discussion: https://postgr.es/m/CALa6HA4_Wu7-2PV0xv-Q84cT8eG7rTx6bdjUV0Pc=McAwkNMfQ@mail.gmail.com
2025-05-08 | Use 'void *' for arbitrary buffers, 'uint8 *' for byte arrays | Heikki Linnakangas

A 'void *' argument suggests that the caller might pass an arbitrary struct, which is appropriate for functions like libc's read/write, or pq_sendbytes(). 'uint8 *' is more appropriate for byte arrays that have no structure, like the cancellation keys or SCRAM tokens. Some places used 'char *', but 'uint8 *' is better because 'char *' is commonly used for null-terminated strings.

Change code around SCRAM, MD5 authentication, and cancellation key handling to follow these conventions.

Discussion: https://www.postgresql.org/message-id/61be9e31-7b7d-49d5-bc11-721800d89d64@eisentraut.org
2025-05-08 | Suppress unnecessary explicit sorting for EPQ mergejoin path | Richard Guo

When building a ForeignPath for a joinrel, if there's a possibility that EvalPlanQual will be executed, we must identify a suitable path for EPQ checks. If the outer or inner path of the chosen path is a ForeignPath representing a pushed-down join, we replace it with its fdw_outerpath to ensure that the EPQ check path consists entirely of local joins.

If the chosen path is a MergePath, and its outer or inner path is a ForeignPath that is not already well enough ordered, the MergePath will have non-NIL outersortkeys or innersortkeys indicating the desired ordering to be created by an explicit Sort node. If we then replace the outer or inner path with its corresponding fdw_outerpath, and that path is already sufficiently ordered, we end up in an inconsistent state: the MergePath has non-NIL outersortkeys or innersortkeys, and its input path is already properly ordered. This inconsistency can result in an Assert failure or the addition of a redundant Sort node.

To fix, check if the new outer or inner path of a MergePath is already properly sorted, and set its outersortkeys or innersortkeys to NIL if so.

Bug: #18902
Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/18902-71c1bed2b9f7c46f@postgresql.org
2025-05-07 | Fix whitespace | Peter Eisentraut
2025-04-29 | oauth: Disallow OAuth connections via postgres_fdw/dblink | Jacob Champion

A subsequent commit will reclassify oauth_client_secret from dispchar="" to dispchar="*", so that UIs will treat it like a secret. For our FDWs, this change will move that option from SERVER to USER MAPPING, which we need to avoid. But upon further discussion, we don't really want our FDWs to use our builtin Device Authorization flow at all, for several reasons:

- the URL and code would be printed to the server logs, not sent over the client connection
- tokens are not cached/refreshed, so every single connection has to be manually authorized by a user with a browser
- oauth_client_secret needs to belong to the foreign server, but options on SERVER are publicly accessible
- all non-superusers would need password_required=false, which is dangerous

Future OAuth work can use FDWs as a motivating use case. But for now, disallow all oauth_* connection options for these two extensions.

Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20250415191435.55.nmisch%40google.com
2025-04-28 | Add maintenance_io_concurrency flag to some read stream users | Melanie Plageman

Index vacuuming and [auto]prewarm AIO concurrency should be governed by maintenance_io_concurrency. As such, pass those read stream users the READ_STREAM_MAINTENANCE flag which will calculate their read stream distance with maintenance_io_concurrency instead of effective_io_concurrency. This was an oversight in the original commits making those operations use the read stream API.

Discussion: https://postgr.es/m/flat/CAAKRu_aopDxTo4b41Mt_7Zc-z0_ngocrY8SFCCY6Aph1HgwuNw%40mail.gmail.com
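As a usage note (a sketch, not part of the commit): the knob involved is the existing maintenance_io_concurrency GUC, for example:

    SET maintenance_io_concurrency = 32;
    -- Index vacuuming read-ahead is now sized from the setting above rather
    -- than from effective_io_concurrency (table name hypothetical):
    VACUUM (INDEX_CLEANUP ON) my_table;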
2025-04-28 | Fix xmin advancement during fast_forward decoding. | Amit Kapila

During logical decoding, we advance the catalog_xmin of the logical slot too early in fast_forward mode, resulting in required catalog data being removed by vacuum. This mode is normally used to advance the slot without processing the changes, but we still can't let the slot's xmin advance to an incorrect value.

Commit f49a80c481 fixed a similar issue where the logical slot's catalog_xmin was getting advanced prematurely during non-fast-forward mode. During xl_running_xacts processing, instead of directly advancing the slot's xmin to the oldest running xid in the record, it allowed the xmin to be held back for snapshots that can be used for not-yet-replayed transactions, as those might consider older transactions as running too. However, it missed the fact that the same problem can happen during fast_forward mode decoding, as we won't build a base snapshot in that mode, and a future call to get_changes from the same slot can miss seeing the required catalog changes, leading to incorrect results.

This commit allows building the base snapshot even in fast_forward mode to prevent the early advancement of xmin.

Reported-by: Amit Kapila <amit.kapila16@gmail.com>
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 13
Discussion: https://postgr.es/m/CAA4eK1LqWncUOqKijiafe+Ypt1gQAQRjctKLMY953J79xDBgAg@mail.gmail.com
Discussion: https://postgr.es/m/OS0PR01MB57163087F86621D44D9A72BF94BB2@OS0PR01MB5716.jpnprd01.prod.outlook.com
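Fast-forward decoding is the mode used when advancing a slot without streaming its changes; a minimal way to exercise that path (slot name and plugin are illustrative, and wal_level must be set to logical):

    SELECT pg_create_logical_replication_slot('my_slot', 'test_decoding');
    -- Advance without consuming changes; this decodes in fast_forward mode:
    SELECT pg_replication_slot_advance('my_slot', pg_current_wal_lsn());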
2025-04-27 | Remove circular #include's between plpython.h and plpy_util.h. | Tom Lane

plpython.h included plpy_util.h, simply on the grounds that "it's easier to just include it everywhere". However, plpy_util.h must include plpython.h, or it won't pass headerscheck. While the resulting circularity doesn't have any immediate bad effect, it's poor design. We have seen serious messes arise in the past from overly-broad inclusion footprints created by such circularities, so let's establish a project policy against it.

To fix, just replace *.c files' inclusions of plpython.h with plpy_util.h. They'll pull in plpython.h indirectly; indeed, almost all have already done so via inclusions of other plpy_xxx.h headers. (Any extensions using plpython.h can do likewise without breaking the compatibility of their code with prior Postgres versions.)

Reported-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aAxQ6fcY5QQV1lo3@ip-10-97-1-34.eu-west-3.compute.internal
2025-04-25 | Fix typo in test file name added in commit 4909b38af0. | Amit Kapila

Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Backpatch-through: 13
Discussion: https://postgr.es/m/CANhcyEXsObdjkjxEnq10aJumDpa5J6aiPzgTh_w4KCWRYHLw6Q@mail.gmail.com
2025-04-20 | Fix a few duplicate words in comments | David Rowley

These are all new to v18.

Author: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAApHDvrMcr8XD107H3NV=WHgyBcu=sx5+7=WArr-n_cWUqdFXQ@mail.gmail.com
2025-04-19 | Be more wary of corrupt data in pageinspect's heap_page_items(). | Tom Lane

The original intent in heap_page_items() was to return nulls, not throw an error or crash, if an item was sufficiently corrupt that we couldn't safely extract data from it. However, commit d6061f83a utterly missed that memo, and not only put in an un-length-checked copy of the tuple's data section, but also managed to break the check on sane nulls-bitmap length. Either mistake could possibly lead to a SIGSEGV crash if the tuple is corrupt.

Bug: #18896
Reported-by: Dmitry Kovalenko <d.kovalenko@postgrespro.ru>
Author: Dmitry Kovalenko <d.kovalenko@postgrespro.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18896-add267b8e06663e3@postgresql.org
Backpatch-through: 13
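For context, typical usage looks like this (table name hypothetical); with the fix, items too corrupt to decode safely come back as nulls instead of risking a crash:

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    SELECT lp, lp_len, t_ctid, t_data
      FROM heap_page_items(get_raw_page('my_table', 0));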
2025-04-19 | Fix typos and grammar in the code | Michael Paquier

The large majority of these have been introduced by recent commits done in the v18 development cycle.

Author: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/9a7763ab-5252-429d-a943-b28941e0e28b@gmail.com
2025-04-14 | Fix incorrect format placeholders | Peter Eisentraut

BlockNumber is unsigned int. Fix for commit 14ffaece0fb.
2025-04-12 | Fix GIN's shimTriConsistentFn to not corrupt its input. | Tom Lane

Commit 0f21db36d made an assumption that GIN triConsistentFns would not modify their input entryRes[] arrays. But in fact, the "shim" triConsistentFn that we use for opclasses that don't supply their own did exactly that, potentially leading to wrong answers from a GIN index search. Through bad luck, none of the test cases that we have for such opclasses exposed the bug.

One response to this could be that the assumption of consistency check functions not modifying entryRes[] arrays is a bad one, but it still seems reasonable to me. Notably, shimTriConsistentFn is itself assuming that with respect to the underlying boolean consistentFn, so it's sure being self-centered in supposing that it gets to do so. Fortunately, it's quite simple to fix shimTriConsistentFn to restore the entry-time state of entryRes[], so let's do that instead.

This issue doesn't affect any core GIN opclasses, since they all supply their own triConsistentFns. It does affect contrib modules btree_gin, hstore, and intarray.

Along the way, I (tgl) noticed that shimTriConsistentFn failed to pick up on a "recheck" flag returned by its first call to the boolean consistentFn. This may be only a latent problem, since it would be unlikely for a consistentFn to set recheck for the all-false case and not any other cases. (Indeed, none of our contrib modules do that.) Nonetheless, it's formally wrong.

Reported-by: Vinod Sridharan <vsridh90@gmail.com>
Author: Vinod Sridharan <vsridh90@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAFMdLD7XzsXfi1+DpTqTgrD8XU0i2C99KuF=5VHLWjx4C1pkcg@mail.gmail.com
Backpatch-through: 13
2025-04-12 | Harmonize function parameter names for Postgres 18. | Peter Geoghegan

Make sure that function declarations use names that exactly match the corresponding names from function definitions in a few places. These inconsistencies were all introduced during Postgres 18 development.

This commit was written with help from clang-tidy, by mechanically applying the same rules as similar clean-up commits (the earliest such commit was commit 035ce1fe).
2025-04-10 | Improve various new-to-v18 appendStringInfo calls | David Rowley

Similar to 8461424fd, here we adjust a few new locations which were not using the most suitable appendStringInfo* function for the intended purpose.

Author: David Rowley <drowleyml@gmail.com>
Discussion: https://postgr.es/m/CAApHDvqJnNjueb=Eoj8K+8n0g7nj_AcPWSiCj5RNV4fDejAfqA@mail.gmail.com
2025-04-10 | Fix data loss in logical replication. | Amit Kapila

Data loss can happen when DDLs like ALTER PUBLICATION ... ADD TABLE ... or ALTER TYPE ... that don't take a strong lock on the table happen concurrently with DMLs on the tables involved in the DDL. This happens because logical decoding doesn't distribute invalidations to concurrent transactions, and those transactions use stale cache data to decode the changes. The problem becomes bigger because we keep using the stale cache even after those in-progress transactions are finished and skip the changes required to be sent to the client.

This commit fixes the issue by distributing invalidation messages from catalog-modifying transactions to all concurrent in-progress transactions. This allows the necessary rebuild of the catalog cache when decoding new changes after concurrent DDL.

We observed performance regression primarily during frequent execution of *publication DDL* statements that modify the published tables. The regression is minor or nearly nonexistent for DDLs that do not affect the published tables or occur infrequently, making this a worthwhile cost to resolve a longstanding data loss issue.

An alternative approach considered was to take a strong lock on each affected table during publication modification. However, this would only address issues related to publication DDLs (but not ALTER TYPE ...) and would require locking every relation in the database for publications created as FOR ALL TABLES, which is impractical.

The bug exists in all supported branches, but we are backpatching only down to 14. The fix for 13 requires somewhat bigger changes than this fix, so the fix for that branch is still under discussion.

Reported-by: hubert depesz lubaczewski <depesz@depesz.com>
Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Tested-by: Benoit Lobréau <benoit.lobreau@dalibo.com>
Backpatch-through: 14
Discussion: https://postgr.es/m/de52b282-1166-1180-45a2-8d8917ca74c6@enterprisedb.com
Discussion: https://postgr.es/m/CAD21AoAenVqiMjpN-PvGHL1N9DWnHSq673bfgr6phmBUzx=kLQ@mail.gmail.com
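A sketch of the kind of interleaving that could lose changes before this fix (object names are hypothetical):

    -- Session 1 (its changes are being decoded):
    BEGIN;
    INSERT INTO t VALUES (1);
    -- Session 2, concurrently (no strong lock on t is taken):
    ALTER PUBLICATION my_pub ADD TABLE t;
    -- Session 1 again; without distributed invalidations, the decoder could keep
    -- using stale catalog caches and skip changes that should be replicated:
    INSERT INTO t VALUES (2);
    COMMIT;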
2025-04-09 | Cleanup of pg_numa.c | Tomas Vondra

This moves/renames some of the functions defined in pg_numa.c:

* pg_numa_get_pagesize() is renamed to pg_get_shmem_pagesize() and moved to src/backend/storage/ipc/shmem.c. The new name better reflects that the page size is not related to NUMA, and it's specifically about the page size used for the main shared memory segment.

* move pg_numa_available() to src/backend/storage/ipc/shmem.c, i.e. into the backend (which is more appropriate for functions callable from SQL). While at it, improve the comment to explain what page size it returns.

* remove unnecessary includes from src/port/pg_numa.c that added unnecessary dependencies (src/port should be suitable for frontend code). These were either leftovers or unnecessary thanks to the other changes in this commit.

This eliminates unnecessary dependencies on backend symbols, which we don't want in src/port.

Reported-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
https://postgr.es/m/CALdSSPi5fj0a7UG7Fmw2cUD1uWuckU_e8dJ+6x-bJEokcSXzqA@mail.gmail.com
2025-04-08 | Move contrib/spi testing from core regression tests to contrib/spi. | Tom Lane

It's weird to have the core regression tests depending on contrib code, and coverage testing shows that those test queries add nothing to the core-code coverage of the core tests. So pull those test bits out and put them into ordinary test scripts inside contrib/spi/, making that more like other contrib modules.

Aside from being structurally nicer, anything we can take out of the core tests (which are executed multiple times per check-world run) and put into tests executed only once should be a win. It doesn't look like this change will buy a whole lot of milliseconds, but a cycle saved is a cycle earned.

Also, there is some discussion around possibly removing refint and/or autoinc altogether. I don't know if that will happen, but we'd certainly need to decouple them from the core tests to do so.

The tests for autoinc were quite intertwined with the undocumented "ttdummy" trigger in regress.c. That made the tests very hard to understand and contributed nothing to autoinc's testing either. So I just deleted ttdummy and rewrote the autoinc tests without it.

I realized while doing this that the description of autoinc in the SGML docs is not a great description of what the function actually does, so the patch includes some updates to those docs.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/3872677.1744077559@sss.pgh.pa.us
2025-04-08 | Fix incorrect format placeholder | Peter Eisentraut

for commit 749a9e20c97
2025-04-08 | pg_buffercache: Change page_num type to bigint | Tomas Vondra

The page_num was defined as integer, which should be sufficient for the near future (with 4K pages it's 8TB). But it's virtually free to return bigint, and get a wider range. This was agreed on the thread, but I forgot to tweak this in ba2a3c2302f1.

While at it, make the data types in CREATE VIEW a bit more consistent.

Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.co
2025-04-08 | Add pg_buffercache_evict_{relation,all} functions | Andres Freund

In addition to the added functions, the pg_buffercache_evict() function now shows whether the buffer was flushed.

pg_buffercache_evict_relation(): Evicts all shared buffers in a relation at once.
pg_buffercache_evict_all(): Evicts all shared buffers at once.

Both functions provide a mechanism to evict multiple shared buffers at once. They are designed to address the inefficiency of repeatedly calling pg_buffercache_evict() for each individual buffer, which can be time-consuming when dealing with large shared buffer pools (e.g., ~477ms vs. ~2576ms for 16GB of fully populated shared buffers).

These functions are intended for developer testing and debugging purposes and are available to superusers only.

Minimal tests for the new functions are included. Also, there was no test for pg_buffercache_evict(), so a test for it has been added too.

No new extension version is needed, as it was already increased this release by ba2a3c2302f.

Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Aidar Imamov <a.imamov@postgrespro.ru>
Reviewed-by: Joseph Koshakow <koshy44@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ0h_YoSqqutxV6DES1RW8ig6wcA8CR9rJk358YRMxZFmw%40mail.gmail.com
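Usage sketch (superuser only; the relation name is hypothetical, and the output columns are whatever the extension documents for these functions):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    -- Evict every shared buffer belonging to one relation:
    SELECT * FROM pg_buffercache_evict_relation('my_table'::regclass);
    -- Evict the entire shared buffer pool:
    SELECT * FROM pg_buffercache_evict_all();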
2025-04-08 | Speedup child EquivalenceMember lookup in planner | David Rowley

When planning queries to partitioned tables, we clone all EquivalenceMembers belonging to the partitioned table into em_is_child EquivalenceMembers for each non-pruned partition. For partitioned tables with large numbers of partitions, this meant the ec_members list could become large and code searching that list would become slow. Effectively, the more partitions which were present, the more searches needed to be performed for operations such as find_ec_member_matching_expr() during create_plan() and the more partitions present, the longer these searches would take, i.e., a quadratic slowdown.

To fix this, here we adjust how we store EquivalenceMembers for em_is_child members. Instead of storing these directly in ec_members, these are now stored in a new array of Lists in the EquivalenceClass, which is indexed by the relid. When we want to find EquivalenceMembers belonging to a certain child relation, we can narrow the search to the array element for that relation.

To make EquivalenceMember lookup easier and to reduce the amount of code change, this commit provides a pair of functions to allow iteration over the EquivalenceMembers of an EC which also handles finding the child members, if required. Callers that never need to look at child members can remain using the foreach loop over ec_members, which will now often be faster due to only parent-level members being stored there.

The actual performance increases here are highly dependent on the number of partitions and the query being planned. Performance increases can be visible with as few as 8 partitions, but the speedup is marginal for such low numbers of partitions. The speedups become much more visible with a few dozen to hundreds of partitions. With some tested queries using 56 partitions, the planner was around 3x faster than before. For use cases with thousands of partitions, these are likely to become significantly faster. Some testing has shown planner speedups of 60x or more with 8192 partitions.

Author: Yuya Watari <watari.yuya@gmail.com>
Co-authored-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Lepikhov <a.lepikhov@postgrespro.ru>
Reviewed-by: Alena Rybakina <lena.ribackina@yandex.ru>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Tested-by: Thom Brown <thom@linux.com>
Tested-by: newtglobal postgresql_contributors <postgresql_contributors@newtglobalcorp.com>
Discussion: https://postgr.es/m/CAJ2pMkZNCgoUKSE%2B_5LthD%2BKbXKvq6h2hQN8Esxpxd%2Bcxmgomg%40mail.gmail.com
2025-04-07 | Add pg_buffercache_numa view with NUMA node info | Tomas Vondra

Introduces a new view pg_buffercache_numa, showing NUMA memory nodes for individual buffers. For each buffer the view returns an entry for each memory page, with the associated NUMA node.

The database blocks and OS memory pages may have different sizes - the default block size is 8KB, while the memory page is 4K (on x86). But other combinations are possible, depending on configure parameters, platform, etc. This means buffers may overlap with multiple memory pages, each associated with a different NUMA node.

To determine the NUMA node for a buffer, we first need to touch the memory pages using pg_numa_touch_mem_if_required, otherwise we might get status -2 (ENOENT = The page is not present), indicating the page is either unmapped or unallocated.

The view may be relatively expensive, especially when accessed for the first time in a backend, as it touches all memory pages to get reliable information about the NUMA node. This may also force allocation of the shared memory.

Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
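Usage sketch; note the caveat above that the first access in a backend can be expensive because every memory page is touched:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    -- One row per (buffer, OS memory page) combination, with its NUMA node:
    SELECT * FROM pg_buffercache_numa LIMIT 5;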
2025-04-07 | Fix some issues in contrib/spi/refint.c. | Tom Lane

check_foreign_key incorrectly used a single cache entry for its saved plans for a 'c' (cascade) trigger, although there are two different queries to execute depending on whether it fires for an update or a delete. This caused the wrong things to be done if both types of event occur in one session. (This was indeed visible in the triggers regression test, but apparently nobody ever questioned it.) To fix, add the operation type to the cache key.

Its debug log output failed to distinguish update from delete events, too.

Also, change the intended trigger usage from BEFORE ROW to AFTER ROW, and add checks insisting on that usage. BEFORE is really rather unsafe, since if there are other BEFORE triggers they might change or cancel the operation we are trying to check. AFTER triggers are the standard way to propagate changes to other rows, so we should follow that way here.

In passing, remove a useless duplicate lookup of the cache entry.

This code is mostly intended as a documentation example, so we won't consider a back-patch.

Author: Dmitrii Bondar <d.bondar@postgrespro.ru>
Reviewed-by: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Lilian Ontowhee <ontowhee@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/79755a2b18ed4fe5e29da6a87a1e00d1@postgrespro.ru
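For context, a usage sketch with hypothetical tables, following the module's documented argument convention (number of referencing tables, action, then table/column names) and the AFTER ROW usage this commit now insists on:

    CREATE TRIGGER parent_fkey_check
        AFTER DELETE OR UPDATE ON parent_tbl
        FOR EACH ROW
        EXECUTE FUNCTION check_foreign_key(1, 'cascade', 'child_tbl', 'parent_id');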
2025-04-07 | Follow-up fixes for SHA-2 patch (commit 749a9e20c). | Tom Lane

This changes the check for valid characters in the salt string to only allow plain ASCII letters and digits. The previous coding was locale-dependent which doesn't really seem like a great idea here; moreover it could not work correctly in multibyte encodings.

This fixes a careless pointer-use-after-pfree, too.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Andres Freund <andres@anarazel.de>
Author: Bernd Helmle <mailings@oopsware.de>
Discussion: https://postgr.es/m/6fab35422df6b6b9727fdcc243c5fa1c667dd3b5.camel@oopsware.de
2025-04-07 | Fix erroneous construction of functions' dependencies on transforms. | Tom Lane

The list of transform objects that a function should use is specified in CREATE FUNCTION's TRANSFORM clause, and then represented indirectly in pg_proc.protrftypes. However, ProcedureCreate completely ignored that for purposes of constructing pg_depend entries, and instead made the function depend on any transforms that exist for its parameter or return data types. This is bad in both directions: the function could be made dependent on a transform it does not actually use, or it could try to use a transform that's since been dropped. (The latter scenario would require use of a transform that's not for any of the parameter or return types, but that seems legit for cases where the function performs SQL operations internally.)

To fix, pass in the list of transform objects that CreateFunction identified, and build pg_depend entries from that not from the parameter/return types. This results in changes in the expected test outputs in contrib/bool_plperl, which I guess are due to different ordering of pg_depend entries -- that test case is surely not exercising either of the problem scenarios.

This fix is not back-patchable as-is: changing the signature of ProcedureCreate seems too risky in stable branches. We could do something like making ProcedureCreate a wrapper around ProcedureCreateExt or so. However, I'm more inclined to do nothing in the back branches. We had no field complaints up to now, so the hazards don't seem to be a big issue in practice. And we couldn't do anything about existing pg_depend entries, so a back-patched fix would result in a mishmash of dependencies created according to different rules. That cure could be worse than the disease, perhaps.

I bumped catversion just to lay down a marker that the expected contents of pg_depend are a bit different than before.

Reported-by: Chapman Flack <jcflack@acm.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3112950.1743984111@sss.pgh.pa.us
2025-04-06 | Fix memory leaks in px_crypt_shacrypt(). | Tom Lane

Per Coverity. I don't think these are of any actual significance since the function ought to be invoked in a short-lived context. Still, if it's trying to be neat it should get it right.

Also const-ify a constant and fix up typedef formatting.
2025-04-05 | Add modern SHA-2 based password hashes to pgcrypto. | Álvaro Herrera

This adapts the publicly available reference implementation from https://www.akkadia.org/drepper/SHA-crypt.txt and adds the new hash algorithms sha256crypt and sha512crypt to crypt() and gen_salt().

Author: Bernd Helmle <mailings@oopsware.de>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/c763235a2757e2f5f9e3e27268b9028349cef659.camel@oopsware.de
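Usage sketch with the new algorithm names (output omitted; the accepted salt parameters beyond the algorithm name may vary):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    SELECT crypt('secret', gen_salt('sha256crypt'));
    SELECT crypt('secret', gen_salt('sha512crypt'));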
2025-04-04 | Use streaming read I/O in autoprewarm | Melanie Plageman

Make a read stream for each valid fork of each valid relation represented in the autoprewarm dump file and prewarm those blocks through the read stream API instead of by directly invoking ReadBuffer().

Co-authored-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Co-authored-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andrey M. Borodin <x4mmm@yandex-team.ru> (earlier versions)
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com> (earlier versions)
Reviewed-by: Matheus Alcantara <mths.dev@pm.me> (earlier versions)
Discussion: https://postgr.es/m/flat/CAN55FZ3n8Gd%2BhajbL%3D5UkGzu_aHGRqnn%2BxktXq2fuds%3D1AOR6Q%40mail.gmail.com
2025-04-04 | Refactor autoprewarm_database_main() in preparation for read stream | Melanie Plageman

Autoprewarm prewarms blocks from a dump file representing the contents of shared buffers at the time it was dumped. It uses a sorted array of BlockInfoRecords, each representing a block from one of the cluster's databases and tables.

autoprewarm_database_main() prewarms all the blocks from a single database. It is optimized to ensure we don't try to open the same relation or fork over and over again if it has been dropped or is invalid. The main loop handled this by carefully setting various local variables to sentinel values when a run of blocks should be skipped.

This method won't work with the read stream API. The read stream callback must be able to advance the current position in the BlockInfoRecord array to allow for reading ahead additional blocks; however, a read stream maps 1-1 with a relation and fork combination. So, the main loop in autoprewarm_database_main() must also advance the position in the array of BlockInfoRecords to skip invalid relations and forks. This split control doesn't fit well with the current flow control in autoprewarm_database_main().

To make it compatible with the read stream API, change autoprewarm_database_main() to explicitly fast-forward in the BlockInfoRecords array past the blocks belonging to an invalid relation or fork.

This commit only implements the new control flow -- it does not use the read stream API.

Co-authored-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Co-authored-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/flat/CAN55FZ3n8Gd%2BhajbL%3D5UkGzu_aHGRqnn%2BxktXq2fuds%3D1AOR6Q%40mail.gmail.com
2025-04-04 | Remove superfluous autoprewarm check | Melanie Plageman

autoprewarm_database_main() prewarms blocks from the same database. It is passed an array of sorted BlockInfoRecords and a start and stop index into the array. The range represented should include only blocks belonging to global objects or blocks from a single database. Remove an unnecessary check that the current block is from the same database and add an assert to ensure this invariant remains.

Doing so removes a special case that makes future refactoring to accommodate read streamifying autoprewarm easier.

Noticed off-list by Andres Freund
2025-04-04 | Fix autoprewarm neglect of tablespaces | Melanie Plageman

While prewarming blocks from a dump file, autoprewarm_database_main() mistakenly ignored tablespace when detecting the beginning of the next relation to prewarm. Because RelFileNumbers are only unique within a tablespace, autoprewarm could miss prewarming blocks from a relation with the same RelFileNumber in a different tablespace.

Though this situation is likely rare in practice, it's best to make the code correct. Do so by explicitly checking for the RelFileNumber when detecting a new relation.

Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/97c36982-603b-494a-95f4-aaf2a12ac27e%40iki.fi
2025-04-04 | Convert PathKey to use CompareType | Peter Eisentraut

Change the PathKey struct to use CompareType to record the sort direction instead of hardcoding btree strategy numbers. The CompareType is then converted to the index-type-specific strategy when the plan is created.

This reduces the number of places btree strategy numbers are hardcoded, and it's a self-contained subset of a larger effort to allow non-btree indexes to behave like btrees.

Author: Mark Dilger <mark.dilger@enterprisedb.com>
Co-authored-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
2025-04-03 | Remove misleading read stream asserts in a few users | Melanie Plageman

Several read stream users asserted that the read stream was exhausted after looping on that very condition. It was pointed out in a review of an as-of-yet uncommitted read stream user [1] that this was confusing and could lead the reader to think there was a possibility of some kind of race condition. Remove these asserts.

[1] https://postgr.es/m/F9ACE8D0-B807-4A17-B6BD-87EF0717983D%40yesql.se
2025-04-03 | Add support for sorted gist index builds to btree_gist | Heikki Linnakangas

This enables sortsupport in the btree_gist extension for faster builds of gist indexes. The sorted gist index build strategy is the new default now. Regression tests are unchanged (except for one small change in the 'enum' test to add coverage for enum values added later) and now use the sorted build strategy instead.

One version of this was committed a long time ago already, in commit 9f984ba6d2, but it was quickly reverted because of buildfarm failures. The failures were presumably caused by some small bugs, but we never got around to debugging and committing it again. This patch was written from scratch, implementing the same idea, with some fragments and ideas from the original patch.

Author: Bernd Helmle <mailings@oopsware.de>
Author: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://www.postgresql.org/message-id/64d324ce2a6d535d3f0f3baeeea7b25beff82ce4.camel@oopsware.de
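For illustration (table and column are hypothetical): with sortsupport available, a build like the following now uses the sorted strategy by default:

    CREATE EXTENSION IF NOT EXISTS btree_gist;
    CREATE TABLE events (id int, created_at timestamptz);
    CREATE INDEX events_created_gist ON events USING gist (created_at);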
2025-04-03 | Fix boilerplate comments in btree_gist | Heikki Linnakangas

A few of these were copy-pasted wrong, like the comment "Bytea ops" in btree_numeric.c. Instead of fixing the incorrect ones, replace them all with generic comment "GiST support functions".

Also tidy up the inconsistent newlines between various functions while we're at it.
2025-03-30 | read_stream: Introduce and use optional batchmode support | Andres Freund

Submitting IO in larger batches can be more efficient than doing so one-by-one, particularly for many small reads. It does, however, require the ReadStreamBlockNumberCB callback to abide by the restrictions of AIO batching (c.f. pgaio_enter_batchmode()). Basically, the callback may not:

a) block without first calling pgaio_submit_staged(), unless a to-be-waited-on lock cannot be part of a deadlock, e.g. because it is never held while waiting for IO.

b) directly or indirectly start another batch pgaio_enter_batchmode()

As this requires care and is nontrivial in some cases, batching is only used with explicit opt-in.

This patch adds an explicit flag (READ_STREAM_USE_BATCHING) to read_stream and uses it where appropriate.

There are two cases where batching would likely be beneficial, but where we aren't using it yet:

1) bitmap heap scans, because the callback reads the VM

   This should soon be solved, because we are planning to remove the use of the VM, due to that not being sound.

2) The first phase of heap vacuum

   This could be made to support batchmode, but would require some care.

Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt