author | Peter Eisentraut | 2017-10-09 01:44:17 +0000
---|---|---
committer | Peter Eisentraut | 2017-10-17 19:10:33 +0000
commit | c29c578908dc0271eeb13a4014e54bff07a29c05 |
tree | 1aa03fb6f1864719f2f23d4b0b9d5e2865764082 |
parent | 6ecabead4b5993c42745f2802d857b1a79f48bf9 |
Don't use SGML empty tags
For DocBook XML compatibility, don't use SGML empty tags (</>) anymore,
replace by the full tag name. Add a warning option to catch future
occurrences.
Alexander Lakhin, Jürgen Purtz
337 files changed, 31636 insertions, 31635 deletions
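The change in this commit is mechanical: every SGML empty end-tag (`</>`) is expanded to the full closing tag of the innermost open element, which is exactly how SGML itself resolves the shorthand. As a rough illustration (a hypothetical standalone sketch, not code from this commit or the PostgreSQL tree), the expansion can be reproduced by tracking a stack of open tags:

```python
import re

# Matches any tag: group 1 = optional "/", group 2 = optional tag name,
# group 3 = attributes/rest. An SGML empty end-tag "</>" has a "/" but
# no name; SGML resolves it to the innermost open element.
TAG = re.compile(r'<(/?)([a-zA-Z][a-zA-Z0-9]*)?([^>]*)>')

def expand_empty_tags(sgml: str) -> str:
    """Expand SGML empty end-tags (</>) to full closing tags.

    Assumes well-formed input (every </> has a matching open tag);
    comments and CDATA are not handled in this sketch.
    """
    out = []
    stack = []  # names of currently open elements
    pos = 0
    for m in TAG.finditer(sgml):
        out.append(sgml[pos:m.start()])
        slash, name, rest = m.group(1), m.group(2), m.group(3)
        if slash and name is None and not rest:
            # "</>": close the innermost open element by name.
            out.append('</%s>' % stack.pop())
        else:
            if slash:
                # Explicit close tag: pop the matching open element.
                if stack and stack[-1] == name:
                    stack.pop()
            elif name and not rest.endswith('/'):
                # Open tag (not self-closing): push its name.
                stack.append(name)
            out.append(m.group(0))
        pos = m.end()
    out.append(sgml[pos:])
    return ''.join(out)
```

Run over a fragment like the acronyms.sgml hunk below, `expand_empty_tags('<productname>PostgreSQL</>')` yields `<productname>PostgreSQL</productname>`, matching the replacements in the diff; the same stack walk, run in reverse as a detector, is what the newly enabled OpenSP `-wempty` warning provides for future occurrences.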
diff --git a/doc/src/sgml/Makefile b/doc/src/sgml/Makefile index 164c00bb63b..428eb569fc4 100644 --- a/doc/src/sgml/Makefile +++ b/doc/src/sgml/Makefile @@ -66,10 +66,11 @@ ALLSGML := $(wildcard $(srcdir)/*.sgml $(srcdir)/ref/*.sgml) $(GENERATED_SGML) # Enable some extra warnings # -wfully-tagged needed to throw a warning on missing tags # for older tool chains, 2007-08-31 -override SPFLAGS += -wall -wno-unused-param -wno-empty -wfully-tagged +override SPFLAGS += -wall -wno-unused-param -wfully-tagged # Additional warnings for XML compatibility. The conditional is meant # to detect whether we are using OpenSP rather than the ancient # original SP. +override SPFLAGS += -wempty ifneq (,$(filter o%,$(notdir $(OSX)))) override SPFLAGS += -wdata-delim -winstance-ignore-ms -winstance-include-ms -winstance-param-entity endif diff --git a/doc/src/sgml/acronyms.sgml b/doc/src/sgml/acronyms.sgml index 29f85e08468..35514d4d9ac 100644 --- a/doc/src/sgml/acronyms.sgml +++ b/doc/src/sgml/acronyms.sgml @@ -4,8 +4,8 @@ <title>Acronyms</title> <para> - This is a list of acronyms commonly used in the <productname>PostgreSQL</> - documentation and in discussions about <productname>PostgreSQL</>. + This is a list of acronyms commonly used in the <productname>PostgreSQL</productname> + documentation and in discussions about <productname>PostgreSQL</productname>. 
<variablelist> @@ -153,7 +153,7 @@ <ulink url="http://en.wikipedia.org/wiki/Data_Definition_Language">Data Definition Language</ulink>, SQL commands such as <command>CREATE - TABLE</>, <command>ALTER USER</> + TABLE</command>, <command>ALTER USER</command> </para> </listitem> </varlistentry> @@ -164,8 +164,8 @@ <para> <ulink url="http://en.wikipedia.org/wiki/Data_Manipulation_Language">Data - Manipulation Language</ulink>, SQL commands such as <command>INSERT</>, - <command>UPDATE</>, <command>DELETE</> + Manipulation Language</ulink>, SQL commands such as <command>INSERT</command>, + <command>UPDATE</command>, <command>DELETE</command> </para> </listitem> </varlistentry> @@ -281,7 +281,7 @@ <listitem> <para> <link linkend="config-setting">Grand Unified Configuration</link>, - the <productname>PostgreSQL</> subsystem that handles server configuration + the <productname>PostgreSQL</productname> subsystem that handles server configuration </para> </listitem> </varlistentry> @@ -384,7 +384,7 @@ <term><acronym>LSN</acronym></term> <listitem> <para> - Log Sequence Number, see <link linkend="datatype-pg-lsn"><type>pg_lsn</></link> + Log Sequence Number, see <link linkend="datatype-pg-lsn"><type>pg_lsn</type></link> and <link linkend="wal-internals">WAL Internals</link>. 
</para> </listitem> @@ -486,7 +486,7 @@ <term><acronym>PGSQL</acronym></term> <listitem> <para> - <link linkend="postgres"><productname>PostgreSQL</></link> + <link linkend="postgres"><productname>PostgreSQL</productname></link> </para> </listitem> </varlistentry> @@ -495,7 +495,7 @@ <term><acronym>PGXS</acronym></term> <listitem> <para> - <link linkend="extend-pgxs"><productname>PostgreSQL</> Extension System</link> + <link linkend="extend-pgxs"><productname>PostgreSQL</productname> Extension System</link> </para> </listitem> </varlistentry> diff --git a/doc/src/sgml/adminpack.sgml b/doc/src/sgml/adminpack.sgml index fddf90c4a56..b27a4a325d9 100644 --- a/doc/src/sgml/adminpack.sgml +++ b/doc/src/sgml/adminpack.sgml @@ -8,8 +8,8 @@ </indexterm> <para> - <filename>adminpack</> provides a number of support functions which - <application>pgAdmin</> and other administration and management tools can + <filename>adminpack</filename> provides a number of support functions which + <application>pgAdmin</application> and other administration and management tools can use to provide additional functionality, such as remote management of server log files. Use of all these functions is restricted to superusers. @@ -25,7 +25,7 @@ </para> <table id="functions-adminpack-table"> - <title><filename>adminpack</> Functions</title> + <title><filename>adminpack</filename> Functions</title> <tgroup cols="3"> <thead> <row><entry>Name</entry> <entry>Return Type</entry> <entry>Description</entry> @@ -58,7 +58,7 @@ <entry><function>pg_catalog.pg_logdir_ls()</function></entry> <entry><type>setof record</type></entry> <entry> - List the log files in the <varname>log_directory</> directory + List the log files in the <varname>log_directory</varname> directory </entry> </row> </tbody> @@ -69,9 +69,9 @@ <primary>pg_file_write</primary> </indexterm> <para> - <function>pg_file_write</> writes the specified <parameter>data</> into - the file named by <parameter>filename</>. 
If <parameter>append</> is - false, the file must not already exist. If <parameter>append</> is true, + <function>pg_file_write</function> writes the specified <parameter>data</parameter> into + the file named by <parameter>filename</parameter>. If <parameter>append</parameter> is + false, the file must not already exist. If <parameter>append</parameter> is true, the file can already exist, and will be appended to if so. Returns the number of bytes written. </para> @@ -80,15 +80,15 @@ <primary>pg_file_rename</primary> </indexterm> <para> - <function>pg_file_rename</> renames a file. If <parameter>archivename</> - is omitted or NULL, it simply renames <parameter>oldname</> - to <parameter>newname</> (which must not already exist). - If <parameter>archivename</> is provided, it first - renames <parameter>newname</> to <parameter>archivename</> (which must - not already exist), and then renames <parameter>oldname</> - to <parameter>newname</>. In event of failure of the second rename step, - it will try to rename <parameter>archivename</> back - to <parameter>newname</> before reporting the error. + <function>pg_file_rename</function> renames a file. If <parameter>archivename</parameter> + is omitted or NULL, it simply renames <parameter>oldname</parameter> + to <parameter>newname</parameter> (which must not already exist). + If <parameter>archivename</parameter> is provided, it first + renames <parameter>newname</parameter> to <parameter>archivename</parameter> (which must + not already exist), and then renames <parameter>oldname</parameter> + to <parameter>newname</parameter>. In event of failure of the second rename step, + it will try to rename <parameter>archivename</parameter> back + to <parameter>newname</parameter> before reporting the error. Returns true on success, false if the source file(s) are not present or not writable; other cases throw errors. 
</para> @@ -97,19 +97,19 @@ <primary>pg_file_unlink</primary> </indexterm> <para> - <function>pg_file_unlink</> removes the specified file. + <function>pg_file_unlink</function> removes the specified file. Returns true on success, false if the specified file is not present - or the <function>unlink()</> call fails; other cases throw errors. + or the <function>unlink()</function> call fails; other cases throw errors. </para> <indexterm> <primary>pg_logdir_ls</primary> </indexterm> <para> - <function>pg_logdir_ls</> returns the start timestamps and path + <function>pg_logdir_ls</function> returns the start timestamps and path names of all the log files in the <xref linkend="guc-log-directory"> directory. The <xref linkend="guc-log-filename"> parameter must have its - default setting (<literal>postgresql-%Y-%m-%d_%H%M%S.log</>) to use this + default setting (<literal>postgresql-%Y-%m-%d_%H%M%S.log</literal>) to use this function. </para> @@ -119,12 +119,12 @@ and should not be used in new applications; instead use those shown in <xref linkend="functions-admin-signal-table"> and <xref linkend="functions-admin-genfile-table">. These functions are - provided in <filename>adminpack</> only for compatibility with old - versions of <application>pgAdmin</>. + provided in <filename>adminpack</filename> only for compatibility with old + versions of <application>pgAdmin</application>. 
</para> <table id="functions-adminpack-deprecated-table"> - <title>Deprecated <filename>adminpack</> Functions</title> + <title>Deprecated <filename>adminpack</filename> Functions</title> <tgroup cols="3"> <thead> <row><entry>Name</entry> <entry>Return Type</entry> <entry>Description</entry> @@ -136,22 +136,22 @@ <entry><function>pg_catalog.pg_file_read(filename text, offset bigint, nbytes bigint)</function></entry> <entry><type>text</type></entry> <entry> - Alternate name for <function>pg_read_file()</> + Alternate name for <function>pg_read_file()</function> </entry> </row> <row> <entry><function>pg_catalog.pg_file_length(filename text)</function></entry> <entry><type>bigint</type></entry> <entry> - Same as <structfield>size</> column returned - by <function>pg_stat_file()</> + Same as <structfield>size</structfield> column returned + by <function>pg_stat_file()</function> </entry> </row> <row> <entry><function>pg_catalog.pg_logfile_rotate()</function></entry> <entry><type>integer</type></entry> <entry> - Alternate name for <function>pg_rotate_logfile()</>, but note that it + Alternate name for <function>pg_rotate_logfile()</function>, but note that it returns integer 0 or 1 rather than <type>boolean</type> </entry> </row> diff --git a/doc/src/sgml/advanced.sgml b/doc/src/sgml/advanced.sgml index f47c01987be..bf87df4dcb1 100644 --- a/doc/src/sgml/advanced.sgml +++ b/doc/src/sgml/advanced.sgml @@ -145,7 +145,7 @@ DETAIL: Key (city)=(Berkeley) is not present in table "cities". </indexterm> <para> - <firstterm>Transactions</> are a fundamental concept of all database + <firstterm>Transactions</firstterm> are a fundamental concept of all database systems. The essential point of a transaction is that it bundles multiple steps into a single, all-or-nothing operation. 
The intermediate states between the steps are not visible to other concurrent transactions, @@ -182,8 +182,8 @@ UPDATE branches SET balance = balance + 100.00 remain a happy customer if she was debited without Bob being credited. We need a guarantee that if something goes wrong partway through the operation, none of the steps executed so far will take effect. Grouping - the updates into a <firstterm>transaction</> gives us this guarantee. - A transaction is said to be <firstterm>atomic</>: from the point of + the updates into a <firstterm>transaction</firstterm> gives us this guarantee. + A transaction is said to be <firstterm>atomic</firstterm>: from the point of view of other transactions, it either happens completely or not at all. </para> @@ -216,9 +216,9 @@ UPDATE branches SET balance = balance + 100.00 </para> <para> - In <productname>PostgreSQL</>, a transaction is set up by surrounding + In <productname>PostgreSQL</productname>, a transaction is set up by surrounding the SQL commands of the transaction with - <command>BEGIN</> and <command>COMMIT</> commands. So our banking + <command>BEGIN</command> and <command>COMMIT</command> commands. So our banking transaction would actually look like: <programlisting> @@ -233,23 +233,23 @@ COMMIT; <para> If, partway through the transaction, we decide we do not want to commit (perhaps we just noticed that Alice's balance went negative), - we can issue the command <command>ROLLBACK</> instead of - <command>COMMIT</>, and all our updates so far will be canceled. + we can issue the command <command>ROLLBACK</command> instead of + <command>COMMIT</command>, and all our updates so far will be canceled. </para> <para> - <productname>PostgreSQL</> actually treats every SQL statement as being - executed within a transaction. If you do not issue a <command>BEGIN</> + <productname>PostgreSQL</productname> actually treats every SQL statement as being + executed within a transaction. 
If you do not issue a <command>BEGIN</command> command, - then each individual statement has an implicit <command>BEGIN</> and - (if successful) <command>COMMIT</> wrapped around it. A group of - statements surrounded by <command>BEGIN</> and <command>COMMIT</> - is sometimes called a <firstterm>transaction block</>. + then each individual statement has an implicit <command>BEGIN</command> and + (if successful) <command>COMMIT</command> wrapped around it. A group of + statements surrounded by <command>BEGIN</command> and <command>COMMIT</command> + is sometimes called a <firstterm>transaction block</firstterm>. </para> <note> <para> - Some client libraries issue <command>BEGIN</> and <command>COMMIT</> + Some client libraries issue <command>BEGIN</command> and <command>COMMIT</command> commands automatically, so that you might get the effect of transaction blocks without asking. Check the documentation for the interface you are using. @@ -258,11 +258,11 @@ COMMIT; <para> It's possible to control the statements in a transaction in a more - granular fashion through the use of <firstterm>savepoints</>. Savepoints + granular fashion through the use of <firstterm>savepoints</firstterm>. Savepoints allow you to selectively discard parts of the transaction, while committing the rest. After defining a savepoint with - <command>SAVEPOINT</>, you can if needed roll back to the savepoint - with <command>ROLLBACK TO</>. All the transaction's database changes + <command>SAVEPOINT</command>, you can if needed roll back to the savepoint + with <command>ROLLBACK TO</command>. All the transaction's database changes between defining the savepoint and rolling back to it are discarded, but changes earlier than the savepoint are kept. </para> @@ -308,7 +308,7 @@ COMMIT; <para> This example is, of course, oversimplified, but there's a lot of control possible in a transaction block through the use of savepoints. 
- Moreover, <command>ROLLBACK TO</> is the only way to regain control of a + Moreover, <command>ROLLBACK TO</command> is the only way to regain control of a transaction block that was put in aborted state by the system due to an error, short of rolling it back completely and starting again. @@ -325,7 +325,7 @@ COMMIT; </indexterm> <para> - A <firstterm>window function</> performs a calculation across a set of + A <firstterm>window function</firstterm> performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. However, window functions do not cause rows to become grouped into a single @@ -360,31 +360,31 @@ SELECT depname, empno, salary, avg(salary) OVER (PARTITION BY depname) FROM emps </screen> The first three output columns come directly from the table - <structname>empsalary</>, and there is one output row for each row in the + <structname>empsalary</structname>, and there is one output row for each row in the table. The fourth column represents an average taken across all the table - rows that have the same <structfield>depname</> value as the current row. - (This actually is the same function as the non-window <function>avg</> - aggregate, but the <literal>OVER</> clause causes it to be + rows that have the same <structfield>depname</structfield> value as the current row. + (This actually is the same function as the non-window <function>avg</function> + aggregate, but the <literal>OVER</literal> clause causes it to be treated as a window function and computed across the window frame.) </para> <para> - A window function call always contains an <literal>OVER</> clause + A window function call always contains an <literal>OVER</literal> clause directly following the window function's name and argument(s). This is what syntactically distinguishes it from a normal function or non-window - aggregate. 
The <literal>OVER</> clause determines exactly how the + aggregate. The <literal>OVER</literal> clause determines exactly how the rows of the query are split up for processing by the window function. - The <literal>PARTITION BY</> clause within <literal>OVER</> + The <literal>PARTITION BY</literal> clause within <literal>OVER</literal> divides the rows into groups, or partitions, that share the same - values of the <literal>PARTITION BY</> expression(s). For each row, + values of the <literal>PARTITION BY</literal> expression(s). For each row, the window function is computed across the rows that fall into the same partition as the current row. </para> <para> You can also control the order in which rows are processed by - window functions using <literal>ORDER BY</> within <literal>OVER</>. - (The window <literal>ORDER BY</> does not even have to match the + window functions using <literal>ORDER BY</literal> within <literal>OVER</literal>. + (The window <literal>ORDER BY</literal> does not even have to match the order in which the rows are output.) Here is an example: <programlisting> @@ -409,39 +409,39 @@ FROM empsalary; (10 rows) </screen> - As shown here, the <function>rank</> function produces a numerical rank - for each distinct <literal>ORDER BY</> value in the current row's - partition, using the order defined by the <literal>ORDER BY</> clause. - <function>rank</> needs no explicit parameter, because its behavior - is entirely determined by the <literal>OVER</> clause. + As shown here, the <function>rank</function> function produces a numerical rank + for each distinct <literal>ORDER BY</literal> value in the current row's + partition, using the order defined by the <literal>ORDER BY</literal> clause. + <function>rank</function> needs no explicit parameter, because its behavior + is entirely determined by the <literal>OVER</literal> clause. 
</para> <para> The rows considered by a window function are those of the <quote>virtual - table</> produced by the query's <literal>FROM</> clause as filtered by its - <literal>WHERE</>, <literal>GROUP BY</>, and <literal>HAVING</> clauses + table</quote> produced by the query's <literal>FROM</literal> clause as filtered by its + <literal>WHERE</literal>, <literal>GROUP BY</literal>, and <literal>HAVING</literal> clauses if any. For example, a row removed because it does not meet the - <literal>WHERE</> condition is not seen by any window function. + <literal>WHERE</literal> condition is not seen by any window function. A query can contain multiple window functions that slice up the data - in different ways using different <literal>OVER</> clauses, but + in different ways using different <literal>OVER</literal> clauses, but they all act on the same collection of rows defined by this virtual table. </para> <para> - We already saw that <literal>ORDER BY</> can be omitted if the ordering + We already saw that <literal>ORDER BY</literal> can be omitted if the ordering of rows is not important. It is also possible to omit <literal>PARTITION - BY</>, in which case there is a single partition containing all rows. + BY</literal>, in which case there is a single partition containing all rows. </para> <para> There is another important concept associated with window functions: for each row, there is a set of rows within its partition called its - <firstterm>window frame</>. Some window functions act only + <firstterm>window frame</firstterm>. Some window functions act only on the rows of the window frame, rather than of the whole partition. - By default, if <literal>ORDER BY</> is supplied then the frame consists of + By default, if <literal>ORDER BY</literal> is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the - <literal>ORDER BY</> clause. 
When <literal>ORDER BY</> is omitted the + <literal>ORDER BY</literal> clause. When <literal>ORDER BY</literal> is omitted the default frame consists of all rows in the partition. <footnote> <para> @@ -450,7 +450,7 @@ FROM empsalary; <xref linkend="syntax-window-functions"> for details. </para> </footnote> - Here is an example using <function>sum</>: + Here is an example using <function>sum</function>: </para> <programlisting> @@ -474,11 +474,11 @@ SELECT salary, sum(salary) OVER () FROM empsalary; </screen> <para> - Above, since there is no <literal>ORDER BY</> in the <literal>OVER</> + Above, since there is no <literal>ORDER BY</literal> in the <literal>OVER</literal> clause, the window frame is the same as the partition, which for lack of - <literal>PARTITION BY</> is the whole table; in other words each sum is + <literal>PARTITION BY</literal> is the whole table; in other words each sum is taken over the whole table and so we get the same result for each output - row. But if we add an <literal>ORDER BY</> clause, we get very different + row. But if we add an <literal>ORDER BY</literal> clause, we get very different results: </para> @@ -510,8 +510,8 @@ SELECT salary, sum(salary) OVER (ORDER BY salary) FROM empsalary; <para> Window functions are permitted only in the <literal>SELECT</literal> list - and the <literal>ORDER BY</> clause of the query. They are forbidden - elsewhere, such as in <literal>GROUP BY</>, <literal>HAVING</> + and the <literal>ORDER BY</literal> clause of the query. They are forbidden + elsewhere, such as in <literal>GROUP BY</literal>, <literal>HAVING</literal> and <literal>WHERE</literal> clauses. This is because they logically execute after the processing of those clauses. Also, window functions execute after non-window aggregate functions. This means it is valid to @@ -534,15 +534,15 @@ WHERE pos < 3; </programlisting> The above query only shows the rows from the inner query having - <literal>rank</> less than 3. 
+ <literal>rank</literal> less than 3. </para> <para> When a query involves multiple window functions, it is possible to write - out each one with a separate <literal>OVER</> clause, but this is + out each one with a separate <literal>OVER</literal> clause, but this is duplicative and error-prone if the same windowing behavior is wanted for several functions. Instead, each windowing behavior can be named - in a <literal>WINDOW</> clause and then referenced in <literal>OVER</>. + in a <literal>WINDOW</literal> clause and then referenced in <literal>OVER</literal>. For example: <programlisting> @@ -623,13 +623,13 @@ CREATE TABLE capitals ( <para> In this case, a row of <classname>capitals</classname> - <firstterm>inherits</firstterm> all columns (<structfield>name</>, - <structfield>population</>, and <structfield>altitude</>) from its + <firstterm>inherits</firstterm> all columns (<structfield>name</structfield>, + <structfield>population</structfield>, and <structfield>altitude</structfield>) from its <firstterm>parent</firstterm>, <classname>cities</classname>. The type of the column <structfield>name</structfield> is <type>text</type>, a native <productname>PostgreSQL</productname> type for variable length character strings. State capitals have - an extra column, <structfield>state</>, that shows their state. In + an extra column, <structfield>state</structfield>, that shows their state. In <productname>PostgreSQL</productname>, a table can inherit from zero or more other tables. </para> diff --git a/doc/src/sgml/amcheck.sgml b/doc/src/sgml/amcheck.sgml index dd71dbd679b..0dd68f0ba14 100644 --- a/doc/src/sgml/amcheck.sgml +++ b/doc/src/sgml/amcheck.sgml @@ -8,19 +8,19 @@ </indexterm> <para> - The <filename>amcheck</> module provides functions that allow you to + The <filename>amcheck</filename> module provides functions that allow you to verify the logical consistency of the structure of indexes. If the structure appears to be valid, no error is raised. 
</para> <para> - The functions verify various <emphasis>invariants</> in the + The functions verify various <emphasis>invariants</emphasis> in the structure of the representation of particular indexes. The correctness of the access method functions behind index scans and other important operations relies on these invariants always holding. For example, certain functions verify, among other things, - that all B-Tree pages have items in <quote>logical</> order (e.g., - for B-Tree indexes on <type>text</>, index tuples should be in + that all B-Tree pages have items in <quote>logical</quote> order (e.g., + for B-Tree indexes on <type>text</type>, index tuples should be in collated lexical order). If that particular invariant somehow fails to hold, we can expect binary searches on the affected page to incorrectly guide index scans, resulting in wrong answers to SQL @@ -35,7 +35,7 @@ functions. </para> <para> - <filename>amcheck</> functions may be used only by superusers. + <filename>amcheck</filename> functions may be used only by superusers. </para> <sect2> @@ -82,7 +82,7 @@ ORDER BY c.relpages DESC LIMIT 10; (10 rows) </screen> This example shows a session that performs verification of every - catalog index in the database <quote>test</>. Details of just + catalog index in the database <quote>test</quote>. Details of just the 10 largest indexes verified are displayed. Since no error is raised, all indexes tested appear to be logically consistent. Naturally, this query could easily be changed to call @@ -90,10 +90,10 @@ ORDER BY c.relpages DESC LIMIT 10; database where verification is supported. </para> <para> - <function>bt_index_check</function> acquires an <literal>AccessShareLock</> + <function>bt_index_check</function> acquires an <literal>AccessShareLock</literal> on the target index and the heap relation it belongs to. This lock mode is the same lock mode acquired on relations by simple - <literal>SELECT</> statements. + <literal>SELECT</literal> statements. 
<function>bt_index_check</function> does not verify invariants that span child/parent relationships, nor does it verify that the target index is consistent with its heap relation. When a @@ -132,13 +132,13 @@ ORDER BY c.relpages DESC LIMIT 10; logical inconsistency or other problem. </para> <para> - A <literal>ShareLock</> is required on the target index by + A <literal>ShareLock</literal> is required on the target index by <function>bt_index_parent_check</function> (a - <literal>ShareLock</> is also acquired on the heap relation). + <literal>ShareLock</literal> is also acquired on the heap relation). These locks prevent concurrent data modification from - <command>INSERT</>, <command>UPDATE</>, and <command>DELETE</> + <command>INSERT</command>, <command>UPDATE</command>, and <command>DELETE</command> commands. The locks also prevent the underlying relation from - being concurrently processed by <command>VACUUM</>, as well as + being concurrently processed by <command>VACUUM</command>, as well as all other utility commands. Note that the function holds locks only while running, not for the entire transaction. </para> @@ -159,13 +159,13 @@ ORDER BY c.relpages DESC LIMIT 10; </sect2> <sect2> - <title>Using <filename>amcheck</> effectively</title> + <title>Using <filename>amcheck</filename> effectively</title> <para> - <filename>amcheck</> can be effective at detecting various types of + <filename>amcheck</filename> can be effective at detecting various types of failure modes that <link linkend="app-initdb-data-checksums"><application>data page - checksums</></link> will always fail to catch. These include: + checksums</application></link> will always fail to catch. These include: <itemizedlist> <listitem> @@ -176,13 +176,13 @@ ORDER BY c.relpages DESC LIMIT 10; <para> This includes issues caused by the comparison rules of operating system collations changing. 
Comparisons of datums of a collatable - type like <type>text</> must be immutable (just as all + type like <type>text</type> must be immutable (just as all comparisons used for B-Tree index scans must be immutable), which implies that operating system collation rules must never change. Though rare, updates to operating system collation rules can cause these issues. More commonly, an inconsistency in the collation order between a master server and a standby server is - implicated, possibly because the <emphasis>major</> operating + implicated, possibly because the <emphasis>major</emphasis> operating system version in use is inconsistent. Such inconsistencies will generally only arise on standby servers, and so can generally only be detected on standby servers. @@ -190,25 +190,25 @@ ORDER BY c.relpages DESC LIMIT 10; <para> If a problem like this arises, it may not affect each individual index that is ordered using an affected collation, simply because - <emphasis>indexed</> values might happen to have the same + <emphasis>indexed</emphasis> values might happen to have the same absolute ordering regardless of the behavioral inconsistency. See <xref linkend="locale"> and <xref linkend="collation"> for - further details about how <productname>PostgreSQL</> uses + further details about how <productname>PostgreSQL</productname> uses operating system locales and collations. </para> </listitem> <listitem> <para> Corruption caused by hypothetical undiscovered bugs in the - underlying <productname>PostgreSQL</> access method code or sort + underlying <productname>PostgreSQL</productname> access method code or sort code. </para> <para> Automatic verification of the structural integrity of indexes plays a role in the general testing of new or proposed - <productname>PostgreSQL</> features that could plausibly allow a + <productname>PostgreSQL</productname> features that could plausibly allow a logical inconsistency to be introduced. 
One obvious testing - strategy is to call <filename>amcheck</> functions continuously + strategy is to call <filename>amcheck</filename> functions continuously when running the standard regression tests. See <xref linkend="regress-run"> for details on running the tests. </para> @@ -219,12 +219,12 @@ ORDER BY c.relpages DESC LIMIT 10; simply not be enabled. </para> <para> - Note that <filename>amcheck</> examines a page as represented in some + Note that <filename>amcheck</filename> examines a page as represented in some shared memory buffer at the time of verification if there is only a shared buffer hit when accessing the block. Consequently, - <filename>amcheck</> does not necessarily examine data read from the + <filename>amcheck</filename> does not necessarily examine data read from the file system at the time of verification. Note that when checksums are - enabled, <filename>amcheck</> may raise an error due to a checksum + enabled, <filename>amcheck</filename> may raise an error due to a checksum failure when a corrupt block is read into a buffer. </para> </listitem> @@ -234,7 +234,7 @@ ORDER BY c.relpages DESC LIMIT 10; and operating system. </para> <para> - <productname>PostgreSQL</> does not protect against correctable + <productname>PostgreSQL</productname> does not protect against correctable memory errors and it is assumed you will operate using RAM that uses industry standard Error Correcting Codes (ECC) or better protection. However, ECC memory is typically only immune to @@ -244,7 +244,7 @@ ORDER BY c.relpages DESC LIMIT 10; </para> </listitem> </itemizedlist> - In general, <filename>amcheck</> can only prove the presence of + In general, <filename>amcheck</filename> can only prove the presence of corruption; it cannot prove its absence. </para> @@ -252,19 +252,19 @@ ORDER BY c.relpages DESC LIMIT 10; <sect2> <title>Repairing corruption</title> <para> - No error concerning corruption raised by <filename>amcheck</> should - ever be a false positive. 
In practice, <filename>amcheck</> is more + No error concerning corruption raised by <filename>amcheck</filename> should + ever be a false positive. In practice, <filename>amcheck</filename> is more likely to find software bugs than problems with hardware. - <filename>amcheck</> raises errors in the event of conditions that, + <filename>amcheck</filename> raises errors in the event of conditions that, by definition, should never happen, and so careful analysis of - <filename>amcheck</> errors is often required. + <filename>amcheck</filename> errors is often required. </para> <para> There is no general method of repairing problems that - <filename>amcheck</> detects. An explanation for the root cause of + <filename>amcheck</filename> detects. An explanation for the root cause of an invariant violation should be sought. <xref linkend="pageinspect"> may play a useful role in diagnosing - corruption that <filename>amcheck</> detects. A <command>REINDEX</> + corruption that <filename>amcheck</filename> detects. A <command>REINDEX</command> may not be effective in repairing corruption. </para> diff --git a/doc/src/sgml/arch-dev.sgml b/doc/src/sgml/arch-dev.sgml index c835e87215e..5423aadb9c8 100644 --- a/doc/src/sgml/arch-dev.sgml +++ b/doc/src/sgml/arch-dev.sgml @@ -118,7 +118,7 @@ <para> <productname>PostgreSQL</productname> is implemented using a - simple <quote>process per user</> client/server model. In this model + simple <quote>process per user</quote> client/server model. In this model there is one <firstterm>client process</firstterm> connected to exactly one <firstterm>server process</firstterm>. As we do not know ahead of time how many connections will be made, we have to @@ -137,9 +137,9 @@ The client process can be any program that understands the <productname>PostgreSQL</productname> protocol described in <xref linkend="protocol">. 
Many clients are based on the - C-language library <application>libpq</>, but several independent + C-language library <application>libpq</application>, but several independent implementations of the protocol exist, such as the Java - <application>JDBC</> driver. + <application>JDBC</application> driver. </para> <para> @@ -184,8 +184,8 @@ text) for valid syntax. If the syntax is correct a <firstterm>parse tree</firstterm> is built up and handed back; otherwise an error is returned. The parser and lexer are - implemented using the well-known Unix tools <application>bison</> - and <application>flex</>. + implemented using the well-known Unix tools <application>bison</application> + and <application>flex</application>. </para> <para> @@ -251,7 +251,7 @@ back by the parser as input and does the semantic interpretation needed to understand which tables, functions, and operators are referenced by the query. The data structure that is built to represent this - information is called the <firstterm>query tree</>. + information is called the <firstterm>query tree</firstterm>. </para> <para> @@ -259,10 +259,10 @@ system catalog lookups can only be done within a transaction, and we do not wish to start a transaction immediately upon receiving a query string. The raw parsing stage is sufficient to identify the transaction - control commands (<command>BEGIN</>, <command>ROLLBACK</>, etc), and + control commands (<command>BEGIN</command>, <command>ROLLBACK</command>, etc), and these can then be correctly executed without any further analysis. Once we know that we are dealing with an actual query (such as - <command>SELECT</> or <command>UPDATE</>), it is okay to + <command>SELECT</command> or <command>UPDATE</command>), it is okay to start a transaction if we're not already in one. Only then can the transformation process be invoked. 
</para> @@ -270,10 +270,10 @@ <para> The query tree created by the transformation process is structurally similar to the raw parse tree in most places, but it has many differences - in detail. For example, a <structname>FuncCall</> node in the + in detail. For example, a <structname>FuncCall</structname> node in the parse tree represents something that looks syntactically like a function - call. This might be transformed to either a <structname>FuncExpr</> - or <structname>Aggref</> node depending on whether the referenced + call. This might be transformed to either a <structname>FuncExpr</structname> + or <structname>Aggref</structname> node depending on whether the referenced name turns out to be an ordinary function or an aggregate function. Also, information about the actual data types of columns and expression results is added to the query tree. @@ -354,10 +354,10 @@ <para> The planner's search procedure actually works with data structures - called <firstterm>paths</>, which are simply cut-down representations of + called <firstterm>paths</firstterm>, which are simply cut-down representations of plans containing only as much information as the planner needs to make its decisions. After the cheapest path is determined, a full-fledged - <firstterm>plan tree</> is built to pass to the executor. This represents + <firstterm>plan tree</firstterm> is built to pass to the executor. This represents the desired execution plan in sufficient detail for the executor to run it. In the rest of this section we'll ignore the distinction between paths and plans. @@ -378,12 +378,12 @@ <literal>relation.attribute OPR constant</literal>. If <literal>relation.attribute</literal> happens to match the key of the B-tree index and <literal>OPR</literal> is one of the operators listed in - the index's <firstterm>operator class</>, another plan is created using + the index's <firstterm>operator class</firstterm>, another plan is created using the B-tree index to scan the relation. 
If there are further indexes present and the restrictions in the query happen to match a key of an index, further plans will be considered. Index scan plans are also generated for indexes that have a sort ordering that can match the - query's <literal>ORDER BY</> clause (if any), or a sort ordering that + query's <literal>ORDER BY</literal> clause (if any), or a sort ordering that might be useful for merge joining (see below). </para> @@ -462,9 +462,9 @@ the base relations, plus nested-loop, merge, or hash join nodes as needed, plus any auxiliary steps needed, such as sort nodes or aggregate-function calculation nodes. Most of these plan node - types have the additional ability to do <firstterm>selection</> + types have the additional ability to do <firstterm>selection</firstterm> (discarding rows that do not meet a specified Boolean condition) - and <firstterm>projection</> (computation of a derived column set + and <firstterm>projection</firstterm> (computation of a derived column set based on given column values, that is, evaluation of scalar expressions where needed). One of the responsibilities of the planner is to attach selection conditions from the @@ -496,7 +496,7 @@ subplan) is, let's say, a <literal>Sort</literal> node and again recursion is needed to obtain an input row. The child node of the <literal>Sort</literal> might - be a <literal>SeqScan</> node, representing actual reading of a table. + be a <literal>SeqScan</literal> node, representing actual reading of a table. Execution of this node causes the executor to fetch a row from the table and return it up to the calling node. The <literal>Sort</literal> node will repeatedly call its child to obtain all the rows to be sorted. @@ -529,24 +529,24 @@ <para> The executor mechanism is used to evaluate all four basic SQL query types: - <command>SELECT</>, <command>INSERT</>, <command>UPDATE</>, and - <command>DELETE</>. 
For <command>SELECT</>, the top-level executor + <command>SELECT</command>, <command>INSERT</command>, <command>UPDATE</command>, and + <command>DELETE</command>. For <command>SELECT</command>, the top-level executor code only needs to send each row returned by the query plan tree off - to the client. For <command>INSERT</>, each returned row is inserted - into the target table specified for the <command>INSERT</>. This is - done in a special top-level plan node called <literal>ModifyTable</>. + to the client. For <command>INSERT</command>, each returned row is inserted + into the target table specified for the <command>INSERT</command>. This is + done in a special top-level plan node called <literal>ModifyTable</literal>. (A simple - <command>INSERT ... VALUES</> command creates a trivial plan tree - consisting of a single <literal>Result</> node, which computes just one - result row, and <literal>ModifyTable</> above it to perform the insertion. - But <command>INSERT ... SELECT</> can demand the full power - of the executor mechanism.) For <command>UPDATE</>, the planner arranges + <command>INSERT ... VALUES</command> command creates a trivial plan tree + consisting of a single <literal>Result</literal> node, which computes just one + result row, and <literal>ModifyTable</literal> above it to perform the insertion. + But <command>INSERT ... SELECT</command> can demand the full power + of the executor mechanism.) For <command>UPDATE</command>, the planner arranges that each computed row includes all the updated column values, plus - the <firstterm>TID</> (tuple ID, or row ID) of the original target row; - this data is fed into a <literal>ModifyTable</> node, which uses the + the <firstterm>TID</firstterm> (tuple ID, or row ID) of the original target row; + this data is fed into a <literal>ModifyTable</literal> node, which uses the information to create a new updated row and mark the old row deleted. 
- For <command>DELETE</>, the only column that is actually returned by the - plan is the TID, and the <literal>ModifyTable</> node simply uses the TID + For <command>DELETE</command>, the only column that is actually returned by the + plan is the TID, and the <literal>ModifyTable</literal> node simply uses the TID to visit each target row and mark it deleted. </para> diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 88eb4be04d0..9187f6e02e7 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -32,7 +32,7 @@ CREATE TABLE sal_emp ( ); </programlisting> As shown, an array data type is named by appending square brackets - (<literal>[]</>) to the data type name of the array elements. The + (<literal>[]</literal>) to the data type name of the array elements. The above command will create a table named <structname>sal_emp</structname> with a column of type <type>text</type> (<structfield>name</structfield>), a @@ -69,7 +69,7 @@ CREATE TABLE tictactoe ( <para> An alternative syntax, which conforms to the SQL standard by using - the keyword <literal>ARRAY</>, can be used for one-dimensional arrays. + the keyword <literal>ARRAY</literal>, can be used for one-dimensional arrays. <structfield>pay_by_quarter</structfield> could have been defined as: <programlisting> @@ -79,7 +79,7 @@ CREATE TABLE tictactoe ( <programlisting> pay_by_quarter integer ARRAY, </programlisting> - As before, however, <productname>PostgreSQL</> does not enforce the + As before, however, <productname>PostgreSQL</productname> does not enforce the size restriction in any case. </para> </sect2> @@ -107,8 +107,8 @@ CREATE TABLE tictactoe ( for the type, as recorded in its <literal>pg_type</literal> entry. Among the standard data types provided in the <productname>PostgreSQL</productname> distribution, all use a comma - (<literal>,</>), except for type <type>box</> which uses a semicolon - (<literal>;</>). 
Each <replaceable>val</replaceable> is + (<literal>,</literal>), except for type <type>box</type> which uses a semicolon + (<literal>;</literal>). Each <replaceable>val</replaceable> is either a constant of the array element type, or a subarray. An example of an array constant is: <programlisting> @@ -119,10 +119,10 @@ CREATE TABLE tictactoe ( </para> <para> - To set an element of an array constant to NULL, write <literal>NULL</> + To set an element of an array constant to NULL, write <literal>NULL</literal> for the element value. (Any upper- or lower-case variant of - <literal>NULL</> will do.) If you want an actual string value - <quote>NULL</>, you must put double quotes around it. + <literal>NULL</literal> will do.) If you want an actual string value + <quote>NULL</quote>, you must put double quotes around it. </para> <para> @@ -176,7 +176,7 @@ ERROR: multidimensional arrays must have array expressions with matching dimens </para> <para> - The <literal>ARRAY</> constructor syntax can also be used: + The <literal>ARRAY</literal> constructor syntax can also be used: <programlisting> INSERT INTO sal_emp VALUES ('Bill', @@ -190,7 +190,7 @@ INSERT INTO sal_emp </programlisting> Notice that the array elements are ordinary SQL constants or expressions; for instance, string literals are single quoted, instead of - double quoted as they would be in an array literal. The <literal>ARRAY</> + double quoted as they would be in an array literal. The <literal>ARRAY</literal> constructor syntax is discussed in more detail in <xref linkend="sql-syntax-array-constructors">. </para> @@ -222,8 +222,8 @@ SELECT name FROM sal_emp WHERE pay_by_quarter[1] <> pay_by_quarter[2]; The array subscript numbers are written within square brackets. By default <productname>PostgreSQL</productname> uses a one-based numbering convention for arrays, that is, - an array of <replaceable>n</> elements starts with <literal>array[1]</literal> and - ends with <literal>array[<replaceable>n</>]</literal>. 
+ an array of <replaceable>n</replaceable> elements starts with <literal>array[1]</literal> and + ends with <literal>array[<replaceable>n</replaceable>]</literal>. </para> <para> @@ -259,8 +259,8 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill'; If any dimension is written as a slice, i.e., contains a colon, then all dimensions are treated as slices. Any dimension that has only a single number (no colon) is treated as being from 1 - to the number specified. For example, <literal>[2]</> is treated as - <literal>[1:2]</>, as in this example: + to the number specified. For example, <literal>[2]</literal> is treated as + <literal>[1:2]</literal>, as in this example: <programlisting> SELECT schedule[1:2][2] FROM sal_emp WHERE name = 'Bill'; @@ -272,7 +272,7 @@ SELECT schedule[1:2][2] FROM sal_emp WHERE name = 'Bill'; </programlisting> To avoid confusion with the non-slice case, it's best to use slice syntax - for all dimensions, e.g., <literal>[1:2][1:1]</>, not <literal>[2][1:1]</>. + for all dimensions, e.g., <literal>[1:2][1:1]</literal>, not <literal>[2][1:1]</literal>. </para> <para> @@ -302,9 +302,9 @@ SELECT schedule[:][1:1] FROM sal_emp WHERE name = 'Bill'; An array subscript expression will return null if either the array itself or any of the subscript expressions are null. Also, null is returned if a subscript is outside the array bounds (this case does not raise an error). - For example, if <literal>schedule</> - currently has the dimensions <literal>[1:3][1:2]</> then referencing - <literal>schedule[3][3]</> yields NULL. Similarly, an array reference + For example, if <literal>schedule</literal> + currently has the dimensions <literal>[1:3][1:2]</literal> then referencing + <literal>schedule[3][3]</literal> yields NULL. Similarly, an array reference with the wrong number of subscripts yields a null rather than an error. 
</para> @@ -423,16 +423,16 @@ UPDATE sal_emp SET pay_by_quarter[1:2] = '{27000,27000}' A stored array value can be enlarged by assigning to elements not already present. Any positions between those previously present and the newly assigned elements will be filled with nulls. For example, if array - <literal>myarray</> currently has 4 elements, it will have six - elements after an update that assigns to <literal>myarray[6]</>; - <literal>myarray[5]</> will contain null. + <literal>myarray</literal> currently has 4 elements, it will have six + elements after an update that assigns to <literal>myarray[6]</literal>; + <literal>myarray[5]</literal> will contain null. Currently, enlargement in this fashion is only allowed for one-dimensional arrays, not multidimensional arrays. </para> <para> Subscripted assignment allows creation of arrays that do not use one-based - subscripts. For example one might assign to <literal>myarray[-2:7]</> to + subscripts. For example one might assign to <literal>myarray[-2:7]</literal> to create an array with subscript values from -2 to 7. </para> @@ -457,8 +457,8 @@ SELECT ARRAY[5,6] || ARRAY[[1,2],[3,4]]; <para> The concatenation operator allows a single element to be pushed onto the beginning or end of a one-dimensional array. It also accepts two - <replaceable>N</>-dimensional arrays, or an <replaceable>N</>-dimensional - and an <replaceable>N+1</>-dimensional array. + <replaceable>N</replaceable>-dimensional arrays, or an <replaceable>N</replaceable>-dimensional + and an <replaceable>N+1</replaceable>-dimensional array. </para> <para> @@ -501,10 +501,10 @@ SELECT array_dims(ARRAY[[1,2],[3,4]] || ARRAY[[5,6],[7,8],[9,0]]); </para> <para> - When an <replaceable>N</>-dimensional array is pushed onto the beginning - or end of an <replaceable>N+1</>-dimensional array, the result is - analogous to the element-array case above. 
Each <replaceable>N</>-dimensional - sub-array is essentially an element of the <replaceable>N+1</>-dimensional + When an <replaceable>N</replaceable>-dimensional array is pushed onto the beginning + or end of an <replaceable>N+1</replaceable>-dimensional array, the result is + analogous to the element-array case above. Each <replaceable>N</replaceable>-dimensional + sub-array is essentially an element of the <replaceable>N+1</replaceable>-dimensional array's outer dimension. For example: <programlisting> SELECT array_dims(ARRAY[1,2] || ARRAY[[3,4],[5,6]]); @@ -587,9 +587,9 @@ SELECT array_append(ARRAY[1, 2], NULL); -- this might have been meant The heuristic it uses to resolve the constant's type is to assume it's of the same type as the operator's other input — in this case, integer array. So the concatenation operator is presumed to - represent <function>array_cat</>, not <function>array_append</>. When + represent <function>array_cat</function>, not <function>array_append</function>. When that's the wrong choice, it could be fixed by casting the constant to the - array's element type; but explicit use of <function>array_append</> might + array's element type; but explicit use of <function>array_append</function> might be a preferable solution. </para> </sect2> @@ -633,7 +633,7 @@ SELECT * FROM sal_emp WHERE 10000 = ALL (pay_by_quarter); </para> <para> - Alternatively, the <function>generate_subscripts</> function can be used. + Alternatively, the <function>generate_subscripts</function> function can be used. For example: <programlisting> @@ -648,7 +648,7 @@ SELECT * FROM </para> <para> - You can also search an array using the <literal>&&</> operator, + You can also search an array using the <literal>&&</literal> operator, which checks whether the left operand overlaps with the right operand. 
For instance: @@ -662,8 +662,8 @@ SELECT * FROM sal_emp WHERE pay_by_quarter && ARRAY[10000]; </para> <para> - You can also search for specific values in an array using the <function>array_position</> - and <function>array_positions</> functions. The former returns the subscript of + You can also search for specific values in an array using the <function>array_position</function> + and <function>array_positions</function> functions. The former returns the subscript of the first occurrence of a value in an array; the latter returns an array with the subscripts of all occurrences of the value in the array. For example: @@ -703,13 +703,13 @@ SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1); The external text representation of an array value consists of items that are interpreted according to the I/O conversion rules for the array's element type, plus decoration that indicates the array structure. - The decoration consists of curly braces (<literal>{</> and <literal>}</>) + The decoration consists of curly braces (<literal>{</literal> and <literal>}</literal>) around the array value plus delimiter characters between adjacent items. - The delimiter character is usually a comma (<literal>,</>) but can be - something else: it is determined by the <literal>typdelim</> setting + The delimiter character is usually a comma (<literal>,</literal>) but can be + something else: it is determined by the <literal>typdelim</literal> setting for the array's element type. Among the standard data types provided in the <productname>PostgreSQL</productname> distribution, all use a comma, - except for type <type>box</>, which uses a semicolon (<literal>;</>). + except for type <type>box</type>, which uses a semicolon (<literal>;</literal>). In a multidimensional array, each dimension (row, plane, cube, etc.) gets its own level of curly braces, and delimiters must be written between adjacent curly-braced entities of the same level. 
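The text-representation rules restated in the hunk above (curly-brace decoration, a comma delimiter by default, double-quoted elements with backslash escapes, and a bare `NULL` meaning a null element) can be sketched as a tiny parser. The following Python snippet is an illustrative sketch only, assuming the default comma delimiter and a one-dimensional array; `parse_array_literal` is a hypothetical helper, not part of PostgreSQL, and the server's real input routine handles many more cases.

```python
def parse_array_literal(s):
    """Parse a one-dimensional PostgreSQL-style array literal such as
    '{"a b",NULL,3}'.  Illustrative sketch only: assumes the default
    comma delimiter; the server's rules cover more cases."""
    s = s.strip()
    if not (s.startswith("{") and s.endswith("}")):
        raise ValueError("array literal must be wrapped in curly braces")
    body, items, i = s[1:-1], [], 0
    while i < len(body):
        if body[i] == '"':                       # double-quoted element
            i += 1
            buf = []
            while body[i] != '"':
                if body[i] == "\\":              # backslash-escaped character
                    i += 1
                buf.append(body[i])
                i += 1
            items.append("".join(buf))           # quoted "NULL" stays a string
            i += 1                               # skip closing quote
        else:                                    # unquoted element
            j = body.find(",", i)
            if j == -1:
                j = len(body)
            tok = body[i:j].strip()
            items.append(None if tok.upper() == "NULL" else tok)
            i = j
        if i < len(body) and body[i] == ",":     # step over the delimiter
            i += 1
    return items
```

Note how an unquoted `NULL` (in any case variant) becomes a null element while a quoted `"NULL"` stays the four-character string, matching the rule the documentation describes.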
@@ -719,7 +719,7 @@ SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1); The array output routine will put double quotes around element values if they are empty strings, contain curly braces, delimiter characters, double quotes, backslashes, or white space, or match the word - <literal>NULL</>. Double quotes and backslashes + <literal>NULL</literal>. Double quotes and backslashes embedded in element values will be backslash-escaped. For numeric data types it is safe to assume that double quotes will never appear, but for textual data types one should be prepared to cope with either the presence @@ -731,10 +731,10 @@ SELECT array_positions(ARRAY[1, 4, 3, 1, 3, 4, 2, 1], 1); set to one. To represent arrays with other lower bounds, the array subscript ranges can be specified explicitly before writing the array contents. - This decoration consists of square brackets (<literal>[]</>) + This decoration consists of square brackets (<literal>[]</literal>) around each array dimension's lower and upper bounds, with - a colon (<literal>:</>) delimiter character in between. The - array dimension decoration is followed by an equal sign (<literal>=</>). + a colon (<literal>:</literal>) delimiter character in between. The + array dimension decoration is followed by an equal sign (<literal>=</literal>). For example: <programlisting> SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 @@ -750,23 +750,23 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 </para> <para> - If the value written for an element is <literal>NULL</> (in any case + If the value written for an element is <literal>NULL</literal> (in any case variant), the element is taken to be NULL. The presence of any quotes or backslashes disables this and allows the literal string value - <quote>NULL</> to be entered. Also, for backward compatibility with - pre-8.2 versions of <productname>PostgreSQL</>, the <xref + <quote>NULL</quote> to be entered. 
Also, for backward compatibility with + pre-8.2 versions of <productname>PostgreSQL</productname>, the <xref linkend="guc-array-nulls"> configuration parameter can be turned - <literal>off</> to suppress recognition of <literal>NULL</> as a NULL. + <literal>off</literal> to suppress recognition of <literal>NULL</literal> as a NULL. </para> <para> As shown previously, when writing an array value you can use double - quotes around any individual array element. You <emphasis>must</> do so + quotes around any individual array element. You <emphasis>must</emphasis> do so if the element value would otherwise confuse the array-value parser. For example, elements containing curly braces, commas (or the data type's delimiter character), double quotes, backslashes, or leading or trailing whitespace must be double-quoted. Empty strings and strings matching the - word <literal>NULL</> must be quoted, too. To put a double quote or + word <literal>NULL</literal> must be quoted, too. To put a double quote or backslash in a quoted array element value, use escape string syntax and precede it with a backslash. Alternatively, you can avoid quotes and use backslash-escaping to protect all data characters that would otherwise @@ -785,17 +785,17 @@ SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2 <para> Remember that what you write in an SQL command will first be interpreted as a string literal, and then as an array. This doubles the number of - backslashes you need. For example, to insert a <type>text</> array + backslashes you need. For example, to insert a <type>text</type> array value containing a backslash and a double quote, you'd need to write: <programlisting> INSERT ... VALUES (E'{"\\\\","\\""}'); </programlisting> The escape string processor removes one level of backslashes, so that - what arrives at the array-value parser looks like <literal>{"\\","\""}</>. 
- In turn, the strings fed to the <type>text</> data type's input routine - become <literal>\</> and <literal>"</> respectively. (If we were working + what arrives at the array-value parser looks like <literal>{"\\","\""}</literal>. + In turn, the strings fed to the <type>text</type> data type's input routine + become <literal>\</literal> and <literal>"</literal> respectively. (If we were working with a data type whose input routine also treated backslashes specially, - <type>bytea</> for example, we might need as many as eight backslashes + <type>bytea</type> for example, we might need as many as eight backslashes in the command to get one backslash into the stored array element.) Dollar quoting (see <xref linkend="sql-syntax-dollar-quoting">) can be used to avoid the need to double backslashes. @@ -804,10 +804,10 @@ INSERT ... VALUES (E'{"\\\\","\\""}'); <tip> <para> - The <literal>ARRAY</> constructor syntax (see + The <literal>ARRAY</literal> constructor syntax (see <xref linkend="sql-syntax-array-constructors">) is often easier to work with than the array-literal syntax when writing array values in SQL - commands. In <literal>ARRAY</>, individual element values are written the + commands. In <literal>ARRAY</literal>, individual element values are written the same way they would be written when not members of an array. </para> </tip> diff --git a/doc/src/sgml/auth-delay.sgml b/doc/src/sgml/auth-delay.sgml index 9a6e3e9bb4d..9221d2dfb65 100644 --- a/doc/src/sgml/auth-delay.sgml +++ b/doc/src/sgml/auth-delay.sgml @@ -18,7 +18,7 @@ <para> In order to function, this module must be loaded via - <xref linkend="guc-shared-preload-libraries"> in <filename>postgresql.conf</>. + <xref linkend="guc-shared-preload-libraries"> in <filename>postgresql.conf</filename>. 
</para> <sect2> @@ -29,7 +29,7 @@ <term> <varname>auth_delay.milliseconds</varname> (<type>int</type>) <indexterm> - <primary><varname>auth_delay.milliseconds</> configuration parameter</primary> + <primary><varname>auth_delay.milliseconds</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -42,7 +42,7 @@ </variablelist> <para> - These parameters must be set in <filename>postgresql.conf</>. + These parameters must be set in <filename>postgresql.conf</filename>. Typical usage might be: </para> diff --git a/doc/src/sgml/auto-explain.sgml b/doc/src/sgml/auto-explain.sgml index 38e6f50c802..240098c82f7 100644 --- a/doc/src/sgml/auto-explain.sgml +++ b/doc/src/sgml/auto-explain.sgml @@ -24,10 +24,10 @@ LOAD 'auto_explain'; </programlisting> (You must be superuser to do that.) More typical usage is to preload - it into some or all sessions by including <literal>auto_explain</> in + it into some or all sessions by including <literal>auto_explain</literal> in <xref linkend="guc-session-preload-libraries"> or <xref linkend="guc-shared-preload-libraries"> in - <filename>postgresql.conf</>. Then you can track unexpectedly slow queries + <filename>postgresql.conf</filename>. Then you can track unexpectedly slow queries no matter when they happen. Of course there is a price in overhead for that. 
</para> @@ -47,7 +47,7 @@ LOAD 'auto_explain'; <term> <varname>auto_explain.log_min_duration</varname> (<type>integer</type>) <indexterm> - <primary><varname>auto_explain.log_min_duration</> configuration parameter</primary> + <primary><varname>auto_explain.log_min_duration</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -66,13 +66,13 @@ LOAD 'auto_explain'; <term> <varname>auto_explain.log_analyze</varname> (<type>boolean</type>) <indexterm> - <primary><varname>auto_explain.log_analyze</> configuration parameter</primary> + <primary><varname>auto_explain.log_analyze</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - <varname>auto_explain.log_analyze</varname> causes <command>EXPLAIN ANALYZE</> - output, rather than just <command>EXPLAIN</> output, to be printed + <varname>auto_explain.log_analyze</varname> causes <command>EXPLAIN ANALYZE</command> + output, rather than just <command>EXPLAIN</command> output, to be printed when an execution plan is logged. This parameter is off by default. Only superusers can change this setting. </para> @@ -92,14 +92,14 @@ LOAD 'auto_explain'; <term> <varname>auto_explain.log_buffers</varname> (<type>boolean</type>) <indexterm> - <primary><varname>auto_explain.log_buffers</> configuration parameter</primary> + <primary><varname>auto_explain.log_buffers</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> <varname>auto_explain.log_buffers</varname> controls whether buffer usage statistics are printed when an execution plan is logged; it's - equivalent to the <literal>BUFFERS</> option of <command>EXPLAIN</>. + equivalent to the <literal>BUFFERS</literal> option of <command>EXPLAIN</command>. This parameter has no effect unless <varname>auto_explain.log_analyze</varname> is enabled. This parameter is off by default. 
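The `auto_explain` parameters touched by the surrounding hunks are normally set together in `postgresql.conf`. The fragment below is an illustrative sketch only — the parameter names come from the documentation above, but the values are arbitrary examples, not recommendations:

```
# postgresql.conf -- illustrative values only
session_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '250ms'   # log plans for queries slower than this
auto_explain.log_analyze = on             # EXPLAIN ANALYZE rather than plain EXPLAIN
auto_explain.log_buffers = on             # needs log_analyze to have any effect
auto_explain.log_timing = off             # row counts without per-node clock reads
auto_explain.log_format = 'json'
auto_explain.log_nested_statements = on
auto_explain.sample_rate = 0.1            # explain a fraction of eligible queries
```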
@@ -112,14 +112,14 @@ LOAD 'auto_explain'; <term> <varname>auto_explain.log_timing</varname> (<type>boolean</type>) <indexterm> - <primary><varname>auto_explain.log_timing</> configuration parameter</primary> + <primary><varname>auto_explain.log_timing</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> <varname>auto_explain.log_timing</varname> controls whether per-node timing information is printed when an execution plan is logged; it's - equivalent to the <literal>TIMING</> option of <command>EXPLAIN</>. + equivalent to the <literal>TIMING</literal> option of <command>EXPLAIN</command>. The overhead of repeatedly reading the system clock can slow down queries significantly on some systems, so it may be useful to set this parameter to off when only actual row counts, and not exact times, are @@ -136,7 +136,7 @@ LOAD 'auto_explain'; <term> <varname>auto_explain.log_triggers</varname> (<type>boolean</type>) <indexterm> - <primary><varname>auto_explain.log_triggers</> configuration parameter</primary> + <primary><varname>auto_explain.log_triggers</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -155,14 +155,14 @@ LOAD 'auto_explain'; <term> <varname>auto_explain.log_verbose</varname> (<type>boolean</type>) <indexterm> - <primary><varname>auto_explain.log_verbose</> configuration parameter</primary> + <primary><varname>auto_explain.log_verbose</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> <varname>auto_explain.log_verbose</varname> controls whether verbose details are printed when an execution plan is logged; it's - equivalent to the <literal>VERBOSE</> option of <command>EXPLAIN</>. + equivalent to the <literal>VERBOSE</literal> option of <command>EXPLAIN</command>. This parameter is off by default. Only superusers can change this setting. 
</para> @@ -173,13 +173,13 @@ LOAD 'auto_explain'; <term> <varname>auto_explain.log_format</varname> (<type>enum</type>) <indexterm> - <primary><varname>auto_explain.log_format</> configuration parameter</primary> + <primary><varname>auto_explain.log_format</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> <varname>auto_explain.log_format</varname> selects the - <command>EXPLAIN</> output format to be used. + <command>EXPLAIN</command> output format to be used. The allowed values are <literal>text</literal>, <literal>xml</literal>, <literal>json</literal>, and <literal>yaml</literal>. The default is text. Only superusers can change this setting. @@ -191,7 +191,7 @@ LOAD 'auto_explain'; <term> <varname>auto_explain.log_nested_statements</varname> (<type>boolean</type>) <indexterm> - <primary><varname>auto_explain.log_nested_statements</> configuration parameter</primary> + <primary><varname>auto_explain.log_nested_statements</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -208,7 +208,7 @@ LOAD 'auto_explain'; <term> <varname>auto_explain.sample_rate</varname> (<type>real</type>) <indexterm> - <primary><varname>auto_explain.sample_rate</> configuration parameter</primary> + <primary><varname>auto_explain.sample_rate</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -224,7 +224,7 @@ LOAD 'auto_explain'; <para> In ordinary usage, these parameters are set - in <filename>postgresql.conf</>, although superusers can alter them + in <filename>postgresql.conf</filename>, although superusers can alter them on-the-fly within their own sessions. 
Typical usage might be: </para> diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml index bd55e8bb775..dd9c1bff5b3 100644 --- a/doc/src/sgml/backup.sgml +++ b/doc/src/sgml/backup.sgml @@ -3,10 +3,10 @@ <chapter id="backup"> <title>Backup and Restore</title> - <indexterm zone="backup"><primary>backup</></> + <indexterm zone="backup"><primary>backup</primary></indexterm> <para> - As with everything that contains valuable data, <productname>PostgreSQL</> + As with everything that contains valuable data, <productname>PostgreSQL</productname> databases should be backed up regularly. While the procedure is essentially simple, it is important to have a clear understanding of the underlying techniques and assumptions. @@ -14,9 +14,9 @@ <para> There are three fundamentally different approaches to backing up - <productname>PostgreSQL</> data: + <productname>PostgreSQL</productname> data: <itemizedlist> - <listitem><para><acronym>SQL</> dump</para></listitem> + <listitem><para><acronym>SQL</acronym> dump</para></listitem> <listitem><para>File system level backup</para></listitem> <listitem><para>Continuous archiving</para></listitem> </itemizedlist> @@ -25,30 +25,30 @@ </para> <sect1 id="backup-dump"> - <title><acronym>SQL</> Dump</title> + <title><acronym>SQL</acronym> Dump</title> <para> The idea behind this dump method is to generate a file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. - <productname>PostgreSQL</> provides the utility program + <productname>PostgreSQL</productname> provides the utility program <xref linkend="app-pgdump"> for this purpose. 
The basic usage of this command is: <synopsis> pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">outfile</replaceable> </synopsis> - As you see, <application>pg_dump</> writes its result to the + As you see, <application>pg_dump</application> writes its result to the standard output. We will see below how this can be useful. - While the above command creates a text file, <application>pg_dump</> + While the above command creates a text file, <application>pg_dump</application> can create files in other formats that allow for parallelism and more fine-grained control of object restoration. </para> <para> - <application>pg_dump</> is a regular <productname>PostgreSQL</> + <application>pg_dump</application> is a regular <productname>PostgreSQL</productname> client application (albeit a particularly clever one). This means that you can perform this backup procedure from any remote host that has - access to the database. But remember that <application>pg_dump</> + access to the database. But remember that <application>pg_dump</application> does not operate with special permissions. In particular, it must have read access to all tables that you want to back up, so in order to back up the entire database you almost always have to run it as a @@ -60,9 +60,9 @@ pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable cl </para> <para> - To specify which database server <application>pg_dump</> should + To specify which database server <application>pg_dump</application> should contact, use the command line options <option>-h - <replaceable>host</></> and <option>-p <replaceable>port</></>. The + <replaceable>host</replaceable></option> and <option>-p <replaceable>port</replaceable></option>. The default host is the local host or whatever your <envar>PGHOST</envar> environment variable specifies. 
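Because <application>pg_dump</application> writes to standard output, it composes with ordinary shell redirection and pipes, which is what the compression and server-to-server transfer recipes later in this chapter rely on. A minimal sketch of the pattern, using a stand-in function for `pg_dump mydb` (a hypothetical database name) so it runs without a server:

```shell
# fake_pg_dump stands in for `pg_dump mydb`; a real dump needs a
# running server, but the shell plumbing is the same either way.
fake_pg_dump() { printf 'CREATE TABLE t (id int);\n'; }

# Plain redirection, as in: pg_dump mydb > outfile.sql
fake_pg_dump > outfile.sql

# Compress on the fly, as in: pg_dump mydb | gzip > outfile.sql.gz
fake_pg_dump | gzip > outfile.sql.gz

# The compressed copy restores to the same bytes.
gunzip -c outfile.sql.gz
```

The same stdout design is what later allows piping a dump straight into `psql` on another host.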
Similarly, the default port is indicated by the <envar>PGPORT</envar> @@ -72,30 +72,30 @@ pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable cl </para> <para> - Like any other <productname>PostgreSQL</> client application, - <application>pg_dump</> will by default connect with the database + Like any other <productname>PostgreSQL</productname> client application, + <application>pg_dump</application> will by default connect with the database user name that is equal to the current operating system user name. To override this, either specify the <option>-U</option> option or set the environment variable <envar>PGUSER</envar>. Remember that - <application>pg_dump</> connections are subject to the normal + <application>pg_dump</application> connections are subject to the normal client authentication mechanisms (which are described in <xref linkend="client-authentication">). </para> <para> - An important advantage of <application>pg_dump</> over the other backup - methods described later is that <application>pg_dump</>'s output can - generally be re-loaded into newer versions of <productname>PostgreSQL</>, + An important advantage of <application>pg_dump</application> over the other backup + methods described later is that <application>pg_dump</application>'s output can + generally be re-loaded into newer versions of <productname>PostgreSQL</productname>, whereas file-level backups and continuous archiving are both extremely - server-version-specific. <application>pg_dump</> is also the only method + server-version-specific. <application>pg_dump</application> is also the only method that will work when transferring a database to a different machine architecture, such as going from a 32-bit to a 64-bit server. 
</para> <para> - Dumps created by <application>pg_dump</> are internally consistent, + Dumps created by <application>pg_dump</application> are internally consistent, meaning, the dump represents a snapshot of the database at the time - <application>pg_dump</> began running. <application>pg_dump</> does not + <application>pg_dump</application> began running. <application>pg_dump</application> does not block other operations on the database while it is working. (Exceptions are those operations that need to operate with an exclusive lock, such as most forms of <command>ALTER TABLE</command>.) @@ -105,20 +105,20 @@ pg_dump <replaceable class="parameter">dbname</replaceable> > <replaceable cl <title>Restoring the Dump</title> <para> - Text files created by <application>pg_dump</> are intended to + Text files created by <application>pg_dump</application> are intended to be read in by the <application>psql</application> program. The general command form to restore a dump is <synopsis> psql <replaceable class="parameter">dbname</replaceable> < <replaceable class="parameter">infile</replaceable> </synopsis> where <replaceable class="parameter">infile</replaceable> is the - file output by the <application>pg_dump</> command. The database <replaceable + file output by the <application>pg_dump</application> command. The database <replaceable class="parameter">dbname</replaceable> will not be created by this - command, so you must create it yourself from <literal>template0</> - before executing <application>psql</> (e.g., with + command, so you must create it yourself from <literal>template0</literal> + before executing <application>psql</application> (e.g., with <literal>createdb -T template0 <replaceable - class="parameter">dbname</></literal>). <application>psql</> - supports options similar to <application>pg_dump</> for specifying + class="parameter">dbname</replaceable></literal>). 
<application>psql</application> + supports options similar to <application>pg_dump</application> for specifying the database server to connect to and the user name to use. See the <xref linkend="app-psql"> reference page for more information. Non-text file dumps are restored using the <xref @@ -134,10 +134,10 @@ psql <replaceable class="parameter">dbname</replaceable> < <replaceable class </para> <para> - By default, the <application>psql</> script will continue to + By default, the <application>psql</application> script will continue to execute after an SQL error is encountered. You might wish to run <application>psql</application> with - the <literal>ON_ERROR_STOP</> variable set to alter that + the <literal>ON_ERROR_STOP</literal> variable set to alter that behavior and have <application>psql</application> exit with an exit status of 3 if an SQL error occurs: <programlisting> @@ -147,8 +147,8 @@ psql --set ON_ERROR_STOP=on dbname < infile Alternatively, you can specify that the whole dump should be restored as a single transaction, so the restore is either fully completed or fully rolled back. This mode can be specified by - passing the <option>-1</> or <option>--single-transaction</> - command-line options to <application>psql</>. When using this + passing the <option>-1</option> or <option>--single-transaction</option> + command-line options to <application>psql</application>. When using this mode, be aware that even a minor error can roll back a restore that has already run for many hours.
However, that might still be preferable to manually cleaning up a complex database @@ -156,22 +156,22 @@ psql --set ON_ERROR_STOP=on dbname < infile </para> <para> - The ability of <application>pg_dump</> and <application>psql</> to + The ability of <application>pg_dump</application> and <application>psql</application> to write to or read from pipes makes it possible to dump a database directly from one server to another, for example: <programlisting> -pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>host2</> <replaceable>dbname</> +pg_dump -h <replaceable>host1</replaceable> <replaceable>dbname</replaceable> | psql -h <replaceable>host2</replaceable> <replaceable>dbname</replaceable> </programlisting> </para> <important> <para> - The dumps produced by <application>pg_dump</> are relative to - <literal>template0</>. This means that any languages, procedures, - etc. added via <literal>template1</> will also be dumped by - <application>pg_dump</>. As a result, when restoring, if you are - using a customized <literal>template1</>, you must create the - empty database from <literal>template0</>, as in the example + The dumps produced by <application>pg_dump</application> are relative to + <literal>template0</literal>. This means that any languages, procedures, + etc. added via <literal>template1</literal> will also be dumped by + <application>pg_dump</application>. As a result, when restoring, if you are + using a customized <literal>template1</literal>, you must create the + empty database from <literal>template0</literal>, as in the example above. </para> </important> @@ -183,52 +183,52 @@ pg_dump -h <replaceable>host1</> <replaceable>dbname</> | psql -h <replaceable>h see <xref linkend="vacuum-for-statistics"> and <xref linkend="autovacuum"> for more information. 
For more advice on how to load large amounts of data - into <productname>PostgreSQL</> efficiently, refer to <xref + into <productname>PostgreSQL</productname> efficiently, refer to <xref linkend="populate">. </para> </sect2> <sect2 id="backup-dump-all"> - <title>Using <application>pg_dumpall</></title> + <title>Using <application>pg_dumpall</application></title> <para> - <application>pg_dump</> dumps only a single database at a time, + <application>pg_dump</application> dumps only a single database at a time, and it does not dump information about roles or tablespaces (because those are cluster-wide rather than per-database). To support convenient dumping of the entire contents of a database cluster, the <xref linkend="app-pg-dumpall"> program is provided. - <application>pg_dumpall</> backs up each database in a given + <application>pg_dumpall</application> backs up each database in a given cluster, and also preserves cluster-wide data such as role and tablespace definitions. The basic usage of this command is: <synopsis> -pg_dumpall > <replaceable>outfile</> +pg_dumpall > <replaceable>outfile</replaceable> </synopsis> - The resulting dump can be restored with <application>psql</>: + The resulting dump can be restored with <application>psql</application>: <synopsis> psql -f <replaceable class="parameter">infile</replaceable> postgres </synopsis> (Actually, you can specify any existing database name to start from, - but if you are loading into an empty cluster then <literal>postgres</> + but if you are loading into an empty cluster then <literal>postgres</literal> should usually be used.) It is always necessary to have - database superuser access when restoring a <application>pg_dumpall</> + database superuser access when restoring a <application>pg_dumpall</application> dump, as that is required to restore the role and tablespace information. If you use tablespaces, make sure that the tablespace paths in the dump are appropriate for the new installation. 
</para> <para> - <application>pg_dumpall</> works by emitting commands to re-create + <application>pg_dumpall</application> works by emitting commands to re-create roles, tablespaces, and empty databases, then invoking - <application>pg_dump</> for each database. This means that while + <application>pg_dump</application> for each database. This means that while each database will be internally consistent, the snapshots of different databases are not synchronized. </para> <para> Cluster-wide data can be dumped alone using the - <application>pg_dumpall</> <option>--globals-only</> option. + <application>pg_dumpall</application> <option>--globals-only</option> option. This is necessary to fully back up the cluster if running the - <application>pg_dump</> command on individual databases. + <application>pg_dump</application> command on individual databases. </para> </sect2> @@ -237,8 +237,8 @@ psql -f <replaceable class="parameter">infile</replaceable> postgres <para> Some operating systems have maximum file size limits that cause - problems when creating large <application>pg_dump</> output files. - Fortunately, <application>pg_dump</> can write to the standard + problems when creating large <application>pg_dump</application> output files. + Fortunately, <application>pg_dump</application> can write to the standard output, so you can use standard Unix tools to work around this potential problem.
There are several possible methods: </para> @@ -268,7 +268,7 @@ cat <replaceable class="parameter">filename</replaceable>.gz | gunzip | psql <re </formalpara> <formalpara> - <title>Use <command>split</>.</title> + <title>Use <command>split</command>.</title> <para> The <command>split</command> command allows you to split the output into smaller files that are @@ -288,10 +288,10 @@ cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable c </formalpara> <formalpara> - <title>Use <application>pg_dump</>'s custom dump format.</title> + <title>Use <application>pg_dump</application>'s custom dump format.</title> <para> If <productname>PostgreSQL</productname> was built on a system with the - <application>zlib</> compression library installed, the custom dump + <application>zlib</application> compression library installed, the custom dump format will compress data as it writes it to the output file. This will produce dump file sizes similar to using <command>gzip</command>, but it has the added advantage that tables can be restored selectively. 
The @@ -301,8 +301,8 @@ cat <replaceable class="parameter">filename</replaceable>* | psql <replaceable c pg_dump -Fc <replaceable class="parameter">dbname</replaceable> > <replaceable class="parameter">filename</replaceable> </programlisting> - A custom-format dump is not a script for <application>psql</>, but - instead must be restored with <application>pg_restore</>, for example: + A custom-format dump is not a script for <application>psql</application>, but + instead must be restored with <application>pg_restore</application>, for example: <programlisting> pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable class="parameter">filename</replaceable> @@ -314,12 +314,12 @@ pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable c </formalpara> <para> - For very large databases, you might need to combine <command>split</> + For very large databases, you might need to combine <command>split</command> with one of the other two approaches. </para> <formalpara> - <title>Use <application>pg_dump</>'s parallel dump feature.</title> + <title>Use <application>pg_dump</application>'s parallel dump feature.</title> <para> To speed up the dump of a large database, you can use <application>pg_dump</application>'s parallel mode. This will dump @@ -344,7 +344,7 @@ pg_dump -j <replaceable class="parameter">num</replaceable> -F d -f <replaceable <para> An alternative backup strategy is to directly copy the files that - <productname>PostgreSQL</> uses to store the data in the database; + <productname>PostgreSQL</productname> uses to store the data in the database; <xref linkend="creating-cluster"> explains where these files are located. 
You can use whatever method you prefer for doing file system backups; for example: @@ -356,13 +356,13 @@ tar -cf backup.tar /usr/local/pgsql/data <para> There are two restrictions, however, which make this method - impractical, or at least inferior to the <application>pg_dump</> + impractical, or at least inferior to the <application>pg_dump</application> method: <orderedlist> <listitem> <para> - The database server <emphasis>must</> be shut down in order to + The database server <emphasis>must</emphasis> be shut down in order to get a usable backup. Half-way measures such as disallowing all connections will <emphasis>not</emphasis> work (in part because <command>tar</command> and similar tools do not take @@ -379,7 +379,7 @@ tar -cf backup.tar /usr/local/pgsql/data If you have dug into the details of the file system layout of the database, you might be tempted to try to back up or restore only certain individual tables or databases from their respective files or - directories. This will <emphasis>not</> work because the + directories. This will <emphasis>not</emphasis> work because the information contained in these files is not usable without the commit log files, <filename>pg_xact/*</filename>, which contain the commit status of @@ -399,7 +399,7 @@ tar -cf backup.tar /usr/local/pgsql/data <quote>consistent snapshot</quote> of the data directory, if the file system supports that functionality (and you are willing to trust that it is implemented correctly). The typical procedure is - to make a <quote>frozen snapshot</> of the volume containing the + to make a <quote>frozen snapshot</quote> of the volume containing the database, then copy the whole data directory (not just parts, see above) from the snapshot to a backup device, then release the frozen snapshot. This will work even while the database server is running. @@ -419,7 +419,7 @@ tar -cf backup.tar /usr/local/pgsql/data the volumes. 
For example, if your data files and WAL log are on different disks, or if tablespaces are on different file systems, it might not be possible to use snapshot backup because the snapshots - <emphasis>must</> be simultaneous. + <emphasis>must</emphasis> be simultaneous. Read your file system documentation very carefully before trusting the consistent-snapshot technique in such situations. </para> @@ -435,13 +435,13 @@ tar -cf backup.tar /usr/local/pgsql/data </para> <para> - Another option is to use <application>rsync</> to perform a file - system backup. This is done by first running <application>rsync</> + Another option is to use <application>rsync</application> to perform a file + system backup. This is done by first running <application>rsync</application> while the database server is running, then shutting down the database - server long enough to do an <command>rsync --checksum</>. - (<option>--checksum</> is necessary because <command>rsync</> only + server long enough to do an <command>rsync --checksum</command>. + (<option>--checksum</option> is necessary because <command>rsync</command> only has file modification-time granularity of one second.) The - second <application>rsync</> will be quicker than the first, + second <application>rsync</application> will be quicker than the first, because it has relatively little data to transfer, and the end result will be consistent because the server was down. This method allows a file system backup to be performed with minimal downtime. @@ -471,12 +471,12 @@ tar -cf backup.tar /usr/local/pgsql/data </indexterm> <para> - At all times, <productname>PostgreSQL</> maintains a - <firstterm>write ahead log</> (WAL) in the <filename>pg_wal/</> + At all times, <productname>PostgreSQL</productname> maintains a + <firstterm>write ahead log</firstterm> (WAL) in the <filename>pg_wal/</filename> subdirectory of the cluster's data directory. The log records every change made to the database's data files. 
This log exists primarily for crash-safety purposes: if the system crashes, the - database can be restored to consistency by <quote>replaying</> the + database can be restored to consistency by <quote>replaying</quote> the log entries made since the last checkpoint. However, the existence of the log makes it possible to use a third strategy for backing up databases: we can combine a file-system-level backup with backup of @@ -492,7 +492,7 @@ tar -cf backup.tar /usr/local/pgsql/data Any internal inconsistency in the backup will be corrected by log replay (this is not significantly different from what happens during crash recovery). So we do not need a file system snapshot capability, - just <application>tar</> or a similar archiving tool. + just <application>tar</application> or a similar archiving tool. </para> </listitem> <listitem> @@ -508,7 +508,7 @@ tar -cf backup.tar /usr/local/pgsql/data It is not necessary to replay the WAL entries all the way to the end. We could stop the replay at any point and have a consistent snapshot of the database as it was at that time. Thus, - this technique supports <firstterm>point-in-time recovery</>: it is + this technique supports <firstterm>point-in-time recovery</firstterm>: it is possible to restore the database to its state at any time since your base backup was taken. </para> @@ -517,7 +517,7 @@ tar -cf backup.tar /usr/local/pgsql/data <para> If we continuously feed the series of WAL files to another machine that has been loaded with the same base backup file, we - have a <firstterm>warm standby</> system: at any point we can bring up + have a <firstterm>warm standby</firstterm> system: at any point we can bring up the second machine and it will have a nearly-current copy of the database. 
</para> @@ -530,7 +530,7 @@ tar -cf backup.tar /usr/local/pgsql/data <application>pg_dump</application> and <application>pg_dumpall</application> do not produce file-system-level backups and cannot be used as part of a continuous-archiving solution. - Such dumps are <emphasis>logical</> and do not contain enough + Such dumps are <emphasis>logical</emphasis> and do not contain enough information to be used by WAL replay. </para> </note> @@ -546,10 +546,10 @@ tar -cf backup.tar /usr/local/pgsql/data <para> To recover successfully using continuous archiving (also called - <quote>online backup</> by many database vendors), you need a continuous + <quote>online backup</quote> by many database vendors), you need a continuous sequence of archived WAL files that extends back at least as far as the start time of your backup. So to get started, you should set up and test - your procedure for archiving WAL files <emphasis>before</> you take your + your procedure for archiving WAL files <emphasis>before</emphasis> you take your first base backup. Accordingly, we first discuss the mechanics of archiving WAL files. </para> @@ -558,15 +558,15 @@ tar -cf backup.tar /usr/local/pgsql/data <title>Setting Up WAL Archiving</title> <para> - In an abstract sense, a running <productname>PostgreSQL</> system + In an abstract sense, a running <productname>PostgreSQL</productname> system produces an indefinitely long sequence of WAL records. The system physically divides this sequence into WAL <firstterm>segment - files</>, which are normally 16MB apiece (although the segment size - can be altered during <application>initdb</>). The segment + files</firstterm>, which are normally 16MB apiece (although the segment size + can be altered during <application>initdb</application>). The segment files are given numeric names that reflect their position in the abstract WAL sequence. 
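The numeric segment names mentioned above encode a file's position as three 8-hex-digit fields (timeline, log, and segment number). As an informal illustration of that reading (the field breakdown is our gloss; the server generates these names itself), the file name that appears in the archiving examples below can be reassembled from its parts:

```shell
# Timeline 1, log 0xA9 (169), segment 0x65 (101), each zero-padded
# to 8 hex digits and concatenated.
printf '%08X%08X%08X\n' 1 169 101
# -> 00000001000000A900000065
```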
When not using WAL archiving, the system normally creates just a few segment files and then - <quote>recycles</> them by renaming no-longer-needed segment files + <quote>recycles</quote> them by renaming no-longer-needed segment files to higher segment numbers. It's assumed that segment files whose contents precede the checkpoint-before-last are no longer of interest and can be recycled. @@ -577,33 +577,33 @@ tar -cf backup.tar /usr/local/pgsql/data file once it is filled, and save that data somewhere before the segment file is recycled for reuse. Depending on the application and the available hardware, there could be many different ways of <quote>saving - the data somewhere</>: we could copy the segment files to an NFS-mounted + the data somewhere</quote>: we could copy the segment files to an NFS-mounted directory on another machine, write them onto a tape drive (ensuring that you have a way of identifying the original name of each file), or batch them together and burn them onto CDs, or something else entirely. To provide the database administrator with flexibility, - <productname>PostgreSQL</> tries not to make any assumptions about how - the archiving will be done. Instead, <productname>PostgreSQL</> lets + <productname>PostgreSQL</productname> tries not to make any assumptions about how + the archiving will be done. Instead, <productname>PostgreSQL</productname> lets the administrator specify a shell command to be executed to copy a completed segment file to wherever it needs to go. The command could be - as simple as a <literal>cp</>, or it could invoke a complex shell + as simple as a <literal>cp</literal>, or it could invoke a complex shell script — it's all up to you. 
</para> <para> To enable WAL archiving, set the <xref linkend="guc-wal-level"> - configuration parameter to <literal>replica</> or higher, - <xref linkend="guc-archive-mode"> to <literal>on</>, + configuration parameter to <literal>replica</literal> or higher, + <xref linkend="guc-archive-mode"> to <literal>on</literal>, and specify the shell command to use in the <xref linkend="guc-archive-command"> configuration parameter. In practice these settings will always be placed in the <filename>postgresql.conf</filename> file. - In <varname>archive_command</>, - <literal>%p</> is replaced by the path name of the file to - archive, while <literal>%f</> is replaced by only the file name. + In <varname>archive_command</varname>, + <literal>%p</literal> is replaced by the path name of the file to + archive, while <literal>%f</literal> is replaced by only the file name. (The path name is relative to the current working directory, i.e., the cluster's data directory.) - Use <literal>%%</> if you need to embed an actual <literal>%</> + Use <literal>%%</literal> if you need to embed an actual <literal>%</literal> character in the command. The simplest useful command is something like: <programlisting> @@ -611,9 +611,9 @@ archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/ser archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"' # Windows </programlisting> which will copy archivable WAL segments to the directory - <filename>/mnt/server/archivedir</>. (This is an example, not a + <filename>/mnt/server/archivedir</filename>. (This is an example, not a recommendation, and might not work on all platforms.) After the - <literal>%p</> and <literal>%f</> parameters have been replaced, + <literal>%p</literal> and <literal>%f</literal> parameters have been replaced, the actual command executed might look like this: <programlisting> test ! 
-f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065 @@ -623,7 +623,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 <para> The archive command will be executed under the ownership of the same - user that the <productname>PostgreSQL</> server is running as. Since + user that the <productname>PostgreSQL</productname> server is running as. Since the series of WAL files being archived contains effectively everything in your database, you will want to be sure that the archived data is protected from prying eyes; for example, archive into a directory that @@ -633,9 +633,9 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 <para> It is important that the archive command return zero exit status if and only if it succeeds. Upon getting a zero result, - <productname>PostgreSQL</> will assume that the file has been + <productname>PostgreSQL</productname> will assume that the file has been successfully archived, and will remove or recycle it. However, a nonzero - status tells <productname>PostgreSQL</> that the file was not archived; + status tells <productname>PostgreSQL</productname> that the file was not archived; it will try again periodically until it succeeds. </para> @@ -650,14 +650,14 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 <para> It is advisable to test your proposed archive command to ensure that it indeed does not overwrite an existing file, <emphasis>and that it returns - nonzero status in this case</>. + nonzero status in this case</emphasis>. The example command above for Unix ensures this by including a separate - <command>test</> step. On some Unix platforms, <command>cp</> has - switches such as <option>-i</> that can be used to do the same thing + <command>test</command> step. 
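The two contracts just described, return zero only on success and never overwrite an existing archive file, can be exercised with plain files and no server at all. A sketch, with <literal>%p</literal> and <literal>%f</literal> substituted by hand and stand-in directory names:

```shell
mkdir -p demo_wal demo_archive
printf 'fake WAL data\n' > demo_wal/00000001000000A900000065

# Same shape as the archive_command example above: the separate `test`
# step refuses to overwrite, making the second attempt return nonzero.
archive_cmd() {  # $1 = path (%p), $2 = file name (%f)
  test ! -f "demo_archive/$2" && cp "$1" "demo_archive/$2"
}

archive_cmd demo_wal/00000001000000A900000065 00000001000000A900000065 \
  && echo "first attempt: archived"
archive_cmd demo_wal/00000001000000A900000065 00000001000000A900000065 \
  || echo "second attempt: refused"
```

In a real setup the server performs the `%p`/`%f` substitution itself and, on nonzero status, retries the file periodically as described above.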
On some Unix platforms, <command>cp</command> has + switches such as <option>-i</option> that can be used to do the same thing less verbosely, but you should not rely on these without verifying that - the right exit status is returned. (In particular, GNU <command>cp</> - will return status zero when <option>-i</> is used and the target file - already exists, which is <emphasis>not</> the desired behavior.) + the right exit status is returned. (In particular, GNU <command>cp</command> + will return status zero when <option>-i</option> is used and the target file + already exists, which is <emphasis>not</emphasis> the desired behavior.) </para> <para> @@ -668,10 +668,10 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 fills, nothing further can be archived until the tape is swapped. You should ensure that any error condition or request to a human operator is reported appropriately so that the situation can be - resolved reasonably quickly. The <filename>pg_wal/</> directory will + resolved reasonably quickly. The <filename>pg_wal/</filename> directory will continue to fill with WAL segment files until the situation is resolved. - (If the file system containing <filename>pg_wal/</> fills up, - <productname>PostgreSQL</> will do a PANIC shutdown. No committed + (If the file system containing <filename>pg_wal/</filename> fills up, + <productname>PostgreSQL</productname> will do a PANIC shutdown. No committed transactions will be lost, but the database will remain offline until you free some space.) </para> @@ -682,7 +682,7 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 operation continues even if the archiving process falls a little behind. If archiving falls significantly behind, this will increase the amount of data that would be lost in the event of a disaster. 
It will also mean that - the <filename>pg_wal/</> directory will contain large numbers of + the <filename>pg_wal/</filename> directory will contain large numbers of not-yet-archived segment files, which could eventually exceed available disk space. You are advised to monitor the archiving process to ensure that it is working as you intend. @@ -692,16 +692,16 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 In writing your archive command, you should assume that the file names to be archived can be up to 64 characters long and can contain any combination of ASCII letters, digits, and dots. It is not necessary to - preserve the original relative path (<literal>%p</>) but it is necessary to - preserve the file name (<literal>%f</>). + preserve the original relative path (<literal>%p</literal>) but it is necessary to + preserve the file name (<literal>%f</literal>). </para> <para> Note that although WAL archiving will allow you to restore any - modifications made to the data in your <productname>PostgreSQL</> database, + modifications made to the data in your <productname>PostgreSQL</productname> database, it will not restore changes made to configuration files (that is, - <filename>postgresql.conf</>, <filename>pg_hba.conf</> and - <filename>pg_ident.conf</>), since those are edited manually rather + <filename>postgresql.conf</filename>, <filename>pg_hba.conf</filename> and + <filename>pg_ident.conf</filename>), since those are edited manually rather than through SQL operations. You might wish to keep the configuration files in a location that will be backed up by your regular file system backup procedures. See @@ -719,32 +719,32 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 to a new WAL segment file at least that often. Note that archived files that are archived early due to a forced switch are still the same length as completely full files. 
It is therefore unwise to set a very - short <varname>archive_timeout</> — it will bloat your archive - storage. <varname>archive_timeout</> settings of a minute or so are + short <varname>archive_timeout</varname> — it will bloat your archive + storage. <varname>archive_timeout</varname> settings of a minute or so are usually reasonable. </para> <para> Also, you can force a segment switch manually with - <function>pg_switch_wal</> if you want to ensure that a + <function>pg_switch_wal</function> if you want to ensure that a just-finished transaction is archived as soon as possible. Other utility functions related to WAL management are listed in <xref linkend="functions-admin-backup-table">. </para> <para> - When <varname>wal_level</> is <literal>minimal</> some SQL commands + When <varname>wal_level</varname> is <literal>minimal</literal> some SQL commands are optimized to avoid WAL logging, as described in <xref linkend="populate-pitr">. If archiving or streaming replication were turned on during execution of one of these statements, WAL would not contain enough information for archive recovery. (Crash recovery is - unaffected.) For this reason, <varname>wal_level</> can only be changed at - server start. However, <varname>archive_command</> can be changed with a + unaffected.) For this reason, <varname>wal_level</varname> can only be changed at + server start. However, <varname>archive_command</varname> can be changed with a configuration file reload. If you wish to temporarily stop archiving, - one way to do it is to set <varname>archive_command</> to the empty - string (<literal>''</>). - This will cause WAL files to accumulate in <filename>pg_wal/</> until a - working <varname>archive_command</> is re-established. + one way to do it is to set <varname>archive_command</varname> to the empty + string (<literal>''</literal>). 
+ This will cause WAL files to accumulate in <filename>pg_wal/</filename> until a + working <varname>archive_command</varname> is re-established. </para> </sect2> @@ -763,8 +763,8 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 <para> It is not necessary to be concerned about the amount of time it takes to make a base backup. However, if you normally run the - server with <varname>full_page_writes</> disabled, you might notice a drop - in performance while the backup runs since <varname>full_page_writes</> is + server with <varname>full_page_writes</varname> disabled, you might notice a drop + in performance while the backup runs since <varname>full_page_writes</varname> is effectively forced on during backup mode. </para> @@ -772,13 +772,13 @@ test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 To make use of the backup, you will need to keep all the WAL segment files generated during and after the file system backup. To aid you in doing this, the base backup process - creates a <firstterm>backup history file</> that is immediately + creates a <firstterm>backup history file</firstterm> that is immediately stored into the WAL archive area. This file is named after the first WAL segment file that you need for the file system backup. For example, if the starting WAL file is - <literal>0000000100001234000055CD</> the backup history file will be + <literal>0000000100001234000055CD</literal> the backup history file will be named something like - <literal>0000000100001234000055CD.007C9330.backup</>. (The second + <literal>0000000100001234000055CD.007C9330.backup</literal>. (The second part of the file name stands for an exact position within the WAL file, and can ordinarily be ignored.) Once you have safely archived the file system backup and the WAL segment files used during the @@ -847,14 +847,14 @@ test ! 
-f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/0 <programlisting> SELECT pg_start_backup('label', false, false); </programlisting> - where <literal>label</> is any string you want to use to uniquely + where <literal>label</literal> is any string you want to use to uniquely identify this backup operation. The connection - calling <function>pg_start_backup</> must be maintained until the end of + calling <function>pg_start_backup</function> must be maintained until the end of the backup, or the backup will be automatically aborted. </para> <para> - By default, <function>pg_start_backup</> can take a long time to finish. + By default, <function>pg_start_backup</function> can take a long time to finish. This is because it performs a checkpoint, and the I/O required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval @@ -862,19 +862,19 @@ SELECT pg_start_backup('label', false, false); <xref linkend="guc-checkpoint-completion-target">). This is usually what you want, because it minimizes the impact on query processing. If you want to start the backup as soon as - possible, change the second parameter to <literal>true</>, which will + possible, change the second parameter to <literal>true</literal>, which will issue an immediate checkpoint using as much I/O as available. </para> <para> - The third parameter being <literal>false</> tells - <function>pg_start_backup</> to initiate a non-exclusive base backup. + The third parameter being <literal>false</literal> tells + <function>pg_start_backup</function> to initiate a non-exclusive base backup. </para> </listitem> <listitem> <para> Perform the backup, using any convenient file-system-backup tool - such as <application>tar</> or <application>cpio</> (not + such as <application>tar</application> or <application>cpio</application> (not <application>pg_dump</application> or <application>pg_dumpall</application>). 
It is neither necessary nor desirable to stop normal operation of the database @@ -898,45 +898,45 @@ SELECT * FROM pg_stop_backup(false, true); ready to archive. </para> <para> - The <function>pg_stop_backup</> will return one row with three + The <function>pg_stop_backup</function> will return one row with three values. The second of these fields should be written to a file named - <filename>backup_label</> in the root directory of the backup. The + <filename>backup_label</filename> in the root directory of the backup. The third field should be written to a file named - <filename>tablespace_map</> unless the field is empty. These files are + <filename>tablespace_map</filename> unless the field is empty. These files are vital to the backup working, and must be written without modification. </para> </listitem> <listitem> <para> Once the WAL segment files active during the backup are archived, you are - done. The file identified by <function>pg_stop_backup</>'s first return + done. The file identified by <function>pg_stop_backup</function>'s first return value is the last segment that is required to form a complete set of - backup files. On a primary, if <varname>archive_mode</> is enabled and the - <literal>wait_for_archive</> parameter is <literal>true</>, - <function>pg_stop_backup</> does not return until the last segment has + backup files. On a primary, if <varname>archive_mode</varname> is enabled and the + <literal>wait_for_archive</literal> parameter is <literal>true</literal>, + <function>pg_stop_backup</function> does not return until the last segment has been archived. - On a standby, <varname>archive_mode</> must be <literal>always</> in order - for <function>pg_stop_backup</> to wait. + On a standby, <varname>archive_mode</varname> must be <literal>always</literal> in order + for <function>pg_stop_backup</function> to wait. Archiving of these files happens automatically since you have - already configured <varname>archive_command</>. 
In most cases this + already configured <varname>archive_command</varname>. In most cases this happens quickly, but you are advised to monitor your archive system to ensure there are no delays. If the archive process has fallen behind because of failures of the archive command, it will keep retrying until the archive succeeds and the backup is complete. If you wish to place a time limit on the execution of - <function>pg_stop_backup</>, set an appropriate + <function>pg_stop_backup</function>, set an appropriate <varname>statement_timeout</varname> value, but make note that if - <function>pg_stop_backup</> terminates because of this your backup + <function>pg_stop_backup</function> terminates because of this your backup may not be valid. </para> <para> If the backup process monitors and ensures that all WAL segment files required for the backup are successfully archived then the - <literal>wait_for_archive</> parameter (which defaults to true) can be set + <literal>wait_for_archive</literal> parameter (which defaults to true) can be set to false to have - <function>pg_stop_backup</> return as soon as the stop backup record is - written to the WAL. By default, <function>pg_stop_backup</> will wait + <function>pg_stop_backup</function> return as soon as the stop backup record is + written to the WAL. By default, <function>pg_stop_backup</function> will wait until all WAL has been archived, which can take some time. This option must be used with caution: if WAL archiving is not monitored correctly then the backup might not include all of the WAL files and will @@ -952,7 +952,7 @@ SELECT * FROM pg_stop_backup(false, true); The process for an exclusive backup is mostly the same as for a non-exclusive one, but it differs in a few key steps. This type of backup can only be taken on a primary and does not allow concurrent backups. 
- Prior to <productname>PostgreSQL</> 9.6, this + Prior to <productname>PostgreSQL</productname> 9.6, this was the only low-level method available, but it is now recommended that all users upgrade their scripts to use non-exclusive backups if possible. </para> @@ -971,20 +971,20 @@ SELECT * FROM pg_stop_backup(false, true); <programlisting> SELECT pg_start_backup('label'); </programlisting> - where <literal>label</> is any string you want to use to uniquely + where <literal>label</literal> is any string you want to use to uniquely identify this backup operation. - <function>pg_start_backup</> creates a <firstterm>backup label</> file, - called <filename>backup_label</>, in the cluster directory with + <function>pg_start_backup</function> creates a <firstterm>backup label</firstterm> file, + called <filename>backup_label</filename>, in the cluster directory with information about your backup, including the start time and label string. - The function also creates a <firstterm>tablespace map</> file, - called <filename>tablespace_map</>, in the cluster directory with - information about tablespace symbolic links in <filename>pg_tblspc/</> if + The function also creates a <firstterm>tablespace map</firstterm> file, + called <filename>tablespace_map</filename>, in the cluster directory with + information about tablespace symbolic links in <filename>pg_tblspc/</filename> if one or more such link is present. Both files are critical to the integrity of the backup, should you need to restore from it. </para> <para> - By default, <function>pg_start_backup</> can take a long time to finish. + By default, <function>pg_start_backup</function> can take a long time to finish. 
This is because it performs a checkpoint, and the I/O required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval @@ -1002,7 +1002,7 @@ SELECT pg_start_backup('label', true); <listitem> <para> Perform the backup, using any convenient file-system-backup tool - such as <application>tar</> or <application>cpio</> (not + such as <application>tar</application> or <application>cpio</application> (not <application>pg_dump</application> or <application>pg_dumpall</application>). It is neither necessary nor desirable to stop normal operation of the database @@ -1012,7 +1012,7 @@ SELECT pg_start_backup('label', true); </para> <para> Note that if the server crashes during the backup it may not be - possible to restart until the <literal>backup_label</> file has been + possible to restart until the <literal>backup_label</literal> file has been manually deleted from the <envar>PGDATA</envar> directory. </para> </listitem> @@ -1033,22 +1033,22 @@ SELECT pg_stop_backup(); <listitem> <para> Once the WAL segment files active during the backup are archived, you are - done. The file identified by <function>pg_stop_backup</>'s result is + done. The file identified by <function>pg_stop_backup</function>'s result is the last segment that is required to form a complete set of backup files. - If <varname>archive_mode</> is enabled, - <function>pg_stop_backup</> does not return until the last segment has + If <varname>archive_mode</varname> is enabled, + <function>pg_stop_backup</function> does not return until the last segment has been archived. Archiving of these files happens automatically since you have - already configured <varname>archive_command</>. In most cases this + already configured <varname>archive_command</varname>. In most cases this happens quickly, but you are advised to monitor your archive system to ensure there are no delays. 
If the archive process has fallen behind because of failures of the archive command, it will keep retrying until the archive succeeds and the backup is complete. If you wish to place a time limit on the execution of - <function>pg_stop_backup</>, set an appropriate + <function>pg_stop_backup</function>, set an appropriate <varname>statement_timeout</varname> value, but make note that if - <function>pg_stop_backup</> terminates because of this your backup + <function>pg_stop_backup</function> terminates because of this your backup may not be valid. </para> </listitem> @@ -1063,21 +1063,21 @@ SELECT pg_stop_backup(); When taking a base backup of an active database, this situation is normal and not an error. However, you need to ensure that you can distinguish complaints of this sort from real errors. For example, some versions - of <application>rsync</> return a separate exit code for - <quote>vanished source files</>, and you can write a driver script to + of <application>rsync</application> return a separate exit code for + <quote>vanished source files</quote>, and you can write a driver script to accept this exit code as a non-error case. Also, some versions of - GNU <application>tar</> return an error code indistinguishable from - a fatal error if a file was truncated while <application>tar</> was - copying it. Fortunately, GNU <application>tar</> versions 1.16 and + GNU <application>tar</application> return an error code indistinguishable from + a fatal error if a file was truncated while <application>tar</application> was + copying it. Fortunately, GNU <application>tar</application> versions 1.16 and later exit with 1 if a file was changed during the backup, - and 2 for other errors. With GNU <application>tar</> version 1.23 and + and 2 for other errors. With GNU <application>tar</application> version 1.23 and later, you can use the warning options <literal>--warning=no-file-changed --warning=no-file-removed</literal> to hide the related warning messages. 
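Such a driver script can be sketched as follows. This is only an illustration: the exit codes you choose to accept must match the exact tool and version you actually run (24 is rsync's "vanished source files" code; 1 is GNU tar 1.16+'s "file changed as we read it" code), so treat the accepted set here as an assumption.

```shell
# run_file_backup: run the given copy command and treat known-benign
# exit codes as success.  Which codes are benign depends on the tool:
#   24 - rsync: "vanished source files" (normal on a live cluster)
#   1  - GNU tar >= 1.16: a file changed while being copied
run_file_backup() {
    status=0
    "$@" || status=$?
    case $status in
        0|1|24) return 0 ;;
        *)      return "$status" ;;
    esac
}

# illustrative usage (paths are assumptions):
#   run_file_backup rsync -a /usr/local/pgsql/data /backup/data
```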
</para> <para> Be certain that your backup includes all of the files under - the database cluster directory (e.g., <filename>/usr/local/pgsql/data</>). + the database cluster directory (e.g., <filename>/usr/local/pgsql/data</filename>). If you are using tablespaces that do not reside underneath this directory, be careful to include them as well (and be sure that your backup archives symbolic links as links, otherwise the restore will corrupt @@ -1086,21 +1086,21 @@ SELECT pg_stop_backup(); <para> You should, however, omit from the backup the files within the - cluster's <filename>pg_wal/</> subdirectory. This + cluster's <filename>pg_wal/</filename> subdirectory. This slight adjustment is worthwhile because it reduces the risk of mistakes when restoring. This is easy to arrange if - <filename>pg_wal/</> is a symbolic link pointing to someplace outside + <filename>pg_wal/</filename> is a symbolic link pointing to someplace outside the cluster directory, which is a common setup anyway for performance - reasons. You might also want to exclude <filename>postmaster.pid</> - and <filename>postmaster.opts</>, which record information - about the running <application>postmaster</>, not about the - <application>postmaster</> which will eventually use this backup. - (These files can confuse <application>pg_ctl</>.) + reasons. You might also want to exclude <filename>postmaster.pid</filename> + and <filename>postmaster.opts</filename>, which record information + about the running <application>postmaster</application>, not about the + <application>postmaster</application> which will eventually use this backup. + (These files can confuse <application>pg_ctl</application>.) </para> <para> It is often a good idea to also omit from the backup the files - within the cluster's <filename>pg_replslot/</> directory, so that + within the cluster's <filename>pg_replslot/</filename> directory, so that replication slots that exist on the master do not become part of the backup. 
Otherwise, the subsequent use of the backup to create a standby may result in indefinite retention of WAL files on the standby, and @@ -1114,10 +1114,10 @@ SELECT pg_stop_backup(); </para> <para> - The contents of the directories <filename>pg_dynshmem/</>, - <filename>pg_notify/</>, <filename>pg_serial/</>, - <filename>pg_snapshots/</>, <filename>pg_stat_tmp/</>, - and <filename>pg_subtrans/</> (but not the directories themselves) can be + The contents of the directories <filename>pg_dynshmem/</filename>, + <filename>pg_notify/</filename>, <filename>pg_serial/</filename>, + <filename>pg_snapshots/</filename>, <filename>pg_stat_tmp/</filename>, + and <filename>pg_subtrans/</filename> (but not the directories themselves) can be omitted from the backup as they will be initialized on postmaster startup. If <xref linkend="guc-stats-temp-directory"> is set and is under the data directory then the contents of that directory can also be omitted. @@ -1131,13 +1131,13 @@ SELECT pg_stop_backup(); <para> The backup label - file includes the label string you gave to <function>pg_start_backup</>, - as well as the time at which <function>pg_start_backup</> was run, and + file includes the label string you gave to <function>pg_start_backup</function>, + as well as the time at which <function>pg_start_backup</function> was run, and the name of the starting WAL file. In case of confusion it is therefore possible to look inside a backup file and determine exactly which backup session the dump file came from. The tablespace map file includes the symbolic link names as they exist in the directory - <filename>pg_tblspc/</> and the full path of each symbolic link. + <filename>pg_tblspc/</filename> and the full path of each symbolic link. These files are not merely for your information; their presence and contents are critical to the proper operation of the system's recovery process. 
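The exclusions recommended above map directly onto tar options. The sketch below runs against a throwaway mock directory so it can be tried safely; in real use you would point it at the actual cluster directory instead, and the archive file name is an assumption:

```shell
# Build a tiny mock cluster layout (stand-in for the real $PGDATA).
set -e
WORK=$(mktemp -d)
PGDATA="$WORK/data"
mkdir -p "$PGDATA/base" "$PGDATA/pg_wal" "$PGDATA/pg_replslot"
touch "$PGDATA/base/1234" \
      "$PGDATA/pg_wal/000000010000000000000001" \
      "$PGDATA/postmaster.pid" "$PGDATA/postmaster.opts"

# Archive it, keeping pg_wal/ and pg_replslot/ as empty directories and
# skipping the postmaster files, per the recommendations above.
BACKUP_TAR=$(mktemp)
tar -C "$WORK" -cf "$BACKUP_TAR" \
    --exclude='data/pg_wal/*' \
    --exclude='data/pg_replslot/*' \
    --exclude='data/postmaster.pid' \
    --exclude='data/postmaster.opts' \
    data
```

Note that the directories themselves are still archived; only their contents are excluded, matching the advice above.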
@@ -1146,7 +1146,7 @@ SELECT pg_stop_backup(); <para> It is also possible to make a backup while the server is stopped. In this case, you obviously cannot use - <function>pg_start_backup</> or <function>pg_stop_backup</>, and + <function>pg_start_backup</function> or <function>pg_stop_backup</function>, and you will therefore be left to your own devices to keep track of which backup is which and how far back the associated WAL files go. It is generally better to follow the continuous archiving procedure above. @@ -1173,7 +1173,7 @@ SELECT pg_stop_backup(); location in case you need them later. Note that this precaution will require that you have enough free space on your system to hold two copies of your existing database. If you do not have enough space, - you should at least save the contents of the cluster's <filename>pg_wal</> + you should at least save the contents of the cluster's <filename>pg_wal</filename> subdirectory, as it might contain logs which were not archived before the system went down. </para> @@ -1188,17 +1188,17 @@ SELECT pg_stop_backup(); <para> Restore the database files from your file system backup. Be sure that they are restored with the right ownership (the database system user, not - <literal>root</>!) and with the right permissions. If you are using + <literal>root</literal>!) and with the right permissions. If you are using tablespaces, - you should verify that the symbolic links in <filename>pg_tblspc/</> + you should verify that the symbolic links in <filename>pg_tblspc/</filename> were correctly restored. </para> </listitem> <listitem> <para> - Remove any files present in <filename>pg_wal/</>; these came from the + Remove any files present in <filename>pg_wal/</filename>; these came from the file system backup and are therefore probably obsolete rather than current. 
- If you didn't archive <filename>pg_wal/</> at all, then recreate + If you didn't archive <filename>pg_wal/</filename> at all, then recreate it with proper permissions, being careful to ensure that you re-establish it as a symbolic link if you had it set up that way before. @@ -1207,16 +1207,16 @@ SELECT pg_stop_backup(); <listitem> <para> If you have unarchived WAL segment files that you saved in step 2, - copy them into <filename>pg_wal/</>. (It is best to copy them, + copy them into <filename>pg_wal/</filename>. (It is best to copy them, not move them, so you still have the unmodified files if a problem occurs and you have to start over.) </para> </listitem> <listitem> <para> - Create a recovery command file <filename>recovery.conf</> in the cluster + Create a recovery command file <filename>recovery.conf</filename> in the cluster data directory (see <xref linkend="recovery-config">). You might - also want to temporarily modify <filename>pg_hba.conf</> to prevent + also want to temporarily modify <filename>pg_hba.conf</filename> to prevent ordinary users from connecting until you are sure the recovery was successful. </para> </listitem> @@ -1227,7 +1227,7 @@ SELECT pg_stop_backup(); recovery be terminated because of an external error, the server can simply be restarted and it will continue recovery. Upon completion of the recovery process, the server will rename - <filename>recovery.conf</> to <filename>recovery.done</> (to prevent + <filename>recovery.conf</filename> to <filename>recovery.done</filename> (to prevent accidentally re-entering recovery mode later) and then commence normal database operations. </para> @@ -1236,7 +1236,7 @@ SELECT pg_stop_backup(); <para> Inspect the contents of the database to ensure you have recovered to the desired state. If not, return to step 1. If all is well, - allow your users to connect by restoring <filename>pg_hba.conf</> to normal. + allow your users to connect by restoring <filename>pg_hba.conf</filename> to normal. 
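The WAL-handling steps of the restore procedure (clear stale segments, then copy back the segments saved earlier) can be sketched like this; the paths here are throwaway mock directories standing in for the real cluster and saved-WAL locations:

```shell
set -e
PGDATA=$(mktemp -d)          # stand-in for the restored data directory
SAVED_WAL=$(mktemp -d)       # stand-in for segments saved before restoring
mkdir -p "$PGDATA/pg_wal"
touch "$PGDATA/pg_wal/stale_from_backup"
touch "$SAVED_WAL/000000010000000000000002"

# Remove WAL files that came along with the file system backup;
# they are probably obsolete rather than current.
rm -f "$PGDATA"/pg_wal/*
chmod 700 "$PGDATA/pg_wal"

# Copy (not move) the saved, unarchived segments back in, so the
# originals survive if recovery fails and has to be restarted.
cp "$SAVED_WAL"/* "$PGDATA/pg_wal/"
```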
</para> </listitem> </orderedlist> @@ -1245,32 +1245,32 @@ SELECT pg_stop_backup(); <para> The key part of all this is to set up a recovery configuration file that describes how you want to recover and how far the recovery should - run. You can use <filename>recovery.conf.sample</> (normally - located in the installation's <filename>share/</> directory) as a + run. You can use <filename>recovery.conf.sample</filename> (normally + located in the installation's <filename>share/</filename> directory) as a prototype. The one thing that you absolutely must specify in - <filename>recovery.conf</> is the <varname>restore_command</>, - which tells <productname>PostgreSQL</> how to retrieve archived - WAL file segments. Like the <varname>archive_command</>, this is - a shell command string. It can contain <literal>%f</>, which is - replaced by the name of the desired log file, and <literal>%p</>, + <filename>recovery.conf</filename> is the <varname>restore_command</varname>, + which tells <productname>PostgreSQL</productname> how to retrieve archived + WAL file segments. Like the <varname>archive_command</varname>, this is + a shell command string. It can contain <literal>%f</literal>, which is + replaced by the name of the desired log file, and <literal>%p</literal>, which is replaced by the path name to copy the log file to. (The path name is relative to the current working directory, i.e., the cluster's data directory.) - Write <literal>%%</> if you need to embed an actual <literal>%</> + Write <literal>%%</literal> if you need to embed an actual <literal>%</literal> character in the command. The simplest useful command is something like: <programlisting> restore_command = 'cp /mnt/server/archivedir/%f %p' </programlisting> which will copy previously archived WAL segments from the directory - <filename>/mnt/server/archivedir</>. Of course, you can use something + <filename>/mnt/server/archivedir</filename>. 
Of course, you can use something much more complicated, perhaps even a shell script that requests the operator to mount an appropriate tape. </para> <para> It is important that the command return nonzero exit status on failure. - The command <emphasis>will</> be called requesting files that are not + The command <emphasis>will</emphasis> be called requesting files that are not present in the archive; it must return nonzero when so asked. This is not an error condition. An exception is that if the command was terminated by a signal (other than <systemitem>SIGTERM</systemitem>, which is used as @@ -1282,27 +1282,27 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' <para> Not all of the requested files will be WAL segment files; you should also expect requests for files with a suffix of - <literal>.backup</> or <literal>.history</>. Also be aware that - the base name of the <literal>%p</> path will be different from - <literal>%f</>; do not expect them to be interchangeable. + <literal>.backup</literal> or <literal>.history</literal>. Also be aware that + the base name of the <literal>%p</literal> path will be different from + <literal>%f</literal>; do not expect them to be interchangeable. </para> <para> WAL segments that cannot be found in the archive will be sought in - <filename>pg_wal/</>; this allows use of recent un-archived segments. + <filename>pg_wal/</filename>; this allows use of recent un-archived segments. However, segments that are available from the archive will be used in - preference to files in <filename>pg_wal/</>. + preference to files in <filename>pg_wal/</filename>. </para> <para> Normally, recovery will proceed through all available WAL segments, thereby restoring the database to the current point in time (or as close as possible given the available WAL segments). 
Therefore, a normal - recovery will end with a <quote>file not found</> message, the exact text + recovery will end with a <quote>file not found</quote> message, the exact text of the error message depending upon your choice of - <varname>restore_command</>. You may also see an error message + <varname>restore_command</varname>. You may also see an error message at the start of recovery for a file named something like - <filename>00000001.history</>. This is also normal and does not + <filename>00000001.history</filename>. This is also normal and does not indicate a problem in simple recovery situations; see <xref linkend="backup-timelines"> for discussion. </para> @@ -1310,8 +1310,8 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' <para> If you want to recover to some previous point in time (say, right before the junior DBA dropped your main transaction table), just specify the - required <link linkend="recovery-target-settings">stopping point</link> in <filename>recovery.conf</>. You can specify - the stop point, known as the <quote>recovery target</>, either by + required <link linkend="recovery-target-settings">stopping point</link> in <filename>recovery.conf</filename>. You can specify + the stop point, known as the <quote>recovery target</quote>, either by date/time, named restore point or by completion of a specific transaction ID. As of this writing only the date/time and named restore point options are very usable, since there are no tools to help you identify with any @@ -1321,7 +1321,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' <note> <para> The stop point must be after the ending time of the base backup, i.e., - the end time of <function>pg_stop_backup</>. You cannot use a base backup + the end time of <function>pg_stop_backup</function>. You cannot use a base backup to recover to a time when that backup was in progress. (To recover to such a time, you must go back to your previous base backup and roll forward from there.) 
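The required contract (succeed only when the requested file exists, otherwise return nonzero without treating it as a disaster) can be expressed as a small shell function; the function name and script path are illustrations, not an API:

```shell
# restore_from_archive <archivedir> <%f> <%p>
# Mirrors what a restore_command must do: copy the requested file if it
# is present in the archive, otherwise fail with a nonzero status.
# The failure is not an error condition; recovery routinely probes for
# files (including .history files) that may not exist.
restore_from_archive() {
    archivedir=$1 f=$2 p=$3
    if [ -f "$archivedir/$f" ]; then
        cp "$archivedir/$f" "$p"
    else
        return 1
    fi
}

# which could back a recovery.conf line such as (hypothetical path):
# restore_command = '/usr/local/bin/restore_from_archive.sh /mnt/server/archivedir %f %p'
```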
@@ -1332,14 +1332,14 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' If recovery finds corrupted WAL data, recovery will halt at that point and the server will not start. In such a case the recovery process could be re-run from the beginning, specifying a - <quote>recovery target</> before the point of corruption so that recovery + <quote>recovery target</quote> before the point of corruption so that recovery can complete normally. If recovery fails for an external reason, such as a system crash or if the WAL archive has become inaccessible, then the recovery can simply be restarted and it will restart almost from where it failed. Recovery restart works much like checkpointing in normal operation: the server periodically forces all its state to disk, and then updates - the <filename>pg_control</> file to indicate that the already-processed + the <filename>pg_control</filename> file to indicate that the already-processed WAL data need not be scanned again. </para> @@ -1359,7 +1359,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' suppose you dropped a critical table at 5:15PM on Tuesday evening, but didn't realize your mistake until Wednesday noon. Unfazed, you get out your backup, restore to the point-in-time 5:14PM - Tuesday evening, and are up and running. In <emphasis>this</> history of + Tuesday evening, and are up and running. In <emphasis>this</emphasis> history of the database universe, you never dropped the table. But suppose you later realize this wasn't such a great idea, and would like to return to sometime Wednesday morning in the original history. @@ -1372,8 +1372,8 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' </para> <para> - To deal with this problem, <productname>PostgreSQL</> has a notion - of <firstterm>timelines</>. Whenever an archive recovery completes, + To deal with this problem, <productname>PostgreSQL</productname> has a notion + of <firstterm>timelines</firstterm>. 
Whenever an archive recovery completes, a new timeline is created to identify the series of WAL records generated after that recovery. The timeline ID number is part of WAL segment file names so a new timeline does @@ -1384,13 +1384,13 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' and so have to do several point-in-time recoveries by trial and error until you find the best place to branch off from the old history. Without timelines this process would soon generate an unmanageable mess. With - timelines, you can recover to <emphasis>any</> prior state, including + timelines, you can recover to <emphasis>any</emphasis> prior state, including states in timeline branches that you abandoned earlier. </para> <para> - Every time a new timeline is created, <productname>PostgreSQL</> creates - a <quote>timeline history</> file that shows which timeline it branched + Every time a new timeline is created, <productname>PostgreSQL</productname> creates + a <quote>timeline history</quote> file that shows which timeline it branched off from and when. These history files are necessary to allow the system to pick the right WAL segment files when recovering from an archive that contains multiple timelines. Therefore, they are archived into the WAL @@ -1408,7 +1408,7 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' that was current when the base backup was taken. If you wish to recover into some child timeline (that is, you want to return to some state that was itself generated after a recovery attempt), you need to specify the - target timeline ID in <filename>recovery.conf</>. You cannot recover into + target timeline ID in <filename>recovery.conf</filename>. You cannot recover into timelines that branched off earlier than the base backup. 
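Recovering into a child timeline adds just one setting on top of the usual recovery configuration. A minimal sketch, in which the timeline ID 3 and the archive path are assumptions:

```
restore_command = 'cp /mnt/server/archivedir/%f %p'
# '3' is a hypothetical timeline ID, as found in segment file names
# and timeline history files; 'latest' would instead follow the
# newest timeline found in the archive.
recovery_target_timeline = '3'
```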
</para> </sect2> @@ -1424,18 +1424,18 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' <title>Standalone Hot Backups</title> <para> - It is possible to use <productname>PostgreSQL</>'s backup facilities to + It is possible to use <productname>PostgreSQL</productname>'s backup facilities to produce standalone hot backups. These are backups that cannot be used for point-in-time recovery, yet are typically much faster to backup and - restore than <application>pg_dump</> dumps. (They are also much larger - than <application>pg_dump</> dumps, so in some cases the speed advantage + restore than <application>pg_dump</application> dumps. (They are also much larger + than <application>pg_dump</application> dumps, so in some cases the speed advantage might be negated.) </para> <para> As with base backups, the easiest way to produce a standalone hot backup is to use the <xref linkend="app-pgbasebackup"> - tool. If you include the <literal>-X</> parameter when calling + tool. If you include the <literal>-X</literal> parameter when calling it, all the write-ahead log required to use the backup will be included in the backup automatically, and no special action is required to restore the backup. @@ -1445,16 +1445,16 @@ restore_command = 'cp /mnt/server/archivedir/%f %p' If more flexibility in copying the backup files is needed, a lower level process can be used for standalone hot backups as well. To prepare for low level standalone hot backups, make sure - <varname>wal_level</> is set to - <literal>replica</> or higher, <varname>archive_mode</> to - <literal>on</>, and set up an <varname>archive_command</> that performs - archiving only when a <emphasis>switch file</> exists. For example: + <varname>wal_level</varname> is set to + <literal>replica</literal> or higher, <varname>archive_mode</varname> to + <literal>on</literal>, and set up an <varname>archive_command</varname> that performs + archiving only when a <emphasis>switch file</emphasis> exists. 
For example: <programlisting> archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || (test ! -f /var/lib/pgsql/archive/%f && cp %p /var/lib/pgsql/archive/%f)' </programlisting> This command will perform archiving when - <filename>/var/lib/pgsql/backup_in_progress</> exists, and otherwise - silently return zero exit status (allowing <productname>PostgreSQL</> + <filename>/var/lib/pgsql/backup_in_progress</filename> exists, and otherwise + silently return zero exit status (allowing <productname>PostgreSQL</productname> to recycle the unwanted WAL file). </para> @@ -1469,11 +1469,11 @@ psql -c "select pg_stop_backup();" rm /var/lib/pgsql/backup_in_progress tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/ </programlisting> - The switch file <filename>/var/lib/pgsql/backup_in_progress</> is + The switch file <filename>/var/lib/pgsql/backup_in_progress</filename> is created first, enabling archiving of completed WAL files to occur. After the backup the switch file is removed. Archived WAL files are then added to the backup so that both base backup and all required - WAL files are part of the same <application>tar</> file. + WAL files are part of the same <application>tar</application> file. Please remember to add error handling to your backup scripts. 
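The switch-file gate in the archive_command above can be exercised outside PostgreSQL. This is a sketch under throwaway temporary paths (not the /var/lib/pgsql locations in the example); the command shape is the same `test … || (test … && cp …)` chain:

```shell
set -eu
work=$(mktemp -d)
archive=$work/archive; mkdir "$archive"
wal=$work/000000010000000000000001     # stands in for a completed WAL segment (%p)
printf 'wal data' > "$wal"
switch=$work/backup_in_progress        # stands in for the switch file

# Same shape as the archive_command above: with the switch file absent, the
# first test succeeds, so the command exits 0 without copying anything.
test ! -f "$switch" || (test ! -f "$archive/${wal##*/}" && cp "$wal" "$archive/${wal##*/}")

# Create the switch file: now the segment is copied into the archive,
# and the inner test prevents overwriting an already-archived segment.
touch "$switch"
test ! -f "$switch" || (test ! -f "$archive/${wal##*/}" && cp "$wal" "$archive/${wal##*/}")
```

The silent zero exit status in the no-switch-file case is what lets the server recycle WAL segments while no backup is in progress.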
</para> @@ -1488,7 +1488,7 @@ tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/ <programlisting> archive_command = 'gzip < %p > /var/lib/pgsql/archive/%f' </programlisting> - You will then need to use <application>gunzip</> during recovery: + You will then need to use <application>gunzip</application> during recovery: <programlisting> restore_command = 'gunzip < /mnt/server/archivedir/%f > %p' </programlisting> @@ -1501,7 +1501,7 @@ restore_command = 'gunzip < /mnt/server/archivedir/%f > %p' <para> Many people choose to use scripts to define their <varname>archive_command</varname>, so that their - <filename>postgresql.conf</> entry looks very simple: + <filename>postgresql.conf</filename> entry looks very simple: <programlisting> archive_command = 'local_backup_script.sh "%p" "%f"' </programlisting> @@ -1509,7 +1509,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' more than a single command in the archiving process. This allows all complexity to be managed within the script, which can be written in a popular scripting language such as - <application>bash</> or <application>perl</>. + <application>bash</application> or <application>perl</application>. </para> <para> @@ -1543,7 +1543,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' <para> When using an <varname>archive_command</varname> script, it's desirable to enable <xref linkend="guc-logging-collector">. - Any messages written to <systemitem>stderr</> from the script will then + Any messages written to <systemitem>stderr</systemitem> from the script will then appear in the database server log, allowing complex configurations to be diagnosed easily if they fail. 
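The documentation leaves the body of local_backup_script.sh open. One hypothetical shape, combining the gzip compression tip with a refusal to overwrite an existing archive file, is sketched below; the function name, the ARCHIVE_DIR variable, and the demo paths are assumptions for illustration, not part of PostgreSQL:

```shell
set -eu

# Hypothetical core of a local_backup_script.sh invoked as
#   local_backup_script.sh "%p" "%f"
# It compresses the completed segment into $ARCHIVE_DIR and signals
# failure through its exit status, which the archiver checks.
archive_wal() {
    wal_path=$1                      # %p: path to the completed WAL segment
    wal_name=$2                      # %f: bare segment file name
    if [ -e "$ARCHIVE_DIR/$wal_name.gz" ]; then
        echo "archive file already exists: $wal_name.gz" >&2  # goes to the server log
        return 1
    fi
    gzip < "$wal_path" > "$ARCHIVE_DIR/$wal_name.gz"
}

# Demo with throwaway paths standing in for %p and %f:
work=$(mktemp -d)
ARCHIVE_DIR=$work/archive; mkdir "$ARCHIVE_DIR"
printf 'wal data' > "$work/000000010000000000000002"
archive_wal "$work/000000010000000000000002" 000000010000000000000002
```

Because stderr from the script reaches the server log when the logging collector is enabled, the error message on an overwrite attempt is visible to the administrator.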
</para> @@ -1563,7 +1563,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' <para> If a <xref linkend="sql-createdatabase"> command is executed while a base backup is being taken, and then - the template database that the <command>CREATE DATABASE</> copied + the template database that the <command>CREATE DATABASE</command> copied is modified while the base backup is still in progress, it is possible that recovery will cause those modifications to be propagated into the created database as well. This is of course @@ -1602,7 +1602,7 @@ archive_command = 'local_backup_script.sh "%p" "%f"' before you do so.) Turning off page snapshots does not prevent use of the logs for PITR operations. An area for future development is to compress archived WAL data by removing - unnecessary page copies even when <varname>full_page_writes</> is + unnecessary page copies even when <varname>full_page_writes</varname> is on. In the meantime, administrators might wish to reduce the number of page snapshots included in WAL by increasing the checkpoint interval parameters as much as feasible. diff --git a/doc/src/sgml/bgworker.sgml b/doc/src/sgml/bgworker.sgml index ea1b5c0c8e3..0b092f6e492 100644 --- a/doc/src/sgml/bgworker.sgml +++ b/doc/src/sgml/bgworker.sgml @@ -11,17 +11,17 @@ PostgreSQL can be extended to run user-supplied code in separate processes. Such processes are started, stopped and monitored by <command>postgres</command>, which permits them to have a lifetime closely linked to the server's status. - These processes have the option to attach to <productname>PostgreSQL</>'s + These processes have the option to attach to <productname>PostgreSQL</productname>'s shared memory area and to connect to databases internally; they can also run multiple transactions serially, just like a regular client-connected server - process. Also, by linking to <application>libpq</> they can connect to the + process. 
Also, by linking to <application>libpq</application> they can connect to the server and behave like a regular client application. </para> <warning> <para> There are considerable robustness and security risks in using background - worker processes because, being written in the <literal>C</> language, + worker processes because, being written in the <literal>C</literal> language, they have unrestricted access to data. Administrators wishing to enable modules that include background worker processes should exercise extreme caution. Only carefully audited modules should be permitted to run @@ -31,15 +31,15 @@ <para> Background workers can be initialized at the time that - <productname>PostgreSQL</> is started by including the module name in - <varname>shared_preload_libraries</>. A module wishing to run a background + <productname>PostgreSQL</productname> is started by including the module name in + <varname>shared_preload_libraries</varname>. A module wishing to run a background worker can register it by calling <function>RegisterBackgroundWorker(<type>BackgroundWorker *worker</type>)</function> - from its <function>_PG_init()</>. Background workers can also be started + from its <function>_PG_init()</function>. Background workers can also be started after the system is up and running by calling the function <function>RegisterDynamicBackgroundWorker(<type>BackgroundWorker *worker, BackgroundWorkerHandle **handle</type>)</function>. Unlike - <function>RegisterBackgroundWorker</>, which can only be called from within + <function>RegisterBackgroundWorker</function>, which can only be called from within the postmaster, <function>RegisterDynamicBackgroundWorker</function> must be called from a regular backend. 
</para> @@ -65,7 +65,7 @@ typedef struct BackgroundWorker </para> <para> - <structfield>bgw_name</> and <structfield>bgw_type</structfield> are + <structfield>bgw_name</structfield> and <structfield>bgw_type</structfield> are strings to be used in log messages, process listings and similar contexts. <structfield>bgw_type</structfield> should be the same for all background workers of the same type, so that it is possible to group such workers in a @@ -76,7 +76,7 @@ typedef struct BackgroundWorker </para> <para> - <structfield>bgw_flags</> is a bitwise-or'd bit mask indicating the + <structfield>bgw_flags</structfield> is a bitwise-or'd bit mask indicating the capabilities that the module wants. Possible values are: <variablelist> @@ -114,14 +114,14 @@ typedef struct BackgroundWorker <para> <structfield>bgw_start_time</structfield> is the server state during which - <command>postgres</> should start the process; it can be one of - <literal>BgWorkerStart_PostmasterStart</> (start as soon as - <command>postgres</> itself has finished its own initialization; processes + <command>postgres</command> should start the process; it can be one of + <literal>BgWorkerStart_PostmasterStart</literal> (start as soon as + <command>postgres</command> itself has finished its own initialization; processes requesting this are not eligible for database connections), - <literal>BgWorkerStart_ConsistentState</> (start as soon as a consistent state + <literal>BgWorkerStart_ConsistentState</literal> (start as soon as a consistent state has been reached in a hot standby, allowing processes to connect to databases and run read-only queries), and - <literal>BgWorkerStart_RecoveryFinished</> (start as soon as the system has + <literal>BgWorkerStart_RecoveryFinished</literal> (start as soon as the system has entered normal read-write state). Note the last two values are equivalent in a server that's not a hot standby. 
Note that this setting only indicates when the processes are to be started; they do not stop when a different state @@ -152,9 +152,9 @@ typedef struct BackgroundWorker </para> <para> - <structfield>bgw_main_arg</structfield> is the <type>Datum</> argument + <structfield>bgw_main_arg</structfield> is the <type>Datum</type> argument to the background worker main function. This main function should take a - single argument of type <type>Datum</> and return <type>void</>. + single argument of type <type>Datum</type> and return <type>void</type>. <structfield>bgw_main_arg</structfield> will be passed as the argument. In addition, the global variable <literal>MyBgworkerEntry</literal> points to a copy of the <structname>BackgroundWorker</structname> structure @@ -165,39 +165,39 @@ typedef struct BackgroundWorker <para> On Windows (and anywhere else where <literal>EXEC_BACKEND</literal> is defined) or in dynamic background workers it is not safe to pass a - <type>Datum</> by reference, only by value. If an argument is required, it + <type>Datum</type> by reference, only by value. If an argument is required, it is safest to pass an int32 or other small value and use that as an index - into an array allocated in shared memory. If a value like a <type>cstring</> + into an array allocated in shared memory. If a value like a <type>cstring</type> or <type>text</type> is passed then the pointer won't be valid from the new background worker process. </para> <para> <structfield>bgw_extra</structfield> can contain extra data to be passed - to the background worker. Unlike <structfield>bgw_main_arg</>, this data + to the background worker. Unlike <structfield>bgw_main_arg</structfield>, this data is not passed as an argument to the worker's main function, but it can be accessed via <literal>MyBgworkerEntry</literal>, as discussed above. 
</para> <para> <structfield>bgw_notify_pid</structfield> is the PID of a PostgreSQL - backend process to which the postmaster should send <literal>SIGUSR1</> + backend process to which the postmaster should send <literal>SIGUSR1</literal> when the process is started or exits. It should be 0 for workers registered at postmaster startup time, or when the backend registering the worker does not wish to wait for the worker to start up. Otherwise, it should be - initialized to <literal>MyProcPid</>. + initialized to <literal>MyProcPid</literal>. </para> <para>Once running, the process can connect to a database by calling <function>BackgroundWorkerInitializeConnection(<parameter>char *dbname</parameter>, <parameter>char *username</parameter>)</function> or <function>BackgroundWorkerInitializeConnectionByOid(<parameter>Oid dboid</parameter>, <parameter>Oid useroid</parameter>)</function>. This allows the process to run transactions and queries using the - <literal>SPI</literal> interface. If <varname>dbname</> is NULL or - <varname>dboid</> is <literal>InvalidOid</>, the session is not connected + <literal>SPI</literal> interface. If <varname>dbname</varname> is NULL or + <varname>dboid</varname> is <literal>InvalidOid</literal>, the session is not connected to any particular database, but shared catalogs can be accessed. - If <varname>username</> is NULL or <varname>useroid</> is - <literal>InvalidOid</>, the process will run as the superuser created - during <command>initdb</>. + If <varname>username</varname> is NULL or <varname>useroid</varname> is + <literal>InvalidOid</literal>, the process will run as the superuser created + during <command>initdb</command>. A background worker can only call one of these two functions, and only once. It is not possible to switch databases. 
</para> @@ -207,24 +207,24 @@ typedef struct BackgroundWorker background worker's main function, and must be unblocked by it; this is to allow the process to customize its signal handlers, if necessary. Signals can be unblocked in the new process by calling - <function>BackgroundWorkerUnblockSignals</> and blocked by calling - <function>BackgroundWorkerBlockSignals</>. + <function>BackgroundWorkerUnblockSignals</function> and blocked by calling + <function>BackgroundWorkerBlockSignals</function>. </para> <para> If <structfield>bgw_restart_time</structfield> for a background worker is - configured as <literal>BGW_NEVER_RESTART</>, or if it exits with an exit - code of 0 or is terminated by <function>TerminateBackgroundWorker</>, + configured as <literal>BGW_NEVER_RESTART</literal>, or if it exits with an exit + code of 0 or is terminated by <function>TerminateBackgroundWorker</function>, it will be automatically unregistered by the postmaster on exit. Otherwise, it will be restarted after the time period configured via - <structfield>bgw_restart_time</>, or immediately if the postmaster + <structfield>bgw_restart_time</structfield>, or immediately if the postmaster reinitializes the cluster due to a backend failure. Backends which need to suspend execution only temporarily should use an interruptible sleep rather than exiting; this can be achieved by calling <function>WaitLatch()</function>. Make sure the - <literal>WL_POSTMASTER_DEATH</> flag is set when calling that function, and + <literal>WL_POSTMASTER_DEATH</literal> flag is set when calling that function, and verify the return code for a prompt exit in the emergency case that - <command>postgres</> itself has terminated. + <command>postgres</command> itself has terminated. 
</para> <para> @@ -238,29 +238,29 @@ typedef struct BackgroundWorker opaque handle that can subsequently be passed to <function>GetBackgroundWorkerPid(<parameter>BackgroundWorkerHandle *</parameter>, <parameter>pid_t *</parameter>)</function> or <function>TerminateBackgroundWorker(<parameter>BackgroundWorkerHandle *</parameter>)</function>. - <function>GetBackgroundWorkerPid</> can be used to poll the status of the - worker: a return value of <literal>BGWH_NOT_YET_STARTED</> indicates that + <function>GetBackgroundWorkerPid</function> can be used to poll the status of the + worker: a return value of <literal>BGWH_NOT_YET_STARTED</literal> indicates that the worker has not yet been started by the postmaster; <literal>BGWH_STOPPED</literal> indicates that it has been started but is no longer running; and <literal>BGWH_STARTED</literal> indicates that it is currently running. In this last case, the PID will also be returned via the second argument. - <function>TerminateBackgroundWorker</> causes the postmaster to send - <literal>SIGTERM</> to the worker if it is running, and to unregister it + <function>TerminateBackgroundWorker</function> causes the postmaster to send + <literal>SIGTERM</literal> to the worker if it is running, and to unregister it as soon as it is not. </para> <para> In some cases, a process which registers a background worker may wish to wait for the worker to start up. This can be accomplished by initializing - <structfield>bgw_notify_pid</structfield> to <literal>MyProcPid</> and + <structfield>bgw_notify_pid</structfield> to <literal>MyProcPid</literal> and then passing the <type>BackgroundWorkerHandle *</type> obtained at registration time to <function>WaitForBackgroundWorkerStartup(<parameter>BackgroundWorkerHandle *handle</parameter>, <parameter>pid_t *</parameter>)</function> function. This function will block until the postmaster has attempted to start the background worker, or until the postmaster dies. 
If the background worker - is running, the return value will be <literal>BGWH_STARTED</>, and + is running, the return value will be <literal>BGWH_STARTED</literal>, and the PID will be written to the provided address. Otherwise, the return value will be <literal>BGWH_STOPPED</literal> or <literal>BGWH_POSTMASTER_DIED</literal>. @@ -279,7 +279,7 @@ typedef struct BackgroundWorker </para> <para> - The <filename>src/test/modules/worker_spi</> module + The <filename>src/test/modules/worker_spi</filename> module contains a working example, which demonstrates some useful techniques. </para> diff --git a/doc/src/sgml/biblio.sgml b/doc/src/sgml/biblio.sgml index 5462bc38e44..d7547e6e921 100644 --- a/doc/src/sgml/biblio.sgml +++ b/doc/src/sgml/biblio.sgml @@ -171,7 +171,7 @@ ssimkovi@ag.or.at <abstract> <para> Discusses SQL history and syntax, and describes the addition of - <literal>INTERSECT</> and <literal>EXCEPT</> constructs into + <literal>INTERSECT</literal> and <literal>EXCEPT</literal> constructs into <productname>PostgreSQL</productname>. Prepared as a Master's Thesis with the support of O. Univ. Prof. Dr. Georg Gottlob and Univ. Ass. Mag. Katrin Seyr at Vienna University of Technology. diff --git a/doc/src/sgml/bki.sgml b/doc/src/sgml/bki.sgml index af6d8d1d2a9..33378b46eaa 100644 --- a/doc/src/sgml/bki.sgml +++ b/doc/src/sgml/bki.sgml @@ -21,7 +21,7 @@ input file used by <application>initdb</application> is created as part of building and installing <productname>PostgreSQL</productname> by a program named <filename>genbki.pl</filename>, which reads some - specially formatted C header files in the <filename>src/include/catalog/</> + specially formatted C header files in the <filename>src/include/catalog/</filename> directory of the source tree. 
The created <acronym>BKI</acronym> file is called <filename>postgres.bki</filename> and is normally installed in the @@ -67,13 +67,13 @@ <variablelist> <varlistentry> <term> - <literal>create</> + <literal>create</literal> <replaceable class="parameter">tablename</replaceable> <replaceable class="parameter">tableoid</replaceable> - <optional><literal>bootstrap</></optional> - <optional><literal>shared_relation</></optional> - <optional><literal>without_oids</></optional> - <optional><literal>rowtype_oid</> <replaceable>oid</></optional> + <optional><literal>bootstrap</literal></optional> + <optional><literal>shared_relation</literal></optional> + <optional><literal>without_oids</literal></optional> + <optional><literal>rowtype_oid</literal> <replaceable>oid</replaceable></optional> (<replaceable class="parameter">name1</replaceable> = <replaceable class="parameter">type1</replaceable> <optional>FORCE NOT NULL | FORCE NULL </optional> <optional>, @@ -93,7 +93,7 @@ <para> The following column types are supported directly by - <filename>bootstrap.c</>: <type>bool</type>, + <filename>bootstrap.c</filename>: <type>bool</type>, <type>bytea</type>, <type>char</type> (1 byte), <type>name</type>, <type>int2</type>, <type>int4</type>, <type>regproc</type>, <type>regclass</type>, @@ -104,31 +104,31 @@ <type>_oid</type> (array), <type>_char</type> (array), <type>_aclitem</type> (array). Although it is possible to create tables containing columns of other types, this cannot be done until - after <structname>pg_type</> has been created and filled with + after <structname>pg_type</structname> has been created and filled with appropriate entries. (That effectively means that only these column types can be used in bootstrapped tables, but non-bootstrap catalogs can contain any built-in type.) 
</para> <para> - When <literal>bootstrap</> is specified, + When <literal>bootstrap</literal> is specified, the table will only be created on disk; nothing is entered into <structname>pg_class</structname>, <structname>pg_attribute</structname>, etc, for it. Thus the table will not be accessible by ordinary SQL operations until - such entries are made the hard way (with <literal>insert</> + such entries are made the hard way (with <literal>insert</literal> commands). This option is used for creating <structname>pg_class</structname> etc themselves. </para> <para> - The table is created as shared if <literal>shared_relation</> is + The table is created as shared if <literal>shared_relation</literal> is specified. - It will have OIDs unless <literal>without_oids</> is specified. - The table's row type OID (<structname>pg_type</> OID) can optionally - be specified via the <literal>rowtype_oid</> clause; if not specified, - an OID is automatically generated for it. (The <literal>rowtype_oid</> - clause is useless if <literal>bootstrap</> is specified, but it can be + It will have OIDs unless <literal>without_oids</literal> is specified. + The table's row type OID (<structname>pg_type</structname> OID) can optionally + be specified via the <literal>rowtype_oid</literal> clause; if not specified, + an OID is automatically generated for it. (The <literal>rowtype_oid</literal> + clause is useless if <literal>bootstrap</literal> is specified, but it can be provided anyway for documentation.) 
</para> </listitem> @@ -136,7 +136,7 @@ <varlistentry> <term> - <literal>open</> <replaceable class="parameter">tablename</replaceable> + <literal>open</literal> <replaceable class="parameter">tablename</replaceable> </term> <listitem> @@ -150,7 +150,7 @@ <varlistentry> <term> - <literal>close</> <optional><replaceable class="parameter">tablename</replaceable></optional> + <literal>close</literal> <optional><replaceable class="parameter">tablename</replaceable></optional> </term> <listitem> @@ -163,7 +163,7 @@ <varlistentry> <term> - <literal>insert</> <optional><literal>OID =</> <replaceable class="parameter">oid_value</replaceable></optional> <literal>(</> <replaceable class="parameter">value1</replaceable> <replaceable class="parameter">value2</replaceable> ... <literal>)</> + <literal>insert</literal> <optional><literal>OID =</literal> <replaceable class="parameter">oid_value</replaceable></optional> <literal>(</literal> <replaceable class="parameter">value1</replaceable> <replaceable class="parameter">value2</replaceable> ... 
<literal>)</literal> </term> <listitem> @@ -188,14 +188,14 @@ <varlistentry> <term> - <literal>declare</> <optional><literal>unique</></optional> - <literal>index</> <replaceable class="parameter">indexname</replaceable> + <literal>declare</literal> <optional><literal>unique</literal></optional> + <literal>index</literal> <replaceable class="parameter">indexname</replaceable> <replaceable class="parameter">indexoid</replaceable> - <literal>on</> <replaceable class="parameter">tablename</replaceable> - <literal>using</> <replaceable class="parameter">amname</replaceable> - <literal>(</> <replaceable class="parameter">opclass1</replaceable> + <literal>on</literal> <replaceable class="parameter">tablename</replaceable> + <literal>using</literal> <replaceable class="parameter">amname</replaceable> + <literal>(</literal> <replaceable class="parameter">opclass1</replaceable> <replaceable class="parameter">name1</replaceable> - <optional>, ...</optional> <literal>)</> + <optional>, ...</optional> <literal>)</literal> </term> <listitem> @@ -220,10 +220,10 @@ <varlistentry> <term> - <literal>declare toast</> + <literal>declare toast</literal> <replaceable class="parameter">toasttableoid</replaceable> <replaceable class="parameter">toastindexoid</replaceable> - <literal>on</> <replaceable class="parameter">tablename</replaceable> + <literal>on</literal> <replaceable class="parameter">tablename</replaceable> </term> <listitem> @@ -234,14 +234,14 @@ <replaceable class="parameter">toasttableoid</replaceable> and its index is assigned OID <replaceable class="parameter">toastindexoid</replaceable>. - As with <literal>declare index</>, filling of the index + As with <literal>declare index</literal>, filling of the index is postponed. 
</para> </listitem> </varlistentry> <varlistentry> - <term><literal>build indices</></term> + <term><literal>build indices</literal></term> <listitem> <para> @@ -257,17 +257,17 @@ <title>Structure of the Bootstrap <acronym>BKI</acronym> File</title> <para> - The <literal>open</> command cannot be used until the tables it uses + The <literal>open</literal> command cannot be used until the tables it uses exist and have entries for the table that is to be opened. - (These minimum tables are <structname>pg_class</>, - <structname>pg_attribute</>, <structname>pg_proc</>, and - <structname>pg_type</>.) To allow those tables themselves to be filled, - <literal>create</> with the <literal>bootstrap</> option implicitly opens + (These minimum tables are <structname>pg_class</structname>, + <structname>pg_attribute</structname>, <structname>pg_proc</structname>, and + <structname>pg_type</structname>.) To allow those tables themselves to be filled, + <literal>create</literal> with the <literal>bootstrap</literal> option implicitly opens the created table for data insertion. </para> <para> - Also, the <literal>declare index</> and <literal>declare toast</> + Also, the <literal>declare index</literal> and <literal>declare toast</literal> commands cannot be used until the system catalogs they need have been created and filled in. 
</para> @@ -278,17 +278,17 @@ <orderedlist> <listitem> <para> - <literal>create bootstrap</> one of the critical tables + <literal>create bootstrap</literal> one of the critical tables </para> </listitem> <listitem> <para> - <literal>insert</> data describing at least the critical tables + <literal>insert</literal> data describing at least the critical tables </para> </listitem> <listitem> <para> - <literal>close</> + <literal>close</literal> </para> </listitem> <listitem> @@ -298,22 +298,22 @@ </listitem> <listitem> <para> - <literal>create</> (without <literal>bootstrap</>) a noncritical table + <literal>create</literal> (without <literal>bootstrap</literal>) a noncritical table </para> </listitem> <listitem> <para> - <literal>open</> + <literal>open</literal> </para> </listitem> <listitem> <para> - <literal>insert</> desired data + <literal>insert</literal> desired data </para> </listitem> <listitem> <para> - <literal>close</> + <literal>close</literal> </para> </listitem> <listitem> @@ -328,7 +328,7 @@ </listitem> <listitem> <para> - <literal>build indices</> + <literal>build indices</literal> </para> </listitem> </orderedlist> diff --git a/doc/src/sgml/bloom.sgml b/doc/src/sgml/bloom.sgml index 396348c5237..e13ebf80fdf 100644 --- a/doc/src/sgml/bloom.sgml +++ b/doc/src/sgml/bloom.sgml @@ -8,7 +8,7 @@ </indexterm> <para> - <literal>bloom</> provides an index access method based on + <literal>bloom</literal> provides an index access method based on <ulink url="http://en.wikipedia.org/wiki/Bloom_filter">Bloom filters</ulink>. </para> @@ -42,29 +42,29 @@ <title>Parameters</title> <para> - A <literal>bloom</> index accepts the following parameters in its - <literal>WITH</> clause: + A <literal>bloom</literal> index accepts the following parameters in its + <literal>WITH</literal> clause: </para> <variablelist> <varlistentry> - <term><literal>length</></term> + <term><literal>length</literal></term> <listitem> <para> Length of each signature (index entry) in bits. 
The default - is <literal>80</> bits and maximum is <literal>4096</>. + is <literal>80</literal> bits and maximum is <literal>4096</literal>. </para> </listitem> </varlistentry> </variablelist> <variablelist> <varlistentry> - <term><literal>col1 — col32</></term> + <term><literal>col1 — col32</literal></term> <listitem> <para> Number of bits generated for each index column. Each parameter's name refers to the number of the index column that it controls. The default - is <literal>2</> bits and maximum is <literal>4095</>. Parameters for + is <literal>2</literal> bits and maximum is <literal>4095</literal>. Parameters for index columns not actually used are ignored. </para> </listitem> @@ -87,8 +87,8 @@ CREATE INDEX bloomidx ON tbloom USING bloom (i1,i2,i3) <para> The index is created with a signature length of 80 bits, with attributes i1 and i2 mapped to 2 bits, and attribute i3 mapped to 4 bits. We could - have omitted the <literal>length</>, <literal>col1</>, - and <literal>col2</> specifications since those have the default values. + have omitted the <literal>length</literal>, <literal>col1</literal>, + and <literal>col2</literal> specifications since those have the default values. </para> <para> @@ -175,7 +175,7 @@ CREATE INDEX Note the relatively large number of false positives: 2439 rows were selected to be visited in the heap, but none actually matched the query. We could reduce that by specifying a larger signature length. - In this example, creating the index with <literal>length=200</> + In this example, creating the index with <literal>length=200</literal> reduced the number of false positives to 55; but it doubled the index size (to 306 MB) and ended up being slower for this query (125 ms overall). </para> @@ -213,7 +213,7 @@ CREATE INDEX <para> An operator class for bloom indexes requires only a hash function for the indexed data type and an equality operator for searching. 
This example - shows the operator class definition for the <type>text</> data type: + shows the operator class definition for the <type>text</type> data type: </para> <programlisting> @@ -230,7 +230,7 @@ DEFAULT FOR TYPE text USING bloom AS <itemizedlist> <listitem> <para> - Only operator classes for <type>int4</> and <type>text</> are + Only operator classes for <type>int4</type> and <type>text</type> are included with the module. </para> </listitem> diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml index 8dcc29925bc..91c01700ed5 100644 --- a/doc/src/sgml/brin.sgml +++ b/doc/src/sgml/brin.sgml @@ -16,7 +16,7 @@ <acronym>BRIN</acronym> is designed for handling very large tables in which certain columns have some natural correlation with their physical location within the table. - A <firstterm>block range</> is a group of pages that are physically + A <firstterm>block range</firstterm> is a group of pages that are physically adjacent in the table; for each block range, some summary info is stored by the index. For example, a table storing a store's sale orders might have @@ -29,7 +29,7 @@ <para> <acronym>BRIN</acronym> indexes can satisfy queries via regular bitmap index scans, and will return all tuples in all pages within each range if - the summary info stored by the index is <firstterm>consistent</> with the + the summary info stored by the index is <firstterm>consistent</firstterm> with the query conditions. The query executor is in charge of rechecking these tuples and discarding those that do not match the query conditions — in other words, these @@ -51,9 +51,9 @@ <para> The size of the block range is determined at index creation time by - the <literal>pages_per_range</> storage parameter. The number of index + the <literal>pages_per_range</literal> storage parameter. The number of index entries will be equal to the size of the relation in pages divided by - the selected value for <literal>pages_per_range</>. 
Therefore, the smaller + the selected value for <literal>pages_per_range</literal>. Therefore, the smaller the number, the larger the index becomes (because of the need to store more index entries), but at the same time the summary data stored can be more precise and more data blocks can be skipped during an index scan. @@ -99,9 +99,9 @@ </para> <para> - The <firstterm>minmax</> + The <firstterm>minmax</firstterm> operator classes store the minimum and the maximum values appearing - in the indexed column within the range. The <firstterm>inclusion</> + in the indexed column within the range. The <firstterm>inclusion</firstterm> operator classes store a value which includes the values in the indexed column within the range. </para> @@ -162,21 +162,21 @@ </entry> </row> <row> - <entry><literal>box_inclusion_ops</></entry> + <entry><literal>box_inclusion_ops</literal></entry> <entry><type>box</type></entry> <entry> - <literal><<</> - <literal>&<</> - <literal>&&</> - <literal>&></> - <literal>>></> - <literal>~=</> - <literal>@></> - <literal><@</> - <literal>&<|</> - <literal><<|</> + <literal><<</literal> + <literal>&<</literal> + <literal>&&</literal> + <literal>&></literal> + <literal>>></literal> + <literal>~=</literal> + <literal>@></literal> + <literal><@</literal> + <literal>&<|</literal> + <literal><<|</literal> <literal>|>></literal> - <literal>|&></> + <literal>|&></literal> </entry> </row> <row> @@ -249,11 +249,11 @@ <entry><literal>network_inclusion_ops</literal></entry> <entry><type>inet</type></entry> <entry> - <literal>&&</> - <literal>>>=</> + <literal>&&</literal> + <literal>>>=</literal> <literal><<=</literal> <literal>=</literal> - <literal>>></> + <literal>>></literal> <literal><<</literal> </entry> </row> @@ -346,18 +346,18 @@ </entry> </row> <row> - <entry><literal>range_inclusion_ops</></entry> + <entry><literal>range_inclusion_ops</literal></entry> <entry><type>any range type</type></entry> <entry> - <literal><<</> - <literal>&<</> - 
<literal>&&</> - <literal>&></> - <literal>>></> - <literal>@></> - <literal><@</> - <literal>-|-</> - <literal>=</> + <literal><<</literal> + <literal>&<</literal> + <literal>&&</literal> + <literal>&></literal> + <literal>>></literal> + <literal>@></literal> + <literal><@</literal> + <literal>-|-</literal> + <literal>=</literal> <literal><</literal> <literal><=</literal> <literal>=</literal> @@ -505,11 +505,11 @@ <variablelist> <varlistentry> - <term><function>BrinOpcInfo *opcInfo(Oid type_oid)</></term> + <term><function>BrinOpcInfo *opcInfo(Oid type_oid)</function></term> <listitem> <para> Returns internal information about the indexed columns' summary data. - The return value must point to a palloc'd <structname>BrinOpcInfo</>, + The return value must point to a palloc'd <structname>BrinOpcInfo</structname>, which has this definition: <programlisting> typedef struct BrinOpcInfo @@ -524,7 +524,7 @@ typedef struct BrinOpcInfo TypeCacheEntry *oi_typcache[FLEXIBLE_ARRAY_MEMBER]; } BrinOpcInfo; </programlisting> - <structname>BrinOpcInfo</>.<structfield>oi_opaque</> can be used by the + <structname>BrinOpcInfo</structname>.<structfield>oi_opaque</structfield> can be used by the operator class routines to pass information between support procedures during an index scan. </para> @@ -797,8 +797,8 @@ typedef struct BrinOpcInfo It should accept two arguments with the same data type as the operator class, and return the union of them. The inclusion operator class can store union values with different data types if it is defined with the - <literal>STORAGE</> parameter. The return value of the union - function should match the <literal>STORAGE</> data type. + <literal>STORAGE</literal> parameter. The return value of the union + function should match the <literal>STORAGE</literal> data type. 
</para> <para> @@ -823,11 +823,11 @@ typedef struct BrinOpcInfo on another operator strategy as shown in <xref linkend="brin-extensibility-inclusion-table">, or the same operator strategy as themselves. They require the dependency - operator to be defined with the <literal>STORAGE</> data type as the + operator to be defined with the <literal>STORAGE</literal> data type as the left-hand-side argument and the other supported data type to be the right-hand-side argument of the supported operator. See - <literal>float4_minmax_ops</> as an example of minmax, and - <literal>box_inclusion_ops</> as an example of inclusion. + <literal>float4_minmax_ops</literal> as an example of minmax, and + <literal>box_inclusion_ops</literal> as an example of inclusion. </para> </sect1> </chapter> diff --git a/doc/src/sgml/btree-gin.sgml b/doc/src/sgml/btree-gin.sgml index 375e7ec4be6..e491fa76e7d 100644 --- a/doc/src/sgml/btree-gin.sgml +++ b/doc/src/sgml/btree-gin.sgml @@ -8,16 +8,16 @@ </indexterm> <para> - <filename>btree_gin</> provides sample GIN operator classes that + <filename>btree_gin</filename> provides sample GIN operator classes that implement B-tree equivalent behavior for the data types - <type>int2</>, <type>int4</>, <type>int8</>, <type>float4</>, - <type>float8</>, <type>timestamp with time zone</>, - <type>timestamp without time zone</>, <type>time with time zone</>, - <type>time without time zone</>, <type>date</>, <type>interval</>, - <type>oid</>, <type>money</>, <type>"char"</>, - <type>varchar</>, <type>text</>, <type>bytea</>, <type>bit</>, - <type>varbit</>, <type>macaddr</>, <type>macaddr8</>, <type>inet</>, - <type>cidr</>, and all <type>enum</> types. 
+ <type>int2</type>, <type>int4</type>, <type>int8</type>, <type>float4</type>, + <type>float8</type>, <type>timestamp with time zone</type>, + <type>timestamp without time zone</type>, <type>time with time zone</type>, + <type>time without time zone</type>, <type>date</type>, <type>interval</type>, + <type>oid</type>, <type>money</type>, <type>"char"</type>, + <type>varchar</type>, <type>text</type>, <type>bytea</type>, <type>bit</type>, + <type>varbit</type>, <type>macaddr</type>, <type>macaddr8</type>, <type>inet</type>, + <type>cidr</type>, and all <type>enum</type> types. </para> <para> diff --git a/doc/src/sgml/btree-gist.sgml b/doc/src/sgml/btree-gist.sgml index f3c639c2f3b..dcb939f1fbf 100644 --- a/doc/src/sgml/btree-gist.sgml +++ b/doc/src/sgml/btree-gist.sgml @@ -8,16 +8,16 @@ </indexterm> <para> - <filename>btree_gist</> provides GiST index operator classes that + <filename>btree_gist</filename> provides GiST index operator classes that implement B-tree equivalent behavior for the data types - <type>int2</>, <type>int4</>, <type>int8</>, <type>float4</>, - <type>float8</>, <type>numeric</>, <type>timestamp with time zone</>, - <type>timestamp without time zone</>, <type>time with time zone</>, - <type>time without time zone</>, <type>date</>, <type>interval</>, - <type>oid</>, <type>money</>, <type>char</>, - <type>varchar</>, <type>text</>, <type>bytea</>, <type>bit</>, - <type>varbit</>, <type>macaddr</>, <type>macaddr8</>, <type>inet</>, - <type>cidr</>, <type>uuid</>, and all <type>enum</> types. 
+ <type>int2</type>, <type>int4</type>, <type>int8</type>, <type>float4</type>, + <type>float8</type>, <type>numeric</type>, <type>timestamp with time zone</type>, + <type>timestamp without time zone</type>, <type>time with time zone</type>, + <type>time without time zone</type>, <type>date</type>, <type>interval</type>, + <type>oid</type>, <type>money</type>, <type>char</type>, + <type>varchar</type>, <type>text</type>, <type>bytea</type>, <type>bit</type>, + <type>varbit</type>, <type>macaddr</type>, <type>macaddr8</type>, <type>inet</type>, + <type>cidr</type>, <type>uuid</type>, and all <type>enum</type> types. </para> <para> @@ -33,7 +33,7 @@ </para> <para> - In addition to the typical B-tree search operators, <filename>btree_gist</> + In addition to the typical B-tree search operators, <filename>btree_gist</filename> also provides index support for <literal><></literal> (<quote>not equals</quote>). This may be useful in combination with an <link linkend="SQL-CREATETABLE-EXCLUDE">exclusion constraint</link>, @@ -42,14 +42,14 @@ <para> Also, for data types for which there is a natural distance metric, - <filename>btree_gist</> defines a distance operator <literal><-></>, + <filename>btree_gist</filename> defines a distance operator <literal><-></literal>, and provides GiST index support for nearest-neighbor searches using this operator. Distance operators are provided for - <type>int2</>, <type>int4</>, <type>int8</>, <type>float4</>, - <type>float8</>, <type>timestamp with time zone</>, - <type>timestamp without time zone</>, - <type>time without time zone</>, <type>date</>, <type>interval</>, - <type>oid</>, and <type>money</>. + <type>int2</type>, <type>int4</type>, <type>int8</type>, <type>float4</type>, + <type>float8</type>, <type>timestamp with time zone</type>, + <type>timestamp without time zone</type>, + <type>time without time zone</type>, <type>date</type>, <type>interval</type>, + <type>oid</type>, and <type>money</type>. 
</para> <sect2> diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml index cfec2465d26..ef60a58631a 100644 --- a/doc/src/sgml/catalogs.sgml +++ b/doc/src/sgml/catalogs.sgml @@ -387,7 +387,7 @@ </para> <table> - <title><structname>pg_aggregate</> Columns</title> + <title><structname>pg_aggregate</structname> Columns</title> <tgroup cols="4"> <thead> @@ -410,9 +410,9 @@ <entry><type>char</type></entry> <entry></entry> <entry>Aggregate kind: - <literal>n</literal> for <quote>normal</> aggregates, - <literal>o</literal> for <quote>ordered-set</> aggregates, or - <literal>h</literal> for <quote>hypothetical-set</> aggregates + <literal>n</literal> for <quote>normal</quote> aggregates, + <literal>o</literal> for <quote>ordered-set</quote> aggregates, or + <literal>h</literal> for <quote>hypothetical-set</quote> aggregates </entry> </row> <row> @@ -421,7 +421,7 @@ <entry></entry> <entry>Number of direct (non-aggregated) arguments of an ordered-set or hypothetical-set aggregate, counting a variadic array as one argument. - If equal to <structfield>pronargs</>, the aggregate must be variadic + If equal to <structfield>pronargs</structfield>, the aggregate must be variadic and the variadic array describes the aggregated arguments as well as the final direct arguments. Always zero for normal aggregates.</entry> @@ -592,7 +592,7 @@ </para> <table> - <title><structname>pg_am</> Columns</title> + <title><structname>pg_am</structname> Columns</title> <tgroup cols="4"> <thead> @@ -644,7 +644,7 @@ <note> <para> - Before <productname>PostgreSQL</> 9.6, <structname>pg_am</structname> + Before <productname>PostgreSQL</productname> 9.6, <structname>pg_am</structname> contained many additional columns representing properties of index access methods. That data is now only directly visible at the C code level. 
However, <function>pg_index_column_has_property()</function> and related @@ -667,8 +667,8 @@ The catalog <structname>pg_amop</structname> stores information about operators associated with access method operator families. There is one row for each operator that is a member of an operator family. A family - member can be either a <firstterm>search</> operator or an - <firstterm>ordering</> operator. An operator + member can be either a <firstterm>search</firstterm> operator or an + <firstterm>ordering</firstterm> operator. An operator can appear in more than one family, but cannot appear in more than one search position nor more than one ordering position within a family. (It is allowed, though unlikely, for an operator to be used for both @@ -676,7 +676,7 @@ </para> <table> - <title><structname>pg_amop</> Columns</title> + <title><structname>pg_amop</structname> Columns</title> <tgroup cols="4"> <thead> @@ -728,8 +728,8 @@ <entry><structfield>amoppurpose</structfield></entry> <entry><type>char</type></entry> <entry></entry> - <entry>Operator purpose, either <literal>s</> for search or - <literal>o</> for ordering</entry> + <entry>Operator purpose, either <literal>s</literal> for search or + <literal>o</literal> for ordering</entry> </row> <row> @@ -759,26 +759,26 @@ </table> <para> - A <quote>search</> operator entry indicates that an index of this operator + A <quote>search</quote> operator entry indicates that an index of this operator family can be searched to find all rows satisfying - <literal>WHERE</> - <replaceable>indexed_column</> - <replaceable>operator</> - <replaceable>constant</>. + <literal>WHERE</literal> + <replaceable>indexed_column</replaceable> + <replaceable>operator</replaceable> + <replaceable>constant</replaceable>. Obviously, such an operator must return <type>boolean</type>, and its left-hand input type must match the index's column data type. 
</para> <para> - An <quote>ordering</> operator entry indicates that an index of this + An <quote>ordering</quote> operator entry indicates that an index of this operator family can be scanned to return rows in the order represented by - <literal>ORDER BY</> - <replaceable>indexed_column</> - <replaceable>operator</> - <replaceable>constant</>. + <literal>ORDER BY</literal> + <replaceable>indexed_column</replaceable> + <replaceable>operator</replaceable> + <replaceable>constant</replaceable>. Such an operator could return any sortable data type, though again its left-hand input type must match the index's column data type. - The exact semantics of the <literal>ORDER BY</> are specified by the + The exact semantics of the <literal>ORDER BY</literal> are specified by the <structfield>amopsortfamily</structfield> column, which must reference a B-tree operator family for the operator's result type. </para> @@ -787,19 +787,19 @@ <para> At present, it's assumed that the sort order for an ordering operator is the default for the referenced operator family, i.e., <literal>ASC NULLS - LAST</>. This might someday be relaxed by adding additional columns + LAST</literal>. This might someday be relaxed by adding additional columns to specify sort options explicitly. </para> </note> <para> - An entry's <structfield>amopmethod</> must match the - <structname>opfmethod</> of its containing operator family (including - <structfield>amopmethod</> here is an intentional denormalization of the + An entry's <structfield>amopmethod</structfield> must match the + <structname>opfmethod</structname> of its containing operator family (including + <structfield>amopmethod</structfield> here is an intentional denormalization of the catalog structure for performance reasons). Also, - <structfield>amoplefttype</> and <structfield>amoprighttype</> must match - the <structfield>oprleft</> and <structfield>oprright</> fields of the - referenced <structname>pg_operator</> entry. 
+ <structfield>amoplefttype</structfield> and <structfield>amoprighttype</structfield> must match + the <structfield>oprleft</structfield> and <structfield>oprright</structfield> fields of the + referenced <structname>pg_operator</structname> entry. </para> </sect1> @@ -880,14 +880,14 @@ <para> The usual interpretation of the - <structfield>amproclefttype</> and <structfield>amprocrighttype</> fields + <structfield>amproclefttype</structfield> and <structfield>amprocrighttype</structfield> fields is that they identify the left and right input types of the operator(s) that a particular support procedure supports. For some access methods these match the input data type(s) of the support procedure itself, for - others not. There is a notion of <quote>default</> support procedures for - an index, which are those with <structfield>amproclefttype</> and - <structfield>amprocrighttype</> both equal to the index operator class's - <structfield>opcintype</>. + others not. There is a notion of <quote>default</quote> support procedures for + an index, which are those with <structfield>amproclefttype</structfield> and + <structfield>amprocrighttype</structfield> both equal to the index operator class's + <structfield>opcintype</structfield>. </para> </sect1> @@ -909,7 +909,7 @@ </para> <table> - <title><structname>pg_attrdef</> Columns</title> + <title><structname>pg_attrdef</structname> Columns</title> <tgroup cols="4"> <thead> @@ -964,7 +964,7 @@ The <structfield>adsrc</structfield> field is historical, and is best not used, because it does not track outside changes that might affect the representation of the default value. Reverse-compiling the - <structfield>adbin</structfield> field (with <function>pg_get_expr</> for + <structfield>adbin</structfield> field (with <function>pg_get_expr</function> for example) is a better way to display the default value. 
</para> @@ -993,7 +993,7 @@ </para> <table> - <title><structname>pg_attribute</> Columns</title> + <title><structname>pg_attribute</structname> Columns</title> <tgroup cols="4"> <thead> @@ -1072,7 +1072,7 @@ <entry> Number of dimensions, if the column is an array type; otherwise 0. (Presently, the number of dimensions of an array is not enforced, - so any nonzero value effectively means <quote>it's an array</>.) + so any nonzero value effectively means <quote>it's an array</quote>.) </entry> </row> @@ -1096,7 +1096,7 @@ supplied at table creation time (for example, the maximum length of a <type>varchar</type> column). It is passed to type-specific input functions and length coercion functions. - The value will generally be -1 for types that do not need <structfield>atttypmod</>. + The value will generally be -1 for types that do not need <structfield>atttypmod</structfield>. </entry> </row> @@ -1105,7 +1105,7 @@ <entry><type>bool</type></entry> <entry></entry> <entry> - A copy of <literal>pg_type.typbyval</> of this column's type + A copy of <literal>pg_type.typbyval</literal> of this column's type </entry> </row> @@ -1114,7 +1114,7 @@ <entry><type>char</type></entry> <entry></entry> <entry> - Normally a copy of <literal>pg_type.typstorage</> of this + Normally a copy of <literal>pg_type.typstorage</literal> of this column's type. For TOAST-able data types, this can be altered after column creation to control storage policy. 
</entry> @@ -1125,7 +1125,7 @@ <entry><type>char</type></entry> <entry></entry> <entry> - A copy of <literal>pg_type.typalign</> of this column's type + A copy of <literal>pg_type.typalign</literal> of this column's type </entry> </row> @@ -1216,7 +1216,7 @@ <entry><type>text[]</type></entry> <entry></entry> <entry> - Attribute-level options, as <quote>keyword=value</> strings + Attribute-level options, as <quote>keyword=value</quote> strings </entry> </row> @@ -1225,7 +1225,7 @@ <entry><type>text[]</type></entry> <entry></entry> <entry> - Attribute-level foreign data wrapper options, as <quote>keyword=value</> strings + Attribute-level foreign data wrapper options, as <quote>keyword=value</quote> strings </entry> </row> @@ -1237,9 +1237,9 @@ In a dropped column's <structname>pg_attribute</structname> entry, <structfield>atttypid</structfield> is reset to zero, but <structfield>attlen</structfield> and the other fields copied from - <structname>pg_type</> are still valid. This arrangement is needed + <structname>pg_type</structname> are still valid. This arrangement is needed to cope with the situation where the dropped column's data type was - later dropped, and so there is no <structname>pg_type</> row anymore. + later dropped, and so there is no <structname>pg_type</structname> row anymore. <structfield>attlen</structfield> and the other fields can be used to interpret the contents of a row of the table. </para> @@ -1256,9 +1256,9 @@ <para> The catalog <structname>pg_authid</structname> contains information about database authorization identifiers (roles). A role subsumes the concepts - of <quote>users</> and <quote>groups</>. A user is essentially just a - role with the <structfield>rolcanlogin</> flag set. Any role (with or - without <structfield>rolcanlogin</>) can have other roles as members; see + of <quote>users</quote> and <quote>groups</quote>. A user is essentially just a + role with the <structfield>rolcanlogin</structfield> flag set. 
Any role (with or + without <structfield>rolcanlogin</structfield>) can have other roles as members; see <link linkend="catalog-pg-auth-members"><structname>pg_auth_members</structname></link>. </para> @@ -1283,7 +1283,7 @@ </para> <table> - <title><structname>pg_authid</> Columns</title> + <title><structname>pg_authid</structname> Columns</title> <tgroup cols="3"> <thead> @@ -1390,20 +1390,20 @@ <para> For an MD5 encrypted password, <structfield>rolpassword</structfield> - column will begin with the string <literal>md5</> followed by a + column will begin with the string <literal>md5</literal> followed by a 32-character hexadecimal MD5 hash. The MD5 hash will be of the user's password concatenated to their user name. For example, if user - <literal>joe</> has password <literal>xyzzy</>, <productname>PostgreSQL</> - will store the md5 hash of <literal>xyzzyjoe</>. + <literal>joe</literal> has password <literal>xyzzy</literal>, <productname>PostgreSQL</productname> + will store the md5 hash of <literal>xyzzyjoe</literal>. </para> <para> If the password is encrypted with SCRAM-SHA-256, it has the format: <synopsis> -SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt></>$<replaceable><StoredKey></>:<replaceable><ServerKey></> +SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable><salt></replaceable>$<replaceable><StoredKey></replaceable>:<replaceable><ServerKey></replaceable> </synopsis> - where <replaceable>salt</>, <replaceable>StoredKey</> and - <replaceable>ServerKey</> are in Base64 encoded format. This format is + where <replaceable>salt</replaceable>, <replaceable>StoredKey</replaceable> and + <replaceable>ServerKey</replaceable> are in Base64 encoded format. This format is the same as that specified by RFC 5803. 
</para> @@ -1435,7 +1435,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_auth_members</> Columns</title> + <title><structname>pg_auth_members</structname> Columns</title> <tgroup cols="4"> <thead> @@ -1459,7 +1459,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>member</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-authid"><structname>pg_authid</structname></link>.oid</literal></entry> - <entry>ID of a role that is a member of <structfield>roleid</></entry> + <entry>ID of a role that is a member of <structfield>roleid</structfield></entry> </row> <row> @@ -1473,8 +1473,8 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>admin_option</structfield></entry> <entry><type>bool</type></entry> <entry></entry> - <entry>True if <structfield>member</> can grant membership in - <structfield>roleid</> to others</entry> + <entry>True if <structfield>member</structfield> can grant membership in + <structfield>roleid</structfield> to others</entry> </row> </tbody> </tgroup> @@ -1501,14 +1501,14 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< cannot be deduced from some generic rule. For example, casting between a domain and its base type is not explicitly represented in <structname>pg_cast</structname>. Another important exception is that - <quote>automatic I/O conversion casts</>, those performed using a data - type's own I/O functions to convert to or from <type>text</> or other + <quote>automatic I/O conversion casts</quote>, those performed using a data + type's own I/O functions to convert to or from <type>text</type> or other string types, are not explicitly represented in <structname>pg_cast</structname>. 
</para> <table> - <title><structname>pg_cast</> Columns</title> + <title><structname>pg_cast</structname> Columns</title> <tgroup cols="4"> <thead> @@ -1558,11 +1558,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> Indicates what contexts the cast can be invoked in. - <literal>e</> means only as an explicit cast (using - <literal>CAST</> or <literal>::</> syntax). - <literal>a</> means implicitly in assignment + <literal>e</literal> means only as an explicit cast (using + <literal>CAST</literal> or <literal>::</literal> syntax). + <literal>a</literal> means implicitly in assignment to a target column, as well as explicitly. - <literal>i</> means implicitly in expressions, as well as the + <literal>i</literal> means implicitly in expressions, as well as the other cases. </entry> </row> @@ -1572,9 +1572,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> Indicates how the cast is performed. - <literal>f</> means that the function specified in the <structfield>castfunc</> field is used. - <literal>i</> means that the input/output functions are used. - <literal>b</> means that the types are binary-coercible, thus no conversion is required. + <literal>f</literal> means that the function specified in the <structfield>castfunc</structfield> field is used. + <literal>i</literal> means that the input/output functions are used. + <literal>b</literal> means that the types are binary-coercible, thus no conversion is required. </entry> </row> </tbody> @@ -1586,18 +1586,18 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< always take the cast source type as their first argument type, and return the cast destination type as their result type. A cast function can have up to three arguments. 
The second argument, - if present, must be type <type>integer</>; it receives the type + if present, must be type <type>integer</type>; it receives the type modifier associated with the destination type, or -1 if there is none. The third argument, - if present, must be type <type>boolean</>; it receives <literal>true</> - if the cast is an explicit cast, <literal>false</> otherwise. + if present, must be type <type>boolean</type>; it receives <literal>true</literal> + if the cast is an explicit cast, <literal>false</literal> otherwise. </para> <para> It is legitimate to create a <structname>pg_cast</structname> entry in which the source and target types are the same, if the associated function takes more than one argument. Such entries represent - <quote>length coercion functions</> that coerce values of the type + <quote>length coercion functions</quote> that coerce values of the type to be legal for a particular type modifier value. </para> @@ -1624,14 +1624,14 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< table. This includes indexes (but see also <structname>pg_index</structname>), sequences (but see also <structname>pg_sequence</structname>), views, materialized - views, composite types, and TOAST tables; see <structfield>relkind</>. + views, composite types, and TOAST tables; see <structfield>relkind</structfield>. Below, when we mean all of these kinds of objects we speak of <quote>relations</quote>. Not all columns are meaningful for all relation types. 
</para> <table> - <title><structname>pg_class</> Columns</title> + <title><structname>pg_class</structname> Columns</title> <tgroup cols="4"> <thead> @@ -1673,7 +1673,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><literal><link linkend="catalog-pg-type"><structname>pg_type</structname></link>.oid</literal></entry> <entry> The OID of the data type that corresponds to this table's row type, - if any (zero for indexes, which have no <structname>pg_type</> entry) + if any (zero for indexes, which have no <structname>pg_type</structname> entry) </entry> </row> @@ -1706,7 +1706,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>oid</type></entry> <entry></entry> <entry>Name of the on-disk file of this relation; zero means this - is a <quote>mapped</> relation whose disk file name is determined + is a <quote>mapped</quote> relation whose disk file name is determined by low-level state</entry> </row> @@ -1795,8 +1795,8 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry> - <literal>p</> = permanent table, <literal>u</> = unlogged table, - <literal>t</> = temporary table + <literal>p</literal> = permanent table, <literal>u</literal> = unlogged table, + <literal>t</literal> = temporary table </entry> </row> @@ -1805,15 +1805,15 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry> - <literal>r</> = ordinary table, - <literal>i</> = index, - <literal>S</> = sequence, - <literal>t</> = TOAST table, - <literal>v</> = view, - <literal>m</> = materialized view, - <literal>c</> = composite type, - <literal>f</> = foreign table, - <literal>p</> = partitioned table + <literal>r</literal> = ordinary table, + <literal>i</literal> = index, + <literal>S</literal> = sequence, + <literal>t</literal> = TOAST table, + <literal>v</literal> = view, + <literal>m</literal> = 
materialized view, + <literal>c</literal> = composite type, + <literal>f</literal> = foreign table, + <literal>p</literal> = partitioned table </entry> </row> @@ -1834,7 +1834,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>int2</type></entry> <entry></entry> <entry> - Number of <literal>CHECK</> constraints on the table; see + Number of <literal>CHECK</literal> constraints on the table; see <link linkend="catalog-pg-constraint"><structname>pg_constraint</structname></link> catalog </entry> </row> @@ -1917,11 +1917,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry> - Columns used to form <quote>replica identity</> for rows: - <literal>d</> = default (primary key, if any), - <literal>n</> = nothing, - <literal>f</> = all columns - <literal>i</> = index with <structfield>indisreplident</structfield> set, or default + Columns used to form <quote>replica identity</quote> for rows: + <literal>d</literal> = default (primary key, if any), + <literal>n</literal> = nothing, + <literal>f</literal> = all columns + <literal>i</literal> = index with <structfield>indisreplident</structfield> set, or default </entry> </row> @@ -1938,9 +1938,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> All transaction IDs before this one have been replaced with a permanent - (<quote>frozen</>) transaction ID in this table. This is used to track + (<quote>frozen</quote>) transaction ID in this table. This is used to track whether the table needs to be vacuumed in order to prevent transaction - ID wraparound or to allow <literal>pg_xact</> to be shrunk. Zero + ID wraparound or to allow <literal>pg_xact</literal> to be shrunk. Zero (<symbol>InvalidTransactionId</symbol>) if the relation is not a table. 
</entry> </row> @@ -1953,7 +1953,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< All multixact IDs before this one have been replaced by a transaction ID in this table. This is used to track whether the table needs to be vacuumed in order to prevent multixact ID - wraparound or to allow <literal>pg_multixact</> to be shrunk. Zero + wraparound or to allow <literal>pg_multixact</literal> to be shrunk. Zero (<symbol>InvalidMultiXactId</symbol>) if the relation is not a table. </entry> </row> @@ -1975,7 +1975,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>text[]</type></entry> <entry></entry> <entry> - Access-method-specific options, as <quote>keyword=value</> strings + Access-method-specific options, as <quote>keyword=value</quote> strings </entry> </row> @@ -1993,13 +1993,13 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </table> <para> - Several of the Boolean flags in <structname>pg_class</> are maintained + Several of the Boolean flags in <structname>pg_class</structname> are maintained lazily: they are guaranteed to be true if that's the correct state, but may not be reset to false immediately when the condition is no longer - true. For example, <structfield>relhasindex</> is set by + true. For example, <structfield>relhasindex</structfield> is set by <command>CREATE INDEX</command>, but it is never cleared by <command>DROP INDEX</command>. Instead, <command>VACUUM</command> clears - <structfield>relhasindex</> if it finds the table has no indexes. This + <structfield>relhasindex</structfield> if it finds the table has no indexes. This arrangement avoids race conditions and improves concurrency. 
</para> </sect1> @@ -2019,7 +2019,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_collation</> Columns</title> + <title><structname>pg_collation</structname> Columns</title> <tgroup cols="4"> <thead> @@ -2082,14 +2082,14 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>collcollate</structfield></entry> <entry><type>name</type></entry> <entry></entry> - <entry><symbol>LC_COLLATE</> for this collation object</entry> + <entry><symbol>LC_COLLATE</symbol> for this collation object</entry> </row> <row> <entry><structfield>collctype</structfield></entry> <entry><type>name</type></entry> <entry></entry> - <entry><symbol>LC_CTYPE</> for this collation object</entry> + <entry><symbol>LC_CTYPE</symbol> for this collation object</entry> </row> <row> @@ -2107,27 +2107,27 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </table> <para> - Note that the unique key on this catalog is (<structfield>collname</>, - <structfield>collencoding</>, <structfield>collnamespace</>) not just - (<structfield>collname</>, <structfield>collnamespace</>). + Note that the unique key on this catalog is (<structfield>collname</structfield>, + <structfield>collencoding</structfield>, <structfield>collnamespace</structfield>) not just + (<structfield>collname</structfield>, <structfield>collnamespace</structfield>). <productname>PostgreSQL</productname> generally ignores all - collations that do not have <structfield>collencoding</> equal to + collations that do not have <structfield>collencoding</structfield> equal to either the current database's encoding or -1, and creation of new entries - with the same name as an entry with <structfield>collencoding</> = -1 + with the same name as an entry with <structfield>collencoding</structfield> = -1 is forbidden. 
Therefore it is sufficient to use a qualified SQL name - (<replaceable>schema</>.<replaceable>name</>) to identify a collation, + (<replaceable>schema</replaceable>.<replaceable>name</replaceable>) to identify a collation, even though this is not unique according to the catalog definition. The reason for defining the catalog this way is that - <application>initdb</> fills it in at cluster initialization time with + <application>initdb</application> fills it in at cluster initialization time with entries for all locales available on the system, so it must be able to hold entries for all encodings that might ever be used in the cluster. </para> <para> - In the <literal>template0</> database, it could be useful to create + In the <literal>template0</literal> database, it could be useful to create collations whose encoding does not match the database encoding, since they could match the encodings of databases later cloned from - <literal>template0</>. This would currently have to be done manually. + <literal>template0</literal>. This would currently have to be done manually. </para> </sect1> @@ -2143,13 +2143,13 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< key, unique, foreign key, and exclusion constraints on tables. (Column constraints are not treated specially. Every column constraint is equivalent to some table constraint.) - Not-null constraints are represented in the <structname>pg_attribute</> + Not-null constraints are represented in the <structname>pg_attribute</structname> catalog, not here. </para> <para> User-defined constraint triggers (created with <command>CREATE CONSTRAINT - TRIGGER</>) also give rise to an entry in this table. + TRIGGER</command>) also give rise to an entry in this table. 
</para> <para> @@ -2157,7 +2157,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_constraint</> Columns</title> + <title><structname>pg_constraint</structname> Columns</title> <tgroup cols="4"> <thead> @@ -2198,12 +2198,12 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry> - <literal>c</> = check constraint, - <literal>f</> = foreign key constraint, - <literal>p</> = primary key constraint, - <literal>u</> = unique constraint, - <literal>t</> = constraint trigger, - <literal>x</> = exclusion constraint + <literal>c</literal> = check constraint, + <literal>f</literal> = foreign key constraint, + <literal>p</literal> = primary key constraint, + <literal>u</literal> = unique constraint, + <literal>t</literal> = constraint trigger, + <literal>x</literal> = exclusion constraint </entry> </row> @@ -2263,11 +2263,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry>Foreign key update action code: - <literal>a</> = no action, - <literal>r</> = restrict, - <literal>c</> = cascade, - <literal>n</> = set null, - <literal>d</> = set default + <literal>a</literal> = no action, + <literal>r</literal> = restrict, + <literal>c</literal> = cascade, + <literal>n</literal> = set null, + <literal>d</literal> = set default </entry> </row> @@ -2276,11 +2276,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry>Foreign key deletion action code: - <literal>a</> = no action, - <literal>r</> = restrict, - <literal>c</> = cascade, - <literal>n</> = set null, - <literal>d</> = set default + <literal>a</literal> = no action, + <literal>r</literal> = restrict, + <literal>c</literal> = cascade, + <literal>n</literal> = set null, + <literal>d</literal> = set default </entry> </row> @@ -2289,9 +2289,9 @@ 
SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry>Foreign key match type: - <literal>f</> = full, - <literal>p</> = partial, - <literal>s</> = simple + <literal>f</literal> = full, + <literal>p</literal> = partial, + <literal>s</literal> = simple </entry> </row> @@ -2329,7 +2329,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <row> <entry><structfield>conkey</structfield></entry> <entry><type>int2[]</type></entry> - <entry><literal><link linkend="catalog-pg-attribute"><structname>pg_attribute</structname></link>.attnum</></entry> + <entry><literal><link linkend="catalog-pg-attribute"><structname>pg_attribute</structname></link>.attnum</literal></entry> <entry>If a table constraint (including foreign keys, but not constraint triggers), list of the constrained columns</entry> </row> @@ -2337,35 +2337,35 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <row> <entry><structfield>confkey</structfield></entry> <entry><type>int2[]</type></entry> - <entry><literal><link linkend="catalog-pg-attribute"><structname>pg_attribute</structname></link>.attnum</></entry> + <entry><literal><link linkend="catalog-pg-attribute"><structname>pg_attribute</structname></link>.attnum</literal></entry> <entry>If a foreign key, list of the referenced columns</entry> </row> <row> <entry><structfield>conpfeqop</structfield></entry> <entry><type>oid[]</type></entry> - <entry><literal><link linkend="catalog-pg-operator"><structname>pg_operator</structname></link>.oid</></entry> + <entry><literal><link linkend="catalog-pg-operator"><structname>pg_operator</structname></link>.oid</literal></entry> <entry>If a foreign key, list of the equality operators for PK = FK comparisons</entry> </row> <row> <entry><structfield>conppeqop</structfield></entry> <entry><type>oid[]</type></entry> - <entry><literal><link 
linkend="catalog-pg-operator"><structname>pg_operator</structname></link>.oid</></entry> + <entry><literal><link linkend="catalog-pg-operator"><structname>pg_operator</structname></link>.oid</literal></entry> <entry>If a foreign key, list of the equality operators for PK = PK comparisons</entry> </row> <row> <entry><structfield>conffeqop</structfield></entry> <entry><type>oid[]</type></entry> - <entry><literal><link linkend="catalog-pg-operator"><structname>pg_operator</structname></link>.oid</></entry> + <entry><literal><link linkend="catalog-pg-operator"><structname>pg_operator</structname></link>.oid</literal></entry> <entry>If a foreign key, list of the equality operators for FK = FK comparisons</entry> </row> <row> <entry><structfield>conexclop</structfield></entry> <entry><type>oid[]</type></entry> - <entry><literal><link linkend="catalog-pg-operator"><structname>pg_operator</structname></link>.oid</></entry> + <entry><literal><link linkend="catalog-pg-operator"><structname>pg_operator</structname></link>.oid</literal></entry> <entry>If an exclusion constraint, list of the per-column exclusion operators</entry> </row> @@ -2392,7 +2392,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< For other cases, a zero appears in <structfield>conkey</structfield> and the associated index must be consulted to discover the expression that is constrained. (<structfield>conkey</structfield> thus has the - same contents as <structname>pg_index</>.<structfield>indkey</> for the + same contents as <structname>pg_index</structname>.<structfield>indkey</structfield> for the index.) </para> @@ -2400,7 +2400,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <para> <structfield>consrc</structfield> is not updated when referenced objects change; for example, it won't track renaming of columns. 
Rather than - relying on this field, it's best to use <function>pg_get_constraintdef()</> + relying on this field, it's best to use <function>pg_get_constraintdef()</function> to extract the definition of a check constraint. </para> </note> @@ -2429,7 +2429,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_conversion</> Columns</title> + <title><structname>pg_conversion</structname> Columns</title> <tgroup cols="4"> <thead> @@ -2529,7 +2529,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_database</> Columns</title> + <title><structname>pg_database</structname> Columns</title> <tgroup cols="4"> <thead> @@ -2592,7 +2592,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> If true, then this database can be cloned by - any user with <literal>CREATEDB</> privileges; + any user with <literal>CREATEDB</literal> privileges; if false, then only superusers or the owner of the database can clone it. </entry> @@ -2604,7 +2604,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> If false then no one can connect to this database. This is - used to protect the <literal>template0</> database from being altered. + used to protect the <literal>template0</literal> database from being altered. </entry> </row> @@ -2634,11 +2634,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> All transaction IDs before this one have been replaced with a permanent - (<quote>frozen</>) transaction ID in this database. This is used to + (<quote>frozen</quote>) transaction ID in this database. This is used to track whether the database needs to be vacuumed in order to prevent - transaction ID wraparound or to allow <literal>pg_xact</> to be shrunk. + transaction ID wraparound or to allow <literal>pg_xact</literal> to be shrunk. 
It is the minimum of the per-table - <structname>pg_class</>.<structfield>relfrozenxid</> values. + <structname>pg_class</structname>.<structfield>relfrozenxid</structfield> values. </entry> </row> @@ -2650,9 +2650,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< All multixact IDs before this one have been replaced with a transaction ID in this database. This is used to track whether the database needs to be vacuumed in order to prevent - multixact ID wraparound or to allow <literal>pg_multixact</> to be shrunk. + multixact ID wraparound or to allow <literal>pg_multixact</literal> to be shrunk. It is the minimum of the per-table - <structname>pg_class</>.<structfield>relminmxid</> values. + <structname>pg_class</structname>.<structfield>relminmxid</structfield> values. </entry> </row> @@ -2663,7 +2663,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry> The default tablespace for the database. Within this database, all tables for which - <structname>pg_class</>.<structfield>reltablespace</> is zero + <structname>pg_class</structname>.<structfield>reltablespace</structfield> is zero will be stored in this tablespace; in particular, all the non-shared system catalogs will be there. </entry> @@ -2707,7 +2707,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_db_role_setting</> Columns</title> + <title><structname>pg_db_role_setting</structname> Columns</title> <tgroup cols="4"> <thead> @@ -2754,12 +2754,12 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </indexterm> <para> - The catalog <structname>pg_default_acl</> stores initial + The catalog <structname>pg_default_acl</structname> stores initial privileges to be assigned to newly created objects. 
</para> <table> - <title><structname>pg_default_acl</> Columns</title> + <title><structname>pg_default_acl</structname> Columns</title> <tgroup cols="4"> <thead> @@ -2800,10 +2800,10 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> Type of object this entry is for: - <literal>r</> = relation (table, view), - <literal>S</> = sequence, - <literal>f</> = function, - <literal>T</> = type + <literal>r</literal> = relation (table, view), + <literal>S</literal> = sequence, + <literal>f</literal> = function, + <literal>T</literal> = type </entry> </row> @@ -2820,21 +2820,21 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </table> <para> - A <structname>pg_default_acl</> entry shows the initial privileges to + A <structname>pg_default_acl</structname> entry shows the initial privileges to be assigned to an object belonging to the indicated user. There are - currently two types of entry: <quote>global</> entries with - <structfield>defaclnamespace</> = 0, and <quote>per-schema</> entries + currently two types of entry: <quote>global</quote> entries with + <structfield>defaclnamespace</structfield> = 0, and <quote>per-schema</quote> entries that reference a particular schema. If a global entry is present then - it <emphasis>overrides</> the normal hard-wired default privileges + it <emphasis>overrides</emphasis> the normal hard-wired default privileges for the object type. A per-schema entry, if present, represents privileges - to be <emphasis>added to</> the global or hard-wired default privileges. + to be <emphasis>added to</emphasis> the global or hard-wired default privileges. </para> <para> Note that when an ACL entry in another catalog is null, it is taken to represent the hard-wired default privileges for its object, - <emphasis>not</> whatever might be in <structname>pg_default_acl</> - at the moment. 
<structname>pg_default_acl</> is only consulted during + <emphasis>not</emphasis> whatever might be in <structname>pg_default_acl</structname> + at the moment. <structname>pg_default_acl</structname> is only consulted during object creation. </para> @@ -2851,9 +2851,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <para> The catalog <structname>pg_depend</structname> records the dependency relationships between database objects. This information allows - <command>DROP</> commands to find which other objects must be dropped - by <command>DROP CASCADE</> or prevent dropping in the <command>DROP - RESTRICT</> case. + <command>DROP</command> commands to find which other objects must be dropped + by <command>DROP CASCADE</command> or prevent dropping in the <command>DROP + RESTRICT</command> case. </para> <para> @@ -2863,7 +2863,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_depend</> Columns</title> + <title><structname>pg_depend</structname> Columns</title> <tgroup cols="4"> <thead> @@ -2896,7 +2896,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> For a table column, this is the column number (the - <structfield>objid</> and <structfield>classid</> refer to the + <structfield>objid</structfield> and <structfield>classid</structfield> refer to the table itself). For all other object types, this column is zero. </entry> @@ -2922,7 +2922,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> For a table column, this is the column number (the - <structfield>refobjid</> and <structfield>refclassid</> refer + <structfield>refobjid</structfield> and <structfield>refclassid</structfield> refer to the table itself). For all other object types, this column is zero. 
</entry> @@ -2945,17 +2945,17 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< In all cases, a <structname>pg_depend</structname> entry indicates that the referenced object cannot be dropped without also dropping the dependent object. However, there are several subflavors identified by - <structfield>deptype</>: + <structfield>deptype</structfield>: <variablelist> <varlistentry> - <term><symbol>DEPENDENCY_NORMAL</> (<literal>n</>)</term> + <term><symbol>DEPENDENCY_NORMAL</symbol> (<literal>n</literal>)</term> <listitem> <para> A normal relationship between separately-created objects. The dependent object can be dropped without affecting the referenced object. The referenced object can only be dropped - by specifying <literal>CASCADE</>, in which case the dependent + by specifying <literal>CASCADE</literal>, in which case the dependent object is dropped, too. Example: a table column has a normal dependency on its data type. </para> @@ -2963,12 +2963,12 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </varlistentry> <varlistentry> - <term><symbol>DEPENDENCY_AUTO</> (<literal>a</>)</term> + <term><symbol>DEPENDENCY_AUTO</symbol> (<literal>a</literal>)</term> <listitem> <para> The dependent object can be dropped separately from the referenced object, and should be automatically dropped - (regardless of <literal>RESTRICT</> or <literal>CASCADE</> + (regardless of <literal>RESTRICT</literal> or <literal>CASCADE</literal> mode) if the referenced object is dropped. Example: a named constraint on a table is made autodependent on the table, so that it will go away if the table is dropped. 
@@ -2977,41 +2977,41 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </varlistentry> <varlistentry> - <term><symbol>DEPENDENCY_INTERNAL</> (<literal>i</>)</term> + <term><symbol>DEPENDENCY_INTERNAL</symbol> (<literal>i</literal>)</term> <listitem> <para> The dependent object was created as part of creation of the referenced object, and is really just a part of its internal - implementation. A <command>DROP</> of the dependent object + implementation. A <command>DROP</command> of the dependent object will be disallowed outright (we'll tell the user to issue a - <command>DROP</> against the referenced object, instead). A - <command>DROP</> of the referenced object will be propagated + <command>DROP</command> against the referenced object, instead). A + <command>DROP</command> of the referenced object will be propagated through to drop the dependent object whether - <command>CASCADE</> is specified or not. Example: a trigger + <command>CASCADE</command> is specified or not. Example: a trigger that's created to enforce a foreign-key constraint is made internally dependent on the constraint's - <structname>pg_constraint</> entry. + <structname>pg_constraint</structname> entry. </para> </listitem> </varlistentry> <varlistentry> - <term><symbol>DEPENDENCY_EXTENSION</> (<literal>e</>)</term> + <term><symbol>DEPENDENCY_EXTENSION</symbol> (<literal>e</literal>)</term> <listitem> <para> - The dependent object is a member of the <firstterm>extension</> that is + The dependent object is a member of the <firstterm>extension</firstterm> that is the referenced object (see <link linkend="catalog-pg-extension"><structname>pg_extension</structname></link>). The dependent object can be dropped only via - <command>DROP EXTENSION</> on the referenced object. Functionally + <command>DROP EXTENSION</command> on the referenced object. 
Functionally this dependency type acts the same as an internal dependency, but - it's kept separate for clarity and to simplify <application>pg_dump</>. + it's kept separate for clarity and to simplify <application>pg_dump</application>. </para> </listitem> </varlistentry> <varlistentry> - <term><symbol>DEPENDENCY_AUTO_EXTENSION</> (<literal>x</>)</term> + <term><symbol>DEPENDENCY_AUTO_EXTENSION</symbol> (<literal>x</literal>)</term> <listitem> <para> The dependent object is not a member of the extension that is the @@ -3024,7 +3024,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </varlistentry> <varlistentry> - <term><symbol>DEPENDENCY_PIN</> (<literal>p</>)</term> + <term><symbol>DEPENDENCY_PIN</symbol> (<literal>p</literal>)</term> <listitem> <para> There is no dependent object; this type of entry is a signal @@ -3051,7 +3051,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </indexterm> <para> - The catalog <structname>pg_description</> stores optional descriptions + The catalog <structname>pg_description</structname> stores optional descriptions (comments) for each database object. Descriptions can be manipulated with the <xref linkend="sql-comment"> command and viewed with <application>psql</application>'s <literal>\d</literal> commands. @@ -3066,7 +3066,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_description</> Columns</title> + <title><structname>pg_description</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3099,7 +3099,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> For a comment on a table column, this is the column number (the - <structfield>objoid</> and <structfield>classoid</> refer to + <structfield>objoid</structfield> and <structfield>classoid</structfield> refer to the table itself). For all other object types, this column is zero. 
</entry> @@ -3133,7 +3133,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_enum</> Columns</title> + <title><structname>pg_enum</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3157,7 +3157,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>enumtypid</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-type"><structname>pg_type</structname></link>.oid</literal></entry> - <entry>The OID of the <structname>pg_type</> entry owning this enum value</entry> + <entry>The OID of the <structname>pg_type</structname> entry owning this enum value</entry> </row> <row> @@ -3191,7 +3191,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <para> When an enum type is created, its members are assigned sort-order - positions 1..<replaceable>n</>. But members added later might be given + positions 1..<replaceable>n</replaceable>. But members added later might be given negative or fractional values of <structfield>enumsortorder</structfield>. The only requirement on these values is that they be correctly ordered and unique within each enum type. @@ -3212,7 +3212,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_event_trigger</> Columns</title> + <title><structname>pg_event_trigger</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3260,10 +3260,10 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry> Controls in which <xref linkend="guc-session-replication-role"> modes the event trigger fires. - <literal>O</> = trigger fires in <quote>origin</> and <quote>local</> modes, - <literal>D</> = trigger is disabled, - <literal>R</> = trigger fires in <quote>replica</> mode, - <literal>A</> = trigger fires always. 
+ <literal>O</literal> = trigger fires in <quote>origin</quote> and <quote>local</quote> modes, + <literal>D</literal> = trigger is disabled, + <literal>R</literal> = trigger fires in <quote>replica</quote> mode, + <literal>A</literal> = trigger fires always. </entry> </row> @@ -3296,7 +3296,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_extension</> Columns</title> + <title><structname>pg_extension</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3355,16 +3355,16 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>extconfig</structfield></entry> <entry><type>oid[]</type></entry> <entry><literal><link linkend="catalog-pg-class"><structname>pg_class</structname></link>.oid</literal></entry> - <entry>Array of <type>regclass</> OIDs for the extension's configuration - table(s), or <literal>NULL</> if none</entry> + <entry>Array of <type>regclass</type> OIDs for the extension's configuration + table(s), or <literal>NULL</literal> if none</entry> </row> <row> <entry><structfield>extcondition</structfield></entry> <entry><type>text[]</type></entry> <entry></entry> - <entry>Array of <literal>WHERE</>-clause filter conditions for the - extension's configuration table(s), or <literal>NULL</> if none</entry> + <entry>Array of <literal>WHERE</literal>-clause filter conditions for the + extension's configuration table(s), or <literal>NULL</literal> if none</entry> </row> </tbody> @@ -3372,7 +3372,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </table> <para> - Note that unlike most catalogs with a <quote>namespace</> column, + Note that unlike most catalogs with a <quote>namespace</quote> column, <structfield>extnamespace</structfield> is not meant to imply that the extension belongs to that schema. Extension names are never schema-qualified. 
Rather, <structfield>extnamespace</structfield> @@ -3399,7 +3399,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_foreign_data_wrapper</> Columns</title> + <title><structname>pg_foreign_data_wrapper</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3474,7 +3474,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>text[]</type></entry> <entry></entry> <entry> - Foreign-data wrapper specific options, as <quote>keyword=value</> strings + Foreign-data wrapper specific options, as <quote>keyword=value</quote> strings </entry> </row> </tbody> @@ -3498,7 +3498,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_foreign_server</> Columns</title> + <title><structname>pg_foreign_server</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3570,7 +3570,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>text[]</type></entry> <entry></entry> <entry> - Foreign server specific options, as <quote>keyword=value</> strings + Foreign server specific options, as <quote>keyword=value</quote> strings </entry> </row> </tbody> @@ -3596,7 +3596,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_foreign_table</> Columns</title> + <title><structname>pg_foreign_table</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3613,7 +3613,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>ftrelid</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-class"><structname>pg_class</structname></link>.oid</literal></entry> - <entry>OID of the <structname>pg_class</> entry for this foreign table</entry> + <entry>OID of the <structname>pg_class</structname> entry for this foreign table</entry> </row> <row> @@ -3628,7 +3628,7 @@ 
SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>text[]</type></entry> <entry></entry> <entry> - Foreign table options, as <quote>keyword=value</> strings + Foreign table options, as <quote>keyword=value</quote> strings </entry> </row> </tbody> @@ -3651,7 +3651,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_index</> Columns</title> + <title><structname>pg_index</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3668,14 +3668,14 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>indexrelid</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-class"><structname>pg_class</structname></link>.oid</literal></entry> - <entry>The OID of the <structname>pg_class</> entry for this index</entry> + <entry>The OID of the <structname>pg_class</structname> entry for this index</entry> </row> <row> <entry><structfield>indrelid</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-class"><structname>pg_class</structname></link>.oid</literal></entry> - <entry>The OID of the <structname>pg_class</> entry for the table this index is for</entry> + <entry>The OID of the <structname>pg_class</structname> entry for the table this index is for</entry> </row> <row> @@ -3698,7 +3698,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>bool</type></entry> <entry></entry> <entry>If true, this index represents the primary key of the table - (<structfield>indisunique</> should always be true when this is true)</entry> + (<structfield>indisunique</structfield> should always be true when this is true)</entry> </row> <row> @@ -3714,7 +3714,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry>If true, the uniqueness check is enforced immediately on insertion - (irrelevant if 
<structfield>indisunique</> is not true)</entry> + (irrelevant if <structfield>indisunique</structfield> is not true)</entry> </row> <row> @@ -3731,7 +3731,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry> If true, the index is currently valid for queries. False means the index is possibly incomplete: it must still be modified by - <command>INSERT</>/<command>UPDATE</> operations, but it cannot safely + <command>INSERT</command>/<command>UPDATE</command> operations, but it cannot safely be used for queries. If it is unique, the uniqueness property is not guaranteed true either. </entry> @@ -3742,8 +3742,8 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>bool</type></entry> <entry></entry> <entry> - If true, queries must not use the index until the <structfield>xmin</> - of this <structname>pg_index</> row is below their <symbol>TransactionXmin</symbol> + If true, queries must not use the index until the <structfield>xmin</structfield> + of this <structname>pg_index</structname> row is below their <symbol>TransactionXmin</symbol> event horizon, because the table may contain broken HOT chains with incompatible rows that they can see </entry> @@ -3755,7 +3755,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> If true, the index is currently ready for inserts. False means the - index must be ignored by <command>INSERT</>/<command>UPDATE</> + index must be ignored by <command>INSERT</command>/<command>UPDATE</command> operations. </entry> </row> @@ -3775,9 +3775,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>bool</type></entry> <entry></entry> <entry> - If true this index has been chosen as <quote>replica identity</> + If true this index has been chosen as <quote>replica identity</quote> using <command>ALTER TABLE ... 
REPLICA IDENTITY USING INDEX - ...</> + ...</command> </entry> </row> @@ -3836,7 +3836,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< Expression trees (in <function>nodeToString()</function> representation) for index attributes that are not simple column references. This is a list with one element for each zero - entry in <structfield>indkey</>. Null if all index attributes + entry in <structfield>indkey</structfield>. Null if all index attributes are simple references. </entry> </row> @@ -3866,14 +3866,14 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </indexterm> <para> - The catalog <structname>pg_inherits</> records information about + The catalog <structname>pg_inherits</structname> records information about table inheritance hierarchies. There is one entry for each direct parent-child table relationship in the database. (Indirect inheritance can be determined by following chains of entries.) </para> <table> - <title><structname>pg_inherits</> Columns</title> + <title><structname>pg_inherits</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3928,7 +3928,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </indexterm> <para> - The catalog <structname>pg_init_privs</> records information about + The catalog <structname>pg_init_privs</structname> records information about the initial privileges of objects in the system. There is one entry for each object in the database which has a non-default (non-NULL) initial set of privileges. 
@@ -3936,7 +3936,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <para> Objects can have initial privileges either by having those privileges set - when the system is initialized (by <application>initdb</>) or when the + when the system is initialized (by <application>initdb</application>) or when the object is created during a <command>CREATE EXTENSION</command> and the extension script sets initial privileges using the <command>GRANT</command> system. Note that the system will automatically handle recording of the @@ -3944,12 +3944,12 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< only use the <command>GRANT</command> and <command>REVOKE</command> statements in their script to have the privileges recorded. The <literal>privtype</literal> column indicates if the initial privilege was - set by <application>initdb</> or during a + set by <application>initdb</application> or during a <command>CREATE EXTENSION</command> command. </para> <para> - Objects which have initial privileges set by <application>initdb</> will + Objects which have initial privileges set by <application>initdb</application> will have entries where <literal>privtype</literal> is <literal>'i'</literal>, while objects which have initial privileges set by <command>CREATE EXTENSION</command> will have entries where @@ -3957,7 +3957,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_init_privs</> Columns</title> + <title><structname>pg_init_privs</structname> Columns</title> <tgroup cols="4"> <thead> @@ -3990,7 +3990,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> For a table column, this is the column number (the - <structfield>objoid</> and <structfield>classoid</> refer to the + <structfield>objoid</structfield> and <structfield>classoid</structfield> refer to the table itself). For all other object types, this column is zero. 
</entry> @@ -4039,7 +4039,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_language</> Columns</title> + <title><structname>pg_language</structname> Columns</title> <tgroup cols="4"> <thead> @@ -4116,7 +4116,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><literal><link linkend="catalog-pg-proc"><structname>pg_proc</structname></link>.oid</literal></entry> <entry> This references a function that is responsible for executing - <quote>inline</> anonymous code blocks + <quote>inline</quote> anonymous code blocks (<xref linkend="sql-do"> blocks). Zero if inline blocks are not supported. </entry> @@ -4162,24 +4162,24 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< The catalog <structname>pg_largeobject</structname> holds the data making up <quote>large objects</quote>. A large object is identified by an OID assigned when it is created. Each large object is broken into - segments or <quote>pages</> small enough to be conveniently stored as rows + segments or <quote>pages</quote> small enough to be conveniently stored as rows in <structname>pg_largeobject</structname>. - The amount of data per page is defined to be <symbol>LOBLKSIZE</> (which is currently - <literal>BLCKSZ/4</>, or typically 2 kB). + The amount of data per page is defined to be <symbol>LOBLKSIZE</symbol> (which is currently + <literal>BLCKSZ/4</literal>, or typically 2 kB). </para> <para> - Prior to <productname>PostgreSQL</> 9.0, there was no permission structure + Prior to <productname>PostgreSQL</productname> 9.0, there was no permission structure associated with large objects. As a result, <structname>pg_largeobject</structname> was publicly readable and could be used to obtain the OIDs (and contents) of all large objects in the system. 
This is no longer the case; use - <link linkend="catalog-pg-largeobject-metadata"><structname>pg_largeobject_metadata</></link> + <link linkend="catalog-pg-largeobject-metadata"><structname>pg_largeobject_metadata</structname></link> to obtain a list of large object OIDs. </para> <table> - <title><structname>pg_largeobject</> Columns</title> + <title><structname>pg_largeobject</structname> Columns</title> <tgroup cols="4"> <thead> @@ -4213,7 +4213,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> Actual data stored in the large object. - This will never be more than <symbol>LOBLKSIZE</> bytes and might be less. + This will never be more than <symbol>LOBLKSIZE</symbol> bytes and might be less. </entry> </row> </tbody> @@ -4223,9 +4223,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <para> Each row of <structname>pg_largeobject</structname> holds data for one page of a large object, beginning at - byte offset (<literal>pageno * LOBLKSIZE</>) within the object. The implementation + byte offset (<literal>pageno * LOBLKSIZE</literal>) within the object. The implementation allows sparse storage: pages might be missing, and might be shorter than - <literal>LOBLKSIZE</> bytes even if they are not the last page of the object. + <literal>LOBLKSIZE</literal> bytes even if they are not the last page of the object. Missing regions within a large object read as zeroes. </para> @@ -4242,11 +4242,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< The catalog <structname>pg_largeobject_metadata</structname> holds metadata associated with large objects. The actual large object data is stored in - <link linkend="catalog-pg-largeobject"><structname>pg_largeobject</></link>. + <link linkend="catalog-pg-largeobject"><structname>pg_largeobject</structname></link>. 
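The page-addressing rule documented in this hunk — each row holds one page, data begins at byte offset `pageno * LOBLKSIZE`, and missing or short pages read as zeroes — can be sketched outside the server. This is an illustrative model only, not backend code; the `LOBLKSIZE` value assumes the documented default of `BLCKSZ/4` = 2048 bytes, and the page dictionary stands in for `pg_largeobject` rows.

```python
# Illustrative sketch of pg_largeobject's sparse page addressing.
# Assumes the documented default LOBLKSIZE = BLCKSZ/4 = 2048 bytes.
LOBLKSIZE = 2048

def read_large_object(pages, length):
    """Reassemble a large object from sparse pages.

    `pages` maps pageno -> page data. Missing pages, and the tail of
    short pages, read as zero bytes, mirroring the sparse-storage rule.
    """
    out = bytearray(length)                  # pre-zeroed, so gaps stay zero
    for pageno, data in pages.items():
        offset = pageno * LOBLKSIZE          # documented byte offset
        chunk = data[:LOBLKSIZE]             # a page never exceeds LOBLKSIZE
        out[offset:offset + len(chunk)] = chunk
    return bytes(out)

# Page 0 is short, page 1 is missing entirely, page 2 is present.
lo = read_large_object({0: b"abc", 2: b"xyz"}, 3 * LOBLKSIZE)
```

Reading `lo` back shows the behavior the paragraph describes: bytes 3 through 4095 are zero even though no row stores them.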
</para> <table> - <title><structname>pg_largeobject_metadata</> Columns</title> + <title><structname>pg_largeobject_metadata</structname> Columns</title> <tgroup cols="4"> <thead> @@ -4299,14 +4299,14 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </indexterm> <para> - The catalog <structname>pg_namespace</> stores namespaces. + The catalog <structname>pg_namespace</structname> stores namespaces. A namespace is the structure underlying SQL schemas: each namespace can have a separate collection of relations, types, etc. without name conflicts. </para> <table> - <title><structname>pg_namespace</> Columns</title> + <title><structname>pg_namespace</structname> Columns</title> <tgroup cols="4"> <thead> @@ -4381,7 +4381,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_opclass</> Columns</title> + <title><structname>pg_opclass</structname> Columns</title> <tgroup cols="4"> <thead> @@ -4447,14 +4447,14 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>opcdefault</structfield></entry> <entry><type>bool</type></entry> <entry></entry> - <entry>True if this operator class is the default for <structfield>opcintype</></entry> + <entry>True if this operator class is the default for <structfield>opcintype</structfield></entry> </row> <row> <entry><structfield>opckeytype</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-type"><structname>pg_type</structname></link>.oid</literal></entry> - <entry>Type of data stored in index, or zero if same as <structfield>opcintype</></entry> + <entry>Type of data stored in index, or zero if same as <structfield>opcintype</structfield></entry> </row> </tbody> @@ -4462,11 +4462,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </table> <para> - An operator class's <structfield>opcmethod</> must match the - <structname>opfmethod</> of its containing 
operator family. + An operator class's <structfield>opcmethod</structfield> must match the + <structname>opfmethod</structname> of its containing operator family. Also, there must be no more than one <structname>pg_opclass</structname> - row having <structname>opcdefault</> true for any given combination of - <structname>opcmethod</> and <structname>opcintype</>. + row having <structname>opcdefault</structname> true for any given combination of + <structname>opcmethod</structname> and <structname>opcintype</structname>. </para> </sect1> @@ -4480,13 +4480,13 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </indexterm> <para> - The catalog <structname>pg_operator</> stores information about operators. + The catalog <structname>pg_operator</structname> stores information about operators. See <xref linkend="sql-createoperator"> and <xref linkend="xoper"> for more information. </para> <table> - <title><structname>pg_operator</> Columns</title> + <title><structname>pg_operator</structname> Columns</title> <tgroup cols="4"> <thead> @@ -4534,8 +4534,8 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry> - <literal>b</> = infix (<quote>both</quote>), <literal>l</> = prefix - (<quote>left</quote>), <literal>r</> = postfix (<quote>right</quote>) + <literal>b</literal> = infix (<quote>both</quote>), <literal>l</literal> = prefix + (<quote>left</quote>), <literal>r</literal> = postfix (<quote>right</quote>) </entry> </row> @@ -4632,7 +4632,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< Each operator family is a collection of operators and associated support routines that implement the semantics specified for a particular index access method. Furthermore, the operators in a family are all - <quote>compatible</>, in a way that is specified by the access method. + <quote>compatible</quote>, in a way that is specified by the access method. 
The operator family concept allows cross-data-type operators to be used with indexes and to be reasoned about using knowledge of access method semantics. @@ -4643,7 +4643,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_opfamily</> Columns</title> + <title><structname>pg_opfamily</structname> Columns</title> <tgroup cols="4"> <thead> @@ -4720,7 +4720,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_partitioned_table</> Columns</title> + <title><structname>pg_partitioned_table</structname> Columns</title> <tgroup cols="4"> <thead> @@ -4738,7 +4738,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>partrelid</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-class"><structname>pg_class</structname></link>.oid</literal></entry> - <entry>The OID of the <structname>pg_class</> entry for this partitioned table</entry> + <entry>The OID of the <structname>pg_class</structname> entry for this partitioned table</entry> </row> <row> @@ -4746,8 +4746,8 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry> - Partitioning strategy; <literal>l</> = list partitioned table, - <literal>r</> = range partitioned table + Partitioning strategy; <literal>l</literal> = list partitioned table, + <literal>r</literal> = range partitioned table </entry> </row> @@ -4763,7 +4763,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-class"><structname>pg_class</structname></link>.oid</literal></entry> <entry> - The OID of the <structname>pg_class</> entry for the default partition + The OID of the <structname>pg_class</structname> entry for the default partition of this partitioned table, or zero if this partitioned 
table does not have a default partition. </entry> @@ -4813,7 +4813,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< Expression trees (in <function>nodeToString()</function> representation) for partition key columns that are not simple column references. This is a list with one element for each zero - entry in <structfield>partattrs</>. Null if all partition key columns + entry in <structfield>partattrs</structfield>. Null if all partition key columns are simple references. </entry> </row> @@ -4833,9 +4833,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <para> The catalog <structname>pg_pltemplate</structname> stores - <quote>template</> information for procedural languages. + <quote>template</quote> information for procedural languages. A template for a language allows the language to be created in a - particular database by a simple <command>CREATE LANGUAGE</> command, + particular database by a simple <command>CREATE LANGUAGE</command> command, with no need to specify implementation details. </para> @@ -4848,7 +4848,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_pltemplate</> Columns</title> + <title><structname>pg_pltemplate</structname> Columns</title> <tgroup cols="3"> <thead> @@ -4921,7 +4921,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <note> <para> - It is likely that <structname>pg_pltemplate</> will be removed in some + It is likely that <structname>pg_pltemplate</structname> will be removed in some future release of <productname>PostgreSQL</productname>, in favor of keeping this knowledge about procedural languages in their respective extension installation scripts. 
@@ -4944,7 +4944,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< command that it applies to (possibly all commands), the roles that it applies to, the expression to be added as a security-barrier qualification to queries that include the table, and the expression - to be added as a <literal>WITH CHECK</> option for queries that attempt to + to be added as a <literal>WITH CHECK</literal> option for queries that attempt to add new records to the table. </para> @@ -4982,11 +4982,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry>The command type to which the policy is applied: - <literal>r</> for <command>SELECT</>, - <literal>a</> for <command>INSERT</>, - <literal>w</> for <command>UPDATE</>, - <literal>d</> for <command>DELETE</>, - or <literal>*</> for all</entry> + <literal>r</literal> for <command>SELECT</command>, + <literal>a</literal> for <command>INSERT</command>, + <literal>w</literal> for <command>UPDATE</command>, + <literal>d</literal> for <command>DELETE</command>, + or <literal>*</literal> for all</entry> </row> <row> @@ -5023,8 +5023,8 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <note> <para> - Policies stored in <structname>pg_policy</> are applied only when - <structname>pg_class</>.<structfield>relrowsecurity</> is set for + Policies stored in <structname>pg_policy</structname> are applied only when + <structname>pg_class</structname>.<structfield>relrowsecurity</structfield> is set for their table. </para> </note> @@ -5039,7 +5039,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </indexterm> <para> - The catalog <structname>pg_proc</> stores information about functions (or procedures). + The catalog <structname>pg_proc</structname> stores information about functions (or procedures). See <xref linkend="sql-createfunction"> and <xref linkend="xfunc"> for more information. 
</para> @@ -5051,7 +5051,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_proc</> Columns</title> + <title><structname>pg_proc</structname> Columns</title> <tgroup cols="4"> <thead> @@ -5106,7 +5106,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>float4</type></entry> <entry></entry> <entry>Estimated execution cost (in units of - <xref linkend="guc-cpu-operator-cost">); if <structfield>proretset</>, + <xref linkend="guc-cpu-operator-cost">); if <structfield>proretset</structfield>, this is cost per row returned</entry> </row> @@ -5114,7 +5114,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>prorows</structfield></entry> <entry><type>float4</type></entry> <entry></entry> - <entry>Estimated number of result rows (zero if not <structfield>proretset</>)</entry> + <entry>Estimated number of result rows (zero if not <structfield>proretset</structfield>)</entry> </row> <row> @@ -5151,7 +5151,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>prosecdef</structfield></entry> <entry><type>bool</type></entry> <entry></entry> - <entry>Function is a security definer (i.e., a <quote>setuid</> + <entry>Function is a security definer (i.e., a <quote>setuid</quote> function)</entry> </row> @@ -5195,11 +5195,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <structfield>provolatile</structfield> tells whether the function's result depends only on its input arguments, or is affected by outside factors. - It is <literal>i</literal> for <quote>immutable</> functions, + It is <literal>i</literal> for <quote>immutable</quote> functions, which always deliver the same result for the same inputs. 
- It is <literal>s</literal> for <quote>stable</> functions, + It is <literal>s</literal> for <quote>stable</quote> functions, whose results (for fixed inputs) do not change within a scan. - It is <literal>v</literal> for <quote>volatile</> functions, + It is <literal>v</literal> for <quote>volatile</quote> functions, whose results might change at any time. (Use <literal>v</literal> also for functions with side-effects, so that calls to them cannot get optimized away.) @@ -5251,7 +5251,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry> An array with the data types of the function arguments. This includes only input arguments (including <literal>INOUT</literal> and - <literal>VARIADIC</> arguments), and thus represents + <literal>VARIADIC</literal> arguments), and thus represents the call signature of the function. </entry> </row> @@ -5266,7 +5266,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <literal>INOUT</literal> arguments); however, if all the arguments are <literal>IN</literal> arguments, this field will be null. Note that subscripting is 1-based, whereas for historical reasons - <structfield>proargtypes</> is subscripted from 0. + <structfield>proargtypes</structfield> is subscripted from 0. </entry> </row> @@ -5276,15 +5276,15 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> An array with the modes of the function arguments, encoded as - <literal>i</literal> for <literal>IN</> arguments, - <literal>o</literal> for <literal>OUT</> arguments, - <literal>b</literal> for <literal>INOUT</> arguments, - <literal>v</literal> for <literal>VARIADIC</> arguments, - <literal>t</literal> for <literal>TABLE</> arguments. 
+ <literal>i</literal> for <literal>IN</literal> arguments, + <literal>o</literal> for <literal>OUT</literal> arguments, + <literal>b</literal> for <literal>INOUT</literal> arguments, + <literal>v</literal> for <literal>VARIADIC</literal> arguments, + <literal>t</literal> for <literal>TABLE</literal> arguments. If all the arguments are <literal>IN</literal> arguments, this field will be null. Note that subscripts correspond to positions of - <structfield>proallargtypes</> not <structfield>proargtypes</>. + <structfield>proallargtypes</structfield> not <structfield>proargtypes</structfield>. </entry> </row> @@ -5297,7 +5297,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< Arguments without a name are set to empty strings in the array. If none of the arguments have a name, this field will be null. Note that subscripts correspond to positions of - <structfield>proallargtypes</> not <structfield>proargtypes</>. + <structfield>proallargtypes</structfield> not <structfield>proargtypes</structfield>. </entry> </row> @@ -5308,9 +5308,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry> Expression trees (in <function>nodeToString()</function> representation) for default values. This is a list with - <structfield>pronargdefaults</> elements, corresponding to the last - <replaceable>N</> <emphasis>input</> arguments (i.e., the last - <replaceable>N</> <structfield>proargtypes</> positions). + <structfield>pronargdefaults</structfield> elements, corresponding to the last + <replaceable>N</replaceable> <emphasis>input</emphasis> arguments (i.e., the last + <replaceable>N</replaceable> <structfield>proargtypes</structfield> positions). If none of the arguments have defaults, this field will be null. 
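The relationship this hunk documents — `proargtypes` holds only the input arguments (`IN`, `INOUT`, `VARIADIC`) while `proallargtypes`/`proargmodes` cover every argument — can be sketched as a small derivation. This is an illustrative model, not server code; the function name and the 0-based Python lists (the SQL arrays differ: `proargtypes` is 0-based, `proallargtypes` 1-based) are assumptions for the sketch.

```python
# Illustrative sketch: derive the proargtypes-style call signature
# from proallargtypes + proargmodes. Per the documented modes,
# i = IN, o = OUT, b = INOUT, v = VARIADIC, t = TABLE; only
# i/b/v arguments are part of the call signature.
INPUT_MODES = {"i", "b", "v"}

def call_signature(proallargtypes, proargmodes):
    """Keep only the input arguments, in order."""
    return [t for t, m in zip(proallargtypes, proargmodes)
            if m in INPUT_MODES]

# A function declared (IN int4, OUT text, INOUT float8):
sig = call_signature(["int4", "text", "float8"], ["i", "o", "b"])
```

Here `sig` contains only `int4` and `float8` — the `OUT text` argument is not part of the call signature, which is why subscripts into `proargnames`/`proargmodes` do not line up with `proargtypes` positions.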
</entry> </row> @@ -5525,7 +5525,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_range</> Columns</title> + <title><structname>pg_range</structname> Columns</title> <tgroup cols="4"> <thead> @@ -5586,10 +5586,10 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </table> <para> - <structfield>rngsubopc</> (plus <structfield>rngcollation</>, if the + <structfield>rngsubopc</structfield> (plus <structfield>rngcollation</structfield>, if the element type is collatable) determines the sort ordering used by the range - type. <structfield>rngcanonical</> is used when the element type is - discrete. <structfield>rngsubdiff</> is optional but should be supplied to + type. <structfield>rngcanonical</structfield> is used when the element type is + discrete. <structfield>rngsubdiff</structfield> is optional but should be supplied to improve performance of GiST indexes on the range type. </para> @@ -5655,7 +5655,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_rewrite</> Columns</title> + <title><structname>pg_rewrite</structname> Columns</title> <tgroup cols="4"> <thead> @@ -5694,9 +5694,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>char</type></entry> <entry></entry> <entry> - Event type that the rule is for: 1 = <command>SELECT</>, 2 = - <command>UPDATE</>, 3 = <command>INSERT</>, 4 = - <command>DELETE</> + Event type that the rule is for: 1 = <command>SELECT</command>, 2 = + <command>UPDATE</command>, 3 = <command>INSERT</command>, 4 = + <command>DELETE</command> </entry> </row> @@ -5707,10 +5707,10 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry> Controls in which <xref linkend="guc-session-replication-role"> modes the rule fires. 
- <literal>O</> = rule fires in <quote>origin</> and <quote>local</> modes, - <literal>D</> = rule is disabled, - <literal>R</> = rule fires in <quote>replica</> mode, - <literal>A</> = rule fires always. + <literal>O</literal> = rule fires in <quote>origin</quote> and <quote>local</quote> modes, + <literal>D</literal> = rule is disabled, + <literal>R</literal> = rule fires in <quote>replica</quote> mode, + <literal>A</literal> = rule fires always. </entry> </row> @@ -5809,7 +5809,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> For a security label on a table column, this is the column number (the - <structfield>objoid</> and <structfield>classoid</> refer to + <structfield>objoid</structfield> and <structfield>classoid</structfield> refer to the table itself). For all other object types, this column is zero. </entry> @@ -5847,7 +5847,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_sequence</> Columns</title> + <title><structname>pg_sequence</structname> Columns</title> <tgroup cols="4"> <thead> @@ -5864,7 +5864,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><structfield>seqrelid</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-class"><structname>pg_class</structname></link>.oid</literal></entry> - <entry>The OID of the <structname>pg_class</> entry for this sequence</entry> + <entry>The OID of the <structname>pg_class</structname> entry for this sequence</entry> </row> <row> @@ -5949,7 +5949,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_shdepend</> Columns</title> + <title><structname>pg_shdepend</structname> Columns</title> <tgroup cols="4"> <thead> @@ -5990,7 +5990,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> For a table column, this is the column 
number (the - <structfield>objid</> and <structfield>classid</> refer to the + <structfield>objid</structfield> and <structfield>classid</structfield> refer to the table itself). For all other object types, this column is zero. </entry> </row> @@ -6027,11 +6027,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< In all cases, a <structname>pg_shdepend</structname> entry indicates that the referenced object cannot be dropped without also dropping the dependent object. However, there are several subflavors identified by - <structfield>deptype</>: + <structfield>deptype</structfield>: <variablelist> <varlistentry> - <term><symbol>SHARED_DEPENDENCY_OWNER</> (<literal>o</>)</term> + <term><symbol>SHARED_DEPENDENCY_OWNER</symbol> (<literal>o</literal>)</term> <listitem> <para> The referenced object (which must be a role) is the owner of the @@ -6041,20 +6041,20 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </varlistentry> <varlistentry> - <term><symbol>SHARED_DEPENDENCY_ACL</> (<literal>a</>)</term> + <term><symbol>SHARED_DEPENDENCY_ACL</symbol> (<literal>a</literal>)</term> <listitem> <para> The referenced object (which must be a role) is mentioned in the ACL (access control list, i.e., privileges list) of the - dependent object. (A <symbol>SHARED_DEPENDENCY_ACL</> entry is + dependent object. (A <symbol>SHARED_DEPENDENCY_ACL</symbol> entry is not made for the owner of the object, since the owner will have - a <symbol>SHARED_DEPENDENCY_OWNER</> entry anyway.) + a <symbol>SHARED_DEPENDENCY_OWNER</symbol> entry anyway.) 
</para> </listitem> </varlistentry> <varlistentry> - <term><symbol>SHARED_DEPENDENCY_POLICY</> (<literal>r</>)</term> + <term><symbol>SHARED_DEPENDENCY_POLICY</symbol> (<literal>r</literal>)</term> <listitem> <para> The referenced object (which must be a role) is mentioned as the @@ -6064,7 +6064,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </varlistentry> <varlistentry> - <term><symbol>SHARED_DEPENDENCY_PIN</> (<literal>p</>)</term> + <term><symbol>SHARED_DEPENDENCY_PIN</symbol> (<literal>p</literal>)</term> <listitem> <para> There is no dependent object; this type of entry is a signal @@ -6111,7 +6111,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_shdescription</> Columns</title> + <title><structname>pg_shdescription</structname> Columns</title> <tgroup cols="4"> <thead> @@ -6235,16 +6235,16 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <para> - Normally there is one entry, with <structfield>stainherit</> = - <literal>false</>, for each table column that has been analyzed. + Normally there is one entry, with <structfield>stainherit</structfield> = + <literal>false</literal>, for each table column that has been analyzed. If the table has inheritance children, a second entry with - <structfield>stainherit</> = <literal>true</> is also created. This row + <structfield>stainherit</structfield> = <literal>true</literal> is also created. 
This row represents the column's statistics over the inheritance tree, i.e., statistics for the data you'd see with - <literal>SELECT <replaceable>column</> FROM <replaceable>table</>*</literal>, - whereas the <structfield>stainherit</> = <literal>false</> row represents + <literal>SELECT <replaceable>column</replaceable> FROM <replaceable>table</replaceable>*</literal>, + whereas the <structfield>stainherit</structfield> = <literal>false</literal> row represents the results of - <literal>SELECT <replaceable>column</> FROM ONLY <replaceable>table</></literal>. + <literal>SELECT <replaceable>column</replaceable> FROM ONLY <replaceable>table</replaceable></literal>. </para> <para> @@ -6254,7 +6254,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< references the index. No entry is made for an ordinary non-expression index column, however, since it would be redundant with the entry for the underlying table column. Currently, entries for index expressions - always have <structfield>stainherit</> = <literal>false</>. + always have <structfield>stainherit</structfield> = <literal>false</literal>. </para> <para> @@ -6281,7 +6281,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_statistic</> Columns</title> + <title><structname>pg_statistic</structname> Columns</title> <tgroup cols="4"> <thead> @@ -6339,56 +6339,56 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< A value less than zero is the negative of a multiplier for the number of rows in the table; for example, a column in which about 80% of the values are nonnull and each nonnull value appears about twice on - average could be represented by <structfield>stadistinct</> = -0.4. + average could be represented by <structfield>stadistinct</structfield> = -0.4. A zero value means the number of distinct values is unknown. 
</entry> </row> <row> - <entry><structfield>stakind<replaceable>N</></structfield></entry> + <entry><structfield>stakind<replaceable>N</replaceable></structfield></entry> <entry><type>int2</type></entry> <entry></entry> <entry> A code number indicating the kind of statistics stored in the - <replaceable>N</>th <quote>slot</quote> of the + <replaceable>N</replaceable>th <quote>slot</quote> of the <structname>pg_statistic</structname> row. </entry> </row> <row> - <entry><structfield>staop<replaceable>N</></structfield></entry> + <entry><structfield>staop<replaceable>N</replaceable></structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-operator"><structname>pg_operator</structname></link>.oid</literal></entry> <entry> An operator used to derive the statistics stored in the - <replaceable>N</>th <quote>slot</quote>. For example, a + <replaceable>N</replaceable>th <quote>slot</quote>. For example, a histogram slot would show the <literal><</literal> operator that defines the sort order of the data. </entry> </row> <row> - <entry><structfield>stanumbers<replaceable>N</></structfield></entry> + <entry><structfield>stanumbers<replaceable>N</replaceable></structfield></entry> <entry><type>float4[]</type></entry> <entry></entry> <entry> Numerical statistics of the appropriate kind for the - <replaceable>N</>th <quote>slot</quote>, or null if the slot + <replaceable>N</replaceable>th <quote>slot</quote>, or null if the slot kind does not involve numerical values </entry> </row> <row> - <entry><structfield>stavalues<replaceable>N</></structfield></entry> + <entry><structfield>stavalues<replaceable>N</replaceable></structfield></entry> <entry><type>anyarray</type></entry> <entry></entry> <entry> Column data values of the appropriate kind for the - <replaceable>N</>th <quote>slot</quote>, or null if the slot + <replaceable>N</replaceable>th <quote>slot</quote>, or null if the slot kind does not store any data values. 
Each array's element values are actually of the specific column's data type, or a related type such as an array's element type, so there is no way to define - these columns' type more specifically than <type>anyarray</>. + these columns' type more specifically than <type>anyarray</type>. </entry> </row> </tbody> @@ -6407,12 +6407,12 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <para> The catalog <structname>pg_statistic_ext</structname> holds extended planner statistics. - Each row in this catalog corresponds to a <firstterm>statistics object</> + Each row in this catalog corresponds to a <firstterm>statistics object</firstterm> created with <xref linkend="sql-createstatistics">. </para> <table> - <title><structname>pg_statistic_ext</> Columns</title> + <title><structname>pg_statistic_ext</structname> Columns</title> <tgroup cols="4"> <thead> @@ -6485,7 +6485,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>pg_ndistinct</type></entry> <entry></entry> <entry> - N-distinct counts, serialized as <structname>pg_ndistinct</> type + N-distinct counts, serialized as <structname>pg_ndistinct</structname> type </entry> </row> @@ -6495,7 +6495,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> Functional dependency statistics, serialized - as <structname>pg_dependencies</> type + as <structname>pg_dependencies</structname> type </entry> </row> @@ -6507,7 +6507,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< The <structfield>stxkind</structfield> field is filled at creation of the statistics object, indicating which statistic type(s) are desired. The fields after it are initially NULL and are filled only when the - corresponding statistic has been computed by <command>ANALYZE</>. + corresponding statistic has been computed by <command>ANALYZE</command>. 
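The `stadistinct` encoding documented above (positive = absolute distinct count, negative = the negative of a per-row multiplier, zero = unknown) is easy to get backwards; a short sketch of the decoding may help. The function name is illustrative, not a PostgreSQL API.

```python
# Illustrative decoder for pg_statistic.stadistinct, per the
# documented encoding (not an actual PostgreSQL function).
def expected_distinct(stadistinct, rows):
    if stadistinct == 0:
        return None                 # number of distinct values unknown
    if stadistinct > 0:
        return stadistinct          # absolute distinct count
    return -stadistinct * rows      # negative of a row-count multiplier

# The documented example: ~80% of values nonnull, each nonnull value
# appearing about twice -> 0.8 * rows / 2 distinct values, i.e. -0.4.
d = expected_distinct(-0.4, 1000)
```

For a 1000-row table, `stadistinct = -0.4` therefore predicts about 400 distinct values; the negative form lets the estimate scale as the table grows.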
</para> </sect1> @@ -6677,10 +6677,10 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry></entry> <entry> State code: - <literal>i</> = initialize, - <literal>d</> = data is being copied, - <literal>s</> = synchronized, - <literal>r</> = ready (normal replication) + <literal>i</literal> = initialize, + <literal>d</literal> = data is being copied, + <literal>s</literal> = synchronized, + <literal>r</literal> = ready (normal replication) </entry> </row> @@ -6689,7 +6689,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>pg_lsn</type></entry> <entry></entry> <entry> - End LSN for <literal>s</> and <literal>r</> states. + End LSN for <literal>s</literal> and <literal>r</literal> states. </entry> </row> </tbody> @@ -6718,7 +6718,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_tablespace</> Columns</title> + <title><structname>pg_tablespace</structname> Columns</title> <tgroup cols="4"> <thead> @@ -6769,7 +6769,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry><type>text[]</type></entry> <entry></entry> <entry> - Tablespace-level options, as <quote>keyword=value</> strings + Tablespace-level options, as <quote>keyword=value</quote> strings </entry> </row> </tbody> @@ -6792,7 +6792,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_transform</> Columns</title> + <title><structname>pg_transform</structname> Columns</title> <tgroup cols="4"> <thead> @@ -6861,7 +6861,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< </para> <table> - <title><structname>pg_trigger</> Columns</title> + <title><structname>pg_trigger</structname> Columns</title> <tgroup cols="4"> <thead> @@ -6916,10 +6916,10 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt>< <entry> Controls in which <xref linkend="guc-session-replication-role"> 
 modes the trigger fires.
- <literal>O</> = trigger fires in <quote>origin</> and <quote>local</> modes,
- <literal>D</> = trigger is disabled,
- <literal>R</> = trigger fires in <quote>replica</> mode,
- <literal>A</> = trigger fires always.
+ <literal>O</literal> = trigger fires in <quote>origin</quote> and <quote>local</quote> modes,
+ <literal>D</literal> = trigger is disabled,
+ <literal>R</literal> = trigger fires in <quote>replica</quote> mode,
+ <literal>A</literal> = trigger fires always.
 </entry> </row>
@@ -6928,7 +6928,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><type>bool</type></entry> <entry></entry> <entry>True if trigger is internally generated (usually, to enforce
- the constraint identified by <structfield>tgconstraint</>)</entry>
+ the constraint identified by <structfield>tgconstraint</structfield>)</entry>
 </row> <row>
@@ -6950,7 +6950,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><structfield>tgconstraint</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-constraint"><structname>pg_constraint</structname></link>.oid</literal></entry>
- <entry>The <structname>pg_constraint</> entry associated with the trigger, if any</entry>
+ <entry>The <structname>pg_constraint</structname> entry associated with the trigger, if any</entry>
 </row> <row>
@@ -6994,7 +6994,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><type>pg_node_tree</type></entry> <entry></entry> <entry>Expression tree (in <function>nodeToString()</function>
- representation) for the trigger's <literal>WHEN</> condition, or null
+ representation) for the trigger's <literal>WHEN</literal> condition, or null
 if none</entry> </row>
@@ -7002,7 +7002,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><structfield>tgoldtable</structfield></entry> <entry><type>name</type></entry> <entry></entry>
- <entry><literal>REFERENCING</> clause name for <literal>OLD TABLE</>,
+ <entry><literal>REFERENCING</literal> clause name for <literal>OLD TABLE</literal>,
 or null if none</entry> </row>
@@ -7010,7 +7010,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><structfield>tgnewtable</structfield></entry> <entry><type>name</type></entry> <entry></entry>
- <entry><literal>REFERENCING</> clause name for <literal>NEW TABLE</>,
+ <entry><literal>REFERENCING</literal> clause name for <literal>NEW TABLE</literal>,
 or null if none</entry> </row> </tbody>
@@ -7019,18 +7019,18 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <para> Currently, column-specific triggering is supported only for
- <literal>UPDATE</> events, and so <structfield>tgattr</> is relevant
+ <literal>UPDATE</literal> events, and so <structfield>tgattr</structfield> is relevant
 only for that event type. <structfield>tgtype</structfield> might contain bits for other event types as well, but those are presumed
- to be table-wide regardless of what is in <structfield>tgattr</>.
+ to be table-wide regardless of what is in <structfield>tgattr</structfield>.
 </para> <note> <para>
- When <structfield>tgconstraint</> is nonzero,
- <structfield>tgconstrrelid</>, <structfield>tgconstrindid</>,
- <structfield>tgdeferrable</>, and <structfield>tginitdeferred</> are
- largely redundant with the referenced <structname>pg_constraint</> entry.
+ When <structfield>tgconstraint</structfield> is nonzero,
+ <structfield>tgconstrrelid</structfield>, <structfield>tgconstrindid</structfield>,
+ <structfield>tgdeferrable</structfield>, and <structfield>tginitdeferred</structfield> are
+ largely redundant with the referenced <structname>pg_constraint</structname> entry.
 However, it is possible for a non-deferrable trigger to be associated with a deferrable constraint: foreign key constraints can have some deferrable and some non-deferrable triggers.
@@ -7070,7 +7070,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_ts_config</> Columns</title>
+ <title><structname>pg_ts_config</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -7145,7 +7145,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_ts_config_map</> Columns</title>
+ <title><structname>pg_ts_config_map</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -7162,7 +7162,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><structfield>mapcfg</structfield></entry> <entry><type>oid</type></entry> <entry><literal><link linkend="catalog-pg-ts-config"><structname>pg_ts_config</structname></link>.oid</literal></entry>
- <entry>The OID of the <structname>pg_ts_config</> entry owning this map entry</entry>
+ <entry>The OID of the <structname>pg_ts_config</structname> entry owning this map entry</entry>
 </row> <row>
@@ -7177,7 +7177,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><type>integer</type></entry> <entry></entry> <entry>Order in which to consult this entry (lower
- <structfield>mapseqno</>s first)</entry>
+ <structfield>mapseqno</structfield>s first)</entry>
 </row> <row>
@@ -7206,7 +7206,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 needed; the dictionary itself provides values for the user-settable parameters supported by the template. This division of labor allows dictionaries to be created by unprivileged users. The parameters
- are specified by a text string <structfield>dictinitoption</>,
+ are specified by a text string <structfield>dictinitoption</structfield>,
 whose format and meaning vary depending on the template.
 </para>
@@ -7216,7 +7216,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_ts_dict</> Columns</title>
+ <title><structname>pg_ts_dict</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -7299,7 +7299,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_ts_parser</> Columns</title>
+ <title><structname>pg_ts_parser</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -7396,7 +7396,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_ts_template</> Columns</title>
+ <title><structname>pg_ts_template</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -7470,7 +7470,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_type</> Columns</title>
+ <title><structname>pg_type</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -7521,7 +7521,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 For a fixed-size type, <structfield>typlen</structfield> is the number of bytes in the internal representation of the type. But for a variable-length type, <structfield>typlen</structfield> is negative.
- -1 indicates a <quote>varlena</> type (one that has a length word),
+ -1 indicates a <quote>varlena</quote> type (one that has a length word),
 -2 indicates a null-terminated C string. </entry> </row>
@@ -7566,7 +7566,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry> <structfield>typcategory</structfield> is an arbitrary classification of data types that is used by the parser to determine which implicit
- casts should be <quote>preferred</>.
+ casts should be <quote>preferred</quote>.
 See <xref linkend="catalog-typcategory-table">.
 </entry> </row>
@@ -7711,7 +7711,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <structfield>typalign</structfield> is the alignment required when storing a value of this type. It applies to storage on disk as well as most representations of the value inside
- <productname>PostgreSQL</>.
+ <productname>PostgreSQL</productname>.
 When multiple values are stored consecutively, such as in the representation of a complete row on disk, padding is inserted before a datum of this type so that it begins on the
@@ -7723,16 +7723,16 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 Possible values are: <itemizedlist> <listitem>
- <para><literal>c</> = <type>char</type> alignment, i.e., no alignment needed.</para>
+ <para><literal>c</literal> = <type>char</type> alignment, i.e., no alignment needed.</para>
 </listitem> <listitem>
- <para><literal>s</> = <type>short</type> alignment (2 bytes on most machines).</para>
+ <para><literal>s</literal> = <type>short</type> alignment (2 bytes on most machines).</para>
 </listitem> <listitem>
- <para><literal>i</> = <type>int</type> alignment (4 bytes on most machines).</para>
+ <para><literal>i</literal> = <type>int</type> alignment (4 bytes on most machines).</para>
 </listitem> <listitem>
- <para><literal>d</> = <type>double</type> alignment (8 bytes on many machines, but by no means all).</para>
+ <para><literal>d</literal> = <type>double</type> alignment (8 bytes on many machines, but by no means all).</para>
 </listitem> </itemizedlist> </para><note>
@@ -7757,24 +7757,24 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 Possible values are <itemizedlist> <listitem>
- <para><literal>p</>: Value must always be stored plain.</para>
+ <para><literal>p</literal>: Value must always be stored plain.</para>
 </listitem> <listitem> <para>
- <literal>e</>: Value can be stored in a <quote>secondary</quote>
+ <literal>e</literal>: Value can be stored in a
 <quote>secondary</quote> relation (if relation has one, see <literal>pg_class.reltoastrelid</literal>). </para> </listitem> <listitem>
- <para><literal>m</>: Value can be stored compressed inline.</para>
+ <para><literal>m</literal>: Value can be stored compressed inline.</para>
 </listitem> <listitem>
- <para><literal>x</>: Value can be stored compressed inline or stored in <quote>secondary</quote> storage.</para>
+ <para><literal>x</literal>: Value can be stored compressed inline or stored in <quote>secondary</quote> storage.</para>
 </listitem> </itemizedlist>
- Note that <literal>m</> columns can also be moved out to secondary
- storage, but only as a last resort (<literal>e</> and <literal>x</> columns are
+ Note that <literal>m</literal> columns can also be moved out to secondary
+ storage, but only as a last resort (<literal>e</literal> and <literal>x</literal> columns are
 moved first). </para></entry> </row>
@@ -7805,9 +7805,9 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><type>int4</type></entry> <entry></entry> <entry><para>
- Domains use <structfield>typtypmod</structfield> to record the <literal>typmod</>
+ Domains use <structfield>typtypmod</structfield> to record the <literal>typmod</literal>
 to be applied to their base type (-1 if base type does not use a
- <literal>typmod</>). -1 if this type is not a domain.
+ <literal>typmod</literal>). -1 if this type is not a domain.
 </para></entry> </row>
@@ -7817,7 +7817,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry></entry> <entry><para> <structfield>typndims</structfield> is the number of array dimensions
- for a domain over an array (that is, <structfield>typbasetype</> is
+ for a domain over an array (that is, <structfield>typbasetype</structfield> is
 an array type). Zero for types other than domains over array types.
 </para></entry>
@@ -7842,7 +7842,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><type>pg_node_tree</type></entry> <entry></entry> <entry><para>
- If <structfield>typdefaultbin</> is not null, it is the
+ If <structfield>typdefaultbin</structfield> is not null, it is the
 <function>nodeToString()</function> representation of a default expression for the type. This is only used for domains.
@@ -7854,12 +7854,12 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><type>text</type></entry> <entry></entry> <entry><para>
- <structfield>typdefault</> is null if the type has no associated
- default value. If <structfield>typdefaultbin</> is not null,
- <structfield>typdefault</> must contain a human-readable version of the
- default expression represented by <structfield>typdefaultbin</>. If
- <structfield>typdefaultbin</> is null and <structfield>typdefault</> is
- not, then <structfield>typdefault</> is the external representation of
+ <structfield>typdefault</structfield> is null if the type has no associated
+ default value. If <structfield>typdefaultbin</structfield> is not null,
+ <structfield>typdefault</structfield> must contain a human-readable version of the
+ default expression represented by <structfield>typdefaultbin</structfield>. If
+ <structfield>typdefaultbin</structfield> is null and <structfield>typdefault</structfield> is
+ not, then <structfield>typdefault</structfield> is the external representation of
 the type's default value, which can be fed to the type's input converter to produce a constant. </para></entry>
@@ -7882,13 +7882,13 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <para> <xref linkend="catalog-typcategory-table"> lists the system-defined values
- of <structfield>typcategory</>. Any future additions to this list will
+ of <structfield>typcategory</structfield>. Any future additions to this list will
 also be upper-case ASCII letters.
 All other ASCII characters are reserved for user-defined categories. </para>
 <table id="catalog-typcategory-table">
- <title><structfield>typcategory</> Codes</title>
+ <title><structfield>typcategory</structfield> Codes</title>
 <tgroup cols="2"> <thead>
@@ -7957,7 +7957,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </row> <row> <entry><literal>X</literal></entry>
- <entry><type>unknown</> type</entry>
+ <entry><type>unknown</type> type</entry>
 </row> </tbody> </tgroup>
@@ -7982,7 +7982,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_user_mapping</> Columns</title>
+ <title><structname>pg_user_mapping</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -8023,7 +8023,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><type>text[]</type></entry> <entry></entry> <entry>
- User mapping specific options, as <quote>keyword=value</> strings
+ User mapping specific options, as <quote>keyword=value</quote> strings
 </entry> </row> </tbody>
@@ -8241,7 +8241,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_available_extensions</> Columns</title>
+ <title><structname>pg_available_extensions</structname> Columns</title>
 <tgroup cols="3"> <thead>
@@ -8303,7 +8303,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_available_extension_versions</> Columns</title>
+ <title><structname>pg_available_extension_versions</structname> Columns</title>
 <tgroup cols="3"> <thead>
@@ -8385,11 +8385,11 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <para> The view <structname>pg_config</structname> describes the compile-time configuration parameters of the currently installed
- version of <productname>PostgreSQL</>. It is intended, for example, to
+ version of <productname>PostgreSQL</productname>.
 It is intended, for example, to be used by software packages that want to interface to
- <productname>PostgreSQL</> to facilitate finding the required header
+ <productname>PostgreSQL</productname> to facilitate finding the required header
 files and libraries. It provides the same basic information as the
- <xref linkend="app-pgconfig"> <productname>PostgreSQL</> client
+ <xref linkend="app-pgconfig"> <productname>PostgreSQL</productname> client
 application. </para>
@@ -8399,7 +8399,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_config</> Columns</title>
+ <title><structname>pg_config</structname> Columns</title>
 <tgroup cols="3"> <thead> <row>
@@ -8470,15 +8470,15 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <note> <para> Cursors are used internally to implement some of the components
- of <productname>PostgreSQL</>, such as procedural languages.
- Therefore, the <structname>pg_cursors</> view might include cursors
+ of <productname>PostgreSQL</productname>, such as procedural languages.
+ Therefore, the <structname>pg_cursors</structname> view might include cursors
 that have not been explicitly created by the user.
 </para> </note> </para> <table>
- <title><structname>pg_cursors</> Columns</title>
+ <title><structname>pg_cursors</structname> Columns</title>
 <tgroup cols="3"> <thead>
@@ -8526,7 +8526,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><structfield>is_scrollable</structfield></entry> <entry><type>boolean</type></entry> <entry>
- <literal>true</> if the cursor is scrollable (that is, it
+ <literal>true</literal> if the cursor is scrollable (that is, it
 allows rows to be retrieved in a nonsequential manner); <literal>false</literal> otherwise </entry>
@@ -8557,16 +8557,16 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <para> The view <structname>pg_file_settings</structname> provides a summary of the contents of the server's configuration file(s). A row appears in
- this view for each <quote>name = value</> entry appearing in the files,
+ this view for each <quote>name = value</quote> entry appearing in the files,
 with annotations indicating whether the value could be applied successfully. Additional row(s) may appear for problems not linked to
- a <quote>name = value</> entry, such as syntax errors in the files.
+ a <quote>name = value</quote> entry, such as syntax errors in the files.
 </para> <para> This view is helpful for checking whether planned changes in the configuration files will work, or for diagnosing a previous failure.
- Note that this view reports on the <emphasis>current</> contents of the
+ Note that this view reports on the <emphasis>current</emphasis> contents of the
 files, not on what was last applied by the server. (The <link linkend="view-pg-settings"><structname>pg_settings</structname></link> view is usually sufficient to determine that.)
@@ -8578,7 +8578,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_file_settings</> Columns</title>
+ <title><structname>pg_file_settings</structname> Columns</title>
 <tgroup cols="3"> <thead>
@@ -8604,7 +8604,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <row> <entry><structfield>seqno</structfield></entry> <entry><structfield>integer</structfield></entry>
- <entry>Order in which the entries are processed (1..<replaceable>n</>)</entry>
+ <entry>Order in which the entries are processed (1..<replaceable>n</replaceable>)</entry>
 </row> <row> <entry><structfield>name</structfield></entry>
@@ -8634,14 +8634,14 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <para> If the configuration file contains syntax errors or invalid parameter names, the server will not attempt to apply any settings from it, and
- therefore all the <structfield>applied</> fields will read as false.
+ therefore all the <structfield>applied</structfield> fields will read as false.
 In such a case there will be one or more rows with non-null <structfield>error</structfield> fields indicating the problem(s). Otherwise, individual settings will be applied if possible. If an individual setting cannot be applied (e.g., invalid value, or the setting cannot be changed after server start) it will have an appropriate message in the <structfield>error</structfield> field. Another way that
- an entry might have <structfield>applied</> = false is that it is
+ an entry might have <structfield>applied</structfield> = false is that it is
 overridden by a later entry for the same parameter name; this case is not considered an error so nothing appears in the <structfield>error</structfield> field.
@@ -8666,12 +8666,12 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 compatibility: it emulates a catalog that existed in <productname>PostgreSQL</productname> before version 8.1.
 It shows the names and members of all roles that are marked as not
- <structfield>rolcanlogin</>, which is an approximation to the set
+ <structfield>rolcanlogin</structfield>, which is an approximation to the set
 of roles that are being used as groups. </para> <table>
- <title><structname>pg_group</> Columns</title>
+ <title><structname>pg_group</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -8720,7 +8720,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <para> The view <structname>pg_hba_file_rules</structname> provides a summary of the contents of the client authentication configuration
- file, <filename>pg_hba.conf</>. A row appears in this view for each
+ file, <filename>pg_hba.conf</filename>. A row appears in this view for each
 non-empty, non-comment line in the file, with annotations indicating whether the rule could be applied successfully. </para>
@@ -8728,7 +8728,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <para> This view can be helpful for checking whether planned changes in the authentication configuration file will work, or for diagnosing a previous
- failure. Note that this view reports on the <emphasis>current</> contents
+ failure. Note that this view reports on the <emphasis>current</emphasis> contents
 of the file, not on what was last loaded by the server.
 </para>
@@ -8738,7 +8738,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_hba_file_rules</> Columns</title>
+ <title><structname>pg_hba_file_rules</structname> Columns</title>
 <tgroup cols="3"> <thead>
@@ -8753,7 +8753,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry><structfield>line_number</structfield></entry> <entry><structfield>integer</structfield></entry> <entry>
- Line number of this rule in <filename>pg_hba.conf</>
+ Line number of this rule in <filename>pg_hba.conf</filename>
 </entry> </row> <row>
@@ -8809,7 +8809,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <para> Usually, a row reflecting an incorrect entry will have values for only
- the <structfield>line_number</> and <structfield>error</> fields.
+ the <structfield>line_number</structfield> and <structfield>error</structfield> fields.
 </para> <para>
@@ -8831,7 +8831,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 </para> <table>
- <title><structname>pg_indexes</> Columns</title>
+ <title><structname>pg_indexes</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -8912,12 +8912,12 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 in the same way as in <structname>pg_description</structname> or <structname>pg_depend</structname>). Also, the right to extend a relation is represented as a separate lockable object.
- Also, <quote>advisory</> locks can be taken on numbers that have
+ Also, <quote>advisory</quote> locks can be taken on numbers that have
 user-defined meanings.
 </para> <table>
- <title><structname>pg_locks</> Columns</title>
+ <title><structname>pg_locks</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -8935,15 +8935,15 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry></entry> <entry> Type of the lockable object:
- <literal>relation</>,
- <literal>extend</>,
- <literal>page</>,
- <literal>tuple</>,
- <literal>transactionid</>,
- <literal>virtualxid</>,
- <literal>object</>,
- <literal>userlock</>, or
- <literal>advisory</>
+ <literal>relation</literal>,
+ <literal>extend</literal>,
+ <literal>page</literal>,
+ <literal>tuple</literal>,
+ <literal>transactionid</literal>,
+ <literal>virtualxid</literal>,
+ <literal>object</literal>,
+ <literal>userlock</literal>, or
+ <literal>advisory</literal>
 </entry> </row> <row>
@@ -9025,7 +9025,7 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 <entry></entry> <entry> Column number targeted by the lock (the
- <structfield>classid</> and <structfield>objid</> refer to the
+ <structfield>classid</structfield> and <structfield>objid</structfield> refer to the
 table itself), or zero if the target is some other general database object, or null if the target is not a general database object
@@ -9107,23 +9107,23 @@ SCRAM-SHA-256$<replaceable><iteration count></>:<replaceable><salt><
 Advisory locks can be acquired on keys consisting of either a single <type>bigint</type> value or two integer values. A <type>bigint</type> key is displayed with its
- high-order half in the <structfield>classid</> column, its low-order half
- in the <structfield>objid</> column, and <structfield>objsubid</> equal
+ high-order half in the <structfield>classid</structfield> column, its low-order half
+ in the <structfield>objid</structfield> column, and <structfield>objsubid</structfield> equal
 to 1. The original <type>bigint</type> value can be reassembled with the expression <literal>(classid::bigint << 32) | objid::bigint</literal>.
 Integer keys are displayed with the first key in the
- <structfield>classid</> column, the second key in the <structfield>objid</>
- column, and <structfield>objsubid</> equal to 2. The actual meaning of
+ <structfield>classid</structfield> column, the second key in the <structfield>objid</structfield>
+ column, and <structfield>objsubid</structfield> equal to 2. The actual meaning of
 the keys is up to the user. Advisory locks are local to each database,
- so the <structfield>database</> column is meaningful for an advisory lock.
+ so the <structfield>database</structfield> column is meaningful for an advisory lock.
 </para> <para> <structname>pg_locks</structname> provides a global view of all locks in the database cluster, not only those relevant to the current database. Although its <structfield>relation</structfield> column can be joined
- against <structname>pg_class</>.<structfield>oid</> to identify locked
+ against <structname>pg_class</structname>.<structfield>oid</structfield> to identify locked
 relations, this will only work correctly for relations in the current database (those for which the <structfield>database</structfield> column is either the current database's OID or zero).
@@ -9141,7 +9141,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid;
 </programlisting> Also, if you are using prepared transactions, the
- <structfield>virtualtransaction</> column can be joined to the
+ <structfield>virtualtransaction</structfield> column can be joined to the
 <structfield>transaction</structfield> column of the <link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link> view to get more information on prepared transactions that hold locks.
@@ -9163,7 +9163,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 information about which processes are ahead of which others in lock wait queues, nor information about which processes are parallel workers running on behalf of which other client sessions. It is better to use
- the <function>pg_blocking_pids()</> function
+ the <function>pg_blocking_pids()</function> function
 (see <xref linkend="functions-info-session-table">) to identify which process(es) a waiting process is blocked behind. </para>
@@ -9172,10 +9172,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 The <structname>pg_locks</structname> view displays data from both the regular lock manager and the predicate lock manager, which are separate systems; in addition, the regular lock manager subdivides its
- locks into regular and <firstterm>fast-path</> locks.
+ locks into regular and <firstterm>fast-path</firstterm> locks.
 This data is not guaranteed to be entirely consistent. When the view is queried,
- data on fast-path locks (with <structfield>fastpath</> = <literal>true</>)
+ data on fast-path locks (with <structfield>fastpath</structfield> = <literal>true</literal>)
 is gathered from each backend one at a time, without freezing the state of the entire lock manager, so it is possible for locks to be taken or released while information is gathered.
 Note, however, that these locks are
@@ -9218,7 +9218,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 </para> <table>
- <title><structname>pg_matviews</> Columns</title>
+ <title><structname>pg_matviews</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -9291,7 +9291,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 </para> <table>
- <title><structname>pg_policies</> Columns</title>
+ <title><structname>pg_policies</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -9381,7 +9381,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 </para> <table>
- <title><structname>pg_prepared_statements</> Columns</title>
+ <title><structname>pg_prepared_statements</structname> Columns</title>
 <tgroup cols="3"> <thead>
@@ -9467,7 +9467,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 </para> <table>
- <title><structname>pg_prepared_xacts</> Columns</title>
+ <title><structname>pg_prepared_xacts</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -9706,7 +9706,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 <entry><structfield>slot_type</structfield></entry> <entry><type>text</type></entry> <entry></entry>
- <entry>The slot type - <literal>physical</> or <literal>logical</></entry>
+ <entry>The slot type - <literal>physical</literal> or <literal>logical</literal></entry>
 </row> <row>
@@ -9787,7 +9787,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 <entry></entry> <entry>The address (<literal>LSN</literal>) up to which the logical slot's consumer has confirmed receiving data. Data older than this is
- not available anymore. <literal>NULL</> for physical slots.
+ not available anymore. <literal>NULL</literal> for physical slots.
 </entry> </row>
@@ -9817,7 +9817,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 </para> <table>
- <title><structname>pg_roles</> Columns</title>
+ <title><structname>pg_roles</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -9900,7 +9900,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 <entry><structfield>rolpassword</structfield></entry> <entry><type>text</type></entry> <entry></entry>
- <entry>Not the password (always reads as <literal>********</>)</entry>
+ <entry>Not the password (always reads as <literal>********</literal>)</entry>
 </row> <row>
@@ -9953,7 +9953,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 </para> <table>
- <title><structname>pg_rules</> Columns</title>
+ <title><structname>pg_rules</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -9994,9 +9994,9 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 </table> <para>
- The <structname>pg_rules</> view excludes the <literal>ON SELECT</> rules
+ The <structname>pg_rules</structname> view excludes the <literal>ON SELECT</literal> rules
 of views and materialized views; those can be seen in
- <structname>pg_views</> and <structname>pg_matviews</>.
+ <structname>pg_views</structname> and <structname>pg_matviews</structname>.
 </para> </sect1>
@@ -10011,11 +10011,11 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 <para> The view <structname>pg_seclabels</structname> provides information about security labels. It is an easier-to-query version of the
- <link linkend="catalog-pg-seclabel"><structname>pg_seclabel</></> catalog.
+ <link linkend="catalog-pg-seclabel"><structname>pg_seclabel</structname></link> catalog.
 </para> <table>
- <title><structname>pg_seclabels</> Columns</title>
+ <title><structname>pg_seclabels</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -10045,7 +10045,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 <entry></entry> <entry> For a security label on a table column, this is the column number (the
- <structfield>objoid</> and <structfield>classoid</> refer to
+ <structfield>objoid</structfield> and <structfield>classoid</structfield> refer to
 the table itself). For all other object types, this column is zero. </entry>
@@ -10105,7 +10105,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 </para> <table>
- <title><structname>pg_sequences</> Columns</title>
+ <title><structname>pg_sequences</structname> Columns</title>
 <tgroup cols="4"> <thead>
@@ -10206,12 +10206,12 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
 interface to the <xref linkend="sql-show"> and <xref linkend="sql-set"> commands. It also provides access to some facts about each parameter that are
- not directly available from <command>SHOW</>, such as minimum and
+ not directly available from <command>SHOW</command>, such as minimum and
 maximum values.
</para> <table> - <title><structname>pg_settings</> Columns</title> + <title><structname>pg_settings</structname> Columns</title> <tgroup cols="3"> <thead> @@ -10260,8 +10260,8 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <row> <entry><structfield>vartype</structfield></entry> <entry><type>text</type></entry> - <entry>Parameter type (<literal>bool</>, <literal>enum</>, - <literal>integer</>, <literal>real</>, or <literal>string</>) + <entry>Parameter type (<literal>bool</literal>, <literal>enum</literal>, + <literal>integer</literal>, <literal>real</literal>, or <literal>string</literal>) </entry> </row> <row> @@ -10306,7 +10306,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx values set from sources other than configuration files, or when examined by a user who is neither a superuser nor a member of <literal>pg_read_all_settings</literal>); helpful when using - <literal>include</> directives in configuration files</entry> + <literal>include</literal> directives in configuration files</entry> </row> <row> <entry><structfield>sourceline</structfield></entry> @@ -10384,7 +10384,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx Changes to these settings can be made in <filename>postgresql.conf</filename> without restarting the server. They can also be set for a particular session in the connection request - packet (for example, via <application>libpq</>'s <literal>PGOPTIONS</> + packet (for example, via <application>libpq</application>'s <literal>PGOPTIONS</literal> environment variable), but only if the connecting user is a superuser. However, these settings never change in a session after it is started. If you change them in <filename>postgresql.conf</filename>, send a @@ -10402,7 +10402,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx Changes to these settings can be made in <filename>postgresql.conf</filename> without restarting the server. 
They can also be set for a particular session in the connection request - packet (for example, via <application>libpq</>'s <literal>PGOPTIONS</> + packet (for example, via <application>libpq</application>'s <literal>PGOPTIONS</literal> environment variable); any user can make such a change for their session. However, these settings never change in a session after it is started. If you change them in <filename>postgresql.conf</filename>, send a @@ -10418,10 +10418,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <listitem> <para> These settings can be set from <filename>postgresql.conf</filename>, - or within a session via the <command>SET</> command; but only superusers - can change them via <command>SET</>. Changes in + or within a session via the <command>SET</command> command; but only superusers + can change them via <command>SET</command>. Changes in <filename>postgresql.conf</filename> will affect existing sessions - only if no session-local value has been established with <command>SET</>. + only if no session-local value has been established with <command>SET</command>. </para> </listitem> </varlistentry> @@ -10431,10 +10431,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <listitem> <para> These settings can be set from <filename>postgresql.conf</filename>, - or within a session via the <command>SET</> command. Any user is + or within a session via the <command>SET</command> command. Any user is allowed to change their session-local value. Changes in <filename>postgresql.conf</filename> will affect existing sessions - only if no session-local value has been established with <command>SET</>. + only if no session-local value has been established with <command>SET</command>. </para> </listitem> </varlistentry> @@ -10473,7 +10473,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx compatibility: it emulates a catalog that existed in <productname>PostgreSQL</productname> before version 8.1. 
It shows properties of all roles that are marked as - <structfield>rolcanlogin</> in + <structfield>rolcanlogin</structfield> in <link linkend="catalog-pg-authid"><structname>pg_authid</structname></link>. </para> @@ -10486,7 +10486,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx </para> <table> - <title><structname>pg_shadow</> Columns</title> + <title><structname>pg_shadow</structname> Columns</title> <tgroup cols="4"> <thead> @@ -10600,7 +10600,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx </para> <table> - <title><structname>pg_stats</> Columns</title> + <title><structname>pg_stats</structname> Columns</title> <tgroup cols="4"> <thead> @@ -10663,7 +10663,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx If greater than zero, the estimated number of distinct values in the column. If less than zero, the negative of the number of distinct values divided by the number of rows. (The negated form is used when - <command>ANALYZE</> believes that the number of distinct values is + <command>ANALYZE</command> believes that the number of distinct values is likely to increase as the table grows; the positive form is used when the column seems to have a fixed number of possible values.) For example, -1 indicates a unique column in which the number of distinct @@ -10699,10 +10699,10 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <entry> A list of values that divide the column's values into groups of approximately equal population. The values in - <structfield>most_common_vals</>, if present, are omitted from this + <structfield>most_common_vals</structfield>, if present, are omitted from this histogram calculation. (This column is null if the column data type - does not have a <literal><</> operator or if the - <structfield>most_common_vals</> list accounts for the entire + does not have a <literal><</literal> operator or if the + <structfield>most_common_vals</structfield> list accounts for the entire population.) 
</entry> </row> @@ -10717,7 +10717,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx When the value is near -1 or +1, an index scan on the column will be estimated to be cheaper than when it is near zero, due to reduction of random access to the disk. (This column is null if the column data - type does not have a <literal><</> operator.) + type does not have a <literal><</literal> operator.) </entry> </row> @@ -10761,7 +10761,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <para> The maximum number of entries in the array fields can be controlled on a - column-by-column basis using the <command>ALTER TABLE SET STATISTICS</> + column-by-column basis using the <command>ALTER TABLE SET STATISTICS</command> command, or globally by setting the <xref linkend="guc-default-statistics-target"> run-time parameter. </para> @@ -10781,7 +10781,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx </para> <table> - <title><structname>pg_tables</> Columns</title> + <title><structname>pg_tables</structname> Columns</title> <tgroup cols="4"> <thead> @@ -10862,7 +10862,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx </para> <table> - <title><structname>pg_timezone_abbrevs</> Columns</title> + <title><structname>pg_timezone_abbrevs</structname> Columns</title> <tgroup cols="3"> <thead> @@ -10910,7 +10910,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <para> The view <structname>pg_timezone_names</structname> provides a list - of time zone names that are recognized by <command>SET TIMEZONE</>, + of time zone names that are recognized by <command>SET TIMEZONE</command>, along with their associated abbreviations, UTC offsets, and daylight-savings status. 
(Technically, <productname>PostgreSQL</productname> does not use UTC because leap @@ -10919,11 +10919,11 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx linkend="view-pg-timezone-abbrevs"><structname>pg_timezone_abbrevs</structname></link>, many of these names imply a set of daylight-savings transition date rules. Therefore, the associated information changes across local DST boundaries. The displayed information is computed based on the current - value of <function>CURRENT_TIMESTAMP</>. + value of <function>CURRENT_TIMESTAMP</function>. </para> <table> - <title><structname>pg_timezone_names</> Columns</title> + <title><structname>pg_timezone_names</structname> Columns</title> <tgroup cols="3"> <thead> @@ -10976,7 +10976,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx </para> <table> - <title><structname>pg_user</> Columns</title> + <title><structname>pg_user</structname> Columns</title> <tgroup cols="3"> <thead> @@ -11032,7 +11032,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <row> <entry><structfield>passwd</structfield></entry> <entry><type>text</type></entry> - <entry>Not the password (always reads as <literal>********</>)</entry> + <entry>Not the password (always reads as <literal>********</literal>)</entry> </row> <row> @@ -11069,7 +11069,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx </para> <table> - <title><structname>pg_user_mappings</> Columns</title> + <title><structname>pg_user_mappings</structname> Columns</title> <tgroup cols="4"> <thead> @@ -11126,7 +11126,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <entry><type>text[]</type></entry> <entry></entry> <entry> - User mapping specific options, as <quote>keyword=value</> strings + User mapping specific options, as <quote>keyword=value</quote> strings </entry> </row> </tbody> @@ -11141,12 +11141,12 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx <listitem> <para> current user is the user being mapped, and owns the 
server or - holds <literal>USAGE</> privilege on it + holds <literal>USAGE</literal> privilege on it </para> </listitem> <listitem> <para> - current user is the server owner and mapping is for <literal>PUBLIC</> + current user is the server owner and mapping is for <literal>PUBLIC</literal> </para> </listitem> <listitem> @@ -11173,7 +11173,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx </para> <table> - <title><structname>pg_views</> Columns</title> + <title><structname>pg_views</structname> Columns</title> <tgroup cols="4"> <thead> diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml index 63f7de5b438..3874a3f1ea8 100644 --- a/doc/src/sgml/charset.sgml +++ b/doc/src/sgml/charset.sgml @@ -35,12 +35,12 @@ <sect1 id="locale"> <title>Locale Support</title> - <indexterm zone="locale"><primary>locale</></> + <indexterm zone="locale"><primary>locale</primary></indexterm> <para> - <firstterm>Locale</> support refers to an application respecting + <firstterm>Locale</firstterm> support refers to an application respecting cultural preferences regarding alphabets, sorting, number - formatting, etc. <productname>PostgreSQL</> uses the standard ISO + formatting, etc. <productname>PostgreSQL</productname> uses the standard ISO C and <acronym>POSIX</acronym> locale facilities provided by the server operating system. For additional information refer to the documentation of your system. @@ -67,14 +67,14 @@ initdb --locale=sv_SE <para> This example for Unix systems sets the locale to Swedish - (<literal>sv</>) as spoken - in Sweden (<literal>SE</>). Other possibilities might include - <literal>en_US</> (U.S. English) and <literal>fr_CA</> (French + (<literal>sv</literal>) as spoken + in Sweden (<literal>SE</literal>). Other possibilities might include + <literal>en_US</literal> (U.S. English) and <literal>fr_CA</literal> (French Canadian). 
If more than one character set can be used for a locale then the specifications can take the form - <replaceable>language_territory.codeset</>. For example, - <literal>fr_BE.UTF-8</> represents the French language (fr) as - spoken in Belgium (BE), with a <acronym>UTF-8</> character set + <replaceable>language_territory.codeset</replaceable>. For example, + <literal>fr_BE.UTF-8</literal> represents the French language (fr) as + spoken in Belgium (BE), with a <acronym>UTF-8</acronym> character set encoding. </para> @@ -82,9 +82,9 @@ initdb --locale=sv_SE What locales are available on your system under what names depends on what was provided by the operating system vendor and what was installed. On most Unix systems, the command - <literal>locale -a</> will provide a list of available locales. - Windows uses more verbose locale names, such as <literal>German_Germany</> - or <literal>Swedish_Sweden.1252</>, but the principles are the same. + <literal>locale -a</literal> will provide a list of available locales. + Windows uses more verbose locale names, such as <literal>German_Germany</literal> + or <literal>Swedish_Sweden.1252</literal>, but the principles are the same. </para> <para> @@ -97,28 +97,28 @@ initdb --locale=sv_SE <tgroup cols="2"> <tbody> <row> - <entry><envar>LC_COLLATE</></> - <entry>String sort order</> + <entry><envar>LC_COLLATE</envar></entry> + <entry>String sort order</entry> </row> <row> - <entry><envar>LC_CTYPE</></> - <entry>Character classification (What is a letter? Its upper-case equivalent?)</> + <entry><envar>LC_CTYPE</envar></entry> + <entry>Character classification (What is a letter? 
Its upper-case equivalent?)</entry> </row> <row> - <entry><envar>LC_MESSAGES</></> - <entry>Language of messages</> + <entry><envar>LC_MESSAGES</envar></entry> + <entry>Language of messages</entry> </row> <row> - <entry><envar>LC_MONETARY</></> - <entry>Formatting of currency amounts</> + <entry><envar>LC_MONETARY</envar></entry> + <entry>Formatting of currency amounts</entry> </row> <row> - <entry><envar>LC_NUMERIC</></> - <entry>Formatting of numbers</> + <entry><envar>LC_NUMERIC</envar></entry> + <entry>Formatting of numbers</entry> </row> <row> - <entry><envar>LC_TIME</></> - <entry>Formatting of dates and times</> + <entry><envar>LC_TIME</envar></entry> + <entry>Formatting of dates and times</entry> </row> </tbody> </tgroup> @@ -133,8 +133,8 @@ initdb --locale=sv_SE <para> If you want the system to behave as if it had no locale support, - use the special locale name <literal>C</>, or equivalently - <literal>POSIX</>. + use the special locale name <literal>C</literal>, or equivalently + <literal>POSIX</literal>. </para> <para> @@ -192,14 +192,14 @@ initdb --locale=sv_SE settings for the purpose of setting the language of messages. If in doubt, please refer to the documentation of your operating system, in particular the documentation about - <application>gettext</>. + <application>gettext</application>. </para> </note> <para> To enable messages to be translated to the user's preferred language, <acronym>NLS</acronym> must have been selected at build time - (<literal>configure --enable-nls</>). All other locale support is + (<literal>configure --enable-nls</literal>). All other locale support is built in automatically. 
</para> </sect2> @@ -213,63 +213,63 @@ initdb --locale=sv_SE <itemizedlist> <listitem> <para> - Sort order in queries using <literal>ORDER BY</> or the standard + Sort order in queries using <literal>ORDER BY</literal> or the standard comparison operators on textual data - <indexterm><primary>ORDER BY</><secondary>and locales</></indexterm> + <indexterm><primary>ORDER BY</primary><secondary>and locales</secondary></indexterm> </para> </listitem> <listitem> <para> - The <function>upper</>, <function>lower</>, and <function>initcap</> + The <function>upper</function>, <function>lower</function>, and <function>initcap</function> functions - <indexterm><primary>upper</><secondary>and locales</></indexterm> - <indexterm><primary>lower</><secondary>and locales</></indexterm> + <indexterm><primary>upper</primary><secondary>and locales</secondary></indexterm> + <indexterm><primary>lower</primary><secondary>and locales</secondary></indexterm> </para> </listitem> <listitem> <para> - Pattern matching operators (<literal>LIKE</>, <literal>SIMILAR TO</>, + Pattern matching operators (<literal>LIKE</literal>, <literal>SIMILAR TO</literal>, and POSIX-style regular expressions); locales affect both case insensitive matching and the classification of characters by character-class regular expressions - <indexterm><primary>LIKE</><secondary>and locales</></indexterm> - <indexterm><primary>regular expressions</><secondary>and locales</></indexterm> + <indexterm><primary>LIKE</primary><secondary>and locales</secondary></indexterm> + <indexterm><primary>regular expressions</primary><secondary>and locales</secondary></indexterm> </para> </listitem> <listitem> <para> - The <function>to_char</> family of functions - <indexterm><primary>to_char</><secondary>and locales</></indexterm> + The <function>to_char</function> family of functions + <indexterm><primary>to_char</primary><secondary>and locales</secondary></indexterm> </para> </listitem> <listitem> <para> - The ability to use indexes 
with <literal>LIKE</> clauses + The ability to use indexes with <literal>LIKE</literal> clauses </para> </listitem> </itemizedlist> </para> <para> - The drawback of using locales other than <literal>C</> or - <literal>POSIX</> in <productname>PostgreSQL</> is its performance + The drawback of using locales other than <literal>C</literal> or + <literal>POSIX</literal> in <productname>PostgreSQL</productname> is its performance impact. It slows character handling and prevents ordinary indexes - from being used by <literal>LIKE</>. For this reason use locales + from being used by <literal>LIKE</literal>. For this reason use locales only if you actually need them. </para> <para> - As a workaround to allow <productname>PostgreSQL</> to use indexes - with <literal>LIKE</> clauses under a non-C locale, several custom + As a workaround to allow <productname>PostgreSQL</productname> to use indexes + with <literal>LIKE</literal> clauses under a non-C locale, several custom operator classes exist. These allow the creation of an index that performs a strict character-by-character comparison, ignoring locale comparison rules. Refer to <xref linkend="indexes-opclass"> for more information. Another approach is to create indexes using - the <literal>C</> collation, as discussed in + the <literal>C</literal> collation, as discussed in <xref linkend="collation">. </para> </sect2> @@ -286,20 +286,20 @@ initdb --locale=sv_SE </para> <para> - Check that <productname>PostgreSQL</> is actually using the locale - that you think it is. The <envar>LC_COLLATE</> and <envar>LC_CTYPE</> + Check that <productname>PostgreSQL</productname> is actually using the locale + that you think it is. The <envar>LC_COLLATE</envar> and <envar>LC_CTYPE</envar> settings are determined when a database is created, and cannot be changed except by creating a new database. 
Other locale - settings including <envar>LC_MESSAGES</> and <envar>LC_MONETARY</> + settings including <envar>LC_MESSAGES</envar> and <envar>LC_MONETARY</envar> are initially determined by the environment the server is started in, but can be changed on-the-fly. You can check the active locale - settings using the <command>SHOW</> command. + settings using the <command>SHOW</command> command. </para> <para> - The directory <filename>src/test/locale</> in the source + The directory <filename>src/test/locale</filename> in the source distribution contains a test suite for - <productname>PostgreSQL</>'s locale support. + <productname>PostgreSQL</productname>'s locale support. </para> <para> @@ -313,7 +313,7 @@ initdb --locale=sv_SE <para> Maintaining catalogs of message translations requires the on-going efforts of many volunteers that want to see - <productname>PostgreSQL</> speak their preferred language well. + <productname>PostgreSQL</productname> speak their preferred language well. If messages in your language are currently not available or not fully translated, your assistance would be appreciated. If you want to help, refer to <xref linkend="nls"> or write to the developers' @@ -326,7 +326,7 @@ initdb --locale=sv_SE <sect1 id="collation"> <title>Collation Support</title> - <indexterm zone="collation"><primary>collation</></> + <indexterm zone="collation"><primary>collation</primary></indexterm> <para> The collation feature allows specifying the sort order and character @@ -370,9 +370,9 @@ initdb --locale=sv_SE function or operator call is derived from the arguments, as described below. In addition to comparison operators, collations are taken into account by functions that convert between lower and upper case - letters, such as <function>lower</>, <function>upper</>, and - <function>initcap</>; by pattern matching operators; and by - <function>to_char</> and related functions. 
+ letters, such as <function>lower</function>, <function>upper</function>, and + <function>initcap</function>; by pattern matching operators; and by + <function>to_char</function> and related functions. </para> <para> @@ -452,7 +452,7 @@ SELECT a < ('foo' COLLATE "fr_FR") FROM test1; SELECT a < b FROM test1; </programlisting> the parser cannot determine which collation to apply, since the - <structfield>a</> and <structfield>b</> columns have conflicting + <structfield>a</structfield> and <structfield>b</structfield> columns have conflicting implicit collations. Since the <literal><</literal> operator does need to know which collation to use, this will result in an error. The error can be resolved by attaching an explicit collation @@ -468,7 +468,7 @@ SELECT a COLLATE "de_DE" < b FROM test1; <programlisting> SELECT a || b FROM test1; </programlisting> - does not result in an error, because the <literal>||</> operator + does not result in an error, because the <literal>||</literal> operator does not care about collations: its result is the same regardless of the collation. </para> @@ -486,8 +486,8 @@ SELECT * FROM test1 ORDER BY a || 'foo'; <programlisting> SELECT * FROM test1 ORDER BY a || b; </programlisting> - results in an error, because even though the <literal>||</> operator - doesn't need to know a collation, the <literal>ORDER BY</> clause does. + results in an error, because even though the <literal>||</literal> operator + doesn't need to know a collation, the <literal>ORDER BY</literal> clause does. As before, the conflict can be resolved with an explicit collation specifier: <programlisting> @@ -508,7 +508,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; operating system C library. These are the locales that most tools provided by the operating system use. Another provider is <literal>icu</literal>, which uses the external - ICU<indexterm><primary>ICU</></> library. 
ICU locales can only be + ICU<indexterm><primary>ICU</primary></indexterm> library. ICU locales can only be used if support for ICU was configured when PostgreSQL was built. </para> @@ -541,14 +541,14 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; <title>Standard Collations</title> <para> - On all platforms, the collations named <literal>default</>, - <literal>C</>, and <literal>POSIX</> are available. Additional + On all platforms, the collations named <literal>default</literal>, + <literal>C</literal>, and <literal>POSIX</literal> are available. Additional collations may be available depending on operating system support. - The <literal>default</> collation selects the <symbol>LC_COLLATE</symbol> + The <literal>default</literal> collation selects the <symbol>LC_COLLATE</symbol> and <symbol>LC_CTYPE</symbol> values specified at database creation time. - The <literal>C</> and <literal>POSIX</> collations both specify - <quote>traditional C</> behavior, in which only the ASCII letters - <quote><literal>A</></quote> through <quote><literal>Z</></quote> + The <literal>C</literal> and <literal>POSIX</literal> collations both specify + <quote>traditional C</quote> behavior, in which only the ASCII letters + <quote><literal>A</literal></quote> through <quote><literal>Z</literal></quote> are treated as letters, and sorting is done strictly by character code byte values. 
</para> @@ -565,7 +565,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; <para> If the operating system provides support for using multiple locales - within a single program (<function>newlocale</> and related functions), + within a single program (<function>newlocale</function> and related functions), or if support for ICU is configured, then when a database cluster is initialized, <command>initdb</command> populates the system catalog <literal>pg_collation</literal> with @@ -618,8 +618,8 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; within a given database even though it would not be unique globally. Use of the stripped collation names is recommended, since it will make one less thing you need to change if you decide to change to - another database encoding. Note however that the <literal>default</>, - <literal>C</>, and <literal>POSIX</> collations can be used regardless of + another database encoding. Note however that the <literal>default</literal>, + <literal>C</literal>, and <literal>POSIX</literal> collations can be used regardless of the database encoding. </para> @@ -630,7 +630,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR"; <programlisting> SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; </programlisting> - will draw an error even though the <literal>C</> and <literal>POSIX</> + will draw an error even though the <literal>C</literal> and <literal>POSIX</literal> collations have identical behaviors. Mixing stripped and non-stripped collation names is therefore not recommended. </para> @@ -691,7 +691,7 @@ SELECT a COLLATE "C" < b COLLATE "POSIX" FROM test1; database encoding is one of these, ICU collation entries in <literal>pg_collation</literal> are ignored. Attempting to use one will draw an error along the lines of <quote>collation "de-x-icu" for - encoding "WIN874" does not exist</>. + encoding "WIN874" does not exist</quote>. 
</para> </sect4> </sect3> @@ -889,30 +889,30 @@ CREATE COLLATION french FROM "fr-x-icu"; <sect1 id="multibyte"> <title>Character Set Support</title> - <indexterm zone="multibyte"><primary>character set</></> + <indexterm zone="multibyte"><primary>character set</primary></indexterm> <para> The character set support in <productname>PostgreSQL</productname> allows you to store text in a variety of character sets (also called encodings), including single-byte character sets such as the ISO 8859 series and - multiple-byte character sets such as <acronym>EUC</> (Extended Unix + multiple-byte character sets such as <acronym>EUC</acronym> (Extended Unix Code), UTF-8, and Mule internal code. All supported character sets can be used transparently by clients, but a few are not supported for use within the server (that is, as a server-side encoding). The default character set is selected while initializing your <productname>PostgreSQL</productname> database - cluster using <command>initdb</>. It can be overridden when you + cluster using <command>initdb</command>. It can be overridden when you create a database, so you can have multiple databases each with a different character set. </para> <para> An important restriction, however, is that each database's character set - must be compatible with the database's <envar>LC_CTYPE</> (character - classification) and <envar>LC_COLLATE</> (string sort order) locale - settings. For <literal>C</> or - <literal>POSIX</> locale, any character set is allowed, but for other + must be compatible with the database's <envar>LC_CTYPE</envar> (character + classification) and <envar>LC_COLLATE</envar> (string sort order) locale + settings. For <literal>C</literal> or + <literal>POSIX</literal> locale, any character set is allowed, but for other libc-provided locales there is only one character set that will work correctly. (On Windows, however, UTF-8 encoding can be used with any locale.) 
@@ -954,7 +954,7 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>No</entry> <entry>No</entry> <entry>1-2</entry> - <entry><literal>WIN950</>, <literal>Windows950</></entry> + <entry><literal>WIN950</literal>, <literal>Windows950</literal></entry> </row> <row> <entry><literal>EUC_CN</literal></entry> @@ -1017,11 +1017,11 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>No</entry> <entry>No</entry> <entry>1-2</entry> - <entry><literal>WIN936</>, <literal>Windows936</></entry> + <entry><literal>WIN936</literal>, <literal>Windows936</literal></entry> </row> <row> <entry><literal>ISO_8859_5</literal></entry> - <entry>ISO 8859-5, <acronym>ECMA</> 113</entry> + <entry>ISO 8859-5, <acronym>ECMA</acronym> 113</entry> <entry>Latin/Cyrillic</entry> <entry>Yes</entry> <entry>Yes</entry> @@ -1030,7 +1030,7 @@ CREATE COLLATION french FROM "fr-x-icu"; </row> <row> <entry><literal>ISO_8859_6</literal></entry> - <entry>ISO 8859-6, <acronym>ECMA</> 114</entry> + <entry>ISO 8859-6, <acronym>ECMA</acronym> 114</entry> <entry>Latin/Arabic</entry> <entry>Yes</entry> <entry>Yes</entry> @@ -1039,7 +1039,7 @@ CREATE COLLATION french FROM "fr-x-icu"; </row> <row> <entry><literal>ISO_8859_7</literal></entry> - <entry>ISO 8859-7, <acronym>ECMA</> 118</entry> + <entry>ISO 8859-7, <acronym>ECMA</acronym> 118</entry> <entry>Latin/Greek</entry> <entry>Yes</entry> <entry>Yes</entry> @@ -1048,7 +1048,7 @@ CREATE COLLATION french FROM "fr-x-icu"; </row> <row> <entry><literal>ISO_8859_8</literal></entry> - <entry>ISO 8859-8, <acronym>ECMA</> 121</entry> + <entry>ISO 8859-8, <acronym>ECMA</acronym> 121</entry> <entry>Latin/Hebrew</entry> <entry>Yes</entry> <entry>Yes</entry> @@ -1057,7 +1057,7 @@ CREATE COLLATION french FROM "fr-x-icu"; </row> <row> <entry><literal>JOHAB</literal></entry> - <entry><acronym>JOHAB</></entry> + <entry><acronym>JOHAB</acronym></entry> <entry>Korean (Hangul)</entry> <entry>No</entry> <entry>No</entry> @@ -1071,7 +1071,7 @@ CREATE COLLATION french FROM "fr-x-icu"; 
<entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>KOI8</></entry> + <entry><literal>KOI8</literal></entry> </row> <row> <entry><literal>KOI8U</literal></entry> @@ -1084,57 +1084,57 @@ CREATE COLLATION french FROM "fr-x-icu"; </row> <row> <entry><literal>LATIN1</literal></entry> - <entry>ISO 8859-1, <acronym>ECMA</> 94</entry> + <entry>ISO 8859-1, <acronym>ECMA</acronym> 94</entry> <entry>Western European</entry> <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ISO88591</></entry> + <entry><literal>ISO88591</literal></entry> </row> <row> <entry><literal>LATIN2</literal></entry> - <entry>ISO 8859-2, <acronym>ECMA</> 94</entry> + <entry>ISO 8859-2, <acronym>ECMA</acronym> 94</entry> <entry>Central European</entry> <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ISO88592</></entry> + <entry><literal>ISO88592</literal></entry> </row> <row> <entry><literal>LATIN3</literal></entry> - <entry>ISO 8859-3, <acronym>ECMA</> 94</entry> + <entry>ISO 8859-3, <acronym>ECMA</acronym> 94</entry> <entry>South European</entry> <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ISO88593</></entry> + <entry><literal>ISO88593</literal></entry> </row> <row> <entry><literal>LATIN4</literal></entry> - <entry>ISO 8859-4, <acronym>ECMA</> 94</entry> + <entry>ISO 8859-4, <acronym>ECMA</acronym> 94</entry> <entry>North European</entry> <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ISO88594</></entry> + <entry><literal>ISO88594</literal></entry> </row> <row> <entry><literal>LATIN5</literal></entry> - <entry>ISO 8859-9, <acronym>ECMA</> 128</entry> + <entry>ISO 8859-9, <acronym>ECMA</acronym> 128</entry> <entry>Turkish</entry> <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ISO88599</></entry> + <entry><literal>ISO88599</literal></entry> </row> <row> <entry><literal>LATIN6</literal></entry> - <entry>ISO 8859-10, <acronym>ECMA</> 144</entry> + <entry>ISO 
8859-10, <acronym>ECMA</acronym> 144</entry> <entry>Nordic</entry> <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ISO885910</></entry> + <entry><literal>ISO885910</literal></entry> </row> <row> <entry><literal>LATIN7</literal></entry> @@ -1143,7 +1143,7 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ISO885913</></entry> + <entry><literal>ISO885913</literal></entry> </row> <row> <entry><literal>LATIN8</literal></entry> @@ -1152,7 +1152,7 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ISO885914</></entry> + <entry><literal>ISO885914</literal></entry> </row> <row> <entry><literal>LATIN9</literal></entry> @@ -1161,16 +1161,16 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ISO885915</></entry> + <entry><literal>ISO885915</literal></entry> </row> <row> <entry><literal>LATIN10</literal></entry> - <entry>ISO 8859-16, <acronym>ASRO</> SR 14111</entry> + <entry>ISO 8859-16, <acronym>ASRO</acronym> SR 14111</entry> <entry>Romanian</entry> <entry>Yes</entry> <entry>No</entry> <entry>1</entry> - <entry><literal>ISO885916</></entry> + <entry><literal>ISO885916</literal></entry> </row> <row> <entry><literal>MULE_INTERNAL</literal></entry> @@ -1188,7 +1188,7 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>No</entry> <entry>No</entry> <entry>1-2</entry> - <entry><literal>Mskanji</>, <literal>ShiftJIS</>, <literal>WIN932</>, <literal>Windows932</></entry> + <entry><literal>Mskanji</literal>, <literal>ShiftJIS</literal>, <literal>WIN932</literal>, <literal>Windows932</literal></entry> </row> <row> <entry><literal>SHIFT_JIS_2004</literal></entry> @@ -1202,7 +1202,7 @@ CREATE COLLATION french FROM "fr-x-icu"; <row> <entry><literal>SQL_ASCII</literal></entry> <entry>unspecified (see text)</entry> - <entry><emphasis>any</></entry> + 
<entry><emphasis>any</emphasis></entry> <entry>Yes</entry> <entry>No</entry> <entry>1</entry> @@ -1215,16 +1215,16 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>No</entry> <entry>No</entry> <entry>1-2</entry> - <entry><literal>WIN949</>, <literal>Windows949</></entry> + <entry><literal>WIN949</literal>, <literal>Windows949</literal></entry> </row> <row> <entry><literal>UTF8</literal></entry> <entry>Unicode, 8-bit</entry> - <entry><emphasis>all</></entry> + <entry><emphasis>all</emphasis></entry> <entry>Yes</entry> <entry>Yes</entry> <entry>1-4</entry> - <entry><literal>Unicode</></entry> + <entry><literal>Unicode</literal></entry> </row> <row> <entry><literal>WIN866</literal></entry> @@ -1233,7 +1233,7 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ALT</></entry> + <entry><literal>ALT</literal></entry> </row> <row> <entry><literal>WIN874</literal></entry> @@ -1260,7 +1260,7 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>WIN</></entry> + <entry><literal>WIN</literal></entry> </row> <row> <entry><literal>WIN1252</literal></entry> @@ -1323,30 +1323,30 @@ CREATE COLLATION french FROM "fr-x-icu"; <entry>Yes</entry> <entry>Yes</entry> <entry>1</entry> - <entry><literal>ABC</>, <literal>TCVN</>, <literal>TCVN5712</>, <literal>VSCII</></entry> + <entry><literal>ABC</literal>, <literal>TCVN</literal>, <literal>TCVN5712</literal>, <literal>VSCII</literal></entry> </row> </tbody> </tgroup> </table> <para> - Not all client <acronym>API</>s support all the listed character sets. For example, the - <productname>PostgreSQL</> - JDBC driver does not support <literal>MULE_INTERNAL</>, <literal>LATIN6</>, - <literal>LATIN8</>, and <literal>LATIN10</>. + Not all client <acronym>API</acronym>s support all the listed character sets. 
For example, the + <productname>PostgreSQL</productname> + JDBC driver does not support <literal>MULE_INTERNAL</literal>, <literal>LATIN6</literal>, + <literal>LATIN8</literal>, and <literal>LATIN10</literal>. </para> <para> - The <literal>SQL_ASCII</> setting behaves considerably differently + The <literal>SQL_ASCII</literal> setting behaves considerably differently from the other settings. When the server character set is - <literal>SQL_ASCII</>, the server interprets byte values 0-127 + <literal>SQL_ASCII</literal>, the server interprets byte values 0-127 according to the ASCII standard, while byte values 128-255 are taken as uninterpreted characters. No encoding conversion will be done when - the setting is <literal>SQL_ASCII</>. Thus, this setting is not so + the setting is <literal>SQL_ASCII</literal>. Thus, this setting is not so much a declaration that a specific encoding is in use, as a declaration of ignorance about the encoding. In most cases, if you are working with any non-ASCII data, it is unwise to use the - <literal>SQL_ASCII</> setting because + <literal>SQL_ASCII</literal> setting because <productname>PostgreSQL</productname> will be unable to help you by converting or validating non-ASCII characters. </para> @@ -1356,7 +1356,7 @@ CREATE COLLATION french FROM "fr-x-icu"; <title>Setting the Character Set</title> <para> - <command>initdb</> defines the default character set (encoding) + <command>initdb</command> defines the default character set (encoding) for a <productname>PostgreSQL</productname> cluster. For example, <screen> @@ -1367,8 +1367,8 @@ initdb -E EUC_JP <literal>EUC_JP</literal> (Extended Unix Code for Japanese). You can use <option>--encoding</option> instead of <option>-E</option> if you prefer longer option strings. 
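As an aside, not part of the patch: the SQL_ASCII behavior described in the hunk above (bytes 0-127 read per ASCII, bytes 128-255 left uninterpreted) is why the docs call it "a declaration of ignorance about the encoding". A short Python sketch, with illustrative byte values and codec names, shows how the same high byte means different characters depending on which encoding a client assumes:

```python
# Illustrative sketch only: the same byte >= 128 decodes differently
# under two of the single-byte encodings listed in the table above.
raw = bytes([0xE9])                  # uninterpreted under SQL_ASCII

as_latin1 = raw.decode("latin-1")    # LATIN1 reads 0xE9 as 'é'
as_koi8r = raw.decode("koi8_r")      # KOI8 reads 0xE9 as 'И'
assert as_latin1 != as_koi8r         # no conversion = silent mismatch

# Bytes 0-127 are safe: these encodings all agree with ASCII there.
assert b"abc".decode("latin-1") == b"abc".decode("koi8_r") == "abc"
```

This is the mismatch the documentation warns about: with SQL_ASCII the server performs no conversion or validation, so clients assuming different encodings can store and read back conflicting interpretations of the same bytes.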
- If no <option>-E</> or <option>--encoding</option> option is - given, <command>initdb</> attempts to determine the appropriate + If no <option>-E</option> or <option>--encoding</option> option is + given, <command>initdb</command> attempts to determine the appropriate encoding to use based on the specified or default locale. </para> @@ -1388,7 +1388,7 @@ createdb -E EUC_KR -T template0 --lc-collate=ko_KR.euckr --lc-ctype=ko_KR.euckr CREATE DATABASE korean WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE='ko_KR.euckr' TEMPLATE=template0; </programlisting> - Notice that the above commands specify copying the <literal>template0</> + Notice that the above commands specify copying the <literal>template0</literal> database. When copying any other database, the encoding and locale settings cannot be changed from those of the source database, because that might result in corrupt data. For more information see @@ -1420,7 +1420,7 @@ $ <userinput>psql -l</userinput> <important> <para> On most modern operating systems, <productname>PostgreSQL</productname> - can determine which character set is implied by the <envar>LC_CTYPE</> + can determine which character set is implied by the <envar>LC_CTYPE</envar> setting, and it will enforce that only the matching database encoding is used. On older systems it is your responsibility to ensure that you use the encoding expected by the locale you have selected. A mistake in @@ -1430,9 +1430,9 @@ $ <userinput>psql -l</userinput> <para> <productname>PostgreSQL</productname> will allow superusers to create - databases with <literal>SQL_ASCII</> encoding even when - <envar>LC_CTYPE</> is not <literal>C</> or <literal>POSIX</>. As noted - above, <literal>SQL_ASCII</> does not enforce that the data stored in + databases with <literal>SQL_ASCII</literal> encoding even when + <envar>LC_CTYPE</envar> is not <literal>C</literal> or <literal>POSIX</literal>. 
As noted + above, <literal>SQL_ASCII</literal> does not enforce that the data stored in the database has any particular encoding, and so this choice poses risks of locale-dependent misbehavior. Using this combination of settings is deprecated and may someday be forbidden altogether. @@ -1447,7 +1447,7 @@ $ <userinput>psql -l</userinput> <productname>PostgreSQL</productname> supports automatic character set conversion between server and client for certain character set combinations. The conversion information is stored in the - <literal>pg_conversion</> system catalog. <productname>PostgreSQL</> + <literal>pg_conversion</literal> system catalog. <productname>PostgreSQL</productname> comes with some predefined conversions, as shown in <xref linkend="multibyte-translation-table">. You can create a new conversion using the SQL command <command>CREATE CONVERSION</command>. @@ -1763,7 +1763,7 @@ $ <userinput>psql -l</userinput> <listitem> <para> - <application>libpq</> (<xref linkend="libpq-control">) has functions to control the client encoding. + <application>libpq</application> (<xref linkend="libpq-control">) has functions to control the client encoding. </para> </listitem> @@ -1774,14 +1774,14 @@ $ <userinput>psql -l</userinput> Setting the client encoding can be done with this SQL command: <programlisting> -SET CLIENT_ENCODING TO '<replaceable>value</>'; +SET CLIENT_ENCODING TO '<replaceable>value</replaceable>'; </programlisting> Also you can use the standard SQL syntax <literal>SET NAMES</literal> for this purpose: <programlisting> -SET NAMES '<replaceable>value</>'; +SET NAMES '<replaceable>value</replaceable>'; </programlisting> To query the current client encoding: @@ -1813,7 +1813,7 @@ RESET client_encoding; <para> Using the configuration variable <xref linkend="guc-client-encoding">. 
If the - <varname>client_encoding</> variable is set, that client + <varname>client_encoding</varname> variable is set, that client encoding is automatically selected when a connection to the server is made. (This can subsequently be overridden using any of the other methods mentioned above.) @@ -1832,9 +1832,9 @@ RESET client_encoding; </para> <para> - If the client character set is defined as <literal>SQL_ASCII</>, + If the client character set is defined as <literal>SQL_ASCII</literal>, encoding conversion is disabled, regardless of the server's character - set. Just as for the server, use of <literal>SQL_ASCII</> is unwise + set. Just as for the server, use of <literal>SQL_ASCII</literal> is unwise unless you are working with all-ASCII data. </para> </sect2> diff --git a/doc/src/sgml/citext.sgml b/doc/src/sgml/citext.sgml index 9b4c68f7d47..82251de8529 100644 --- a/doc/src/sgml/citext.sgml +++ b/doc/src/sgml/citext.sgml @@ -8,10 +8,10 @@ </indexterm> <para> - The <filename>citext</> module provides a case-insensitive - character string type, <type>citext</>. Essentially, it internally calls - <function>lower</> when comparing values. Otherwise, it behaves almost - exactly like <type>text</>. + The <filename>citext</filename> module provides a case-insensitive + character string type, <type>citext</type>. Essentially, it internally calls + <function>lower</function> when comparing values. Otherwise, it behaves almost + exactly like <type>text</type>. 
</para> <sect2> @@ -19,7 +19,7 @@ <para> The standard approach to doing case-insensitive matches - in <productname>PostgreSQL</> has been to use the <function>lower</> + in <productname>PostgreSQL</productname> has been to use the <function>lower</function> function when comparing values, for example <programlisting> @@ -35,19 +35,19 @@ SELECT * FROM tab WHERE lower(col) = LOWER(?); <listitem> <para> It makes your SQL statements verbose, and you always have to remember to - use <function>lower</> on both the column and the query value. + use <function>lower</function> on both the column and the query value. </para> </listitem> <listitem> <para> It won't use an index, unless you create a functional index using - <function>lower</>. + <function>lower</function>. </para> </listitem> <listitem> <para> - If you declare a column as <literal>UNIQUE</> or <literal>PRIMARY - KEY</>, the implicitly generated index is case-sensitive. So it's + If you declare a column as <literal>UNIQUE</literal> or <literal>PRIMARY + KEY</literal>, the implicitly generated index is case-sensitive. So it's useless for case-insensitive searches, and it won't enforce uniqueness case-insensitively. </para> @@ -55,13 +55,13 @@ SELECT * FROM tab WHERE lower(col) = LOWER(?); </itemizedlist> <para> - The <type>citext</> data type allows you to eliminate calls - to <function>lower</> in SQL queries, and allows a primary key to - be case-insensitive. <type>citext</> is locale-aware, just - like <type>text</>, which means that the matching of upper case and + The <type>citext</type> data type allows you to eliminate calls + to <function>lower</function> in SQL queries, and allows a primary key to + be case-insensitive. <type>citext</type> is locale-aware, just + like <type>text</type>, which means that the matching of upper case and lower case characters is dependent on the rules of - the database's <literal>LC_CTYPE</> setting. 
Again, this behavior is - identical to the use of <function>lower</> in queries. But because it's + the database's <literal>LC_CTYPE</literal> setting. Again, this behavior is + identical to the use of <function>lower</function> in queries. But because it's done transparently by the data type, you don't have to remember to do anything special in your queries. </para> @@ -89,9 +89,9 @@ INSERT INTO users VALUES ( 'Bjørn', md5(random()::text) ); SELECT * FROM users WHERE nick = 'Larry'; </programlisting> - The <command>SELECT</> statement will return one tuple, even though - the <structfield>nick</> column was set to <literal>larry</> and the query - was for <literal>Larry</>. + The <command>SELECT</command> statement will return one tuple, even though + the <structfield>nick</structfield> column was set to <literal>larry</literal> and the query + was for <literal>Larry</literal>. </para> </sect2> @@ -99,82 +99,82 @@ SELECT * FROM users WHERE nick = 'Larry'; <title>String Comparison Behavior</title> <para> - <type>citext</> performs comparisons by converting each string to lower - case (as though <function>lower</> were called) and then comparing the + <type>citext</type> performs comparisons by converting each string to lower + case (as though <function>lower</function> were called) and then comparing the results normally. Thus, for example, two strings are considered equal - if <function>lower</> would produce identical results for them. + if <function>lower</function> would produce identical results for them. </para> <para> In order to emulate a case-insensitive collation as closely as possible, - there are <type>citext</>-specific versions of a number of string-processing + there are <type>citext</type>-specific versions of a number of string-processing operators and functions. So, for example, the regular expression - operators <literal>~</> and <literal>~*</> exhibit the same behavior when - applied to <type>citext</>: they both match case-insensitively. 
+ operators <literal>~</literal> and <literal>~*</literal> exhibit the same behavior when + applied to <type>citext</type>: they both match case-insensitively. The same is true - for <literal>!~</> and <literal>!~*</>, as well as for the - <literal>LIKE</> operators <literal>~~</> and <literal>~~*</>, and - <literal>!~~</> and <literal>!~~*</>. If you'd like to match - case-sensitively, you can cast the operator's arguments to <type>text</>. + for <literal>!~</literal> and <literal>!~*</literal>, as well as for the + <literal>LIKE</literal> operators <literal>~~</literal> and <literal>~~*</literal>, and + <literal>!~~</literal> and <literal>!~~*</literal>. If you'd like to match + case-sensitively, you can cast the operator's arguments to <type>text</type>. </para> <para> Similarly, all of the following functions perform matching - case-insensitively if their arguments are <type>citext</>: + case-insensitively if their arguments are <type>citext</type>: </para> <itemizedlist> <listitem> <para> - <function>regexp_match()</> + <function>regexp_match()</function> </para> </listitem> <listitem> <para> - <function>regexp_matches()</> + <function>regexp_matches()</function> </para> </listitem> <listitem> <para> - <function>regexp_replace()</> + <function>regexp_replace()</function> </para> </listitem> <listitem> <para> - <function>regexp_split_to_array()</> + <function>regexp_split_to_array()</function> </para> </listitem> <listitem> <para> - <function>regexp_split_to_table()</> + <function>regexp_split_to_table()</function> </para> </listitem> <listitem> <para> - <function>replace()</> + <function>replace()</function> </para> </listitem> <listitem> <para> - <function>split_part()</> + <function>split_part()</function> </para> </listitem> <listitem> <para> - <function>strpos()</> + <function>strpos()</function> </para> </listitem> <listitem> <para> - <function>translate()</> + <function>translate()</function> </para> </listitem> </itemizedlist> <para> For the regexp 
functions, if you want to match case-sensitively, you can - specify the <quote>c</> flag to force a case-sensitive match. Otherwise, - you must cast to <type>text</> before using one of these functions if + specify the <quote>c</quote> flag to force a case-sensitive match. Otherwise, + you must cast to <type>text</type> before using one of these functions if you want case-sensitive behavior. </para> @@ -186,13 +186,13 @@ SELECT * FROM users WHERE nick = 'Larry'; <itemizedlist> <listitem> <para> - <type>citext</>'s case-folding behavior depends on - the <literal>LC_CTYPE</> setting of your database. How it compares + <type>citext</type>'s case-folding behavior depends on + the <literal>LC_CTYPE</literal> setting of your database. How it compares values is therefore determined when the database is created. It is not truly case-insensitive in the terms defined by the Unicode standard. Effectively, what this means is that, as long as you're happy with your - collation, you should be happy with <type>citext</>'s comparisons. But + collation, you should be happy with <type>citext</type>'s comparisons. But if you have data in different languages stored in your database, users of one language may find their query results are not as expected if the collation is for another language. @@ -201,38 +201,38 @@ SELECT * FROM users WHERE nick = 'Larry'; <listitem> <para> - As of <productname>PostgreSQL</> 9.1, you can attach a - <literal>COLLATE</> specification to <type>citext</> columns or data - values. Currently, <type>citext</> operators will honor a non-default - <literal>COLLATE</> specification while comparing case-folded strings, + As of <productname>PostgreSQL</productname> 9.1, you can attach a + <literal>COLLATE</literal> specification to <type>citext</type> columns or data + values. 
Currently, <type>citext</type> operators will honor a non-default + <literal>COLLATE</literal> specification while comparing case-folded strings, but the initial folding to lower case is always done according to the - database's <literal>LC_CTYPE</> setting (that is, as though - <literal>COLLATE "default"</> were given). This may be changed in a - future release so that both steps follow the input <literal>COLLATE</> + database's <literal>LC_CTYPE</literal> setting (that is, as though + <literal>COLLATE "default"</literal> were given). This may be changed in a + future release so that both steps follow the input <literal>COLLATE</literal> specification. </para> </listitem> <listitem> <para> - <type>citext</> is not as efficient as <type>text</> because the + <type>citext</type> is not as efficient as <type>text</type> because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. It is, - however, slightly more efficient than using <function>lower</> to get + however, slightly more efficient than using <function>lower</function> to get case-insensitive matching. </para> </listitem> <listitem> <para> - <type>citext</> doesn't help much if you need data to compare + <type>citext</type> doesn't help much if you need data to compare case-sensitively in some contexts and case-insensitively in other - contexts. The standard answer is to use the <type>text</> type and - manually use the <function>lower</> function when you need to compare + contexts. The standard answer is to use the <type>text</type> type and + manually use the <function>lower</function> function when you need to compare case-insensitively; this works all right if case-insensitive comparison is needed only infrequently. 
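As an aside, not part of the patch: the citext comparison rule described in the hunks above (fold both values as though lower() were called, then compare) can be approximated in Python. The function names below are illustrative, and this sketch ignores the locale and COLLATE subtleties the docs discuss:

```python
# Illustrative sketch only: citext-style vs. text-style equality.
def citext_eq(a: str, b: str) -> bool:
    return a.lower() == b.lower()    # citext: case-fold, then compare

def text_eq(a: str, b: str) -> bool:
    return a == b                    # text: strict comparison

# The docs' example: a row stored as 'larry' matches a query for 'Larry'.
assert citext_eq("larry", "Larry")

# Casting citext to text restores case-sensitive behavior.
assert not text_eq("larry", "Larry")
```

The real type's folding follows the database's LC_CTYPE rules rather than Python's Unicode default, which is exactly why the docs caution that results depend on the collation chosen at database creation time.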
If you need case-insensitive behavior most of the time and case-sensitive infrequently, consider storing the data - as <type>citext</> and explicitly casting the column to <type>text</> + as <type>citext</type> and explicitly casting the column to <type>text</type> when you want case-sensitive comparison. In either situation, you will need two indexes if you want both types of searches to be fast. </para> @@ -240,9 +240,9 @@ SELECT * FROM users WHERE nick = 'Larry'; <listitem> <para> - The schema containing the <type>citext</> operators must be - in the current <varname>search_path</> (typically <literal>public</>); - if it is not, the normal case-sensitive <type>text</> operators + The schema containing the <type>citext</type> operators must be + in the current <varname>search_path</varname> (typically <literal>public</literal>); + if it is not, the normal case-sensitive <type>text</type> operators will be invoked instead. </para> </listitem> @@ -257,7 +257,7 @@ SELECT * FROM users WHERE nick = 'Larry'; </para> <para> - Inspired by the original <type>citext</> module by Donald Fraser. + Inspired by the original <type>citext</type> module by Donald Fraser. </para> </sect2> diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index 78c594bbbaa..722f3da8138 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -21,9 +21,9 @@ <para> As explained in <xref linkend="user-manag">, <productname>PostgreSQL</productname> actually does privilege - management in terms of <quote>roles</>. In this chapter, we - consistently use <firstterm>database user</> to mean <quote>role with the - <literal>LOGIN</> privilege</quote>. + management in terms of <quote>roles</quote>. In this chapter, we + consistently use <firstterm>database user</firstterm> to mean <quote>role with the + <literal>LOGIN</literal> privilege</quote>. 
</para> </note> @@ -66,7 +66,7 @@ which traditionally is named <filename>pg_hba.conf</filename> and is stored in the database cluster's data directory. - (<acronym>HBA</> stands for host-based authentication.) A default + (<acronym>HBA</acronym> stands for host-based authentication.) A default <filename>pg_hba.conf</filename> file is installed when the data directory is initialized by <command>initdb</command>. It is possible to place the authentication configuration file elsewhere, @@ -82,7 +82,7 @@ up of a number of fields which are separated by spaces and/or tabs. Fields can contain white space if the field value is double-quoted. Quoting one of the keywords in a database, user, or address field (e.g., - <literal>all</> or <literal>replication</>) makes the word lose its special + <literal>all</literal> or <literal>replication</literal>) makes the word lose its special meaning, and just match a database, user, or host with that name. </para> @@ -92,8 +92,8 @@ and the authentication method to be used for connections matching these parameters. The first record with a matching connection type, client address, requested database, and user name is used to perform - authentication. There is no <quote>fall-through</> or - <quote>backup</>: if one record is chosen and the authentication + authentication. There is no <quote>fall-through</quote> or + <quote>backup</quote>: if one record is chosen and the authentication fails, subsequent records are not considered. If no record matches, access is denied. </para> @@ -138,7 +138,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> the server is started with an appropriate value for the <xref linkend="guc-listen-addresses"> configuration parameter, since the default behavior is to listen for TCP/IP connections - only on the local loopback address <literal>localhost</>. + only on the local loopback address <literal>localhost</literal>. 
</para> </note> </listitem> @@ -169,7 +169,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <term><literal>hostnossl</literal></term> <listitem> <para> - This record type has the opposite behavior of <literal>hostssl</>; + This record type has the opposite behavior of <literal>hostssl</literal>; it only matches connection attempts made over TCP/IP that do not use <acronym>SSL</acronym>. </para> @@ -182,24 +182,24 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <para> Specifies which database name(s) this record matches. The value <literal>all</literal> specifies that it matches all databases. - The value <literal>sameuser</> specifies that the record + The value <literal>sameuser</literal> specifies that the record matches if the requested database has the same name as the - requested user. The value <literal>samerole</> specifies that + requested user. The value <literal>samerole</literal> specifies that the requested user must be a member of the role with the same - name as the requested database. (<literal>samegroup</> is an - obsolete but still accepted spelling of <literal>samerole</>.) + name as the requested database. (<literal>samegroup</literal> is an + obsolete but still accepted spelling of <literal>samerole</literal>.) Superusers are not considered to be members of a role for the - purposes of <literal>samerole</> unless they are explicitly + purposes of <literal>samerole</literal> unless they are explicitly members of the role, directly or indirectly, and not just by virtue of being a superuser. - The value <literal>replication</> specifies that the record + The value <literal>replication</literal> specifies that the record matches if a physical replication connection is requested (note that replication connections do not specify any particular database). Otherwise, this is the name of a specific <productname>PostgreSQL</productname> database. 
Multiple database names can be supplied by separating them with commas. A separate file containing database names can be specified by - preceding the file name with <literal>@</>. + preceding the file name with <literal>@</literal>. </para> </listitem> </varlistentry> @@ -211,18 +211,18 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> Specifies which database user name(s) this record matches. The value <literal>all</literal> specifies that it matches all users. Otherwise, this is either the name of a specific - database user, or a group name preceded by <literal>+</>. + database user, or a group name preceded by <literal>+</literal>. (Recall that there is no real distinction between users and groups - in <productname>PostgreSQL</>; a <literal>+</> mark really means + in <productname>PostgreSQL</productname>; a <literal>+</literal> mark really means <quote>match any of the roles that are directly or indirectly members - of this role</>, while a name without a <literal>+</> mark matches + of this role</quote>, while a name without a <literal>+</literal> mark matches only that specific role.) For this purpose, a superuser is only considered to be a member of a role if they are explicitly a member of the role, directly or indirectly, and not just by virtue of being a superuser. Multiple user names can be supplied by separating them with commas. A separate file containing user names can be specified by preceding the - file name with <literal>@</>. + file name with <literal>@</literal>. </para> </listitem> </varlistentry> @@ -239,7 +239,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <para> An IP address range is specified using standard numeric notation for the range's starting address, then a slash (<literal>/</literal>) - and a <acronym>CIDR</> mask length. The mask + and a <acronym>CIDR</acronym> mask length. 
The mask length indicates the number of high-order bits of the client IP address that must match. Bits to the right of this should be zero in the given IP address. @@ -317,7 +317,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <para> This field only applies to <literal>host</literal>, - <literal>hostssl</literal>, and <literal>hostnossl</> records. + <literal>hostssl</literal>, and <literal>hostnossl</literal> records. </para> <note> @@ -360,17 +360,17 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <listitem> <para> These two fields can be used as an alternative to the - <replaceable>IP-address</><literal>/</><replaceable>mask-length</> + <replaceable>IP-address</replaceable><literal>/</literal><replaceable>mask-length</replaceable> notation. Instead of specifying the mask length, the actual mask is specified in a - separate column. For example, <literal>255.0.0.0</> represents an IPv4 - CIDR mask length of 8, and <literal>255.255.255.255</> represents a + separate column. For example, <literal>255.0.0.0</literal> represents an IPv4 + CIDR mask length of 8, and <literal>255.255.255.255</literal> represents a CIDR mask length of 32. </para> <para> These fields only apply to <literal>host</literal>, - <literal>hostssl</literal>, and <literal>hostnossl</> records. + <literal>hostssl</literal>, and <literal>hostnossl</literal> records. </para> </listitem> </varlistentry> @@ -385,7 +385,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <variablelist> <varlistentry> - <term><literal>trust</></term> + <term><literal>trust</literal></term> <listitem> <para> Allow the connection unconditionally. This method @@ -399,12 +399,12 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>reject</></term> + <term><literal>reject</literal></term> <listitem> <para> Reject the connection unconditionally. 
This is useful for - <quote>filtering out</> certain hosts from a group, for example a - <literal>reject</> line could block a specific host from connecting, + <quote>filtering out</quote> certain hosts from a group, for example a + <literal>reject</literal> line could block a specific host from connecting, while a later line allows the remaining hosts in a specific network to connect. </para> @@ -412,7 +412,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>scram-sha-256</></term> + <term><literal>scram-sha-256</literal></term> <listitem> <para> Perform SCRAM-SHA-256 authentication to verify the user's @@ -422,7 +422,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>md5</></term> + <term><literal>md5</literal></term> <listitem> <para> Perform SCRAM-SHA-256 or MD5 authentication to verify the @@ -433,7 +433,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>password</></term> + <term><literal>password</literal></term> <listitem> <para> Require the client to supply an unencrypted password for @@ -446,7 +446,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>gss</></term> + <term><literal>gss</literal></term> <listitem> <para> Use GSSAPI to authenticate the user. This is only @@ -457,7 +457,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>sspi</></term> + <term><literal>sspi</literal></term> <listitem> <para> Use SSPI to authenticate the user. 
This is only @@ -468,7 +468,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>ident</></term> + <term><literal>ident</literal></term> <listitem> <para> Obtain the operating system user name of the client @@ -483,7 +483,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>peer</></term> + <term><literal>peer</literal></term> <listitem> <para> Obtain the client's operating system user name from the operating @@ -495,17 +495,17 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>ldap</></term> + <term><literal>ldap</literal></term> <listitem> <para> - Authenticate using an <acronym>LDAP</> server. See <xref + Authenticate using an <acronym>LDAP</acronym> server. See <xref linkend="auth-ldap"> for details. </para> </listitem> </varlistentry> <varlistentry> - <term><literal>radius</></term> + <term><literal>radius</literal></term> <listitem> <para> Authenticate using a RADIUS server. See <xref @@ -515,7 +515,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>cert</></term> + <term><literal>cert</literal></term> <listitem> <para> Authenticate using SSL client certificates. 
See @@ -525,7 +525,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>pam</></term> + <term><literal>pam</literal></term> <listitem> <para> Authenticate using the Pluggable Authentication Modules @@ -536,7 +536,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </varlistentry> <varlistentry> - <term><literal>bsd</></term> + <term><literal>bsd</literal></term> <listitem> <para> Authenticate using the BSD Authentication service provided by the @@ -554,17 +554,17 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <term><replaceable>auth-options</replaceable></term> <listitem> <para> - After the <replaceable>auth-method</> field, there can be field(s) of - the form <replaceable>name</><literal>=</><replaceable>value</> that + After the <replaceable>auth-method</replaceable> field, there can be field(s) of + the form <replaceable>name</replaceable><literal>=</literal><replaceable>value</replaceable> that specify options for the authentication method. Details about which options are available for which authentication methods appear below. </para> <para> In addition to the method-specific options listed below, there is one - method-independent authentication option <literal>clientcert</>, which - can be specified in any <literal>hostssl</> record. When set - to <literal>1</>, this option requires the client to present a valid + method-independent authentication option <literal>clientcert</literal>, which + can be specified in any <literal>hostssl</literal> record. When set + to <literal>1</literal>, this option requires the client to present a valid (trusted) SSL certificate, in addition to the other requirements of the authentication method. 
</para> @@ -574,11 +574,11 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> </para> <para> - Files included by <literal>@</> constructs are read as lists of names, + Files included by <literal>@</literal> constructs are read as lists of names, which can be separated by either whitespace or commas. Comments are introduced by <literal>#</literal>, just as in - <filename>pg_hba.conf</filename>, and nested <literal>@</> constructs are - allowed. Unless the file name following <literal>@</> is an absolute + <filename>pg_hba.conf</filename>, and nested <literal>@</literal> constructs are + allowed. Unless the file name following <literal>@</literal> is an absolute path, it is taken to be relative to the directory containing the referencing file. </para> @@ -589,10 +589,10 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> significant. Typically, earlier records will have tight connection match parameters and weaker authentication methods, while later records will have looser match parameters and stronger authentication - methods. For example, one might wish to use <literal>trust</> + methods. For example, one might wish to use <literal>trust</literal> authentication for local TCP/IP connections but require a password for remote TCP/IP connections. In this case a record specifying - <literal>trust</> authentication for connections from 127.0.0.1 would + <literal>trust</literal> authentication for connections from 127.0.0.1 would appear before a record specifying password authentication for a wider range of allowed client IP addresses. </para> @@ -603,7 +603,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <systemitem>SIGHUP</systemitem><indexterm><primary>SIGHUP</primary></indexterm> signal. 
If you edit the file on an active system, you will need to signal the postmaster - (using <literal>pg_ctl reload</> or <literal>kill -HUP</>) to make it + (using <literal>pg_ctl reload</literal> or <literal>kill -HUP</literal>) to make it re-read the file. </para> @@ -618,7 +618,7 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <para> The system view <link linkend="view-pg-hba-file-rules"><structname>pg_hba_file_rules</structname></link> - can be helpful for pre-testing changes to the <filename>pg_hba.conf</> + can be helpful for pre-testing changes to the <filename>pg_hba.conf</filename> file, or for diagnosing problems if loading of the file did not have the desired effects. Rows in the view with non-null <structfield>error</structfield> fields indicate problems in the @@ -629,9 +629,9 @@ hostnossl <replaceable>database</replaceable> <replaceable>user</replaceable> <para> To connect to a particular database, a user must not only pass the <filename>pg_hba.conf</filename> checks, but must have the - <literal>CONNECT</> privilege for the database. If you wish to + <literal>CONNECT</literal> privilege for the database. If you wish to restrict which users can connect to which databases, it's usually - easier to control this by granting/revoking <literal>CONNECT</> privilege + easier to control this by granting/revoking <literal>CONNECT</literal> privilege than to put the rules in <filename>pg_hba.conf</filename> entries. </para> </tip> @@ -760,21 +760,21 @@ local db1,db2,@demodbs all md5 <para> User name maps are defined in the ident map file, which by default is named - <filename>pg_ident.conf</><indexterm><primary>pg_ident.conf</primary></indexterm> + <filename>pg_ident.conf</filename><indexterm><primary>pg_ident.conf</primary></indexterm> and is stored in the cluster's data directory. (It is possible to place the map file elsewhere, however; see the <xref linkend="guc-ident-file"> configuration parameter.) 
The ident map file contains lines of the general form: <synopsis> -<replaceable>map-name</> <replaceable>system-username</> <replaceable>database-username</> +<replaceable>map-name</replaceable> <replaceable>system-username</replaceable> <replaceable>database-username</replaceable> </synopsis> Comments and whitespace are handled in the same way as in - <filename>pg_hba.conf</>. The - <replaceable>map-name</> is an arbitrary name that will be used to + <filename>pg_hba.conf</filename>. The + <replaceable>map-name</replaceable> is an arbitrary name that will be used to refer to this mapping in <filename>pg_hba.conf</filename>. The other two fields specify an operating system user name and a matching - database user name. The same <replaceable>map-name</> can be + database user name. The same <replaceable>map-name</replaceable> can be used repeatedly to specify multiple user-mappings within a single map. </para> <para> @@ -788,13 +788,13 @@ local db1,db2,@demodbs all md5 user has requested to connect as. </para> <para> - If the <replaceable>system-username</> field starts with a slash (<literal>/</>), + If the <replaceable>system-username</replaceable> field starts with a slash (<literal>/</literal>), the remainder of the field is treated as a regular expression. (See <xref linkend="posix-syntax-details"> for details of - <productname>PostgreSQL</>'s regular expression syntax.) The regular + <productname>PostgreSQL</productname>'s regular expression syntax.) The regular expression can include a single capture, or parenthesized subexpression, - which can then be referenced in the <replaceable>database-username</> - field as <literal>\1</> (backslash-one). This allows the mapping of + which can then be referenced in the <replaceable>database-username</replaceable> + field as <literal>\1</literal> (backslash-one). This allows the mapping of multiple user names in a single line, which is particularly useful for simple syntax substitutions. 
For example, these entries <programlisting> @@ -802,14 +802,14 @@ mymap /^(.*)@mydomain\.com$ \1 mymap /^(.*)@otherdomain\.com$ guest </programlisting> will remove the domain part for users with system user names that end with - <literal>@mydomain.com</>, and allow any user whose system name ends with - <literal>@otherdomain.com</> to log in as <literal>guest</>. + <literal>@mydomain.com</literal>, and allow any user whose system name ends with + <literal>@otherdomain.com</literal> to log in as <literal>guest</literal>. </para> <tip> <para> Keep in mind that by default, a regular expression can match just part of - a string. It's usually wise to use <literal>^</> and <literal>$</>, as + a string. It's usually wise to use <literal>^</literal> and <literal>$</literal>, as shown in the above example, to force the match to be to the entire system user name. </para> @@ -821,28 +821,28 @@ mymap /^(.*)@otherdomain\.com$ guest <systemitem>SIGHUP</systemitem><indexterm><primary>SIGHUP</primary></indexterm> signal. If you edit the file on an active system, you will need to signal the postmaster - (using <literal>pg_ctl reload</> or <literal>kill -HUP</>) to make it + (using <literal>pg_ctl reload</literal> or <literal>kill -HUP</literal>) to make it re-read the file. </para> <para> A <filename>pg_ident.conf</filename> file that could be used in - conjunction with the <filename>pg_hba.conf</> file in <xref + conjunction with the <filename>pg_hba.conf</filename> file in <xref linkend="example-pg-hba.conf"> is shown in <xref linkend="example-pg-ident.conf">. In this example, anyone logged in to a machine on the 192.168 network that does not have the - operating system user name <literal>bryanh</>, <literal>ann</>, or - <literal>robert</> would not be granted access. Unix user - <literal>robert</> would only be allowed access when he tries to - connect as <productname>PostgreSQL</> user <literal>bob</>, not - as <literal>robert</> or anyone else. 
<literal>ann</> would - only be allowed to connect as <literal>ann</>. User - <literal>bryanh</> would be allowed to connect as either - <literal>bryanh</> or as <literal>guest1</>. + operating system user name <literal>bryanh</literal>, <literal>ann</literal>, or + <literal>robert</literal> would not be granted access. Unix user + <literal>robert</literal> would only be allowed access when he tries to + connect as <productname>PostgreSQL</productname> user <literal>bob</literal>, not + as <literal>robert</literal> or anyone else. <literal>ann</literal> would + only be allowed to connect as <literal>ann</literal>. User + <literal>bryanh</literal> would be allowed to connect as either + <literal>bryanh</literal> or as <literal>guest1</literal>. </para> <example id="example-pg-ident.conf"> - <title>An Example <filename>pg_ident.conf</> File</title> + <title>An Example <filename>pg_ident.conf</filename> File</title> <programlisting> # MAPNAME SYSTEM-USERNAME PG-USERNAME @@ -866,21 +866,21 @@ omicron bryanh guest1 <title>Trust Authentication</title> <para> - When <literal>trust</> authentication is specified, + When <literal>trust</literal> authentication is specified, <productname>PostgreSQL</productname> assumes that anyone who can connect to the server is authorized to access the database with whatever database user name they specify (even superuser names). - Of course, restrictions made in the <literal>database</> and - <literal>user</> columns still apply. + Of course, restrictions made in the <literal>database</literal> and + <literal>user</literal> columns still apply. This method should only be used when there is adequate operating-system-level protection on connections to the server. </para> <para> - <literal>trust</> authentication is appropriate and very + <literal>trust</literal> authentication is appropriate and very convenient for local connections on a single-user workstation. 
It - is usually <emphasis>not</> appropriate by itself on a multiuser - machine. However, you might be able to use <literal>trust</> even + is usually <emphasis>not</emphasis> appropriate by itself on a multiuser + machine. However, you might be able to use <literal>trust</literal> even on a multiuser machine, if you restrict access to the server's Unix-domain socket file using file-system permissions. To do this, set the <varname>unix_socket_permissions</varname> (and possibly @@ -895,17 +895,17 @@ omicron bryanh guest1 Setting file-system permissions only helps for Unix-socket connections. Local TCP/IP connections are not restricted by file-system permissions. Therefore, if you want to use file-system permissions for local security, - remove the <literal>host ... 127.0.0.1 ...</> line from - <filename>pg_hba.conf</>, or change it to a - non-<literal>trust</> authentication method. + remove the <literal>host ... 127.0.0.1 ...</literal> line from + <filename>pg_hba.conf</filename>, or change it to a + non-<literal>trust</literal> authentication method. </para> <para> - <literal>trust</> authentication is only suitable for TCP/IP connections + <literal>trust</literal> authentication is only suitable for TCP/IP connections if you trust every user on every machine that is allowed to connect - to the server by the <filename>pg_hba.conf</> lines that specify - <literal>trust</>. It is seldom reasonable to use <literal>trust</> - for any TCP/IP connections other than those from <systemitem>localhost</> (127.0.0.1). + to the server by the <filename>pg_hba.conf</filename> lines that specify + <literal>trust</literal>. It is seldom reasonable to use <literal>trust</literal> + for any TCP/IP connections other than those from <systemitem>localhost</systemitem> (127.0.0.1). 
</para> </sect2> @@ -914,10 +914,10 @@ omicron bryanh guest1 <title>Password Authentication</title> <indexterm> - <primary>MD5</> + <primary>MD5</primary> </indexterm> <indexterm> - <primary>SCRAM</> + <primary>SCRAM</primary> </indexterm> <indexterm> <primary>password</primary> @@ -936,7 +936,7 @@ omicron bryanh guest1 <term><literal>scram-sha-256</literal></term> <listitem> <para> - The method <literal>scram-sha-256</> performs SCRAM-SHA-256 + The method <literal>scram-sha-256</literal> performs SCRAM-SHA-256 authentication, as described in <ulink url="https://tools.ietf.org/html/rfc7677">RFC 7677</ulink>. It is a challenge-response scheme that prevents password sniffing on @@ -955,7 +955,7 @@ omicron bryanh guest1 <term><literal>md5</literal></term> <listitem> <para> - The method <literal>md5</> uses a custom less secure challenge-response + The method <literal>md5</literal> uses a custom less secure challenge-response mechanism. It prevents password sniffing and avoids storing passwords on the server in plain text but provides no protection if an attacker manages to steal the password hash from the server. Also, the MD5 hash @@ -982,10 +982,10 @@ omicron bryanh guest1 <term><literal>password</literal></term> <listitem> <para> - The method <literal>password</> sends the password in clear-text and is - therefore vulnerable to password <quote>sniffing</> attacks. It should + The method <literal>password</literal> sends the password in clear-text and is + therefore vulnerable to password <quote>sniffing</quote> attacks. It should always be avoided if possible. If the connection is protected by SSL - encryption then <literal>password</> can be used safely, though. + encryption then <literal>password</literal> can be used safely, though. (Though SSL certificate authentication might be a better choice if one is depending on using SSL). 
</para> @@ -996,7 +996,7 @@ omicron bryanh guest1 <para> <productname>PostgreSQL</productname> database passwords are separate from operating system user passwords. The password for - each database user is stored in the <literal>pg_authid</> system + each database user is stored in the <literal>pg_authid</literal> system catalog. Passwords can be managed with the SQL commands <xref linkend="sql-createuser"> and <xref linkend="sql-alterrole">, @@ -1060,7 +1060,7 @@ omicron bryanh guest1 </para> <para> - GSSAPI support has to be enabled when <productname>PostgreSQL</> is built; + GSSAPI support has to be enabled when <productname>PostgreSQL</productname> is built; see <xref linkend="installation"> for more information. </para> @@ -1068,13 +1068,13 @@ omicron bryanh guest1 When <productname>GSSAPI</productname> uses <productname>Kerberos</productname>, it uses a standard principal in the format - <literal><replaceable>servicename</>/<replaceable>hostname</>@<replaceable>realm</></literal>. + <literal><replaceable>servicename</replaceable>/<replaceable>hostname</replaceable>@<replaceable>realm</replaceable></literal>. The PostgreSQL server will accept any principal that is included in the keytab used by the server, but care needs to be taken to specify the correct principal details when - making the connection from the client using the <literal>krbsrvname</> connection parameter. (See + making the connection from the client using the <literal>krbsrvname</literal> connection parameter. (See also <xref linkend="libpq-paramkeywords">.) The installation default can be changed from the default <literal>postgres</literal> at build time using - <literal>./configure --with-krb-srvnam=</><replaceable>whatever</>. + <literal>./configure --with-krb-srvnam=</literal><replaceable>whatever</replaceable>. In most environments, this parameter never needs to be changed. 
Some Kerberos implementations might require a different service name, @@ -1082,31 +1082,31 @@ omicron bryanh guest1 to be in upper case (<literal>POSTGRES</literal>). </para> <para> - <replaceable>hostname</> is the fully qualified host name of the + <replaceable>hostname</replaceable> is the fully qualified host name of the server machine. The service principal's realm is the preferred realm of the server machine. </para> <para> - Client principals can be mapped to different <productname>PostgreSQL</> - database user names with <filename>pg_ident.conf</>. For example, - <literal>pgusername@realm</> could be mapped to just <literal>pgusername</>. - Alternatively, you can use the full <literal>username@realm</> principal as - the role name in <productname>PostgreSQL</> without any mapping. + Client principals can be mapped to different <productname>PostgreSQL</productname> + database user names with <filename>pg_ident.conf</filename>. For example, + <literal>pgusername@realm</literal> could be mapped to just <literal>pgusername</literal>. + Alternatively, you can use the full <literal>username@realm</literal> principal as + the role name in <productname>PostgreSQL</productname> without any mapping. </para> <para> - <productname>PostgreSQL</> also supports a parameter to strip the realm from + <productname>PostgreSQL</productname> also supports a parameter to strip the realm from the principal. This method is supported for backwards compatibility and is strongly discouraged as it is then impossible to distinguish different users with the same user name but coming from different realms. To enable this, - set <literal>include_realm</> to 0. For simple single-realm + set <literal>include_realm</literal> to 0. 
For simple single-realm installations, doing that combined with setting the - <literal>krb_realm</> parameter (which checks that the principal's realm + <literal>krb_realm</literal> parameter (which checks that the principal's realm matches exactly what is in the <literal>krb_realm</literal> parameter) is still secure; but this is a less capable approach compared to specifying an explicit mapping in - <filename>pg_ident.conf</>. + <filename>pg_ident.conf</filename>. </para> <para> @@ -1116,8 +1116,8 @@ omicron bryanh guest1 of the key file is specified by the <xref linkend="guc-krb-server-keyfile"> configuration parameter. The default is - <filename>/usr/local/pgsql/etc/krb5.keytab</> (or whatever - directory was specified as <varname>sysconfdir</> at build time). + <filename>/usr/local/pgsql/etc/krb5.keytab</filename> (or whatever + directory was specified as <varname>sysconfdir</varname> at build time). For security reasons, it is recommended to use a separate keytab just for the <productname>PostgreSQL</productname> server rather than opening up permissions on the system keytab file. @@ -1127,17 +1127,17 @@ omicron bryanh guest1 Kerberos documentation for details. The following example is for MIT-compatible Kerberos 5 implementations: <screen> -<prompt>kadmin% </><userinput>ank -randkey postgres/server.my.domain.org</> -<prompt>kadmin% </><userinput>ktadd -k krb5.keytab postgres/server.my.domain.org</> +<prompt>kadmin% </prompt><userinput>ank -randkey postgres/server.my.domain.org</userinput> +<prompt>kadmin% </prompt><userinput>ktadd -k krb5.keytab postgres/server.my.domain.org</userinput> </screen> </para> <para> When connecting to the database make sure you have a ticket for a principal matching the requested database user name. For example, for - database user name <literal>fred</>, principal - <literal>fred@EXAMPLE.COM</> would be able to connect. 
To also allow - principal <literal>fred/users.example.com@EXAMPLE.COM</>, use a user name + database user name <literal>fred</literal>, principal + <literal>fred@EXAMPLE.COM</literal> would be able to connect. To also allow + principal <literal>fred/users.example.com@EXAMPLE.COM</literal>, use a user name map, as described in <xref linkend="auth-username-maps">. </para> @@ -1155,8 +1155,8 @@ omicron bryanh guest1 in multi-realm environments unless <literal>krb_realm</literal> is also used. It is recommended to leave <literal>include_realm</literal> set to the default (1) and to - provide an explicit mapping in <filename>pg_ident.conf</> to convert - principal names to <productname>PostgreSQL</> user names. + provide an explicit mapping in <filename>pg_ident.conf</filename> to convert + principal names to <productname>PostgreSQL</productname> user names. </para> </listitem> </varlistentry> @@ -1236,8 +1236,8 @@ omicron bryanh guest1 in multi-realm environments unless <literal>krb_realm</literal> is also used. It is recommended to leave <literal>include_realm</literal> set to the default (1) and to - provide an explicit mapping in <filename>pg_ident.conf</> to convert - principal names to <productname>PostgreSQL</> user names. + provide an explicit mapping in <filename>pg_ident.conf</filename> to convert + principal names to <productname>PostgreSQL</productname> user names. </para> </listitem> </varlistentry> @@ -1270,9 +1270,9 @@ omicron bryanh guest1 By default, these two names are identical for new user accounts. </para> <para> - Note that <application>libpq</> uses the SAM-compatible name if no + Note that <application>libpq</application> uses the SAM-compatible name if no explicit user name is specified. If you use - <application>libpq</> or a driver based on it, you should + <application>libpq</application> or a driver based on it, you should leave this option disabled or explicitly specify user name in the connection string. 
</para> @@ -1357,8 +1357,8 @@ omicron bryanh guest1 is to answer questions like <quote>What user initiated the connection that goes out of your port <replaceable>X</replaceable> and connects to my port <replaceable>Y</replaceable>?</quote>. - Since <productname>PostgreSQL</> knows both <replaceable>X</> and - <replaceable>Y</> when a physical connection is established, it + Since <productname>PostgreSQL</productname> knows both <replaceable>X</replaceable> and + <replaceable>Y</replaceable> when a physical connection is established, it can interrogate the ident server on the host of the connecting client and can theoretically determine the operating system user for any given connection. @@ -1386,9 +1386,9 @@ omicron bryanh guest1 <para> Some ident servers have a nonstandard option that causes the returned user name to be encrypted, using a key that only the originating - machine's administrator knows. This option <emphasis>must not</> be - used when using the ident server with <productname>PostgreSQL</>, - since <productname>PostgreSQL</> does not have any way to decrypt the + machine's administrator knows. This option <emphasis>must not</emphasis> be + used when using the ident server with <productname>PostgreSQL</productname>, + since <productname>PostgreSQL</productname> does not have any way to decrypt the returned string to determine the actual user name. </para> </sect2> @@ -1424,11 +1424,11 @@ omicron bryanh guest1 <para> Peer authentication is only available on operating systems providing - the <function>getpeereid()</> function, the <symbol>SO_PEERCRED</symbol> + the <function>getpeereid()</function> function, the <symbol>SO_PEERCRED</symbol> socket parameter, or similar mechanisms. 
Currently that includes - <systemitem class="osname">Linux</>, - most flavors of <systemitem class="osname">BSD</> including - <systemitem class="osname">macOS</>, + <systemitem class="osname">Linux</systemitem>, + most flavors of <systemitem class="osname">BSD</systemitem> including + <systemitem class="osname">macOS</systemitem>, and <systemitem class="osname">Solaris</systemitem>. </para> @@ -1454,23 +1454,23 @@ omicron bryanh guest1 LDAP authentication can operate in two modes. In the first mode, which we will call the simple bind mode, the server will bind to the distinguished name constructed as - <replaceable>prefix</> <replaceable>username</> <replaceable>suffix</>. - Typically, the <replaceable>prefix</> parameter is used to specify - <literal>cn=</>, or <replaceable>DOMAIN</><literal>\</> in an Active - Directory environment. <replaceable>suffix</> is used to specify the + <replaceable>prefix</replaceable> <replaceable>username</replaceable> <replaceable>suffix</replaceable>. + Typically, the <replaceable>prefix</replaceable> parameter is used to specify + <literal>cn=</literal>, or <replaceable>DOMAIN</replaceable><literal>\</literal> in an Active + Directory environment. <replaceable>suffix</replaceable> is used to specify the remaining part of the DN in a non-Active Directory environment. </para> <para> In the second mode, which we will call the search+bind mode, the server first binds to the LDAP directory with - a fixed user name and password, specified with <replaceable>ldapbinddn</> - and <replaceable>ldapbindpasswd</>, and performs a search for the user trying + a fixed user name and password, specified with <replaceable>ldapbinddn</replaceable> + and <replaceable>ldapbindpasswd</replaceable>, and performs a search for the user trying to log in to the database. If no user and password is configured, an anonymous bind will be attempted to the directory. 
The search will be - performed over the subtree at <replaceable>ldapbasedn</>, and will try to + performed over the subtree at <replaceable>ldapbasedn</replaceable>, and will try to do an exact match of the attribute specified in - <replaceable>ldapsearchattribute</>. + <replaceable>ldapsearchattribute</replaceable>. Once the user has been found in this search, the server disconnects and re-binds to the directory as this user, using the password specified by the client, to verify that the @@ -1572,7 +1572,7 @@ omicron bryanh guest1 <para> Attribute to match against the user name in the search when doing search+bind authentication. If no attribute is specified, the - <literal>uid</> attribute will be used. + <literal>uid</literal> attribute will be used. </para> </listitem> </varlistentry> @@ -1719,11 +1719,11 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse When using RADIUS authentication, an Access Request message will be sent to the configured RADIUS server. This request will be of type <literal>Authenticate Only</literal>, and include parameters for - <literal>user name</>, <literal>password</> (encrypted) and - <literal>NAS Identifier</>. The request will be encrypted using + <literal>user name</literal>, <literal>password</literal> (encrypted) and + <literal>NAS Identifier</literal>. The request will be encrypted using a secret shared with the server. The RADIUS server will respond to - this server with either <literal>Access Accept</> or - <literal>Access Reject</>. There is no support for RADIUS accounting. + this server with either <literal>Access Accept</literal> or + <literal>Access Reject</literal>. There is no support for RADIUS accounting. </para> <para> @@ -1762,8 +1762,8 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse <note> <para> The encryption vector used will only be cryptographically - strong if <productname>PostgreSQL</> is built with support for - <productname>OpenSSL</>. 
In other cases, the transmission to the + strong if <productname>PostgreSQL</productname> is built with support for + <productname>OpenSSL</productname>. In other cases, the transmission to the RADIUS server should only be considered obfuscated, not secured, and external security measures should be applied if necessary. </para> @@ -1777,7 +1777,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse <listitem> <para> The port number on the RADIUS servers to connect to. If no port - is specified, the default port <literal>1812</> will be used. + is specified, the default port <literal>1812</literal> will be used. </para> </listitem> </varlistentry> @@ -1786,12 +1786,12 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse <term><literal>radiusidentifiers</literal></term> <listitem> <para> - The string used as <literal>NAS Identifier</> in the RADIUS + The string used as <literal>NAS Identifier</literal> in the RADIUS requests. This parameter can be used as a second parameter identifying for example which database user the user is attempting to authenticate as, which can be used for policy matching on the RADIUS server. If no identifier is specified, the default - <literal>postgresql</> will be used. + <literal>postgresql</literal> will be used. </para> </listitem> </varlistentry> @@ -1836,11 +1836,11 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse </para> <para> - In a <filename>pg_hba.conf</> record specifying certificate - authentication, the authentication option <literal>clientcert</> is - assumed to be <literal>1</>, and it cannot be turned off since a client - certificate is necessary for this method. 
What the <literal>cert</> - method adds to the basic <literal>clientcert</> certificate validity test + In a <filename>pg_hba.conf</filename> record specifying certificate + authentication, the authentication option <literal>clientcert</literal> is + assumed to be <literal>1</literal>, and it cannot be turned off since a client + certificate is necessary for this method. What the <literal>cert</literal> + method adds to the basic <literal>clientcert</literal> certificate validity test is a check that the <literal>cn</literal> attribute matches the database user name. </para> @@ -1863,7 +1863,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse exist in the database before PAM can be used for authentication. For more information about PAM, please read the <ulink url="http://www.kernel.org/pub/linux/libs/pam/"> - <productname>Linux-PAM</> Page</ulink>. + <productname>Linux-PAM</productname> Page</ulink>. </para> <para> @@ -1896,7 +1896,7 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse <note> <para> - If PAM is set up to read <filename>/etc/shadow</>, authentication + If PAM is set up to read <filename>/etc/shadow</filename>, authentication will fail because the PostgreSQL server is started by a non-root user. However, this is not an issue when PAM is configured to use LDAP or other authentication methods. @@ -1922,11 +1922,11 @@ host ... ldap ldapserver=ldap.example.net ldapbasedn="dc=example, dc=net" ldapse </para> <para> - BSD Authentication in <productname>PostgreSQL</> uses + BSD Authentication in <productname>PostgreSQL</productname> uses the <literal>auth-postgresql</literal> login type and authenticates with the <literal>postgresql</literal> login class if that's defined in <filename>login.conf</filename>. By default that login class does not - exist, and <productname>PostgreSQL</> will use the default login class. 
+ exist, and <productname>PostgreSQL</productname> will use the default login class. </para> <note> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index b012a269911..aeda826d874 100644 --- a/doc/src/sgml/config.sgml +++ b/doc/src/sgml/config.sgml @@ -70,9 +70,9 @@ (typically eight kilobytes), milliseconds, seconds, or minutes. An unadorned numeric value for one of these settings will use the setting's default unit, which can be learned from - <structname>pg_settings</>.<structfield>unit</>. + <structname>pg_settings</structname>.<structfield>unit</structfield>. For convenience, settings can be given with a unit specified explicitly, - for example <literal>'120 ms'</> for a time value, and they will be + for example <literal>'120 ms'</literal> for a time value, and they will be converted to whatever the parameter's actual unit is. Note that the value must be written as a string (with quotes) to use this feature. The unit name is case-sensitive, and there can be whitespace between @@ -105,7 +105,7 @@ Enumerated-type parameters are written in the same way as string parameters, but are restricted to have one of a limited set of values. The values allowable for such a parameter can be found from - <structname>pg_settings</>.<structfield>enumvals</>. + <structname>pg_settings</structname>.<structfield>enumvals</structfield>. Enum parameter values are case-insensitive. </para> </listitem> @@ -117,7 +117,7 @@ <para> The most fundamental way to set these parameters is to edit the file - <filename>postgresql.conf</><indexterm><primary>postgresql.conf</></>, + <filename>postgresql.conf</filename><indexterm><primary>postgresql.conf</primary></indexterm>, which is normally kept in the data directory. A default copy is installed when the database cluster directory is initialized. 
An example of what this file might look like is: @@ -150,8 +150,8 @@ shared_buffers = 128MB <primary>SIGHUP</primary> </indexterm> The configuration file is reread whenever the main server process - receives a <systemitem>SIGHUP</> signal; this signal is most easily - sent by running <literal>pg_ctl reload</> from the command line or by + receives a <systemitem>SIGHUP</systemitem> signal; this signal is most easily + sent by running <literal>pg_ctl reload</literal> from the command line or by calling the SQL function <function>pg_reload_conf()</function>. The main server process also propagates this signal to all currently running server processes, so that existing sessions also adopt the new values @@ -161,26 +161,26 @@ shared_buffers = 128MB can only be set at server start; any changes to their entries in the configuration file will be ignored until the server is restarted. Invalid parameter settings in the configuration file are likewise - ignored (but logged) during <systemitem>SIGHUP</> processing. + ignored (but logged) during <systemitem>SIGHUP</systemitem> processing. </para> <para> - In addition to <filename>postgresql.conf</>, + In addition to <filename>postgresql.conf</filename>, a <productname>PostgreSQL</productname> data directory contains a file - <filename>postgresql.auto.conf</><indexterm><primary>postgresql.auto.conf</></>, - which has the same format as <filename>postgresql.conf</> but should + <filename>postgresql.auto.conf</filename><indexterm><primary>postgresql.auto.conf</primary></indexterm>, + which has the same format as <filename>postgresql.conf</filename> but should never be edited manually. This file holds settings provided through the <xref linkend="SQL-ALTERSYSTEM"> command. This file is automatically - read whenever <filename>postgresql.conf</> is, and its settings take - effect in the same way. Settings in <filename>postgresql.auto.conf</> - override those in <filename>postgresql.conf</>. 
+ read whenever <filename>postgresql.conf</filename> is, and its settings take + effect in the same way. Settings in <filename>postgresql.auto.conf</filename> + override those in <filename>postgresql.conf</filename>. </para> <para> The system view <link linkend="view-pg-file-settings"><structname>pg_file_settings</structname></link> can be helpful for pre-testing changes to the configuration file, or for - diagnosing problems if a <systemitem>SIGHUP</> signal did not have the + diagnosing problems if a <systemitem>SIGHUP</systemitem> signal did not have the desired effects. </para> </sect2> @@ -193,7 +193,7 @@ shared_buffers = 128MB commands to establish configuration defaults. The already-mentioned <xref linkend="SQL-ALTERSYSTEM"> command provides a SQL-accessible means of changing global defaults; it is - functionally equivalent to editing <filename>postgresql.conf</>. + functionally equivalent to editing <filename>postgresql.conf</filename>. In addition, there are two commands that allow setting of defaults on a per-database or per-role basis: </para> @@ -215,7 +215,7 @@ shared_buffers = 128MB </itemizedlist> <para> - Values set with <command>ALTER DATABASE</> and <command>ALTER ROLE</> + Values set with <command>ALTER DATABASE</command> and <command>ALTER ROLE</command> are applied only when starting a fresh database session. They override values obtained from the configuration files or server command line, and constitute defaults for the rest of the session. 
@@ -224,7 +224,7 @@ shared_buffers = 128MB </para> <para> - Once a client is connected to the database, <productname>PostgreSQL</> + Once a client is connected to the database, <productname>PostgreSQL</productname> provides two additional SQL commands (and equivalent functions) to interact with session-local configuration settings: </para> @@ -251,14 +251,14 @@ shared_buffers = 128MB <para> In addition, the system view <link - linkend="view-pg-settings"><structname>pg_settings</></> can be + linkend="view-pg-settings"><structname>pg_settings</structname></link> can be used to view and change session-local values: </para> <itemizedlist> <listitem> <para> - Querying this view is similar to using <command>SHOW ALL</> but + Querying this view is similar to using <command>SHOW ALL</command> but provides more detail. It is also more flexible, since it's possible to specify filter conditions or join against other relations. </para> @@ -267,8 +267,8 @@ shared_buffers = 128MB <listitem> <para> Using <xref linkend="SQL-UPDATE"> on this view, specifically - updating the <structname>setting</> column, is the equivalent - of issuing <command>SET</> commands. For example, the equivalent of + updating the <structname>setting</structname> column, is the equivalent + of issuing <command>SET</command> commands. For example, the equivalent of <programlisting> SET configuration_parameter TO DEFAULT; </programlisting> @@ -289,7 +289,7 @@ UPDATE pg_settings SET setting = reset_val WHERE name = 'configuration_parameter In addition to setting global defaults or attaching overrides at the database or role level, you can pass settings to <productname>PostgreSQL</productname> via shell facilities. - Both the server and <application>libpq</> client library + Both the server and <application>libpq</application> client library accept parameter values via the shell. 
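The surrounding hunks document the precedence among setting sources: `postgresql.auto.conf` is read whenever `postgresql.conf` is and overrides it, and `-c` command-line settings override both. A sketch of that "later source wins" merge, with made-up setting values for illustration:

```python
def effective_settings(postgresql_conf, auto_conf, command_line):
    """Merge setting sources in read order; later sources override earlier ones."""
    merged = {}
    for source in (postgresql_conf, auto_conf, command_line):
        merged.update(source)
    return merged

conf = {"shared_buffers": "128MB", "log_destination": "stderr"}
auto = {"log_destination": "syslog"}   # written by ALTER SYSTEM
cli  = {"log_connections": "yes"}      # e.g. postgres -c log_connections=yes
print(effective_settings(conf, auto, cli))
```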
</para> @@ -298,26 +298,26 @@ UPDATE pg_settings SET setting = reset_val WHERE name = 'configuration_parameter <para> During server startup, parameter settings can be passed to the <command>postgres</command> command via the - <option>-c</> command-line parameter. For example, + <option>-c</option> command-line parameter. For example, <programlisting> postgres -c log_connections=yes -c log_destination='syslog' </programlisting> Settings provided in this way override those set via - <filename>postgresql.conf</> or <command>ALTER SYSTEM</>, + <filename>postgresql.conf</filename> or <command>ALTER SYSTEM</command>, so they cannot be changed globally without restarting the server. </para> </listitem> <listitem> <para> - When starting a client session via <application>libpq</>, + When starting a client session via <application>libpq</application>, parameter settings can be specified using the <envar>PGOPTIONS</envar> environment variable. Settings established in this way constitute defaults for the life of the session, but do not affect other sessions. For historical reasons, the format of <envar>PGOPTIONS</envar> is similar to that used when launching the <command>postgres</command> - command; specifically, the <option>-c</> flag must be specified. + command; specifically, the <option>-c</option> flag must be specified. For example, <programlisting> env PGOPTIONS="-c geqo=off -c statement_timeout=5min" psql @@ -338,20 +338,20 @@ env PGOPTIONS="-c geqo=off -c statement_timeout=5min" psql <title>Managing Configuration File Contents</title> <para> - <productname>PostgreSQL</> provides several features for breaking - down complex <filename>postgresql.conf</> files into sub-files. + <productname>PostgreSQL</productname> provides several features for breaking + down complex <filename>postgresql.conf</filename> files into sub-files. These features are especially useful when managing multiple servers with related, but not identical, configurations. 
</para> <para> <indexterm> - <primary><literal>include</></primary> + <primary><literal>include</literal></primary> <secondary>in configuration file</secondary> </indexterm> In addition to individual parameter settings, - the <filename>postgresql.conf</> file can contain <firstterm>include - directives</>, which specify another file to read and process as if + the <filename>postgresql.conf</filename> file can contain <firstterm>include + directives</firstterm>, which specify another file to read and process as if it were inserted into the configuration file at this point. This feature allows a configuration file to be divided into physically separate parts. Include directives simply look like: @@ -365,23 +365,23 @@ include 'filename' <para> <indexterm> - <primary><literal>include_if_exists</></primary> + <primary><literal>include_if_exists</literal></primary> <secondary>in configuration file</secondary> </indexterm> - There is also an <literal>include_if_exists</> directive, which acts - the same as the <literal>include</> directive, except + There is also an <literal>include_if_exists</literal> directive, which acts + the same as the <literal>include</literal> directive, except when the referenced file does not exist or cannot be read. A regular - <literal>include</> will consider this an error condition, but - <literal>include_if_exists</> merely logs a message and continues + <literal>include</literal> will consider this an error condition, but + <literal>include_if_exists</literal> merely logs a message and continues processing the referencing configuration file. 
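The directive semantics described in the hunk above — a missing file is an error for `include` but only a logged message for `include_if_exists` — can be sketched with a hypothetical reader (this is not the server's actual parser, and the line-parsing here is deliberately simplified):

```python
import os

def read_config(path, directive="include"):
    if not os.path.exists(path):
        if directive == "include_if_exists":
            # Missing file: log a message and continue processing.
            print(f"skipping missing file: {path}")
            return {}
        # A plain include considers a missing file an error condition.
        raise FileNotFoundError(path)
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()   # strip comments
            if "=" in line:
                name, _, value = line.partition("=")
                settings[name.strip()] = value.strip().strip("'")
    return settings

print(read_config("no_such.conf", "include_if_exists"))   # continues with {}
```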
</para> <para> <indexterm> - <primary><literal>include_dir</></primary> + <primary><literal>include_dir</literal></primary> <secondary>in configuration file</secondary> </indexterm> - The <filename>postgresql.conf</> file can also contain + The <filename>postgresql.conf</filename> file can also contain <literal>include_dir</literal> directives, which specify an entire directory of configuration files to include. These look like <programlisting> @@ -401,36 +401,36 @@ include_dir 'directory' <para> Include files or directories can be used to logically separate portions of the database configuration, rather than having a single large - <filename>postgresql.conf</> file. Consider a company that has two + <filename>postgresql.conf</filename> file. Consider a company that has two database servers, each with a different amount of memory. There are likely elements of the configuration both will share, for things such as logging. But memory-related parameters on the server will vary between the two. And there might be server specific customizations, too. One way to manage this situation is to break the custom configuration changes for your site into three files. You could add - this to the end of your <filename>postgresql.conf</> file to include + this to the end of your <filename>postgresql.conf</filename> file to include them: <programlisting> include 'shared.conf' include 'memory.conf' include 'server.conf' </programlisting> - All systems would have the same <filename>shared.conf</>. Each + All systems would have the same <filename>shared.conf</filename>. Each server with a particular amount of memory could share the - same <filename>memory.conf</>; you might have one for all servers + same <filename>memory.conf</filename>; you might have one for all servers with 8GB of RAM, another for those having 16GB. 
And - finally <filename>server.conf</> could have truly server-specific + finally <filename>server.conf</filename> could have truly server-specific configuration information in it. </para> <para> Another possibility is to create a configuration file directory and - put this information into files there. For example, a <filename>conf.d</> - directory could be referenced at the end of <filename>postgresql.conf</>: + put this information into files there. For example, a <filename>conf.d</filename> + directory could be referenced at the end of <filename>postgresql.conf</filename>: <programlisting> include_dir 'conf.d' </programlisting> - Then you could name the files in the <filename>conf.d</> directory + Then you could name the files in the <filename>conf.d</filename> directory like this: <programlisting> 00shared.conf @@ -441,8 +441,8 @@ include_dir 'conf.d' files will be loaded. This is important because only the last setting encountered for a particular parameter while the server is reading configuration files will be used. In this example, - something set in <filename>conf.d/02server.conf</> would override a - value set in <filename>conf.d/01memory.conf</>. + something set in <filename>conf.d/02server.conf</filename> would override a + value set in <filename>conf.d/01memory.conf</filename>. 
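The `conf.d` example in the hunk above hinges on load order: `include_dir` files are read in name order, and only the last setting encountered for a parameter wins, so `02server.conf` overrides `01memory.conf`. A self-contained sketch of that rule (the directory layout and setting name are hypothetical):

```python
import os
import tempfile

def load_conf_dir(directory):
    merged = {}
    for name in sorted(os.listdir(directory)):       # load order = sorted names
        if not name.endswith(".conf"):
            continue
        with open(os.path.join(directory, name)) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()
                if "=" in line:
                    key, _, value = line.partition("=")
                    merged[key.strip()] = value.strip()   # later file overrides
    return merged

d = tempfile.mkdtemp()
with open(os.path.join(d, "01memory.conf"), "w") as f:
    f.write("work_mem = 64MB\n")
with open(os.path.join(d, "02server.conf"), "w") as f:
    f.write("work_mem = 16MB\n")
print(load_conf_dir(d))   # work_mem comes from 02server.conf
```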
</para> <para> @@ -483,7 +483,7 @@ include_dir 'conf.d' <varlistentry id="guc-data-directory" xreflabel="data_directory"> <term><varname>data_directory</varname> (<type>string</type>) <indexterm> - <primary><varname>data_directory</> configuration parameter</primary> + <primary><varname>data_directory</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -497,13 +497,13 @@ include_dir 'conf.d' <varlistentry id="guc-config-file" xreflabel="config_file"> <term><varname>config_file</varname> (<type>string</type>) <indexterm> - <primary><varname>config_file</> configuration parameter</primary> + <primary><varname>config_file</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the main server configuration file - (customarily called <filename>postgresql.conf</>). + (customarily called <filename>postgresql.conf</filename>). This parameter can only be set on the <command>postgres</command> command line. </para> </listitem> @@ -512,13 +512,13 @@ include_dir 'conf.d' <varlistentry id="guc-hba-file" xreflabel="hba_file"> <term><varname>hba_file</varname> (<type>string</type>) <indexterm> - <primary><varname>hba_file</> configuration parameter</primary> + <primary><varname>hba_file</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the configuration file for host-based authentication - (customarily called <filename>pg_hba.conf</>). + (customarily called <filename>pg_hba.conf</filename>). This parameter can only be set at server start. 
</para> </listitem> @@ -527,13 +527,13 @@ include_dir 'conf.d' <varlistentry id="guc-ident-file" xreflabel="ident_file"> <term><varname>ident_file</varname> (<type>string</type>) <indexterm> - <primary><varname>ident_file</> configuration parameter</primary> + <primary><varname>ident_file</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the configuration file for user name mapping - (customarily called <filename>pg_ident.conf</>). + (customarily called <filename>pg_ident.conf</filename>). This parameter can only be set at server start. See also <xref linkend="auth-username-maps">. </para> @@ -543,7 +543,7 @@ include_dir 'conf.d' <varlistentry id="guc-external-pid-file" xreflabel="external_pid_file"> <term><varname>external_pid_file</varname> (<type>string</type>) <indexterm> - <primary><varname>external_pid_file</> configuration parameter</primary> + <primary><varname>external_pid_file</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -569,10 +569,10 @@ include_dir 'conf.d' data directory, the <command>postgres</command> <option>-D</option> command-line option or <envar>PGDATA</envar> environment variable must point to the directory containing the configuration files, - and the <varname>data_directory</> parameter must be set in + and the <varname>data_directory</varname> parameter must be set in <filename>postgresql.conf</filename> (or on the command line) to show where the data directory is actually located. Notice that - <varname>data_directory</> overrides <option>-D</option> and + <varname>data_directory</varname> overrides <option>-D</option> and <envar>PGDATA</envar> for the location of the data directory, but not for the location of the configuration files. 
@@ -580,12 +580,12 @@ include_dir 'conf.d' <para> If you wish, you can specify the configuration file names and locations - individually using the parameters <varname>config_file</>, - <varname>hba_file</> and/or <varname>ident_file</>. - <varname>config_file</> can only be specified on the + individually using the parameters <varname>config_file</varname>, + <varname>hba_file</varname> and/or <varname>ident_file</varname>. + <varname>config_file</varname> can only be specified on the <command>postgres</command> command line, but the others can be set within the main configuration file. If all three parameters plus - <varname>data_directory</> are explicitly set, then it is not necessary + <varname>data_directory</varname> are explicitly set, then it is not necessary to specify <option>-D</option> or <envar>PGDATA</envar>. </para> @@ -607,7 +607,7 @@ include_dir 'conf.d' <varlistentry id="guc-listen-addresses" xreflabel="listen_addresses"> <term><varname>listen_addresses</varname> (<type>string</type>) <indexterm> - <primary><varname>listen_addresses</> configuration parameter</primary> + <primary><varname>listen_addresses</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -615,15 +615,15 @@ include_dir 'conf.d' Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications. The value takes the form of a comma-separated list of host names - and/or numeric IP addresses. The special entry <literal>*</> + and/or numeric IP addresses. The special entry <literal>*</literal> corresponds to all available IP interfaces. The entry - <literal>0.0.0.0</> allows listening for all IPv4 addresses and - <literal>::</> allows listening for all IPv6 addresses. + <literal>0.0.0.0</literal> allows listening for all IPv4 addresses and + <literal>::</literal> allows listening for all IPv6 addresses. 
If the list is empty, the server does not listen on any IP interface at all, in which case only Unix-domain sockets can be used to connect to it. - The default value is <systemitem class="systemname">localhost</>, - which allows only local TCP/IP <quote>loopback</> connections to be + The default value is <systemitem class="systemname">localhost</systemitem>, + which allows only local TCP/IP <quote>loopback</quote> connections to be made. While client authentication (<xref linkend="client-authentication">) allows fine-grained control over who can access the server, <varname>listen_addresses</varname> @@ -638,7 +638,7 @@ include_dir 'conf.d' <varlistentry id="guc-port" xreflabel="port"> <term><varname>port</varname> (<type>integer</type>) <indexterm> - <primary><varname>port</> configuration parameter</primary> + <primary><varname>port</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -653,7 +653,7 @@ include_dir 'conf.d' <varlistentry id="guc-max-connections" xreflabel="max_connections"> <term><varname>max_connections</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_connections</> configuration parameter</primary> + <primary><varname>max_connections</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -661,7 +661,7 @@ include_dir 'conf.d' Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections, but might be less if your kernel settings will not support it (as - determined during <application>initdb</>). This parameter can + determined during <application>initdb</application>). This parameter can only be set at server start. 
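The `superuser_reserved_connections` text diffed nearby states the rule: once active connections reach `max_connections` minus `superuser_reserved_connections`, new connections are accepted only for superusers, and never beyond `max_connections`. A sketch of that admission check, with the documented default of 100 and an assumed reservation of 3:

```python
def connection_allowed(active, is_superuser, max_connections=100,
                       superuser_reserved_connections=3):
    if active >= max_connections:
        return False              # hard limit applies to everyone
    if active >= max_connections - superuser_reserved_connections:
        return is_superuser       # reserved slots: superusers only
    return True

print(connection_allowed(97, is_superuser=False))   # in the reserved range
print(connection_allowed(97, is_superuser=True))
```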
</para> @@ -678,17 +678,17 @@ include_dir 'conf.d' <term><varname>superuser_reserved_connections</varname> (<type>integer</type>) <indexterm> - <primary><varname>superuser_reserved_connections</> configuration parameter</primary> + <primary><varname>superuser_reserved_connections</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Determines the number of connection <quote>slots</quote> that - are reserved for connections by <productname>PostgreSQL</> + are reserved for connections by <productname>PostgreSQL</productname> superusers. At most <xref linkend="guc-max-connections"> connections can ever be active simultaneously. Whenever the number of active concurrent connections is at least - <varname>max_connections</> minus + <varname>max_connections</varname> minus <varname>superuser_reserved_connections</varname>, new connections will be accepted only for superusers, and no new replication connections will be accepted. @@ -705,7 +705,7 @@ include_dir 'conf.d' <varlistentry id="guc-unix-socket-directories" xreflabel="unix_socket_directories"> <term><varname>unix_socket_directories</varname> (<type>string</type>) <indexterm> - <primary><varname>unix_socket_directories</> configuration parameter</primary> + <primary><varname>unix_socket_directories</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -726,10 +726,10 @@ include_dir 'conf.d' <para> In addition to the socket file itself, which is named - <literal>.s.PGSQL.<replaceable>nnnn</></literal> where - <replaceable>nnnn</> is the server's port number, an ordinary file - named <literal>.s.PGSQL.<replaceable>nnnn</>.lock</literal> will be - created in each of the <varname>unix_socket_directories</> directories. 
+ <literal>.s.PGSQL.<replaceable>nnnn</replaceable></literal> where + <replaceable>nnnn</replaceable> is the server's port number, an ordinary file + named <literal>.s.PGSQL.<replaceable>nnnn</replaceable>.lock</literal> will be + created in each of the <varname>unix_socket_directories</varname> directories. Neither file should ever be removed manually. </para> @@ -743,7 +743,7 @@ include_dir 'conf.d' <varlistentry id="guc-unix-socket-group" xreflabel="unix_socket_group"> <term><varname>unix_socket_group</varname> (<type>string</type>) <indexterm> - <primary><varname>unix_socket_group</> configuration parameter</primary> + <primary><varname>unix_socket_group</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -768,7 +768,7 @@ include_dir 'conf.d' <varlistentry id="guc-unix-socket-permissions" xreflabel="unix_socket_permissions"> <term><varname>unix_socket_permissions</varname> (<type>integer</type>) <indexterm> - <primary><varname>unix_socket_permissions</> configuration parameter</primary> + <primary><varname>unix_socket_permissions</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -804,7 +804,7 @@ include_dir 'conf.d' <para> This parameter is irrelevant on systems, notably Solaris as of Solaris 10, that ignore socket permissions entirely. There, one can achieve a - similar effect by pointing <varname>unix_socket_directories</> to a + similar effect by pointing <varname>unix_socket_directories</varname> to a directory having search permission limited to the desired audience. This parameter is also irrelevant on Windows, which does not have Unix-domain sockets. 
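Per the naming described in the hunk above, each directory in `unix_socket_directories` receives a socket file `.s.PGSQL.<port>` plus a matching `.lock` file. A trivial sketch of the resulting paths — the helper name and the `/tmp` default are assumptions for illustration:

```python
import os

def socket_files(port, unix_socket_directories=("/tmp",)):
    """Paths the server would create for its Unix-domain sockets."""
    files = []
    for d in unix_socket_directories:
        files.append(os.path.join(d, f".s.PGSQL.{port}"))        # the socket
        files.append(os.path.join(d, f".s.PGSQL.{port}.lock"))   # its lock file
    return files

print(socket_files(5432))
```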
@@ -815,7 +815,7 @@ include_dir 'conf.d' <varlistentry id="guc-bonjour" xreflabel="bonjour"> <term><varname>bonjour</varname> (<type>boolean</type>) <indexterm> - <primary><varname>bonjour</> configuration parameter</primary> + <primary><varname>bonjour</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -830,14 +830,14 @@ include_dir 'conf.d' <varlistentry id="guc-bonjour-name" xreflabel="bonjour_name"> <term><varname>bonjour_name</varname> (<type>string</type>) <indexterm> - <primary><varname>bonjour_name</> configuration parameter</primary> + <primary><varname>bonjour_name</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the <productname>Bonjour</productname> service name. The computer name is used if this parameter is set to the - empty string <literal>''</> (which is the default). This parameter is + empty string <literal>''</literal> (which is the default). This parameter is ignored if the server was not compiled with <productname>Bonjour</productname> support. This parameter can only be set at server start. @@ -848,7 +848,7 @@ include_dir 'conf.d' <varlistentry id="guc-tcp-keepalives-idle" xreflabel="tcp_keepalives_idle"> <term><varname>tcp_keepalives_idle</varname> (<type>integer</type>) <indexterm> - <primary><varname>tcp_keepalives_idle</> configuration parameter</primary> + <primary><varname>tcp_keepalives_idle</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -857,7 +857,7 @@ include_dir 'conf.d' should send a keepalive message to the client. A value of 0 uses the system default. This parameter is supported only on systems that support - <symbol>TCP_KEEPIDLE</> or an equivalent socket option, and on + <symbol>TCP_KEEPIDLE</symbol> or an equivalent socket option, and on Windows; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. 
@@ -874,7 +874,7 @@ include_dir 'conf.d' <varlistentry id="guc-tcp-keepalives-interval" xreflabel="tcp_keepalives_interval"> <term><varname>tcp_keepalives_interval</varname> (<type>integer</type>) <indexterm> - <primary><varname>tcp_keepalives_interval</> configuration parameter</primary> + <primary><varname>tcp_keepalives_interval</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -883,7 +883,7 @@ include_dir 'conf.d' that is not acknowledged by the client should be retransmitted. A value of 0 uses the system default. This parameter is supported only on systems that support - <symbol>TCP_KEEPINTVL</> or an equivalent socket option, and on + <symbol>TCP_KEEPINTVL</symbol> or an equivalent socket option, and on Windows; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. @@ -900,7 +900,7 @@ include_dir 'conf.d' <varlistentry id="guc-tcp-keepalives-count" xreflabel="tcp_keepalives_count"> <term><varname>tcp_keepalives_count</varname> (<type>integer</type>) <indexterm> - <primary><varname>tcp_keepalives_count</> configuration parameter</primary> + <primary><varname>tcp_keepalives_count</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -909,7 +909,7 @@ include_dir 'conf.d' the server's connection to the client is considered dead. A value of 0 uses the system default. This parameter is supported only on systems that support - <symbol>TCP_KEEPCNT</> or an equivalent socket option; + <symbol>TCP_KEEPCNT</symbol> or an equivalent socket option; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero. 
@@ -930,10 +930,10 @@ include_dir 'conf.d' <variablelist> <varlistentry id="guc-authentication-timeout" xreflabel="authentication_timeout"> <term><varname>authentication_timeout</varname> (<type>integer</type>) - <indexterm><primary>timeout</><secondary>client authentication</></indexterm> - <indexterm><primary>client authentication</><secondary>timeout during</></indexterm> + <indexterm><primary>timeout</primary><secondary>client authentication</secondary></indexterm> + <indexterm><primary>client authentication</primary><secondary>timeout during</secondary></indexterm> <indexterm> - <primary><varname>authentication_timeout</> configuration parameter</primary> + <primary><varname>authentication_timeout</varname> configuration parameter</primary> </indexterm> </term> @@ -943,8 +943,8 @@ include_dir 'conf.d' would-be client has not completed the authentication protocol in this much time, the server closes the connection. This prevents hung clients from occupying a connection indefinitely. - The default is one minute (<literal>1m</>). - This parameter can only be set in the <filename>postgresql.conf</> + The default is one minute (<literal>1m</literal>). + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -953,16 +953,16 @@ include_dir 'conf.d' <varlistentry id="guc-ssl" xreflabel="ssl"> <term><varname>ssl</varname> (<type>boolean</type>) <indexterm> - <primary><varname>ssl</> configuration parameter</primary> + <primary><varname>ssl</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - Enables <acronym>SSL</> connections. Please read + Enables <acronym>SSL</acronym> connections. Please read <xref linkend="ssl-tcp"> before using this. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. - The default is <literal>off</>. 
+ The default is <literal>off</literal>. </para> </listitem> </varlistentry> @@ -970,7 +970,7 @@ include_dir 'conf.d' <varlistentry id="guc-ssl-ca-file" xreflabel="ssl_ca_file"> <term><varname>ssl_ca_file</varname> (<type>string</type>) <indexterm> - <primary><varname>ssl_ca_file</> configuration parameter</primary> + <primary><varname>ssl_ca_file</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -978,7 +978,7 @@ include_dir 'conf.d' Specifies the name of the file containing the SSL server certificate authority (CA). Relative paths are relative to the data directory. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. The default is empty, meaning no CA file is loaded, and client certificate verification is not performed. @@ -989,14 +989,14 @@ include_dir 'conf.d' <varlistentry id="guc-ssl-cert-file" xreflabel="ssl_cert_file"> <term><varname>ssl_cert_file</varname> (<type>string</type>) <indexterm> - <primary><varname>ssl_cert_file</> configuration parameter</primary> + <primary><varname>ssl_cert_file</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the name of the file containing the SSL server certificate. Relative paths are relative to the data directory. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. The default is <filename>server.crt</filename>. 
</para> @@ -1006,7 +1006,7 @@ include_dir 'conf.d' <varlistentry id="guc-ssl-crl-file" xreflabel="ssl_crl_file"> <term><varname>ssl_crl_file</varname> (<type>string</type>) <indexterm> - <primary><varname>ssl_crl_file</> configuration parameter</primary> + <primary><varname>ssl_crl_file</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1014,7 +1014,7 @@ include_dir 'conf.d' Specifies the name of the file containing the SSL server certificate revocation list (CRL). Relative paths are relative to the data directory. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. The default is empty, meaning no CRL file is loaded. </para> @@ -1024,14 +1024,14 @@ include_dir 'conf.d' <varlistentry id="guc-ssl-key-file" xreflabel="ssl_key_file"> <term><varname>ssl_key_file</varname> (<type>string</type>) <indexterm> - <primary><varname>ssl_key_file</> configuration parameter</primary> + <primary><varname>ssl_key_file</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the name of the file containing the SSL server private key. Relative paths are relative to the data directory. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. The default is <filename>server.key</filename>. 
</para> @@ -1041,19 +1041,19 @@ include_dir 'conf.d' <varlistentry id="guc-ssl-ciphers" xreflabel="ssl_ciphers"> <term><varname>ssl_ciphers</varname> (<type>string</type>) <indexterm> - <primary><varname>ssl_ciphers</> configuration parameter</primary> + <primary><varname>ssl_ciphers</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - Specifies a list of <acronym>SSL</> cipher suites that are allowed to be + Specifies a list of <acronym>SSL</acronym> cipher suites that are allowed to be used on secure connections. See - the <citerefentry><refentrytitle>ciphers</></citerefentry> manual page - in the <application>OpenSSL</> package for the syntax of this setting + the <citerefentry><refentrytitle>ciphers</refentrytitle></citerefentry> manual page + in the <application>OpenSSL</application> package for the syntax of this setting and a list of supported values. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. - The default value is <literal>HIGH:MEDIUM:+3DES:!aNULL</>. The + The default value is <literal>HIGH:MEDIUM:+3DES:!aNULL</literal>. The default is usually a reasonable choice unless you have specific security requirements. 
</para> @@ -1065,7 +1065,7 @@ include_dir 'conf.d' <term><literal>HIGH</literal></term> <listitem> <para> - Cipher suites that use ciphers from <literal>HIGH</> group (e.g., + Cipher suites that use ciphers from <literal>HIGH</literal> group (e.g., AES, Camellia, 3DES) </para> </listitem> @@ -1075,7 +1075,7 @@ include_dir 'conf.d' <term><literal>MEDIUM</literal></term> <listitem> <para> - Cipher suites that use ciphers from <literal>MEDIUM</> group + Cipher suites that use ciphers from <literal>MEDIUM</literal> group (e.g., RC4, SEED) </para> </listitem> @@ -1085,11 +1085,11 @@ include_dir 'conf.d' <term><literal>+3DES</literal></term> <listitem> <para> - The OpenSSL default order for <literal>HIGH</> is problematic + The OpenSSL default order for <literal>HIGH</literal> is problematic because it orders 3DES higher than AES128. This is wrong because 3DES offers less security than AES128, and it is also much - slower. <literal>+3DES</> reorders it after all other - <literal>HIGH</> and <literal>MEDIUM</> ciphers. + slower. <literal>+3DES</literal> reorders it after all other + <literal>HIGH</literal> and <literal>MEDIUM</literal> ciphers. </para> </listitem> </varlistentry> @@ -1111,7 +1111,7 @@ include_dir 'conf.d' Available cipher suite details will vary across OpenSSL versions. Use the command <literal>openssl ciphers -v 'HIGH:MEDIUM:+3DES:!aNULL'</literal> to - see actual details for the currently installed <application>OpenSSL</> + see actual details for the currently installed <application>OpenSSL</application> version. Note that this list is filtered at run time based on the server key type. 
</para> @@ -1121,16 +1121,16 @@ include_dir 'conf.d' <varlistentry id="guc-ssl-prefer-server-ciphers" xreflabel="ssl_prefer_server_ciphers"> <term><varname>ssl_prefer_server_ciphers</varname> (<type>boolean</type>) <indexterm> - <primary><varname>ssl_prefer_server_ciphers</> configuration parameter</primary> + <primary><varname>ssl_prefer_server_ciphers</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies whether to use the server's SSL cipher preferences, rather than the client's. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. - The default is <literal>true</>. + The default is <literal>true</literal>. </para> <para> @@ -1146,28 +1146,28 @@ include_dir 'conf.d' <varlistentry id="guc-ssl-ecdh-curve" xreflabel="ssl_ecdh_curve"> <term><varname>ssl_ecdh_curve</varname> (<type>string</type>) <indexterm> - <primary><varname>ssl_ecdh_curve</> configuration parameter</primary> + <primary><varname>ssl_ecdh_curve</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - Specifies the name of the curve to use in <acronym>ECDH</> key + Specifies the name of the curve to use in <acronym>ECDH</acronym> key exchange. It needs to be supported by all clients that connect. It does not need to be the same curve used by the server's Elliptic Curve key. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. - The default is <literal>prime256v1</>. + The default is <literal>prime256v1</literal>. </para> <para> OpenSSL names for the most common curves are: - <literal>prime256v1</> (NIST P-256), - <literal>secp384r1</> (NIST P-384), - <literal>secp521r1</> (NIST P-521). 
+ <literal>prime256v1</literal> (NIST P-256), + <literal>secp384r1</literal> (NIST P-384), + <literal>secp521r1</literal> (NIST P-521). The full list of available curves can be shown with the command <command>openssl ecparam -list_curves</command>. Not all of them - are usable in <acronym>TLS</> though. + are usable in <acronym>TLS</acronym> though. </para> </listitem> </varlistentry> @@ -1175,17 +1175,17 @@ include_dir 'conf.d' <varlistentry id="guc-password-encryption" xreflabel="password_encryption"> <term><varname>password_encryption</varname> (<type>enum</type>) <indexterm> - <primary><varname>password_encryption</> configuration parameter</primary> + <primary><varname>password_encryption</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> When a password is specified in <xref linkend="sql-createrole"> or <xref linkend="sql-alterrole">, this parameter determines the algorithm - to use to encrypt the password. The default value is <literal>md5</>, - which stores the password as an MD5 hash (<literal>on</> is also - accepted, as alias for <literal>md5</>). Setting this parameter to - <literal>scram-sha-256</> will encrypt the password with SCRAM-SHA-256. + to use to encrypt the password. The default value is <literal>md5</literal>, + which stores the password as an MD5 hash (<literal>on</literal> is also + accepted, as alias for <literal>md5</literal>). Setting this parameter to + <literal>scram-sha-256</literal> will encrypt the password with SCRAM-SHA-256. 
</para> <para> Note that older clients might lack support for the SCRAM authentication @@ -1198,7 +1198,7 @@ include_dir 'conf.d' <varlistentry id="guc-ssl-dh-params-file" xreflabel="ssl_dh_params_file"> <term><varname>ssl_dh_params_file</varname> (<type>string</type>) <indexterm> - <primary><varname>ssl_dh_params_file</> configuration parameter</primary> + <primary><varname>ssl_dh_params_file</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1213,7 +1213,7 @@ include_dir 'conf.d' </para> <para> - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -1222,7 +1222,7 @@ include_dir 'conf.d' <varlistentry id="guc-krb-server-keyfile" xreflabel="krb_server_keyfile"> <term><varname>krb_server_keyfile</varname> (<type>string</type>) <indexterm> - <primary><varname>krb_server_keyfile</> configuration parameter</primary> + <primary><varname>krb_server_keyfile</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1230,7 +1230,7 @@ include_dir 'conf.d' Sets the location of the Kerberos server key file. See <xref linkend="gssapi-auth"> for details. This parameter can only be set in the - <filename>postgresql.conf</> file or on the server command line. + <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> </varlistentry> @@ -1245,8 +1245,8 @@ include_dir 'conf.d' <para> Sets whether GSSAPI user names should be treated case-insensitively. - The default is <literal>off</> (case sensitive). This parameter can only be - set in the <filename>postgresql.conf</> file or on the server command line. + The default is <literal>off</literal> (case sensitive). This parameter can only be + set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> </listitem> </varlistentry> @@ -1254,43 +1254,43 @@ include_dir 'conf.d' <varlistentry id="guc-db-user-namespace" xreflabel="db_user_namespace"> <term><varname>db_user_namespace</varname> (<type>boolean</type>) <indexterm> - <primary><varname>db_user_namespace</> configuration parameter</primary> + <primary><varname>db_user_namespace</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> This parameter enables per-database user names. It is off by default. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> <para> - If this is on, you should create users as <replaceable>username@dbname</>. - When <replaceable>username</> is passed by a connecting client, - <literal>@</> and the database name are appended to the user + If this is on, you should create users as <replaceable>username@dbname</replaceable>. + When <replaceable>username</replaceable> is passed by a connecting client, + <literal>@</literal> and the database name are appended to the user name and that database-specific user name is looked up by the server. Note that when you create users with names containing - <literal>@</> within the SQL environment, you will need to + <literal>@</literal> within the SQL environment, you will need to quote the user name. </para> <para> With this parameter enabled, you can still create ordinary global - users. Simply append <literal>@</> when specifying the user - name in the client, e.g. <literal>joe@</>. The <literal>@</> + users. Simply append <literal>@</literal> when specifying the user + name in the client, e.g. <literal>joe@</literal>. The <literal>@</literal> will be stripped off before the user name is looked up by the server. 
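The `db_user_namespace` name mangling described in this hunk (append `@dbname` for per-database users, strip a trailing `@` for global users) can be sketched as a pure function. This is an illustrative helper, not PostgreSQL source code:

```python
def server_user_name(client_name: str, dbname: str) -> str:
    """Map a client-supplied user name to the name the server looks up
    when db_user_namespace is on (sketch of the documented behavior)."""
    if client_name.endswith("@"):
        # Ordinary global user: the trailing '@' is stripped
        # before the lookup.
        return client_name[:-1]
    # Per-database user: '@' plus the database name is appended.
    return f"{client_name}@{dbname}"

print(server_user_name("alice", "sales"))  # alice@sales
print(server_user_name("joe@", "sales"))   # joe
```

The asymmetry is why authentication must be configured against the server-side names: the client never sends the mangled form.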
</para> <para> - <varname>db_user_namespace</> causes the client's and + <varname>db_user_namespace</varname> causes the client's and server's user name representation to differ. Authentication checks are always done with the server's user name so authentication methods must be configured for the server's user name, not the client's. Because - <literal>md5</> uses the user name as salt on both the - client and server, <literal>md5</> cannot be used with - <varname>db_user_namespace</>. + <literal>md5</literal> uses the user name as salt on both the + client and server, <literal>md5</literal> cannot be used with + <varname>db_user_namespace</varname>. </para> <note> @@ -1317,15 +1317,15 @@ include_dir 'conf.d' <varlistentry id="guc-shared-buffers" xreflabel="shared_buffers"> <term><varname>shared_buffers</varname> (<type>integer</type>) <indexterm> - <primary><varname>shared_buffers</> configuration parameter</primary> + <primary><varname>shared_buffers</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Sets the amount of memory the database server uses for shared memory buffers. The default is typically 128 megabytes - (<literal>128MB</>), but might be less if your kernel settings will - not support it (as determined during <application>initdb</>). + (<literal>128MB</literal>), but might be less if your kernel settings will + not support it (as determined during <application>initdb</application>). This setting must be at least 128 kilobytes. (Non-default values of <symbol>BLCKSZ</symbol> change the minimum.) 
However, settings significantly higher than the minimum are usually needed @@ -1358,7 +1358,7 @@ include_dir 'conf.d' <varlistentry id="guc-huge-pages" xreflabel="huge_pages"> <term><varname>huge_pages</varname> (<type>enum</type>) <indexterm> - <primary><varname>huge_pages</> configuration parameter</primary> + <primary><varname>huge_pages</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1392,7 +1392,7 @@ include_dir 'conf.d' <varlistentry id="guc-temp-buffers" xreflabel="temp_buffers"> <term><varname>temp_buffers</varname> (<type>integer</type>) <indexterm> - <primary><varname>temp_buffers</> configuration parameter</primary> + <primary><varname>temp_buffers</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1400,7 +1400,7 @@ include_dir 'conf.d' Sets the maximum number of temporary buffers used by each database session. These are session-local buffers used only for access to temporary tables. The default is eight megabytes - (<literal>8MB</>). The setting can be changed within individual + (<literal>8MB</literal>). The setting can be changed within individual sessions, but only before the first use of temporary tables within the session; subsequent attempts to change the value will have no effect on that session. @@ -1408,10 +1408,10 @@ include_dir 'conf.d' <para> A session will allocate temporary buffers as needed up to the limit - given by <varname>temp_buffers</>. The cost of setting a large + given by <varname>temp_buffers</varname>. The cost of setting a large value in sessions that do not actually need many temporary buffers is only a buffer descriptor, or about 64 bytes, per - increment in <varname>temp_buffers</>. However if a buffer is + increment in <varname>temp_buffers</varname>. However if a buffer is actually used an additional 8192 bytes will be consumed for it (or in general, <symbol>BLCKSZ</symbol> bytes). 
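The `temp_buffers` cost figures quoted in this hunk (about 64 bytes of descriptor per slot, a full `BLCKSZ` block only when a slot is actually used) imply a simple per-session bound, sketched here with the documented default constants:

```python
DESCRIPTOR_BYTES = 64   # approximate per-slot overhead, per the docs
BLCKSZ = 8192           # default block size

def temp_buffer_memory(temp_buffers: int, buffers_used: int) -> int:
    """Approximate per-session memory (bytes) for temporary buffers:
    every slot costs a descriptor; only used slots cost a full block."""
    assert 0 <= buffers_used <= temp_buffers
    return temp_buffers * DESCRIPTOR_BYTES + buffers_used * BLCKSZ

# A session with temp_buffers = 1024 (8MB) that never touches a
# temporary table pays only the descriptors:
print(temp_buffer_memory(1024, 0))      # 65536 bytes (~64 kB)
# Fully used, it pays the whole 8MB of blocks plus descriptors:
print(temp_buffer_memory(1024, 1024))   # 8454144 bytes
```

This is why a generous `temp_buffers` setting is cheap for sessions that never use temporary tables.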
</para> @@ -1421,13 +1421,13 @@ include_dir 'conf.d' <varlistentry id="guc-max-prepared-transactions" xreflabel="max_prepared_transactions"> <term><varname>max_prepared_transactions</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_prepared_transactions</> configuration parameter</primary> + <primary><varname>max_prepared_transactions</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Sets the maximum number of transactions that can be in the - <quote>prepared</> state simultaneously (see <xref + <quote>prepared</quote> state simultaneously (see <xref linkend="sql-prepare-transaction">). Setting this parameter to zero (which is the default) disables the prepared-transaction feature. @@ -1454,14 +1454,14 @@ include_dir 'conf.d' <varlistentry id="guc-work-mem" xreflabel="work_mem"> <term><varname>work_mem</varname> (<type>integer</type>) <indexterm> - <primary><varname>work_mem</> configuration parameter</primary> + <primary><varname>work_mem</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. The value - defaults to four megabytes (<literal>4MB</>). + defaults to four megabytes (<literal>4MB</literal>). Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary @@ -1469,10 +1469,10 @@ include_dir 'conf.d' concurrently. Therefore, the total memory used could be many times the value of <varname>work_mem</varname>; it is necessary to keep this fact in mind when choosing the value. Sort operations are - used for <literal>ORDER BY</>, <literal>DISTINCT</>, and + used for <literal>ORDER BY</literal>, <literal>DISTINCT</literal>, and merge joins. 
Hash tables are used in hash joins, hash-based aggregation, and - hash-based processing of <literal>IN</> subqueries. + hash-based processing of <literal>IN</literal> subqueries. </para> </listitem> </varlistentry> @@ -1480,15 +1480,15 @@ include_dir 'conf.d' <varlistentry id="guc-maintenance-work-mem" xreflabel="maintenance_work_mem"> <term><varname>maintenance_work_mem</varname> (<type>integer</type>) <indexterm> - <primary><varname>maintenance_work_mem</> configuration parameter</primary> + <primary><varname>maintenance_work_mem</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the maximum amount of memory to be used by maintenance operations, such as <command>VACUUM</command>, <command>CREATE - INDEX</>, and <command>ALTER TABLE ADD FOREIGN KEY</>. It defaults - to 64 megabytes (<literal>64MB</>). Since only one of these + INDEX</command>, and <command>ALTER TABLE ADD FOREIGN KEY</command>. It defaults + to 64 megabytes (<literal>64MB</literal>). 
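The multiplicative effect warned about in the `work_mem` hunk above — each concurrent sort or hash node in each session may use up to `work_mem` before spilling — reduces to simple arithmetic. A back-of-the-envelope sketch (hypothetical workload numbers, not a PostgreSQL formula):

```python
def worst_case_query_memory_mb(work_mem_mb: int, sort_or_hash_ops: int,
                               sessions: int) -> int:
    """Rough upper bound in MB: every concurrent sort/hash operation in
    every session may consume up to work_mem before writing temp files."""
    return work_mem_mb * sort_or_hash_ops * sessions

# 100 sessions each running a query with 3 sort/hash nodes at the
# default work_mem of 4MB could, in the worst case, need:
print(worst_case_query_memory_mb(4, 3, 100), "MB")  # 1200 MB
```

This is the "keep this fact in mind when choosing the value" arithmetic: the per-operation limit, not a per-server one, is what the parameter controls.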
Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger @@ -1508,7 +1508,7 @@ include_dir 'conf.d' <varlistentry id="guc-autovacuum-work-mem" xreflabel="autovacuum_work_mem"> <term><varname>autovacuum_work_mem</varname> (<type>integer</type>) <indexterm> - <primary><varname>autovacuum_work_mem</> configuration parameter</primary> + <primary><varname>autovacuum_work_mem</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1525,26 +1525,26 @@ include_dir 'conf.d' <varlistentry id="guc-max-stack-depth" xreflabel="max_stack_depth"> <term><varname>max_stack_depth</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_stack_depth</> configuration parameter</primary> + <primary><varname>max_stack_depth</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the maximum safe depth of the server's execution stack. The ideal setting for this parameter is the actual stack size limit - enforced by the kernel (as set by <literal>ulimit -s</> or local + enforced by the kernel (as set by <literal>ulimit -s</literal> or local equivalent), less a safety margin of a megabyte or so. The safety margin is needed because the stack depth is not checked in every routine in the server, but only in key potentially-recursive routines such as expression evaluation. The default setting is two - megabytes (<literal>2MB</>), which is conservatively small and + megabytes (<literal>2MB</literal>), which is conservatively small and unlikely to risk crashes. However, it might be too small to allow execution of complex functions. Only superusers can change this setting. 
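The `max_stack_depth` hunk above recommends the kernel stack limit (`ulimit -s`) minus roughly a megabyte of safety margin. On Unix-like systems that limit is queryable from Python's `resource` module; the fallback value and floor below are this sketch's assumptions, not what PostgreSQL itself computes:

```python
import resource

def suggested_max_stack_depth_kb(margin_kb: int = 1024) -> int:
    """Kernel soft stack limit minus a ~1MB safety margin, in kB,
    as the docs suggest for max_stack_depth.  Falls back to the
    conservative 2MB default when the limit is unlimited (sketch)."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_STACK)
    if soft == resource.RLIM_INFINITY:
        return 2048  # keep the conservative server default
    return max(soft // 1024 - margin_kb, 100)

print(suggested_max_stack_depth_kb(), "kB")
```

With the common 8MB soft limit this yields 7168 kB, comfortably above the 2MB default while still leaving the margin for routines that do not check stack depth.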
</para> <para> - Setting <varname>max_stack_depth</> higher than + Setting <varname>max_stack_depth</varname> higher than the actual kernel limit will mean that a runaway recursive function can crash an individual backend process. On platforms where <productname>PostgreSQL</productname> can determine the kernel limit, @@ -1558,25 +1558,25 @@ include_dir 'conf.d' <varlistentry id="guc-dynamic-shared-memory-type" xreflabel="dynamic_shared_memory_type"> <term><varname>dynamic_shared_memory_type</varname> (<type>enum</type>) <indexterm> - <primary><varname>dynamic_shared_memory_type</> configuration parameter</primary> + <primary><varname>dynamic_shared_memory_type</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the dynamic shared memory implementation that the server - should use. Possible values are <literal>posix</> (for POSIX shared - memory allocated using <literal>shm_open</>), <literal>sysv</literal> - (for System V shared memory allocated via <literal>shmget</>), - <literal>windows</> (for Windows shared memory), <literal>mmap</> + should use. Possible values are <literal>posix</literal> (for POSIX shared + memory allocated using <literal>shm_open</literal>), <literal>sysv</literal> + (for System V shared memory allocated via <literal>shmget</literal>), + <literal>windows</literal> (for Windows shared memory), <literal>mmap</literal> (to simulate shared memory using memory-mapped files stored in the - data directory), and <literal>none</> (to disable this feature). + data directory), and <literal>none</literal> (to disable this feature). Not all values are supported on all platforms; the first supported option is the default for that platform. 
The use of the - <literal>mmap</> option, which is not the default on any platform, + <literal>mmap</literal> option, which is not the default on any platform, is generally discouraged because the operating system may write modified pages back to disk repeatedly, increasing system I/O load; however, it may be useful for debugging, when the - <literal>pg_dynshmem</> directory is stored on a RAM disk, or when + <literal>pg_dynshmem</literal> directory is stored on a RAM disk, or when other shared memory facilities are not available. </para> </listitem> @@ -1592,7 +1592,7 @@ include_dir 'conf.d' <varlistentry id="guc-temp-file-limit" xreflabel="temp_file_limit"> <term><varname>temp_file_limit</varname> (<type>integer</type>) <indexterm> - <primary><varname>temp_file_limit</> configuration parameter</primary> + <primary><varname>temp_file_limit</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1601,13 +1601,13 @@ include_dir 'conf.d' for temporary files, such as sort and hash temporary files, or the storage file for a held cursor. A transaction attempting to exceed this limit will be canceled. - The value is specified in kilobytes, and <literal>-1</> (the + The value is specified in kilobytes, and <literal>-1</literal> (the default) means no limit. Only superusers can change this setting. </para> <para> This setting constrains the total space used at any instant by all - temporary files used by a given <productname>PostgreSQL</> process. + temporary files used by a given <productname>PostgreSQL</productname> process. It should be noted that disk space used for explicit temporary tables, as opposed to temporary files used behind-the-scenes in query execution, does <emphasis>not</emphasis> count against this limit. 
@@ -1625,7 +1625,7 @@ include_dir 'conf.d' <varlistentry id="guc-max-files-per-process" xreflabel="max_files_per_process"> <term><varname>max_files_per_process</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_files_per_process</> configuration parameter</primary> + <primary><varname>max_files_per_process</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1637,7 +1637,7 @@ include_dir 'conf.d' allow individual processes to open many more files than the system can actually support if many processes all try to open that many files. If you find yourself seeing <quote>Too many open - files</> failures, try reducing this setting. + files</quote> failures, try reducing this setting. This parameter can only be set at server start. </para> </listitem> @@ -1684,7 +1684,7 @@ include_dir 'conf.d' <varlistentry id="guc-vacuum-cost-delay" xreflabel="vacuum_cost_delay"> <term><varname>vacuum_cost_delay</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_cost_delay</> configuration parameter</primary> + <primary><varname>vacuum_cost_delay</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1702,7 +1702,7 @@ include_dir 'conf.d' <para> When using cost-based vacuuming, appropriate values for - <varname>vacuum_cost_delay</> are usually quite small, perhaps + <varname>vacuum_cost_delay</varname> are usually quite small, perhaps 10 or 20 milliseconds. Adjusting vacuum's resource consumption is best done by changing the other vacuum cost parameters. 
</para> @@ -1712,7 +1712,7 @@ include_dir 'conf.d' <varlistentry id="guc-vacuum-cost-page-hit" xreflabel="vacuum_cost_page_hit"> <term><varname>vacuum_cost_page_hit</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_cost_page_hit</> configuration parameter</primary> + <primary><varname>vacuum_cost_page_hit</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1728,7 +1728,7 @@ include_dir 'conf.d' <varlistentry id="guc-vacuum-cost-page-miss" xreflabel="vacuum_cost_page_miss"> <term><varname>vacuum_cost_page_miss</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_cost_page_miss</> configuration parameter</primary> + <primary><varname>vacuum_cost_page_miss</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1744,7 +1744,7 @@ include_dir 'conf.d' <varlistentry id="guc-vacuum-cost-page-dirty" xreflabel="vacuum_cost_page_dirty"> <term><varname>vacuum_cost_page_dirty</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_cost_page_dirty</> configuration parameter</primary> + <primary><varname>vacuum_cost_page_dirty</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1760,7 +1760,7 @@ include_dir 'conf.d' <varlistentry id="guc-vacuum-cost-limit" xreflabel="vacuum_cost_limit"> <term><varname>vacuum_cost_limit</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_cost_limit</> configuration parameter</primary> + <primary><varname>vacuum_cost_limit</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1792,8 +1792,8 @@ include_dir 'conf.d' <para> There is a separate server - process called the <firstterm>background writer</>, whose function - is to issue writes of <quote>dirty</> (new or modified) shared + process called the <firstterm>background writer</firstterm>, whose function + is to issue writes of <quote>dirty</quote> (new or modified) shared buffers. 
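The cost-based vacuum parameters in the hunks above combine into a single throttle: page costs accumulate until `vacuum_cost_limit` is reached, the worker then naps for `vacuum_cost_delay` and the balance resets. A toy accounting sketch using the documented default costs (illustrative only, not server code):

```python
# Documented default page costs and limit.
VACUUM_COST = {"hit": 1, "miss": 10, "dirty": 20}
VACUUM_COST_LIMIT = 200

def count_sleeps(page_events) -> int:
    """Count how often a vacuum worker would nap for vacuum_cost_delay,
    given a sequence of 'hit' | 'miss' | 'dirty' page accesses."""
    balance, sleeps = 0, 0
    for ev in page_events:
        balance += VACUUM_COST[ev]
        if balance >= VACUUM_COST_LIMIT:
            sleeps += 1   # real worker sleeps vacuum_cost_delay ms here
            balance = 0
    return sleeps

# 30 cache misses cost 300 points -> one nap at the default limit;
# 199 cache hits never reach the limit.
print(count_sleeps(["miss"] * 30))  # 1
print(count_sleeps(["hit"] * 199))  # 0
```

This is why the docs advise tuning the cost parameters rather than the delay: the costs determine how much work fits between naps.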
It writes shared buffers so server processes handling user queries seldom or never need to wait for a write to occur. However, the background writer does cause a net overall @@ -1808,7 +1808,7 @@ include_dir 'conf.d' <varlistentry id="guc-bgwriter-delay" xreflabel="bgwriter_delay"> <term><varname>bgwriter_delay</varname> (<type>integer</type>) <indexterm> - <primary><varname>bgwriter_delay</> configuration parameter</primary> + <primary><varname>bgwriter_delay</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1816,16 +1816,16 @@ include_dir 'conf.d' Specifies the delay between activity rounds for the background writer. In each round the writer issues writes for some number of dirty buffers (controllable by the - following parameters). It then sleeps for <varname>bgwriter_delay</> + following parameters). It then sleeps for <varname>bgwriter_delay</varname> milliseconds, and repeats. When there are no dirty buffers in the buffer pool, though, it goes into a longer sleep regardless of - <varname>bgwriter_delay</>. The default value is 200 - milliseconds (<literal>200ms</>). Note that on many systems, the + <varname>bgwriter_delay</varname>. The default value is 200 + milliseconds (<literal>200ms</literal>). Note that on many systems, the effective resolution of sleep delays is 10 milliseconds; setting - <varname>bgwriter_delay</> to a value that is not a multiple of 10 + <varname>bgwriter_delay</varname> to a value that is not a multiple of 10 might have the same results as setting it to the next higher multiple of 10. This parameter can only be set in the - <filename>postgresql.conf</> file or on the server command line. + <filename>postgresql.conf</filename> file or on the server command line. 
</para> </listitem> </varlistentry> @@ -1833,7 +1833,7 @@ include_dir 'conf.d' <varlistentry id="guc-bgwriter-lru-maxpages" xreflabel="bgwriter_lru_maxpages"> <term><varname>bgwriter_lru_maxpages</varname> (<type>integer</type>) <indexterm> - <primary><varname>bgwriter_lru_maxpages</> configuration parameter</primary> + <primary><varname>bgwriter_lru_maxpages</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1843,7 +1843,7 @@ include_dir 'conf.d' background writing. (Note that checkpoints, which are managed by a separate, dedicated auxiliary process, are unaffected.) The default value is 100 buffers. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -1852,7 +1852,7 @@ include_dir 'conf.d' <varlistentry id="guc-bgwriter-lru-multiplier" xreflabel="bgwriter_lru_multiplier"> <term><varname>bgwriter_lru_multiplier</varname> (<type>floating point</type>) <indexterm> - <primary><varname>bgwriter_lru_multiplier</> configuration parameter</primary> + <primary><varname>bgwriter_lru_multiplier</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1860,18 +1860,18 @@ include_dir 'conf.d' The number of dirty buffers written in each round is based on the number of new buffers that have been needed by server processes during recent rounds. The average recent need is multiplied by - <varname>bgwriter_lru_multiplier</> to arrive at an estimate of the + <varname>bgwriter_lru_multiplier</varname> to arrive at an estimate of the number of buffers that will be needed during the next round. Dirty buffers are written until there are that many clean, reusable buffers - available. (However, no more than <varname>bgwriter_lru_maxpages</> + available. (However, no more than <varname>bgwriter_lru_maxpages</varname> buffers will be written per round.) 
- Thus, a setting of 1.0 represents a <quote>just in time</> policy + Thus, a setting of 1.0 represents a <quote>just in time</quote> policy of writing exactly the number of buffers predicted to be needed. Larger values provide some cushion against spikes in demand, while smaller values intentionally leave writes to be done by server processes. The default is 2.0. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -1880,7 +1880,7 @@ include_dir 'conf.d' <varlistentry id="guc-bgwriter-flush-after" xreflabel="bgwriter_flush_after"> <term><varname>bgwriter_flush_after</varname> (<type>integer</type>) <indexterm> - <primary><varname>bgwriter_flush_after</> configuration parameter</primary> + <primary><varname>bgwriter_flush_after</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1897,10 +1897,10 @@ include_dir 'conf.d' cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between <literal>0</literal>, which disables forced writeback, and - <literal>2MB</literal>. The default is <literal>512kB</> on Linux, - <literal>0</> elsewhere. (If <symbol>BLCKSZ</symbol> is not 8kB, + <literal>2MB</literal>. The default is <literal>512kB</literal> on Linux, + <literal>0</literal> elsewhere. (If <symbol>BLCKSZ</symbol> is not 8kB, the default and maximum values scale proportionally to it.) - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
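The background writer's sizing rule described in this hunk — estimate upcoming demand as recent allocations times `bgwriter_lru_multiplier`, write enough dirty buffers to cover the shortfall, never more than `bgwriter_lru_maxpages` — can be sketched directly (simplified model, not the server's implementation):

```python
def bgwriter_write_target(recent_alloc: int, lru_multiplier: float,
                          reusable_clean: int, lru_maxpages: int) -> int:
    """Dirty buffers to write this round: projected need minus clean
    buffers already available, capped at bgwriter_lru_maxpages."""
    estimated_need = int(recent_alloc * lru_multiplier)
    shortfall = max(estimated_need - reusable_clean, 0)
    return min(shortfall, lru_maxpages)

# Defaults: multiplier 2.0, maxpages 100.  60 buffers allocated
# recently with 30 clean ones on hand -> write 90 this round.
print(bgwriter_write_target(60, 2.0, 30, 100))   # 90
# Heavy demand is capped by bgwriter_lru_maxpages:
print(bgwriter_write_target(200, 2.0, 0, 100))   # 100
```

A multiplier of 1.0 makes `estimated_need` exactly the recent allocation rate — the "just in time" policy the text describes — while the default 2.0 builds in cushion.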
</para> </listitem> @@ -1923,15 +1923,15 @@ include_dir 'conf.d' <varlistentry id="guc-effective-io-concurrency" xreflabel="effective_io_concurrency"> <term><varname>effective_io_concurrency</varname> (<type>integer</type>) <indexterm> - <primary><varname>effective_io_concurrency</> configuration parameter</primary> + <primary><varname>effective_io_concurrency</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Sets the number of concurrent disk I/O operations that - <productname>PostgreSQL</> expects can be executed + <productname>PostgreSQL</productname> expects can be executed simultaneously. Raising this value will increase the number of I/O - operations that any individual <productname>PostgreSQL</> session + operations that any individual <productname>PostgreSQL</productname> session attempts to initiate in parallel. The allowed range is 1 to 1000, or zero to disable issuance of asynchronous I/O requests. Currently, this setting only affects bitmap heap scans. @@ -1951,7 +1951,7 @@ include_dir 'conf.d' </para> <para> - Asynchronous I/O depends on an effective <function>posix_fadvise</> + Asynchronous I/O depends on an effective <function>posix_fadvise</function> function, which some operating systems lack. If the function is not present then setting this parameter to anything but zero will result in an error. 
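The `posix_fadvise` dependency mentioned in the `effective_io_concurrency` hunk above can be exercised from Python's `os` module on platforms that provide the call; this sketch issues the same kind of advisory prefetch hint (against a throwaway temp file, purely for illustration):

```python
import os
import tempfile

# Create a small file, then hint the kernel that we will read it soon --
# the same advisory mechanism prefetching for bitmap heap scans uses.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 65536)
    path = f.name

fd = os.open(path, os.O_RDONLY)
try:
    if hasattr(os, "posix_fadvise"):  # absent on some platforms
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_WILLNEED)
        print("prefetch hint issued")
    else:
        print("posix_fadvise not available on this platform")
finally:
    os.close(fd)
    os.unlink(path)
```

As the docs note for Solaris, the call may be present but a no-op; the hint is advisory either way, which is why a missing or ineffective implementation degrades to ordinary synchronous reads.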
On some operating systems (e.g., Solaris), the function @@ -1970,7 +1970,7 @@ include_dir 'conf.d' <varlistentry id="guc-max-worker-processes" xreflabel="max_worker_processes"> <term><varname>max_worker_processes</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_worker_processes</> configuration parameter</primary> + <primary><varname>max_worker_processes</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -1997,7 +1997,7 @@ include_dir 'conf.d' <varlistentry id="guc-max-parallel-workers-per-gather" xreflabel="max_parallel_workers_per_gather"> <term><varname>max_parallel_workers_per_gather</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_parallel_workers_per_gather</> configuration parameter</primary> + <primary><varname>max_parallel_workers_per_gather</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2021,7 +2021,7 @@ include_dir 'conf.d' account when choosing a value for this setting, as well as when configuring other settings that control resource utilization, such as <xref linkend="guc-work-mem">. Resource limits such as - <varname>work_mem</> are applied individually to each worker, + <varname>work_mem</varname> are applied individually to each worker, which means the total utilization may be much higher across all processes than it would normally be for any single process. 
For example, a parallel query using 4 workers may use up to 5 times @@ -2039,7 +2039,7 @@ include_dir 'conf.d' <varlistentry id="guc-max-parallel-workers" xreflabel="max_parallel_workers"> <term><varname>max_parallel_workers</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_parallel_workers</> configuration parameter</primary> + <primary><varname>max_parallel_workers</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2059,7 +2059,7 @@ include_dir 'conf.d' <varlistentry id="guc-backend-flush-after" xreflabel="backend_flush_after"> <term><varname>backend_flush_after</varname> (<type>integer</type>) <indexterm> - <primary><varname>backend_flush_after</> configuration parameter</primary> + <primary><varname>backend_flush_after</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2076,7 +2076,7 @@ include_dir 'conf.d' than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between <literal>0</literal>, which disables forced writeback, - and <literal>2MB</literal>. The default is <literal>0</>, i.e., no + and <literal>2MB</literal>. The default is <literal>0</literal>, i.e., no forced writeback. (If <symbol>BLCKSZ</symbol> is not 8kB, the maximum value scales proportionally to it.) </para> @@ -2086,13 +2086,13 @@ include_dir 'conf.d' <varlistentry id="guc-old-snapshot-threshold" xreflabel="old_snapshot_threshold"> <term><varname>old_snapshot_threshold</varname> (<type>integer</type>) <indexterm> - <primary><varname>old_snapshot_threshold</> configuration parameter</primary> + <primary><varname>old_snapshot_threshold</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Sets the minimum time that a snapshot can be used without risk of a - <literal>snapshot too old</> error occurring when using the snapshot. + <literal>snapshot too old</literal> error occurring when using the snapshot. 
This parameter can only be set at server start. </para> @@ -2107,12 +2107,12 @@ include_dir 'conf.d' </para> <para> - A value of <literal>-1</> disables this feature, and is the default. + A value of <literal>-1</literal> disables this feature, and is the default. Useful values for production work probably range from a small number of hours to a few days. The setting will be coerced to a granularity - of minutes, and small numbers (such as <literal>0</> or - <literal>1min</>) are only allowed because they may sometimes be - useful for testing. While a setting as high as <literal>60d</> is + of minutes, and small numbers (such as <literal>0</literal> or + <literal>1min</literal>) are only allowed because they may sometimes be + useful for testing. While a setting as high as <literal>60d</literal> is allowed, please note that in many workloads extreme bloat or transaction ID wraparound may occur in much shorter time frames. </para> @@ -2120,10 +2120,10 @@ include_dir 'conf.d' <para> When this feature is enabled, freed space at the end of a relation cannot be released to the operating system, since that could remove - information needed to detect the <literal>snapshot too old</> + information needed to detect the <literal>snapshot too old</literal> condition. All space allocated to a relation remains associated with that relation for reuse only within that relation unless explicitly - freed (for example, with <command>VACUUM FULL</>). + freed (for example, with <command>VACUUM FULL</command>). </para> <para> @@ -2135,7 +2135,7 @@ include_dir 'conf.d' Some tables cannot safely be vacuumed early, and so will not be affected by this setting, such as system catalogs. For such tables this setting will neither reduce bloat nor create a possibility - of a <literal>snapshot too old</> error on scanning. + of a <literal>snapshot too old</literal> error on scanning. 
</para> </listitem> </varlistentry> @@ -2158,45 +2158,45 @@ include_dir 'conf.d' <varlistentry id="guc-wal-level" xreflabel="wal_level"> <term><varname>wal_level</varname> (<type>enum</type>) <indexterm> - <primary><varname>wal_level</> configuration parameter</primary> + <primary><varname>wal_level</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - <varname>wal_level</> determines how much information is written to - the WAL. The default value is <literal>replica</>, which writes enough + <varname>wal_level</varname> determines how much information is written to + the WAL. The default value is <literal>replica</literal>, which writes enough data to support WAL archiving and replication, including running - read-only queries on a standby server. <literal>minimal</> removes all + read-only queries on a standby server. <literal>minimal</literal> removes all logging except the information required to recover from a crash or immediate shutdown. Finally, - <literal>logical</> adds information necessary to support logical + <literal>logical</literal> adds information necessary to support logical decoding. Each level includes the information logged at all lower levels. This parameter can only be set at server start. </para> <para> - In <literal>minimal</> level, WAL-logging of some bulk + In <literal>minimal</literal> level, WAL-logging of some bulk operations can be safely skipped, which can make those operations much faster (see <xref linkend="populate-pitr">). 
Operations in which this optimization can be applied include: <simplelist> - <member><command>CREATE TABLE AS</></member> - <member><command>CREATE INDEX</></member> - <member><command>CLUSTER</></member> - <member><command>COPY</> into tables that were created or truncated in the same + <member><command>CREATE TABLE AS</command></member> + <member><command>CREATE INDEX</command></member> + <member><command>CLUSTER</command></member> + <member><command>COPY</command> into tables that were created or truncated in the same transaction</member> </simplelist> But minimal WAL does not contain enough information to reconstruct the - data from a base backup and the WAL logs, so <literal>replica</> or + data from a base backup and the WAL logs, so <literal>replica</literal> or higher must be used to enable WAL archiving (<xref linkend="guc-archive-mode">) and streaming replication. </para> <para> - In <literal>logical</> level, the same information is logged as - with <literal>replica</>, plus information needed to allow + In <literal>logical</literal> level, the same information is logged as + with <literal>replica</literal>, plus information needed to allow extracting logical change sets from the WAL. Using a level of - <literal>logical</> will increase the WAL volume, particularly if many + <literal>logical</literal> will increase the WAL volume, particularly if many tables are configured for <literal>REPLICA IDENTITY FULL</literal> and - many <command>UPDATE</> and <command>DELETE</> statements are + many <command>UPDATE</command> and <command>DELETE</command> statements are executed. 
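The hunk above documents the <varname>wal_level</varname> hierarchy. As an illustrative <filename>postgresql.conf</filename> fragment (a config sketch, not tuning advice; the setting can only be changed at server start):

```
# postgresql.conf -- illustrative fragment
wal_level = replica     # minimal | replica | logical; each level logs a
                        # superset of the information of the levels below it
```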
</para> <para> @@ -2210,14 +2210,14 @@ include_dir 'conf.d' <varlistentry id="guc-fsync" xreflabel="fsync"> <term><varname>fsync</varname> (<type>boolean</type>) <indexterm> - <primary><varname>fsync</> configuration parameter</primary> + <primary><varname>fsync</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - If this parameter is on, the <productname>PostgreSQL</> server + If this parameter is on, the <productname>PostgreSQL</productname> server will try to make sure that updates are physically written to - disk, by issuing <function>fsync()</> system calls or various + disk, by issuing <function>fsync()</function> system calls or various equivalent methods (see <xref linkend="guc-wal-sync-method">). This ensures that the database cluster can recover to a consistent state after an operating system or hardware crash. @@ -2249,7 +2249,7 @@ include_dir 'conf.d' off to on, it is necessary to force all modified buffers in the kernel to durable storage. This can be done while the cluster is shutdown or while <varname>fsync</varname> is on by running <command>initdb - --sync-only</command>, running <command>sync</>, unmounting the + --sync-only</command>, running <command>sync</command>, unmounting the file system, or rebooting the server. </para> @@ -2261,7 +2261,7 @@ include_dir 'conf.d' </para> <para> - <varname>fsync</varname> can only be set in the <filename>postgresql.conf</> + <varname>fsync</varname> can only be set in the <filename>postgresql.conf</filename> file or on the server command line. If you turn this parameter off, also consider turning off <xref linkend="guc-full-page-writes">. 
@@ -2272,26 +2272,26 @@ include_dir 'conf.d' <varlistentry id="guc-synchronous-commit" xreflabel="synchronous_commit"> <term><varname>synchronous_commit</varname> (<type>enum</type>) <indexterm> - <primary><varname>synchronous_commit</> configuration parameter</primary> + <primary><varname>synchronous_commit</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies whether transaction commit will wait for WAL records - to be written to disk before the command returns a <quote>success</> - indication to the client. Valid values are <literal>on</>, - <literal>remote_apply</>, <literal>remote_write</>, <literal>local</>, - and <literal>off</>. The default, and safe, setting - is <literal>on</>. When <literal>off</>, there can be a delay between + to be written to disk before the command returns a <quote>success</quote> + indication to the client. Valid values are <literal>on</literal>, + <literal>remote_apply</literal>, <literal>remote_write</literal>, <literal>local</literal>, + and <literal>off</literal>. The default, and safe, setting + is <literal>on</literal>. When <literal>off</literal>, there can be a delay between when success is reported to the client and when the transaction is really guaranteed to be safe against a server crash. (The maximum delay is three times <xref linkend="guc-wal-writer-delay">.) Unlike - <xref linkend="guc-fsync">, setting this parameter to <literal>off</> + <xref linkend="guc-fsync">, setting this parameter to <literal>off</literal> does not create any risk of database inconsistency: an operating system or database crash might result in some recent allegedly-committed transactions being lost, but the database state will be just the same as if those transactions had - been aborted cleanly. So, turning <varname>synchronous_commit</> off + been aborted cleanly. 
So, turning <varname>synchronous_commit</varname> off can be a useful alternative when performance is more important than exact certainty about the durability of a transaction. For more discussion see <xref linkend="wal-async-commit">. @@ -2300,32 +2300,32 @@ include_dir 'conf.d' If <xref linkend="guc-synchronous-standby-names"> is non-empty, this parameter also controls whether or not transaction commits will wait for their WAL records to be replicated to the standby server(s). - When set to <literal>on</>, commits will wait until replies + When set to <literal>on</literal>, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and flushed it to disk. This ensures the transaction will not be lost unless both the primary and all synchronous standbys suffer corruption of their database storage. - When set to <literal>remote_apply</>, commits will wait until replies + When set to <literal>remote_apply</literal>, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and applied it, so that it has become visible to queries on the standby(s). - When set to <literal>remote_write</>, commits will wait until replies + When set to <literal>remote_write</literal>, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and written it out to their operating system. This setting is sufficient to ensure data preservation even if a standby instance of - <productname>PostgreSQL</> were to crash, but not if the standby + <productname>PostgreSQL</productname> were to crash, but not if the standby suffers an operating-system-level crash, since the data has not necessarily reached stable storage on the standby. 
- Finally, the setting <literal>local</> causes commits to wait for + Finally, the setting <literal>local</literal> causes commits to wait for local flush to disk, but not for replication. This is not usually desirable when synchronous replication is in use, but is provided for completeness. </para> <para> - If <varname>synchronous_standby_names</> is empty, the settings - <literal>on</>, <literal>remote_apply</>, <literal>remote_write</> - and <literal>local</> all provide the same synchronization level: + If <varname>synchronous_standby_names</varname> is empty, the settings + <literal>on</literal>, <literal>remote_apply</literal>, <literal>remote_write</literal> + and <literal>local</literal> all provide the same synchronization level: transaction commits only wait for local flush to disk. </para> <para> @@ -2335,7 +2335,7 @@ include_dir 'conf.d' transactions commit synchronously and others asynchronously. For example, to make a single multistatement transaction commit asynchronously when the default is the opposite, issue <command>SET - LOCAL synchronous_commit TO OFF</> within the transaction. + LOCAL synchronous_commit TO OFF</command> within the transaction. 
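The per-transaction override mentioned above can be sketched as follows (illustrative SQL only; it assumes the cluster-wide default is <literal>on</literal>):

```
BEGIN;
-- This transaction reports success without waiting for WAL flush;
-- a crash could lose it, but cannot leave the database inconsistent.
SET LOCAL synchronous_commit TO OFF;
-- ... statements whose durability is not critical ...
COMMIT;
```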
</para> </listitem> </varlistentry> @@ -2343,7 +2343,7 @@ include_dir 'conf.d' <varlistentry id="guc-wal-sync-method" xreflabel="wal_sync_method"> <term><varname>wal_sync_method</varname> (<type>enum</type>) <indexterm> - <primary><varname>wal_sync_method</> configuration parameter</primary> + <primary><varname>wal_sync_method</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2356,41 +2356,41 @@ include_dir 'conf.d' <itemizedlist> <listitem> <para> - <literal>open_datasync</> (write WAL files with <function>open()</> option <symbol>O_DSYNC</>) + <literal>open_datasync</literal> (write WAL files with <function>open()</function> option <symbol>O_DSYNC</symbol>) </para> </listitem> <listitem> <para> - <literal>fdatasync</> (call <function>fdatasync()</> at each commit) + <literal>fdatasync</literal> (call <function>fdatasync()</function> at each commit) </para> </listitem> <listitem> <para> - <literal>fsync</> (call <function>fsync()</> at each commit) + <literal>fsync</literal> (call <function>fsync()</function> at each commit) </para> </listitem> <listitem> <para> - <literal>fsync_writethrough</> (call <function>fsync()</> at each commit, forcing write-through of any disk write cache) + <literal>fsync_writethrough</literal> (call <function>fsync()</function> at each commit, forcing write-through of any disk write cache) </para> </listitem> <listitem> <para> - <literal>open_sync</> (write WAL files with <function>open()</> option <symbol>O_SYNC</>) + <literal>open_sync</literal> (write WAL files with <function>open()</function> option <symbol>O_SYNC</symbol>) </para> </listitem> </itemizedlist> <para> - The <literal>open_</>* options also use <literal>O_DIRECT</> if available. + The <literal>open_</literal>* options also use <literal>O_DIRECT</literal> if available. Not all of these choices are available on all platforms. 
The default is the first method in the above list that is supported - by the platform, except that <literal>fdatasync</> is the default on + by the platform, except that <literal>fdatasync</literal> is the default on Linux. The default is not necessarily ideal; it might be necessary to change this setting or other aspects of your system configuration in order to create a crash-safe configuration or achieve optimal performance. These aspects are discussed in <xref linkend="wal-reliability">. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -2399,12 +2399,12 @@ include_dir 'conf.d' <varlistentry id="guc-full-page-writes" xreflabel="full_page_writes"> <term><varname>full_page_writes</varname> (<type>boolean</type>) <indexterm> - <primary><varname>full_page_writes</> configuration parameter</primary> + <primary><varname>full_page_writes</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When this parameter is on, the <productname>PostgreSQL</> server + When this parameter is on, the <productname>PostgreSQL</productname> server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint. This is needed because @@ -2436,9 +2436,9 @@ include_dir 'conf.d' </para> <para> - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. - The default is <literal>on</>. + The default is <literal>on</literal>. 
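Collecting the two parameters above into a hypothetical <filename>postgresql.conf</filename> fragment (illustrative values; the available sync methods are platform-dependent, as the text notes):

```
# postgresql.conf -- illustrative fragment
wal_sync_method = fdatasync   # the default on Linux
full_page_writes = on         # the default; guards against partially
                              # written (torn) pages after a crash
```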
</para> </listitem> </varlistentry> @@ -2446,12 +2446,12 @@ include_dir 'conf.d' <varlistentry id="guc-wal-log-hints" xreflabel="wal_log_hints"> <term><varname>wal_log_hints</varname> (<type>boolean</type>) <indexterm> - <primary><varname>wal_log_hints</> configuration parameter</primary> + <primary><varname>wal_log_hints</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When this parameter is <literal>on</>, the <productname>PostgreSQL</> + When this parameter is <literal>on</literal>, the <productname>PostgreSQL</productname> server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint, even for non-critical modifications of so-called hint bits. @@ -2465,7 +2465,7 @@ include_dir 'conf.d' </para> <para> - This parameter can only be set at server start. The default value is <literal>off</>. + This parameter can only be set at server start. The default value is <literal>off</literal>. </para> </listitem> </varlistentry> @@ -2473,16 +2473,16 @@ include_dir 'conf.d' <varlistentry id="guc-wal-compression" xreflabel="wal_compression"> <term><varname>wal_compression</varname> (<type>boolean</type>) <indexterm> - <primary><varname>wal_compression</> configuration parameter</primary> + <primary><varname>wal_compression</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When this parameter is <literal>on</>, the <productname>PostgreSQL</> + When this parameter is <literal>on</literal>, the <productname>PostgreSQL</productname> server compresses a full page image written to WAL when <xref linkend="guc-full-page-writes"> is on or during a base backup. A compressed page image will be decompressed during WAL replay. - The default value is <literal>off</>. + The default value is <literal>off</literal>. Only superusers can change this setting. 
</para> @@ -2498,7 +2498,7 @@ include_dir 'conf.d' <varlistentry id="guc-wal-buffers" xreflabel="wal_buffers"> <term><varname>wal_buffers</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_buffers</> configuration parameter</primary> + <primary><varname>wal_buffers</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2530,24 +2530,24 @@ include_dir 'conf.d' <varlistentry id="guc-wal-writer-delay" xreflabel="wal_writer_delay"> <term><varname>wal_writer_delay</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_writer_delay</> configuration parameter</primary> + <primary><varname>wal_writer_delay</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies how often the WAL writer flushes WAL. After flushing WAL it - sleeps for <varname>wal_writer_delay</> milliseconds, unless woken up + sleeps for <varname>wal_writer_delay</varname> milliseconds, unless woken up by an asynchronously committing transaction. If the last flush - happened less than <varname>wal_writer_delay</> milliseconds ago and - less than <varname>wal_writer_flush_after</> bytes of WAL have been + happened less than <varname>wal_writer_delay</varname> milliseconds ago and + less than <varname>wal_writer_flush_after</varname> bytes of WAL have been produced since, then WAL is only written to the operating system, not flushed to disk. - The default value is 200 milliseconds (<literal>200ms</>). Note that + The default value is 200 milliseconds (<literal>200ms</literal>). Note that on many systems, the effective resolution of sleep delays is 10 - milliseconds; setting <varname>wal_writer_delay</> to a value that is + milliseconds; setting <varname>wal_writer_delay</varname> to a value that is not a multiple of 10 might have the same results as setting it to the next higher multiple of 10. This parameter can only be set in the - <filename>postgresql.conf</> file or on the server command line. 
+ <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> </varlistentry> @@ -2555,19 +2555,19 @@ include_dir 'conf.d' <varlistentry id="guc-wal-writer-flush-after" xreflabel="wal_writer_flush_after"> <term><varname>wal_writer_flush_after</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_writer_flush_after</> configuration parameter</primary> + <primary><varname>wal_writer_flush_after</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies how often the WAL writer flushes WAL. If the last flush - happened less than <varname>wal_writer_delay</> milliseconds ago and - less than <varname>wal_writer_flush_after</> bytes of WAL have been + happened less than <varname>wal_writer_delay</varname> milliseconds ago and + less than <varname>wal_writer_flush_after</varname> bytes of WAL have been produced since, then WAL is only written to the operating system, not - flushed to disk. If <varname>wal_writer_flush_after</> is set - to <literal>0</> then WAL data is flushed immediately. The default is + flushed to disk. If <varname>wal_writer_flush_after</varname> is set + to <literal>0</literal> then WAL data is flushed immediately. The default is <literal>1MB</literal>. This parameter can only be set in the - <filename>postgresql.conf</> file or on the server command line. + <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> </varlistentry> @@ -2575,7 +2575,7 @@ include_dir 'conf.d' <varlistentry id="guc-commit-delay" xreflabel="commit_delay"> <term><varname>commit_delay</varname> (<type>integer</type>) <indexterm> - <primary><varname>commit_delay</> configuration parameter</primary> + <primary><varname>commit_delay</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2592,15 +2592,15 @@ include_dir 'conf.d' <varname>commit_siblings</varname> other transactions are active when a flush is about to be initiated. 
Also, no delays are performed if <varname>fsync</varname> is disabled. - The default <varname>commit_delay</> is zero (no delay). + The default <varname>commit_delay</varname> is zero (no delay). Only superusers can change this setting. </para> <para> - In <productname>PostgreSQL</> releases prior to 9.3, + In <productname>PostgreSQL</productname> releases prior to 9.3, <varname>commit_delay</varname> behaved differently and was much less effective: it affected only commits, rather than all WAL flushes, and waited for the entire configured delay even if the WAL flush - was completed sooner. Beginning in <productname>PostgreSQL</> 9.3, + was completed sooner. Beginning in <productname>PostgreSQL</productname> 9.3, the first process that becomes ready to flush waits for the configured interval, while subsequent processes wait only until the leader completes the flush operation. @@ -2611,13 +2611,13 @@ include_dir 'conf.d' <varlistentry id="guc-commit-siblings" xreflabel="commit_siblings"> <term><varname>commit_siblings</varname> (<type>integer</type>) <indexterm> - <primary><varname>commit_siblings</> configuration parameter</primary> + <primary><varname>commit_siblings</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Minimum number of concurrent open transactions to require - before performing the <varname>commit_delay</> delay. A larger + before performing the <varname>commit_delay</varname> delay. A larger value makes it more probable that at least one other transaction will become ready to commit during the delay interval. The default is five transactions. 
@@ -2634,17 +2634,17 @@ include_dir 'conf.d' <varlistentry id="guc-checkpoint-timeout" xreflabel="checkpoint_timeout"> <term><varname>checkpoint_timeout</varname> (<type>integer</type>) <indexterm> - <primary><varname>checkpoint_timeout</> configuration parameter</primary> + <primary><varname>checkpoint_timeout</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Maximum time between automatic WAL checkpoints, in seconds. The valid range is between 30 seconds and one day. - The default is five minutes (<literal>5min</>). + The default is five minutes (<literal>5min</literal>). Increasing this parameter can increase the amount of time needed for crash recovery. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -2653,14 +2653,14 @@ include_dir 'conf.d' <varlistentry id="guc-checkpoint-completion-target" xreflabel="checkpoint_completion_target"> <term><varname>checkpoint_completion_target</varname> (<type>floating point</type>) <indexterm> - <primary><varname>checkpoint_completion_target</> configuration parameter</primary> + <primary><varname>checkpoint_completion_target</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the target of checkpoint completion, as a fraction of total time between checkpoints. The default is 0.5. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> </listitem> @@ -2669,7 +2669,7 @@ include_dir 'conf.d' <varlistentry id="guc-checkpoint-flush-after" xreflabel="checkpoint_flush_after"> <term><varname>checkpoint_flush_after</varname> (<type>integer</type>) <indexterm> - <primary><varname>checkpoint_flush_after</> configuration parameter</primary> + <primary><varname>checkpoint_flush_after</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2686,10 +2686,10 @@ include_dir 'conf.d' than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. The valid range is between <literal>0</literal>, which disables forced writeback, - and <literal>2MB</literal>. The default is <literal>256kB</> on - Linux, <literal>0</> elsewhere. (If <symbol>BLCKSZ</symbol> is not + and <literal>2MB</literal>. The default is <literal>256kB</literal> on + Linux, <literal>0</literal> elsewhere. (If <symbol>BLCKSZ</symbol> is not 8kB, the default and maximum values scale proportionally to it.) - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -2698,7 +2698,7 @@ include_dir 'conf.d' <varlistentry id="guc-checkpoint-warning" xreflabel="checkpoint_warning"> <term><varname>checkpoint_warning</varname> (<type>integer</type>) <indexterm> - <primary><varname>checkpoint_warning</> configuration parameter</primary> + <primary><varname>checkpoint_warning</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2706,11 +2706,11 @@ include_dir 'conf.d' Write a message to the server log if checkpoints caused by the filling of checkpoint segment files happen closer together than this many seconds (which suggests that - <varname>max_wal_size</> ought to be raised). The default is - 30 seconds (<literal>30s</>). Zero disables the warning. + <varname>max_wal_size</varname> ought to be raised). 
The default is + 30 seconds (<literal>30s</literal>). Zero disables the warning. No warnings will be generated if <varname>checkpoint_timeout</varname> is less than <varname>checkpoint_warning</varname>. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -2719,19 +2719,19 @@ include_dir 'conf.d' <varlistentry id="guc-max-wal-size" xreflabel="max_wal_size"> <term><varname>max_wal_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_wal_size</> configuration parameter</primary> + <primary><varname>max_wal_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Maximum size to let the WAL grow to between automatic WAL checkpoints. This is a soft limit; WAL size can exceed - <varname>max_wal_size</> under special circumstances, like - under heavy load, a failing <varname>archive_command</>, or a high - <varname>wal_keep_segments</> setting. The default is 1 GB. + <varname>max_wal_size</varname> under special circumstances, like + under heavy load, a failing <varname>archive_command</varname>, or a high + <varname>wal_keep_segments</varname> setting. The default is 1 GB. Increasing this parameter can increase the amount of time needed for crash recovery. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
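The checkpoint-related defaults discussed in the hunks above, gathered into one illustrative <filename>postgresql.conf</filename> fragment:

```
# postgresql.conf -- defaults as stated in the text (illustrative)
checkpoint_timeout = 5min            # max time between automatic checkpoints
checkpoint_completion_target = 0.5   # spread checkpoint I/O over half the interval
max_wal_size = 1GB                   # soft limit on WAL growth between checkpoints
checkpoint_warning = 30s             # warn if forced checkpoints come closer than this
```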
</para> </listitem> @@ -2740,7 +2740,7 @@ include_dir 'conf.d' <varlistentry id="guc-min-wal-size" xreflabel="min_wal_size"> <term><varname>min_wal_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>min_wal_size</> configuration parameter</primary> + <primary><varname>min_wal_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2750,7 +2750,7 @@ include_dir 'conf.d' This can be used to ensure that enough WAL space is reserved to handle spikes in WAL usage, for example when running large batch jobs. The default is 80 MB. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -2765,29 +2765,29 @@ include_dir 'conf.d' <varlistentry id="guc-archive-mode" xreflabel="archive_mode"> <term><varname>archive_mode</varname> (<type>enum</type>) <indexterm> - <primary><varname>archive_mode</> configuration parameter</primary> + <primary><varname>archive_mode</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When <varname>archive_mode</> is enabled, completed WAL segments + When <varname>archive_mode</varname> is enabled, completed WAL segments are sent to archive storage by setting - <xref linkend="guc-archive-command">. In addition to <literal>off</>, - to disable, there are two modes: <literal>on</>, and - <literal>always</>. During normal operation, there is no - difference between the two modes, but when set to <literal>always</> + <xref linkend="guc-archive-command">. In addition to <literal>off</literal>, + to disable, there are two modes: <literal>on</literal>, and + <literal>always</literal>. During normal operation, there is no + difference between the two modes, but when set to <literal>always</literal> the WAL archiver is enabled also during archive recovery or standby - mode. 
In <literal>always</> mode, all files restored from the archive + mode. In <literal>always</literal> mode, all files restored from the archive or streamed with streaming replication will be archived (again). See <xref linkend="continuous-archiving-in-standby"> for details. </para> <para> - <varname>archive_mode</> and <varname>archive_command</> are - separate variables so that <varname>archive_command</> can be + <varname>archive_mode</varname> and <varname>archive_command</varname> are + separate variables so that <varname>archive_command</varname> can be changed without leaving archiving mode. This parameter can only be set at server start. - <varname>archive_mode</> cannot be enabled when - <varname>wal_level</> is set to <literal>minimal</>. + <varname>archive_mode</varname> cannot be enabled when + <varname>wal_level</varname> is set to <literal>minimal</literal>. </para> </listitem> </varlistentry> @@ -2795,32 +2795,32 @@ include_dir 'conf.d' <varlistentry id="guc-archive-command" xreflabel="archive_command"> <term><varname>archive_command</varname> (<type>string</type>) <indexterm> - <primary><varname>archive_command</> configuration parameter</primary> + <primary><varname>archive_command</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> The local shell command to execute to archive a completed WAL file - segment. Any <literal>%p</> in the string is + segment. Any <literal>%p</literal> in the string is replaced by the path name of the file to archive, and any - <literal>%f</> is replaced by only the file name. + <literal>%f</literal> is replaced by only the file name. (The path name is relative to the working directory of the server, i.e., the cluster's data directory.) - Use <literal>%%</> to embed an actual <literal>%</> character in the + Use <literal>%%</literal> to embed an actual <literal>%</literal> character in the command. It is important for the command to return a zero exit status only if it succeeds. 
For more information see <xref linkend="backup-archiving-wal">. </para> <para> - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. It is ignored unless - <varname>archive_mode</> was enabled at server start. - If <varname>archive_command</> is an empty string (the default) while - <varname>archive_mode</> is enabled, WAL archiving is temporarily + <varname>archive_mode</varname> was enabled at server start. + If <varname>archive_command</varname> is an empty string (the default) while + <varname>archive_mode</varname> is enabled, WAL archiving is temporarily disabled, but the server continues to accumulate WAL segment files in the expectation that a command will soon be provided. Setting - <varname>archive_command</> to a command that does nothing but - return true, e.g. <literal>/bin/true</> (<literal>REM</> on + <varname>archive_command</varname> to a command that does nothing but + return true, e.g. <literal>/bin/true</literal> (<literal>REM</literal> on Windows), effectively disables archiving, but also breaks the chain of WAL files needed for archive recovery, so it should only be used in unusual circumstances. @@ -2831,7 +2831,7 @@ include_dir 'conf.d' <varlistentry id="guc-archive-timeout" xreflabel="archive_timeout"> <term><varname>archive_timeout</varname> (<type>integer</type>) <indexterm> - <primary><varname>archive_timeout</> configuration parameter</primary> + <primary><varname>archive_timeout</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2841,7 +2841,7 @@ include_dir 'conf.d' traffic (or has slack periods where it does so), there could be a long delay between the completion of a transaction and its safe recording in archive storage. 
To limit how old unarchived - data can be, you can set <varname>archive_timeout</> to force the + data can be, you can set <varname>archive_timeout</varname> to force the server to switch to a new WAL segment file periodically. When this parameter is greater than zero, the server will switch to a new segment file whenever this many seconds have elapsed since the last @@ -2850,13 +2850,13 @@ include_dir 'conf.d' no database activity). Note that archived files that are closed early due to a forced switch are still the same length as completely full files. Therefore, it is unwise to use a very short - <varname>archive_timeout</> — it will bloat your archive - storage. <varname>archive_timeout</> settings of a minute or so are + <varname>archive_timeout</varname> — it will bloat your archive + storage. <varname>archive_timeout</varname> settings of a minute or so are usually reasonable. You should consider using streaming replication, instead of archiving, if you want data to be copied off the master server more quickly than that. This parameter can only be set in the - <filename>postgresql.conf</> file or on the server command line. + <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> </varlistentry> @@ -2871,7 +2871,7 @@ include_dir 'conf.d' <para> These settings control the behavior of the built-in - <firstterm>streaming replication</> feature (see + <firstterm>streaming replication</firstterm> feature (see <xref linkend="streaming-replication">). Servers will be either a Master or a Standby server. Masters can send data, while Standby(s) are always receivers of replicated data. 
When cascading replication @@ -2898,7 +2898,7 @@ include_dir 'conf.d' <varlistentry id="guc-max-wal-senders" xreflabel="max_wal_senders"> <term><varname>max_wal_senders</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_wal_senders</> configuration parameter</primary> + <primary><varname>max_wal_senders</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2914,8 +2914,8 @@ include_dir 'conf.d' a timeout is reached, so this parameter should be set slightly higher than the maximum number of expected clients so disconnected clients can immediately reconnect. This parameter can only - be set at server start. <varname>wal_level</> must be set to - <literal>replica</> or higher to allow connections from standby + be set at server start. <varname>wal_level</varname> must be set to + <literal>replica</literal> or higher to allow connections from standby servers. </para> </listitem> @@ -2924,7 +2924,7 @@ include_dir 'conf.d' <varlistentry id="guc-max-replication-slots" xreflabel="max_replication_slots"> <term><varname>max_replication_slots</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_replication_slots</> configuration parameter</primary> + <primary><varname>max_replication_slots</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2944,17 +2944,17 @@ include_dir 'conf.d' <varlistentry id="guc-wal-keep-segments" xreflabel="wal_keep_segments"> <term><varname>wal_keep_segments</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_keep_segments</> configuration parameter</primary> + <primary><varname>wal_keep_segments</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the minimum number of past log file segments kept in the - <filename>pg_wal</> + <filename>pg_wal</filename> directory, in case a standby server needs to fetch them for streaming replication. Each segment is normally 16 megabytes. 
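A back-of-envelope disk budget for <varname>wal_keep_segments</varname> follows directly from the 16-megabyte segment size mentioned above. The chosen value here is illustrative only, not a recommendation:

```python
# Minimum pg_wal space pinned by wal_keep_segments, at the normal 16 MB
# segment size described above.  The system may retain more than this for
# archiving or checkpoint recovery.
segment_mb = 16
wal_keep_segments = 64          # illustrative value

retained_mb = wal_keep_segments * segment_mb
print(f"at least {retained_mb} MB kept in pg_wal for standbys")
```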
If a standby server connected to the sending server falls behind by more than - <varname>wal_keep_segments</> segments, the sending server might remove + <varname>wal_keep_segments</varname> segments, the sending server might remove a WAL segment still needed by the standby, in which case the replication connection will be terminated. Downstream connections will also eventually fail as a result. (However, the standby @@ -2964,15 +2964,15 @@ include_dir 'conf.d' <para> This sets only the minimum number of segments retained in - <filename>pg_wal</>; the system might need to retain more segments + <filename>pg_wal</filename>; the system might need to retain more segments for WAL archival or to recover from a checkpoint. If - <varname>wal_keep_segments</> is zero (the default), the system + <varname>wal_keep_segments</varname> is zero (the default), the system doesn't keep any extra segments for standby purposes, so the number of old WAL segments available to standby servers is a function of the location of the previous checkpoint and status of WAL archiving. This parameter can only be set in the - <filename>postgresql.conf</> file or on the server command line. + <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> </varlistentry> @@ -2980,7 +2980,7 @@ include_dir 'conf.d' <varlistentry id="guc-wal-sender-timeout" xreflabel="wal_sender_timeout"> <term><varname>wal_sender_timeout</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_sender_timeout</> configuration parameter</primary> + <primary><varname>wal_sender_timeout</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -2990,7 +2990,7 @@ include_dir 'conf.d' the sending server to detect a standby crash or network outage. A value of zero disables the timeout mechanism. This parameter can only be set in - the <filename>postgresql.conf</> file or on the server command line. 
+ the <filename>postgresql.conf</filename> file or on the server command line. The default value is 60 seconds. </para> </listitem> @@ -2999,13 +2999,13 @@ include_dir 'conf.d' <varlistentry id="guc-track-commit-timestamp" xreflabel="track_commit_timestamp"> <term><varname>track_commit_timestamp</varname> (<type>boolean</type>) <indexterm> - <primary><varname>track_commit_timestamp</> configuration parameter</primary> + <primary><varname>track_commit_timestamp</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Record commit time of transactions. This parameter - can only be set in <filename>postgresql.conf</> file or on the server + can only be set in <filename>postgresql.conf</filename> file or on the server command line. The default value is <literal>off</literal>. </para> </listitem> @@ -3034,13 +3034,13 @@ include_dir 'conf.d' <varlistentry id="guc-synchronous-standby-names" xreflabel="synchronous_standby_names"> <term><varname>synchronous_standby_names</varname> (<type>string</type>) <indexterm> - <primary><varname>synchronous_standby_names</> configuration parameter</primary> + <primary><varname>synchronous_standby_names</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies a list of standby servers that can support - <firstterm>synchronous replication</>, as described in + <firstterm>synchronous replication</firstterm>, as described in <xref linkend="synchronous-replication">. There will be one or more active synchronous standbys; transactions waiting for commit will be allowed to proceed after @@ -3050,15 +3050,15 @@ include_dir 'conf.d' that are both currently connected and streaming data in real-time (as shown by a state of <literal>streaming</literal> in the <link linkend="monitoring-stats-views-table"> - <literal>pg_stat_replication</></link> view). + <literal>pg_stat_replication</literal></link> view). 
Specifying more than one synchronous standby can allow for very high availability and protection against data loss. </para> <para> The name of a standby server for this purpose is the - <varname>application_name</> setting of the standby, as set in the + <varname>application_name</varname> setting of the standby, as set in the standby's connection information. In case of a physical replication - standby, this should be set in the <varname>primary_conninfo</> + standby, this should be set in the <varname>primary_conninfo</varname> setting in <filename>recovery.conf</filename>; the default is <literal>walreceiver</literal>. For logical replication, this can be set in the connection information of the subscription, and it @@ -3078,54 +3078,54 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" wait for replies from, and <replaceable class="parameter">standby_name</replaceable> is the name of a standby server. - <literal>FIRST</> and <literal>ANY</> specify the method to choose + <literal>FIRST</literal> and <literal>ANY</literal> specify the method to choose synchronous standbys from the listed servers. </para> <para> - The keyword <literal>FIRST</>, coupled with + The keyword <literal>FIRST</literal>, coupled with <replaceable class="parameter">num_sync</replaceable>, specifies a priority-based synchronous replication and makes transaction commits wait until their WAL records are replicated to <replaceable class="parameter">num_sync</replaceable> synchronous standbys chosen based on their priorities. For example, a setting of - <literal>FIRST 3 (s1, s2, s3, s4)</> will cause each commit to wait for + <literal>FIRST 3 (s1, s2, s3, s4)</literal> will cause each commit to wait for replies from three higher-priority standbys chosen from standby servers - <literal>s1</>, <literal>s2</>, <literal>s3</> and <literal>s4</>. + <literal>s1</literal>, <literal>s2</literal>, <literal>s3</literal> and <literal>s4</literal>. 
The standbys whose names appear earlier in the list are given higher priority and will be considered as synchronous. Other standby servers appearing later in this list represent potential synchronous standbys. If any of the current synchronous standbys disconnects for whatever reason, it will be replaced immediately with the next-highest-priority - standby. The keyword <literal>FIRST</> is optional. + standby. The keyword <literal>FIRST</literal> is optional. </para> <para> - The keyword <literal>ANY</>, coupled with + The keyword <literal>ANY</literal>, coupled with <replaceable class="parameter">num_sync</replaceable>, specifies a quorum-based synchronous replication and makes transaction commits - wait until their WAL records are replicated to <emphasis>at least</> + wait until their WAL records are replicated to <emphasis>at least</emphasis> <replaceable class="parameter">num_sync</replaceable> listed standbys. - For example, a setting of <literal>ANY 3 (s1, s2, s3, s4)</> will cause + For example, a setting of <literal>ANY 3 (s1, s2, s3, s4)</literal> will cause each commit to proceed as soon as at least any three standbys of - <literal>s1</>, <literal>s2</>, <literal>s3</> and <literal>s4</> + <literal>s1</literal>, <literal>s2</literal>, <literal>s3</literal> and <literal>s4</literal> reply. </para> <para> - <literal>FIRST</> and <literal>ANY</> are case-insensitive. If these + <literal>FIRST</literal> and <literal>ANY</literal> are case-insensitive. If these keywords are used as the name of a standby server, its <replaceable class="parameter">standby_name</replaceable> must be double-quoted. </para> <para> - The third syntax was used before <productname>PostgreSQL</> + The third syntax was used before <productname>PostgreSQL</productname> version 9.6 and is still supported. It's the same as the first syntax - with <literal>FIRST</> and + with <literal>FIRST</literal> and <replaceable class="parameter">num_sync</replaceable> equal to 1. 
- For example, <literal>FIRST 1 (s1, s2)</> and <literal>s1, s2</> have - the same meaning: either <literal>s1</> or <literal>s2</> is chosen + For example, <literal>FIRST 1 (s1, s2)</literal> and <literal>s1, s2</literal> have + the same meaning: either <literal>s1</literal> or <literal>s2</literal> is chosen as a synchronous standby. </para> <para> - The special entry <literal>*</> matches any standby name. + The special entry <literal>*</literal> matches any standby name. </para> <para> There is no mechanism to enforce uniqueness of standby names. In case @@ -3136,7 +3136,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <para> Each <replaceable class="parameter">standby_name</replaceable> should have the form of a valid SQL identifier, unless it - is <literal>*</>. You can use double-quoting if necessary. But note + is <literal>*</literal>. You can use double-quoting if necessary. But note that <replaceable class="parameter">standby_name</replaceable>s are compared to standby application names case-insensitively, whether double-quoted or not. @@ -3149,10 +3149,10 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" synchronous replication is enabled, individual transactions can be configured not to wait for replication by setting the <xref linkend="guc-synchronous-commit"> parameter to - <literal>local</> or <literal>off</>. + <literal>local</literal> or <literal>off</literal>. </para> <para> - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
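The difference between priority-based (<literal>FIRST</literal>) and quorum-based (<literal>ANY</literal>) waiting described above can be modeled in a few lines. This is a toy model of the commit-wait rule, not the server's implementation; standby names and sets are hypothetical.

```python
def commit_may_proceed(method, num_sync, listed, connected, acked):
    """Toy model of synchronous_standby_names semantics.

    listed:    standby names in synchronous_standby_names order
    connected: standbys currently streaming
    acked:     standbys that have replied for this commit's WAL position
    """
    candidates = [s for s in listed if s in connected]
    if method.upper() == "FIRST":
        # Priority-based: the num_sync highest-priority connected standbys
        # form the synchronous set, and all of them must reply.
        sync_set = candidates[:num_sync]
        return all(s in acked for s in sync_set)
    # ANY: quorum-based -- replies from at least num_sync listed standbys
    # suffice, whichever they are.
    return sum(1 for s in candidates if s in acked) >= num_sync

listed = ["s1", "s2", "s3", "s4"]
up = {"s1", "s2", "s3", "s4"}
# FIRST 3: s3 is in the synchronous set, so its missing reply blocks commit.
print(commit_may_proceed("FIRST", 3, listed, up, {"s1", "s2", "s4"}))
# ANY 3: three replies from any listed standbys are enough.
print(commit_may_proceed("ANY", 3, listed, up, {"s1", "s2", "s4"}))
```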
</para> </listitem> @@ -3161,13 +3161,13 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-vacuum-defer-cleanup-age" xreflabel="vacuum_defer_cleanup_age"> <term><varname>vacuum_defer_cleanup_age</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_defer_cleanup_age</> configuration parameter</primary> + <primary><varname>vacuum_defer_cleanup_age</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - Specifies the number of transactions by which <command>VACUUM</> and - <acronym>HOT</> updates will defer cleanup of dead row versions. The + Specifies the number of transactions by which <command>VACUUM</command> and + <acronym>HOT</acronym> updates will defer cleanup of dead row versions. The default is zero transactions, meaning that dead row versions can be removed as soon as possible, that is, as soon as they are no longer visible to any open transaction. You may wish to set this to a @@ -3178,16 +3178,16 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" is measured in terms of number of write transactions occurring on the primary server, it is difficult to predict just how much additional grace time will be made available to standby queries. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> <para> - You should also consider setting <varname>hot_standby_feedback</> + You should also consider setting <varname>hot_standby_feedback</varname> on standby server(s) as an alternative to using this parameter. </para> <para> This does not prevent cleanup of dead rows which have reached the age - specified by <varname>old_snapshot_threshold</>. + specified by <varname>old_snapshot_threshold</varname>. 
</para> </listitem> </varlistentry> @@ -3209,7 +3209,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-hot-standby" xreflabel="hot_standby"> <term><varname>hot_standby</varname> (<type>boolean</type>) <indexterm> - <primary><varname>hot_standby</> configuration parameter</primary> + <primary><varname>hot_standby</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3226,7 +3226,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-max-standby-archive-delay" xreflabel="max_standby_archive_delay"> <term><varname>max_standby_archive_delay</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_standby_archive_delay</> configuration parameter</primary> + <primary><varname>max_standby_archive_delay</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3235,16 +3235,16 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" standby server should wait before canceling standby queries that conflict with about-to-be-applied WAL entries, as described in <xref linkend="hot-standby-conflict">. - <varname>max_standby_archive_delay</> applies when WAL data is + <varname>max_standby_archive_delay</varname> applies when WAL data is being read from WAL archive (and is therefore not current). The default is 30 seconds. Units are milliseconds if not specified. A value of -1 allows the standby to wait forever for conflicting queries to complete. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> <para> - Note that <varname>max_standby_archive_delay</> is not the same as the + Note that <varname>max_standby_archive_delay</varname> is not the same as the maximum length of time a query can run before cancellation; rather it is the maximum total time allowed to apply any one WAL segment's data. Thus, if one query has resulted in significant delay earlier in the @@ -3257,7 +3257,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-max-standby-streaming-delay" xreflabel="max_standby_streaming_delay"> <term><varname>max_standby_streaming_delay</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_standby_streaming_delay</> configuration parameter</primary> + <primary><varname>max_standby_streaming_delay</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3266,16 +3266,16 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" standby server should wait before canceling standby queries that conflict with about-to-be-applied WAL entries, as described in <xref linkend="hot-standby-conflict">. - <varname>max_standby_streaming_delay</> applies when WAL data is + <varname>max_standby_streaming_delay</varname> applies when WAL data is being received via streaming replication. The default is 30 seconds. Units are milliseconds if not specified. A value of -1 allows the standby to wait forever for conflicting queries to complete. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> <para> - Note that <varname>max_standby_streaming_delay</> is not the same as + Note that <varname>max_standby_streaming_delay</varname> is not the same as the maximum length of time a query can run before cancellation; rather it is the maximum total time allowed to apply WAL data once it has been received from the primary server. Thus, if one query has @@ -3289,7 +3289,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-wal-receiver-status-interval" xreflabel="wal_receiver_status_interval"> <term><varname>wal_receiver_status_interval</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_receiver_status_interval</> configuration parameter</primary> + <primary><varname>wal_receiver_status_interval</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3298,7 +3298,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" process on the standby to send information about replication progress to the primary or upstream standby, where it can be seen using the <link linkend="monitoring-stats-views-table"> - <literal>pg_stat_replication</></link> view. The standby will report + <literal>pg_stat_replication</literal></link> view. The standby will report the last write-ahead log location it has written, the last position it has flushed to disk, and the last position it has applied. This parameter's @@ -3307,7 +3307,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" often as specified by this parameter. Thus, the apply position may lag slightly behind the true position. Setting this parameter to zero disables status updates completely. This parameter can only be set in - the <filename>postgresql.conf</> file or on the server command line. + the <filename>postgresql.conf</filename> file or on the server command line. The default value is 10 seconds. 
</para> </listitem> @@ -3316,7 +3316,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-hot-standby-feedback" xreflabel="hot_standby_feedback"> <term><varname>hot_standby_feedback</varname> (<type>boolean</type>) <indexterm> - <primary><varname>hot_standby_feedback</> configuration parameter</primary> + <primary><varname>hot_standby_feedback</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3327,9 +3327,9 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" be used to eliminate query cancels caused by cleanup records, but can cause database bloat on the primary for some workloads. Feedback messages will not be sent more frequently than once per - <varname>wal_receiver_status_interval</>. The default value is + <varname>wal_receiver_status_interval</varname>. The default value is <literal>off</literal>. This parameter can only be set in the - <filename>postgresql.conf</> file or on the server command line. + <filename>postgresql.conf</filename> file or on the server command line. </para> <para> If cascaded replication is in use the feedback is passed upstream @@ -3338,10 +3338,10 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" </para> <para> This setting does not override the behavior of - <varname>old_snapshot_threshold</> on the primary; a snapshot on the + <varname>old_snapshot_threshold</varname> on the primary; a snapshot on the standby which exceeds the primary's age threshold can become invalid, resulting in cancellation of transactions on the standby. This is - because <varname>old_snapshot_threshold</> is intended to provide an + because <varname>old_snapshot_threshold</varname> is intended to provide an absolute limit on the time which dead rows can contribute to bloat, which would otherwise be violated because of the configuration of a standby. 
@@ -3352,7 +3352,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-wal-receiver-timeout" xreflabel="wal_receiver_timeout"> <term><varname>wal_receiver_timeout</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_receiver_timeout</> configuration parameter</primary> + <primary><varname>wal_receiver_timeout</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3363,7 +3363,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" outage. A value of zero disables the timeout mechanism. This parameter can only be set in - the <filename>postgresql.conf</> file or on the server command line. + the <filename>postgresql.conf</filename> file or on the server command line. The default value is 60 seconds. </para> </listitem> @@ -3372,16 +3372,16 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-wal-retrieve-retry-interval" xreflabel="wal_retrieve_retry_interval"> <term><varname>wal_retrieve_retry_interval</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_retrieve_retry_interval</> configuration parameter</primary> + <primary><varname>wal_retrieve_retry_interval</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specify how long the standby server should wait when WAL data is not available from any sources (streaming replication, - local <filename>pg_wal</> or WAL archive) before retrying to + local <filename>pg_wal</filename> or WAL archive) before retrying to retrieve WAL data. This parameter can only be set in the - <filename>postgresql.conf</> file or on the server command line. + <filename>postgresql.conf</filename> file or on the server command line. The default value is 5 seconds. Units are milliseconds if not specified. 
</para> <para> @@ -3420,7 +3420,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-max-logical-replication-workers" xreflabel="max_logical_replication_workers"> <term><varname>max_logical_replication_workers</varname> (<type>int</type>) <indexterm> - <primary><varname>max_logical_replication_workers</> configuration parameter</primary> + <primary><varname>max_logical_replication_workers</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3441,7 +3441,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-max-sync-workers-per-subscription" xreflabel="max_sync_workers_per_subscription"> <term><varname>max_sync_workers_per_subscription</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_sync_workers_per_subscription</> configuration parameter</primary> + <primary><varname>max_sync_workers_per_subscription</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3478,7 +3478,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" These configuration parameters provide a crude method of influencing the query plans chosen by the query optimizer. If the default plan chosen by the optimizer for a particular query - is not optimal, a <emphasis>temporary</> solution is to use one + is not optimal, a <emphasis>temporary</emphasis> solution is to use one of these configuration parameters to force the optimizer to choose a different plan. 
Better ways to improve the quality of the @@ -3499,13 +3499,13 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <primary>bitmap scan</primary> </indexterm> <indexterm> - <primary><varname>enable_bitmapscan</> configuration parameter</primary> + <primary><varname>enable_bitmapscan</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Enables or disables the query planner's use of bitmap-scan plan - types. The default is <literal>on</>. + types. The default is <literal>on</literal>. </para> </listitem> </varlistentry> @@ -3513,13 +3513,13 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-gathermerge" xreflabel="enable_gathermerge"> <term><varname>enable_gathermerge</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_gathermerge</> configuration parameter</primary> + <primary><varname>enable_gathermerge</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Enables or disables the query planner's use of gather - merge plan types. The default is <literal>on</>. + merge plan types. The default is <literal>on</literal>. </para> </listitem> </varlistentry> @@ -3527,13 +3527,13 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-hashagg" xreflabel="enable_hashagg"> <term><varname>enable_hashagg</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_hashagg</> configuration parameter</primary> + <primary><varname>enable_hashagg</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Enables or disables the query planner's use of hashed - aggregation plan types. The default is <literal>on</>. + aggregation plan types. The default is <literal>on</literal>. 
</para> </listitem> </varlistentry> @@ -3541,13 +3541,13 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-hashjoin" xreflabel="enable_hashjoin"> <term><varname>enable_hashjoin</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_hashjoin</> configuration parameter</primary> + <primary><varname>enable_hashjoin</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Enables or disables the query planner's use of hash-join plan - types. The default is <literal>on</>. + types. The default is <literal>on</literal>. </para> </listitem> </varlistentry> @@ -3558,13 +3558,13 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <primary>index scan</primary> </indexterm> <indexterm> - <primary><varname>enable_indexscan</> configuration parameter</primary> + <primary><varname>enable_indexscan</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Enables or disables the query planner's use of index-scan plan - types. The default is <literal>on</>. + types. The default is <literal>on</literal>. </para> </listitem> </varlistentry> @@ -3572,14 +3572,14 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-indexonlyscan" xreflabel="enable_indexonlyscan"> <term><varname>enable_indexonlyscan</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_indexonlyscan</> configuration parameter</primary> + <primary><varname>enable_indexonlyscan</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Enables or disables the query planner's use of index-only-scan plan types (see <xref linkend="indexes-index-only-scans">). - The default is <literal>on</>. + The default is <literal>on</literal>. 
</para> </listitem> </varlistentry> @@ -3587,7 +3587,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-material" xreflabel="enable_material"> <term><varname>enable_material</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_material</> configuration parameter</primary> + <primary><varname>enable_material</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3596,7 +3596,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" It is impossible to suppress materialization entirely, but turning this variable off prevents the planner from inserting materialize nodes except in cases where it is required for correctness. - The default is <literal>on</>. + The default is <literal>on</literal>. </para> </listitem> </varlistentry> @@ -3604,13 +3604,13 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-mergejoin" xreflabel="enable_mergejoin"> <term><varname>enable_mergejoin</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_mergejoin</> configuration parameter</primary> + <primary><varname>enable_mergejoin</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Enables or disables the query planner's use of merge-join plan - types. The default is <literal>on</>. + types. The default is <literal>on</literal>. 
</para> </listitem> </varlistentry> @@ -3618,7 +3618,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-nestloop" xreflabel="enable_nestloop"> <term><varname>enable_nestloop</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_nestloop</> configuration parameter</primary> + <primary><varname>enable_nestloop</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3627,7 +3627,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" plans. It is impossible to suppress nested-loop joins entirely, but turning this variable off discourages the planner from using one if there are other methods available. The default is - <literal>on</>. + <literal>on</literal>. </para> </listitem> </varlistentry> @@ -3635,7 +3635,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-partition-wise-join" xreflabel="enable_partition_wise_join"> <term><varname>enable_partition_wise_join</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_partition_wise_join</> configuration parameter</primary> + <primary><varname>enable_partition_wise_join</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3647,7 +3647,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" must be of the same data type and have exactly matching sets of child partitions. Because partition-wise join planning can use significantly more CPU time and memory during planning, the default is - <literal>off</>. + <literal>off</literal>. 
</para> </listitem> </varlistentry> @@ -3658,7 +3658,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <primary>sequential scan</primary> </indexterm> <indexterm> - <primary><varname>enable_seqscan</> configuration parameter</primary> + <primary><varname>enable_seqscan</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3667,7 +3667,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" plan types. It is impossible to suppress sequential scans entirely, but turning this variable off discourages the planner from using one if there are other methods available. The - default is <literal>on</>. + default is <literal>on</literal>. </para> </listitem> </varlistentry> @@ -3675,7 +3675,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-sort" xreflabel="enable_sort"> <term><varname>enable_sort</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_sort</> configuration parameter</primary> + <primary><varname>enable_sort</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3684,7 +3684,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" steps. It is impossible to suppress explicit sorts entirely, but turning this variable off discourages the planner from using one if there are other methods available. The default - is <literal>on</>. + is <literal>on</literal>. 
</para> </listitem> </varlistentry> @@ -3692,13 +3692,13 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-enable-tidscan" xreflabel="enable_tidscan"> <term><varname>enable_tidscan</varname> (<type>boolean</type>) <indexterm> - <primary><varname>enable_tidscan</> configuration parameter</primary> + <primary><varname>enable_tidscan</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - Enables or disables the query planner's use of <acronym>TID</> - scan plan types. The default is <literal>on</>. + Enables or disables the query planner's use of <acronym>TID</acronym> + scan plan types. The default is <literal>on</literal>. </para> </listitem> </varlistentry> @@ -3709,12 +3709,12 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <title>Planner Cost Constants</title> <para> - The <firstterm>cost</> variables described in this section are measured + The <firstterm>cost</firstterm> variables described in this section are measured on an arbitrary scale. Only their relative values matter, hence scaling them all up or down by the same factor will result in no change in the planner's choices. By default, these cost variables are based on the cost of sequential page fetches; that is, - <varname>seq_page_cost</> is conventionally set to <literal>1.0</> + <varname>seq_page_cost</varname> is conventionally set to <literal>1.0</literal> and the other cost variables are set with reference to that. But you can use a different scale if you prefer, such as actual execution times in milliseconds on a particular machine. 
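The cost-scale convention described above can be made concrete with a short sketch (illustrative only, not part of this patch): all cost variables are relative, with `seq_page_cost` as the conventional unit.

```sql
-- Hypothetical illustration of the relative cost scale:
SHOW seq_page_cost;     -- conventionally 1.0, the unit of the scale
SHOW random_page_cost;  -- default 4.0, i.e. four sequential fetches
-- Scaling every cost variable by the same factor would leave the
-- planner's choices unchanged, since only relative values matter.
```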
@@ -3735,7 +3735,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-seq-page-cost" xreflabel="seq_page_cost"> <term><varname>seq_page_cost</varname> (<type>floating point</type>) <indexterm> - <primary><varname>seq_page_cost</> configuration parameter</primary> + <primary><varname>seq_page_cost</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3752,7 +3752,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-random-page-cost" xreflabel="random_page_cost"> <term><varname>random_page_cost</varname> (<type>floating point</type>) <indexterm> - <primary><varname>random_page_cost</> configuration parameter</primary> + <primary><varname>random_page_cost</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3765,7 +3765,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" </para> <para> - Reducing this value relative to <varname>seq_page_cost</> + Reducing this value relative to <varname>seq_page_cost</varname> will cause the system to prefer index scans; raising it will make index scans look relatively more expensive. You can raise or lower both values together to change the importance of disk I/O @@ -3795,8 +3795,8 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <tip> <para> - Although the system will let you set <varname>random_page_cost</> to - less than <varname>seq_page_cost</>, it is not physically sensible + Although the system will let you set <varname>random_page_cost</varname> to + less than <varname>seq_page_cost</varname>, it is not physically sensible to do so. However, setting them equal makes sense if the database is entirely cached in RAM, since in that case there is no penalty for touching pages out of sequence. 
Also, in a heavily-cached @@ -3811,7 +3811,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-cpu-tuple-cost" xreflabel="cpu_tuple_cost"> <term><varname>cpu_tuple_cost</varname> (<type>floating point</type>) <indexterm> - <primary><varname>cpu_tuple_cost</> configuration parameter</primary> + <primary><varname>cpu_tuple_cost</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3826,7 +3826,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-cpu-index-tuple-cost" xreflabel="cpu_index_tuple_cost"> <term><varname>cpu_index_tuple_cost</varname> (<type>floating point</type>) <indexterm> - <primary><varname>cpu_index_tuple_cost</> configuration parameter</primary> + <primary><varname>cpu_index_tuple_cost</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3841,7 +3841,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-cpu-operator-cost" xreflabel="cpu_operator_cost"> <term><varname>cpu_operator_cost</varname> (<type>floating point</type>) <indexterm> - <primary><varname>cpu_operator_cost</> configuration parameter</primary> + <primary><varname>cpu_operator_cost</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3856,7 +3856,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-parallel-setup-cost" xreflabel="parallel_setup_cost"> <term><varname>parallel_setup_cost</varname> (<type>floating point</type>) <indexterm> - <primary><varname>parallel_setup_cost</> configuration parameter</primary> + <primary><varname>parallel_setup_cost</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3871,7 +3871,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-parallel-tuple-cost" 
xreflabel="parallel_tuple_cost"> <term><varname>parallel_tuple_cost</varname> (<type>floating point</type>) <indexterm> - <primary><varname>parallel_tuple_cost</> configuration parameter</primary> + <primary><varname>parallel_tuple_cost</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3886,7 +3886,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-min-parallel-table-scan-size" xreflabel="min_parallel_table_scan_size"> <term><varname>min_parallel_table_scan_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>min_parallel_table_scan_size</> configuration parameter</primary> + <primary><varname>min_parallel_table_scan_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3896,7 +3896,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" the amount of table data scanned is always equal to the size of the table, but when indexes are used the amount of table data scanned will normally be less. The default is 8 - megabytes (<literal>8MB</>). + megabytes (<literal>8MB</literal>). </para> </listitem> </varlistentry> @@ -3904,7 +3904,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-min-parallel-index-scan-size" xreflabel="min_parallel_index_scan_size"> <term><varname>min_parallel_index_scan_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>min_parallel_index_scan_size</> configuration parameter</primary> + <primary><varname>min_parallel_index_scan_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3913,7 +3913,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" for a parallel scan to be considered. 
Note that a parallel index scan typically won't touch the entire index; it is the number of pages which the planner believes will actually be touched by the scan which - is relevant. The default is 512 kilobytes (<literal>512kB</>). + is relevant. The default is 512 kilobytes (<literal>512kB</literal>). </para> </listitem> </varlistentry> @@ -3921,7 +3921,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-effective-cache-size" xreflabel="effective_cache_size"> <term><varname>effective_cache_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>effective_cache_size</> configuration parameter</primary> + <primary><varname>effective_cache_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3942,7 +3942,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" does it reserve kernel disk cache; it is used only for estimation purposes. The system also does not assume data remains in the disk cache between queries. The default is 4 gigabytes - (<literal>4GB</>). + (<literal>4GB</literal>). 
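Since `effective_cache_size` neither allocates memory nor reserves cache, raising it is a pure planner hint. A hedged sketch for a machine with ample RAM (the value below is an assumption for illustration, not a recommendation from this patch):

```sql
-- Hypothetical sketch: on a dedicated server with plenty of RAM, raise
-- the planner's cache-size assumption above the 4GB default so index
-- scans look cheaper. Requires appropriate privileges.
ALTER SYSTEM SET effective_cache_size = '24GB';
SELECT pg_reload_conf();  -- apply without a server restart
```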
</para> </listitem> </varlistentry> @@ -3974,7 +3974,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <see>genetic query optimization</see> </indexterm> <indexterm> - <primary><varname>geqo</> configuration parameter</primary> + <primary><varname>geqo</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -3990,14 +3990,14 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-geqo-threshold" xreflabel="geqo_threshold"> <term><varname>geqo_threshold</varname> (<type>integer</type>) <indexterm> - <primary><varname>geqo_threshold</> configuration parameter</primary> + <primary><varname>geqo_threshold</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Use genetic query optimization to plan queries with at least - this many <literal>FROM</> items involved. (Note that a - <literal>FULL OUTER JOIN</> construct counts as only one <literal>FROM</> + this many <literal>FROM</literal> items involved. (Note that a + <literal>FULL OUTER JOIN</literal> construct counts as only one <literal>FROM</literal> item.) The default is 12. 
For simpler queries it is usually best to use the regular, exhaustive-search planner, but for queries with many tables the exhaustive search takes too long, often @@ -4011,7 +4011,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-geqo-effort" xreflabel="geqo_effort"> <term><varname>geqo_effort</varname> (<type>integer</type>) <indexterm> - <primary><varname>geqo_effort</> configuration parameter</primary> + <primary><varname>geqo_effort</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4037,7 +4037,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-geqo-pool-size" xreflabel="geqo_pool_size"> <term><varname>geqo_pool_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>geqo_pool_size</> configuration parameter</primary> + <primary><varname>geqo_pool_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4055,7 +4055,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-geqo-generations" xreflabel="geqo_generations"> <term><varname>geqo_generations</varname> (<type>integer</type>) <indexterm> - <primary><varname>geqo_generations</> configuration parameter</primary> + <primary><varname>geqo_generations</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4073,7 +4073,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-geqo-selection-bias" xreflabel="geqo_selection_bias"> <term><varname>geqo_selection_bias</varname> (<type>floating point</type>) <indexterm> - <primary><varname>geqo_selection_bias</> configuration parameter</primary> + <primary><varname>geqo_selection_bias</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4088,7 +4088,7 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable 
class=" <varlistentry id="guc-geqo-seed" xreflabel="geqo_seed"> <term><varname>geqo_seed</varname> (<type>floating point</type>) <indexterm> - <primary><varname>geqo_seed</> configuration parameter</primary> + <primary><varname>geqo_seed</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4112,17 +4112,17 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <varlistentry id="guc-default-statistics-target" xreflabel="default_statistics_target"> <term><varname>default_statistics_target</varname> (<type>integer</type>) <indexterm> - <primary><varname>default_statistics_target</> configuration parameter</primary> + <primary><varname>default_statistics_target</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Sets the default statistics target for table columns without a column-specific target set via <command>ALTER TABLE - SET STATISTICS</>. Larger values increase the time needed to - do <command>ANALYZE</>, but might improve the quality of the + SET STATISTICS</command>. Larger values increase the time needed to + do <command>ANALYZE</command>, but might improve the quality of the planner's estimates. The default is 100. For more information - on the use of statistics by the <productname>PostgreSQL</> + on the use of statistics by the <productname>PostgreSQL</productname> query planner, refer to <xref linkend="planner-stats">. </para> </listitem> @@ -4134,26 +4134,26 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class=" <primary>constraint exclusion</primary> </indexterm> <indexterm> - <primary><varname>constraint_exclusion</> configuration parameter</primary> + <primary><varname>constraint_exclusion</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Controls the query planner's use of table constraints to optimize queries. 
- The allowed values of <varname>constraint_exclusion</> are - <literal>on</> (examine constraints for all tables), - <literal>off</> (never examine constraints), and - <literal>partition</> (examine constraints only for inheritance child - tables and <literal>UNION ALL</> subqueries). - <literal>partition</> is the default setting. + The allowed values of <varname>constraint_exclusion</varname> are + <literal>on</literal> (examine constraints for all tables), + <literal>off</literal> (never examine constraints), and + <literal>partition</literal> (examine constraints only for inheritance child + tables and <literal>UNION ALL</literal> subqueries). + <literal>partition</literal> is the default setting. It is often used with inheritance and partitioned tables to improve performance. </para> <para> When this parameter allows it for a particular table, the planner - compares query conditions with the table's <literal>CHECK</> + compares query conditions with the table's <literal>CHECK</literal> constraints, and omits scanning tables for which the conditions contradict the constraints. For example: @@ -4165,8 +4165,8 @@ CREATE TABLE child2000(check (key between 2000 and 2999)) INHERITS(parent); SELECT * FROM parent WHERE key = 2400; </programlisting> - With constraint exclusion enabled, this <command>SELECT</> - will not scan <structname>child1000</> at all, improving performance. + With constraint exclusion enabled, this <command>SELECT</command> + will not scan <structname>child1000</structname> at all, improving performance. 
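A hypothetical continuation of the `child1000`/`child2000` example above, showing how one would observe the exclusion (the expected plan shape follows directly from the text above; this snippet is illustrative, not part of the patch):

```sql
-- With the default setting, constraints on inheritance children are
-- examined, so only the matching child should appear in the plan.
SET constraint_exclusion = partition;
EXPLAIN SELECT * FROM parent WHERE key = 2400;
-- The plan is expected to scan child2000 but omit child1000 entirely.
```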
</para> <para> @@ -4188,14 +4188,14 @@ SELECT * FROM parent WHERE key = 2400; <varlistentry id="guc-cursor-tuple-fraction" xreflabel="cursor_tuple_fraction"> <term><varname>cursor_tuple_fraction</varname> (<type>floating point</type>) <indexterm> - <primary><varname>cursor_tuple_fraction</> configuration parameter</primary> + <primary><varname>cursor_tuple_fraction</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Sets the planner's estimate of the fraction of a cursor's rows that will be retrieved. The default is 0.1. Smaller values of this - setting bias the planner towards using <quote>fast start</> plans + setting bias the planner towards using <quote>fast start</quote> plans for cursors, which will retrieve the first few rows quickly while perhaps taking a long time to fetch all rows. Larger values put more emphasis on the total estimated time. At the maximum @@ -4209,7 +4209,7 @@ SELECT * FROM parent WHERE key = 2400; <varlistentry id="guc-from-collapse-limit" xreflabel="from_collapse_limit"> <term><varname>from_collapse_limit</varname> (<type>integer</type>) <indexterm> - <primary><varname>from_collapse_limit</> configuration parameter</primary> + <primary><varname>from_collapse_limit</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4232,14 +4232,14 @@ SELECT * FROM parent WHERE key = 2400; <varlistentry id="guc-join-collapse-limit" xreflabel="join_collapse_limit"> <term><varname>join_collapse_limit</varname> (<type>integer</type>) <indexterm> - <primary><varname>join_collapse_limit</> configuration parameter</primary> + <primary><varname>join_collapse_limit</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - The planner will rewrite explicit <literal>JOIN</> - constructs (except <literal>FULL JOIN</>s) into lists of - <literal>FROM</> items whenever a list of no more than this many items + The planner will rewrite explicit <literal>JOIN</literal> + constructs 
(except <literal>FULL JOIN</literal>s) into lists of + <literal>FROM</literal> items whenever a list of no more than this many items would result. Smaller values reduce planning time but might yield inferior query plans. </para> @@ -4248,7 +4248,7 @@ SELECT * FROM parent WHERE key = 2400; By default, this variable is set the same as <varname>from_collapse_limit</varname>, which is appropriate for most uses. Setting it to 1 prevents any reordering of - explicit <literal>JOIN</>s. Thus, the explicit join order + explicit <literal>JOIN</literal>s. Thus, the explicit join order specified in the query will be the actual order in which the relations are joined. Because the query planner does not always choose the optimal join order, advanced users can elect to @@ -4268,24 +4268,24 @@ SELECT * FROM parent WHERE key = 2400; <varlistentry id="guc-force-parallel-mode" xreflabel="force_parallel_mode"> <term><varname>force_parallel_mode</varname> (<type>enum</type>) <indexterm> - <primary><varname>force_parallel_mode</> configuration parameter</primary> + <primary><varname>force_parallel_mode</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Allows the use of parallel queries for testing purposes even in cases where no performance benefit is expected. 
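The join-order pinning described above for `join_collapse_limit` can be sketched like this (illustrative only; the catalog join merely provides three relations whose textual order the planner must then preserve):

```sql
-- Hypothetical sketch: with join_collapse_limit = 1 the planner keeps
-- the explicit JOIN order written in the query instead of reordering.
SET join_collapse_limit = 1;
EXPLAIN SELECT *
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
JOIN pg_type t ON t.oid = a.atttypid;
RESET join_collapse_limit;
```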
- The allowed values of <varname>force_parallel_mode</> are - <literal>off</> (use parallel mode only when it is expected to improve - performance), <literal>on</> (force parallel query for all queries - for which it is thought to be safe), and <literal>regress</> (like - <literal>on</>, but with additional behavior changes as explained + The allowed values of <varname>force_parallel_mode</varname> are + <literal>off</literal> (use parallel mode only when it is expected to improve + performance), <literal>on</literal> (force parallel query for all queries + for which it is thought to be safe), and <literal>regress</literal> (like + <literal>on</literal>, but with additional behavior changes as explained below). </para> <para> - More specifically, setting this value to <literal>on</> will add - a <literal>Gather</> node to the top of any query plan for which this + More specifically, setting this value to <literal>on</literal> will add + a <literal>Gather</literal> node to the top of any query plan for which this appears to be safe, so that the query runs inside of a parallel worker. Even when a parallel worker is not available or cannot be used, operations such as starting a subtransaction that would be prohibited @@ -4297,15 +4297,15 @@ SELECT * FROM parent WHERE key = 2400; </para> <para> - Setting this value to <literal>regress</> has all of the same effects - as setting it to <literal>on</> plus some additional effects that are + Setting this value to <literal>regress</literal> has all of the same effects + as setting it to <literal>on</literal> plus some additional effects that are intended to facilitate automated regression testing. Normally, messages from a parallel worker include a context line indicating that, - but a setting of <literal>regress</> suppresses this line so that the + but a setting of <literal>regress</literal> suppresses this line so that the output is the same as in non-parallel execution. 
Also, - the <literal>Gather</> nodes added to plans by this setting are hidden - in <literal>EXPLAIN</> output so that the output matches what - would be obtained if this setting were turned <literal>off</>. + the <literal>Gather</literal> nodes added to plans by this setting are hidden + in <literal>EXPLAIN</literal> output so that the output matches what + would be obtained if this setting were turned <literal>off</literal>. </para> </listitem> </varlistentry> @@ -4338,7 +4338,7 @@ SELECT * FROM parent WHERE key = 2400; <varlistentry id="guc-log-destination" xreflabel="log_destination"> <term><varname>log_destination</varname> (<type>string</type>) <indexterm> - <primary><varname>log_destination</> configuration parameter</primary> + <primary><varname>log_destination</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4351,13 +4351,13 @@ SELECT * FROM parent WHERE key = 2400; parameter to a list of desired log destinations separated by commas. The default is to log to <systemitem>stderr</systemitem> only. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> <para> - If <systemitem>csvlog</> is included in <varname>log_destination</>, + If <systemitem>csvlog</systemitem> is included in <varname>log_destination</varname>, log entries are output in <quote>comma separated - value</> (<acronym>CSV</>) format, which is convenient for + value</quote> (<acronym>CSV</acronym>) format, which is convenient for loading logs into programs. See <xref linkend="runtime-config-logging-csvlog"> for details. 
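Enabling CSV logging as described above is a configuration change, not a query-time setting; a hedged sketch (values illustrative, not part of this patch):

```sql
-- Hypothetical sketch: add csvlog alongside stderr. Both settings
-- require the logging collector; logging_collector itself can only
-- take effect at server start.
ALTER SYSTEM SET log_destination = 'stderr,csvlog';
ALTER SYSTEM SET logging_collector = on;  -- needs a restart to apply
```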
<xref linkend="guc-logging-collector"> must be enabled to generate @@ -4366,7 +4366,7 @@ SELECT * FROM parent WHERE key = 2400; <para> When either <systemitem>stderr</systemitem> or <systemitem>csvlog</systemitem> are included, the file - <filename>current_logfiles</> is created to record the location + <filename>current_logfiles</filename> is created to record the location of the log file(s) currently in use by the logging collector and the associated logging destination. This provides a convenient way to find the logs currently in use by the instance. Here is an example of @@ -4378,10 +4378,10 @@ csvlog log/postgresql.csv <filename>current_logfiles</filename> is recreated when a new log file is created as an effect of rotation, and - when <varname>log_destination</> is reloaded. It is removed when + when <varname>log_destination</varname> is reloaded. It is removed when neither <systemitem>stderr</systemitem> nor <systemitem>csvlog</systemitem> are included - in <varname>log_destination</>, and when the logging collector is + in <varname>log_destination</varname>, and when the logging collector is disabled. </para> @@ -4390,9 +4390,9 @@ csvlog log/postgresql.csv On most Unix systems, you will need to alter the configuration of your system's <application>syslog</application> daemon in order to make use of the <systemitem>syslog</systemitem> option for - <varname>log_destination</>. <productname>PostgreSQL</productname> + <varname>log_destination</varname>. <productname>PostgreSQL</productname> can log to <application>syslog</application> facilities - <literal>LOCAL0</> through <literal>LOCAL7</> (see <xref + <literal>LOCAL0</literal> through <literal>LOCAL7</literal> (see <xref linkend="guc-syslog-facility">), but the default <application>syslog</application> configuration on most platforms will discard all such messages. 
You will need to add something like: @@ -4404,7 +4404,7 @@ local0.* /var/log/postgresql </para> <para> On Windows, when you use the <literal>eventlog</literal> - option for <varname>log_destination</>, you should + option for <varname>log_destination</varname>, you should register an event source and its library with the operating system so that the Windows Event Viewer can display event log messages cleanly. @@ -4417,27 +4417,27 @@ local0.* /var/log/postgresql <varlistentry id="guc-logging-collector" xreflabel="logging_collector"> <term><varname>logging_collector</varname> (<type>boolean</type>) <indexterm> - <primary><varname>logging_collector</> configuration parameter</primary> + <primary><varname>logging_collector</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - This parameter enables the <firstterm>logging collector</>, which + This parameter enables the <firstterm>logging collector</firstterm>, which is a background process that captures log messages - sent to <systemitem>stderr</> and redirects them into log files. + sent to <systemitem>stderr</systemitem> and redirects them into log files. This approach is often more useful than - logging to <application>syslog</>, since some types of messages - might not appear in <application>syslog</> output. (One common + logging to <application>syslog</application>, since some types of messages + might not appear in <application>syslog</application> output. (One common example is dynamic-linker failure messages; another is error messages - produced by scripts such as <varname>archive_command</>.) + produced by scripts such as <varname>archive_command</varname>.) This parameter can only be set at server start. </para> <note> <para> - It is possible to log to <systemitem>stderr</> without using the + It is possible to log to <systemitem>stderr</systemitem> without using the logging collector; the log messages will just go to wherever the - server's <systemitem>stderr</> is directed. 
However, that method is + server's <systemitem>stderr</systemitem> is directed. However, that method is only suitable for low log volumes, since it provides no convenient way to rotate log files. Also, on some platforms not using the logging collector can result in lost or garbled log output, because @@ -4451,7 +4451,7 @@ local0.* /var/log/postgresql The logging collector is designed to never lose messages. This means that in case of extremely high load, server processes could be blocked while trying to send additional log messages when the - collector has fallen behind. In contrast, <application>syslog</> + collector has fallen behind. In contrast, <application>syslog</application> prefers to drop messages if it cannot write them, which means it may fail to log some messages in such cases but it will not block the rest of the system. @@ -4464,16 +4464,16 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-directory" xreflabel="log_directory"> <term><varname>log_directory</varname> (<type>string</type>) <indexterm> - <primary><varname>log_directory</> configuration parameter</primary> + <primary><varname>log_directory</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When <varname>logging_collector</> is enabled, + When <varname>logging_collector</varname> is enabled, this parameter determines the directory in which log files will be created. It can be specified as an absolute path, or relative to the cluster data directory. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. The default is <literal>log</literal>. 
</para> @@ -4483,7 +4483,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-filename" xreflabel="log_filename"> <term><varname>log_filename</varname> (<type>string</type>) <indexterm> - <primary><varname>log_filename</> configuration parameter</primary> + <primary><varname>log_filename</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4514,14 +4514,14 @@ local0.* /var/log/postgresql longer the case. </para> <para> - If CSV-format output is enabled in <varname>log_destination</>, - <literal>.csv</> will be appended to the timestamped + If CSV-format output is enabled in <varname>log_destination</varname>, + <literal>.csv</literal> will be appended to the timestamped log file name to create the file name for CSV-format output. - (If <varname>log_filename</> ends in <literal>.log</>, the suffix is + (If <varname>log_filename</varname> ends in <literal>.log</literal>, the suffix is replaced instead.) </para> <para> - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -4530,7 +4530,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-file-mode" xreflabel="log_file_mode"> <term><varname>log_file_mode</varname> (<type>integer</type>) <indexterm> - <primary><varname>log_file_mode</> configuration parameter</primary> + <primary><varname>log_file_mode</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4545,9 +4545,9 @@ local0.* /var/log/postgresql must start with a <literal>0</literal> (zero).) </para> <para> - The default permissions are <literal>0600</>, meaning only the + The default permissions are <literal>0600</literal>, meaning only the server owner can read or write the log files. 
The other commonly - useful setting is <literal>0640</>, allowing members of the owner's + useful setting is <literal>0640</literal>, allowing members of the owner's group to read the files. Note however that to make use of such a setting, you'll need to alter <xref linkend="guc-log-directory"> to store the files somewhere outside the cluster data directory. In @@ -4555,7 +4555,7 @@ local0.* /var/log/postgresql they might contain sensitive data. </para> <para> - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -4564,7 +4564,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-rotation-age" xreflabel="log_rotation_age"> <term><varname>log_rotation_age</varname> (<type>integer</type>) <indexterm> - <primary><varname>log_rotation_age</> configuration parameter</primary> + <primary><varname>log_rotation_age</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4574,7 +4574,7 @@ local0.* /var/log/postgresql After this many minutes have elapsed, a new log file will be created. Set to zero to disable time-based creation of new log files. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -4583,7 +4583,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-rotation-size" xreflabel="log_rotation_size"> <term><varname>log_rotation_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>log_rotation_size</> configuration parameter</primary> + <primary><varname>log_rotation_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4593,7 +4593,7 @@ local0.* /var/log/postgresql After this many kilobytes have been emitted into a log file, a new log file will be created. 
Set to zero to disable size-based creation of new log files. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -4602,7 +4602,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-truncate-on-rotation" xreflabel="log_truncate_on_rotation"> <term><varname>log_truncate_on_rotation</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_truncate_on_rotation</> configuration parameter</primary> + <primary><varname>log_truncate_on_rotation</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4617,7 +4617,7 @@ local0.* /var/log/postgresql a <varname>log_filename</varname> like <literal>postgresql-%H.log</literal> would result in generating twenty-four hourly log files and then cyclically overwriting them. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> <para> @@ -4635,7 +4635,7 @@ local0.* /var/log/postgresql <varname>log_truncate_on_rotation</varname> to <literal>on</literal>, <varname>log_rotation_age</varname> to <literal>60</literal>, and <varname>log_rotation_size</varname> to <literal>1000000</literal>. - Including <literal>%M</> in <varname>log_filename</varname> allows + Including <literal>%M</literal> in <varname>log_filename</varname> allows any size-driven rotations that might occur to select a file name different from the hour's initial file name. 
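The hourly, self-overwriting rotation scheme described in the `log_truncate_on_rotation` paragraph above can be written out as a `postgresql.conf` fragment. A sketch assembled from the values given in the text, not text from the patch:

```
log_filename = 'postgresql-%H.log'   # twenty-four hourly files, reused each day
log_truncate_on_rotation = on        # overwrite rather than append on time-based reuse
log_rotation_age = 60                # minutes
log_rotation_size = 1000000          # kilobytes; include %M in log_filename so
                                     # size-driven rotations pick a distinct name
```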
</para> @@ -4645,21 +4645,21 @@ local0.* /var/log/postgresql <varlistentry id="guc-syslog-facility" xreflabel="syslog_facility"> <term><varname>syslog_facility</varname> (<type>enum</type>) <indexterm> - <primary><varname>syslog_facility</> configuration parameter</primary> + <primary><varname>syslog_facility</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When logging to <application>syslog</> is enabled, this parameter + When logging to <application>syslog</application> is enabled, this parameter determines the <application>syslog</application> <quote>facility</quote> to be used. You can choose - from <literal>LOCAL0</>, <literal>LOCAL1</>, - <literal>LOCAL2</>, <literal>LOCAL3</>, <literal>LOCAL4</>, - <literal>LOCAL5</>, <literal>LOCAL6</>, <literal>LOCAL7</>; - the default is <literal>LOCAL0</>. See also the + from <literal>LOCAL0</literal>, <literal>LOCAL1</literal>, + <literal>LOCAL2</literal>, <literal>LOCAL3</literal>, <literal>LOCAL4</literal>, + <literal>LOCAL5</literal>, <literal>LOCAL6</literal>, <literal>LOCAL7</literal>; + the default is <literal>LOCAL0</literal>. See also the documentation of your system's <application>syslog</application> daemon. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
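The hunk-context line `local0.* /var/log/postgresql` visible above is the matching syslog side of `syslog_facility`. Paired sketches of both files, assuming a traditional syslogd/rsyslog setup:

```
# postgresql.conf
log_destination = 'syslog'
syslog_facility = 'LOCAL0'
syslog_ident   = 'postgres'

# /etc/syslog.conf (or rsyslog.conf): route facility local0 to a file
local0.*    /var/log/postgresql
```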
</para> </listitem> @@ -4668,17 +4668,17 @@ local0.* /var/log/postgresql <varlistentry id="guc-syslog-ident" xreflabel="syslog_ident"> <term><varname>syslog_ident</varname> (<type>string</type>) <indexterm> - <primary><varname>syslog_ident</> configuration parameter</primary> + <primary><varname>syslog_ident</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When logging to <application>syslog</> is enabled, this parameter + When logging to <application>syslog</application> is enabled, this parameter determines the program name used to identify <productname>PostgreSQL</productname> messages in <application>syslog</application> logs. The default is <literal>postgres</literal>. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -4687,7 +4687,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-syslog-sequence-numbers" xreflabel="syslog_sequence_numbers"> <term><varname>syslog_sequence_numbers</varname> (<type>boolean</type>) <indexterm> - <primary><varname>syslog_sequence_numbers</> configuration parameter</primary> + <primary><varname>syslog_sequence_numbers</varname> configuration parameter</primary> </indexterm> </term> @@ -4706,7 +4706,7 @@ local0.* /var/log/postgresql </para> <para> - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> </listitem> @@ -4715,12 +4715,12 @@ local0.* /var/log/postgresql <varlistentry id="guc-syslog-split-messages" xreflabel="syslog_split_messages"> <term><varname>syslog_split_messages</varname> (<type>boolean</type>) <indexterm> - <primary><varname>syslog_split_messages</> configuration parameter</primary> + <primary><varname>syslog_split_messages</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When logging to <application>syslog</> is enabled, this parameter + When logging to <application>syslog</application> is enabled, this parameter determines how messages are delivered to syslog. When on (the default), messages are split by lines, and long lines are split so that they will fit into 1024 bytes, which is a typical size limit for @@ -4739,7 +4739,7 @@ local0.* /var/log/postgresql </para> <para> - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -4748,16 +4748,16 @@ local0.* /var/log/postgresql <varlistentry id="guc-event-source" xreflabel="event_source"> <term><varname>event_source</varname> (<type>string</type>) <indexterm> - <primary><varname>event_source</> configuration parameter</primary> + <primary><varname>event_source</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When logging to <application>event log</> is enabled, this parameter + When logging to <application>event log</application> is enabled, this parameter determines the program name used to identify <productname>PostgreSQL</productname> messages in the log. The default is <literal>PostgreSQL</literal>. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> </listitem> @@ -4773,21 +4773,21 @@ local0.* /var/log/postgresql <varlistentry id="guc-client-min-messages" xreflabel="client_min_messages"> <term><varname>client_min_messages</varname> (<type>enum</type>) <indexterm> - <primary><varname>client_min_messages</> configuration parameter</primary> + <primary><varname>client_min_messages</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Controls which message levels are sent to the client. - Valid values are <literal>DEBUG5</>, - <literal>DEBUG4</>, <literal>DEBUG3</>, <literal>DEBUG2</>, - <literal>DEBUG1</>, <literal>LOG</>, <literal>NOTICE</>, - <literal>WARNING</>, <literal>ERROR</>, <literal>FATAL</>, - and <literal>PANIC</>. Each level + Valid values are <literal>DEBUG5</literal>, + <literal>DEBUG4</literal>, <literal>DEBUG3</literal>, <literal>DEBUG2</literal>, + <literal>DEBUG1</literal>, <literal>LOG</literal>, <literal>NOTICE</literal>, + <literal>WARNING</literal>, <literal>ERROR</literal>, <literal>FATAL</literal>, + and <literal>PANIC</literal>. Each level includes all the levels that follow it. The later the level, the fewer messages are sent. The default is - <literal>NOTICE</>. Note that <literal>LOG</> has a different - rank here than in <varname>log_min_messages</>. + <literal>NOTICE</literal>. Note that <literal>LOG</literal> has a different + rank here than in <varname>log_min_messages</varname>. </para> </listitem> </varlistentry> @@ -4795,21 +4795,21 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-min-messages" xreflabel="log_min_messages"> <term><varname>log_min_messages</varname> (<type>enum</type>) <indexterm> - <primary><varname>log_min_messages</> configuration parameter</primary> + <primary><varname>log_min_messages</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Controls which message levels are written to the server log. 
- Valid values are <literal>DEBUG5</>, <literal>DEBUG4</>, - <literal>DEBUG3</>, <literal>DEBUG2</>, <literal>DEBUG1</>, - <literal>INFO</>, <literal>NOTICE</>, <literal>WARNING</>, - <literal>ERROR</>, <literal>LOG</>, <literal>FATAL</>, and - <literal>PANIC</>. Each level includes all the levels that + Valid values are <literal>DEBUG5</literal>, <literal>DEBUG4</literal>, + <literal>DEBUG3</literal>, <literal>DEBUG2</literal>, <literal>DEBUG1</literal>, + <literal>INFO</literal>, <literal>NOTICE</literal>, <literal>WARNING</literal>, + <literal>ERROR</literal>, <literal>LOG</literal>, <literal>FATAL</literal>, and + <literal>PANIC</literal>. Each level includes all the levels that follow it. The later the level, the fewer messages are sent - to the log. The default is <literal>WARNING</>. Note that - <literal>LOG</> has a different rank here than in - <varname>client_min_messages</>. + to the log. The default is <literal>WARNING</literal>. Note that + <literal>LOG</literal> has a different rank here than in + <varname>client_min_messages</varname>. Only superusers can change this setting. 
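The differing rank of `LOG` in `client_min_messages` versus `log_min_messages` can be made concrete with a small sketch. Python, transcribing the two value lists from the text; `is_reported` is a hypothetical helper, not a PostgreSQL API:

```python
# Orderings transcribed from the documentation text; note where LOG sits in each.
CLIENT_LEVELS = ["DEBUG5", "DEBUG4", "DEBUG3", "DEBUG2", "DEBUG1",
                 "LOG", "NOTICE", "WARNING", "ERROR", "FATAL", "PANIC"]
SERVER_LEVELS = ["DEBUG5", "DEBUG4", "DEBUG3", "DEBUG2", "DEBUG1",
                 "INFO", "NOTICE", "WARNING", "ERROR", "LOG", "FATAL", "PANIC"]

def is_reported(level, threshold, ordering):
    # "Each level includes all the levels that follow it": a message passes
    # when its level is at or after the configured threshold in the ordering.
    return ordering.index(level) >= ordering.index(threshold)

# With the defaults (client NOTICE, server WARNING), LOG messages reach the
# server log but are not sent to the client:
print(is_reported("LOG", "NOTICE", CLIENT_LEVELS))   # False
print(is_reported("LOG", "WARNING", SERVER_LEVELS))  # True
```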
</para> </listitem> @@ -4818,7 +4818,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-min-error-statement" xreflabel="log_min_error_statement"> <term><varname>log_min_error_statement</varname> (<type>enum</type>) <indexterm> - <primary><varname>log_min_error_statement</> configuration parameter</primary> + <primary><varname>log_min_error_statement</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4846,7 +4846,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-min-duration-statement" xreflabel="log_min_duration_statement"> <term><varname>log_min_duration_statement</varname> (<type>integer</type>) <indexterm> - <primary><varname>log_min_duration_statement</> configuration parameter</primary> + <primary><varname>log_min_duration_statement</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -4872,9 +4872,9 @@ local0.* /var/log/postgresql When using this option together with <xref linkend="guc-log-statement">, the text of statements that are logged because of - <varname>log_statement</> will not be repeated in the + <varname>log_statement</varname> will not be repeated in the duration log message. - If you are not using <application>syslog</>, it is recommended + If you are not using <application>syslog</application>, it is recommended that you log the PID or session ID using <xref linkend="guc-log-line-prefix"> so that you can link the statement message to the later @@ -4888,7 +4888,7 @@ local0.* /var/log/postgresql <para> <xref linkend="runtime-config-severity-levels"> explains the message - severity levels used by <productname>PostgreSQL</>. If logging output + severity levels used by <productname>PostgreSQL</productname>. If logging output is sent to <systemitem>syslog</systemitem> or Windows' <systemitem>eventlog</systemitem>, the severity levels are translated as shown in the table. 
@@ -4901,73 +4901,73 @@ local0.* /var/log/postgresql <row> <entry>Severity</entry> <entry>Usage</entry> - <entry><systemitem>syslog</></entry> - <entry><systemitem>eventlog</></entry> + <entry><systemitem>syslog</systemitem></entry> + <entry><systemitem>eventlog</systemitem></entry> </row> </thead> <tbody> <row> - <entry><literal>DEBUG1..DEBUG5</></entry> + <entry><literal>DEBUG1..DEBUG5</literal></entry> <entry>Provides successively-more-detailed information for use by developers.</entry> - <entry><literal>DEBUG</></entry> - <entry><literal>INFORMATION</></entry> + <entry><literal>DEBUG</literal></entry> + <entry><literal>INFORMATION</literal></entry> </row> <row> - <entry><literal>INFO</></entry> + <entry><literal>INFO</literal></entry> <entry>Provides information implicitly requested by the user, - e.g., output from <command>VACUUM VERBOSE</>.</entry> - <entry><literal>INFO</></entry> - <entry><literal>INFORMATION</></entry> + e.g., output from <command>VACUUM VERBOSE</command>.</entry> + <entry><literal>INFO</literal></entry> + <entry><literal>INFORMATION</literal></entry> </row> <row> - <entry><literal>NOTICE</></entry> + <entry><literal>NOTICE</literal></entry> <entry>Provides information that might be helpful to users, e.g., notice of truncation of long identifiers.</entry> - <entry><literal>NOTICE</></entry> - <entry><literal>INFORMATION</></entry> + <entry><literal>NOTICE</literal></entry> + <entry><literal>INFORMATION</literal></entry> </row> <row> - <entry><literal>WARNING</></entry> - <entry>Provides warnings of likely problems, e.g., <command>COMMIT</> + <entry><literal>WARNING</literal></entry> + <entry>Provides warnings of likely problems, e.g., <command>COMMIT</command> outside a transaction block.</entry> - <entry><literal>NOTICE</></entry> - <entry><literal>WARNING</></entry> + <entry><literal>NOTICE</literal></entry> + <entry><literal>WARNING</literal></entry> </row> <row> - <entry><literal>ERROR</></entry> + 
<entry><literal>ERROR</literal></entry> <entry>Reports an error that caused the current command to abort.</entry> - <entry><literal>WARNING</></entry> - <entry><literal>ERROR</></entry> + <entry><literal>WARNING</literal></entry> + <entry><literal>ERROR</literal></entry> </row> <row> - <entry><literal>LOG</></entry> + <entry><literal>LOG</literal></entry> <entry>Reports information of interest to administrators, e.g., checkpoint activity.</entry> - <entry><literal>INFO</></entry> - <entry><literal>INFORMATION</></entry> + <entry><literal>INFO</literal></entry> + <entry><literal>INFORMATION</literal></entry> </row> <row> - <entry><literal>FATAL</></entry> + <entry><literal>FATAL</literal></entry> <entry>Reports an error that caused the current session to abort.</entry> - <entry><literal>ERR</></entry> - <entry><literal>ERROR</></entry> + <entry><literal>ERR</literal></entry> + <entry><literal>ERROR</literal></entry> </row> <row> - <entry><literal>PANIC</></entry> + <entry><literal>PANIC</literal></entry> <entry>Reports an error that caused all database sessions to abort.</entry> - <entry><literal>CRIT</></entry> - <entry><literal>ERROR</></entry> + <entry><literal>CRIT</literal></entry> + <entry><literal>ERROR</literal></entry> </row> </tbody> </tgroup> @@ -4982,15 +4982,15 @@ local0.* /var/log/postgresql <varlistentry id="guc-application-name" xreflabel="application_name"> <term><varname>application_name</varname> (<type>string</type>) <indexterm> - <primary><varname>application_name</> configuration parameter</primary> + <primary><varname>application_name</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> The <varname>application_name</varname> can be any string of less than - <symbol>NAMEDATALEN</> characters (64 characters in a standard build). + <symbol>NAMEDATALEN</symbol> characters (64 characters in a standard build). It is typically set by an application upon connection to the server. 
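The severity-translation table above can be transcribed as a pair of lookups (Python dicts; the names are ours, the values are copied from the table):

```python
# syslog / eventlog translations, copied from the severity table above.
SEVERITY_TO_SYSLOG = {
    "DEBUG":   "DEBUG",    # DEBUG1..DEBUG5
    "INFO":    "INFO",
    "NOTICE":  "NOTICE",
    "WARNING": "NOTICE",
    "ERROR":   "WARNING",
    "LOG":     "INFO",
    "FATAL":   "ERR",
    "PANIC":   "CRIT",
}
SEVERITY_TO_EVENTLOG = {
    "DEBUG": "INFORMATION", "INFO": "INFORMATION", "NOTICE": "INFORMATION",
    "WARNING": "WARNING", "ERROR": "ERROR", "LOG": "INFORMATION",
    "FATAL": "ERROR", "PANIC": "ERROR",
}
print(SEVERITY_TO_SYSLOG["ERROR"])  # WARNING -- note the demotion in syslog
```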
- The name will be displayed in the <structname>pg_stat_activity</> view + The name will be displayed in the <structname>pg_stat_activity</structname> view and included in CSV log entries. It can also be included in regular log entries via the <xref linkend="guc-log-line-prefix"> parameter. Only printable ASCII characters may be used in the @@ -5003,17 +5003,17 @@ local0.* /var/log/postgresql <varlistentry> <term><varname>debug_print_parse</varname> (<type>boolean</type>) <indexterm> - <primary><varname>debug_print_parse</> configuration parameter</primary> + <primary><varname>debug_print_parse</varname> configuration parameter</primary> </indexterm> </term> <term><varname>debug_print_rewritten</varname> (<type>boolean</type>) <indexterm> - <primary><varname>debug_print_rewritten</> configuration parameter</primary> + <primary><varname>debug_print_rewritten</varname> configuration parameter</primary> </indexterm> </term> <term><varname>debug_print_plan</varname> (<type>boolean</type>) <indexterm> - <primary><varname>debug_print_plan</> configuration parameter</primary> + <primary><varname>debug_print_plan</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5021,7 +5021,7 @@ local0.* /var/log/postgresql These parameters enable various debugging output to be emitted. When set, they print the resulting parse tree, the query rewriter output, or the execution plan for each executed query. - These messages are emitted at <literal>LOG</> message level, so by + These messages are emitted at <literal>LOG</literal> message level, so by default they will appear in the server log but will not be sent to the client. 
You can change that by adjusting <xref linkend="guc-client-min-messages"> and/or @@ -5034,7 +5034,7 @@ local0.* /var/log/postgresql <varlistentry> <term><varname>debug_pretty_print</varname> (<type>boolean</type>) <indexterm> - <primary><varname>debug_pretty_print</> configuration parameter</primary> + <primary><varname>debug_pretty_print</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5043,7 +5043,7 @@ local0.* /var/log/postgresql produced by <varname>debug_print_parse</varname>, <varname>debug_print_rewritten</varname>, or <varname>debug_print_plan</varname>. This results in more readable - but much longer output than the <quote>compact</> format used when + but much longer output than the <quote>compact</quote> format used when it is off. It is on by default. </para> </listitem> @@ -5052,7 +5052,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-checkpoints" xreflabel="log_checkpoints"> <term><varname>log_checkpoints</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_checkpoints</> configuration parameter</primary> + <primary><varname>log_checkpoints</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5060,7 +5060,7 @@ local0.* /var/log/postgresql Causes checkpoints and restartpoints to be logged in the server log. Some statistics are included in the log messages, including the number of buffers written and the time spent writing them. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. The default is off. 
</para> </listitem> @@ -5069,7 +5069,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-connections" xreflabel="log_connections"> <term><varname>log_connections</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_connections</> configuration parameter</primary> + <primary><varname>log_connections</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5078,14 +5078,14 @@ local0.* /var/log/postgresql as well as successful completion of client authentication. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. - The default is <literal>off</>. + The default is <literal>off</literal>. </para> <note> <para> - Some client programs, like <application>psql</>, attempt + Some client programs, like <application>psql</application>, attempt to connect twice while determining if a password is required, so - duplicate <quote>connection received</> messages do not + duplicate <quote>connection received</quote> messages do not necessarily indicate a problem. </para> </note> @@ -5095,7 +5095,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-disconnections" xreflabel="log_disconnections"> <term><varname>log_disconnections</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_disconnections</> configuration parameter</primary> + <primary><varname>log_disconnections</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5105,7 +5105,7 @@ local0.* /var/log/postgresql plus the duration of the session. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. - The default is <literal>off</>. + The default is <literal>off</literal>. 
</para> </listitem> </varlistentry> @@ -5114,13 +5114,13 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-duration" xreflabel="log_duration"> <term><varname>log_duration</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_duration</> configuration parameter</primary> + <primary><varname>log_duration</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Causes the duration of every completed statement to be logged. - The default is <literal>off</>. + The default is <literal>off</literal>. Only superusers can change this setting. </para> @@ -5133,10 +5133,10 @@ local0.* /var/log/postgresql <para> The difference between setting this option and setting <xref linkend="guc-log-min-duration-statement"> to zero is that - exceeding <varname>log_min_duration_statement</> forces the text of + exceeding <varname>log_min_duration_statement</varname> forces the text of the query to be logged, but this option doesn't. Thus, if - <varname>log_duration</> is <literal>on</> and - <varname>log_min_duration_statement</> has a positive value, all + <varname>log_duration</varname> is <literal>on</literal> and + <varname>log_min_duration_statement</varname> has a positive value, all durations are logged but the query text is included only for statements exceeding the threshold. This behavior can be useful for gathering statistics in high-load installations. @@ -5148,18 +5148,18 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-error-verbosity" xreflabel="log_error_verbosity"> <term><varname>log_error_verbosity</varname> (<type>enum</type>) <indexterm> - <primary><varname>log_error_verbosity</> configuration parameter</primary> + <primary><varname>log_error_verbosity</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Controls the amount of detail written in the server log for each - message that is logged. 
Valid values are <literal>TERSE</>, - <literal>DEFAULT</>, and <literal>VERBOSE</>, each adding more - fields to displayed messages. <literal>TERSE</> excludes - the logging of <literal>DETAIL</>, <literal>HINT</>, - <literal>QUERY</>, and <literal>CONTEXT</> error information. - <literal>VERBOSE</> output includes the <symbol>SQLSTATE</> error + message that is logged. Valid values are <literal>TERSE</literal>, + <literal>DEFAULT</literal>, and <literal>VERBOSE</literal>, each adding more + fields to displayed messages. <literal>TERSE</literal> excludes + the logging of <literal>DETAIL</literal>, <literal>HINT</literal>, + <literal>QUERY</literal>, and <literal>CONTEXT</literal> error information. + <literal>VERBOSE</literal> output includes the <symbol>SQLSTATE</symbol> error code (see also <xref linkend="errcodes-appendix">) and the source code file name, function name, and line number that generated the error. Only superusers can change this setting. @@ -5170,7 +5170,7 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-hostname" xreflabel="log_hostname"> <term><varname>log_hostname</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_hostname</> configuration parameter</primary> + <primary><varname>log_hostname</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5179,7 +5179,7 @@ local0.* /var/log/postgresql connecting host. Turning this parameter on causes logging of the host name as well. Note that depending on your host name resolution setup this might impose a non-negligible performance penalty. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> </listitem> @@ -5188,14 +5188,14 @@ local0.* /var/log/postgresql <varlistentry id="guc-log-line-prefix" xreflabel="log_line_prefix"> <term><varname>log_line_prefix</varname> (<type>string</type>) <indexterm> - <primary><varname>log_line_prefix</> configuration parameter</primary> + <primary><varname>log_line_prefix</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - This is a <function>printf</>-style string that is output at the + This is a <function>printf</function>-style string that is output at the beginning of each log line. - <literal>%</> characters begin <quote>escape sequences</> + <literal>%</literal> characters begin <quote>escape sequences</quote> that are replaced with status information as outlined below. Unrecognized escapes are ignored. Other characters are copied straight to the log line. Some escapes are @@ -5207,9 +5207,9 @@ local0.* /var/log/postgresql right with spaces to give it a minimum width, whereas a positive value will pad on the left. Padding can be useful to aid human readability in log files. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. The default is - <literal>'%m [%p] '</> which logs a time stamp and the process ID. + <literal>'%m [%p] '</literal> which logs a time stamp and the process ID. <informaltable> <tgroup cols="3"> @@ -5310,19 +5310,19 @@ local0.* /var/log/postgresql </row> <row> <entry><literal>%%</literal></entry> - <entry>Literal <literal>%</></entry> + <entry>Literal <literal>%</literal></entry> <entry>no</entry> </row> </tbody> </tgroup> </informaltable> - The <literal>%c</> escape prints a quasi-unique session identifier, + The <literal>%c</literal> escape prints a quasi-unique session identifier, consisting of two 4-byte hexadecimal numbers (without leading zeros) separated by a dot. 
The numbers are the process start time and the - process ID, so <literal>%c</> can also be used as a space saving way + process ID, so <literal>%c</literal> can also be used as a space saving way of printing those items. For example, to generate the session - identifier from <literal>pg_stat_activity</>, use this query: + identifier from <literal>pg_stat_activity</literal>, use this query: <programlisting> SELECT to_hex(trunc(EXTRACT(EPOCH FROM backend_start))::integer) || '.' || to_hex(pid) @@ -5333,7 +5333,7 @@ FROM pg_stat_activity; <tip> <para> - If you set a nonempty value for <varname>log_line_prefix</>, + If you set a nonempty value for <varname>log_line_prefix</varname>, you should usually make its last character be a space, to provide visual separation from the rest of the log line. A punctuation character can be used too. @@ -5342,15 +5342,15 @@ FROM pg_stat_activity; <tip> <para> - <application>Syslog</> produces its own + <application>Syslog</application> produces its own time stamp and process ID information, so you probably do not want to - include those escapes if you are logging to <application>syslog</>. + include those escapes if you are logging to <application>syslog</application>. </para> </tip> <tip> <para> - The <literal>%q</> escape is useful when including information that is + The <literal>%q</literal> escape is useful when including information that is only available in session (backend) context like user or database name. 
For example: <programlisting> @@ -5364,7 +5364,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' <varlistentry id="guc-log-lock-waits" xreflabel="log_lock_waits"> <term><varname>log_lock_waits</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_lock_waits</> configuration parameter</primary> + <primary><varname>log_lock_waits</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5372,7 +5372,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Controls whether a log message is produced when a session waits longer than <xref linkend="guc-deadlock-timeout"> to acquire a lock. This is useful in determining if lock waits are causing - poor performance. The default is <literal>off</>. + poor performance. The default is <literal>off</literal>. Only superusers can change this setting. </para> </listitem> @@ -5381,22 +5381,22 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' <varlistentry id="guc-log-statement" xreflabel="log_statement"> <term><varname>log_statement</varname> (<type>enum</type>) <indexterm> - <primary><varname>log_statement</> configuration parameter</primary> + <primary><varname>log_statement</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Controls which SQL statements are logged. Valid values are - <literal>none</> (off), <literal>ddl</>, <literal>mod</>, and - <literal>all</> (all statements). <literal>ddl</> logs all data definition - statements, such as <command>CREATE</>, <command>ALTER</>, and - <command>DROP</> statements. <literal>mod</> logs all - <literal>ddl</> statements, plus data-modifying statements - such as <command>INSERT</>, - <command>UPDATE</>, <command>DELETE</>, <command>TRUNCATE</>, - and <command>COPY FROM</>. - <command>PREPARE</>, <command>EXECUTE</>, and - <command>EXPLAIN ANALYZE</> statements are also logged if their + <literal>none</literal> (off), <literal>ddl</literal>, <literal>mod</literal>, and + <literal>all</literal> (all statements). 
<literal>ddl</literal> logs all data definition + statements, such as <command>CREATE</command>, <command>ALTER</command>, and + <command>DROP</command> statements. <literal>mod</literal> logs all + <literal>ddl</literal> statements, plus data-modifying statements + such as <command>INSERT</command>, + <command>UPDATE</command>, <command>DELETE</command>, <command>TRUNCATE</command>, + and <command>COPY FROM</command>. + <command>PREPARE</command>, <command>EXECUTE</command>, and + <command>EXPLAIN ANALYZE</command> statements are also logged if their contained command is of an appropriate type. For clients using extended query protocol, logging occurs when an Execute message is received, and values of the Bind parameters are included @@ -5404,20 +5404,20 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' </para> <para> - The default is <literal>none</>. Only superusers can change this + The default is <literal>none</literal>. Only superusers can change this setting. </para> <note> <para> Statements that contain simple syntax errors are not logged - even by the <varname>log_statement</> = <literal>all</> setting, + even by the <varname>log_statement</varname> = <literal>all</literal> setting, because the log message is emitted only after basic parsing has been done to determine the statement type. In the case of extended query protocol, this setting likewise does not log statements that fail before the Execute phase (i.e., during parse analysis or - planning). Set <varname>log_min_error_statement</> to - <literal>ERROR</> (or lower) to log such statements. + planning). Set <varname>log_min_error_statement</varname> to + <literal>ERROR</literal> (or lower) to log such statements. 
</para> </note> </listitem> @@ -5426,14 +5426,14 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' <varlistentry id="guc-log-replication-commands" xreflabel="log_replication_commands"> <term><varname>log_replication_commands</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_replication_commands</> configuration parameter</primary> + <primary><varname>log_replication_commands</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Causes each replication command to be logged in the server log. See <xref linkend="protocol-replication"> for more information about - replication command. The default value is <literal>off</>. + replication command. The default value is <literal>off</literal>. Only superusers can change this setting. </para> </listitem> @@ -5442,7 +5442,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' <varlistentry id="guc-log-temp-files" xreflabel="log_temp_files"> <term><varname>log_temp_files</varname> (<type>integer</type>) <indexterm> - <primary><varname>log_temp_files</> configuration parameter</primary> + <primary><varname>log_temp_files</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5463,7 +5463,7 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' <varlistentry id="guc-log-timezone" xreflabel="log_timezone"> <term><varname>log_timezone</varname> (<type>string</type>) <indexterm> - <primary><varname>log_timezone</> configuration parameter</primary> + <primary><varname>log_timezone</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5471,11 +5471,11 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' Sets the time zone used for timestamps written in the server log. Unlike <xref linkend="guc-timezone">, this value is cluster-wide, so that all sessions will report timestamps consistently. 
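Returning to the `%c` escape under `log_line_prefix`: the quasi-unique session identifier (hex process start time, a dot, hex PID) can be recomputed outside SQL as well. A Python sketch mirroring the `to_hex` query shown earlier; the sample values are made up:

```python
def session_id(backend_start_epoch: int, pid: int) -> str:
    # Mirrors to_hex(trunc(EXTRACT(EPOCH FROM backend_start))::integer)
    #         || '.' || to_hex(pid):
    # two hexadecimal numbers without leading zeros, separated by a dot.
    return f"{backend_start_epoch:x}.{pid:x}"

# Hypothetical backend: started at epoch 1500000000 with PID 1234.
print(session_id(1500000000, 1234))  # 59682f00.4d2
```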
- The built-in default is <literal>GMT</>, but that is typically - overridden in <filename>postgresql.conf</>; <application>initdb</> + The built-in default is <literal>GMT</literal>, but that is typically + overridden in <filename>postgresql.conf</filename>; <application>initdb</application> will install a setting there corresponding to its system environment. See <xref linkend="datatype-timezones"> for more information. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -5487,10 +5487,10 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' <title>Using CSV-Format Log Output</title> <para> - Including <literal>csvlog</> in the <varname>log_destination</> list + Including <literal>csvlog</literal> in the <varname>log_destination</varname> list provides a convenient way to import log files into a database table. This option emits log lines in comma-separated-values - (<acronym>CSV</>) format, + (<acronym>CSV</acronym>) format, with these columns: time stamp with milliseconds, user name, @@ -5512,10 +5512,10 @@ log_line_prefix = '%m [%p] %q%u@%d/%a ' character count of the error position therein, error context, user query that led to the error (if any and enabled by - <varname>log_min_error_statement</>), + <varname>log_min_error_statement</varname>), character count of the error position therein, location of the error in the PostgreSQL source code - (if <varname>log_error_verbosity</> is set to <literal>verbose</>), + (if <varname>log_error_verbosity</varname> is set to <literal>verbose</literal>), and application name. 
Here is a sample table definition for storing CSV-format log output: @@ -5551,7 +5551,7 @@ CREATE TABLE postgres_log </para> <para> - To import a log file into this table, use the <command>COPY FROM</> + To import a log file into this table, use the <command>COPY FROM</command> command: <programlisting> @@ -5567,7 +5567,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <listitem> <para> Set <varname>log_filename</varname> and - <varname>log_rotation_age</> to provide a consistent, + <varname>log_rotation_age</varname> to provide a consistent, predictable naming scheme for your log files. This lets you predict what the file name will be and know when an individual log file is complete and therefore ready to be imported. @@ -5584,7 +5584,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <listitem> <para> - Set <varname>log_truncate_on_rotation</varname> to <literal>on</> so + Set <varname>log_truncate_on_rotation</varname> to <literal>on</literal> so that old log data isn't mixed with the new in the same file. </para> </listitem> @@ -5593,14 +5593,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <para> The table definition above includes a primary key specification. This is useful to protect against accidentally importing the same - information twice. The <command>COPY</> command commits all of the + information twice. The <command>COPY</command> command commits all of the data it imports at one time, so any error will cause the entire import to fail. If you import a partial log file and later import the file again when it is complete, the primary key violation will cause the import to fail. Wait until the log is complete and closed before importing. This procedure will also protect against accidentally importing a partial line that hasn't been completely - written, which would also cause <command>COPY</> to fail. + written, which would also cause <command>COPY</command> to fail. 
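The advice above about waiting for a complete, closed log file can be checked from a client-side script before running <command>COPY</command>. A minimal sketch (the file name is an illustrative assumption, not part of this commit): Python's csv module applies the same quoting rules the server uses for csvlog, so a log message containing embedded commas or newlines still counts as one record, where naive line counting would overcount.

```python
import csv

def count_csvlog_records(path):
    """Count complete records in a csvlog file, honoring CSV quoting.

    The server quotes any field containing commas, quotes, or newlines,
    so csv.reader sees one record per log entry even when the message
    itself spans several physical lines.
    """
    with open(path, newline="") as f:
        return sum(1 for _ in csv.reader(f))
```

A mismatch between this count and the number of rows <command>COPY</command> later reports would suggest the file was still being written when it was read.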
</para> </listitem> </orderedlist> @@ -5613,7 +5613,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <para> These settings control how process titles of server processes are modified. Process titles are typically viewed using programs like - <application>ps</> or, on Windows, <application>Process Explorer</>. + <application>ps</application> or, on Windows, <application>Process Explorer</application>. See <xref linkend="monitoring-ps"> for details. </para> @@ -5621,18 +5621,18 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-cluster-name" xreflabel="cluster_name"> <term><varname>cluster_name</varname> (<type>string</type>) <indexterm> - <primary><varname>cluster_name</> configuration parameter</primary> + <primary><varname>cluster_name</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Sets the cluster name that appears in the process title for all server processes in this cluster. The name can be any string of less - than <symbol>NAMEDATALEN</> characters (64 characters in a standard + than <symbol>NAMEDATALEN</symbol> characters (64 characters in a standard build). Only printable ASCII characters may be used in the <varname>cluster_name</varname> value. Other characters will be replaced with question marks (<literal>?</literal>). No name is shown - if this parameter is set to the empty string <literal>''</> (which is + if this parameter is set to the empty string <literal>''</literal> (which is the default). This parameter can only be set at server start. 
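The masking rule described for <varname>cluster_name</varname> can be sketched client-side. This is a hedged illustration of the documented behavior (non-printable-ASCII characters shown as <literal>?</literal>, length bounded by <symbol>NAMEDATALEN</symbol>), not the server's actual implementation:

```python
def displayed_cluster_name(name: str, namedatalen: int = 64) -> str:
    """Approximate how cluster_name appears in process titles:
    characters outside printable ASCII are replaced with '?', and the
    name must be shorter than NAMEDATALEN (64 in a standard build).
    """
    # Printable ASCII is the range from space (0x20) to tilde (0x7E).
    masked = "".join(c if " " <= c <= "~" else "?" for c in name)
    return masked[: namedatalen - 1]
```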
</para> </listitem> @@ -5641,15 +5641,15 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-update-process-title" xreflabel="update_process_title"> <term><varname>update_process_title</varname> (<type>boolean</type>) <indexterm> - <primary><varname>update_process_title</> configuration parameter</primary> + <primary><varname>update_process_title</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Enables updating of the process title every time a new SQL command is received by the server. - This setting defaults to <literal>on</> on most platforms, but it - defaults to <literal>off</> on Windows due to that platform's larger + This setting defaults to <literal>on</literal> on most platforms, but it + defaults to <literal>off</literal> on Windows due to that platform's larger overhead for updating the process title. Only superusers can change this setting. </para> @@ -5678,7 +5678,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-track-activities" xreflabel="track_activities"> <term><varname>track_activities</varname> (<type>boolean</type>) <indexterm> - <primary><varname>track_activities</> configuration parameter</primary> + <primary><varname>track_activities</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5698,14 +5698,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-track-activity-query-size" xreflabel="track_activity_query_size"> <term><varname>track_activity_query_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>track_activity_query_size</> configuration parameter</primary> + <primary><varname>track_activity_query_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the number of bytes reserved to track the currently executing command for each active session, for the - <structname>pg_stat_activity</>.<structfield>query</> 
field. + <structname>pg_stat_activity</structname>.<structfield>query</structfield> field. The default value is 1024. This parameter can only be set at server start. </para> @@ -5715,7 +5715,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-track-counts" xreflabel="track_counts"> <term><varname>track_counts</varname> (<type>boolean</type>) <indexterm> - <primary><varname>track_counts</> configuration parameter</primary> + <primary><varname>track_counts</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5731,7 +5731,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-track-io-timing" xreflabel="track_io_timing"> <term><varname>track_io_timing</varname> (<type>boolean</type>) <indexterm> - <primary><varname>track_io_timing</> configuration parameter</primary> + <primary><varname>track_io_timing</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5743,7 +5743,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; measure the overhead of timing on your system. I/O timing information is displayed in <xref linkend="pg-stat-database-view">, in the output of - <xref linkend="sql-explain"> when the <literal>BUFFERS</> option is + <xref linkend="sql-explain"> when the <literal>BUFFERS</literal> option is used, and by <xref linkend="pgstatstatements">. Only superusers can change this setting. 
</para> @@ -5753,7 +5753,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-track-functions" xreflabel="track_functions"> <term><varname>track_functions</varname> (<type>enum</type>) <indexterm> - <primary><varname>track_functions</> configuration parameter</primary> + <primary><varname>track_functions</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5767,7 +5767,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <note> <para> - SQL-language functions that are simple enough to be <quote>inlined</> + SQL-language functions that are simple enough to be <quote>inlined</quote> into the calling query will not be tracked, regardless of this setting. </para> @@ -5778,7 +5778,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-stats-temp-directory" xreflabel="stats_temp_directory"> <term><varname>stats_temp_directory</varname> (<type>string</type>) <indexterm> - <primary><varname>stats_temp_directory</> configuration parameter</primary> + <primary><varname>stats_temp_directory</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5788,7 +5788,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; is <filename>pg_stat_tmp</filename>. Pointing this at a RAM-based file system will decrease physical I/O requirements and can lead to improved performance. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> </listitem> @@ -5804,29 +5804,29 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry> <term><varname>log_statement_stats</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_statement_stats</> configuration parameter</primary> + <primary><varname>log_statement_stats</varname> configuration parameter</primary> </indexterm> </term> <term><varname>log_parser_stats</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_parser_stats</> configuration parameter</primary> + <primary><varname>log_parser_stats</varname> configuration parameter</primary> </indexterm> </term> <term><varname>log_planner_stats</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_planner_stats</> configuration parameter</primary> + <primary><varname>log_planner_stats</varname> configuration parameter</primary> </indexterm> </term> <term><varname>log_executor_stats</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_executor_stats</> configuration parameter</primary> + <primary><varname>log_executor_stats</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> For each query, output performance statistics of the respective module to the server log. This is a crude profiling - instrument, similar to the Unix <function>getrusage()</> operating + instrument, similar to the Unix <function>getrusage()</function> operating system facility. <varname>log_statement_stats</varname> reports total statement statistics, while the others report per-module statistics. <varname>log_statement_stats</varname> cannot be enabled together with @@ -5850,7 +5850,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; </indexterm> <para> - These settings control the behavior of the <firstterm>autovacuum</> + These settings control the behavior of the <firstterm>autovacuum</firstterm> feature. Refer to <xref linkend="autovacuum"> for more information. 
Note that many of these settings can be overridden on a per-table basis; see <xref linkend="sql-createtable-storage-parameters" @@ -5862,7 +5862,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum" xreflabel="autovacuum"> <term><varname>autovacuum</varname> (<type>boolean</type>) <indexterm> - <primary><varname>autovacuum</> configuration parameter</primary> + <primary><varname>autovacuum</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5871,7 +5871,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; autovacuum launcher daemon. This is on by default; however, <xref linkend="guc-track-counts"> must also be enabled for autovacuum to work. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line; however, autovacuuming can be disabled for individual tables by changing table storage parameters. </para> @@ -5887,7 +5887,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-log-autovacuum-min-duration" xreflabel="log_autovacuum_min_duration"> <term><varname>log_autovacuum_min_duration</varname> (<type>integer</type>) <indexterm> - <primary><varname>log_autovacuum_min_duration</> configuration parameter</primary> + <primary><varname>log_autovacuum_min_duration</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5902,7 +5902,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; logged if an autovacuum action is skipped due to the existence of a conflicting lock. Enabling this parameter can be helpful in tracking autovacuum activity. 
This parameter can only be set in - the <filename>postgresql.conf</> file or on the server command line; + the <filename>postgresql.conf</filename> file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. </para> @@ -5912,7 +5912,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum-max-workers" xreflabel="autovacuum_max_workers"> <term><varname>autovacuum_max_workers</varname> (<type>integer</type>) <indexterm> - <primary><varname>autovacuum_max_workers</> configuration parameter</primary> + <primary><varname>autovacuum_max_workers</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -5927,17 +5927,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum-naptime" xreflabel="autovacuum_naptime"> <term><varname>autovacuum_naptime</varname> (<type>integer</type>) <indexterm> - <primary><varname>autovacuum_naptime</> configuration parameter</primary> + <primary><varname>autovacuum_naptime</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the minimum delay between autovacuum runs on any given database. In each round the daemon examines the - database and issues <command>VACUUM</> and <command>ANALYZE</> commands + database and issues <command>VACUUM</command> and <command>ANALYZE</command> commands as needed for tables in that database. The delay is measured - in seconds, and the default is one minute (<literal>1min</>). - This parameter can only be set in the <filename>postgresql.conf</> + in seconds, and the default is one minute (<literal>1min</literal>). + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> </listitem> @@ -5946,15 +5946,15 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold"> <term><varname>autovacuum_vacuum_threshold</varname> (<type>integer</type>) <indexterm> - <primary><varname>autovacuum_vacuum_threshold</> configuration parameter</primary> + <primary><varname>autovacuum_vacuum_threshold</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the minimum number of updated or deleted tuples needed - to trigger a <command>VACUUM</> in any one table. + to trigger a <command>VACUUM</command> in any one table. The default is 50 tuples. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -5965,15 +5965,15 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum-analyze-threshold" xreflabel="autovacuum_analyze_threshold"> <term><varname>autovacuum_analyze_threshold</varname> (<type>integer</type>) <indexterm> - <primary><varname>autovacuum_analyze_threshold</> configuration parameter</primary> + <primary><varname>autovacuum_analyze_threshold</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the minimum number of inserted, updated or deleted tuples - needed to trigger an <command>ANALYZE</> in any one table. + needed to trigger an <command>ANALYZE</command> in any one table. The default is 50 tuples. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. 
@@ -5984,16 +5984,16 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum-vacuum-scale-factor" xreflabel="autovacuum_vacuum_scale_factor"> <term><varname>autovacuum_vacuum_scale_factor</varname> (<type>floating point</type>) <indexterm> - <primary><varname>autovacuum_vacuum_scale_factor</> configuration parameter</primary> + <primary><varname>autovacuum_vacuum_scale_factor</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies a fraction of the table size to add to <varname>autovacuum_vacuum_threshold</varname> - when deciding whether to trigger a <command>VACUUM</>. + when deciding whether to trigger a <command>VACUUM</command>. The default is 0.2 (20% of table size). - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -6004,16 +6004,16 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum-analyze-scale-factor" xreflabel="autovacuum_analyze_scale_factor"> <term><varname>autovacuum_analyze_scale_factor</varname> (<type>floating point</type>) <indexterm> - <primary><varname>autovacuum_analyze_scale_factor</> configuration parameter</primary> + <primary><varname>autovacuum_analyze_scale_factor</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies a fraction of the table size to add to <varname>autovacuum_analyze_threshold</varname> - when deciding whether to trigger an <command>ANALYZE</>. + when deciding whether to trigger an <command>ANALYZE</command>. The default is 0.1 (10% of table size). 
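The threshold and scale-factor parameters combine into a single trigger condition: autovacuum considers a table due for <command>VACUUM</command> once the dead-tuple count exceeds the base threshold plus the scale factor times the table size, as a quick sketch with the documented defaults shows:

```python
def vacuum_due(dead_tuples: int, reltuples: int,
               threshold: int = 50, scale_factor: float = 0.2) -> bool:
    """Autovacuum's trigger rule for VACUUM, with the documented
    defaults (autovacuum_vacuum_threshold = 50,
    autovacuum_vacuum_scale_factor = 0.2):

        dead_tuples > threshold + scale_factor * reltuples
    """
    return dead_tuples > threshold + scale_factor * reltuples
```

So a 100,000-tuple table with the defaults is vacuumed only after more than 20,050 tuples are updated or deleted; the analogous rule with <varname>autovacuum_analyze_threshold</varname> and <varname>autovacuum_analyze_scale_factor</varname> governs <command>ANALYZE</command>.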
- This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -6024,14 +6024,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum-freeze-max-age" xreflabel="autovacuum_freeze_max_age"> <term><varname>autovacuum_freeze_max_age</varname> (<type>integer</type>) <indexterm> - <primary><varname>autovacuum_freeze_max_age</> configuration parameter</primary> + <primary><varname>autovacuum_freeze_max_age</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the maximum age (in transactions) that a table's - <structname>pg_class</>.<structfield>relfrozenxid</> field can - attain before a <command>VACUUM</> operation is forced + <structname>pg_class</structname>.<structfield>relfrozenxid</structfield> field can + attain before a <command>VACUUM</command> operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. @@ -6039,7 +6039,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <para> Vacuum also allows removal of old files from the - <filename>pg_xact</> subdirectory, which is why the default + <filename>pg_xact</filename> subdirectory, which is why the default is a relatively low 200 million transactions. 
This parameter can only be set at server start, but the setting can be reduced for individual tables by @@ -6058,8 +6058,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <listitem> <para> Specifies the maximum age (in multixacts) that a table's - <structname>pg_class</>.<structfield>relminmxid</> field can - attain before a <command>VACUUM</> operation is forced to + <structname>pg_class</structname>.<structfield>relminmxid</structfield> field can + attain before a <command>VACUUM</command> operation is forced to prevent multixact ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. @@ -6067,7 +6067,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <para> Vacuuming multixacts also allows removal of old files from the - <filename>pg_multixact/members</> and <filename>pg_multixact/offsets</> + <filename>pg_multixact/members</filename> and <filename>pg_multixact/offsets</filename> subdirectories, which is why the default is a relatively low 400 million multixacts. This parameter can only be set at server start, but the setting can @@ -6080,16 +6080,16 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum-vacuum-cost-delay" xreflabel="autovacuum_vacuum_cost_delay"> <term><varname>autovacuum_vacuum_cost_delay</varname> (<type>integer</type>) <indexterm> - <primary><varname>autovacuum_vacuum_cost_delay</> configuration parameter</primary> + <primary><varname>autovacuum_vacuum_cost_delay</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the cost delay value that will be used in automatic - <command>VACUUM</> operations. If -1 is specified, the regular + <command>VACUUM</command> operations. If -1 is specified, the regular <xref linkend="guc-vacuum-cost-delay"> value will be used. The default value is 20 milliseconds. 
- This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. @@ -6100,19 +6100,19 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-autovacuum-vacuum-cost-limit" xreflabel="autovacuum_vacuum_cost_limit"> <term><varname>autovacuum_vacuum_cost_limit</varname> (<type>integer</type>) <indexterm> - <primary><varname>autovacuum_vacuum_cost_limit</> configuration parameter</primary> + <primary><varname>autovacuum_vacuum_cost_limit</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Specifies the cost limit value that will be used in automatic - <command>VACUUM</> operations. If -1 is specified (which is the + <command>VACUUM</command> operations. If -1 is specified (which is the default), the regular <xref linkend="guc-vacuum-cost-limit"> value will be used. Note that the value is distributed proportionally among the running autovacuum workers, if there is more than one, so that the sum of the limits for each worker does not exceed the value of this variable. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. 
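The apportioning rule for <varname>autovacuum_vacuum_cost_limit</varname> can be illustrated with an equal split. Note this is a simplification: the documentation only guarantees that the per-worker limits sum to at most the configured value, and the server's actual division weights workers by their individual cost settings:

```python
def per_worker_cost_limits(total_limit: int, n_workers: int) -> list:
    """Illustrative even division of autovacuum_vacuum_cost_limit
    among running workers; the invariant from the documentation is
    that the per-worker limits never sum to more than total_limit.
    """
    if n_workers <= 0:
        return []
    # Integer division keeps the sum at or below the configured limit.
    return [total_limit // n_workers] * n_workers
```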
@@ -6133,9 +6133,9 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-search-path" xreflabel="search_path"> <term><varname>search_path</varname> (<type>string</type>) <indexterm> - <primary><varname>search_path</> configuration parameter</primary> + <primary><varname>search_path</varname> configuration parameter</primary> </indexterm> - <indexterm><primary>path</><secondary>for schemas</></> + <indexterm><primary>path</primary><secondary>for schemas</secondary></indexterm> </term> <listitem> <para> @@ -6151,32 +6151,32 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <para> The value for <varname>search_path</varname> must be a comma-separated list of schema names. Any name that is not an existing schema, or is - a schema for which the user does not have <literal>USAGE</> + a schema for which the user does not have <literal>USAGE</literal> permission, is silently ignored. </para> <para> If one of the list items is the special name <literal>$user</literal>, then the schema having the name returned by - <function>SESSION_USER</> is substituted, if there is such a schema - and the user has <literal>USAGE</> permission for it. + <function>SESSION_USER</function> is substituted, if there is such a schema + and the user has <literal>USAGE</literal> permission for it. (If not, <literal>$user</literal> is ignored.) </para> <para> - The system catalog schema, <literal>pg_catalog</>, is always + The system catalog schema, <literal>pg_catalog</literal>, is always searched, whether it is mentioned in the path or not. If it is mentioned in the path then it will be searched in the specified - order. If <literal>pg_catalog</> is not in the path then it will - be searched <emphasis>before</> searching any of the path items. + order. If <literal>pg_catalog</literal> is not in the path then it will + be searched <emphasis>before</emphasis> searching any of the path items. 
</para> <para> Likewise, the current session's temporary-table schema, - <literal>pg_temp_<replaceable>nnn</></>, is always searched if it + <literal>pg_temp_<replaceable>nnn</replaceable></literal>, is always searched if it exists. It can be explicitly listed in the path by using the - alias <literal>pg_temp</><indexterm><primary>pg_temp</></>. If it is not listed in the path then - it is searched first (even before <literal>pg_catalog</>). However, + alias <literal>pg_temp</literal><indexterm><primary>pg_temp</primary></indexterm>. If it is not listed in the path then + it is searched first (even before <literal>pg_catalog</literal>). However, the temporary schema is only searched for relation (table, view, sequence, etc) and data type names. It is never searched for function or operator names. @@ -6193,7 +6193,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The default value for this parameter is <literal>"$user", public</literal>. This setting supports shared use of a database (where no users - have private schemas, and all share use of <literal>public</>), + have private schemas, and all share use of <literal>public</literal>), private per-user schemas, and combinations of these. Other effects can be obtained by altering the default search path setting, either globally or per-user. @@ -6202,11 +6202,11 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <para> The current effective value of the search path can be examined via the <acronym>SQL</acronym> function - <function>current_schemas</> + <function>current_schemas</function> (see <xref linkend="functions-info">). This is not quite the same as examining the value of <varname>search_path</varname>, since - <function>current_schemas</> shows how the items + <function>current_schemas</function> shows how the items appearing in <varname>search_path</varname> were resolved. 
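The resolution rules above ($user substitution, silent dropping of unusable schemas, the implicit temporary schema searched first, and the implicit pg_catalog) can be sketched as follows. Here `usable_schemas` stands in for "exists and the user has USAGE permission"; the real resolution consults the system catalogs, and this sketch ignores the pg_temp alias and relation-versus-function distinctions:

```python
def effective_search_path(path, session_user, usable_schemas,
                          temp_schema=None):
    """Approximate search_path resolution per the documentation:
    - "$user" becomes the session user's schema, if usable;
    - names that do not exist or lack USAGE are silently ignored;
    - the temp schema, if any, is searched first unless listed;
    - pg_catalog is searched before the rest unless listed explicitly.
    """
    explicit = []
    for item in path:
        name = session_user if item == "$user" else item
        if name in usable_schemas:
            explicit.append(name)
    implicit = []
    if temp_schema and "pg_temp" not in path:
        implicit.append(temp_schema)
    if "pg_catalog" not in path:
        implicit.append("pg_catalog")
    return implicit + explicit
```

With the default path `"$user", public`, a user `alice` who owns a schema resolves to `pg_catalog, alice, public`, while a user with no private schema resolves to `pg_catalog, public`, matching what `current_schemas` would report.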
</para> @@ -6219,20 +6219,20 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-row-security" xreflabel="row_security"> <term><varname>row_security</varname> (<type>boolean</type>) <indexterm> - <primary><varname>row_security</> configuration parameter</primary> + <primary><varname>row_security</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> This variable controls whether to raise an error in lieu of applying a - row security policy. When set to <literal>on</>, policies apply - normally. When set to <literal>off</>, queries fail which would - otherwise apply at least one policy. The default is <literal>on</>. - Change to <literal>off</> where limited row visibility could cause - incorrect results; for example, <application>pg_dump</> makes that + row security policy. When set to <literal>on</literal>, policies apply + normally. When set to <literal>off</literal>, queries fail which would + otherwise apply at least one policy. The default is <literal>on</literal>. + Change to <literal>off</literal> where limited row visibility could cause + incorrect results; for example, <application>pg_dump</application> makes that change by default. This variable has no effect on roles which bypass every row security policy, to wit, superusers and roles with - the <literal>BYPASSRLS</> attribute. + the <literal>BYPASSRLS</literal> attribute. 
</para> <para> @@ -6245,14 +6245,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-default-tablespace" xreflabel="default_tablespace"> <term><varname>default_tablespace</varname> (<type>string</type>) <indexterm> - <primary><varname>default_tablespace</> configuration parameter</primary> + <primary><varname>default_tablespace</varname> configuration parameter</primary> </indexterm> - <indexterm><primary>tablespace</><secondary>default</></> + <indexterm><primary>tablespace</primary><secondary>default</secondary></indexterm> </term> <listitem> <para> This variable specifies the default tablespace in which to create - objects (tables and indexes) when a <command>CREATE</> command does + objects (tables and indexes) when a <command>CREATE</command> command does not explicitly specify a tablespace. </para> @@ -6260,9 +6260,9 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; The value is either the name of a tablespace, or an empty string to specify using the default tablespace of the current database. If the value does not match the name of any existing tablespace, - <productname>PostgreSQL</> will automatically use the default + <productname>PostgreSQL</productname> will automatically use the default tablespace of the current database. If a nondefault tablespace - is specified, the user must have <literal>CREATE</> privilege + is specified, the user must have <literal>CREATE</literal> privilege for it, or creation attempts will fail. 
</para> @@ -6287,38 +6287,38 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-temp-tablespaces" xreflabel="temp_tablespaces"> <term><varname>temp_tablespaces</varname> (<type>string</type>) <indexterm> - <primary><varname>temp_tablespaces</> configuration parameter</primary> + <primary><varname>temp_tablespaces</varname> configuration parameter</primary> </indexterm> - <indexterm><primary>tablespace</><secondary>temporary</></> + <indexterm><primary>tablespace</primary><secondary>temporary</secondary></indexterm> </term> <listitem> <para> This variable specifies tablespaces in which to create temporary objects (temp tables and indexes on temp tables) when a - <command>CREATE</> command does not explicitly specify a tablespace. + <command>CREATE</command> command does not explicitly specify a tablespace. Temporary files for purposes such as sorting large data sets are also created in these tablespaces. </para> <para> The value is a list of names of tablespaces. When there is more than - one name in the list, <productname>PostgreSQL</> chooses a random + one name in the list, <productname>PostgreSQL</productname> chooses a random member of the list each time a temporary object is to be created; except that within a transaction, successively created temporary objects are placed in successive tablespaces from the list. If the selected element of the list is an empty string, - <productname>PostgreSQL</> will automatically use the default + <productname>PostgreSQL</productname> will automatically use the default tablespace of the current database instead. </para> <para> - When <varname>temp_tablespaces</> is set interactively, specifying a + When <varname>temp_tablespaces</varname> is set interactively, specifying a nonexistent tablespace is an error, as is specifying a tablespace for - which the user does not have <literal>CREATE</> privilege. However, + which the user does not have <literal>CREATE</literal> privilege. 
However, when using a previously set value, nonexistent tablespaces are ignored, as are tablespaces for which the user lacks - <literal>CREATE</> privilege. In particular, this rule applies when - using a value set in <filename>postgresql.conf</>. + <literal>CREATE</literal> privilege. In particular, this rule applies when + using a value set in <filename>postgresql.conf</filename>. </para> <para> @@ -6336,18 +6336,18 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-check-function-bodies" xreflabel="check_function_bodies"> <term><varname>check_function_bodies</varname> (<type>boolean</type>) <indexterm> - <primary><varname>check_function_bodies</> configuration parameter</primary> + <primary><varname>check_function_bodies</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - This parameter is normally on. When set to <literal>off</>, it + This parameter is normally on. When set to <literal>off</literal>, it disables validation of the function body string during <xref linkend="sql-createfunction">. Disabling validation avoids side effects of the validation process and avoids false positives due to problems such as forward references. Set this parameter - to <literal>off</> before loading functions on behalf of other - users; <application>pg_dump</> does so automatically. + to <literal>off</literal> before loading functions on behalf of other + users; <application>pg_dump</application> does so automatically. 
</para> </listitem> </varlistentry> @@ -6359,7 +6359,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <secondary>setting default</secondary> </indexterm> <indexterm> - <primary><varname>default_transaction_isolation</> configuration parameter</primary> + <primary><varname>default_transaction_isolation</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6386,14 +6386,14 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <secondary>setting default</secondary> </indexterm> <indexterm> - <primary><varname>default_transaction_read_only</> configuration parameter</primary> + <primary><varname>default_transaction_read_only</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> A read-only SQL transaction cannot alter non-temporary tables. This parameter controls the default read-only status of each new - transaction. The default is <literal>off</> (read/write). + transaction. The default is <literal>off</literal> (read/write). </para> <para> @@ -6409,12 +6409,12 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <secondary>setting default</secondary> </indexterm> <indexterm> - <primary><varname>default_transaction_deferrable</> configuration parameter</primary> + <primary><varname>default_transaction_deferrable</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When running at the <literal>serializable</> isolation level, + When running at the <literal>serializable</literal> isolation level, a deferrable read-only SQL transaction may be delayed before it is allowed to proceed. However, once it begins executing it does not incur any of the overhead required to ensure @@ -6427,7 +6427,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; This parameter controls the default deferrable status of each new transaction. 
It currently has no effect on read-write transactions or those operating at isolation levels lower - than <literal>serializable</>. The default is <literal>off</>. + than <literal>serializable</literal>. The default is <literal>off</literal>. </para> <para> @@ -6440,7 +6440,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-session-replication-role" xreflabel="session_replication_role"> <term><varname>session_replication_role</varname> (<type>enum</type>) <indexterm> - <primary><varname>session_replication_role</> configuration parameter</primary> + <primary><varname>session_replication_role</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6448,8 +6448,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; Controls firing of replication-related triggers and rules for the current session. Setting this variable requires superuser privilege and results in discarding any previously cached - query plans. Possible values are <literal>origin</> (the default), - <literal>replica</> and <literal>local</>. + query plans. Possible values are <literal>origin</literal> (the default), + <literal>replica</literal> and <literal>local</literal>. See <xref linkend="sql-altertable"> for more information. </para> @@ -6459,21 +6459,21 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-statement-timeout" xreflabel="statement_timeout"> <term><varname>statement_timeout</varname> (<type>integer</type>) <indexterm> - <primary><varname>statement_timeout</> configuration parameter</primary> + <primary><varname>statement_timeout</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Abort any statement that takes more than the specified number of milliseconds, starting from the time the command arrives at the server - from the client. 
If <varname>log_min_error_statement</> is set to - <literal>ERROR</> or lower, the statement that timed out will also be + from the client. If <varname>log_min_error_statement</varname> is set to + <literal>ERROR</literal> or lower, the statement that timed out will also be logged. A value of zero (the default) turns this off. </para> <para> - Setting <varname>statement_timeout</> in - <filename>postgresql.conf</> is not recommended because it would + Setting <varname>statement_timeout</varname> in + <filename>postgresql.conf</filename> is not recommended because it would affect all sessions. </para> </listitem> @@ -6482,7 +6482,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-lock-timeout" xreflabel="lock_timeout"> <term><varname>lock_timeout</varname> (<type>integer</type>) <indexterm> - <primary><varname>lock_timeout</> configuration parameter</primary> + <primary><varname>lock_timeout</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6491,24 +6491,24 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; milliseconds while attempting to acquire a lock on a table, index, row, or other database object. The time limit applies separately to each lock acquisition attempt. The limit applies both to explicit - locking requests (such as <command>LOCK TABLE</>, or <command>SELECT - FOR UPDATE</> without <literal>NOWAIT</>) and to implicitly-acquired - locks. If <varname>log_min_error_statement</> is set to - <literal>ERROR</> or lower, the statement that timed out will be + locking requests (such as <command>LOCK TABLE</command>, or <command>SELECT + FOR UPDATE</command> without <literal>NOWAIT</literal>) and to implicitly-acquired + locks. If <varname>log_min_error_statement</varname> is set to + <literal>ERROR</literal> or lower, the statement that timed out will be logged. A value of zero (the default) turns this off. 
</para> <para> - Unlike <varname>statement_timeout</>, this timeout can only occur - while waiting for locks. Note that if <varname>statement_timeout</> - is nonzero, it is rather pointless to set <varname>lock_timeout</> to + Unlike <varname>statement_timeout</varname>, this timeout can only occur + while waiting for locks. Note that if <varname>statement_timeout</varname> + is nonzero, it is rather pointless to set <varname>lock_timeout</varname> to the same or larger value, since the statement timeout would always trigger first. </para> <para> - Setting <varname>lock_timeout</> in - <filename>postgresql.conf</> is not recommended because it would + Setting <varname>lock_timeout</varname> in + <filename>postgresql.conf</filename> is not recommended because it would affect all sessions. </para> </listitem> @@ -6517,7 +6517,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-idle-in-transaction-session-timeout" xreflabel="idle_in_transaction_session_timeout"> <term><varname>idle_in_transaction_session_timeout</varname> (<type>integer</type>) <indexterm> - <primary><varname>idle_in_transaction_session_timeout</> configuration parameter</primary> + <primary><varname>idle_in_transaction_session_timeout</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6537,21 +6537,21 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-vacuum-freeze-table-age" xreflabel="vacuum_freeze_table_age"> <term><varname>vacuum_freeze_table_age</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_freeze_table_age</> configuration parameter</primary> + <primary><varname>vacuum_freeze_table_age</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - <command>VACUUM</> performs an aggressive scan if the table's - <structname>pg_class</>.<structfield>relfrozenxid</> field has reached + <command>VACUUM</command> performs an aggressive scan if the table's 
+ <structname>pg_class</structname>.<structfield>relfrozenxid</structfield> field has reached the age specified by this setting. An aggressive scan differs from - a regular <command>VACUUM</> in that it visits every page that might + a regular <command>VACUUM</command> in that it visits every page that might contain unfrozen XIDs or MXIDs, not just those that might contain dead tuples. The default is 150 million transactions. Although users can - set this value anywhere from zero to two billions, <command>VACUUM</> + set this value anywhere from zero to two billions, <command>VACUUM</command> will silently limit the effective value to 95% of <xref linkend="guc-autovacuum-freeze-max-age">, so that a - periodical manual <command>VACUUM</> has a chance to run before an + periodical manual <command>VACUUM</command> has a chance to run before an anti-wraparound autovacuum is launched for the table. For more information see <xref linkend="vacuum-for-wraparound">. @@ -6562,17 +6562,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-vacuum-freeze-min-age" xreflabel="vacuum_freeze_min_age"> <term><varname>vacuum_freeze_min_age</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_freeze_min_age</> configuration parameter</primary> + <primary><varname>vacuum_freeze_min_age</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - Specifies the cutoff age (in transactions) that <command>VACUUM</> + Specifies the cutoff age (in transactions) that <command>VACUUM</command> should use to decide whether to freeze row versions while scanning a table. The default is 50 million transactions. 
Although users can set this value anywhere from zero to one billion, - <command>VACUUM</> will silently limit the effective value to half + <command>VACUUM</command> will silently limit the effective value to half the value of <xref linkend="guc-autovacuum-freeze-max-age">, so that there is not an unreasonably short time between forced autovacuums. For more information see <xref @@ -6584,21 +6584,21 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-vacuum-multixact-freeze-table-age" xreflabel="vacuum_multixact_freeze_table_age"> <term><varname>vacuum_multixact_freeze_table_age</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_multixact_freeze_table_age</> configuration parameter</primary> + <primary><varname>vacuum_multixact_freeze_table_age</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - <command>VACUUM</> performs an aggressive scan if the table's - <structname>pg_class</>.<structfield>relminmxid</> field has reached + <command>VACUUM</command> performs an aggressive scan if the table's + <structname>pg_class</structname>.<structfield>relminmxid</structfield> field has reached the age specified by this setting. An aggressive scan differs from - a regular <command>VACUUM</> in that it visits every page that might + a regular <command>VACUUM</command> in that it visits every page that might contain unfrozen XIDs or MXIDs, not just those that might contain dead tuples. The default is 150 million multixacts. Although users can set this value anywhere from zero to two billions, - <command>VACUUM</> will silently limit the effective value to 95% of + <command>VACUUM</command> will silently limit the effective value to 95% of <xref linkend="guc-autovacuum-multixact-freeze-max-age">, so that a - periodical manual <command>VACUUM</> has a chance to run before an + periodical manual <command>VACUUM</command> has a chance to run before an anti-wraparound is launched for the table. 
For more information see <xref linkend="vacuum-for-multixact-wraparound">. </para> @@ -6608,17 +6608,17 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-vacuum-multixact-freeze-min-age" xreflabel="vacuum_multixact_freeze_min_age"> <term><varname>vacuum_multixact_freeze_min_age</varname> (<type>integer</type>) <indexterm> - <primary><varname>vacuum_multixact_freeze_min_age</> configuration parameter</primary> + <primary><varname>vacuum_multixact_freeze_min_age</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - Specifies the cutoff age (in multixacts) that <command>VACUUM</> + Specifies the cutoff age (in multixacts) that <command>VACUUM</command> should use to decide whether to replace multixact IDs with a newer transaction ID or multixact ID while scanning a table. The default is 5 million multixacts. Although users can set this value anywhere from zero to one billion, - <command>VACUUM</> will silently limit the effective value to half + <command>VACUUM</command> will silently limit the effective value to half the value of <xref linkend="guc-autovacuum-multixact-freeze-max-age">, so that there is not an unreasonably short time between forced autovacuums. 
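The freeze-age clamping rules in the hunks above (effective `vacuum_freeze_table_age` capped at 95% of `autovacuum_freeze_max_age`, effective `vacuum_freeze_min_age` capped at half of it) can be sketched as arithmetic. This is an illustrative model, not PostgreSQL source code; the function names are invented, and the 200-million figure is the stock default for `autovacuum_freeze_max_age`:

```python
# Illustrative sketch (not PostgreSQL source) of the silent clamping
# described above:
#  - vacuum_freeze_table_age is limited to 95% of autovacuum_freeze_max_age
#  - vacuum_freeze_min_age is limited to half of autovacuum_freeze_max_age

def effective_freeze_table_age(vacuum_freeze_table_age: int,
                               autovacuum_freeze_max_age: int) -> int:
    """Clamp the table-age setting to 95% of the autovacuum maximum."""
    return min(vacuum_freeze_table_age, autovacuum_freeze_max_age * 95 // 100)

def effective_freeze_min_age(vacuum_freeze_min_age: int,
                             autovacuum_freeze_max_age: int) -> int:
    """Clamp the min-age setting to half of the autovacuum maximum."""
    return min(vacuum_freeze_min_age, autovacuum_freeze_max_age // 2)

# With autovacuum_freeze_max_age at its default of 200 million:
print(effective_freeze_table_age(150_000_000, 200_000_000))  # 150000000 (under the 190M cap)
print(effective_freeze_table_age(500_000_000, 200_000_000))  # 190000000 (clamped)
print(effective_freeze_min_age(150_000_000, 200_000_000))    # 100000000 (clamped to half)
```

The same shape applies to the multixact variants (`vacuum_multixact_freeze_table_age`, `vacuum_multixact_freeze_min_age`) against `autovacuum_multixact_freeze_max_age`.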
@@ -6630,7 +6630,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-bytea-output" xreflabel="bytea_output"> <term><varname>bytea_output</varname> (<type>enum</type>) <indexterm> - <primary><varname>bytea_output</> configuration parameter</primary> + <primary><varname>bytea_output</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6648,7 +6648,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-xmlbinary" xreflabel="xmlbinary"> <term><varname>xmlbinary</varname> (<type>enum</type>) <indexterm> - <primary><varname>xmlbinary</> configuration parameter</primary> + <primary><varname>xmlbinary</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6676,10 +6676,10 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv; <varlistentry id="guc-xmloption" xreflabel="xmloption"> <term><varname>xmloption</varname> (<type>enum</type>) <indexterm> - <primary><varname>xmloption</> configuration parameter</primary> + <primary><varname>xmloption</varname> configuration parameter</primary> </indexterm> <indexterm> - <primary><varname>SET XML OPTION</></primary> + <primary><varname>SET XML OPTION</varname></primary> </indexterm> <indexterm> <primary>XML option</primary> @@ -6709,16 +6709,16 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-gin-pending-list-limit" xreflabel="gin_pending_list_limit"> <term><varname>gin_pending_list_limit</varname> (<type>integer</type>) <indexterm> - <primary><varname>gin_pending_list_limit</> configuration parameter</primary> + <primary><varname>gin_pending_list_limit</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Sets the maximum size of the GIN pending list which is used - when <literal>fastupdate</> is enabled. If the list grows + when <literal>fastupdate</literal> is enabled. 
If the list grows larger than this maximum size, it is cleaned up by moving the entries in it to the main GIN data structure in bulk. - The default is four megabytes (<literal>4MB</>). This setting + The default is four megabytes (<literal>4MB</literal>). This setting can be overridden for individual GIN indexes by changing index storage parameters. See <xref linkend="gin-fast-update"> and <xref linkend="gin-tips"> @@ -6737,7 +6737,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-datestyle" xreflabel="DateStyle"> <term><varname>DateStyle</varname> (<type>string</type>) <indexterm> - <primary><varname>DateStyle</> configuration parameter</primary> + <primary><varname>DateStyle</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6745,16 +6745,16 @@ SET XML OPTION { DOCUMENT | CONTENT }; Sets the display format for date and time values, as well as the rules for interpreting ambiguous date input values. For historical reasons, this variable contains two independent - components: the output format specification (<literal>ISO</>, - <literal>Postgres</>, <literal>SQL</>, or <literal>German</>) + components: the output format specification (<literal>ISO</literal>, + <literal>Postgres</literal>, <literal>SQL</literal>, or <literal>German</literal>) and the input/output specification for year/month/day ordering - (<literal>DMY</>, <literal>MDY</>, or <literal>YMD</>). These - can be set separately or together. The keywords <literal>Euro</> - and <literal>European</> are synonyms for <literal>DMY</>; the - keywords <literal>US</>, <literal>NonEuro</>, and - <literal>NonEuropean</> are synonyms for <literal>MDY</>. See + (<literal>DMY</literal>, <literal>MDY</literal>, or <literal>YMD</literal>). These + can be set separately or together. 
The keywords <literal>Euro</literal> + and <literal>European</literal> are synonyms for <literal>DMY</literal>; the + keywords <literal>US</literal>, <literal>NonEuro</literal>, and + <literal>NonEuropean</literal> are synonyms for <literal>MDY</literal>. See <xref linkend="datatype-datetime"> for more information. The - built-in default is <literal>ISO, MDY</>, but + built-in default is <literal>ISO, MDY</literal>, but <application>initdb</application> will initialize the configuration file with a setting that corresponds to the behavior of the chosen <varname>lc_time</varname> locale. @@ -6765,28 +6765,28 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-intervalstyle" xreflabel="IntervalStyle"> <term><varname>IntervalStyle</varname> (<type>enum</type>) <indexterm> - <primary><varname>IntervalStyle</> configuration parameter</primary> + <primary><varname>IntervalStyle</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Sets the display format for interval values. - The value <literal>sql_standard</> will produce + The value <literal>sql_standard</literal> will produce output matching <acronym>SQL</acronym> standard interval literals. - The value <literal>postgres</> (which is the default) will produce - output matching <productname>PostgreSQL</> releases prior to 8.4 + The value <literal>postgres</literal> (which is the default) will produce + output matching <productname>PostgreSQL</productname> releases prior to 8.4 when the <xref linkend="guc-datestyle"> - parameter was set to <literal>ISO</>. - The value <literal>postgres_verbose</> will produce output - matching <productname>PostgreSQL</> releases prior to 8.4 - when the <varname>DateStyle</> - parameter was set to non-<literal>ISO</> output. - The value <literal>iso_8601</> will produce output matching the time - interval <quote>format with designators</> defined in section + parameter was set to <literal>ISO</literal>. 
+ The value <literal>postgres_verbose</literal> will produce output + matching <productname>PostgreSQL</productname> releases prior to 8.4 + when the <varname>DateStyle</varname> + parameter was set to non-<literal>ISO</literal> output. + The value <literal>iso_8601</literal> will produce output matching the time + interval <quote>format with designators</quote> defined in section 4.4.3.2 of ISO 8601. </para> <para> - The <varname>IntervalStyle</> parameter also affects the + The <varname>IntervalStyle</varname> parameter also affects the interpretation of ambiguous interval input. See <xref linkend="datatype-interval-input"> for more information. </para> @@ -6796,15 +6796,15 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-timezone" xreflabel="TimeZone"> <term><varname>TimeZone</varname> (<type>string</type>) <indexterm> - <primary><varname>TimeZone</> configuration parameter</primary> + <primary><varname>TimeZone</varname> configuration parameter</primary> </indexterm> - <indexterm><primary>time zone</></> + <indexterm><primary>time zone</primary></indexterm> </term> <listitem> <para> Sets the time zone for displaying and interpreting time stamps. - The built-in default is <literal>GMT</>, but that is typically - overridden in <filename>postgresql.conf</>; <application>initdb</> + The built-in default is <literal>GMT</literal>, but that is typically + overridden in <filename>postgresql.conf</filename>; <application>initdb</application> will install a setting there corresponding to its system environment. See <xref linkend="datatype-timezones"> for more information. 
</para> @@ -6814,14 +6814,14 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-timezone-abbreviations" xreflabel="timezone_abbreviations"> <term><varname>timezone_abbreviations</varname> (<type>string</type>) <indexterm> - <primary><varname>timezone_abbreviations</> configuration parameter</primary> + <primary><varname>timezone_abbreviations</varname> configuration parameter</primary> </indexterm> - <indexterm><primary>time zone names</></> + <indexterm><primary>time zone names</primary></indexterm> </term> <listitem> <para> Sets the collection of time zone abbreviations that will be accepted - by the server for datetime input. The default is <literal>'Default'</>, + by the server for datetime input. The default is <literal>'Default'</literal>, which is a collection that works in most of the world; there are also <literal>'Australia'</literal> and <literal>'India'</literal>, and other collections can be defined for a particular installation. @@ -6840,15 +6840,15 @@ SET XML OPTION { DOCUMENT | CONTENT }; <secondary>display</secondary> </indexterm> <indexterm> - <primary><varname>extra_float_digits</> configuration parameter</primary> + <primary><varname>extra_float_digits</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> This parameter adjusts the number of digits displayed for - floating-point values, including <type>float4</>, <type>float8</>, + floating-point values, including <type>float4</type>, <type>float8</type>, and geometric data types. The parameter value is added to the - standard number of digits (<literal>FLT_DIG</> or <literal>DBL_DIG</> + standard number of digits (<literal>FLT_DIG</literal> or <literal>DBL_DIG</literal> as appropriate). The value can be set as high as 3, to include partially-significant digits; this is especially useful for dumping float data that needs to be restored exactly. 
Or it can be set @@ -6861,9 +6861,9 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-client-encoding" xreflabel="client_encoding"> <term><varname>client_encoding</varname> (<type>string</type>) <indexterm> - <primary><varname>client_encoding</> configuration parameter</primary> + <primary><varname>client_encoding</varname> configuration parameter</primary> </indexterm> - <indexterm><primary>character set</></> + <indexterm><primary>character set</primary></indexterm> </term> <listitem> <para> @@ -6878,7 +6878,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-lc-messages" xreflabel="lc_messages"> <term><varname>lc_messages</varname> (<type>string</type>) <indexterm> - <primary><varname>lc_messages</> configuration parameter</primary> + <primary><varname>lc_messages</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6910,7 +6910,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-lc-monetary" xreflabel="lc_monetary"> <term><varname>lc_monetary</varname> (<type>string</type>) <indexterm> - <primary><varname>lc_monetary</> configuration parameter</primary> + <primary><varname>lc_monetary</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6929,7 +6929,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-lc-numeric" xreflabel="lc_numeric"> <term><varname>lc_numeric</varname> (<type>string</type>) <indexterm> - <primary><varname>lc_numeric</> configuration parameter</primary> + <primary><varname>lc_numeric</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6948,7 +6948,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-lc-time" xreflabel="lc_time"> <term><varname>lc_time</varname> (<type>string</type>) <indexterm> - <primary><varname>lc_time</> configuration parameter</primary> + <primary><varname>lc_time</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6967,7 +6967,7 @@ SET XML 
OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-default-text-search-config" xreflabel="default_text_search_config"> <term><varname>default_text_search_config</varname> (<type>string</type>) <indexterm> - <primary><varname>default_text_search_config</> configuration parameter</primary> + <primary><varname>default_text_search_config</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -6976,7 +6976,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; of the text search functions that do not have an explicit argument specifying the configuration. See <xref linkend="textsearch"> for further information. - The built-in default is <literal>pg_catalog.simple</>, but + The built-in default is <literal>pg_catalog.simple</literal>, but <application>initdb</application> will initialize the configuration file with a setting that corresponds to the chosen <varname>lc_ctype</varname> locale, if a configuration @@ -6997,8 +6997,8 @@ SET XML OPTION { DOCUMENT | CONTENT }; server, in order to load additional functionality or achieve performance benefits. For example, a setting of <literal>'$libdir/mylib'</literal> would cause - <literal>mylib.so</> (or on some platforms, - <literal>mylib.sl</>) to be preloaded from the installation's standard + <literal>mylib.so</literal> (or on some platforms, + <literal>mylib.sl</literal>) to be preloaded from the installation's standard library directory. The differences between the settings are when they take effect and what privileges are required to change them. </para> @@ -7007,14 +7007,14 @@ SET XML OPTION { DOCUMENT | CONTENT }; <productname>PostgreSQL</productname> procedural language libraries can be preloaded in this way, typically by using the syntax <literal>'$libdir/plXXX'</literal> where - <literal>XXX</literal> is <literal>pgsql</>, <literal>perl</>, - <literal>tcl</>, or <literal>python</>. 
+ <literal>XXX</literal> is <literal>pgsql</literal>, <literal>perl</literal>, + <literal>tcl</literal>, or <literal>python</literal>. </para> <para> Only shared libraries specifically intended to be used with PostgreSQL can be loaded this way. Every PostgreSQL-supported library has - a <quote>magic block</> that is checked to guarantee compatibility. For + a <quote>magic block</quote> that is checked to guarantee compatibility. For this reason, non-PostgreSQL libraries cannot be loaded in this way. You might be able to use operating-system facilities such as <envar>LD_PRELOAD</envar> for that. @@ -7029,10 +7029,10 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-local-preload-libraries" xreflabel="local_preload_libraries"> <term><varname>local_preload_libraries</varname> (<type>string</type>) <indexterm> - <primary><varname>local_preload_libraries</> configuration parameter</primary> + <primary><varname>local_preload_libraries</varname> configuration parameter</primary> </indexterm> <indexterm> - <primary><filename>$libdir/plugins</></primary> + <primary><filename>$libdir/plugins</filename></primary> </indexterm> </term> <listitem> @@ -7051,10 +7051,10 @@ SET XML OPTION { DOCUMENT | CONTENT }; <para> This option can be set by any user. Because of that, the libraries that can be loaded are restricted to those appearing in the - <filename>plugins</> subdirectory of the installation's + <filename>plugins</filename> subdirectory of the installation's standard library directory. (It is the database administrator's - responsibility to ensure that only <quote>safe</> libraries - are installed there.) Entries in <varname>local_preload_libraries</> + responsibility to ensure that only <quote>safe</quote> libraries + are installed there.) 
Entries in <varname>local_preload_libraries</varname> can specify this directory explicitly, for example <literal>$libdir/plugins/mylib</literal>, or just specify the library name — <literal>mylib</literal> would have @@ -7064,11 +7064,11 @@ SET XML OPTION { DOCUMENT | CONTENT }; <para> The intent of this feature is to allow unprivileged users to load debugging or performance-measurement libraries into specific sessions - without requiring an explicit <command>LOAD</> command. To that end, + without requiring an explicit <command>LOAD</command> command. To that end, it would be typical to set this parameter using the <envar>PGOPTIONS</envar> environment variable on the client or by using - <command>ALTER ROLE SET</>. + <command>ALTER ROLE SET</command>. </para> <para> @@ -7083,7 +7083,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-session-preload-libraries" xreflabel="session_preload_libraries"> <term><varname>session_preload_libraries</varname> (<type>string</type>) <indexterm> - <primary><varname>session_preload_libraries</> configuration parameter</primary> + <primary><varname>session_preload_libraries</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7104,10 +7104,10 @@ SET XML OPTION { DOCUMENT | CONTENT }; The intent of this feature is to allow debugging or performance-measurement libraries to be loaded into specific sessions without an explicit - <command>LOAD</> command being given. For + <command>LOAD</command> command being given. For example, <xref linkend="auto-explain"> could be enabled for all sessions under a given user name by setting this parameter - with <command>ALTER ROLE SET</>. Also, this parameter can be changed + with <command>ALTER ROLE SET</command>. Also, this parameter can be changed without restarting the server (but changes only take effect when a new session is started), so it is easier to add new modules this way, even if they should apply to all sessions. 
@@ -7125,7 +7125,7 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-shared-preload-libraries" xreflabel="shared_preload_libraries"> <term><varname>shared_preload_libraries</varname> (<type>string</type>) <indexterm> - <primary><varname>shared_preload_libraries</> configuration parameter</primary> + <primary><varname>shared_preload_libraries</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7182,9 +7182,9 @@ SET XML OPTION { DOCUMENT | CONTENT }; <varlistentry id="guc-dynamic-library-path" xreflabel="dynamic_library_path"> <term><varname>dynamic_library_path</varname> (<type>string</type>) <indexterm> - <primary><varname>dynamic_library_path</> configuration parameter</primary> + <primary><varname>dynamic_library_path</varname> configuration parameter</primary> </indexterm> - <indexterm><primary>dynamic loading</></> + <indexterm><primary>dynamic loading</primary></indexterm> </term> <listitem> <para> @@ -7236,7 +7236,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-gin-fuzzy-search-limit" xreflabel="gin_fuzzy_search_limit"> <term><varname>gin_fuzzy_search_limit</varname> (<type>integer</type>) <indexterm> - <primary><varname>gin_fuzzy_search_limit</> configuration parameter</primary> + <primary><varname>gin_fuzzy_search_limit</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7267,7 +7267,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <secondary>deadlock</secondary> </indexterm> <indexterm> - <primary><varname>deadlock_timeout</> configuration parameter</primary> + <primary><varname>deadlock_timeout</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7280,7 +7280,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' just wait on the lock for a while before checking for a deadlock. 
Increasing this value reduces the amount of time wasted in needless deadlock checks, but slows down reporting of - real deadlock errors. The default is one second (<literal>1s</>), + real deadlock errors. The default is one second (<literal>1s</literal>), which is probably about the smallest value you would want in practice. On a heavily loaded server you might want to raise it. Ideally the setting should exceed your typical transaction time, @@ -7302,7 +7302,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-max-locks-per-transaction" xreflabel="max_locks_per_transaction"> <term><varname>max_locks_per_transaction</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_locks_per_transaction</> configuration parameter</primary> + <primary><varname>max_locks_per_transaction</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7315,7 +7315,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions - fit in the lock table. This is <emphasis>not</> the number of + fit in the lock table. This is <emphasis>not</emphasis> the number of rows that can be locked; that value is unlimited. 
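The pooled lock accounting described in the `max_locks_per_transaction` paragraph above can be sketched as follows. This is not server code, just the sizing formula from the PostgreSQL documentation; the numeric settings passed in are illustrative defaults, and the function name is hypothetical:

```python
# Sketch of the shared lock table sizing: the server allocates roughly
# max_locks_per_transaction slots per (connection + prepared transaction),
# and the pool is shared, so one transaction may hold more than its
# average share as long as the total fits.
def lock_table_slots(max_locks_per_transaction, max_connections,
                     max_prepared_transactions):
    return max_locks_per_transaction * (max_connections + max_prepared_transactions)

print(lock_table_slots(64, 100, 0))   # default-ish settings -> 6400 slots
```

Raising any of the three settings grows the pool, which is why the docs say individual transactions can exceed the per-transaction average.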
The default, 64, has historically proven sufficient, but you might need to raise this value if you have queries that touch many different @@ -7334,7 +7334,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-max-pred-locks-per-transaction" xreflabel="max_pred_locks_per_transaction"> <term><varname>max_pred_locks_per_transaction</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_pred_locks_per_transaction</> configuration parameter</primary> + <primary><varname>max_pred_locks_per_transaction</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7347,7 +7347,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions - fit in the lock table. This is <emphasis>not</> the number of + fit in the lock table. This is <emphasis>not</emphasis> the number of rows that can be locked; that value is unlimited. The default, 64, has generally been sufficient in testing, but you might need to raise this value if you have clients that touch many different @@ -7360,7 +7360,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-max-pred-locks-per-relation" xreflabel="max_pred_locks_per_relation"> <term><varname>max_pred_locks_per_relation</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_pred_locks_per_relation</> configuration parameter</primary> + <primary><varname>max_pred_locks_per_relation</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7371,8 +7371,8 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' limit, while negative values mean <xref linkend="guc-max-pred-locks-per-transaction"> divided by the absolute value of this setting. 
The default is -2, which keeps - the behavior from previous versions of <productname>PostgreSQL</>. - This parameter can only be set in the <filename>postgresql.conf</> + the behavior from previous versions of <productname>PostgreSQL</productname>. + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -7381,7 +7381,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-max-pred-locks-per-page" xreflabel="max_pred_locks_per_page"> <term><varname>max_pred_locks_per_page</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_pred_locks_per_page</> configuration parameter</primary> + <primary><varname>max_pred_locks_per_page</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7389,7 +7389,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' This controls how many rows on a single page can be predicate-locked before the lock is promoted to covering the whole page. The default is 2. This parameter can only be set in - the <filename>postgresql.conf</> file or on the server command line. + the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> </varlistentry> @@ -7408,62 +7408,62 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-array-nulls" xreflabel="array_nulls"> <term><varname>array_nulls</varname> (<type>boolean</type>) <indexterm> - <primary><varname>array_nulls</> configuration parameter</primary> + <primary><varname>array_nulls</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> This controls whether the array input parser recognizes - unquoted <literal>NULL</> as specifying a null array element. - By default, this is <literal>on</>, allowing array values containing - null values to be entered. 
However, <productname>PostgreSQL</> versions + unquoted <literal>NULL</literal> as specifying a null array element. + By default, this is <literal>on</literal>, allowing array values containing + null values to be entered. However, <productname>PostgreSQL</productname> versions before 8.2 did not support null values in arrays, and therefore would - treat <literal>NULL</> as specifying a normal array element with - the string value <quote>NULL</>. For backward compatibility with + treat <literal>NULL</literal> as specifying a normal array element with + the string value <quote>NULL</quote>. For backward compatibility with applications that require the old behavior, this variable can be - turned <literal>off</>. + turned <literal>off</literal>. </para> <para> Note that it is possible to create array values containing null values - even when this variable is <literal>off</>. + even when this variable is <literal>off</literal>. </para> </listitem> </varlistentry> <varlistentry id="guc-backslash-quote" xreflabel="backslash_quote"> <term><varname>backslash_quote</varname> (<type>enum</type>) - <indexterm><primary>strings</><secondary>backslash quotes</></> + <indexterm><primary>strings</primary><secondary>backslash quotes</secondary></indexterm> <indexterm> - <primary><varname>backslash_quote</> configuration parameter</primary> + <primary><varname>backslash_quote</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> This controls whether a quote mark can be represented by - <literal>\'</> in a string literal. The preferred, SQL-standard way - to represent a quote mark is by doubling it (<literal>''</>) but - <productname>PostgreSQL</> has historically also accepted - <literal>\'</>. However, use of <literal>\'</> creates security risks + <literal>\'</literal> in a string literal. 
The preferred, SQL-standard way + to represent a quote mark is by doubling it (<literal>''</literal>) but + <productname>PostgreSQL</productname> has historically also accepted + <literal>\'</literal>. However, use of <literal>\'</literal> creates security risks because in some client character set encodings, there are multibyte characters in which the last byte is numerically equivalent to ASCII - <literal>\</>. If client-side code does escaping incorrectly then a + <literal>\</literal>. If client-side code does escaping incorrectly then a SQL-injection attack is possible. This risk can be prevented by making the server reject queries in which a quote mark appears to be escaped by a backslash. - The allowed values of <varname>backslash_quote</> are - <literal>on</> (allow <literal>\'</> always), - <literal>off</> (reject always), and - <literal>safe_encoding</> (allow only if client encoding does not - allow ASCII <literal>\</> within a multibyte character). - <literal>safe_encoding</> is the default setting. + The allowed values of <varname>backslash_quote</varname> are + <literal>on</literal> (allow <literal>\'</literal> always), + <literal>off</literal> (reject always), and + <literal>safe_encoding</literal> (allow only if client encoding does not + allow ASCII <literal>\</literal> within a multibyte character). + <literal>safe_encoding</literal> is the default setting. </para> <para> - Note that in a standard-conforming string literal, <literal>\</> just - means <literal>\</> anyway. This parameter only affects the handling of + Note that in a standard-conforming string literal, <literal>\</literal> just + means <literal>\</literal> anyway. This parameter only affects the handling of non-standard-conforming literals, including - escape string syntax (<literal>E'...'</>). + escape string syntax (<literal>E'...'</literal>). 
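The multibyte risk behind `backslash_quote = safe_encoding` can be demonstrated outside the server. A minimal sketch, using Shift-JIS as one client encoding in which a multibyte character ends in an ASCII backslash byte:

```python
# In Shift-JIS, the character 表 encodes as 0x95 0x5C -- its final byte
# is numerically ASCII '\'. Client-side escaping code that scans bytes
# rather than characters can therefore be tricked into producing what
# the server parses as a backslash-quote sequence.
b = "表".encode("shift_jis")
print(b)              # b'\x95\\'
assert b[-1:] == b"\\"
```

This is why the server-side `safe_encoding` check rejects `\'` only when the client encoding permits such trailing-backslash bytes.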
</para> </listitem> </varlistentry> @@ -7471,7 +7471,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-default-with-oids" xreflabel="default_with_oids"> <term><varname>default_with_oids</varname> (<type>boolean</type>) <indexterm> - <primary><varname>default_with_oids</> configuration parameter</primary> + <primary><varname>default_with_oids</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7481,9 +7481,9 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' newly-created tables, if neither <literal>WITH OIDS</literal> nor <literal>WITHOUT OIDS</literal> is specified. It also determines whether OIDs will be included in tables created by - <command>SELECT INTO</command>. The parameter is <literal>off</> - by default; in <productname>PostgreSQL</> 8.0 and earlier, it - was <literal>on</> by default. + <command>SELECT INTO</command>. The parameter is <literal>off</literal> + by default; in <productname>PostgreSQL</productname> 8.0 and earlier, it + was <literal>on</literal> by default. 
</para> <para> @@ -7499,21 +7499,21 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-escape-string-warning" xreflabel="escape_string_warning"> <term><varname>escape_string_warning</varname> (<type>boolean</type>) - <indexterm><primary>strings</><secondary>escape warning</></> + <indexterm><primary>strings</primary><secondary>escape warning</secondary></indexterm> <indexterm> - <primary><varname>escape_string_warning</> configuration parameter</primary> + <primary><varname>escape_string_warning</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When on, a warning is issued if a backslash (<literal>\</>) - appears in an ordinary string literal (<literal>'...'</> + When on, a warning is issued if a backslash (<literal>\</literal>) + appears in an ordinary string literal (<literal>'...'</literal> syntax) and <varname>standard_conforming_strings</varname> is off. - The default is <literal>on</>. + The default is <literal>on</literal>. </para> <para> Applications that wish to use backslash as escape should be - modified to use escape string syntax (<literal>E'...'</>), + modified to use escape string syntax (<literal>E'...'</literal>), because the default behavior of ordinary strings is now to treat backslash as an ordinary character, per SQL standard. This variable can be enabled to help locate code that needs to be changed. 
@@ -7524,22 +7524,22 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-lo-compat-privileges" xreflabel="lo_compat_privileges"> <term><varname>lo_compat_privileges</varname> (<type>boolean</type>) <indexterm> - <primary><varname>lo_compat_privileges</> configuration parameter</primary> + <primary><varname>lo_compat_privileges</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - In <productname>PostgreSQL</> releases prior to 9.0, large objects + In <productname>PostgreSQL</productname> releases prior to 9.0, large objects did not have access privileges and were, therefore, always readable - and writable by all users. Setting this variable to <literal>on</> + and writable by all users. Setting this variable to <literal>on</literal> disables the new privilege checks, for compatibility with prior - releases. The default is <literal>off</>. + releases. The default is <literal>off</literal>. Only superusers can change this setting. </para> <para> Setting this variable does not disable all security checks related to large objects — only those for which the default behavior has - changed in <productname>PostgreSQL</> 9.0. + changed in <productname>PostgreSQL</productname> 9.0. For example, <literal>lo_import()</literal> and <literal>lo_export()</literal> need superuser privileges regardless of this setting. 
@@ -7550,18 +7550,18 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-operator-precedence-warning" xreflabel="operator_precedence_warning"> <term><varname>operator_precedence_warning</varname> (<type>boolean</type>) <indexterm> - <primary><varname>operator_precedence_warning</> configuration parameter</primary> + <primary><varname>operator_precedence_warning</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> When on, the parser will emit a warning for any construct that might - have changed meanings since <productname>PostgreSQL</> 9.4 as a result + have changed meanings since <productname>PostgreSQL</productname> 9.4 as a result of changes in operator precedence. This is useful for auditing applications to see if precedence changes have broken anything; but it is not meant to be kept turned on in production, since it will warn about some perfectly valid, standard-compliant SQL code. - The default is <literal>off</>. + The default is <literal>off</literal>. </para> <para> @@ -7573,15 +7573,15 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-quote-all-identifiers" xreflabel="quote-all-identifiers"> <term><varname>quote_all_identifiers</varname> (<type>boolean</type>) <indexterm> - <primary><varname>quote_all_identifiers</> configuration parameter</primary> + <primary><varname>quote_all_identifiers</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> When the database generates SQL, force all identifiers to be quoted, even if they are not (currently) keywords. This will affect the - output of <command>EXPLAIN</> as well as the results of functions - like <function>pg_get_viewdef</>. See also the + output of <command>EXPLAIN</command> as well as the results of functions + like <function>pg_get_viewdef</function>. 
See also the <option>--quote-all-identifiers</option> option of <xref linkend="app-pgdump"> and <xref linkend="app-pg-dumpall">. </para> @@ -7590,22 +7590,22 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-standard-conforming-strings" xreflabel="standard_conforming_strings"> <term><varname>standard_conforming_strings</varname> (<type>boolean</type>) - <indexterm><primary>strings</><secondary>standard conforming</></> + <indexterm><primary>strings</primary><secondary>standard conforming</secondary></indexterm> <indexterm> - <primary><varname>standard_conforming_strings</> configuration parameter</primary> + <primary><varname>standard_conforming_strings</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> This controls whether ordinary string literals - (<literal>'...'</>) treat backslashes literally, as specified in + (<literal>'...'</literal>) treat backslashes literally, as specified in the SQL standard. Beginning in <productname>PostgreSQL</productname> 9.1, the default is - <literal>on</> (prior releases defaulted to <literal>off</>). + <literal>on</literal> (prior releases defaulted to <literal>off</literal>). Applications can check this parameter to determine how string literals will be processed. The presence of this parameter can also be taken as an indication - that the escape string syntax (<literal>E'...'</>) is supported. + that the escape string syntax (<literal>E'...'</literal>) is supported. Escape string syntax (<xref linkend="sql-syntax-strings-escape">) should be used if an application desires backslashes to be treated as escape characters. 
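The two readings of an ordinary string literal that `standard_conforming_strings` selects between can be sketched like this. This is not PostgreSQL's lexer, only a toy model of the observable difference; the `interpret` helper is hypothetical, and `unicode_escape` merely approximates the server's escape rules:

```python
import codecs

# Body of the literal as typed between the quotes: 'a\nb'
raw = r"a\nb"

def interpret(body, standard_conforming_strings):
    if standard_conforming_strings:
        # Standard behavior: backslash is an ordinary character.
        return body
    # Legacy behavior (and E'...' syntax): backslash starts an escape.
    return codecs.decode(body, "unicode_escape")

assert interpret(raw, True) == "a\\nb"   # two characters: backslash, n
assert interpret(raw, False) == "a\nb"   # one character: newline
```

An application that needs the escape interpretation regardless of this setting should write `E'a\nb'`, as the paragraph above recommends.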
@@ -7616,7 +7616,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-synchronize-seqscans" xreflabel="synchronize_seqscans"> <term><varname>synchronize_seqscans</varname> (<type>boolean</type>) <indexterm> - <primary><varname>synchronize_seqscans</> configuration parameter</primary> + <primary><varname>synchronize_seqscans</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7625,13 +7625,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' other, so that concurrent scans read the same block at about the same time and hence share the I/O workload. When this is enabled, a scan might start in the middle of the table and then <quote>wrap - around</> the end to cover all rows, so as to synchronize with the + around</quote> the end to cover all rows, so as to synchronize with the activity of scans already in progress. This can result in unpredictable changes in the row ordering returned by queries that - have no <literal>ORDER BY</> clause. Setting this parameter to - <literal>off</> ensures the pre-8.3 behavior in which a sequential + have no <literal>ORDER BY</literal> clause. Setting this parameter to + <literal>off</literal> ensures the pre-8.3 behavior in which a sequential scan always starts from the beginning of the table. The default - is <literal>on</>. + is <literal>on</literal>. 
</para> </listitem> </varlistentry> @@ -7645,31 +7645,31 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-transform-null-equals" xreflabel="transform_null_equals"> <term><varname>transform_null_equals</varname> (<type>boolean</type>) - <indexterm><primary>IS NULL</></> + <indexterm><primary>IS NULL</primary></indexterm> <indexterm> - <primary><varname>transform_null_equals</> configuration parameter</primary> + <primary><varname>transform_null_equals</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When on, expressions of the form <literal><replaceable>expr</> = + When on, expressions of the form <literal><replaceable>expr</replaceable> = NULL</literal> (or <literal>NULL = - <replaceable>expr</></literal>) are treated as - <literal><replaceable>expr</> IS NULL</literal>, that is, they - return true if <replaceable>expr</> evaluates to the null value, + <replaceable>expr</replaceable></literal>) are treated as + <literal><replaceable>expr</replaceable> IS NULL</literal>, that is, they + return true if <replaceable>expr</replaceable> evaluates to the null value, and false otherwise. The correct SQL-spec-compliant behavior of - <literal><replaceable>expr</> = NULL</literal> is to always + <literal><replaceable>expr</replaceable> = NULL</literal> is to always return null (unknown). Therefore this parameter defaults to - <literal>off</>. + <literal>off</literal>. </para> <para> However, filtered forms in <productname>Microsoft Access</productname> generate queries that appear to use - <literal><replaceable>expr</> = NULL</literal> to test for + <literal><replaceable>expr</replaceable> = NULL</literal> to test for null values, so if you use that interface to access the database you might want to turn this option on. 
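The three-valued logic behind `transform_null_equals` can be sketched with `None` standing in for SQL NULL. The function names are hypothetical; `sql_eq` models the standard-compliant behavior, while `transform_null_equals = on` effectively rewrites `expr = NULL` into the `is_null` test before evaluation:

```python
# Standard SQL equality: comparing anything with NULL yields unknown.
def sql_eq(a, b):
    if a is None or b is None:
        return None           # unknown, not False
    return a == b

# What "expr IS NULL" (the rewritten form) evaluates instead:
def is_null(a):
    return a is None

assert sql_eq(1, None) is None       # standard: expr = NULL -> unknown
assert sql_eq(None, None) is None    # even NULL = NULL is unknown
assert is_null(None) is True         # the test Access-style queries intend
```

Because the standard form always yields unknown, rows are never returned by `WHERE expr = NULL`, which is why the rewrite is harmless for most applications but essential for Access-generated queries.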
Since expressions of the - form <literal><replaceable>expr</> = NULL</literal> always + form <literal><replaceable>expr</replaceable> = NULL</literal> always return the null value (using the SQL standard interpretation), they are not very useful and do not appear often in normal applications so this option does little harm in practice. But new users are @@ -7678,7 +7678,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' </para> <para> - Note that this option only affects the exact form <literal>= NULL</>, + Note that this option only affects the exact form <literal>= NULL</literal>, not other comparison operators or other expressions that are computationally equivalent to some expression involving the equals operator (such as <literal>IN</literal>). @@ -7703,7 +7703,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-exit-on-error" xreflabel="exit_on_error"> <term><varname>exit_on_error</varname> (<type>boolean</type>) <indexterm> - <primary><varname>exit_on_error</> configuration parameter</primary> + <primary><varname>exit_on_error</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7718,16 +7718,16 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-restart-after-crash" xreflabel="restart_after_crash"> <term><varname>restart_after_crash</varname> (<type>boolean</type>) <indexterm> - <primary><varname>restart_after_crash</> configuration parameter</primary> + <primary><varname>restart_after_crash</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - When set to true, which is the default, <productname>PostgreSQL</> + When set to true, which is the default, <productname>PostgreSQL</productname> will automatically reinitialize after a backend crash. Leaving this value set to true is normally the best way to maximize the availability of the database. 
However, in some circumstances, such as when - <productname>PostgreSQL</> is being invoked by clusterware, it may be + <productname>PostgreSQL</productname> is being invoked by clusterware, it may be useful to disable the restart so that the clusterware can gain control and take any actions it deems appropriate. </para> @@ -7742,10 +7742,10 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <title>Preset Options</title> <para> - The following <quote>parameters</> are read-only, and are determined + The following <quote>parameters</quote> are read-only, and are determined when <productname>PostgreSQL</productname> is compiled or when it is installed. As such, they have been excluded from the sample - <filename>postgresql.conf</> file. These options report + <filename>postgresql.conf</filename> file. These options report various aspects of <productname>PostgreSQL</productname> behavior that might be of interest to certain applications, particularly administrative front-ends. @@ -7756,13 +7756,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-block-size" xreflabel="block_size"> <term><varname>block_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>block_size</> configuration parameter</primary> + <primary><varname>block_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Reports the size of a disk block. It is determined by the value - of <literal>BLCKSZ</> when building the server. The default + of <literal>BLCKSZ</literal> when building the server. The default value is 8192 bytes. The meaning of some configuration variables (such as <xref linkend="guc-shared-buffers">) is influenced by <varname>block_size</varname>. 
See <xref @@ -7774,7 +7774,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-data-checksums" xreflabel="data_checksums"> <term><varname>data_checksums</varname> (<type>boolean</type>) <indexterm> - <primary><varname>data_checksums</> configuration parameter</primary> + <primary><varname>data_checksums</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7788,7 +7788,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-debug-assertions" xreflabel="debug_assertions"> <term><varname>debug_assertions</varname> (<type>boolean</type>) <indexterm> - <primary><varname>debug_assertions</> configuration parameter</primary> + <primary><varname>debug_assertions</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7808,13 +7808,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-integer-datetimes" xreflabel="integer_datetimes"> <term><varname>integer_datetimes</varname> (<type>boolean</type>) <indexterm> - <primary><varname>integer_datetimes</> configuration parameter</primary> + <primary><varname>integer_datetimes</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> - Reports whether <productname>PostgreSQL</> was built with support for - 64-bit-integer dates and times. As of <productname>PostgreSQL</> 10, + Reports whether <productname>PostgreSQL</productname> was built with support for + 64-bit-integer dates and times. As of <productname>PostgreSQL</productname> 10, this is always <literal>on</literal>. 
</para> </listitem> @@ -7823,7 +7823,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-lc-collate" xreflabel="lc_collate"> <term><varname>lc_collate</varname> (<type>string</type>) <indexterm> - <primary><varname>lc_collate</> configuration parameter</primary> + <primary><varname>lc_collate</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7838,7 +7838,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-lc-ctype" xreflabel="lc_ctype"> <term><varname>lc_ctype</varname> (<type>string</type>) <indexterm> - <primary><varname>lc_ctype</> configuration parameter</primary> + <primary><varname>lc_ctype</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -7855,13 +7855,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-max-function-args" xreflabel="max_function_args"> <term><varname>max_function_args</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_function_args</> configuration parameter</primary> + <primary><varname>max_function_args</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Reports the maximum number of function arguments. It is determined by - the value of <literal>FUNC_MAX_ARGS</> when building the server. The + the value of <literal>FUNC_MAX_ARGS</literal> when building the server. The default value is 100 arguments. 
</para> </listitem> @@ -7870,14 +7870,14 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-max-identifier-length" xreflabel="max_identifier_length"> <term><varname>max_identifier_length</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_identifier_length</> configuration parameter</primary> + <primary><varname>max_identifier_length</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Reports the maximum identifier length. It is determined as one - less than the value of <literal>NAMEDATALEN</> when building - the server. The default value of <literal>NAMEDATALEN</> is + less than the value of <literal>NAMEDATALEN</literal> when building + the server. The default value of <literal>NAMEDATALEN</literal> is 64; therefore the default <varname>max_identifier_length</varname> is 63 bytes, which can be less than 63 characters when using multibyte encodings. @@ -7888,13 +7888,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-max-index-keys" xreflabel="max_index_keys"> <term><varname>max_index_keys</varname> (<type>integer</type>) <indexterm> - <primary><varname>max_index_keys</> configuration parameter</primary> + <primary><varname>max_index_keys</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Reports the maximum number of index keys. It is determined by - the value of <literal>INDEX_MAX_KEYS</> when building the server. The + the value of <literal>INDEX_MAX_KEYS</literal> when building the server. The default value is 32 keys. 
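The bytes-versus-characters caveat for `max_identifier_length` above can be made concrete with a little arithmetic (the CJK example character is an illustrative assumption):

```python
# NAMEDATALEN defaults to 64, so identifiers are truncated to 63 *bytes*.
NAMEDATALEN = 64
max_identifier_bytes = NAMEDATALEN - 1
assert max_identifier_bytes == 63

# With a multibyte server encoding that can be far fewer characters,
# e.g. each CJK character occupies 3 bytes in UTF-8:
assert len("日".encode("utf-8")) == 3
assert max_identifier_bytes // 3 == 21   # at most 21 such characters fit
```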
</para> </listitem> @@ -7903,16 +7903,16 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-segment-size" xreflabel="segment_size"> <term><varname>segment_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>segment_size</> configuration parameter</primary> + <primary><varname>segment_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Reports the number of blocks (pages) that can be stored within a file - segment. It is determined by the value of <literal>RELSEG_SIZE</> + segment. It is determined by the value of <literal>RELSEG_SIZE</literal> when building the server. The maximum size of a segment file in bytes - is equal to <varname>segment_size</> multiplied by - <varname>block_size</>; by default this is 1GB. + is equal to <varname>segment_size</varname> multiplied by + <varname>block_size</varname>; by default this is 1GB. </para> </listitem> </varlistentry> @@ -7920,9 +7920,9 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-server-encoding" xreflabel="server_encoding"> <term><varname>server_encoding</varname> (<type>string</type>) <indexterm> - <primary><varname>server_encoding</> configuration parameter</primary> + <primary><varname>server_encoding</varname> configuration parameter</primary> </indexterm> - <indexterm><primary>character set</></> + <indexterm><primary>character set</primary></indexterm> </term> <listitem> <para> @@ -7937,13 +7937,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-server-version" xreflabel="server_version"> <term><varname>server_version</varname> (<type>string</type>) <indexterm> - <primary><varname>server_version</> configuration parameter</primary> + <primary><varname>server_version</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Reports the version number of the server. 
It is determined by the - value of <literal>PG_VERSION</> when building the server. + value of <literal>PG_VERSION</literal> when building the server. </para> </listitem> </varlistentry> @@ -7951,13 +7951,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-server-version-num" xreflabel="server_version_num"> <term><varname>server_version_num</varname> (<type>integer</type>) <indexterm> - <primary><varname>server_version_num</> configuration parameter</primary> + <primary><varname>server_version_num</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Reports the version number of the server as an integer. It is determined - by the value of <literal>PG_VERSION_NUM</> when building the server. + by the value of <literal>PG_VERSION_NUM</literal> when building the server. </para> </listitem> </varlistentry> @@ -7965,13 +7965,13 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-wal-block-size" xreflabel="wal_block_size"> <term><varname>wal_block_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_block_size</> configuration parameter</primary> + <primary><varname>wal_block_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Reports the size of a WAL disk block. It is determined by the value - of <literal>XLOG_BLCKSZ</> when building the server. The default value + of <literal>XLOG_BLCKSZ</literal> when building the server. The default value is 8192 bytes. 
</para> </listitem> @@ -7980,14 +7980,14 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-wal-segment-size" xreflabel="wal_segment_size"> <term><varname>wal_segment_size</varname> (<type>integer</type>) <indexterm> - <primary><varname>wal_segment_size</> configuration parameter</primary> + <primary><varname>wal_segment_size</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Reports the number of blocks (pages) in a WAL segment file. The total size of a WAL segment file in bytes is equal to - <varname>wal_segment_size</> multiplied by <varname>wal_block_size</>; + <varname>wal_segment_size</varname> multiplied by <varname>wal_block_size</varname>; by default this is 16MB. See <xref linkend="wal-configuration"> for more information. </para> @@ -8010,12 +8010,12 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <para> Custom options have two-part names: an extension name, then a dot, then the parameter name proper, much like qualified names in SQL. An example - is <literal>plpgsql.variable_conflict</>. + is <literal>plpgsql.variable_conflict</literal>. </para> <para> Because custom options may need to be set in processes that have not - loaded the relevant extension module, <productname>PostgreSQL</> + loaded the relevant extension module, <productname>PostgreSQL</productname> will accept a setting for any two-part parameter name. Such variables are treated as placeholders and have no function until the module that defines them is loaded. When an extension module is loaded, it will add @@ -8034,7 +8034,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' to assist with recovery of severely damaged databases. There should be no reason to use them on a production database. As such, they have been excluded from the sample - <filename>postgresql.conf</> file. Note that many of these + <filename>postgresql.conf</filename> file. 
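The segment-size products quoted in the paragraphs above multiply out as follows. The block counts are derived here from the stated 1GB and 16MB defaults, not read from a build, so treat them as assumptions:

```python
# Default build-time sizes from the surrounding docs, multiplied out.
block_size = 8192             # BLCKSZ, bytes per data block
segment_size = 131072         # blocks per data-file segment (derived: 1GB / 8192)
wal_block_size = 8192         # XLOG_BLCKSZ, bytes per WAL block
wal_segment_size = 2048       # blocks per WAL segment (derived: 16MB / 8192)

assert segment_size * block_size == 1024**3          # 1GB data segment
assert wal_segment_size * wal_block_size == 16 * 1024**2   # 16MB WAL segment
```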
Note that many of these parameters require special source compilation flags to work at all. </para> @@ -8073,7 +8073,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-post-auth-delay" xreflabel="post_auth_delay"> <term><varname>post_auth_delay</varname> (<type>integer</type>) <indexterm> - <primary><varname>post_auth_delay</> configuration parameter</primary> + <primary><varname>post_auth_delay</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8090,7 +8090,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-pre-auth-delay" xreflabel="pre_auth_delay"> <term><varname>pre_auth_delay</varname> (<type>integer</type>) <indexterm> - <primary><varname>pre_auth_delay</> configuration parameter</primary> + <primary><varname>pre_auth_delay</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8100,7 +8100,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' authentication procedure. This is intended to give developers an opportunity to attach to the server process with a debugger to trace down misbehavior in authentication. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. 
</para> </listitem> @@ -8109,7 +8109,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-trace-notify" xreflabel="trace_notify"> <term><varname>trace_notify</varname> (<type>boolean</type>) <indexterm> - <primary><varname>trace_notify</> configuration parameter</primary> + <primary><varname>trace_notify</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8127,7 +8127,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-trace-recovery-messages" xreflabel="trace_recovery_messages"> <term><varname>trace_recovery_messages</varname> (<type>enum</type>) <indexterm> - <primary><varname>trace_recovery_messages</> configuration parameter</primary> + <primary><varname>trace_recovery_messages</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8136,15 +8136,15 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' would not be logged. This parameter allows the user to override the normal setting of <xref linkend="guc-log-min-messages">, but only for specific messages. This is intended for use in debugging Hot Standby. - Valid values are <literal>DEBUG5</>, <literal>DEBUG4</>, - <literal>DEBUG3</>, <literal>DEBUG2</>, <literal>DEBUG1</>, and - <literal>LOG</>. The default, <literal>LOG</>, does not affect + Valid values are <literal>DEBUG5</literal>, <literal>DEBUG4</literal>, + <literal>DEBUG3</literal>, <literal>DEBUG2</literal>, <literal>DEBUG1</literal>, and + <literal>LOG</literal>. The default, <literal>LOG</literal>, does not affect logging decisions at all. 
The other values cause recovery-related debug messages of that priority or higher to be logged as though they - had <literal>LOG</> priority; for common settings of - <varname>log_min_messages</> this results in unconditionally sending + had <literal>LOG</literal> priority; for common settings of + <varname>log_min_messages</varname> this results in unconditionally sending them to the server log. - This parameter can only be set in the <filename>postgresql.conf</> + This parameter can only be set in the <filename>postgresql.conf</filename> file or on the server command line. </para> </listitem> @@ -8153,7 +8153,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry id="guc-trace-sort" xreflabel="trace_sort"> <term><varname>trace_sort</varname> (<type>boolean</type>) <indexterm> - <primary><varname>trace_sort</> configuration parameter</primary> + <primary><varname>trace_sort</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8169,7 +8169,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir' <varlistentry> <term><varname>trace_locks</varname> (<type>boolean</type>) <indexterm> - <primary><varname>trace_locks</> configuration parameter</primary> + <primary><varname>trace_locks</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8210,7 +8210,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry> <term><varname>trace_lwlocks</varname> (<type>boolean</type>) <indexterm> - <primary><varname>trace_lwlocks</> configuration parameter</primary> + <primary><varname>trace_lwlocks</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8230,7 +8230,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry> <term><varname>trace_userlocks</varname> (<type>boolean</type>) <indexterm> - <primary><varname>trace_userlocks</> configuration parameter</primary> + 
<primary><varname>trace_userlocks</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8249,7 +8249,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry> <term><varname>trace_lock_oidmin</varname> (<type>integer</type>) <indexterm> - <primary><varname>trace_lock_oidmin</> configuration parameter</primary> + <primary><varname>trace_lock_oidmin</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8268,7 +8268,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry> <term><varname>trace_lock_table</varname> (<type>integer</type>) <indexterm> - <primary><varname>trace_lock_table</> configuration parameter</primary> + <primary><varname>trace_lock_table</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8286,7 +8286,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry> <term><varname>debug_deadlocks</varname> (<type>boolean</type>) <indexterm> - <primary><varname>debug_deadlocks</> configuration parameter</primary> + <primary><varname>debug_deadlocks</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8305,7 +8305,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry> <term><varname>log_btree_build_stats</varname> (<type>boolean</type>) <indexterm> - <primary><varname>log_btree_build_stats</> configuration parameter</primary> + <primary><varname>log_btree_build_stats</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8324,7 +8324,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry id="guc-wal-consistency-checking" xreflabel="wal_consistency_checking"> <term><varname>wal_consistency_checking</varname> (<type>string</type>) <indexterm> - <primary><varname>wal_consistency_checking</> configuration parameter</primary> + 
<primary><varname>wal_consistency_checking</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8344,10 +8344,10 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) the feature. It can be set to <literal>all</literal> to check all records, or to a comma-separated list of resource managers to check only records originating from those resource managers. Currently, - the supported resource managers are <literal>heap</>, - <literal>heap2</>, <literal>btree</>, <literal>hash</>, - <literal>gin</>, <literal>gist</>, <literal>sequence</>, - <literal>spgist</>, <literal>brin</>, and <literal>generic</>. Only + the supported resource managers are <literal>heap</literal>, + <literal>heap2</literal>, <literal>btree</literal>, <literal>hash</literal>, + <literal>gin</literal>, <literal>gist</literal>, <literal>sequence</literal>, + <literal>spgist</literal>, <literal>brin</literal>, and <literal>generic</literal>. Only superusers can change this setting. 
</para> </listitem> @@ -8356,7 +8356,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry id="guc-wal-debug" xreflabel="wal_debug"> <term><varname>wal_debug</varname> (<type>boolean</type>) <indexterm> - <primary><varname>wal_debug</> configuration parameter</primary> + <primary><varname>wal_debug</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8372,7 +8372,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry id="guc-ignore-checksum-failure" xreflabel="ignore_checksum_failure"> <term><varname>ignore_checksum_failure</varname> (<type>boolean</type>) <indexterm> - <primary><varname>ignore_checksum_failure</> configuration parameter</primary> + <primary><varname>ignore_checksum_failure</varname> configuration parameter</primary> </indexterm> </term> <listitem> @@ -8381,15 +8381,15 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) </para> <para> Detection of a checksum failure during a read normally causes - <productname>PostgreSQL</> to report an error, aborting the current - transaction. Setting <varname>ignore_checksum_failure</> to on causes + <productname>PostgreSQL</productname> to report an error, aborting the current + transaction. Setting <varname>ignore_checksum_failure</varname> to on causes the system to ignore the failure (but still report a warning), and continue processing. This behavior may <emphasis>cause crashes, propagate - or hide corruption, or other serious problems</>. However, it may allow + or hide corruption, or other serious problems</emphasis>. However, it may allow you to get past the error and retrieve undamaged tuples that might still be present in the table if the block header is still sane. If the header is corrupt an error will be reported even if this option is enabled. The - default setting is <literal>off</>, and it can only be changed by a superuser. 
+ default setting is <literal>off</literal>, and it can only be changed by a superuser. </para> </listitem> </varlistentry> @@ -8397,16 +8397,16 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <varlistentry id="guc-zero-damaged-pages" xreflabel="zero_damaged_pages"> <term><varname>zero_damaged_pages</varname> (<type>boolean</type>) <indexterm> - <primary><varname>zero_damaged_pages</> configuration parameter</primary> + <primary><varname>zero_damaged_pages</varname> configuration parameter</primary> </indexterm> </term> <listitem> <para> Detection of a damaged page header normally causes - <productname>PostgreSQL</> to report an error, aborting the current - transaction. Setting <varname>zero_damaged_pages</> to on causes + <productname>PostgreSQL</productname> to report an error, aborting the current + transaction. Setting <varname>zero_damaged_pages</varname> to on causes the system to instead report a warning, zero out the damaged - page in memory, and continue processing. This behavior <emphasis>will destroy data</>, + page in memory, and continue processing. This behavior <emphasis>will destroy data</emphasis>, namely all the rows on the damaged page. However, it does allow you to get past the error and retrieve rows from any undamaged pages that might be present in the table. It is useful for recovering data if @@ -8415,7 +8415,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) data from the damaged pages of a table. Zeroed-out pages are not forced to disk so it is recommended to recreate the table or the index before turning this parameter off again. The - default setting is <literal>off</>, and it can only be changed + default setting is <literal>off</literal>, and it can only be changed by a superuser. 
</para> </listitem> @@ -8447,15 +8447,15 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <tbody> <row> <entry><option>-B <replaceable>x</replaceable></option></entry> - <entry><literal>shared_buffers = <replaceable>x</replaceable></></entry> + <entry><literal>shared_buffers = <replaceable>x</replaceable></literal></entry> </row> <row> <entry><option>-d <replaceable>x</replaceable></option></entry> - <entry><literal>log_min_messages = DEBUG<replaceable>x</replaceable></></entry> + <entry><literal>log_min_messages = DEBUG<replaceable>x</replaceable></literal></entry> </row> <row> <entry><option>-e</option></entry> - <entry><literal>datestyle = euro</></entry> + <entry><literal>datestyle = euro</literal></entry> </row> <row> <entry> @@ -8464,69 +8464,69 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1) <option>-fs</option>, <option>-ft</option> </entry> <entry> - <literal>enable_bitmapscan = off</>, - <literal>enable_hashjoin = off</>, - <literal>enable_indexscan = off</>, - <literal>enable_mergejoin = off</>, - <literal>enable_nestloop = off</>, - <literal>enable_indexonlyscan = off</>, - <literal>enable_seqscan = off</>, - <literal>enable_tidscan = off</> + <literal>enable_bitmapscan = off</literal>, + <literal>enable_hashjoin = off</literal>, + <literal>enable_indexscan = off</literal>, + <literal>enable_mergejoin = off</literal>, + <literal>enable_nestloop = off</literal>, + <literal>enable_indexonlyscan = off</literal>, + <literal>enable_seqscan = off</literal>, + <literal>enable_tidscan = off</literal> </entry> </row> <row> <entry><option>-F</option></entry> - <entry><literal>fsync = off</></entry> + <entry><literal>fsync = off</literal></entry> </row> <row> <entry><option>-h <replaceable>x</replaceable></option></entry> - <entry><literal>listen_addresses = <replaceable>x</replaceable></></entry> + <entry><literal>listen_addresses = <replaceable>x</replaceable></literal></entry> </row> <row> 
<entry><option>-i</option></entry> - <entry><literal>listen_addresses = '*'</></entry> + <entry><literal>listen_addresses = '*'</literal></entry> </row> <row> <entry><option>-k <replaceable>x</replaceable></option></entry> - <entry><literal>unix_socket_directories = <replaceable>x</replaceable></></entry> + <entry><literal>unix_socket_directories = <replaceable>x</replaceable></literal></entry> </row> <row> <entry><option>-l</option></entry> - <entry><literal>ssl = on</></entry> + <entry><literal>ssl = on</literal></entry> </row> <row> <entry><option>-N <replaceable>x</replaceable></option></entry> - <entry><literal>max_connections = <replaceable>x</replaceable></></entry> + <entry><literal>max_connections = <replaceable>x</replaceable></literal></entry> </row> <row> <entry><option>-O</option></entry> - <entry><literal>allow_system_table_mods = on</></entry> + <entry><literal>allow_system_table_mods = on</literal></entry> </row> <row> <entry><option>-p <replaceable>x</replaceable></option></entry> - <entry><literal>port = <replaceable>x</replaceable></></entry> + <entry><literal>port = <replaceable>x</replaceable></literal></entry> </row> <row> <entry><option>-P</option></entry> - <entry><literal>ignore_system_indexes = on</></entry> + <entry><literal>ignore_system_indexes = on</literal></entry> </row> <row> <entry><option>-s</option></entry> - <entry><literal>log_statement_stats = on</></entry> + <entry><literal>log_statement_stats = on</literal></entry> </row> <row> <entry><option>-S <replaceable>x</replaceable></option></entry> - <entry><literal>work_mem = <replaceable>x</replaceable></></entry> + <entry><literal>work_mem = <replaceable>x</replaceable></literal></entry> </row> <row> <entry><option>-tpa</option>, <option>-tpl</option>, <option>-te</option></entry> - <entry><literal>log_parser_stats = on</>, - <literal>log_planner_stats = on</>, - <literal>log_executor_stats = on</></entry> + <entry><literal>log_parser_stats = on</literal>, + 
<literal>log_planner_stats = on</literal>, + <literal>log_executor_stats = on</literal></entry> </row> <row> <entry><option>-W <replaceable>x</replaceable></option></entry> - <entry><literal>post_auth_delay = <replaceable>x</replaceable></></entry> + <entry><literal>post_auth_delay = <replaceable>x</replaceable></literal></entry> </row> </tbody> </tgroup> diff --git a/doc/src/sgml/contrib-spi.sgml b/doc/src/sgml/contrib-spi.sgml index 3287c18d27d..32c7105cf64 100644 --- a/doc/src/sgml/contrib-spi.sgml +++ b/doc/src/sgml/contrib-spi.sgml @@ -9,7 +9,7 @@ </indexterm> <para> - The <application>spi</> module provides several workable examples + The <application>spi</application> module provides several workable examples of using SPI and triggers. While these functions are of some value in their own right, they are even more useful as examples to modify for your own purposes. The functions are general enough to be used @@ -26,15 +26,15 @@ <title>refint — Functions for Implementing Referential Integrity</title> <para> - <function>check_primary_key()</> and - <function>check_foreign_key()</> are used to check foreign key constraints. + <function>check_primary_key()</function> and + <function>check_foreign_key()</function> are used to check foreign key constraints. (This functionality is long since superseded by the built-in foreign key mechanism, of course, but the module is still useful as an example.) </para> <para> - <function>check_primary_key()</> checks the referencing table. - To use, create a <literal>BEFORE INSERT OR UPDATE</> trigger using this + <function>check_primary_key()</function> checks the referencing table. + To use, create a <literal>BEFORE INSERT OR UPDATE</literal> trigger using this function on a table referencing another table. 
Specify as the trigger arguments: the referencing table's column name(s) which form the foreign key, the referenced table name, and the column names in the referenced table @@ -43,14 +43,14 @@ </para> <para> - <function>check_foreign_key()</> checks the referenced table. - To use, create a <literal>BEFORE DELETE OR UPDATE</> trigger using this + <function>check_foreign_key()</function> checks the referenced table. + To use, create a <literal>BEFORE DELETE OR UPDATE</literal> trigger using this function on a table referenced by other table(s). Specify as the trigger arguments: the number of referencing tables for which the function has to perform checking, the action if a referencing key is found - (<literal>cascade</> — to delete the referencing row, - <literal>restrict</> — to abort transaction if referencing keys - exist, <literal>setnull</> — to set referencing key fields to null), + (<literal>cascade</literal> — to delete the referencing row, + <literal>restrict</literal> — to abort transaction if referencing keys + exist, <literal>setnull</literal> — to set referencing key fields to null), the triggered table's column names which form the primary/unique key, then the referencing table name and column names (repeated for as many referencing tables as were specified by first argument). Note that the @@ -59,7 +59,7 @@ </para> <para> - There are examples in <filename>refint.example</>. + There are examples in <filename>refint.example</filename>. </para> </sect2> @@ -67,10 +67,10 @@ <title>timetravel — Functions for Implementing Time Travel</title> <para> - Long ago, <productname>PostgreSQL</> had a built-in time travel feature + Long ago, <productname>PostgreSQL</productname> had a built-in time travel feature that kept the insert and delete times for each tuple. This can be emulated using these functions. 
To use these functions, - you must add to a table two columns of <type>abstime</> type to store + you must add to a table two columns of <type>abstime</type> type to store the date when a tuple was inserted (start_date) and changed/deleted (stop_date): @@ -89,7 +89,7 @@ CREATE TABLE mytab ( <para> When a new row is inserted, start_date should normally be set to - current time, and stop_date to <literal>infinity</>. The trigger + current time, and stop_date to <literal>infinity</literal>. The trigger will automatically substitute these values if the inserted data contains nulls in these columns. Generally, inserting explicit non-null data in these columns should only be done when re-loading @@ -97,7 +97,7 @@ CREATE TABLE mytab ( </para> <para> - Tuples with stop_date equal to <literal>infinity</> are <quote>valid + Tuples with stop_date equal to <literal>infinity</literal> are <quote>valid now</quote>, and can be modified. Tuples with a finite stop_date cannot be modified anymore — the trigger will prevent it. (If you need to do that, you can turn off time travel as shown below.) @@ -107,7 +107,7 @@ CREATE TABLE mytab ( For a modifiable row, on update only the stop_date in the tuple being updated will be changed (to current time) and a new tuple with the modified data will be inserted. Start_date in this new tuple will be set to current - time and stop_date to <literal>infinity</>. + time and stop_date to <literal>infinity</literal>. </para> <para> @@ -117,29 +117,29 @@ CREATE TABLE mytab ( <para> To query for tuples <quote>valid now</quote>, include - <literal>stop_date = 'infinity'</> in the query's WHERE condition. + <literal>stop_date = 'infinity'</literal> in the query's WHERE condition. (You might wish to incorporate that in a view.) Similarly, you can query for tuples valid at any past time with suitable conditions on start_date and stop_date. </para> <para> - <function>timetravel()</> is the general trigger function that supports - this behavior. 
Create a <literal>BEFORE INSERT OR UPDATE OR DELETE</> + <function>timetravel()</function> is the general trigger function that supports + this behavior. Create a <literal>BEFORE INSERT OR UPDATE OR DELETE</literal> trigger using this function on each time-traveled table. Specify two trigger arguments: the actual names of the start_date and stop_date columns. Optionally, you can specify one to three more arguments, which must refer - to columns of type <type>text</>. The trigger will store the name of + to columns of type <type>text</type>. The trigger will store the name of the current user into the first of these columns during INSERT, the second column during UPDATE, and the third during DELETE. </para> <para> - <function>set_timetravel()</> allows you to turn time-travel on or off for + <function>set_timetravel()</function> allows you to turn time-travel on or off for a table. - <literal>set_timetravel('mytab', 1)</> will turn TT ON for table <literal>mytab</>. - <literal>set_timetravel('mytab', 0)</> will turn TT OFF for table <literal>mytab</>. + <literal>set_timetravel('mytab', 1)</literal> will turn TT ON for table <literal>mytab</literal>. + <literal>set_timetravel('mytab', 0)</literal> will turn TT OFF for table <literal>mytab</literal>. In both cases the old status is reported. While TT is off, you can modify the start_date and stop_date columns freely. Note that the on/off status is local to the current database session — fresh sessions will @@ -147,12 +147,12 @@ CREATE TABLE mytab ( </para> <para> - <function>get_timetravel()</> returns the TT state for a table without + <function>get_timetravel()</function> returns the TT state for a table without changing it. </para> <para> - There is an example in <filename>timetravel.example</>. + There is an example in <filename>timetravel.example</filename>. 
</para> </sect2> @@ -160,17 +160,17 @@ CREATE TABLE mytab ( <title>autoinc — Functions for Autoincrementing Fields</title> <para> - <function>autoinc()</> is a trigger that stores the next value of + <function>autoinc()</function> is a trigger that stores the next value of a sequence into an integer field. This has some overlap with the - built-in <quote>serial column</> feature, but it is not the same: - <function>autoinc()</> will override attempts to substitute a + built-in <quote>serial column</quote> feature, but it is not the same: + <function>autoinc()</function> will override attempts to substitute a different field value during inserts, and optionally it can be used to increment the field during updates, too. </para> <para> - To use, create a <literal>BEFORE INSERT</> (or optionally <literal>BEFORE - INSERT OR UPDATE</>) trigger using this function. Specify two + To use, create a <literal>BEFORE INSERT</literal> (or optionally <literal>BEFORE + INSERT OR UPDATE</literal>) trigger using this function. Specify two trigger arguments: the name of the integer column to be modified, and the name of the sequence object that will supply values. (Actually, you can specify any number of pairs of such names, if @@ -178,7 +178,7 @@ CREATE TABLE mytab ( </para> <para> - There is an example in <filename>autoinc.example</>. + There is an example in <filename>autoinc.example</filename>. </para> </sect2> @@ -187,19 +187,19 @@ CREATE TABLE mytab ( <title>insert_username — Functions for Tracking Who Changed a Table</title> <para> - <function>insert_username()</> is a trigger that stores the current + <function>insert_username()</function> is a trigger that stores the current user's name into a text field. This can be useful for tracking who last modified a particular row within a table. 
</para> <para> - To use, create a <literal>BEFORE INSERT</> and/or <literal>UPDATE</> + To use, create a <literal>BEFORE INSERT</literal> and/or <literal>UPDATE</literal> trigger using this function. Specify a single trigger argument: the name of the text column to be modified. </para> <para> - There is an example in <filename>insert_username.example</>. + There is an example in <filename>insert_username.example</filename>. </para> </sect2> @@ -208,21 +208,21 @@ CREATE TABLE mytab ( <title>moddatetime — Functions for Tracking Last Modification Time</title> <para> - <function>moddatetime()</> is a trigger that stores the current - time into a <type>timestamp</> field. This can be useful for tracking + <function>moddatetime()</function> is a trigger that stores the current + time into a <type>timestamp</type> field. This can be useful for tracking the last modification time of a particular row within a table. </para> <para> - To use, create a <literal>BEFORE UPDATE</> + To use, create a <literal>BEFORE UPDATE</literal> trigger using this function. Specify a single trigger argument: the name of the column to be modified. - The column must be of type <type>timestamp</> or <type>timestamp with - time zone</>. + The column must be of type <type>timestamp</type> or <type>timestamp with + time zone</type>. </para> <para> - There is an example in <filename>moddatetime.example</>. + There is an example in <filename>moddatetime.example</filename>. </para> </sect2> diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml index f32b8a81a21..7dd203e9cdc 100644 --- a/doc/src/sgml/contrib.sgml +++ b/doc/src/sgml/contrib.sgml @@ -6,7 +6,7 @@ <para> This appendix and the next one contain information regarding the modules that can be found in the <literal>contrib</literal> directory of the - <productname>PostgreSQL</> distribution. + <productname>PostgreSQL</productname> distribution. 
These include porting tools, analysis utilities, and plug-in features that are not part of the core PostgreSQL system, mainly because they address a limited audience or are too experimental @@ -41,54 +41,54 @@ <screen> <userinput>make installcheck</userinput> </screen> - once you have a <productname>PostgreSQL</> server running. + once you have a <productname>PostgreSQL</productname> server running. </para> <para> - If you are using a pre-packaged version of <productname>PostgreSQL</>, + If you are using a pre-packaged version of <productname>PostgreSQL</productname>, these modules are typically made available as a separate subpackage, - such as <literal>postgresql-contrib</>. + such as <literal>postgresql-contrib</literal>. </para> <para> Many modules supply new user-defined functions, operators, or types. To make use of one of these modules, after you have installed the code you need to register the new SQL objects in the database system. - In <productname>PostgreSQL</> 9.1 and later, this is done by executing + In <productname>PostgreSQL</productname> 9.1 and later, this is done by executing a <xref linkend="sql-createextension"> command. In a fresh database, you can simply do <programlisting> -CREATE EXTENSION <replaceable>module_name</>; +CREATE EXTENSION <replaceable>module_name</replaceable>; </programlisting> This command must be run by a database superuser. This registers the new SQL objects in the current database only, so you need to run this command in each database that you want the module's facilities to be available in. Alternatively, run it in - database <literal>template1</> so that the extension will be copied into + database <literal>template1</literal> so that the extension will be copied into subsequently-created databases by default. </para> <para> Many modules allow you to install their objects in a schema of your choice. 
To do that, add <literal>SCHEMA - <replaceable>schema_name</></literal> to the <command>CREATE EXTENSION</> + <replaceable>schema_name</replaceable></literal> to the <command>CREATE EXTENSION</command> command. By default, the objects will be placed in your current creation - target schema, typically <literal>public</>. + target schema, typically <literal>public</literal>. </para> <para> If your database was brought forward by dump and reload from a pre-9.1 - version of <productname>PostgreSQL</>, and you had been using the pre-9.1 + version of <productname>PostgreSQL</productname>, and you had been using the pre-9.1 version of the module in it, you should instead do <programlisting> -CREATE EXTENSION <replaceable>module_name</> FROM unpackaged; +CREATE EXTENSION <replaceable>module_name</replaceable> FROM unpackaged; </programlisting> This will update the pre-9.1 objects of the module into a proper - <firstterm>extension</> object. Future updates to the module will be + <firstterm>extension</firstterm> object. Future updates to the module will be managed by <xref linkend="sql-alterextension">. For more information about extension updates, see <xref linkend="extend-extensions">. @@ -163,7 +163,7 @@ pages. <para> This appendix and the previous one contain information regarding the modules that can be found in the <literal>contrib</literal> directory of the - <productname>PostgreSQL</> distribution. See <xref linkend="contrib"> for + <productname>PostgreSQL</productname> distribution. See <xref linkend="contrib"> for more information about the <literal>contrib</literal> section in general and server extensions and plug-ins found in <literal>contrib</literal> specifically. 
diff --git a/doc/src/sgml/cube.sgml b/doc/src/sgml/cube.sgml index 1ffc40f1a5b..46d8e4eb8fe 100644 --- a/doc/src/sgml/cube.sgml +++ b/doc/src/sgml/cube.sgml @@ -8,7 +8,7 @@ </indexterm> <para> - This module implements a data type <type>cube</> for + This module implements a data type <type>cube</type> for representing multidimensional cubes. </para> @@ -17,8 +17,8 @@ <para> <xref linkend="cube-repr-table"> shows the valid external - representations for the <type>cube</> - type. <replaceable>x</>, <replaceable>y</>, etc. denote + representations for the <type>cube</type> + type. <replaceable>x</replaceable>, <replaceable>y</replaceable>, etc. denote floating-point numbers. </para> @@ -34,43 +34,43 @@ <tbody> <row> - <entry><literal><replaceable>x</></literal></entry> + <entry><literal><replaceable>x</replaceable></literal></entry> <entry>A one-dimensional point (or, zero-length one-dimensional interval) </entry> </row> <row> - <entry><literal>(<replaceable>x</>)</literal></entry> + <entry><literal>(<replaceable>x</replaceable>)</literal></entry> <entry>Same as above</entry> </row> <row> - <entry><literal><replaceable>x1</>,<replaceable>x2</>,...,<replaceable>xn</></literal></entry> + <entry><literal><replaceable>x1</replaceable>,<replaceable>x2</replaceable>,...,<replaceable>xn</replaceable></literal></entry> <entry>A point in n-dimensional space, represented internally as a zero-volume cube </entry> </row> <row> - <entry><literal>(<replaceable>x1</>,<replaceable>x2</>,...,<replaceable>xn</>)</literal></entry> + <entry><literal>(<replaceable>x1</replaceable>,<replaceable>x2</replaceable>,...,<replaceable>xn</replaceable>)</literal></entry> <entry>Same as above</entry> </row> <row> - <entry><literal>(<replaceable>x</>),(<replaceable>y</>)</literal></entry> - <entry>A one-dimensional interval starting at <replaceable>x</> and ending at <replaceable>y</> or vice versa; the + <entry><literal>(<replaceable>x</replaceable>),(<replaceable>y</replaceable>)</literal></entry> 
+ <entry>A one-dimensional interval starting at <replaceable>x</replaceable> and ending at <replaceable>y</replaceable> or vice versa; the order does not matter </entry> </row> <row> - <entry><literal>[(<replaceable>x</>),(<replaceable>y</>)]</literal></entry> + <entry><literal>[(<replaceable>x</replaceable>),(<replaceable>y</replaceable>)]</literal></entry> <entry>Same as above</entry> </row> <row> - <entry><literal>(<replaceable>x1</>,...,<replaceable>xn</>),(<replaceable>y1</>,...,<replaceable>yn</>)</literal></entry> + <entry><literal>(<replaceable>x1</replaceable>,...,<replaceable>xn</replaceable>),(<replaceable>y1</replaceable>,...,<replaceable>yn</replaceable>)</literal></entry> <entry>An n-dimensional cube represented by a pair of its diagonally opposite corners </entry> </row> <row> - <entry><literal>[(<replaceable>x1</>,...,<replaceable>xn</>),(<replaceable>y1</>,...,<replaceable>yn</>)]</literal></entry> + <entry><literal>[(<replaceable>x1</replaceable>,...,<replaceable>xn</replaceable>),(<replaceable>y1</replaceable>,...,<replaceable>yn</replaceable>)]</literal></entry> <entry>Same as above</entry> </row> </tbody> @@ -79,17 +79,17 @@ <para> It does not matter which order the opposite corners of a cube are - entered in. The <type>cube</> functions + entered in. The <type>cube</type> functions automatically swap values if needed to create a uniform - <quote>lower left — upper right</> internal representation. - When the corners coincide, <type>cube</> stores only one corner - along with an <quote>is point</> flag to avoid wasting space. + <quote>lower left — upper right</quote> internal representation. + When the corners coincide, <type>cube</type> stores only one corner + along with an <quote>is point</quote> flag to avoid wasting space. </para> <para> White space is ignored on input, so - <literal>[(<replaceable>x</>),(<replaceable>y</>)]</literal> is the same as - <literal>[ ( <replaceable>x</> ), ( <replaceable>y</> ) ]</literal>. 
+ <literal>[(<replaceable>x</replaceable>),(<replaceable>y</replaceable>)]</literal> is the same as + <literal>[ ( <replaceable>x</replaceable> ), ( <replaceable>y</replaceable> ) ]</literal>. </para> </sect2> @@ -107,7 +107,7 @@ <para> <xref linkend="cube-operators-table"> shows the operators provided for - type <type>cube</>. + type <type>cube</type>. </para> <table id="cube-operators-table"> @@ -123,91 +123,91 @@ <tbody> <row> - <entry><literal>a = b</></entry> - <entry><type>boolean</></entry> + <entry><literal>a = b</literal></entry> + <entry><type>boolean</type></entry> <entry>The cubes a and b are identical.</entry> </row> <row> - <entry><literal>a && b</></entry> - <entry><type>boolean</></entry> + <entry><literal>a && b</literal></entry> + <entry><type>boolean</type></entry> <entry>The cubes a and b overlap.</entry> </row> <row> - <entry><literal>a @> b</></entry> - <entry><type>boolean</></entry> + <entry><literal>a @> b</literal></entry> + <entry><type>boolean</type></entry> <entry>The cube a contains the cube b.</entry> </row> <row> - <entry><literal>a <@ b</></entry> - <entry><type>boolean</></entry> + <entry><literal>a <@ b</literal></entry> + <entry><type>boolean</type></entry> <entry>The cube a is contained in the cube b.</entry> </row> <row> - <entry><literal>a < b</></entry> - <entry><type>boolean</></entry> + <entry><literal>a < b</literal></entry> + <entry><type>boolean</type></entry> <entry>The cube a is less than the cube b.</entry> </row> <row> - <entry><literal>a <= b</></entry> - <entry><type>boolean</></entry> + <entry><literal>a <= b</literal></entry> + <entry><type>boolean</type></entry> <entry>The cube a is less than or equal to the cube b.</entry> </row> <row> - <entry><literal>a > b</></entry> - <entry><type>boolean</></entry> + <entry><literal>a > b</literal></entry> + <entry><type>boolean</type></entry> <entry>The cube a is greater than the cube b.</entry> </row> <row> - <entry><literal>a >= b</></entry> - 
<entry><type>boolean</></entry> + <entry><literal>a >= b</literal></entry> + <entry><type>boolean</type></entry> <entry>The cube a is greater than or equal to the cube b.</entry> </row> <row> - <entry><literal>a <> b</></entry> - <entry><type>boolean</></entry> + <entry><literal>a <> b</literal></entry> + <entry><type>boolean</type></entry> <entry>The cube a is not equal to the cube b.</entry> </row> <row> - <entry><literal>a -> n</></entry> - <entry><type>float8</></entry> - <entry>Get <replaceable>n</>-th coordinate of cube (counting from 1).</entry> + <entry><literal>a -> n</literal></entry> + <entry><type>float8</type></entry> + <entry>Get <replaceable>n</replaceable>-th coordinate of cube (counting from 1).</entry> </row> <row> - <entry><literal>a ~> n</></entry> - <entry><type>float8</></entry> + <entry><literal>a ~> n</literal></entry> + <entry><type>float8</type></entry> <entry> - Get <replaceable>n</>-th coordinate in <quote>normalized</> cube + Get <replaceable>n</replaceable>-th coordinate in <quote>normalized</quote> cube representation, in which the coordinates have been rearranged into - the form <quote>lower left — upper right</>; that is, the + the form <quote>lower left — upper right</quote>; that is, the smaller endpoint along each dimension appears first. 
</entry> </row> <row> - <entry><literal>a <-> b</></entry> - <entry><type>float8</></entry> + <entry><literal>a <-> b</literal></entry> + <entry><type>float8</type></entry> <entry>Euclidean distance between a and b.</entry> </row> <row> - <entry><literal>a <#> b</></entry> - <entry><type>float8</></entry> + <entry><literal>a <#> b</literal></entry> + <entry><type>float8</type></entry> <entry>Taxicab (L-1 metric) distance between a and b.</entry> </row> <row> - <entry><literal>a <=> b</></entry> - <entry><type>float8</></entry> + <entry><literal>a <=> b</literal></entry> + <entry><type>float8</type></entry> <entry>Chebyshev (L-inf metric) distance between a and b.</entry> </row> @@ -216,35 +216,35 @@ </table> <para> - (Before PostgreSQL 8.2, the containment operators <literal>@></> and <literal><@</> were - respectively called <literal>@</> and <literal>~</>. These names are still available, but are + (Before PostgreSQL 8.2, the containment operators <literal>@></literal> and <literal><@</literal> were + respectively called <literal>@</literal> and <literal>~</literal>. These names are still available, but are deprecated and will eventually be retired. Notice that the old names are reversed from the convention formerly followed by the core geometric data types!) </para> <para> - The scalar ordering operators (<literal><</>, <literal>>=</>, etc) + The scalar ordering operators (<literal><</literal>, <literal>>=</literal>, etc) do not make a lot of sense for any practical purpose but sorting. These operators first compare the first coordinates, and if those are equal, compare the second coordinates, etc. They exist mainly to support the - b-tree index operator class for <type>cube</>, which can be useful for - example if you would like a UNIQUE constraint on a <type>cube</> column. + b-tree index operator class for <type>cube</type>, which can be useful for + example if you would like a UNIQUE constraint on a <type>cube</type> column. 
</para> <para> - The <filename>cube</> module also provides a GiST index operator class for - <type>cube</> values. - A <type>cube</> GiST index can be used to search for values using the - <literal>=</>, <literal>&&</>, <literal>@></>, and - <literal><@</> operators in <literal>WHERE</> clauses. + The <filename>cube</filename> module also provides a GiST index operator class for + <type>cube</type> values. + A <type>cube</type> GiST index can be used to search for values using the + <literal>=</literal>, <literal>&&</literal>, <literal>@></literal>, and + <literal><@</literal> operators in <literal>WHERE</literal> clauses. </para> <para> - In addition, a <type>cube</> GiST index can be used to find nearest + In addition, a <type>cube</type> GiST index can be used to find nearest neighbors using the metric operators - <literal><-></>, <literal><#></>, and - <literal><=></> in <literal>ORDER BY</> clauses. + <literal><-></literal>, <literal><#></literal>, and + <literal><=></literal> in <literal>ORDER BY</literal> clauses. For example, the nearest neighbor of the 3-D point (0.5, 0.5, 0.5) could be found efficiently with: <programlisting> @@ -253,7 +253,7 @@ SELECT c FROM test ORDER BY c <-> cube(array[0.5,0.5,0.5]) LIMIT 1; </para> <para> - The <literal>~></> operator can also be used in this way to + The <literal>~></literal> operator can also be used in this way to efficiently retrieve the first few values sorted by a selected coordinate. For example, to get the first few cubes ordered by the first coordinate (lower left corner) ascending one could use the following query: @@ -365,7 +365,7 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; <row> <entry><literal>cube_ll_coord(cube, integer)</literal></entry> <entry><type>float8</type></entry> - <entry>Returns the <replaceable>n</>-th coordinate value for the lower + <entry>Returns the <replaceable>n</replaceable>-th coordinate value for the lower left corner of the cube. 
</entry> <entry> @@ -376,7 +376,7 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; <row> <entry><literal>cube_ur_coord(cube, integer)</literal></entry> <entry><type>float8</type></entry> - <entry>Returns the <replaceable>n</>-th coordinate value for the + <entry>Returns the <replaceable>n</replaceable>-th coordinate value for the upper right corner of the cube. </entry> <entry> @@ -412,9 +412,9 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; desired. </entry> <entry> - <literal>cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[2]) == '(3),(7)'</> + <literal>cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[2]) == '(3),(7)'</literal> <literal>cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]) == - '(5,3,1,1),(8,7,6,6)'</> + '(5,3,1,1),(8,7,6,6)'</literal> </entry> </row> @@ -440,24 +440,24 @@ SELECT c FROM test ORDER BY c ~> 3 DESC LIMIT 5; <entry><literal>cube_enlarge(c cube, r double, n integer)</literal></entry> <entry><type>cube</type></entry> <entry>Increases the size of the cube by the specified - radius <replaceable>r</> in at least <replaceable>n</> dimensions. + radius <replaceable>r</replaceable> in at least <replaceable>n</replaceable> dimensions. If the radius is negative the cube is shrunk instead. - All defined dimensions are changed by the radius <replaceable>r</>. - Lower-left coordinates are decreased by <replaceable>r</> and - upper-right coordinates are increased by <replaceable>r</>. If a + All defined dimensions are changed by the radius <replaceable>r</replaceable>. + Lower-left coordinates are decreased by <replaceable>r</replaceable> and + upper-right coordinates are increased by <replaceable>r</replaceable>. If a lower-left coordinate is increased to more than the corresponding - upper-right coordinate (this can only happen when <replaceable>r</> + upper-right coordinate (this can only happen when <replaceable>r</replaceable> < 0) than both coordinates are set to their average. 
- If <replaceable>n</> is greater than the number of defined dimensions - and the cube is being enlarged (<replaceable>r</> > 0), then extra - dimensions are added to make <replaceable>n</> altogether; + If <replaceable>n</replaceable> is greater than the number of defined dimensions + and the cube is being enlarged (<replaceable>r</replaceable> > 0), then extra + dimensions are added to make <replaceable>n</replaceable> altogether; 0 is used as the initial value for the extra coordinates. This function is useful for creating bounding boxes around a point for searching for nearby points. </entry> <entry> <literal>cube_enlarge('(1,2),(3,4)', 0.5, 3) == - '(0.5,1.5,-0.5),(3.5,4.5,0.5)'</> + '(0.5,1.5,-0.5),(3.5,4.5,0.5)'</literal> </entry> </row> </tbody> @@ -523,13 +523,13 @@ t <title>Notes</title> <para> - For examples of usage, see the regression test <filename>sql/cube.sql</>. + For examples of usage, see the regression test <filename>sql/cube.sql</filename>. </para> <para> To make it harder for people to break things, there is a limit of 100 on the number of dimensions of cubes. This is set - in <filename>cubedata.h</> if you need something bigger. + in <filename>cubedata.h</filename> if you need something bigger. </para> </sect2> diff --git a/doc/src/sgml/custom-scan.sgml b/doc/src/sgml/custom-scan.sgml index 9d1ca7bfe16..a46641674fa 100644 --- a/doc/src/sgml/custom-scan.sgml +++ b/doc/src/sgml/custom-scan.sgml @@ -9,9 +9,9 @@ </indexterm> <para> - <productname>PostgreSQL</> supports a set of experimental facilities which + <productname>PostgreSQL</productname> supports a set of experimental facilities which are intended to allow extension modules to add new scan types to the system. 
- Unlike a <link linkend="fdwhandler">foreign data wrapper</>, which is only + Unlike a <link linkend="fdwhandler">foreign data wrapper</link>, which is only responsible for knowing how to scan its own foreign tables, a custom scan provider can provide an alternative method of scanning any relation in the system. Typically, the motivation for writing a custom scan provider will @@ -51,9 +51,9 @@ extern PGDLLIMPORT set_rel_pathlist_hook_type set_rel_pathlist_hook; <para> Although this hook function can be used to examine, modify, or remove paths generated by the core system, a custom scan provider will typically - confine itself to generating <structname>CustomPath</> objects and adding - them to <literal>rel</> using <function>add_path</>. The custom scan - provider is responsible for initializing the <structname>CustomPath</> + confine itself to generating <structname>CustomPath</structname> objects and adding + them to <literal>rel</literal> using <function>add_path</function>. The custom scan + provider is responsible for initializing the <structname>CustomPath</structname> object, which is declared like this: <programlisting> typedef struct CustomPath @@ -68,22 +68,22 @@ typedef struct CustomPath </para> <para> - <structfield>path</> must be initialized as for any other path, including + <structfield>path</structfield> must be initialized as for any other path, including the row-count estimate, start and total cost, and sort ordering provided - by this path. <structfield>flags</> is a bit mask, which should include - <literal>CUSTOMPATH_SUPPORT_BACKWARD_SCAN</> if the custom path can support - a backward scan and <literal>CUSTOMPATH_SUPPORT_MARK_RESTORE</> if it + by this path. <structfield>flags</structfield> is a bit mask, which should include + <literal>CUSTOMPATH_SUPPORT_BACKWARD_SCAN</literal> if the custom path can support + a backward scan and <literal>CUSTOMPATH_SUPPORT_MARK_RESTORE</literal> if it can support mark and restore. 
Both capabilities are optional. - An optional <structfield>custom_paths</> is a list of <structname>Path</> + An optional <structfield>custom_paths</structfield> is a list of <structname>Path</structname> nodes used by this custom-path node; these will be transformed into - <structname>Plan</> nodes by planner. - <structfield>custom_private</> can be used to store the custom path's + <structname>Plan</structname> nodes by planner. + <structfield>custom_private</structfield> can be used to store the custom path's private data. Private data should be stored in a form that can be handled - by <literal>nodeToString</>, so that debugging routines that attempt to - print the custom path will work as designed. <structfield>methods</> must + by <literal>nodeToString</literal>, so that debugging routines that attempt to + print the custom path will work as designed. <structfield>methods</structfield> must point to a (usually statically allocated) object implementing the required custom path methods, of which there is currently only one. The - <structfield>LibraryName</> and <structfield>SymbolName</> fields must also + <structfield>LibraryName</structfield> and <structfield>SymbolName</structfield> fields must also be initialized so that the dynamic loader can resolve them to locate the method table. </para> @@ -93,7 +93,7 @@ typedef struct CustomPath relations, such a path must produce the same output as would normally be produced by the join it replaces. To do this, the join provider should set the following hook, and then within the hook function, - create <structname>CustomPath</> path(s) for the join relation. + create <structname>CustomPath</structname> path(s) for the join relation. <programlisting> typedef void (*set_join_pathlist_hook_type) (PlannerInfo *root, RelOptInfo *joinrel, @@ -122,7 +122,7 @@ Plan *(*PlanCustomPath) (PlannerInfo *root, List *custom_plans); </programlisting> Convert a custom path to a finished plan. 
The return value will generally - be a <literal>CustomScan</> object, which the callback must allocate and + be a <literal>CustomScan</literal> object, which the callback must allocate and initialize. See <xref linkend="custom-scan-plan"> for more details. </para> </sect2> @@ -150,45 +150,45 @@ typedef struct CustomScan </para> <para> - <structfield>scan</> must be initialized as for any other scan, including + <structfield>scan</structfield> must be initialized as for any other scan, including estimated costs, target lists, qualifications, and so on. - <structfield>flags</> is a bit mask with the same meaning as in - <structname>CustomPath</>. - <structfield>custom_plans</> can be used to store child - <structname>Plan</> nodes. - <structfield>custom_exprs</> should be used to + <structfield>flags</structfield> is a bit mask with the same meaning as in + <structname>CustomPath</structname>. + <structfield>custom_plans</structfield> can be used to store child + <structname>Plan</structname> nodes. + <structfield>custom_exprs</structfield> should be used to store expression trees that will need to be fixed up by - <filename>setrefs.c</> and <filename>subselect.c</>, while - <structfield>custom_private</> should be used to store other private data + <filename>setrefs.c</filename> and <filename>subselect.c</filename>, while + <structfield>custom_private</structfield> should be used to store other private data that is only used by the custom scan provider itself. - <structfield>custom_scan_tlist</> can be NIL when scanning a base + <structfield>custom_scan_tlist</structfield> can be NIL when scanning a base relation, indicating that the custom scan returns scan tuples that match the base relation's row type. Otherwise it is a target list describing - the actual scan tuples. <structfield>custom_scan_tlist</> must be + the actual scan tuples. 
<structfield>custom_scan_tlist</structfield> must be provided for joins, and could be provided for scans if the custom scan provider can compute some non-Var expressions. - <structfield>custom_relids</> is set by the core code to the set of + <structfield>custom_relids</structfield> is set by the core code to the set of relations (range table indexes) that this scan node handles; except when this scan is replacing a join, it will have only one member. - <structfield>methods</> must point to a (usually statically allocated) + <structfield>methods</structfield> must point to a (usually statically allocated) object implementing the required custom scan methods, which are further detailed below. </para> <para> - When a <structname>CustomScan</> scans a single relation, - <structfield>scan.scanrelid</> must be the range table index of the table - to be scanned. When it replaces a join, <structfield>scan.scanrelid</> + When a <structname>CustomScan</structname> scans a single relation, + <structfield>scan.scanrelid</structfield> must be the range table index of the table + to be scanned. When it replaces a join, <structfield>scan.scanrelid</structfield> should be zero. </para> <para> - Plan trees must be able to be duplicated using <function>copyObject</>, - so all the data stored within the <quote>custom</> fields must consist of + Plan trees must be able to be duplicated using <function>copyObject</function>, + so all the data stored within the <quote>custom</quote> fields must consist of nodes that that function can handle. Furthermore, custom scan providers cannot substitute a larger structure that embeds - a <structname>CustomScan</> for the structure itself, as would be possible - for a <structname>CustomPath</> or <structname>CustomScanState</>. + a <structname>CustomScan</structname> for the structure itself, as would be possible + for a <structname>CustomPath</structname> or <structname>CustomScanState</structname>. 
</para> <sect2 id="custom-scan-plan-callbacks"> @@ -197,14 +197,14 @@ typedef struct CustomScan <programlisting> Node *(*CreateCustomScanState) (CustomScan *cscan); </programlisting> - Allocate a <structname>CustomScanState</> for this - <structname>CustomScan</>. The actual allocation will often be larger than - required for an ordinary <structname>CustomScanState</>, because many + Allocate a <structname>CustomScanState</structname> for this + <structname>CustomScan</structname>. The actual allocation will often be larger than + required for an ordinary <structname>CustomScanState</structname>, because many providers will wish to embed that as the first field of a larger structure. - The value returned must have the node tag and <structfield>methods</> + The value returned must have the node tag and <structfield>methods</structfield> set appropriately, but other fields should be left as zeroes at this - stage; after <function>ExecInitCustomScan</> performs basic initialization, - the <function>BeginCustomScan</> callback will be invoked to give the + stage; after <function>ExecInitCustomScan</function> performs basic initialization, + the <function>BeginCustomScan</function> callback will be invoked to give the custom scan provider a chance to do whatever else is needed. 
</para> </sect2> @@ -214,8 +214,8 @@ Node *(*CreateCustomScanState) (CustomScan *cscan); <title>Executing Custom Scans</title> <para> - When a <structfield>CustomScan</> is executed, its execution state is - represented by a <structfield>CustomScanState</>, which is declared as + When a <structfield>CustomScan</structfield> is executed, its execution state is + represented by a <structfield>CustomScanState</structfield>, which is declared as follows: <programlisting> typedef struct CustomScanState @@ -228,15 +228,15 @@ typedef struct CustomScanState </para> <para> - <structfield>ss</> is initialized as for any other scan state, + <structfield>ss</structfield> is initialized as for any other scan state, except that if the scan is for a join rather than a base relation, - <literal>ss.ss_currentRelation</> is left NULL. - <structfield>flags</> is a bit mask with the same meaning as in - <structname>CustomPath</> and <structname>CustomScan</>. - <structfield>methods</> must point to a (usually statically allocated) + <literal>ss.ss_currentRelation</literal> is left NULL. + <structfield>flags</structfield> is a bit mask with the same meaning as in + <structname>CustomPath</structname> and <structname>CustomScan</structname>. + <structfield>methods</structfield> must point to a (usually statically allocated) object implementing the required custom scan state methods, which are - further detailed below. Typically, a <structname>CustomScanState</>, which - need not support <function>copyObject</>, will actually be a larger + further detailed below. Typically, a <structname>CustomScanState</structname>, which + need not support <function>copyObject</function>, will actually be a larger structure embedding the above as its first member. </para> @@ -249,8 +249,8 @@ void (*BeginCustomScan) (CustomScanState *node, EState *estate, int eflags); </programlisting> - Complete initialization of the supplied <structname>CustomScanState</>. 
- Standard fields have been initialized by <function>ExecInitCustomScan</>, + Complete initialization of the supplied <structname>CustomScanState</structname>. + Standard fields have been initialized by <function>ExecInitCustomScan</function>, but any private fields should be initialized here. </para> @@ -259,16 +259,16 @@ void (*BeginCustomScan) (CustomScanState *node, TupleTableSlot *(*ExecCustomScan) (CustomScanState *node); </programlisting> Fetch the next scan tuple. If any tuples remain, it should fill - <literal>ps_ResultTupleSlot</> with the next tuple in the current scan + <literal>ps_ResultTupleSlot</literal> with the next tuple in the current scan direction, and then return the tuple slot. If not, - <literal>NULL</> or an empty slot should be returned. + <literal>NULL</literal> or an empty slot should be returned. </para> <para> <programlisting> void (*EndCustomScan) (CustomScanState *node); </programlisting> - Clean up any private data associated with the <literal>CustomScanState</>. + Clean up any private data associated with the <literal>CustomScanState</literal>. This method is required, but it does not need to do anything if there is no associated data or it will be cleaned up automatically. </para> @@ -286,9 +286,9 @@ void (*ReScanCustomScan) (CustomScanState *node); void (*MarkPosCustomScan) (CustomScanState *node); </programlisting> Save the current scan position so that it can subsequently be restored - by the <function>RestrPosCustomScan</> callback. This callback is + by the <function>RestrPosCustomScan</function> callback. This callback is optional, and need only be supplied if the - <literal>CUSTOMPATH_SUPPORT_MARK_RESTORE</> flag is set. + <literal>CUSTOMPATH_SUPPORT_MARK_RESTORE</literal> flag is set. 
</para> <para> @@ -296,9 +296,9 @@ void (*MarkPosCustomScan) (CustomScanState *node); void (*RestrPosCustomScan) (CustomScanState *node); </programlisting> Restore the previous scan position as saved by the - <function>MarkPosCustomScan</> callback. This callback is optional, + <function>MarkPosCustomScan</function> callback. This callback is optional, and need only be supplied if the - <literal>CUSTOMPATH_SUPPORT_MARK_RESTORE</> flag is set. + <literal>CUSTOMPATH_SUPPORT_MARK_RESTORE</literal> flag is set. </para> <para> @@ -320,8 +320,8 @@ void (*InitializeDSMCustomScan) (CustomScanState *node, void *coordinate); </programlisting> Initialize the dynamic shared memory that will be required for parallel - operation. <literal>coordinate</> points to a shared memory area of - size equal to the return value of <function>EstimateDSMCustomScan</>. + operation. <literal>coordinate</literal> points to a shared memory area of + size equal to the return value of <function>EstimateDSMCustomScan</function>. This callback is optional, and need only be supplied if this custom scan provider supports parallel execution. </para> @@ -337,9 +337,9 @@ void (*ReInitializeDSMCustomScan) (CustomScanState *node, This callback is optional, and need only be supplied if this custom scan provider supports parallel execution. Recommended practice is that this callback reset only shared state, - while the <function>ReScanCustomScan</> callback resets only local + while the <function>ReScanCustomScan</function> callback resets only local state. Currently, this callback will be called - before <function>ReScanCustomScan</>, but it's best not to rely on + before <function>ReScanCustomScan</function>, but it's best not to rely on that ordering. 
</para> @@ -350,7 +350,7 @@ void (*InitializeWorkerCustomScan) (CustomScanState *node, void *coordinate); </programlisting> Initialize a parallel worker's local state based on the shared state - set up by the leader during <function>InitializeDSMCustomScan</>. + set up by the leader during <function>InitializeDSMCustomScan</function>. This callback is optional, and need only be supplied if this custom scan provider supports parallel execution. </para> @@ -361,7 +361,7 @@ void (*ShutdownCustomScan) (CustomScanState *node); </programlisting> Release resources when it is anticipated the node will not be executed to completion. This is not called in all cases; sometimes, - <literal>EndCustomScan</> may be called without this function having + <literal>EndCustomScan</literal> may be called without this function having been called first. Since the DSM segment used by parallel query is destroyed just after this callback is invoked, custom scan providers that wish to take some action before the DSM segment goes away should implement @@ -374,9 +374,9 @@ void (*ExplainCustomScan) (CustomScanState *node, List *ancestors, ExplainState *es); </programlisting> - Output additional information for <command>EXPLAIN</> of a custom-scan + Output additional information for <command>EXPLAIN</command> of a custom-scan plan node. This callback is optional. Common data stored in the - <structname>ScanState</>, such as the target list and scan relation, will + <structname>ScanState</structname>, such as the target list and scan relation, will be shown even without this callback, but the callback allows the display of additional, private state. 
</para> diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index 512756df4af..6a15f9030c9 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -79,7 +79,7 @@ <row> <entry><type>bytea</type></entry> <entry></entry> - <entry>binary data (<quote>byte array</>)</entry> + <entry>binary data (<quote>byte array</quote>)</entry> </row> <row> @@ -354,45 +354,45 @@ <tbody> <row> - <entry><type>smallint</></entry> + <entry><type>smallint</type></entry> <entry>2 bytes</entry> <entry>small-range integer</entry> <entry>-32768 to +32767</entry> </row> <row> - <entry><type>integer</></entry> + <entry><type>integer</type></entry> <entry>4 bytes</entry> <entry>typical choice for integer</entry> <entry>-2147483648 to +2147483647</entry> </row> <row> - <entry><type>bigint</></entry> + <entry><type>bigint</type></entry> <entry>8 bytes</entry> <entry>large-range integer</entry> <entry>-9223372036854775808 to +9223372036854775807</entry> </row> <row> - <entry><type>decimal</></entry> + <entry><type>decimal</type></entry> <entry>variable</entry> <entry>user-specified precision, exact</entry> <entry>up to 131072 digits before the decimal point; up to 16383 digits after the decimal point</entry> </row> <row> - <entry><type>numeric</></entry> + <entry><type>numeric</type></entry> <entry>variable</entry> <entry>user-specified precision, exact</entry> <entry>up to 131072 digits before the decimal point; up to 16383 digits after the decimal point</entry> </row> <row> - <entry><type>real</></entry> + <entry><type>real</type></entry> <entry>4 bytes</entry> <entry>variable-precision, inexact</entry> <entry>6 decimal digits precision</entry> </row> <row> - <entry><type>double precision</></entry> + <entry><type>double precision</type></entry> <entry>8 bytes</entry> <entry>variable-precision, inexact</entry> <entry>15 decimal digits precision</entry> @@ -406,7 +406,7 @@ </row> <row> - <entry><type>serial</></entry> + <entry><type>serial</type></entry> 
<entry>4 bytes</entry> <entry>autoincrementing integer</entry> <entry>1 to 2147483647</entry> @@ -574,9 +574,9 @@ NUMERIC <para> Numeric values are physically stored without any extra leading or trailing zeroes. Thus, the declared precision and scale of a column - are maximums, not fixed allocations. (In this sense the <type>numeric</> - type is more akin to <type>varchar(<replaceable>n</>)</type> - than to <type>char(<replaceable>n</>)</type>.) The actual storage + are maximums, not fixed allocations. (In this sense the <type>numeric</type> + type is more akin to <type>varchar(<replaceable>n</replaceable>)</type> + than to <type>char(<replaceable>n</replaceable>)</type>.) The actual storage requirement is two bytes for each group of four decimal digits, plus three to eight bytes overhead. </para> @@ -593,22 +593,22 @@ NUMERIC <para> In addition to ordinary numeric values, the <type>numeric</type> - type allows the special value <literal>NaN</>, meaning - <quote>not-a-number</quote>. Any operation on <literal>NaN</> - yields another <literal>NaN</>. When writing this value + type allows the special value <literal>NaN</literal>, meaning + <quote>not-a-number</quote>. Any operation on <literal>NaN</literal> + yields another <literal>NaN</literal>. When writing this value as a constant in an SQL command, you must put quotes around it, - for example <literal>UPDATE table SET x = 'NaN'</>. On input, - the string <literal>NaN</> is recognized in a case-insensitive manner. + for example <literal>UPDATE table SET x = 'NaN'</literal>. On input, + the string <literal>NaN</literal> is recognized in a case-insensitive manner. </para> <note> <para> - In most implementations of the <quote>not-a-number</> concept, - <literal>NaN</> is not considered equal to any other numeric - value (including <literal>NaN</>). 
In order to allow - <type>numeric</> values to be sorted and used in tree-based - indexes, <productname>PostgreSQL</> treats <literal>NaN</> - values as equal, and greater than all non-<literal>NaN</> + In most implementations of the <quote>not-a-number</quote> concept, + <literal>NaN</literal> is not considered equal to any other numeric + value (including <literal>NaN</literal>). In order to allow + <type>numeric</type> values to be sorted and used in tree-based + indexes, <productname>PostgreSQL</productname> treats <literal>NaN</literal> + values as equal, and greater than all non-<literal>NaN</literal> values. </para> </note> @@ -756,18 +756,18 @@ FROM generate_series(-3.5, 3.5, 1) as x; floating-point arithmetic does not follow IEEE 754, these values will probably not work as expected.) When writing these values as constants in an SQL command, you must put quotes around them, - for example <literal>UPDATE table SET x = '-Infinity'</>. On input, + for example <literal>UPDATE table SET x = '-Infinity'</literal>. On input, these strings are recognized in a case-insensitive manner. </para> <note> <para> - IEEE754 specifies that <literal>NaN</> should not compare equal - to any other floating-point value (including <literal>NaN</>). + IEEE754 specifies that <literal>NaN</literal> should not compare equal + to any other floating-point value (including <literal>NaN</literal>). In order to allow floating-point values to be sorted and used - in tree-based indexes, <productname>PostgreSQL</> treats - <literal>NaN</> values as equal, and greater than all - non-<literal>NaN</> values. + in tree-based indexes, <productname>PostgreSQL</productname> treats + <literal>NaN</literal> values as equal, and greater than all + non-<literal>NaN</literal> values. </para> </note> @@ -776,7 +776,7 @@ FROM generate_series(-3.5, 3.5, 1) as x; notations <type>float</type> and <type>float(<replaceable>p</replaceable>)</type> for specifying inexact numeric types. 
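Both notes in the hunks above make the same point: so that `numeric` and floating-point columns can be sorted and used in tree-based indexes, PostgreSQL orders `NaN` as equal to itself and greater than every non-`NaN` value, unlike plain IEEE 754 comparison. A minimal Python sketch of that comparison rule (an illustration only, not the server's C implementation):

```python
import math
from functools import cmp_to_key

def pg_cmp(a: float, b: float) -> int:
    # Ordering rule from the docs: NaN equals NaN and sorts above
    # every non-NaN value (unlike plain IEEE 754 comparison).
    a_nan, b_nan = math.isnan(a), math.isnan(b)
    if a_nan and b_nan:
        return 0
    if a_nan:
        return 1
    if b_nan:
        return -1
    return (a > b) - (a < b)

values = [3.0, float("nan"), -1.5, float("inf"), float("nan"), 0.0]
ordered = sorted(values, key=cmp_to_key(pg_cmp))
# non-NaN values ascend first; the NaNs group together at the top end
```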
Here, <replaceable>p</replaceable> specifies - the minimum acceptable precision in <emphasis>binary</> digits. + the minimum acceptable precision in <emphasis>binary</emphasis> digits. <productname>PostgreSQL</productname> accepts <type>float(1)</type> to <type>float(24)</type> as selecting the <type>real</type> type, while @@ -870,12 +870,12 @@ ALTER SEQUENCE <replaceable class="parameter">tablename</replaceable>_<replaceab </programlisting> Thus, we have created an integer column and arranged for its default - values to be assigned from a sequence generator. A <literal>NOT NULL</> + values to be assigned from a sequence generator. A <literal>NOT NULL</literal> constraint is applied to ensure that a null value cannot be inserted. (In most cases you would also want to attach a - <literal>UNIQUE</> or <literal>PRIMARY KEY</> constraint to prevent + <literal>UNIQUE</literal> or <literal>PRIMARY KEY</literal> constraint to prevent duplicate values from being inserted by accident, but this is - not automatic.) Lastly, the sequence is marked as <quote>owned by</> + not automatic.) Lastly, the sequence is marked as <quote>owned by</quote> the column, so that it will be dropped if the column or table is dropped. </para> @@ -908,7 +908,7 @@ ALTER SEQUENCE <replaceable class="parameter">tablename</replaceable>_<replaceab names <type>bigserial</type> and <type>serial8</type> work the same way, except that they create a <type>bigint</type> column. <type>bigserial</type> should be used if you anticipate - the use of more than 2<superscript>31</> identifiers over the + the use of more than 2<superscript>31</superscript> identifiers over the lifetime of the table. The type names <type>smallserial</type> and <type>serial2</type> also work the same way, except that they create a <type>smallint</type> column. 
@@ -962,9 +962,9 @@ ALTER SEQUENCE <replaceable class="parameter">tablename</replaceable>_<replaceab <para> Since the output of this data type is locale-sensitive, it might not - work to load <type>money</> data into a database that has a different - setting of <varname>lc_monetary</>. To avoid problems, before - restoring a dump into a new database make sure <varname>lc_monetary</> has + work to load <type>money</type> data into a database that has a different + setting of <varname>lc_monetary</varname>. To avoid problems, before + restoring a dump into a new database make sure <varname>lc_monetary</varname> has the same or equivalent value as in the database that was dumped. </para> @@ -994,7 +994,7 @@ SELECT '52093.89'::money::numeric::float8; Division of a <type>money</type> value by an integer value is performed with truncation of the fractional part towards zero. To get a rounded result, divide by a floating-point value, or cast the <type>money</type> - value to <type>numeric</> before dividing and back to <type>money</type> + value to <type>numeric</type> before dividing and back to <type>money</type> afterwards. (The latter is preferable to avoid risking precision loss.) 
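The two `money` division behaviors just described can be sketched with Python's `decimal` module, representing a `money` value as integer cents. Both the cents representation and the half-even rounding on the cast back to `money` are assumptions of this sketch, not statements about the server's internals:

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN

def money_div_int(cents: int, divisor: int) -> int:
    # money / integer: the fractional part is truncated toward zero
    return int((Decimal(cents) / divisor).to_integral_value(rounding=ROUND_DOWN))

def money_div_via_numeric(cents: int, divisor: int) -> int:
    # money::numeric / integer, then cast back to money with rounding
    # (half-even rounding is an assumption for this sketch)
    return int((Decimal(cents) / divisor).to_integral_value(rounding=ROUND_HALF_EVEN))

# $0.07 / 2: truncation gives $0.03, rounding via numeric gives $0.04
```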
When a <type>money</type> value is divided by another <type>money</type> value, the result is <type>double precision</type> (i.e., a pure number, @@ -1047,11 +1047,11 @@ SELECT '52093.89'::money::numeric::float8; </thead> <tbody> <row> - <entry><type>character varying(<replaceable>n</>)</type>, <type>varchar(<replaceable>n</>)</type></entry> + <entry><type>character varying(<replaceable>n</replaceable>)</type>, <type>varchar(<replaceable>n</replaceable>)</type></entry> <entry>variable-length with limit</entry> </row> <row> - <entry><type>character(<replaceable>n</>)</type>, <type>char(<replaceable>n</>)</type></entry> + <entry><type>character(<replaceable>n</replaceable>)</type>, <type>char(<replaceable>n</replaceable>)</type></entry> <entry>fixed-length, blank padded</entry> </row> <row> @@ -1070,10 +1070,10 @@ SELECT '52093.89'::money::numeric::float8; <para> <acronym>SQL</acronym> defines two primary character types: - <type>character varying(<replaceable>n</>)</type> and - <type>character(<replaceable>n</>)</type>, where <replaceable>n</> + <type>character varying(<replaceable>n</replaceable>)</type> and + <type>character(<replaceable>n</replaceable>)</type>, where <replaceable>n</replaceable> is a positive integer. Both of these types can store strings up to - <replaceable>n</> characters (not bytes) in length. An attempt to store a + <replaceable>n</replaceable> characters (not bytes) in length. An attempt to store a longer string into a column of these types will result in an error, unless the excess characters are all spaces, in which case the string will be truncated to the maximum length. 
(This somewhat @@ -1087,22 +1087,22 @@ SELECT '52093.89'::money::numeric::float8; <para> If one explicitly casts a value to <type>character - varying(<replaceable>n</>)</type> or - <type>character(<replaceable>n</>)</type>, then an over-length - value will be truncated to <replaceable>n</> characters without + varying(<replaceable>n</replaceable>)</type> or + <type>character(<replaceable>n</replaceable>)</type>, then an over-length + value will be truncated to <replaceable>n</replaceable> characters without raising an error. (This too is required by the <acronym>SQL</acronym> standard.) </para> <para> - The notations <type>varchar(<replaceable>n</>)</type> and - <type>char(<replaceable>n</>)</type> are aliases for <type>character - varying(<replaceable>n</>)</type> and - <type>character(<replaceable>n</>)</type>, respectively. + The notations <type>varchar(<replaceable>n</replaceable>)</type> and + <type>char(<replaceable>n</replaceable>)</type> are aliases for <type>character + varying(<replaceable>n</replaceable>)</type> and + <type>character(<replaceable>n</replaceable>)</type>, respectively. <type>character</type> without length specifier is equivalent to <type>character(1)</type>. If <type>character varying</type> is used without length specifier, the type accepts strings of any size. The - latter is a <productname>PostgreSQL</> extension. + latter is a <productname>PostgreSQL</productname> extension. </para> <para> @@ -1115,19 +1115,19 @@ SELECT '52093.89'::money::numeric::float8; <para> Values of type <type>character</type> are physically padded - with spaces to the specified width <replaceable>n</>, and are + with spaces to the specified width <replaceable>n</replaceable>, and are stored and displayed that way. However, trailing spaces are treated as semantically insignificant and disregarded when comparing two values of type <type>character</type>. 
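The `character(n)` behavior described above, erroring on over-length input unless the excess is all spaces, blank-padding to width `n`, and comparing with trailing blanks disregarded, can be sketched as follows (a toy model of the documented semantics, not of the server's storage layer):

```python
def store_char(value: str, n: int) -> str:
    # Mimic character(n): reject over-length input unless the excess
    # characters are all spaces, then blank-pad to exactly n characters.
    if len(value) > n:
        if value[n:].strip(" ") != "":
            raise ValueError("value too long for type character(%d)" % n)
        value = value[:n]
    return value.ljust(n)

def char_equal(a: str, b: str) -> bool:
    # Trailing spaces are semantically insignificant when comparing
    # two character(n) values.
    return a.rstrip(" ") == b.rstrip(" ")

assert store_char("ok", 4) == "ok  "
assert char_equal(store_char("ok", 4), "ok")
```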
In collations where whitespace is significant, this behavior can produce unexpected results; for example <command>SELECT 'a '::CHAR(2) collate "C" < - E'a\n'::CHAR(2)</command> returns true, even though <literal>C</> + E'a\n'::CHAR(2)</command> returns true, even though <literal>C</literal> locale would consider a space to be greater than a newline. Trailing spaces are removed when converting a <type>character</type> value to one of the other string types. Note that trailing spaces - <emphasis>are</> semantically significant in + <emphasis>are</emphasis> semantically significant in <type>character varying</type> and <type>text</type> values, and - when using pattern matching, that is <literal>LIKE</> and + when using pattern matching, that is <literal>LIKE</literal> and regular expressions. </para> @@ -1140,7 +1140,7 @@ SELECT '52093.89'::money::numeric::float8; stored in background tables so that they do not interfere with rapid access to shorter column values. In any case, the longest possible character string that can be stored is about 1 GB. (The - maximum value that will be allowed for <replaceable>n</> in the data + maximum value that will be allowed for <replaceable>n</replaceable> in the data type declaration is less than that. It wouldn't be useful to change this because with multibyte character encodings the number of characters and bytes can be quite different. If you desire to @@ -1155,10 +1155,10 @@ SELECT '52093.89'::money::numeric::float8; apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. 
While - <type>character(<replaceable>n</>)</type> has performance + <type>character(<replaceable>n</replaceable>)</type> has performance advantages in some other database systems, there is no such advantage in <productname>PostgreSQL</productname>; in fact - <type>character(<replaceable>n</>)</type> is usually the slowest of + <type>character(<replaceable>n</replaceable>)</type> is usually the slowest of the three because of its additional storage costs. In most situations <type>text</type> or <type>character varying</type> should be used instead. @@ -1220,7 +1220,7 @@ SELECT b, char_length(b) FROM test2; in the internal system catalogs and is not intended for use by the general user. Its length is currently defined as 64 bytes (63 usable characters plus terminator) but should be referenced using the constant - <symbol>NAMEDATALEN</symbol> in <literal>C</> source code. + <symbol>NAMEDATALEN</symbol> in <literal>C</literal> source code. The length is set at compile time (and is therefore adjustable for special uses); the default maximum length might change in a future release. The type <type>"char"</type> @@ -1304,7 +1304,7 @@ SELECT b, char_length(b) FROM test2; Second, operations on binary strings process the actual bytes, whereas the processing of character strings depends on locale settings. In short, binary strings are appropriate for storing data that the - programmer thinks of as <quote>raw bytes</>, whereas character + programmer thinks of as <quote>raw bytes</quote>, whereas character strings are appropriate for storing text. </para> @@ -1328,10 +1328,10 @@ SELECT b, char_length(b) FROM test2; </para> <sect2> - <title><type>bytea</> Hex Format</title> + <title><type>bytea</type> Hex Format</title> <para> - The <quote>hex</> format encodes binary data as 2 hexadecimal digits + The <quote>hex</quote> format encodes binary data as 2 hexadecimal digits per byte, most significant nibble first. 
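A short sketch of the `bytea` hex format just described, using Python's built-in hex codec. The doubled backslash needed in a literal context, as in `E'\\xDEADBEEF'`, is a separate quoting concern not modeled here:

```python
def bytea_to_hex(data: bytes) -> str:
    # hex output format: \x followed by two lowercase hex digits per byte,
    # most significant nibble first
    return "\\x" + data.hex()

def bytea_from_hex(text: str) -> bytes:
    if not text.startswith("\\x"):
        raise ValueError("not in bytea hex format")
    return bytes.fromhex(text[2:])

assert bytea_to_hex(b"\xde\xad\xbe\xef") == "\\xdeadbeef"
assert bytea_from_hex("\\xDEADBEEF") == b"\xde\xad\xbe\xef"  # input accepts either case
```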
The entire string is preceded by the sequence <literal>\x</literal> (to distinguish it from the escape format). In some contexts, the initial backslash may @@ -1355,7 +1355,7 @@ SELECT E'\\xDEADBEEF'; </sect2> <sect2> - <title><type>bytea</> Escape Format</title> + <title><type>bytea</type> Escape Format</title> <para> The <quote>escape</quote> format is the traditional @@ -1390,7 +1390,7 @@ SELECT E'\\xDEADBEEF'; </para> <table id="datatype-binary-sqlesc"> - <title><type>bytea</> Literal Escaped Octets</title> + <title><type>bytea</type> Literal Escaped Octets</title> <tgroup cols="5"> <thead> <row> @@ -1430,7 +1430,7 @@ SELECT E'\\xDEADBEEF'; <row> <entry>0 to 31 and 127 to 255</entry> <entry><quote>non-printable</quote> octets</entry> - <entry><literal>E'\\<replaceable>xxx'</></literal> (octal value)</entry> + <entry><literal>E'\\<replaceable>xxx'</replaceable></literal> (octal value)</entry> <entry><literal>SELECT E'\\001'::bytea;</literal></entry> <entry><literal>\001</literal></entry> </row> @@ -1481,7 +1481,7 @@ SELECT E'\\xDEADBEEF'; </para> <table id="datatype-binary-resesc"> - <title><type>bytea</> Output Escaped Octets</title> + <title><type>bytea</type> Output Escaped Octets</title> <tgroup cols="5"> <thead> <row> @@ -1506,7 +1506,7 @@ SELECT E'\\xDEADBEEF'; <row> <entry>0 to 31 and 127 to 255</entry> <entry><quote>non-printable</quote> octets</entry> - <entry><literal>\<replaceable>xxx</></literal> (octal value)</entry> + <entry><literal>\<replaceable>xxx</replaceable></literal> (octal value)</entry> <entry><literal>SELECT E'\\001'::bytea;</literal></entry> <entry><literal>\001</literal></entry> </row> @@ -1524,7 +1524,7 @@ SELECT E'\\xDEADBEEF'; </table> <para> - Depending on the front end to <productname>PostgreSQL</> you use, + Depending on the front end to <productname>PostgreSQL</productname> you use, you might have additional work to do in terms of escaping and unescaping <type>bytea</type> strings. 
For example, you might also have to escape line feeds and carriage returns if your interface @@ -1685,7 +1685,7 @@ MINUTE TO SECOND </literallayout> Note that if both <replaceable>fields</replaceable> and <replaceable>p</replaceable> are specified, the - <replaceable>fields</replaceable> must include <literal>SECOND</>, + <replaceable>fields</replaceable> must include <literal>SECOND</literal>, since the precision applies only to the seconds. </para> @@ -1717,9 +1717,9 @@ MINUTE TO SECOND For some formats, ordering of day, month, and year in date input is ambiguous and there is support for specifying the expected ordering of these fields. Set the <xref linkend="guc-datestyle"> parameter - to <literal>MDY</> to select month-day-year interpretation, - <literal>DMY</> to select day-month-year interpretation, or - <literal>YMD</> to select year-month-day interpretation. + to <literal>MDY</literal> to select month-day-year interpretation, + <literal>DMY</literal> to select day-month-year interpretation, or + <literal>YMD</literal> to select year-month-day interpretation. 
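The `MDY`/`DMY`/`YMD` field-ordering choice can be illustrated with `strptime`. This models only the ordering of an all-numeric date such as `01/02/03`; PostgreSQL's real date parser accepts many more formats:

```python
from datetime import datetime

FIELD_ORDER = {"MDY": "%m/%d/%y", "DMY": "%d/%m/%y", "YMD": "%y/%m/%d"}

def interpret(text: str, datestyle: str) -> datetime:
    # models only the field ordering implied by the DateStyle setting
    return datetime.strptime(text, FIELD_ORDER[datestyle])

# the ambiguous '01/02/03' from the table of input examples:
assert interpret("01/02/03", "MDY") == datetime(2003, 1, 2)
assert interpret("01/02/03", "DMY") == datetime(2003, 2, 1)
assert interpret("01/02/03", "YMD") == datetime(2001, 2, 3)
```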
</para> <para> @@ -1784,19 +1784,19 @@ MINUTE TO SECOND </row> <row> <entry>1/8/1999</entry> - <entry>January 8 in <literal>MDY</> mode; - August 1 in <literal>DMY</> mode</entry> + <entry>January 8 in <literal>MDY</literal> mode; + August 1 in <literal>DMY</literal> mode</entry> </row> <row> <entry>1/18/1999</entry> - <entry>January 18 in <literal>MDY</> mode; + <entry>January 18 in <literal>MDY</literal> mode; rejected in other modes</entry> </row> <row> <entry>01/02/03</entry> - <entry>January 2, 2003 in <literal>MDY</> mode; - February 1, 2003 in <literal>DMY</> mode; - February 3, 2001 in <literal>YMD</> mode + <entry>January 2, 2003 in <literal>MDY</literal> mode; + February 1, 2003 in <literal>DMY</literal> mode; + February 3, 2001 in <literal>YMD</literal> mode </entry> </row> <row> @@ -1813,15 +1813,15 @@ MINUTE TO SECOND </row> <row> <entry>99-Jan-08</entry> - <entry>January 8 in <literal>YMD</> mode, else error</entry> + <entry>January 8 in <literal>YMD</literal> mode, else error</entry> </row> <row> <entry>08-Jan-99</entry> - <entry>January 8, except error in <literal>YMD</> mode</entry> + <entry>January 8, except error in <literal>YMD</literal> mode</entry> </row> <row> <entry>Jan-08-99</entry> - <entry>January 8, except error in <literal>YMD</> mode</entry> + <entry>January 8, except error in <literal>YMD</literal> mode</entry> </row> <row> <entry>19990108</entry> @@ -2070,20 +2070,20 @@ January 8 04:05:06 1999 PST For <type>timestamp with time zone</type>, the internally stored value is always in UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, - <acronym>GMT</>). An input value that has an explicit + <acronym>GMT</acronym>). An input value that has an explicit time zone specified is converted to UTC using the appropriate offset for that time zone. 
If no time zone is stated in the input string, then it is assumed to be in the time zone indicated by the system's <xref linkend="guc-timezone"> parameter, and is converted to UTC using the - offset for the <varname>timezone</> zone. + offset for the <varname>timezone</varname> zone. </para> <para> When a <type>timestamp with time zone</type> value is output, it is always converted from UTC to the - current <varname>timezone</> zone, and displayed as local time in that + current <varname>timezone</varname> zone, and displayed as local time in that zone. To see the time in another time zone, either change - <varname>timezone</> or use the <literal>AT TIME ZONE</> construct + <varname>timezone</varname> or use the <literal>AT TIME ZONE</literal> construct (see <xref linkend="functions-datetime-zoneconvert">). </para> @@ -2091,8 +2091,8 @@ January 8 04:05:06 1999 PST Conversions between <type>timestamp without time zone</type> and <type>timestamp with time zone</type> normally assume that the <type>timestamp without time zone</type> value should be taken or given - as <varname>timezone</> local time. A different time zone can - be specified for the conversion using <literal>AT TIME ZONE</>. + as <varname>timezone</varname> local time. A different time zone can + be specified for the conversion using <literal>AT TIME ZONE</literal>. </para> </sect3> @@ -2117,7 +2117,7 @@ January 8 04:05:06 1999 PST are specially represented inside the system and will be displayed unchanged; but the others are simply notational shorthands that will be converted to ordinary date/time values when read. - (In particular, <literal>now</> and related strings are converted + (In particular, <literal>now</literal> and related strings are converted to a specific time value as soon as they are read.) All of these values need to be enclosed in single quotes when used as constants in SQL commands. @@ -2187,7 +2187,7 @@ January 8 04:05:06 1999 PST <literal>LOCALTIMESTAMP</literal>. 
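The store-in-UTC, display-in-session-zone behavior of `timestamp with time zone` described above can be sketched with aware Python datetimes. A fixed UTC-8 offset stands in for a real zone here; real zone names carry daylight-savings rules from the IANA database:

```python
from datetime import datetime, timedelta, timezone

# the stored value is always UTC
stored = datetime(1999, 1, 8, 12, 5, 6, tzinfo=timezone.utc)

# on output the value is converted to the session's TimeZone setting;
# a fixed -8 offset stands in for a real zone in this sketch
pst = timezone(timedelta(hours=-8), "PST")
local = stored.astimezone(pst)

assert local.hour == 4
assert local.utcoffset() == timedelta(hours=-8)
assert local == stored  # same instant, just displayed in a different zone
```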
The latter four accept an optional subsecond precision specification. (See <xref linkend="functions-datetime-current">.) Note that these are - SQL functions and are <emphasis>not</> recognized in data input strings. + SQL functions and are <emphasis>not</emphasis> recognized in data input strings. </para> </sect3> @@ -2211,8 +2211,8 @@ January 8 04:05:06 1999 PST <para> The output format of the date/time types can be set to one of the four styles ISO 8601, - <acronym>SQL</acronym> (Ingres), traditional <productname>POSTGRES</> - (Unix <application>date</> format), or + <acronym>SQL</acronym> (Ingres), traditional <productname>POSTGRES</productname> + (Unix <application>date</application> format), or German. The default is the <acronym>ISO</acronym> format. (The <acronym>SQL</acronym> standard requires the use of the ISO 8601 @@ -2222,7 +2222,7 @@ January 8 04:05:06 1999 PST output style. The output of the <type>date</type> and <type>time</type> types is generally only the date or time part in accordance with the given examples. However, the - <productname>POSTGRES</> style outputs date-only values in + <productname>POSTGRES</productname> style outputs date-only values in <acronym>ISO</acronym> format. </para> @@ -2263,9 +2263,9 @@ January 8 04:05:06 1999 PST <note> <para> - ISO 8601 specifies the use of uppercase letter <literal>T</> to separate - the date and time. <productname>PostgreSQL</> accepts that format on - input, but on output it uses a space rather than <literal>T</>, as shown + ISO 8601 specifies the use of uppercase letter <literal>T</literal> to separate + the date and time. <productname>PostgreSQL</productname> accepts that format on + input, but on output it uses a space rather than <literal>T</literal>, as shown above. This is for readability and for consistency with RFC 3339 as well as some other database systems. 
</para> @@ -2292,17 +2292,17 @@ January 8 04:05:06 1999 PST </thead> <tbody> <row> - <entry><literal>SQL, DMY</></entry> + <entry><literal>SQL, DMY</literal></entry> <entry><replaceable>day</replaceable>/<replaceable>month</replaceable>/<replaceable>year</replaceable></entry> <entry><literal>17/12/1997 15:37:16.00 CET</literal></entry> </row> <row> - <entry><literal>SQL, MDY</></entry> + <entry><literal>SQL, MDY</literal></entry> <entry><replaceable>month</replaceable>/<replaceable>day</replaceable>/<replaceable>year</replaceable></entry> <entry><literal>12/17/1997 07:37:16.00 PST</literal></entry> </row> <row> - <entry><literal>Postgres, DMY</></entry> + <entry><literal>Postgres, DMY</literal></entry> <entry><replaceable>day</replaceable>/<replaceable>month</replaceable>/<replaceable>year</replaceable></entry> <entry><literal>Wed 17 Dec 07:37:16 1997 PST</literal></entry> </row> @@ -2368,7 +2368,7 @@ January 8 04:05:06 1999 PST <listitem> <para> The default time zone is specified as a constant numeric offset - from <acronym>UTC</>. It is therefore impossible to adapt to + from <acronym>UTC</acronym>. It is therefore impossible to adapt to daylight-saving time when doing date/time arithmetic across <acronym>DST</acronym> boundaries. </para> @@ -2380,7 +2380,7 @@ January 8 04:05:06 1999 PST <para> To address these difficulties, we recommend using date/time types that contain both date and time when using time zones. We - do <emphasis>not</> recommend using the type <type>time with + do <emphasis>not</emphasis> recommend using the type <type>time with time zone</type> (though it is supported by <productname>PostgreSQL</productname> for legacy applications and for compliance with the <acronym>SQL</acronym> standard). @@ -2401,7 +2401,7 @@ January 8 04:05:06 1999 PST <itemizedlist> <listitem> <para> - A full time zone name, for example <literal>America/New_York</>. + A full time zone name, for example <literal>America/New_York</literal>. 
The recognized time zone names are listed in the <literal>pg_timezone_names</literal> view (see <xref linkend="view-pg-timezone-names">). @@ -2412,16 +2412,16 @@ January 8 04:05:06 1999 PST </listitem> <listitem> <para> - A time zone abbreviation, for example <literal>PST</>. Such a + A time zone abbreviation, for example <literal>PST</literal>. Such a specification merely defines a particular offset from UTC, in contrast to full time zone names which can imply a set of daylight savings transition-date rules as well. The recognized abbreviations - are listed in the <literal>pg_timezone_abbrevs</> view (see <xref + are listed in the <literal>pg_timezone_abbrevs</literal> view (see <xref linkend="view-pg-timezone-abbrevs">). You cannot set the configuration parameters <xref linkend="guc-timezone"> or <xref linkend="guc-log-timezone"> to a time zone abbreviation, but you can use abbreviations in - date/time input values and with the <literal>AT TIME ZONE</> + date/time input values and with the <literal>AT TIME ZONE</literal> operator. 
</para> </listitem> @@ -2429,25 +2429,25 @@ January 8 04:05:06 1999 PST <para> In addition to the timezone names and abbreviations, <productname>PostgreSQL</productname> will accept POSIX-style time zone - specifications of the form <replaceable>STD</><replaceable>offset</> or - <replaceable>STD</><replaceable>offset</><replaceable>DST</>, where - <replaceable>STD</> is a zone abbreviation, <replaceable>offset</> is a - numeric offset in hours west from UTC, and <replaceable>DST</> is an + specifications of the form <replaceable>STD</replaceable><replaceable>offset</replaceable> or + <replaceable>STD</replaceable><replaceable>offset</replaceable><replaceable>DST</replaceable>, where + <replaceable>STD</replaceable> is a zone abbreviation, <replaceable>offset</replaceable> is a + numeric offset in hours west from UTC, and <replaceable>DST</replaceable> is an optional daylight-savings zone abbreviation, assumed to stand for one - hour ahead of the given offset. For example, if <literal>EST5EDT</> + hour ahead of the given offset. For example, if <literal>EST5EDT</literal> were not already a recognized zone name, it would be accepted and would be functionally equivalent to United States East Coast time. In this syntax, a zone abbreviation can be a string of letters, or an - arbitrary string surrounded by angle brackets (<literal><></>). + arbitrary string surrounded by angle brackets (<literal><></literal>). When a daylight-savings zone abbreviation is present, it is assumed to be used according to the same daylight-savings transition rules used in the - IANA time zone database's <filename>posixrules</> entry. + IANA time zone database's <filename>posixrules</filename> entry. In a standard <productname>PostgreSQL</productname> installation, - <filename>posixrules</> is the same as <literal>US/Eastern</>, so + <filename>posixrules</filename> is the same as <literal>US/Eastern</literal>, so that POSIX-style time zone specifications follow USA daylight-savings rules. 
If needed, you can adjust this behavior by replacing the - <filename>posixrules</> file. + <filename>posixrules</filename> file. </para> </listitem> </itemizedlist> @@ -2456,10 +2456,10 @@ January 8 04:05:06 1999 PST and full names: abbreviations represent a specific offset from UTC, whereas many of the full names imply a local daylight-savings time rule, and so have two possible UTC offsets. As an example, - <literal>2014-06-04 12:00 America/New_York</> represents noon local + <literal>2014-06-04 12:00 America/New_York</literal> represents noon local time in New York, which for this particular date was Eastern Daylight - Time (UTC-4). So <literal>2014-06-04 12:00 EDT</> specifies that - same time instant. But <literal>2014-06-04 12:00 EST</> specifies + Time (UTC-4). So <literal>2014-06-04 12:00 EDT</literal> specifies that + same time instant. But <literal>2014-06-04 12:00 EST</literal> specifies noon Eastern Standard Time (UTC-5), regardless of whether daylight savings was nominally in effect on that date. </para> @@ -2467,10 +2467,10 @@ January 8 04:05:06 1999 PST <para> To complicate matters, some jurisdictions have used the same timezone abbreviation to mean different UTC offsets at different times; for - example, in Moscow <literal>MSK</> has meant UTC+3 in some years and - UTC+4 in others. <application>PostgreSQL</> interprets such + example, in Moscow <literal>MSK</literal> has meant UTC+3 in some years and + UTC+4 in others. <application>PostgreSQL</application> interprets such abbreviations according to whatever they meant (or had most recently - meant) on the specified date; but, as with the <literal>EST</> example + meant) on the specified date; but, as with the <literal>EST</literal> example above, this is not necessarily the same as local civil time on that date. 
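A toy parser for the `STDoffset` / `STDoffsetDST` form described above, showing the sign convention (positive offsets count hours west of Greenwich) and the one-hour-ahead assumption for the daylight abbreviation. It handles only whole-hour offsets and plain-letter abbreviations; the angle-bracket form and minute/second offsets allowed by POSIX are omitted:

```python
import re

SPEC = re.compile(r"^(?P<std>[A-Za-z]+)(?P<offset>[+-]?\d+)(?P<dst>[A-Za-z]+)?$")

def parse_posix_tz(spec: str) -> dict:
    m = SPEC.match(spec)
    if m is None:
        raise ValueError("not a POSIX-style zone spec: %r" % spec)
    hours_west = int(m.group("offset"))          # positive means west of UTC
    out = {"std": m.group("std"), "utc_offset": -hours_west}
    if m.group("dst"):
        # the daylight abbreviation is assumed one hour ahead of the base offset
        out["dst"] = m.group("dst")
        out["dst_utc_offset"] = -hours_west + 1
    return out

# EST5EDT: standard time at UTC-5, daylight time at UTC-4
```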
</para> @@ -2478,18 +2478,18 @@ January 8 04:05:06 1999 PST One should be wary that the POSIX-style time zone feature can lead to silently accepting bogus input, since there is no check on the reasonableness of the zone abbreviations. For example, <literal>SET - TIMEZONE TO FOOBAR0</> will work, leaving the system effectively using + TIMEZONE TO FOOBAR0</literal> will work, leaving the system effectively using a rather peculiar abbreviation for UTC. Another issue to keep in mind is that in POSIX time zone names, - positive offsets are used for locations <emphasis>west</> of Greenwich. + positive offsets are used for locations <emphasis>west</emphasis> of Greenwich. Everywhere else, <productname>PostgreSQL</productname> follows the - ISO-8601 convention that positive timezone offsets are <emphasis>east</> + ISO-8601 convention that positive timezone offsets are <emphasis>east</emphasis> of Greenwich. </para> <para> In all cases, timezone names and abbreviations are recognized - case-insensitively. (This is a change from <productname>PostgreSQL</> + case-insensitively. (This is a change from <productname>PostgreSQL</productname> versions prior to 8.2, which were case-sensitive in some contexts but not others.) </para> @@ -2497,14 +2497,14 @@ January 8 04:05:06 1999 PST <para> Neither timezone names nor abbreviations are hard-wired into the server; they are obtained from configuration files stored under - <filename>.../share/timezone/</> and <filename>.../share/timezonesets/</> + <filename>.../share/timezone/</filename> and <filename>.../share/timezonesets/</filename> of the installation directory (see <xref linkend="datetime-config-files">). </para> <para> The <xref linkend="guc-timezone"> configuration parameter can - be set in the file <filename>postgresql.conf</>, or in any of the + be set in the file <filename>postgresql.conf</filename>, or in any of the other standard ways described in <xref linkend="runtime-config">. 
There are also some special ways to set it: @@ -2513,7 +2513,7 @@ January 8 04:05:06 1999 PST <para> The <acronym>SQL</acronym> command <command>SET TIME ZONE</command> sets the time zone for the session. This is an alternative spelling - of <command>SET TIMEZONE TO</> with a more SQL-spec-compatible syntax. + of <command>SET TIMEZONE TO</command> with a more SQL-spec-compatible syntax. </para> </listitem> @@ -2541,52 +2541,52 @@ January 8 04:05:06 1999 PST verbose syntax: <synopsis> -<optional>@</> <replaceable>quantity</> <replaceable>unit</> <optional><replaceable>quantity</> <replaceable>unit</>...</> <optional><replaceable>direction</></optional> +<optional>@</optional> <replaceable>quantity</replaceable> <replaceable>unit</replaceable> <optional><replaceable>quantity</replaceable> <replaceable>unit</replaceable>...</optional> <optional><replaceable>direction</replaceable></optional> </synopsis> - where <replaceable>quantity</> is a number (possibly signed); - <replaceable>unit</> is <literal>microsecond</literal>, + where <replaceable>quantity</replaceable> is a number (possibly signed); + <replaceable>unit</replaceable> is <literal>microsecond</literal>, <literal>millisecond</literal>, <literal>second</literal>, <literal>minute</literal>, <literal>hour</literal>, <literal>day</literal>, <literal>week</literal>, <literal>month</literal>, <literal>year</literal>, <literal>decade</literal>, <literal>century</literal>, <literal>millennium</literal>, or abbreviations or plurals of these units; - <replaceable>direction</> can be <literal>ago</literal> or - empty. The at sign (<literal>@</>) is optional noise. The amounts + <replaceable>direction</replaceable> can be <literal>ago</literal> or + empty. The at sign (<literal>@</literal>) is optional noise. The amounts of the different units are implicitly added with appropriate sign accounting. <literal>ago</literal> negates all the fields. 
This syntax is also used for interval output, if <xref linkend="guc-intervalstyle"> is set to - <literal>postgres_verbose</>. + <literal>postgres_verbose</literal>. </para> <para> Quantities of days, hours, minutes, and seconds can be specified without - explicit unit markings. For example, <literal>'1 12:59:10'</> is read - the same as <literal>'1 day 12 hours 59 min 10 sec'</>. Also, + explicit unit markings. For example, <literal>'1 12:59:10'</literal> is read + the same as <literal>'1 day 12 hours 59 min 10 sec'</literal>. Also, a combination of years and months can be specified with a dash; - for example <literal>'200-10'</> is read the same as <literal>'200 years - 10 months'</>. (These shorter forms are in fact the only ones allowed + for example <literal>'200-10'</literal> is read the same as <literal>'200 years + 10 months'</literal>. (These shorter forms are in fact the only ones allowed by the <acronym>SQL</acronym> standard, and are used for output when - <varname>IntervalStyle</> is set to <literal>sql_standard</literal>.) + <varname>IntervalStyle</varname> is set to <literal>sql_standard</literal>.) </para> <para> Interval values can also be written as ISO 8601 time intervals, using - either the <quote>format with designators</> of the standard's section - 4.4.3.2 or the <quote>alternative format</> of section 4.4.3.3. The + either the <quote>format with designators</quote> of the standard's section + 4.4.3.2 or the <quote>alternative format</quote> of section 4.4.3.3. 
The format with designators looks like this: <synopsis> -P <replaceable>quantity</> <replaceable>unit</> <optional> <replaceable>quantity</> <replaceable>unit</> ...</optional> <optional> T <optional> <replaceable>quantity</> <replaceable>unit</> ...</optional></optional> +P <replaceable>quantity</replaceable> <replaceable>unit</replaceable> <optional> <replaceable>quantity</replaceable> <replaceable>unit</replaceable> ...</optional> <optional> T <optional> <replaceable>quantity</replaceable> <replaceable>unit</replaceable> ...</optional></optional> </synopsis> - The string must start with a <literal>P</>, and may include a - <literal>T</> that introduces the time-of-day units. The + The string must start with a <literal>P</literal>, and may include a + <literal>T</literal> that introduces the time-of-day units. The available unit abbreviations are given in <xref linkend="datatype-interval-iso8601-units">. Units may be omitted, and may be specified in any order, but units smaller than - a day must appear after <literal>T</>. In particular, the meaning of - <literal>M</> depends on whether it is before or after - <literal>T</>. + a day must appear after <literal>T</literal>. In particular, the meaning of + <literal>M</literal> depends on whether it is before or after + <literal>T</literal>. 
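(Editor's aside: the documented ambiguity of `M` — months before `T`, minutes after — is easy to demonstrate. A sketch assuming default output settings:)

```sql
SELECT 'P1M'::interval;             -- 1 mon     (M before T: months)
SELECT 'PT1M'::interval;            -- 00:01:00  (M after T: minutes)
SELECT 'P1Y2M3DT4H5M6S'::interval;  -- 1 year 2 mons 3 days 04:05:06
```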
</para> <table id="datatype-interval-iso8601-units"> @@ -2634,51 +2634,51 @@ P <replaceable>quantity</> <replaceable>unit</> <optional> <replaceable>quantity <para> In the alternative format: <synopsis> -P <optional> <replaceable>years</>-<replaceable>months</>-<replaceable>days</> </optional> <optional> T <replaceable>hours</>:<replaceable>minutes</>:<replaceable>seconds</> </optional> +P <optional> <replaceable>years</replaceable>-<replaceable>months</replaceable>-<replaceable>days</replaceable> </optional> <optional> T <replaceable>hours</replaceable>:<replaceable>minutes</replaceable>:<replaceable>seconds</replaceable> </optional> </synopsis> the string must begin with <literal>P</literal>, and a - <literal>T</> separates the date and time parts of the interval. + <literal>T</literal> separates the date and time parts of the interval. The values are given as numbers similar to ISO 8601 dates. </para> <para> - When writing an interval constant with a <replaceable>fields</> + When writing an interval constant with a <replaceable>fields</replaceable> specification, or when assigning a string to an interval column that was - defined with a <replaceable>fields</> specification, the interpretation of - unmarked quantities depends on the <replaceable>fields</>. For - example <literal>INTERVAL '1' YEAR</> is read as 1 year, whereas - <literal>INTERVAL '1'</> means 1 second. Also, field values - <quote>to the right</> of the least significant field allowed by the - <replaceable>fields</> specification are silently discarded. For - example, writing <literal>INTERVAL '1 day 2:03:04' HOUR TO MINUTE</> + defined with a <replaceable>fields</replaceable> specification, the interpretation of + unmarked quantities depends on the <replaceable>fields</replaceable>. For + example <literal>INTERVAL '1' YEAR</literal> is read as 1 year, whereas + <literal>INTERVAL '1'</literal> means 1 second. 
Also, field values + <quote>to the right</quote> of the least significant field allowed by the + <replaceable>fields</replaceable> specification are silently discarded. For + example, writing <literal>INTERVAL '1 day 2:03:04' HOUR TO MINUTE</literal> results in dropping the seconds field, but not the day field. </para> <para> - According to the <acronym>SQL</> standard all fields of an interval + According to the <acronym>SQL</acronym> standard all fields of an interval value must have the same sign, so a leading negative sign applies to all fields; for example the negative sign in the interval literal - <literal>'-1 2:03:04'</> applies to both the days and hour/minute/second - parts. <productname>PostgreSQL</> allows the fields to have different + <literal>'-1 2:03:04'</literal> applies to both the days and hour/minute/second + parts. <productname>PostgreSQL</productname> allows the fields to have different signs, and traditionally treats each field in the textual representation as independently signed, so that the hour/minute/second part is - considered positive in this example. If <varname>IntervalStyle</> is + considered positive in this example. If <varname>IntervalStyle</varname> is set to <literal>sql_standard</literal> then a leading sign is considered to apply to all fields (but only if no additional signs appear). - Otherwise the traditional <productname>PostgreSQL</> interpretation is + Otherwise the traditional <productname>PostgreSQL</productname> interpretation is used. To avoid ambiguity, it's recommended to attach an explicit sign to each field if any field is negative. </para> <para> - Internally <type>interval</> values are stored as months, days, + Internally <type>interval</type> values are stored as months, days, and seconds. This is done because the number of days in a month varies, and a day can have 23 or 25 hours if a daylight savings time adjustment is involved. 
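(Editor's aside: the sign-handling difference between the traditional and `sql_standard` interpretations described above can be checked interactively. A sketch; the exact `sql_standard` output form depends on the value meeting the standard's restrictions.)

```sql
SET intervalstyle = 'postgres';
SELECT '-1 2:03:04'::interval;  -- -1 days +02:03:04  (time part stays positive)

SET intervalstyle = 'sql_standard';
SELECT '-1 2:03:04'::interval;  -- leading sign now applies to all fields on input
```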
The months and days fields are integers while the seconds field can store fractions. Because intervals are - usually created from constant strings or <type>timestamp</> subtraction, + usually created from constant strings or <type>timestamp</type> subtraction, this storage method works well in most cases. Functions - <function>justify_days</> and <function>justify_hours</> are + <function>justify_days</function> and <function>justify_hours</function> are available for adjusting days and hours that overflow their normal ranges. </para> @@ -2686,18 +2686,18 @@ P <optional> <replaceable>years</>-<replaceable>months</>-<replaceable>days</> < <para> In the verbose input format, and in some fields of the more compact input formats, field values can have fractional parts; for example - <literal>'1.5 week'</> or <literal>'01:02:03.45'</>. Such input is + <literal>'1.5 week'</literal> or <literal>'01:02:03.45'</literal>. Such input is converted to the appropriate number of months, days, and seconds for storage. When this would result in a fractional number of months or days, the fraction is added to the lower-order fields using the conversion factors 1 month = 30 days and 1 day = 24 hours. - For example, <literal>'1.5 month'</> becomes 1 month and 15 days. + For example, <literal>'1.5 month'</literal> becomes 1 month and 15 days. Only seconds will ever be shown as fractional on output. </para> <para> <xref linkend="datatype-interval-input-examples"> shows some examples - of valid <type>interval</> input. + of valid <type>interval</type> input. 
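(Editor's aside: the 1 month = 30 days and 1 day = 24 hours conversion factors, and the justify functions mentioned above, sketched in psql per the documented behavior:)

```sql
SELECT '1.5 months'::interval;               -- 1 mon 15 days
SELECT justify_hours('27 hours'::interval);  -- 1 day 03:00:00
SELECT justify_days('35 days'::interval);    -- 1 mon 5 days
```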
</para> <table id="datatype-interval-input-examples"> @@ -2724,11 +2724,11 @@ P <optional> <replaceable>years</>-<replaceable>months</>-<replaceable>days</> < </row> <row> <entry>P1Y2M3DT4H5M6S</entry> - <entry>ISO 8601 <quote>format with designators</>: same meaning as above</entry> + <entry>ISO 8601 <quote>format with designators</quote>: same meaning as above</entry> </row> <row> <entry>P0001-02-03T04:05:06</entry> - <entry>ISO 8601 <quote>alternative format</>: same meaning as above</entry> + <entry>ISO 8601 <quote>alternative format</quote>: same meaning as above</entry> </row> </tbody> </tgroup> @@ -2747,16 +2747,16 @@ P <optional> <replaceable>years</>-<replaceable>months</>-<replaceable>days</> < <para> The output format of the interval type can be set to one of the - four styles <literal>sql_standard</>, <literal>postgres</>, - <literal>postgres_verbose</>, or <literal>iso_8601</>, + four styles <literal>sql_standard</literal>, <literal>postgres</literal>, + <literal>postgres_verbose</literal>, or <literal>iso_8601</literal>, using the command <literal>SET intervalstyle</literal>. - The default is the <literal>postgres</> format. + The default is the <literal>postgres</literal> format. <xref linkend="interval-style-output-table"> shows examples of each output style. </para> <para> - The <literal>sql_standard</> style produces output that conforms to + The <literal>sql_standard</literal> style produces output that conforms to the SQL standard's specification for interval literal strings, if the interval value meets the standard's restrictions (either year-month only or day-time only, with no mixing of positive @@ -2766,20 +2766,20 @@ P <optional> <replaceable>years</>-<replaceable>months</>-<replaceable>days</> < </para> <para> - The output of the <literal>postgres</> style matches the output of - <productname>PostgreSQL</> releases prior to 8.4 when the - <xref linkend="guc-datestyle"> parameter was set to <literal>ISO</>. 
+ The output of the <literal>postgres</literal> style matches the output of + <productname>PostgreSQL</productname> releases prior to 8.4 when the + <xref linkend="guc-datestyle"> parameter was set to <literal>ISO</literal>. </para> <para> - The output of the <literal>postgres_verbose</> style matches the output of - <productname>PostgreSQL</> releases prior to 8.4 when the - <varname>DateStyle</> parameter was set to non-<literal>ISO</> output. + The output of the <literal>postgres_verbose</literal> style matches the output of + <productname>PostgreSQL</productname> releases prior to 8.4 when the + <varname>DateStyle</varname> parameter was set to non-<literal>ISO</literal> output. </para> <para> - The output of the <literal>iso_8601</> style matches the <quote>format - with designators</> described in section 4.4.3.2 of the + The output of the <literal>iso_8601</literal> style matches the <quote>format + with designators</quote> described in section 4.4.3.2 of the ISO 8601 standard. </para> @@ -2796,25 +2796,25 @@ P <optional> <replaceable>years</>-<replaceable>months</>-<replaceable>days</> < </thead> <tbody> <row> - <entry><literal>sql_standard</></entry> + <entry><literal>sql_standard</literal></entry> <entry>1-2</entry> <entry>3 4:05:06</entry> <entry>-1-2 +3 -4:05:06</entry> </row> <row> - <entry><literal>postgres</></entry> + <entry><literal>postgres</literal></entry> <entry>1 year 2 mons</entry> <entry>3 days 04:05:06</entry> <entry>-1 year -2 mons +3 days -04:05:06</entry> </row> <row> - <entry><literal>postgres_verbose</></entry> + <entry><literal>postgres_verbose</literal></entry> <entry>@ 1 year 2 mons</entry> <entry>@ 3 days 4 hours 5 mins 6 secs</entry> <entry>@ 1 year 2 mons -3 days 4 hours 5 mins 6 secs ago</entry> </row> <row> - <entry><literal>iso_8601</></entry> + <entry><literal>iso_8601</literal></entry> <entry>P1Y2M</entry> <entry>P3DT4H5M6S</entry> <entry>P-1Y-2M3DT-4H-5M-6S</entry> @@ -3178,7 +3178,7 @@ SELECT person.name, 
holidays.num_weeks FROM person, holidays <replaceable>x</replaceable> , <replaceable>y</replaceable> </synopsis> - where <replaceable>x</> and <replaceable>y</> are the respective + where <replaceable>x</replaceable> and <replaceable>y</replaceable> are the respective coordinates, as floating-point numbers. </para> @@ -3196,8 +3196,8 @@ SELECT person.name, holidays.num_weeks FROM person, holidays <para> Lines are represented by the linear - equation <replaceable>A</>x + <replaceable>B</>y + <replaceable>C</> = 0, - where <replaceable>A</> and <replaceable>B</> are not both zero. Values + equation <replaceable>A</replaceable>x + <replaceable>B</replaceable>y + <replaceable>C</replaceable> = 0, + where <replaceable>A</replaceable> and <replaceable>B</replaceable> are not both zero. Values of type <type>line</type> are input and output in the following form: <synopsis> { <replaceable>A</replaceable>, <replaceable>B</replaceable>, <replaceable>C</replaceable> } @@ -3324,8 +3324,8 @@ SELECT person.name, holidays.num_weeks FROM person, holidays </synopsis> where the points are the end points of the line segments - comprising the path. Square brackets (<literal>[]</>) indicate - an open path, while parentheses (<literal>()</>) indicate a + comprising the path. Square brackets (<literal>[]</literal>) indicate + an open path, while parentheses (<literal>()</literal>) indicate a closed path. When the outermost parentheses are omitted, as in the third through fifth syntaxes, a closed path is assumed. </para> @@ -3388,7 +3388,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays </synopsis> where - <literal>(<replaceable>x</replaceable>,<replaceable>y</replaceable>)</> + <literal>(<replaceable>x</replaceable>,<replaceable>y</replaceable>)</literal> is the center point and <replaceable>r</replaceable> is the radius of the circle. 
</para> @@ -3409,7 +3409,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays </indexterm> <para> - <productname>PostgreSQL</> offers data types to store IPv4, IPv6, and MAC + <productname>PostgreSQL</productname> offers data types to store IPv4, IPv6, and MAC addresses, as shown in <xref linkend="datatype-net-types-table">. It is better to use these types instead of plain text types to store network addresses, because @@ -3503,7 +3503,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays </sect2> <sect2 id="datatype-cidr"> - <title><type>cidr</></title> + <title><type>cidr</type></title> <indexterm> <primary>cidr</primary> @@ -3514,11 +3514,11 @@ SELECT person.name, holidays.num_weeks FROM person, holidays Input and output formats follow Classless Internet Domain Routing conventions. The format for specifying networks is <replaceable - class="parameter">address/y</> where <replaceable - class="parameter">address</> is the network represented as an + class="parameter">address/y</replaceable> where <replaceable + class="parameter">address</replaceable> is the network represented as an IPv4 or IPv6 address, and <replaceable - class="parameter">y</> is the number of bits in the netmask. If - <replaceable class="parameter">y</> is omitted, it is calculated + class="parameter">y</replaceable> is the number of bits in the netmask. If + <replaceable class="parameter">y</replaceable> is omitted, it is calculated using assumptions from the older classful network numbering system, except it will be at least large enough to include all of the octets written in the input. 
It is an error to specify a network address @@ -3530,7 +3530,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays </para> <table id="datatype-net-cidr-table"> - <title><type>cidr</> Type Input Examples</title> + <title><type>cidr</type> Type Input Examples</title> <tgroup cols="3"> <thead> <row> @@ -3639,8 +3639,8 @@ SELECT person.name, holidays.num_weeks FROM person, holidays <tip> <para> If you do not like the output format for <type>inet</type> or - <type>cidr</type> values, try the functions <function>host</>, - <function>text</>, and <function>abbrev</>. + <type>cidr</type> values, try the functions <function>host</function>, + <function>text</function>, and <function>abbrev</function>. </para> </tip> </sect2> @@ -3658,24 +3658,24 @@ SELECT person.name, holidays.num_weeks FROM person, holidays </indexterm> <para> - The <type>macaddr</> type stores MAC addresses, known for example + The <type>macaddr</type> type stores MAC addresses, known for example from Ethernet card hardware addresses (although MAC addresses are used for other purposes as well). Input is accepted in the following formats: <simplelist> - <member><literal>'08:00:2b:01:02:03'</></member> - <member><literal>'08-00-2b-01-02-03'</></member> - <member><literal>'08002b:010203'</></member> - <member><literal>'08002b-010203'</></member> - <member><literal>'0800.2b01.0203'</></member> - <member><literal>'0800-2b01-0203'</></member> - <member><literal>'08002b010203'</></member> + <member><literal>'08:00:2b:01:02:03'</literal></member> + <member><literal>'08-00-2b-01-02-03'</literal></member> + <member><literal>'08002b:010203'</literal></member> + <member><literal>'08002b-010203'</literal></member> + <member><literal>'0800.2b01.0203'</literal></member> + <member><literal>'0800-2b01-0203'</literal></member> + <member><literal>'08002b010203'</literal></member> </simplelist> These examples would all specify the same address. 
Upper and lower case is accepted for the digits - <literal>a</> through <literal>f</>. Output is always in the + <literal>a</literal> through <literal>f</literal>. Output is always in the first of the forms shown. </para> @@ -3708,7 +3708,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays </indexterm> <para> - The <type>macaddr8</> type stores MAC addresses in EUI-64 + The <type>macaddr8</type> type stores MAC addresses in EUI-64 format, known for example from Ethernet card hardware addresses (although MAC addresses are used for other purposes as well). This type can accept both 6 and 8 byte length MAC addresses @@ -3718,31 +3718,31 @@ SELECT person.name, holidays.num_weeks FROM person, holidays Note that IPv6 uses a modified EUI-64 format where the 7th bit should be set to one after the conversion from EUI-48. The - function <function>macaddr8_set7bit</> is provided to make this + function <function>macaddr8_set7bit</function> is provided to make this change. Generally speaking, any input which is comprised of pairs of hex digits (on byte boundaries), optionally separated consistently by - one of <literal>':'</>, <literal>'-'</> or <literal>'.'</>, is + one of <literal>':'</literal>, <literal>'-'</literal> or <literal>'.'</literal>, is accepted. The number of hex digits must be either 16 (8 bytes) or 12 (6 bytes). Leading and trailing whitespace is ignored. 
The following are examples of input formats that are accepted: <simplelist> - <member><literal>'08:00:2b:01:02:03:04:05'</></member> - <member><literal>'08-00-2b-01-02-03-04-05'</></member> - <member><literal>'08002b:0102030405'</></member> - <member><literal>'08002b-0102030405'</></member> - <member><literal>'0800.2b01.0203.0405'</></member> - <member><literal>'0800-2b01-0203-0405'</></member> - <member><literal>'08002b01:02030405'</></member> - <member><literal>'08002b0102030405'</></member> + <member><literal>'08:00:2b:01:02:03:04:05'</literal></member> + <member><literal>'08-00-2b-01-02-03-04-05'</literal></member> + <member><literal>'08002b:0102030405'</literal></member> + <member><literal>'08002b-0102030405'</literal></member> + <member><literal>'0800.2b01.0203.0405'</literal></member> + <member><literal>'0800-2b01-0203-0405'</literal></member> + <member><literal>'08002b01:02030405'</literal></member> + <member><literal>'08002b0102030405'</literal></member> </simplelist> These examples would all specify the same address. Upper and lower case is accepted for the digits - <literal>a</> through <literal>f</>. Output is always in the + <literal>a</literal> through <literal>f</literal>. Output is always in the first of the forms shown. 
The last six input formats that are mentioned above are not part @@ -3750,7 +3750,7 @@ SELECT person.name, holidays.num_weeks FROM person, holidays To convert a traditional 48 bit MAC address in EUI-48 format to modified EUI-64 format to be included as the host portion of an - IPv6 address, use <function>macaddr8_set7bit</> as shown: + IPv6 address, use <function>macaddr8_set7bit</function> as shown: <programlisting> SELECT macaddr8_set7bit('08:00:2b:01:02:03'); @@ -3798,12 +3798,12 @@ SELECT macaddr8_set7bit('08:00:2b:01:02:03'); <note> <para> If one explicitly casts a bit-string value to - <type>bit(<replaceable>n</>)</type>, it will be truncated or - zero-padded on the right to be exactly <replaceable>n</> bits, + <type>bit(<replaceable>n</replaceable>)</type>, it will be truncated or + zero-padded on the right to be exactly <replaceable>n</replaceable> bits, without raising an error. Similarly, if one explicitly casts a bit-string value to - <type>bit varying(<replaceable>n</>)</type>, it will be truncated - on the right if it is more than <replaceable>n</> bits. + <type>bit varying(<replaceable>n</replaceable>)</type>, it will be truncated + on the right if it is more than <replaceable>n</replaceable> bits. </para> </note> @@ -3860,8 +3860,8 @@ SELECT * FROM test; <para> <productname>PostgreSQL</productname> provides two data types that are designed to support full text search, which is the activity of - searching through a collection of natural-language <firstterm>documents</> - to locate those that best match a <firstterm>query</>. + searching through a collection of natural-language <firstterm>documents</firstterm> + to locate those that best match a <firstterm>query</firstterm>. The <type>tsvector</type> type represents a document in a form optimized for text search; the <type>tsquery</type> type similarly represents a text query. 
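(Editor's aside: the truncate-or-zero-pad casting rule stated in the bit-string note above, sketched per the documented behavior:)

```sql
SELECT B'101'::bit(5);             -- 10100  (zero-padded on the right)
SELECT B'10101'::bit(3);           -- 101    (truncated, no error raised)
SELECT B'101101'::bit varying(4);  -- 1011   (truncated to n bits)
```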
@@ -3879,8 +3879,8 @@ SELECT * FROM test; <para> A <type>tsvector</type> value is a sorted list of distinct - <firstterm>lexemes</>, which are words that have been - <firstterm>normalized</> to merge different variants of the same word + <firstterm>lexemes</firstterm>, which are words that have been + <firstterm>normalized</firstterm> to merge different variants of the same word (see <xref linkend="textsearch"> for details). Sorting and duplicate-elimination are done automatically during input, as shown in this example: @@ -3913,7 +3913,7 @@ SELECT $$the lexeme 'Joe''s' contains a quote$$::tsvector; 'Joe''s' 'a' 'contains' 'lexeme' 'quote' 'the' </programlisting> - Optionally, integer <firstterm>positions</> + Optionally, integer <firstterm>positions</firstterm> can be attached to lexemes: <programlisting> @@ -3932,7 +3932,7 @@ SELECT 'a:1 fat:2 cat:3 sat:4 on:5 a:6 mat:7 and:8 ate:9 a:10 fat:11 rat:12'::ts <para> Lexemes that have positions can further be labeled with a - <firstterm>weight</>, which can be <literal>A</literal>, + <firstterm>weight</firstterm>, which can be <literal>A</literal>, <literal>B</literal>, <literal>C</literal>, or <literal>D</literal>. <literal>D</literal> is the default and hence is not shown on output: @@ -3965,7 +3965,7 @@ SELECT 'The Fat Rats'::tsvector; For most English-text-searching applications the above words would be considered non-normalized, but <type>tsvector</type> doesn't care. 
Raw document text should usually be passed through - <function>to_tsvector</> to normalize the words appropriately + <function>to_tsvector</function> to normalize the words appropriately for searching: <programlisting> @@ -3991,17 +3991,17 @@ SELECT to_tsvector('english', 'The Fat Rats'); A <type>tsquery</type> value stores lexemes that are to be searched for, and can combine them using the Boolean operators <literal>&</literal> (AND), <literal>|</literal> (OR), and - <literal>!</> (NOT), as well as the phrase search operator - <literal><-></> (FOLLOWED BY). There is also a variant - <literal><<replaceable>N</>></literal> of the FOLLOWED BY - operator, where <replaceable>N</> is an integer constant that + <literal>!</literal> (NOT), as well as the phrase search operator + <literal><-></literal> (FOLLOWED BY). There is also a variant + <literal><<replaceable>N</replaceable>></literal> of the FOLLOWED BY + operator, where <replaceable>N</replaceable> is an integer constant that specifies the distance between the two lexemes being searched - for. <literal><-></> is equivalent to <literal><1></>. + for. <literal><-></literal> is equivalent to <literal><1></literal>. </para> <para> Parentheses can be used to enforce grouping of these operators. - In the absence of parentheses, <literal>!</> (NOT) binds most tightly, + In the absence of parentheses, <literal>!</literal> (NOT) binds most tightly, <literal><-></literal> (FOLLOWED BY) next most tightly, then <literal>&</literal> (AND), with <literal>|</literal> (OR) binding the least tightly. @@ -4031,7 +4031,7 @@ SELECT 'fat & rat & ! 
cat'::tsquery; <para> Optionally, lexemes in a <type>tsquery</type> can be labeled with one or more weight letters, which restricts them to match only - <type>tsvector</> lexemes with one of those weights: + <type>tsvector</type> lexemes with one of those weights: <programlisting> SELECT 'fat:ab & cat'::tsquery; @@ -4042,7 +4042,7 @@ SELECT 'fat:ab & cat'::tsquery; </para> <para> - Also, lexemes in a <type>tsquery</type> can be labeled with <literal>*</> + Also, lexemes in a <type>tsquery</type> can be labeled with <literal>*</literal> to specify prefix matching: <programlisting> SELECT 'super:*'::tsquery; @@ -4050,15 +4050,15 @@ SELECT 'super:*'::tsquery; ----------- 'super':* </programlisting> - This query will match any word in a <type>tsvector</> that begins - with <quote>super</>. + This query will match any word in a <type>tsvector</type> that begins + with <quote>super</quote>. </para> <para> Quoting rules for lexemes are the same as described previously for - lexemes in <type>tsvector</>; and, as with <type>tsvector</>, + lexemes in <type>tsvector</type>; and, as with <type>tsvector</type>, any required normalization of words must be done before converting - to the <type>tsquery</> type. The <function>to_tsquery</> + to the <type>tsquery</type> type. 
The <function>to_tsquery</function> function is convenient for performing such normalization: <programlisting> @@ -4068,7 +4068,7 @@ SELECT to_tsquery('Fat:ab & Cats'); 'fat':AB & 'cat' </programlisting> - Note that <function>to_tsquery</> will process prefixes in the same way + Note that <function>to_tsquery</function> will process prefixes in the same way as other words, which means this comparison returns true: <programlisting> @@ -4077,14 +4077,14 @@ SELECT to_tsvector( 'postgraduate' ) @@ to_tsquery( 'postgres:*' ); ---------- t </programlisting> - because <literal>postgres</> gets stemmed to <literal>postgr</>: + because <literal>postgres</literal> gets stemmed to <literal>postgr</literal>: <programlisting> SELECT to_tsvector( 'postgraduate' ), to_tsquery( 'postgres:*' ); to_tsvector | to_tsquery ---------------+------------ 'postgradu':1 | 'postgr':* </programlisting> - which will match the stemmed form of <literal>postgraduate</>. + which will match the stemmed form of <literal>postgraduate</literal>. </para> </sect2> @@ -4150,7 +4150,7 @@ a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11 </sect1> <sect1 id="datatype-xml"> - <title><acronym>XML</> Type</title> + <title><acronym>XML</acronym> Type</title> <indexterm zone="datatype-xml"> <primary>XML</primary> @@ -4163,7 +4163,7 @@ a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11 functions to perform type-safe operations on it; see <xref linkend="functions-xml">. Use of this data type requires the installation to have been built with <command>configure - --with-libxml</>. + --with-libxml</command>. </para> <para> @@ -4311,7 +4311,7 @@ SET xmloption TO { DOCUMENT | CONTENT }; <para> Some XML-related functions may not work at all on non-ASCII data when the server encoding is not UTF-8. This is known to be an - issue for <function>xmltable()</> and <function>xpath()</> in particular. + issue for <function>xmltable()</function> and <function>xpath()</function> in particular. 
</para> </caution> </sect2> @@ -4421,17 +4421,17 @@ SET xmloption TO { DOCUMENT | CONTENT }; system tables. OIDs are not added to user-created tables, unless <literal>WITH OIDS</literal> is specified when the table is created, or the <xref linkend="guc-default-with-oids"> - configuration variable is enabled. Type <type>oid</> represents + configuration variable is enabled. Type <type>oid</type> represents an object identifier. There are also several alias types for - <type>oid</>: <type>regproc</>, <type>regprocedure</>, - <type>regoper</>, <type>regoperator</>, <type>regclass</>, - <type>regtype</>, <type>regrole</>, <type>regnamespace</>, - <type>regconfig</>, and <type>regdictionary</>. + <type>oid</type>: <type>regproc</type>, <type>regprocedure</type>, + <type>regoper</type>, <type>regoperator</type>, <type>regclass</type>, + <type>regtype</type>, <type>regrole</type>, <type>regnamespace</type>, + <type>regconfig</type>, and <type>regdictionary</type>. <xref linkend="datatype-oid-table"> shows an overview. </para> <para> - The <type>oid</> type is currently implemented as an unsigned + The <type>oid</type> type is currently implemented as an unsigned four-byte integer. Therefore, it is not large enough to provide database-wide uniqueness in large databases, or even in large individual tables. So, using a user-created table's OID column as @@ -4440,7 +4440,7 @@ SET xmloption TO { DOCUMENT | CONTENT }; </para> <para> - The <type>oid</> type itself has few operations beyond comparison. + The <type>oid</type> type itself has few operations beyond comparison. It can be cast to integer, however, and then manipulated using the standard integer operators. (Beware of possible signed-versus-unsigned confusion if you do this.) @@ -4450,10 +4450,10 @@ SET xmloption TO { DOCUMENT | CONTENT }; The OID alias types have no operations of their own except for specialized input and output routines. 
These routines are able to accept and display symbolic names for system objects, rather than - the raw numeric value that type <type>oid</> would use. The alias + the raw numeric value that type <type>oid</type> would use. The alias types allow simplified lookup of OID values for objects. For example, - to examine the <structname>pg_attribute</> rows related to a table - <literal>mytable</>, one could write: + to examine the <structname>pg_attribute</structname> rows related to a table + <literal>mytable</literal>, one could write: <programlisting> SELECT * FROM pg_attribute WHERE attrelid = 'mytable'::regclass; </programlisting> @@ -4465,11 +4465,11 @@ SELECT * FROM pg_attribute While that doesn't look all that bad by itself, it's still oversimplified. A far more complicated sub-select would be needed to select the right OID if there are multiple tables named - <literal>mytable</> in different schemas. - The <type>regclass</> input converter handles the table lookup according - to the schema path setting, and so it does the <quote>right thing</> + <literal>mytable</literal> in different schemas. + The <type>regclass</type> input converter handles the table lookup according + to the schema path setting, and so it does the <quote>right thing</quote> automatically. Similarly, casting a table's OID to - <type>regclass</> is handy for symbolic display of a numeric OID. + <type>regclass</type> is handy for symbolic display of a numeric OID. 
</para> <table id="datatype-oid-table"> @@ -4487,80 +4487,80 @@ SELECT * FROM pg_attribute <tbody> <row> - <entry><type>oid</></entry> + <entry><type>oid</type></entry> <entry>any</entry> <entry>numeric object identifier</entry> - <entry><literal>564182</></entry> + <entry><literal>564182</literal></entry> </row> <row> - <entry><type>regproc</></entry> - <entry><structname>pg_proc</></entry> + <entry><type>regproc</type></entry> + <entry><structname>pg_proc</structname></entry> <entry>function name</entry> - <entry><literal>sum</></entry> + <entry><literal>sum</literal></entry> </row> <row> - <entry><type>regprocedure</></entry> - <entry><structname>pg_proc</></entry> + <entry><type>regprocedure</type></entry> + <entry><structname>pg_proc</structname></entry> <entry>function with argument types</entry> - <entry><literal>sum(int4)</></entry> + <entry><literal>sum(int4)</literal></entry> </row> <row> - <entry><type>regoper</></entry> - <entry><structname>pg_operator</></entry> + <entry><type>regoper</type></entry> + <entry><structname>pg_operator</structname></entry> <entry>operator name</entry> - <entry><literal>+</></entry> + <entry><literal>+</literal></entry> </row> <row> - <entry><type>regoperator</></entry> - <entry><structname>pg_operator</></entry> + <entry><type>regoperator</type></entry> + <entry><structname>pg_operator</structname></entry> <entry>operator with argument types</entry> - <entry><literal>*(integer,integer)</> or <literal>-(NONE,integer)</></entry> + <entry><literal>*(integer,integer)</literal> or <literal>-(NONE,integer)</literal></entry> </row> <row> - <entry><type>regclass</></entry> - <entry><structname>pg_class</></entry> + <entry><type>regclass</type></entry> + <entry><structname>pg_class</structname></entry> <entry>relation name</entry> - <entry><literal>pg_type</></entry> + <entry><literal>pg_type</literal></entry> </row> <row> - <entry><type>regtype</></entry> - <entry><structname>pg_type</></entry> + 
<entry><type>regtype</type></entry> + <entry><structname>pg_type</structname></entry> <entry>data type name</entry> - <entry><literal>integer</></entry> + <entry><literal>integer</literal></entry> </row> <row> - <entry><type>regrole</></entry> - <entry><structname>pg_authid</></entry> + <entry><type>regrole</type></entry> + <entry><structname>pg_authid</structname></entry> <entry>role name</entry> - <entry><literal>smithee</></entry> + <entry><literal>smithee</literal></entry> </row> <row> - <entry><type>regnamespace</></entry> - <entry><structname>pg_namespace</></entry> + <entry><type>regnamespace</type></entry> + <entry><structname>pg_namespace</structname></entry> <entry>namespace name</entry> - <entry><literal>pg_catalog</></entry> + <entry><literal>pg_catalog</literal></entry> </row> <row> - <entry><type>regconfig</></entry> - <entry><structname>pg_ts_config</></entry> + <entry><type>regconfig</type></entry> + <entry><structname>pg_ts_config</structname></entry> <entry>text search configuration</entry> - <entry><literal>english</></entry> + <entry><literal>english</literal></entry> </row> <row> - <entry><type>regdictionary</></entry> - <entry><structname>pg_ts_dict</></entry> + <entry><type>regdictionary</type></entry> + <entry><structname>pg_ts_dict</structname></entry> <entry>text search dictionary</entry> - <entry><literal>simple</></entry> + <entry><literal>simple</literal></entry> </row> </tbody> </tgroup> @@ -4571,11 +4571,11 @@ SELECT * FROM pg_attribute schema-qualified names, and will display schema-qualified names on output if the object would not be found in the current search path without being qualified. - The <type>regproc</> and <type>regoper</> alias types will only + The <type>regproc</type> and <type>regoper</type> alias types will only accept input names that are unique (not overloaded), so they are - of limited use; for most uses <type>regprocedure</> or - <type>regoperator</> are more appropriate. 
For <type>regoperator</>, - unary operators are identified by writing <literal>NONE</> for the unused + of limited use; for most uses <type>regprocedure</type> or + <type>regoperator</type> are more appropriate. For <type>regoperator</type>, + unary operators are identified by writing <literal>NONE</literal> for the unused operand. </para> @@ -4585,12 +4585,12 @@ SELECT * FROM pg_attribute constant of one of these types appears in a stored expression (such as a column default expression or view), it creates a dependency on the referenced object. For example, if a column has a default - expression <literal>nextval('my_seq'::regclass)</>, + expression <literal>nextval('my_seq'::regclass)</literal>, <productname>PostgreSQL</productname> understands that the default expression depends on the sequence - <literal>my_seq</>; the system will not let the sequence be dropped + <literal>my_seq</literal>; the system will not let the sequence be dropped without first removing the default expression. - <type>regrole</> is the only exception for the property. Constants of this + <type>regrole</type> is the only exception for the property. Constants of this type are not allowed in such expressions. </para> @@ -4603,21 +4603,21 @@ SELECT * FROM pg_attribute </note> <para> - Another identifier type used by the system is <type>xid</>, or transaction - (abbreviated <abbrev>xact</>) identifier. This is the data type of the system columns - <structfield>xmin</> and <structfield>xmax</>. Transaction identifiers are 32-bit quantities. + Another identifier type used by the system is <type>xid</type>, or transaction + (abbreviated <abbrev>xact</abbrev>) identifier. This is the data type of the system columns + <structfield>xmin</structfield> and <structfield>xmax</structfield>. Transaction identifiers are 32-bit quantities. </para> <para> - A third identifier type used by the system is <type>cid</>, or + A third identifier type used by the system is <type>cid</type>, or command identifier. 
This is the data type of the system columns - <structfield>cmin</> and <structfield>cmax</>. Command identifiers are also 32-bit quantities. + <structfield>cmin</structfield> and <structfield>cmax</structfield>. Command identifiers are also 32-bit quantities. </para> <para> - A final identifier type used by the system is <type>tid</>, or tuple + A final identifier type used by the system is <type>tid</type>, or tuple identifier (row identifier). This is the data type of the system column - <structfield>ctid</>. A tuple ID is a pair + <structfield>ctid</structfield>. A tuple ID is a pair (block number, tuple index within block) that identifies the physical location of the row within its table. </para> @@ -4646,7 +4646,7 @@ SELECT * FROM pg_attribute Internally, an LSN is a 64-bit integer, representing a byte position in the write-ahead log stream. It is printed as two hexadecimal numbers of up to 8 digits each, separated by a slash; for example, - <literal>16/B374D848</>. The <type>pg_lsn</type> type supports the + <literal>16/B374D848</literal>. The <type>pg_lsn</type> type supports the standard comparison operators, like <literal>=</literal> and <literal>></literal>. Two LSNs can be subtracted using the <literal>-</literal> operator; the result is the number of bytes separating @@ -4736,7 +4736,7 @@ SELECT * FROM pg_attribute <para> The <productname>PostgreSQL</productname> type system contains a number of special-purpose entries that are collectively called - <firstterm>pseudo-types</>. A pseudo-type cannot be used as a + <firstterm>pseudo-types</firstterm>. A pseudo-type cannot be used as a column data type, but it can be used to declare a function's argument or result type. 
Each of the available pseudo-types is useful in situations where a function's behavior does not @@ -4758,106 +4758,106 @@ SELECT * FROM pg_attribute <tbody> <row> - <entry><type>any</></entry> + <entry><type>any</type></entry> <entry>Indicates that a function accepts any input data type.</entry> </row> <row> - <entry><type>anyelement</></entry> + <entry><type>anyelement</type></entry> <entry>Indicates that a function accepts any data type (see <xref linkend="extend-types-polymorphic">).</entry> </row> <row> - <entry><type>anyarray</></entry> + <entry><type>anyarray</type></entry> <entry>Indicates that a function accepts any array data type (see <xref linkend="extend-types-polymorphic">).</entry> </row> <row> - <entry><type>anynonarray</></entry> + <entry><type>anynonarray</type></entry> <entry>Indicates that a function accepts any non-array data type (see <xref linkend="extend-types-polymorphic">).</entry> </row> <row> - <entry><type>anyenum</></entry> + <entry><type>anyenum</type></entry> <entry>Indicates that a function accepts any enum data type (see <xref linkend="extend-types-polymorphic"> and <xref linkend="datatype-enum">).</entry> </row> <row> - <entry><type>anyrange</></entry> + <entry><type>anyrange</type></entry> <entry>Indicates that a function accepts any range data type (see <xref linkend="extend-types-polymorphic"> and <xref linkend="rangetypes">).</entry> </row> <row> - <entry><type>cstring</></entry> + <entry><type>cstring</type></entry> <entry>Indicates that a function accepts or returns a null-terminated C string.</entry> </row> <row> - <entry><type>internal</></entry> + <entry><type>internal</type></entry> <entry>Indicates that a function accepts or returns a server-internal data type.</entry> </row> <row> - <entry><type>language_handler</></entry> - <entry>A procedural language call handler is declared to return <type>language_handler</>.</entry> + <entry><type>language_handler</type></entry> + <entry>A procedural language call handler is 
declared to return <type>language_handler</type>.</entry> </row> <row> - <entry><type>fdw_handler</></entry> - <entry>A foreign-data wrapper handler is declared to return <type>fdw_handler</>.</entry> + <entry><type>fdw_handler</type></entry> + <entry>A foreign-data wrapper handler is declared to return <type>fdw_handler</type>.</entry> </row> <row> - <entry><type>index_am_handler</></entry> - <entry>An index access method handler is declared to return <type>index_am_handler</>.</entry> + <entry><type>index_am_handler</type></entry> + <entry>An index access method handler is declared to return <type>index_am_handler</type>.</entry> </row> <row> - <entry><type>tsm_handler</></entry> - <entry>A tablesample method handler is declared to return <type>tsm_handler</>.</entry> + <entry><type>tsm_handler</type></entry> + <entry>A tablesample method handler is declared to return <type>tsm_handler</type>.</entry> </row> <row> - <entry><type>record</></entry> + <entry><type>record</type></entry> <entry>Identifies a function taking or returning an unspecified row type.</entry> </row> <row> - <entry><type>trigger</></entry> - <entry>A trigger function is declared to return <type>trigger.</></entry> + <entry><type>trigger</type></entry> + <entry>A trigger function is declared to return <type>trigger.</type></entry> </row> <row> - <entry><type>event_trigger</></entry> - <entry>An event trigger function is declared to return <type>event_trigger.</></entry> + <entry><type>event_trigger</type></entry> + <entry>An event trigger function is declared to return <type>event_trigger.</type></entry> </row> <row> - <entry><type>pg_ddl_command</></entry> + <entry><type>pg_ddl_command</type></entry> <entry>Identifies a representation of DDL commands that is available to event triggers.</entry> </row> <row> - <entry><type>void</></entry> + <entry><type>void</type></entry> <entry>Indicates that a function returns no value.</entry> </row> <row> - <entry><type>unknown</></entry> + 
<entry><type>unknown</type></entry> <entry>Identifies a not-yet-resolved type, e.g. of an undecorated string literal.</entry> </row> <row> - <entry><type>opaque</></entry> + <entry><type>opaque</type></entry> <entry>An obsolete type name that formerly served many of the above purposes.</entry> </row> @@ -4876,24 +4876,24 @@ SELECT * FROM pg_attribute Functions coded in procedural languages can use pseudo-types only as allowed by their implementation languages. At present most procedural languages forbid use of a pseudo-type as an argument type, and allow - only <type>void</> and <type>record</> as a result type (plus - <type>trigger</> or <type>event_trigger</> when the function is used + only <type>void</type> and <type>record</type> as a result type (plus + <type>trigger</type> or <type>event_trigger</type> when the function is used as a trigger or event trigger). Some also - support polymorphic functions using the types <type>anyelement</>, - <type>anyarray</>, <type>anynonarray</>, <type>anyenum</>, and - <type>anyrange</>. + support polymorphic functions using the types <type>anyelement</type>, + <type>anyarray</type>, <type>anynonarray</type>, <type>anyenum</type>, and + <type>anyrange</type>. </para> <para> - The <type>internal</> pseudo-type is used to declare functions + The <type>internal</type> pseudo-type is used to declare functions that are meant only to be called internally by the database system, and not by direct invocation in an <acronym>SQL</acronym> - query. If a function has at least one <type>internal</>-type + query. If a function has at least one <type>internal</type>-type argument then it cannot be called from <acronym>SQL</acronym>. To preserve the type safety of this restriction it is important to follow this coding rule: do not create any function that is - declared to return <type>internal</> unless it has at least one - <type>internal</> argument. 
+ declared to return <type>internal</type> unless it has at least one + <type>internal</type> argument. </para> </sect1> diff --git a/doc/src/sgml/datetime.sgml b/doc/src/sgml/datetime.sgml index ef9139f9e38..a533bbf8d2e 100644 --- a/doc/src/sgml/datetime.sgml +++ b/doc/src/sgml/datetime.sgml @@ -37,18 +37,18 @@ <substeps> <step> <para> - If the numeric token contains a colon (<literal>:</>), this is + If the numeric token contains a colon (<literal>:</literal>), this is a time string. Include all subsequent digits and colons. </para> </step> <step> <para> - If the numeric token contains a dash (<literal>-</>), slash - (<literal>/</>), or two or more dots (<literal>.</>), this is + If the numeric token contains a dash (<literal>-</literal>), slash + (<literal>/</literal>), or two or more dots (<literal>.</literal>), this is a date string which might have a text month. If a date token has already been seen, it is instead interpreted as a time zone - name (e.g., <literal>America/New_York</>). + name (e.g., <literal>America/New_York</literal>). </para> </step> @@ -63,8 +63,8 @@ <step> <para> - If the token starts with a plus (<literal>+</>) or minus - (<literal>-</>), then it is either a numeric time zone or a special + If the token starts with a plus (<literal>+</literal>) or minus + (<literal>-</literal>), then it is either a numeric time zone or a special field. </para> </step> @@ -114,7 +114,7 @@ and if no other date fields have been previously read, then interpret as a <quote>concatenated date</quote> (e.g., <literal>19990118</literal> or <literal>990118</literal>). - The interpretation is <literal>YYYYMMDD</> or <literal>YYMMDD</>. + The interpretation is <literal>YYYYMMDD</literal> or <literal>YYMMDD</literal>. </para> </step> @@ -128,7 +128,7 @@ <step> <para> If four or six digits and a year has already been read, then - interpret as a time (<literal>HHMM</> or <literal>HHMMSS</>). + interpret as a time (<literal>HHMM</literal> or <literal>HHMMSS</literal>). 
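The decoding rules in the steps above can be exercised directly (a sketch; exact output formatting depends on the session's `DateStyle` setting):

```sql
-- Eight digits, no date fields seen yet: a "concatenated date", YYYYMMDD.
SELECT date '19990118';

-- Six digits after a year has already been read: a time, HHMMSS.
SELECT timestamp '19990118 040506';
```

Both inputs parse because the tokenizer classifies each numeric field by its length and by what has already been consumed, exactly as the step list describes.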
</para> </step> @@ -143,7 +143,7 @@ <step> <para> Otherwise the date field ordering is assumed to follow the - <varname>DateStyle</> setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. + <varname>DateStyle</varname> setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. Throw an error if a month or day field is found to be out of range. </para> </step> @@ -167,7 +167,7 @@ <tip> <para> Gregorian years AD 1-99 can be entered by using 4 digits with leading - zeros (e.g., <literal>0099</> is AD 99). + zeros (e.g., <literal>0099</literal> is AD 99). </para> </tip> </para> @@ -317,7 +317,7 @@ <entry>Ignored</entry> </row> <row> - <entry><literal>JULIAN</>, <literal>JD</>, <literal>J</></entry> + <entry><literal>JULIAN</literal>, <literal>JD</literal>, <literal>J</literal></entry> <entry>Next field is Julian Date</entry> </row> <row> @@ -354,23 +354,23 @@ can be altered by any database user, the possible values for it are under the control of the database administrator — they are in fact names of configuration files stored in - <filename>.../share/timezonesets/</> of the installation directory. + <filename>.../share/timezonesets/</filename> of the installation directory. By adding or altering files in that directory, the administrator can set local policy for timezone abbreviations. </para> <para> - <varname>timezone_abbreviations</> can be set to any file name - found in <filename>.../share/timezonesets/</>, if the file's name + <varname>timezone_abbreviations</varname> can be set to any file name + found in <filename>.../share/timezonesets/</filename>, if the file's name is entirely alphabetic. (The prohibition against non-alphabetic - characters in <varname>timezone_abbreviations</> prevents reading + characters in <varname>timezone_abbreviations</varname> prevents reading files outside the intended directory, as well as reading editor backup files and other extraneous files.) </para> <para> A timezone abbreviation file can contain blank lines and comments - beginning with <literal>#</>. 
Non-comment lines must have one of + beginning with <literal>#</literal>. Non-comment lines must have one of these formats: <synopsis> @@ -388,12 +388,12 @@ the equivalent offset in seconds from UTC, positive being east from Greenwich and negative being west. For example, -18000 would be five hours west of Greenwich, or North American east coast standard time. - <literal>D</> indicates that the zone name represents local + <literal>D</literal> indicates that the zone name represents local daylight-savings time rather than standard time. </para> <para> - Alternatively, a <replaceable>time_zone_name</> can be given, referencing + Alternatively, a <replaceable>time_zone_name</replaceable> can be given, referencing a zone name defined in the IANA timezone database. The zone's definition is consulted to see whether the abbreviation is or has been in use in that zone, and if so, the appropriate meaning is used — that is, @@ -417,34 +417,34 @@ </tip> <para> - The <literal>@INCLUDE</> syntax allows inclusion of another file in the - <filename>.../share/timezonesets/</> directory. Inclusion can be nested, + The <literal>@INCLUDE</literal> syntax allows inclusion of another file in the + <filename>.../share/timezonesets/</filename> directory. Inclusion can be nested, to a limited depth. </para> <para> - The <literal>@OVERRIDE</> syntax indicates that subsequent entries in the + The <literal>@OVERRIDE</literal> syntax indicates that subsequent entries in the file can override previous entries (typically, entries obtained from included files). Without this, conflicting definitions of the same timezone abbreviation are considered an error. </para> <para> - In an unmodified installation, the file <filename>Default</> contains + In an unmodified installation, the file <filename>Default</filename> contains all the non-conflicting time zone abbreviations for most of the world. 
- Additional files <filename>Australia</> and <filename>India</> are + Additional files <filename>Australia</filename> and <filename>India</filename> are provided for those regions: these files first include the - <literal>Default</> file and then add or modify abbreviations as needed. + <literal>Default</literal> file and then add or modify abbreviations as needed. </para> <para> For reference purposes, a standard installation also contains files - <filename>Africa.txt</>, <filename>America.txt</>, etc, containing + <filename>Africa.txt</filename>, <filename>America.txt</filename>, etc, containing information about every time zone abbreviation known to be in use according to the IANA timezone database. The zone name definitions found in these files can be copied and pasted into a custom configuration file as needed. Note that these files cannot be directly - referenced as <varname>timezone_abbreviations</> settings, because of + referenced as <varname>timezone_abbreviations</varname> settings, because of the dot embedded in their names. </para> @@ -460,16 +460,16 @@ <para> Time zone abbreviations defined in the configuration file override non-timezone meanings built into <productname>PostgreSQL</productname>. - For example, the <filename>Australia</> configuration file defines - <literal>SAT</> (for South Australian Standard Time). When this - file is active, <literal>SAT</> will not be recognized as an abbreviation + For example, the <filename>Australia</filename> configuration file defines + <literal>SAT</literal> (for South Australian Standard Time). When this + file is active, <literal>SAT</literal> will not be recognized as an abbreviation for Saturday. </para> </caution> <caution> <para> - If you modify files in <filename>.../share/timezonesets/</>, + If you modify files in <filename>.../share/timezonesets/</filename>, it is up to you to make backups — a normal database dump will not include this directory. 
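The shadowing behavior warned about in the caution can be demonstrated in a session (a sketch; it assumes a stock installation where the `Australia` file is present under `.../share/timezonesets/`):

```sql
-- Select the Australia abbreviation set by file name:
SET timezone_abbreviations = 'Australia';

-- SAT now parses as South Australian Standard Time, not as Saturday:
SELECT timestamp with time zone '2017-01-01 12:00 SAT';
```

Resetting `timezone_abbreviations` to `'Default'` restores the stock abbreviations for the session.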
</para> @@ -492,10 +492,10 @@ <quote>datetime literal</quote>, the <quote>datetime values</quote> are constrained by the natural rules for dates and times according to the Gregorian calendar</quote>. - <productname>PostgreSQL</> follows the SQL + <productname>PostgreSQL</productname> follows the SQL standard's lead by counting dates exclusively in the Gregorian calendar, even for years before that calendar was in use. - This rule is known as the <firstterm>proleptic Gregorian calendar</>. + This rule is known as the <firstterm>proleptic Gregorian calendar</firstterm>. </para> <para> @@ -569,7 +569,7 @@ $ <userinput>cal 9 1752</userinput> dominions, not other places. Since it would be difficult and confusing to try to track the actual calendars that were in use in various places at various times, - <productname>PostgreSQL</> does not try, but rather follows the Gregorian + <productname>PostgreSQL</productname> does not try, but rather follows the Gregorian calendar rules for all dates, even though this method is not historically accurate. </para> @@ -597,7 +597,7 @@ $ <userinput>cal 9 1752</userinput> and probably takes its name from Scaliger's father, the Italian scholar Julius Caesar Scaliger (1484-1558). In the Julian Date system, each day has a sequential number, starting - from JD 0 (which is sometimes called <emphasis>the</> Julian Date). + from JD 0 (which is sometimes called <emphasis>the</emphasis> Julian Date). JD 0 corresponds to 1 January 4713 BC in the Julian calendar, or 24 November 4714 BC in the Gregorian calendar. 
Julian Date counting is most often used by astronomers for labeling their nightly observations, @@ -607,10 +607,10 @@ $ <userinput>cal 9 1752</userinput> </para> <para> - Although <productname>PostgreSQL</> supports Julian Date notation for + Although <productname>PostgreSQL</productname> supports Julian Date notation for input and output of dates (and also uses Julian dates for some internal datetime calculations), it does not observe the nicety of having dates - run from noon to noon. <productname>PostgreSQL</> treats a Julian Date + run from noon to noon. <productname>PostgreSQL</productname> treats a Julian Date as running from midnight to midnight. </para> diff --git a/doc/src/sgml/dblink.sgml b/doc/src/sgml/dblink.sgml index f19c6b19f53..1f17d3ad2dc 100644 --- a/doc/src/sgml/dblink.sgml +++ b/doc/src/sgml/dblink.sgml @@ -8,8 +8,8 @@ </indexterm> <para> - <filename>dblink</> is a module that supports connections to - other <productname>PostgreSQL</> databases from within a database + <filename>dblink</filename> is a module that supports connections to + other <productname>PostgreSQL</productname> databases from within a database session. </para> @@ -44,9 +44,9 @@ dblink_connect(text connname, text connstr) returns text <title>Description</title> <para> - <function>dblink_connect()</> establishes a connection to a remote - <productname>PostgreSQL</> database. The server and database to - be contacted are identified through a standard <application>libpq</> + <function>dblink_connect()</function> establishes a connection to a remote + <productname>PostgreSQL</productname> database. The server and database to + be contacted are identified through a standard <application>libpq</application> connection string. Optionally, a name can be assigned to the connection. Multiple named connections can be open at once, but only one unnamed connection is permitted at a time. 
The connection @@ -81,9 +81,9 @@ dblink_connect(text connname, text connstr) returns text <varlistentry> <term><parameter>connstr</parameter></term> <listitem> - <para><application>libpq</>-style connection info string, for example + <para><application>libpq</application>-style connection info string, for example <literal>hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres - password=mypasswd</>. + password=mypasswd</literal>. For details see <xref linkend="libpq-connstring">. Alternatively, the name of a foreign server. </para> @@ -96,7 +96,7 @@ dblink_connect(text connname, text connstr) returns text <title>Return Value</title> <para> - Returns status, which is always <literal>OK</> (since any error + Returns status, which is always <literal>OK</literal> (since any error causes the function to throw an error instead of returning). </para> </refsect1> @@ -105,15 +105,15 @@ dblink_connect(text connname, text connstr) returns text <title>Notes</title> <para> - Only superusers may use <function>dblink_connect</> to create + Only superusers may use <function>dblink_connect</function> to create non-password-authenticated connections. If non-superusers need this - capability, use <function>dblink_connect_u</> instead. + capability, use <function>dblink_connect_u</function> instead. </para> <para> It is unwise to choose connection names that contain equal signs, as this opens a risk of confusion with connection info strings - in other <filename>dblink</> functions. + in other <filename>dblink</filename> functions. </para> </refsect1> @@ -208,8 +208,8 @@ dblink_connect_u(text connname, text connstr) returns text <title>Description</title> <para> - <function>dblink_connect_u()</> is identical to - <function>dblink_connect()</>, except that it will allow non-superusers + <function>dblink_connect_u()</function> is identical to + <function>dblink_connect()</function>, except that it will allow non-superusers to connect using any authentication method. 
</para> @@ -217,24 +217,24 @@ dblink_connect_u(text connname, text connstr) returns text If the remote server selects an authentication method that does not involve a password, then impersonation and subsequent escalation of privileges can occur, because the session will appear to have - originated from the user as which the local <productname>PostgreSQL</> + originated from the user as which the local <productname>PostgreSQL</productname> server runs. Also, even if the remote server does demand a password, it is possible for the password to be supplied from the server - environment, such as a <filename>~/.pgpass</> file belonging to the + environment, such as a <filename>~/.pgpass</filename> file belonging to the server's user. This opens not only a risk of impersonation, but the possibility of exposing a password to an untrustworthy remote server. - Therefore, <function>dblink_connect_u()</> is initially - installed with all privileges revoked from <literal>PUBLIC</>, + Therefore, <function>dblink_connect_u()</function> is initially + installed with all privileges revoked from <literal>PUBLIC</literal>, making it un-callable except by superusers. In some situations - it may be appropriate to grant <literal>EXECUTE</> permission for - <function>dblink_connect_u()</> to specific users who are considered + it may be appropriate to grant <literal>EXECUTE</literal> permission for + <function>dblink_connect_u()</function> to specific users who are considered trustworthy, but this should be done with care. It is also recommended - that any <filename>~/.pgpass</> file belonging to the server's user - <emphasis>not</> contain any records specifying a wildcard host name. + that any <filename>~/.pgpass</filename> file belonging to the server's user + <emphasis>not</emphasis> contain any records specifying a wildcard host name. </para> <para> - For further details see <function>dblink_connect()</>. + For further details see <function>dblink_connect()</function>. 
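The privilege setup recommended above might look like this (a sketch; `trusted_batch_role` is a hypothetical role name, and both overloads of the function should be granted together):

```sql
-- dblink_connect_u is installed with EXECUTE revoked from PUBLIC;
-- re-grant it only to roles considered trustworthy:
GRANT EXECUTE ON FUNCTION dblink_connect_u(text)       TO trusted_batch_role;
GRANT EXECUTE ON FUNCTION dblink_connect_u(text, text) TO trusted_batch_role;
```

Keeping the grant on specific roles, rather than `PUBLIC`, preserves the privilege-escalation protection the passage describes.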
</para> </refsect1> </refentry> @@ -265,8 +265,8 @@ dblink_disconnect(text connname) returns text <title>Description</title> <para> - <function>dblink_disconnect()</> closes a connection previously opened - by <function>dblink_connect()</>. The form with no arguments closes + <function>dblink_disconnect()</function> closes a connection previously opened + by <function>dblink_connect()</function>. The form with no arguments closes an unnamed connection. </para> </refsect1> @@ -290,7 +290,7 @@ dblink_disconnect(text connname) returns text <title>Return Value</title> <para> - Returns status, which is always <literal>OK</> (since any error + Returns status, which is always <literal>OK</literal> (since any error causes the function to throw an error instead of returning). </para> </refsect1> @@ -341,15 +341,15 @@ dblink(text sql [, bool fail_on_error]) returns setof record <title>Description</title> <para> - <function>dblink</> executes a query (usually a <command>SELECT</>, + <function>dblink</function> executes a query (usually a <command>SELECT</command>, but it can be any SQL statement that returns rows) in a remote database. </para> <para> - When two <type>text</> arguments are given, the first one is first + When two <type>text</type> arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument - is treated as a connection info string as for <function>dblink_connect</>, + is treated as a connection info string as for <function>dblink_connect</function>, and the indicated connection is made just for the duration of this command. </para> </refsect1> @@ -373,7 +373,7 @@ dblink(text sql [, bool fail_on_error]) returns setof record <listitem> <para> A connection info string, as previously described for - <function>dblink_connect</>. + <function>dblink_connect</function>. 
</para> </listitem> </varlistentry> @@ -383,7 +383,7 @@ dblink(text sql [, bool fail_on_error]) returns setof record <listitem> <para> The SQL query that you wish to execute in the remote database, - for example <literal>select * from foo</>. + for example <literal>select * from foo</literal>. </para> </listitem> </varlistentry> @@ -407,11 +407,11 @@ dblink(text sql [, bool fail_on_error]) returns setof record <para> The function returns the row(s) produced by the query. Since - <function>dblink</> can be used with any query, it is declared - to return <type>record</>, rather than specifying any particular + <function>dblink</function> can be used with any query, it is declared + to return <type>record</type>, rather than specifying any particular set of columns. This means that you must specify the expected set of columns in the calling query — otherwise - <productname>PostgreSQL</> would not know what to expect. + <productname>PostgreSQL</productname> would not know what to expect. Here is an example: <programlisting> @@ -421,20 +421,20 @@ SELECT * WHERE proname LIKE 'bytea%'; </programlisting> - The <quote>alias</> part of the <literal>FROM</> clause must + The <quote>alias</quote> part of the <literal>FROM</literal> clause must specify the column names and types that the function will return. (Specifying column names in an alias is actually standard SQL - syntax, but specifying column types is a <productname>PostgreSQL</> + syntax, but specifying column types is a <productname>PostgreSQL</productname> extension.) This allows the system to understand what - <literal>*</> should expand to, and what <structname>proname</> - in the <literal>WHERE</> clause refers to, in advance of trying + <literal>*</literal> should expand to, and what <structname>proname</structname> + in the <literal>WHERE</literal> clause refers to, in advance of trying to execute the function. 
At run time, an error will be thrown if the actual query result from the remote database does not - have the same number of columns shown in the <literal>FROM</> clause. - The column names need not match, however, and <function>dblink</> + have the same number of columns shown in the <literal>FROM</literal> clause. + The column names need not match, however, and <function>dblink</function> does not insist on exact type matches either. It will succeed so long as the returned data strings are valid input for the - column type declared in the <literal>FROM</> clause. + column type declared in the <literal>FROM</literal> clause. </para> </refsect1> @@ -442,7 +442,7 @@ SELECT * <title>Notes</title> <para> - A convenient way to use <function>dblink</> with predetermined + A convenient way to use <function>dblink</function> with predetermined queries is to create a view. This allows the column type information to be buried in the view, instead of having to spell it out in every query. For example, @@ -559,15 +559,15 @@ dblink_exec(text sql [, bool fail_on_error]) returns text <title>Description</title> <para> - <function>dblink_exec</> executes a command (that is, any SQL statement + <function>dblink_exec</function> executes a command (that is, any SQL statement that doesn't return rows) in a remote database. </para> <para> - When two <type>text</> arguments are given, the first one is first + When two <type>text</type> arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument - is treated as a connection info string as for <function>dblink_connect</>, + is treated as a connection info string as for <function>dblink_connect</function>, and the indicated connection is made just for the duration of this command. 
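A minimal `dblink_exec` round trip over a named connection can be sketched as follows (assuming a table `foo` exists in the remote database, per the parameter example in the text):

```sql
-- Establish a named persistent connection, run a non-row-returning
-- command on it, then disconnect:
SELECT dblink_connect('myconn', 'dbname=postgres');
SELECT dblink_exec('myconn', 'INSERT INTO foo VALUES (0, ''a'')');
SELECT dblink_disconnect('myconn');
```

Passing `false` as the trailing `fail_on_error` argument would instead report a remote failure as a local NOTICE and return `ERROR` as the status string.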
</para> </refsect1> @@ -591,7 +591,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text <listitem> <para> A connection info string, as previously described for - <function>dblink_connect</>. + <function>dblink_connect</function>. </para> </listitem> </varlistentry> @@ -602,7 +602,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text <para> The SQL command that you wish to execute in the remote database, for example - <literal>insert into foo values(0,'a','{"a0","b0","c0"}')</>. + <literal>insert into foo values(0,'a','{"a0","b0","c0"}')</literal>. </para> </listitem> </varlistentry> @@ -614,7 +614,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, - and the function's return value is set to <literal>ERROR</>. + and the function's return value is set to <literal>ERROR</literal>. </para> </listitem> </varlistentry> @@ -625,7 +625,7 @@ dblink_exec(text sql [, bool fail_on_error]) returns text <title>Return Value</title> <para> - Returns status, either the command's status string or <literal>ERROR</>. + Returns status, either the command's status string or <literal>ERROR</literal>. </para> </refsect1> @@ -695,9 +695,9 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret <title>Description</title> <para> - <function>dblink_open()</> opens a cursor in a remote database. + <function>dblink_open()</function> opens a cursor in a remote database. The cursor can subsequently be manipulated with - <function>dblink_fetch()</> and <function>dblink_close()</>. + <function>dblink_fetch()</function> and <function>dblink_close()</function>. 
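A minimal use of `dblink_exec` on a named connection, assuming a hypothetical connection string and remote table `foo`, might look like:

```sql
-- dblink_exec runs a statement that returns no rows; its return value
-- is the remote command's status string (or ERROR when fail_on_error
-- is false and the remote side raised an error).
SELECT dblink_connect('myconn', 'dbname=postgres');
SELECT dblink_exec('myconn',
                   'insert into foo values (21, ''z'', ''{"a0","b0","c0"}'')');
SELECT dblink_disconnect('myconn');
```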
</para> </refsect1> @@ -728,8 +728,8 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret <term><parameter>sql</parameter></term> <listitem> <para> - The <command>SELECT</> statement that you wish to execute in the remote - database, for example <literal>select * from pg_class</>. + The <command>SELECT</command> statement that you wish to execute in the remote + database, for example <literal>select * from pg_class</literal>. </para> </listitem> </varlistentry> @@ -741,7 +741,7 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, - and the function's return value is set to <literal>ERROR</>. + and the function's return value is set to <literal>ERROR</literal>. </para> </listitem> </varlistentry> @@ -752,7 +752,7 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret <title>Return Value</title> <para> - Returns status, either <literal>OK</> or <literal>ERROR</>. + Returns status, either <literal>OK</literal> or <literal>ERROR</literal>. </para> </refsect1> @@ -761,16 +761,16 @@ dblink_open(text connname, text cursorname, text sql [, bool fail_on_error]) ret <para> Since a cursor can only persist within a transaction, - <function>dblink_open</> starts an explicit transaction block - (<command>BEGIN</>) on the remote side, if the remote side was + <function>dblink_open</function> starts an explicit transaction block + (<command>BEGIN</command>) on the remote side, if the remote side was not already within a transaction. This transaction will be - closed again when the matching <function>dblink_close</> is + closed again when the matching <function>dblink_close</function> is executed. 
Note that if - you use <function>dblink_exec</> to change data between - <function>dblink_open</> and <function>dblink_close</>, - and then an error occurs or you use <function>dblink_disconnect</> before - <function>dblink_close</>, your change <emphasis>will be - lost</> because the transaction will be aborted. + you use <function>dblink_exec</function> to change data between + <function>dblink_open</function> and <function>dblink_close</function>, + and then an error occurs or you use <function>dblink_disconnect</function> before + <function>dblink_close</function>, your change <emphasis>will be + lost</emphasis> because the transaction will be aborted. </para> </refsect1> @@ -819,8 +819,8 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) <title>Description</title> <para> - <function>dblink_fetch</> fetches rows from a cursor previously - established by <function>dblink_open</>. + <function>dblink_fetch</function> fetches rows from a cursor previously + established by <function>dblink_open</function>. </para> </refsect1> @@ -851,7 +851,7 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) <term><parameter>howmany</parameter></term> <listitem> <para> - The maximum number of rows to retrieve. The next <parameter>howmany</> + The maximum number of rows to retrieve. The next <parameter>howmany</parameter> rows are fetched, starting at the current cursor position, moving forward. Once the cursor has reached its end, no more rows are produced. </para> @@ -878,7 +878,7 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) <para> The function returns the row(s) fetched from the cursor. To use this function, you will need to specify the expected set of columns, - as previously discussed for <function>dblink</>. + as previously discussed for <function>dblink</function>. 
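The cursor functions documented above fit together as in this sketch (connection name, cursor name, and query are illustrative):

```sql
-- dblink_open starts a remote transaction block if one is not already
-- open; dblink_fetch retrieves rows in batches; dblink_close issues the
-- matching COMMIT when the last open cursor on the connection closes.
SELECT dblink_connect('myconn', 'dbname=postgres');
SELECT dblink_open('myconn', 'mycursor',
                   'select proname, prosrc from pg_proc');
-- As with dblink itself, the FROM alias must declare the result columns.
SELECT * FROM dblink_fetch('myconn', 'mycursor', 5)
    AS t(proname name, prosrc text);
SELECT dblink_close('myconn', 'mycursor');
SELECT dblink_disconnect('myconn');
```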
</para> </refsect1> @@ -887,11 +887,11 @@ dblink_fetch(text connname, text cursorname, int howmany [, bool fail_on_error]) <para> On a mismatch between the number of return columns specified in the - <literal>FROM</> clause, and the actual number of columns returned by the + <literal>FROM</literal> clause, and the actual number of columns returned by the remote cursor, an error will be thrown. In this event, the remote cursor is still advanced by as many rows as it would have been if the error had not occurred. The same is true for any other error occurring in the local - query after the remote <command>FETCH</> has been done. + query after the remote <command>FETCH</command> has been done. </para> </refsect1> @@ -972,8 +972,8 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text <title>Description</title> <para> - <function>dblink_close</> closes a cursor previously opened with - <function>dblink_open</>. + <function>dblink_close</function> closes a cursor previously opened with + <function>dblink_open</function>. </para> </refsect1> @@ -1007,7 +1007,7 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, - and the function's return value is set to <literal>ERROR</>. + and the function's return value is set to <literal>ERROR</literal>. </para> </listitem> </varlistentry> @@ -1018,7 +1018,7 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text <title>Return Value</title> <para> - Returns status, either <literal>OK</> or <literal>ERROR</>. + Returns status, either <literal>OK</literal> or <literal>ERROR</literal>. 
</para> </refsect1> @@ -1026,9 +1026,9 @@ dblink_close(text connname, text cursorname [, bool fail_on_error]) returns text <title>Notes</title> <para> - If <function>dblink_open</> started an explicit transaction block, + If <function>dblink_open</function> started an explicit transaction block, and this is the last remaining open cursor in this connection, - <function>dblink_close</> will issue the matching <command>COMMIT</>. + <function>dblink_close</function> will issue the matching <command>COMMIT</command>. </para> </refsect1> @@ -1082,8 +1082,8 @@ dblink_get_connections() returns text[] <title>Description</title> <para> - <function>dblink_get_connections</> returns an array of the names - of all open named <filename>dblink</> connections. + <function>dblink_get_connections</function> returns an array of the names + of all open named <filename>dblink</filename> connections. </para> </refsect1> @@ -1127,7 +1127,7 @@ dblink_error_message(text connname) returns text <title>Description</title> <para> - <function>dblink_error_message</> fetches the most recent remote + <function>dblink_error_message</function> fetches the most recent remote error message for a given connection. </para> </refsect1> @@ -1190,7 +1190,7 @@ dblink_send_query(text connname, text sql) returns int <title>Description</title> <para> - <function>dblink_send_query</> sends a query to be executed + <function>dblink_send_query</function> sends a query to be executed asynchronously, that is, without immediately waiting for the result. There must not be an async query already in progress on the connection. @@ -1198,10 +1198,10 @@ dblink_send_query(text connname, text sql) returns int <para> After successfully dispatching an async query, completion status - can be checked with <function>dblink_is_busy</>, and the results - are ultimately collected with <function>dblink_get_result</>. 
+ can be checked with <function>dblink_is_busy</function>, and the results + are ultimately collected with <function>dblink_get_result</function>. It is also possible to attempt to cancel an active async query - using <function>dblink_cancel_query</>. + using <function>dblink_cancel_query</function>. </para> </refsect1> @@ -1223,7 +1223,7 @@ dblink_send_query(text connname, text sql) returns int <listitem> <para> The SQL statement that you wish to execute in the remote database, - for example <literal>select * from pg_class</>. + for example <literal>select * from pg_class</literal>. </para> </listitem> </varlistentry> @@ -1272,7 +1272,7 @@ dblink_is_busy(text connname) returns int <title>Description</title> <para> - <function>dblink_is_busy</> tests whether an async query is in progress. + <function>dblink_is_busy</function> tests whether an async query is in progress. </para> </refsect1> @@ -1297,7 +1297,7 @@ dblink_is_busy(text connname) returns int <para> Returns 1 if connection is busy, 0 if it is not busy. If this function returns 0, it is guaranteed that - <function>dblink_get_result</> will not block. + <function>dblink_get_result</function> will not block. </para> </refsect1> @@ -1336,10 +1336,10 @@ dblink_get_notify(text connname) returns setof (notify_name text, be_pid int, ex <title>Description</title> <para> - <function>dblink_get_notify</> retrieves notifications on either + <function>dblink_get_notify</function> retrieves notifications on either the unnamed connection, or on a named connection if specified. - To receive notifications via dblink, <function>LISTEN</> must - first be issued, using <function>dblink_exec</>. + To receive notifications via dblink, <function>LISTEN</function> must + first be issued, using <function>dblink_exec</function>. For details see <xref linkend="sql-listen"> and <xref linkend="sql-notify">. 
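The `LISTEN`-via-`dblink_exec` sequence described above can be sketched with a hypothetical channel name:

```sql
-- Notifications must first be subscribed to by issuing LISTEN through
-- dblink_exec; dblink_get_notify then drains any pending notifications
-- on that connection.
SELECT dblink_exec('myconn', 'LISTEN my_channel');
SELECT notify_name, be_pid, extra
    FROM dblink_get_notify('myconn');
```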
</para> @@ -1417,9 +1417,9 @@ dblink_get_result(text connname [, bool fail_on_error]) returns setof record <title>Description</title> <para> - <function>dblink_get_result</> collects the results of an - asynchronous query previously sent with <function>dblink_send_query</>. - If the query is not already completed, <function>dblink_get_result</> + <function>dblink_get_result</function> collects the results of an + asynchronous query previously sent with <function>dblink_send_query</function>. + If the query is not already completed, <function>dblink_get_result</function> will wait until it is. </para> </refsect1> @@ -1458,14 +1458,14 @@ dblink_get_result(text connname [, bool fail_on_error]) returns setof record For an async query (that is, a SQL statement returning rows), the function returns the row(s) produced by the query. To use this function, you will need to specify the expected set of columns, - as previously discussed for <function>dblink</>. + as previously discussed for <function>dblink</function>. </para> <para> For an async command (that is, a SQL statement not returning rows), the function returns a single row with a single text column containing the command's status string. It is still necessary to specify that - the result will have a single text column in the calling <literal>FROM</> + the result will have a single text column in the calling <literal>FROM</literal> clause. </para> </refsect1> @@ -1474,22 +1474,22 @@ dblink_get_result(text connname [, bool fail_on_error]) returns setof record <title>Notes</title> <para> - This function <emphasis>must</> be called if - <function>dblink_send_query</> returned 1. + This function <emphasis>must</emphasis> be called if + <function>dblink_send_query</function> returned 1. It must be called once for each query sent, and one additional time to obtain an empty set result, before the connection can be used again. 
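The asynchronous protocol spelled out in the Notes (one `dblink_get_result` call per query sent, plus one final call returning an empty set) can be sketched as follows, with a hypothetical connection and query:

```sql
-- dblink_send_query returns 1 on successful dispatch; dblink_is_busy
-- returning 0 guarantees dblink_get_result will not block.
SELECT dblink_send_query('myconn', 'select proname from pg_proc');
SELECT dblink_is_busy('myconn');
SELECT * FROM dblink_get_result('myconn') AS t(proname name);
-- One additional call returns an empty set and frees the connection.
SELECT * FROM dblink_get_result('myconn') AS t(proname name);
```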
</para> <para> - When using <function>dblink_send_query</> and - <function>dblink_get_result</>, <application>dblink</> fetches the entire + When using <function>dblink_send_query</function> and + <function>dblink_get_result</function>, <application>dblink</application> fetches the entire remote query result before returning any of it to the local query processor. If the query returns a large number of rows, this can result in transient memory bloat in the local session. It may be better to open - such a query as a cursor with <function>dblink_open</> and then fetch a + such a query as a cursor with <function>dblink_open</function> and then fetch a manageable number of rows at a time. Alternatively, use plain - <function>dblink()</>, which avoids memory bloat by spooling large result + <function>dblink()</function>, which avoids memory bloat by spooling large result sets to disk. </para> </refsect1> @@ -1581,13 +1581,13 @@ dblink_cancel_query(text connname) returns text <title>Description</title> <para> - <function>dblink_cancel_query</> attempts to cancel any query that + <function>dblink_cancel_query</function> attempts to cancel any query that is in progress on the named connection. Note that this is not certain to succeed (since, for example, the remote query might already have finished). A cancel request simply improves the odds that the query will fail soon. You must still complete the normal query protocol, for example by calling - <function>dblink_get_result</>. + <function>dblink_get_result</function>. </para> </refsect1> @@ -1610,7 +1610,7 @@ dblink_cancel_query(text connname) returns text <title>Return Value</title> <para> - Returns <literal>OK</> if the cancel request has been sent, or + Returns <literal>OK</literal> if the cancel request has been sent, or the text of an error message on failure. 
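Cancellation, as the description notes, does not replace the normal result protocol; a sketch, assuming an async query is already in flight on a hypothetical connection:

```sql
-- A cancel request only improves the odds that the remote query fails
-- soon; the result (or the resulting error) must still be collected.
SELECT dblink_cancel_query('myconn');
SELECT * FROM dblink_get_result('myconn', false) AS t(proname name);
```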
</para> </refsect1> @@ -1651,7 +1651,7 @@ dblink_get_pkey(text relname) returns setof dblink_pkey_results <title>Description</title> <para> - <function>dblink_get_pkey</> provides information about the primary + <function>dblink_get_pkey</function> provides information about the primary key of a relation in the local database. This is sometimes useful in generating queries to be sent to remote databases. </para> @@ -1665,10 +1665,10 @@ dblink_get_pkey(text relname) returns setof dblink_pkey_results <term><parameter>relname</parameter></term> <listitem> <para> - Name of a local relation, for example <literal>foo</> or - <literal>myschema.mytab</>. Include double quotes if the + Name of a local relation, for example <literal>foo</literal> or + <literal>myschema.mytab</literal>. Include double quotes if the name is mixed-case or contains special characters, for - example <literal>"FooBar"</>; without quotes, the string + example <literal>"FooBar"</literal>; without quotes, the string will be folded to lower case. </para> </listitem> @@ -1687,7 +1687,7 @@ dblink_get_pkey(text relname) returns setof dblink_pkey_results CREATE TYPE dblink_pkey_results AS (position int, colname text); </programlisting> - The <literal>position</> column simply runs from 1 to <replaceable>N</>; + The <literal>position</literal> column simply runs from 1 to <replaceable>N</replaceable>; it is the number of the field within the primary key, not the number within the table's columns. </para> @@ -1748,10 +1748,10 @@ dblink_build_sql_insert(text relname, <title>Description</title> <para> - <function>dblink_build_sql_insert</> can be useful in doing selective + <function>dblink_build_sql_insert</function> can be useful in doing selective replication of a local table to a remote database. 
It selects a row from the local table based on primary key, and then builds a SQL - <command>INSERT</> command that will duplicate that row, but with + <command>INSERT</command> command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for the last two arguments.) @@ -1766,10 +1766,10 @@ dblink_build_sql_insert(text relname, <term><parameter>relname</parameter></term> <listitem> <para> - Name of a local relation, for example <literal>foo</> or - <literal>myschema.mytab</>. Include double quotes if the + Name of a local relation, for example <literal>foo</literal> or + <literal>myschema.mytab</literal>. Include double quotes if the name is mixed-case or contains special characters, for - example <literal>"FooBar"</>; without quotes, the string + example <literal>"FooBar"</literal>; without quotes, the string will be folded to lower case. </para> </listitem> @@ -1780,7 +1780,7 @@ dblink_build_sql_insert(text relname, <listitem> <para> Attribute numbers (1-based) of the primary key fields, - for example <literal>1 2</>. + for example <literal>1 2</literal>. </para> </listitem> </varlistentry> @@ -1811,7 +1811,7 @@ dblink_build_sql_insert(text relname, <listitem> <para> Values of the primary key fields to be placed in the resulting - <command>INSERT</> command. Each field is represented in text form. + <command>INSERT</command> command. Each field is represented in text form. </para> </listitem> </varlistentry> @@ -1828,10 +1828,10 @@ dblink_build_sql_insert(text relname, <title>Notes</title> <para> - As of <productname>PostgreSQL</> 9.0, the attribute numbers in + As of <productname>PostgreSQL</productname> 9.0, the attribute numbers in <parameter>primary_key_attnums</parameter> are interpreted as logical column numbers, corresponding to the column's position in - <literal>SELECT * FROM relname</>. 
Previous versions interpreted the + <literal>SELECT * FROM relname</literal>. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table. @@ -1881,9 +1881,9 @@ dblink_build_sql_delete(text relname, <title>Description</title> <para> - <function>dblink_build_sql_delete</> can be useful in doing selective + <function>dblink_build_sql_delete</function> can be useful in doing selective replication of a local table to a remote database. It builds a SQL - <command>DELETE</> command that will delete the row with the given + <command>DELETE</command> command that will delete the row with the given primary key values. </para> </refsect1> @@ -1896,10 +1896,10 @@ dblink_build_sql_delete(text relname, <term><parameter>relname</parameter></term> <listitem> <para> - Name of a local relation, for example <literal>foo</> or - <literal>myschema.mytab</>. Include double quotes if the + Name of a local relation, for example <literal>foo</literal> or + <literal>myschema.mytab</literal>. Include double quotes if the name is mixed-case or contains special characters, for - example <literal>"FooBar"</>; without quotes, the string + example <literal>"FooBar"</literal>; without quotes, the string will be folded to lower case. </para> </listitem> @@ -1910,7 +1910,7 @@ dblink_build_sql_delete(text relname, <listitem> <para> Attribute numbers (1-based) of the primary key fields, - for example <literal>1 2</>. + for example <literal>1 2</literal>. </para> </listitem> </varlistentry> @@ -1929,7 +1929,7 @@ dblink_build_sql_delete(text relname, <listitem> <para> Values of the primary key fields to be used in the resulting - <command>DELETE</> command. Each field is represented in text form. + <command>DELETE</command> command. Each field is represented in text form. 
</para> </listitem> </varlistentry> @@ -1946,10 +1946,10 @@ dblink_build_sql_delete(text relname, <title>Notes</title> <para> - As of <productname>PostgreSQL</> 9.0, the attribute numbers in + As of <productname>PostgreSQL</productname> 9.0, the attribute numbers in <parameter>primary_key_attnums</parameter> are interpreted as logical column numbers, corresponding to the column's position in - <literal>SELECT * FROM relname</>. Previous versions interpreted the + <literal>SELECT * FROM relname</literal>. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table. @@ -2000,15 +2000,15 @@ dblink_build_sql_update(text relname, <title>Description</title> <para> - <function>dblink_build_sql_update</> can be useful in doing selective + <function>dblink_build_sql_update</function> can be useful in doing selective replication of a local table to a remote database. It selects a row from the local table based on primary key, and then builds a SQL - <command>UPDATE</> command that will duplicate that row, but with + <command>UPDATE</command> command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for - the last two arguments.) The <command>UPDATE</> command always assigns + the last two arguments.) The <command>UPDATE</command> command always assigns all fields of the row — the main difference between this and - <function>dblink_build_sql_insert</> is that it's assumed that + <function>dblink_build_sql_insert</function> is that it's assumed that the target row already exists in the remote table. 
</para> </refsect1> @@ -2021,10 +2021,10 @@ dblink_build_sql_update(text relname, <term><parameter>relname</parameter></term> <listitem> <para> - Name of a local relation, for example <literal>foo</> or - <literal>myschema.mytab</>. Include double quotes if the + Name of a local relation, for example <literal>foo</literal> or + <literal>myschema.mytab</literal>. Include double quotes if the name is mixed-case or contains special characters, for - example <literal>"FooBar"</>; without quotes, the string + example <literal>"FooBar"</literal>; without quotes, the string will be folded to lower case. </para> </listitem> @@ -2035,7 +2035,7 @@ dblink_build_sql_update(text relname, <listitem> <para> Attribute numbers (1-based) of the primary key fields, - for example <literal>1 2</>. + for example <literal>1 2</literal>. </para> </listitem> </varlistentry> @@ -2066,7 +2066,7 @@ dblink_build_sql_update(text relname, <listitem> <para> Values of the primary key fields to be placed in the resulting - <command>UPDATE</> command. Each field is represented in text form. + <command>UPDATE</command> command. Each field is represented in text form. </para> </listitem> </varlistentry> @@ -2083,10 +2083,10 @@ dblink_build_sql_update(text relname, <title>Notes</title> <para> - As of <productname>PostgreSQL</> 9.0, the attribute numbers in + As of <productname>PostgreSQL</productname> 9.0, the attribute numbers in <parameter>primary_key_attnums</parameter> are interpreted as logical column numbers, corresponding to the column's position in - <literal>SELECT * FROM relname</>. Previous versions interpreted the + <literal>SELECT * FROM relname</literal>. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table. 
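The three builder functions, together with `dblink_get_pkey` (which reports the attribute numbers they expect), might be exercised like this against a hypothetical local table `foo` whose primary key spans its first two columns:

```sql
-- dblink_get_pkey lists the primary-key columns of a local relation.
SELECT position, colname FROM dblink_get_pkey('foo');

-- Each builder takes the relation name, the primary-key attribute
-- numbers, their count, and the key values in text form; insert and
-- update also take the replacement key values for the new row.
SELECT dblink_build_sql_insert('foo', '1 2', 2,
                               '{"1", "a"}', '{"1", "b"}');
SELECT dblink_build_sql_update('foo', '1 2', 2,
                               '{"1", "a"}', '{"1", "b"}');
SELECT dblink_build_sql_delete('foo', '1 2', 2, '{"1", "a"}');
```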
diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml index b05a9c21500..817db92af2d 100644 --- a/doc/src/sgml/ddl.sgml +++ b/doc/src/sgml/ddl.sgml @@ -149,7 +149,7 @@ DROP TABLE products; Nevertheless, it is common in SQL script files to unconditionally try to drop each table before creating it, ignoring any error messages, so that the script works whether or not the table exists. - (If you like, you can use the <literal>DROP TABLE IF EXISTS</> variant + (If you like, you can use the <literal>DROP TABLE IF EXISTS</literal> variant to avoid the error messages, but this is not standard SQL.) </para> @@ -207,9 +207,9 @@ CREATE TABLE products ( The default value can be an expression, which will be evaluated whenever the default value is inserted (<emphasis>not</emphasis> when the table is created). A common example - is for a <type>timestamp</type> column to have a default of <literal>CURRENT_TIMESTAMP</>, + is for a <type>timestamp</type> column to have a default of <literal>CURRENT_TIMESTAMP</literal>, so that it gets set to the time of row insertion. Another common - example is generating a <quote>serial number</> for each row. + example is generating a <quote>serial number</quote> for each row. In <productname>PostgreSQL</productname> this is typically done by something like: <programlisting> @@ -218,8 +218,8 @@ CREATE TABLE products ( ... ); </programlisting> - where the <literal>nextval()</> function supplies successive values - from a <firstterm>sequence object</> (see <xref + where the <literal>nextval()</literal> function supplies successive values + from a <firstterm>sequence object</firstterm> (see <xref linkend="functions-sequence">). This arrangement is sufficiently common that there's a special shorthand for it: <programlisting> @@ -228,7 +228,7 @@ CREATE TABLE products ( ... 
); </programlisting> - The <literal>SERIAL</> shorthand is discussed further in <xref + The <literal>SERIAL</literal> shorthand is discussed further in <xref linkend="datatype-serial">. </para> </sect1> @@ -385,7 +385,7 @@ CREATE TABLE products ( CHECK (price > 0), discounted_price numeric, CHECK (discounted_price > 0), - <emphasis>CONSTRAINT valid_discount</> CHECK (price > discounted_price) + <emphasis>CONSTRAINT valid_discount</emphasis> CHECK (price > discounted_price) ); </programlisting> </para> @@ -623,7 +623,7 @@ CREATE TABLE example ( <para> Adding a primary key will automatically create a unique B-tree index on the column or group of columns listed in the primary key, and will - force the column(s) to be marked <literal>NOT NULL</>. + force the column(s) to be marked <literal>NOT NULL</literal>. </para> <para> @@ -828,7 +828,7 @@ CREATE TABLE order_items ( (The essential difference between these two choices is that <literal>NO ACTION</literal> allows the check to be deferred until later in the transaction, whereas <literal>RESTRICT</literal> does not.) - <literal>CASCADE</> specifies that when a referenced row is deleted, + <literal>CASCADE</literal> specifies that when a referenced row is deleted, row(s) referencing it should be automatically deleted as well. There are two other options: <literal>SET NULL</literal> and <literal>SET DEFAULT</literal>. @@ -845,19 +845,19 @@ CREATE TABLE order_items ( Analogous to <literal>ON DELETE</literal> there is also <literal>ON UPDATE</literal> which is invoked when a referenced column is changed (updated). The possible actions are the same. - In this case, <literal>CASCADE</> means that the updated values of the + In this case, <literal>CASCADE</literal> means that the updated values of the referenced column(s) should be copied into the referencing row(s). </para> <para> Normally, a referencing row need not satisfy the foreign key constraint - if any of its referencing columns are null. 
If <literal>MATCH FULL</> + if any of its referencing columns are null. If <literal>MATCH FULL</literal> is added to the foreign key declaration, a referencing row escapes satisfying the constraint only if all its referencing columns are null (so a mix of null and non-null values is guaranteed to fail a - <literal>MATCH FULL</> constraint). If you don't want referencing rows + <literal>MATCH FULL</literal> constraint). If you don't want referencing rows to be able to avoid satisfying the foreign key constraint, declare the - referencing column(s) as <literal>NOT NULL</>. + referencing column(s) as <literal>NOT NULL</literal>. </para> <para> @@ -909,7 +909,7 @@ CREATE TABLE circles ( <para> See also <link linkend="SQL-CREATETABLE-EXCLUDE"><command>CREATE - TABLE ... CONSTRAINT ... EXCLUDE</></link> for details. + TABLE ... CONSTRAINT ... EXCLUDE</command></link> for details. </para> <para> @@ -923,7 +923,7 @@ CREATE TABLE circles ( <title>System Columns</title> <para> - Every table has several <firstterm>system columns</> that are + Every table has several <firstterm>system columns</firstterm> that are implicitly defined by the system. Therefore, these names cannot be used as names of user-defined columns. 
(Note that these restrictions are separate from whether the name is a key word or @@ -939,7 +939,7 @@ CREATE TABLE circles ( <variablelist> <varlistentry> - <term><structfield>oid</></term> + <term><structfield>oid</structfield></term> <listitem> <para> <indexterm> @@ -957,7 +957,7 @@ CREATE TABLE circles ( </varlistentry> <varlistentry> - <term><structfield>tableoid</></term> + <term><structfield>tableoid</structfield></term> <listitem> <indexterm> <primary>tableoid</primary> @@ -976,7 +976,7 @@ CREATE TABLE circles ( </varlistentry> <varlistentry> - <term><structfield>xmin</></term> + <term><structfield>xmin</structfield></term> <listitem> <indexterm> <primary>xmin</primary> @@ -992,7 +992,7 @@ CREATE TABLE circles ( </varlistentry> <varlistentry> - <term><structfield>cmin</></term> + <term><structfield>cmin</structfield></term> <listitem> <indexterm> <primary>cmin</primary> @@ -1006,7 +1006,7 @@ CREATE TABLE circles ( </varlistentry> <varlistentry> - <term><structfield>xmax</></term> + <term><structfield>xmax</structfield></term> <listitem> <indexterm> <primary>xmax</primary> @@ -1023,7 +1023,7 @@ CREATE TABLE circles ( </varlistentry> <varlistentry> - <term><structfield>cmax</></term> + <term><structfield>cmax</structfield></term> <listitem> <indexterm> <primary>cmax</primary> @@ -1036,7 +1036,7 @@ CREATE TABLE circles ( </varlistentry> <varlistentry> - <term><structfield>ctid</></term> + <term><structfield>ctid</structfield></term> <listitem> <indexterm> <primary>ctid</primary> @@ -1047,7 +1047,7 @@ CREATE TABLE circles ( although the <structfield>ctid</structfield> can be used to locate the row version very quickly, a row's <structfield>ctid</structfield> will change if it is - updated or moved by <command>VACUUM FULL</>. Therefore + updated or moved by <command>VACUUM FULL</command>. Therefore <structfield>ctid</structfield> is useless as a long-term row identifier. 
The OID, or even better a user-defined serial number, should be used to identify logical rows. @@ -1074,7 +1074,7 @@ CREATE TABLE circles ( a unique constraint (or unique index) exists, the system takes care not to generate an OID matching an already-existing row. (Of course, this is only possible if the table contains fewer - than 2<superscript>32</> (4 billion) rows, and in practice the + than 2<superscript>32</superscript> (4 billion) rows, and in practice the table size had better be much less than that, or performance might suffer.) </para> @@ -1082,7 +1082,7 @@ CREATE TABLE circles ( <listitem> <para> OIDs should never be assumed to be unique across tables; use - the combination of <structfield>tableoid</> and row OID if you + the combination of <structfield>tableoid</structfield> and row OID if you need a database-wide identifier. </para> </listitem> @@ -1090,7 +1090,7 @@ CREATE TABLE circles ( <para> Of course, the tables in question must be created <literal>WITH OIDS</literal>. As of <productname>PostgreSQL</productname> 8.1, - <literal>WITHOUT OIDS</> is the default. + <literal>WITHOUT OIDS</literal> is the default. </para> </listitem> </itemizedlist> @@ -1107,7 +1107,7 @@ CREATE TABLE circles ( <para> Command identifiers are also 32-bit quantities. This creates a hard limit - of 2<superscript>32</> (4 billion) <acronym>SQL</acronym> commands + of 2<superscript>32</superscript> (4 billion) <acronym>SQL</acronym> commands within a single transaction. In practice this limit is not a problem — note that the limit is on the number of <acronym>SQL</acronym> commands, not the number of rows processed. @@ -1186,7 +1186,7 @@ CREATE TABLE circles ( ALTER TABLE products ADD COLUMN description text; </programlisting> The new column is initially filled with whatever default - value is given (null if you don't specify a <literal>DEFAULT</> clause). + value is given (null if you don't specify a <literal>DEFAULT</literal> clause). 
</para> <para> @@ -1196,9 +1196,9 @@ ALTER TABLE products ADD COLUMN description text; ALTER TABLE products ADD COLUMN description text CHECK (description <> ''); </programlisting> In fact all the options that can be applied to a column description - in <command>CREATE TABLE</> can be used here. Keep in mind however + in <command>CREATE TABLE</command> can be used here. Keep in mind however that the default value must satisfy the given constraints, or the - <literal>ADD</> will fail. Alternatively, you can add + <literal>ADD</literal> will fail. Alternatively, you can add constraints later (see below) after you've filled in the new column correctly. </para> @@ -1210,7 +1210,7 @@ ALTER TABLE products ADD COLUMN description text CHECK (description <> '') specified, <productname>PostgreSQL</productname> is able to avoid the physical update. So if you intend to fill the column with mostly nondefault values, it's best to add the column with no default, - insert the correct values using <command>UPDATE</>, and then add any + insert the correct values using <command>UPDATE</command>, and then add any desired default as described below. </para> </tip> @@ -1234,7 +1234,7 @@ ALTER TABLE products DROP COLUMN description; foreign key constraint of another table, <productname>PostgreSQL</productname> will not silently drop that constraint. You can authorize dropping everything that depends on - the column by adding <literal>CASCADE</>: + the column by adding <literal>CASCADE</literal>: <programlisting> ALTER TABLE products DROP COLUMN description CASCADE; </programlisting> @@ -1290,13 +1290,13 @@ ALTER TABLE products ALTER COLUMN product_no SET NOT NULL; <programlisting> ALTER TABLE products DROP CONSTRAINT some_name; </programlisting> - (If you are dealing with a generated constraint name like <literal>$2</>, + (If you are dealing with a generated constraint name like <literal>$2</literal>, don't forget that you'll need to double-quote it to make it a valid identifier.) 
</para> <para> - As with dropping a column, you need to add <literal>CASCADE</> if you + As with dropping a column, you need to add <literal>CASCADE</literal> if you want to drop a constraint that something else depends on. An example is that a foreign key constraint depends on a unique or primary key constraint on the referenced column(s). @@ -1326,7 +1326,7 @@ ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL; ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77; </programlisting> Note that this doesn't affect any existing rows in the table, it - just changes the default for future <command>INSERT</> commands. + just changes the default for future <command>INSERT</command> commands. </para> <para> @@ -1356,12 +1356,12 @@ ALTER TABLE products ALTER COLUMN price TYPE numeric(10,2); </programlisting> This will succeed only if each existing entry in the column can be converted to the new type by an implicit cast. If a more complex - conversion is needed, you can add a <literal>USING</> clause that + conversion is needed, you can add a <literal>USING</literal> clause that specifies how to compute the new values from the old. </para> <para> - <productname>PostgreSQL</> will attempt to convert the column's + <productname>PostgreSQL</productname> will attempt to convert the column's default value (if any) to the new type, as well as any constraints that involve the column. But these conversions might fail, or might produce surprising results. It's often best to drop any constraints @@ -1437,11 +1437,11 @@ ALTER TABLE products RENAME TO items; </para> <para> - There are different kinds of privileges: <literal>SELECT</>, - <literal>INSERT</>, <literal>UPDATE</>, <literal>DELETE</>, - <literal>TRUNCATE</>, <literal>REFERENCES</>, <literal>TRIGGER</>, - <literal>CREATE</>, <literal>CONNECT</>, <literal>TEMPORARY</>, - <literal>EXECUTE</>, and <literal>USAGE</>. 
+ There are different kinds of privileges: <literal>SELECT</literal>, + <literal>INSERT</literal>, <literal>UPDATE</literal>, <literal>DELETE</literal>, + <literal>TRUNCATE</literal>, <literal>REFERENCES</literal>, <literal>TRIGGER</literal>, + <literal>CREATE</literal>, <literal>CONNECT</literal>, <literal>TEMPORARY</literal>, + <literal>EXECUTE</literal>, and <literal>USAGE</literal>. The privileges applicable to a particular object vary depending on the object's type (table, function, etc). For complete information on the different types of privileges @@ -1480,7 +1480,7 @@ GRANT UPDATE ON accounts TO joe; <para> The special <quote>role</quote> name <literal>PUBLIC</literal> can be used to grant a privilege to every role on the system. Also, - <quote>group</> roles can be set up to help manage privileges when + <quote>group</quote> roles can be set up to help manage privileges when there are many users of a database — for details see <xref linkend="user-manag">. </para> @@ -1492,7 +1492,7 @@ GRANT UPDATE ON accounts TO joe; REVOKE ALL ON accounts FROM PUBLIC; </programlisting> The special privileges of the object owner (i.e., the right to do - <command>DROP</>, <command>GRANT</>, <command>REVOKE</>, etc.) + <command>DROP</command>, <command>GRANT</command>, <command>REVOKE</command>, etc.) are always implicit in being the owner, and cannot be granted or revoked. But the object owner can choose to revoke their own ordinary privileges, for example to make a @@ -1502,7 +1502,7 @@ REVOKE ALL ON accounts FROM PUBLIC; <para> Ordinarily, only the object's owner (or a superuser) can grant or revoke privileges on an object. However, it is possible to grant a - privilege <quote>with grant option</>, which gives the recipient + privilege <quote>with grant option</quote>, which gives the recipient the right to grant it in turn to others. 
If the grant option is subsequently revoked then all who received the privilege from that recipient (directly or through a chain of grants) will lose the @@ -1525,10 +1525,10 @@ REVOKE ALL ON accounts FROM PUBLIC; <para> In addition to the SQL-standard <link linkend="ddl-priv">privilege system</link> available through <xref linkend="sql-grant">, - tables can have <firstterm>row security policies</> that restrict, + tables can have <firstterm>row security policies</firstterm> that restrict, on a per-user basis, which rows can be returned by normal queries or inserted, updated, or deleted by data modification commands. - This feature is also known as <firstterm>Row-Level Security</>. + This feature is also known as <firstterm>Row-Level Security</firstterm>. By default, tables do not have any policies, so that if a user has access privileges to a table according to the SQL privilege system, all rows within it are equally available for querying or updating. @@ -1537,20 +1537,20 @@ REVOKE ALL ON accounts FROM PUBLIC; <para> When row security is enabled on a table (with <link linkend="sql-altertable">ALTER TABLE ... ENABLE ROW LEVEL - SECURITY</>), all normal access to the table for selecting rows or + SECURITY</link>), all normal access to the table for selecting rows or modifying rows must be allowed by a row security policy. (However, the table's owner is typically not subject to row security policies.) If no policy exists for the table, a default-deny policy is used, meaning that no rows are visible or can be modified. Operations that apply to the - whole table, such as <command>TRUNCATE</> and <literal>REFERENCES</>, + whole table, such as <command>TRUNCATE</command> and <literal>REFERENCES</literal>, are not subject to row security. </para> <para> Row security policies can be specific to commands, or to roles, or to both. 
A policy can be specified to apply to <literal>ALL</literal> - commands, or to <literal>SELECT</>, <literal>INSERT</>, <literal>UPDATE</>, - or <literal>DELETE</>. Multiple roles can be assigned to a given + commands, or to <literal>SELECT</literal>, <literal>INSERT</literal>, <literal>UPDATE</literal>, + or <literal>DELETE</literal>. Multiple roles can be assigned to a given policy, and normal role membership and inheritance rules apply. </para> @@ -1562,7 +1562,7 @@ REVOKE ALL ON accounts FROM PUBLIC; rule are <literal>leakproof</literal> functions, which are guaranteed to not leak information; the optimizer may choose to apply such functions ahead of the row-security check.) Rows for which the expression does - not return <literal>true</> will not be processed. Separate expressions + not return <literal>true</literal> will not be processed. Separate expressions may be specified to provide independent control over the rows which are visible and the rows which are allowed to be modified. Policy expressions are run as part of the query and with the privileges of the @@ -1571,11 +1571,11 @@ REVOKE ALL ON accounts FROM PUBLIC; </para> <para> - Superusers and roles with the <literal>BYPASSRLS</> attribute always + Superusers and roles with the <literal>BYPASSRLS</literal> attribute always bypass the row security system when accessing a table. Table owners normally bypass row security as well, though a table owner can choose to be subject to row security with <link linkend="sql-altertable">ALTER - TABLE ... FORCE ROW LEVEL SECURITY</>. + TABLE ... FORCE ROW LEVEL SECURITY</link>. 
</para> <para> @@ -1609,8 +1609,8 @@ REVOKE ALL ON accounts FROM PUBLIC; <para> As a simple example, here is how to create a policy on - the <literal>account</> relation to allow only members of - the <literal>managers</> role to access rows, and only rows of their + the <literal>account</literal> relation to allow only members of + the <literal>managers</literal> role to access rows, and only rows of their accounts: </para> @@ -1627,7 +1627,7 @@ CREATE POLICY account_managers ON accounts TO managers If no role is specified, or the special user name <literal>PUBLIC</literal> is used, then the policy applies to all users on the system. To allow all users to access their own row in - a <literal>users</> table, a simple policy can be used: + a <literal>users</literal> table, a simple policy can be used: </para> <programlisting> @@ -1637,9 +1637,9 @@ CREATE POLICY user_policy ON users <para> To use a different policy for rows that are being added to the table - compared to those rows that are visible, the <literal>WITH CHECK</> + compared to those rows that are visible, the <literal>WITH CHECK</literal> clause can be used. This policy would allow all users to view all rows - in the <literal>users</> table, but only modify their own: + in the <literal>users</literal> table, but only modify their own: </para> <programlisting> @@ -1649,7 +1649,7 @@ CREATE POLICY user_policy ON users </programlisting> <para> - Row security can also be disabled with the <command>ALTER TABLE</> + Row security can also be disabled with the <command>ALTER TABLE</command> command. Disabling row security does not remove any policies that are defined on the table; they are simply ignored. Then all rows in the table are visible and modifiable, subject to the standard SQL privileges @@ -1658,7 +1658,7 @@ CREATE POLICY user_policy ON users <para> Below is a larger example of how this feature can be used in production - environments. 
The table <literal>passwd</> emulates a Unix password + environments. The table <literal>passwd</literal> emulates a Unix password file: </para> @@ -1820,7 +1820,7 @@ UPDATE 0 Referential integrity checks, such as unique or primary key constraints and foreign key references, always bypass row security to ensure that data integrity is maintained. Care must be taken when developing - schemas and row level policies to avoid <quote>covert channel</> leaks of + schemas and row level policies to avoid <quote>covert channel</quote> leaks of information through such referential integrity checks. </para> @@ -1830,7 +1830,7 @@ UPDATE 0 disastrous if row security silently caused some rows to be omitted from the backup. In such a situation, you can set the <xref linkend="guc-row-security"> configuration parameter - to <literal>off</>. This does not in itself bypass row security; + to <literal>off</literal>. This does not in itself bypass row security; what it does is throw an error if any query's results would get filtered by a policy. The reason for the error can then be investigated and fixed. @@ -1842,7 +1842,7 @@ UPDATE 0 best-performing case; when possible, it's best to design row security applications to work this way. If it is necessary to consult other rows or other tables to make a policy decision, that can be accomplished using - sub-<command>SELECT</>s, or functions that contain <command>SELECT</>s, + sub-<command>SELECT</command>s, or functions that contain <command>SELECT</command>s, in the policy expressions. Be aware however that such accesses can create race conditions that could allow information leakage if care is not taken. 
As an example, consider the following table design: @@ -1896,8 +1896,8 @@ GRANT ALL ON information TO public; </programlisting> <para> - Now suppose that <literal>alice</> wishes to change the <quote>slightly - secret</> information, but decides that <literal>mallory</> should not + Now suppose that <literal>alice</literal> wishes to change the <quote>slightly + secret</quote> information, but decides that <literal>mallory</literal> should not be trusted with the new content of that row, so she does: </para> @@ -1909,36 +1909,36 @@ COMMIT; </programlisting> <para> - That looks safe; there is no window wherein <literal>mallory</> should be - able to see the <quote>secret from mallory</> string. However, there is - a race condition here. If <literal>mallory</> is concurrently doing, + That looks safe; there is no window wherein <literal>mallory</literal> should be + able to see the <quote>secret from mallory</quote> string. However, there is + a race condition here. If <literal>mallory</literal> is concurrently doing, say, <programlisting> SELECT * FROM information WHERE group_id = 2 FOR UPDATE; </programlisting> - and her transaction is in <literal>READ COMMITTED</> mode, it is possible - for her to see <quote>secret from mallory</>. That happens if her - transaction reaches the <structname>information</> row just - after <literal>alice</>'s does. It blocks waiting - for <literal>alice</>'s transaction to commit, then fetches the updated - row contents thanks to the <literal>FOR UPDATE</> clause. However, it - does <emphasis>not</> fetch an updated row for the - implicit <command>SELECT</> from <structname>users</>, because that - sub-<command>SELECT</> did not have <literal>FOR UPDATE</>; instead - the <structname>users</> row is read with the snapshot taken at the start + and her transaction is in <literal>READ COMMITTED</literal> mode, it is possible + for her to see <quote>secret from mallory</quote>. 
That happens if her + transaction reaches the <structname>information</structname> row just + after <literal>alice</literal>'s does. It blocks waiting + for <literal>alice</literal>'s transaction to commit, then fetches the updated + row contents thanks to the <literal>FOR UPDATE</literal> clause. However, it + does <emphasis>not</emphasis> fetch an updated row for the + implicit <command>SELECT</command> from <structname>users</structname>, because that + sub-<command>SELECT</command> did not have <literal>FOR UPDATE</literal>; instead + the <structname>users</structname> row is read with the snapshot taken at the start of the query. Therefore, the policy expression tests the old value - of <literal>mallory</>'s privilege level and allows her to see the + of <literal>mallory</literal>'s privilege level and allows her to see the updated row. </para> <para> There are several ways around this problem. One simple answer is to use - <literal>SELECT ... FOR SHARE</> in sub-<command>SELECT</>s in row - security policies. However, that requires granting <literal>UPDATE</> - privilege on the referenced table (here <structname>users</>) to the + <literal>SELECT ... FOR SHARE</literal> in sub-<command>SELECT</command>s in row + security policies. However, that requires granting <literal>UPDATE</literal> + privilege on the referenced table (here <structname>users</structname>) to the affected users, which might be undesirable. (But another row security policy could be applied to prevent them from actually exercising that - privilege; or the sub-<command>SELECT</> could be embedded into a security + privilege; or the sub-<command>SELECT</command> could be embedded into a security definer function.) Also, heavy concurrent use of row share locks on the referenced table could pose a performance problem, especially if updates of it are frequent. 
Another solution, practical if updates of the @@ -1977,19 +1977,19 @@ SELECT * FROM information WHERE group_id = 2 FOR UPDATE; <para> Users of a cluster do not necessarily have the privilege to access every database in the cluster. Sharing of user names means that there - cannot be different users named, say, <literal>joe</> in two databases + cannot be different users named, say, <literal>joe</literal> in two databases in the same cluster; but the system can be configured to allow - <literal>joe</> access to only some of the databases. + <literal>joe</literal> access to only some of the databases. </para> </note> <para> - A database contains one or more named <firstterm>schemas</>, which + A database contains one or more named <firstterm>schemas</firstterm>, which in turn contain tables. Schemas also contain other kinds of named objects, including data types, functions, and operators. The same object name can be used in different schemas without conflict; for - example, both <literal>schema1</> and <literal>myschema</> can - contain tables named <literal>mytable</>. Unlike databases, + example, both <literal>schema1</literal> and <literal>myschema</literal> can + contain tables named <literal>mytable</literal>. Unlike databases, schemas are not rigidly separated: a user can access objects in any of the schemas in the database they are connected to, if they have privileges to do so. 
@@ -2053,10 +2053,10 @@ CREATE SCHEMA myschema; <para> To create or access objects in a schema, write a - <firstterm>qualified name</> consisting of the schema name and + <firstterm>qualified name</firstterm> consisting of the schema name and table name separated by a dot: <synopsis> -<replaceable>schema</><literal>.</><replaceable>table</> +<replaceable>schema</replaceable><literal>.</literal><replaceable>table</replaceable> </synopsis> This works anywhere a table name is expected, including the table modification commands and the data access commands discussed in @@ -2068,10 +2068,10 @@ CREATE SCHEMA myschema; <para> Actually, the even more general syntax <synopsis> -<replaceable>database</><literal>.</><replaceable>schema</><literal>.</><replaceable>table</> +<replaceable>database</replaceable><literal>.</literal><replaceable>schema</replaceable><literal>.</literal><replaceable>table</replaceable> </synopsis> can be used too, but at present this is just for <foreignphrase>pro - forma</> compliance with the SQL standard. If you write a database name, + forma</foreignphrase> compliance with the SQL standard. If you write a database name, it must be the same as the database you are connected to. </para> @@ -2116,7 +2116,7 @@ CREATE SCHEMA <replaceable>schema_name</replaceable> AUTHORIZATION <replaceable> </para> <para> - Schema names beginning with <literal>pg_</> are reserved for + Schema names beginning with <literal>pg_</literal> are reserved for system purposes and cannot be created by users. </para> </sect2> @@ -2163,9 +2163,9 @@ CREATE TABLE public.products ( ... ); <para> Qualified names are tedious to write, and it's often best not to wire a particular schema name into applications anyway. Therefore - tables are often referred to by <firstterm>unqualified names</>, + tables are often referred to by <firstterm>unqualified names</firstterm>, which consist of just the table name. 
The system determines which table - is meant by following a <firstterm>search path</>, which is a list + is meant by following a <firstterm>search path</firstterm>, which is a list of schemas to look in. The first matching table in the search path is taken to be the one wanted. If there is no match in the search path, an error is reported, even if matching table names exist @@ -2180,7 +2180,7 @@ CREATE TABLE public.products ( ... ); <para> The first schema named in the search path is called the current schema. Aside from being the first schema searched, it is also the schema in - which new tables will be created if the <command>CREATE TABLE</> + which new tables will be created if the <command>CREATE TABLE</command> command does not specify a schema name. </para> @@ -2253,7 +2253,7 @@ SET search_path TO myschema; need to write a qualified operator name in an expression, there is a special provision: you must write <synopsis> -<literal>OPERATOR(</><replaceable>schema</><literal>.</><replaceable>operator</><literal>)</> +<literal>OPERATOR(</literal><replaceable>schema</replaceable><literal>.</literal><replaceable>operator</replaceable><literal>)</literal> </synopsis> This is needed to avoid syntactic ambiguity. An example is: <programlisting> @@ -2310,28 +2310,28 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; </indexterm> <para> - In addition to <literal>public</> and user-created schemas, each - database contains a <literal>pg_catalog</> schema, which contains + In addition to <literal>public</literal> and user-created schemas, each + database contains a <literal>pg_catalog</literal> schema, which contains the system tables and all the built-in data types, functions, and - operators. <literal>pg_catalog</> is always effectively part of + operators. <literal>pg_catalog</literal> is always effectively part of the search path. 
If it is not named explicitly in the path then - it is implicitly searched <emphasis>before</> searching the path's + it is implicitly searched <emphasis>before</emphasis> searching the path's schemas. This ensures that built-in names will always be findable. However, you can explicitly place - <literal>pg_catalog</> at the end of your search path if you + <literal>pg_catalog</literal> at the end of your search path if you prefer to have user-defined names override built-in names. </para> <para> - Since system table names begin with <literal>pg_</>, it is best to + Since system table names begin with <literal>pg_</literal>, it is best to avoid such names to ensure that you won't suffer a conflict if some future version defines a system table named the same as your table. (With the default search path, an unqualified reference to your table name would then be resolved as the system table instead.) System tables will continue to follow the convention of having - names beginning with <literal>pg_</>, so that they will not + names beginning with <literal>pg_</literal>, so that they will not conflict with unqualified user-table names so long as users avoid - the <literal>pg_</> prefix. + the <literal>pg_</literal> prefix. </para> </sect2> @@ -2397,15 +2397,15 @@ REVOKE CREATE ON SCHEMA public FROM PUBLIC; implements only the basic schema support specified in the standard. Therefore, many users consider qualified names to really consist of - <literal><replaceable>user_name</>.<replaceable>table_name</></literal>. + <literal><replaceable>user_name</replaceable>.<replaceable>table_name</replaceable></literal>. This is how <productname>PostgreSQL</productname> will effectively behave if you create a per-user schema for every user. </para> <para> - Also, there is no concept of a <literal>public</> schema in the + Also, there is no concept of a <literal>public</literal> schema in the SQL standard. 
For maximum conformance to the standard, you should - not use (perhaps even remove) the <literal>public</> schema. + not use (perhaps even remove) the <literal>public</literal> schema. </para> <para> @@ -2461,9 +2461,9 @@ CREATE TABLE capitals ( ) INHERITS (cities); </programlisting> - In this case, the <structname>capitals</> table <firstterm>inherits</> - all the columns of its parent table, <structname>cities</>. State - capitals also have an extra column, <structfield>state</>, that shows + In this case, the <structname>capitals</structname> table <firstterm>inherits</firstterm> + all the columns of its parent table, <structname>cities</structname>. State + capitals also have an extra column, <structfield>state</structfield>, that shows their state. </para> @@ -2521,7 +2521,7 @@ SELECT name, altitude </para> <para> - You can also write the table name with a trailing <literal>*</> + You can also write the table name with a trailing <literal>*</literal> to explicitly specify that descendant tables are included: <programlisting> @@ -2530,7 +2530,7 @@ SELECT name, altitude WHERE altitude > 500; </programlisting> - Writing <literal>*</> is not necessary, since this behavior is always + Writing <literal>*</literal> is not necessary, since this behavior is always the default. However, this syntax is still supported for compatibility with older releases where the default could be changed. </para> @@ -2559,7 +2559,7 @@ WHERE c.altitude > 500; (If you try to reproduce this example, you will probably get different numeric OIDs.) 
By doing a join with - <structname>pg_class</> you can see the actual table names: + <structname>pg_class</structname> you can see the actual table names: <programlisting> SELECT p.relname, c.name, c.altitude @@ -2579,7 +2579,7 @@ WHERE c.altitude > 500 AND c.tableoid = p.oid; </para> <para> - Another way to get the same effect is to use the <type>regclass</> + Another way to get the same effect is to use the <type>regclass</type> alias type, which will print the table OID symbolically: <programlisting> @@ -2603,15 +2603,15 @@ VALUES ('Albany', NULL, NULL, 'NY'); <command>INSERT</command> always inserts into exactly the table specified. In some cases it is possible to redirect the insertion using a rule (see <xref linkend="rules">). However that does not - help for the above case because the <structname>cities</> table - does not contain the column <structfield>state</>, and so the + help for the above case because the <structname>cities</structname> table + does not contain the column <structfield>state</structfield>, and so the command will be rejected before the rule can be applied. </para> <para> All check constraints and not-null constraints on a parent table are automatically inherited by its children, unless explicitly specified - otherwise with <literal>NO INHERIT</> clauses. Other types of constraints + otherwise with <literal>NO INHERIT</literal> clauses. Other types of constraints (unique, primary key, and foreign key constraints) are not inherited. </para> @@ -2620,7 +2620,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); the union of the columns defined by the parent tables. Any columns declared in the child table's definition are added to these. If the same column name appears in multiple parent tables, or in both a parent - table and the child's definition, then these columns are <quote>merged</> + table and the child's definition, then these columns are <quote>merged</quote> so that there is only one such column in the child table. 
To be merged, columns must have the same data types, else an error is raised. Inheritable check constraints and not-null constraints are merged in a @@ -2632,7 +2632,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); <para> Table inheritance is typically established when the child table is - created, using the <literal>INHERITS</> clause of the + created, using the <literal>INHERITS</literal> clause of the <xref linkend="sql-createtable"> statement. Alternatively, a table which is already defined in a compatible way can @@ -2642,7 +2642,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); the same names and types as the columns of the parent. It must also include check constraints with the same names and check expressions as those of the parent. Similarly an inheritance link can be removed from a child using the - <literal>NO INHERIT</literal> variant of <command>ALTER TABLE</>. + <literal>NO INHERIT</literal> variant of <command>ALTER TABLE</command>. Dynamically adding and removing inheritance links like this can be useful when the inheritance relationship is being used for table partitioning (see <xref linkend="ddl-partitioning">). @@ -2680,10 +2680,10 @@ VALUES ('Albany', NULL, NULL, 'NY'); <para> Inherited queries perform access permission checks on the parent table - only. Thus, for example, granting <literal>UPDATE</> permission on - the <structname>cities</> table implies permission to update rows in + only. Thus, for example, granting <literal>UPDATE</literal> permission on + the <structname>cities</structname> table implies permission to update rows in the <structname>capitals</structname> table as well, when they are - accessed through <structname>cities</>. This preserves the appearance + accessed through <structname>cities</structname>. This preserves the appearance that the data is (also) in the parent table. But the <structname>capitals</structname> table could not be updated directly without an additional grant. 
In a similar way, the parent table's row @@ -2732,33 +2732,33 @@ VALUES ('Albany', NULL, NULL, 'NY'); <itemizedlist> <listitem> <para> - If we declared <structname>cities</>.<structfield>name</> to be - <literal>UNIQUE</> or a <literal>PRIMARY KEY</>, this would not stop the - <structname>capitals</> table from having rows with names duplicating - rows in <structname>cities</>. And those duplicate rows would by - default show up in queries from <structname>cities</>. In fact, by - default <structname>capitals</> would have no unique constraint at all, + If we declared <structname>cities</structname>.<structfield>name</structfield> to be + <literal>UNIQUE</literal> or a <literal>PRIMARY KEY</literal>, this would not stop the + <structname>capitals</structname> table from having rows with names duplicating + rows in <structname>cities</structname>. And those duplicate rows would by + default show up in queries from <structname>cities</structname>. In fact, by + default <structname>capitals</structname> would have no unique constraint at all, and so could contain multiple rows with the same name. - You could add a unique constraint to <structname>capitals</>, but this - would not prevent duplication compared to <structname>cities</>. + You could add a unique constraint to <structname>capitals</structname>, but this + would not prevent duplication compared to <structname>cities</structname>. </para> </listitem> <listitem> <para> Similarly, if we were to specify that - <structname>cities</>.<structfield>name</> <literal>REFERENCES</> some + <structname>cities</structname>.<structfield>name</structfield> <literal>REFERENCES</literal> some other table, this constraint would not automatically propagate to - <structname>capitals</>. In this case you could work around it by - manually adding the same <literal>REFERENCES</> constraint to - <structname>capitals</>. + <structname>capitals</structname>. 
In this case you could work around it by + manually adding the same <literal>REFERENCES</literal> constraint to + <structname>capitals</structname>. </para> </listitem> <listitem> <para> Specifying that another table's column <literal>REFERENCES - cities(name)</> would allow the other table to contain city names, but + cities(name)</literal> would allow the other table to contain city names, but not capital names. There is no good workaround for this case. </para> </listitem> @@ -2825,10 +2825,10 @@ VALUES ('Albany', NULL, NULL, 'NY'); <para> Bulk loads and deletes can be accomplished by adding or removing partitions, if that requirement is planned into the partitioning design. - Doing <command>ALTER TABLE DETACH PARTITION</> or dropping an individual - partition using <command>DROP TABLE</> is far faster than a bulk + Doing <command>ALTER TABLE DETACH PARTITION</command> or dropping an individual + partition using <command>DROP TABLE</command> is far faster than a bulk operation. These commands also entirely avoid the - <command>VACUUM</command> overhead caused by a bulk <command>DELETE</>. + <command>VACUUM</command> overhead caused by a bulk <command>DELETE</command>. </para> </listitem> @@ -2921,7 +2921,7 @@ VALUES ('Albany', NULL, NULL, 'NY'); containing data as a partition of a partitioned table, or remove a partition from a partitioned table turning it into a standalone table; see <xref linkend="sql-altertable"> to learn more about the - <command>ATTACH PARTITION</> and <command>DETACH PARTITION</> + <command>ATTACH PARTITION</command> and <command>DETACH PARTITION</command> sub-commands. </para> @@ -2968,9 +2968,9 @@ VALUES ('Albany', NULL, NULL, 'NY'); <para> Partitions cannot have columns that are not present in the parent. It is neither possible to specify columns when creating partitions with - <command>CREATE TABLE</> nor is it possible to add columns to - partitions after-the-fact using <command>ALTER TABLE</>. 
Tables may be - added as a partition with <command>ALTER TABLE ... ATTACH PARTITION</> + <command>CREATE TABLE</command> nor is it possible to add columns to + partitions after-the-fact using <command>ALTER TABLE</command>. Tables may be + added as a partition with <command>ALTER TABLE ... ATTACH PARTITION</command> only if their columns exactly match the parent, including any <literal>oid</literal> column. </para> @@ -3049,7 +3049,7 @@ CREATE TABLE measurement ( accessing the partitioned table will have to scan fewer partitions if the conditions involve some or all of these columns. For example, consider a table range partitioned using columns - <structfield>lastname</> and <structfield>firstname</> (in that order) + <structfield>lastname</structfield> and <structfield>firstname</structfield> (in that order) as the partition key. </para> </listitem> @@ -3067,7 +3067,7 @@ CREATE TABLE measurement ( <para> Partitions thus created are in every way normal - <productname>PostgreSQL</> + <productname>PostgreSQL</productname> tables (or, possibly, foreign tables). It is possible to specify a tablespace and storage parameters for each partition separately. 
</para> @@ -3111,12 +3111,12 @@ CREATE TABLE measurement_y2006m02 PARTITION OF measurement PARTITION BY RANGE (peaktemp); </programlisting> - After creating partitions of <structname>measurement_y2006m02</>, - any data inserted into <structname>measurement</> that is mapped to - <structname>measurement_y2006m02</> (or data that is directly inserted - into <structname>measurement_y2006m02</>, provided it satisfies its + After creating partitions of <structname>measurement_y2006m02</structname>, + any data inserted into <structname>measurement</structname> that is mapped to + <structname>measurement_y2006m02</structname> (or data that is directly inserted + into <structname>measurement_y2006m02</structname>, provided it satisfies its partition constraint) will be further redirected to one of its - partitions based on the <structfield>peaktemp</> column. The partition + partitions based on the <structfield>peaktemp</structfield> column. The partition key specified may overlap with the parent's partition key, although care should be taken when specifying the bounds of a sub-partition such that the set of data it accepts constitutes a subset of what @@ -3147,7 +3147,7 @@ CREATE INDEX ON measurement_y2008m01 (logdate); <listitem> <para> Ensure that the <xref linkend="guc-constraint-exclusion"> - configuration parameter is not disabled in <filename>postgresql.conf</>. + configuration parameter is not disabled in <filename>postgresql.conf</filename>. If it is, queries will not be optimized as desired. </para> </listitem> @@ -3197,7 +3197,7 @@ ALTER TABLE measurement DETACH PARTITION measurement_y2006m02; This allows further operations to be performed on the data before it is dropped. For example, this is often a useful time to back up - the data using <command>COPY</>, <application>pg_dump</>, or + the data using <command>COPY</command>, <application>pg_dump</application>, or similar tools. 
It might also be a useful time to aggregate data into smaller formats, perform other data manipulations, or run reports. @@ -3236,14 +3236,14 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 </para> <para> - Before running the <command>ATTACH PARTITION</> command, it is - recommended to create a <literal>CHECK</> constraint on the table to + Before running the <command>ATTACH PARTITION</command> command, it is + recommended to create a <literal>CHECK</literal> constraint on the table to be attached describing the desired partition constraint. That way, the system will be able to skip the scan to validate the implicit partition constraint. Without such a constraint, the table will be scanned to validate the partition constraint while holding an <literal>ACCESS EXCLUSIVE</literal> lock on the parent table. - One may then drop the constraint after <command>ATTACH PARTITION</> + One may then drop the constraint after <command>ATTACH PARTITION</command> is finished, because it is no longer necessary. </para> </sect3> @@ -3285,7 +3285,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 <listitem> <para> - An <command>UPDATE</> that causes a row to move from one partition to + An <command>UPDATE</command> that causes a row to move from one partition to another fails, because the new value of the row fails to satisfy the implicit partition constraint of the original partition. </para> @@ -3376,7 +3376,7 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02 the master table. Normally, these tables will not add any columns to the set inherited from the master. Just as with declarative partitioning, these partitions are in every way normal - <productname>PostgreSQL</> tables (or foreign tables). + <productname>PostgreSQL</productname> tables (or foreign tables). 
</para> <para> @@ -3460,7 +3460,7 @@ CREATE INDEX measurement_y2008m01_logdate ON measurement_y2008m01 (logdate); <listitem> <para> We want our application to be able to say <literal>INSERT INTO - measurement ...</> and have the data be redirected into the + measurement ...</literal> and have the data be redirected into the appropriate partition table. We can arrange that by attaching a suitable trigger function to the master table. If data will be added only to the latest partition, we can @@ -3567,9 +3567,9 @@ DO INSTEAD </para> <para> - Be aware that <command>COPY</> ignores rules. If you want to - use <command>COPY</> to insert data, you'll need to copy into the - correct partition table rather than into the master. <command>COPY</> + Be aware that <command>COPY</command> ignores rules. If you want to + use <command>COPY</command> to insert data, you'll need to copy into the + correct partition table rather than into the master. <command>COPY</command> does fire triggers, so you can use it normally if you use the trigger approach. </para> @@ -3585,7 +3585,7 @@ DO INSTEAD <para> Ensure that the <xref linkend="guc-constraint-exclusion"> configuration parameter is not disabled in - <filename>postgresql.conf</>. + <filename>postgresql.conf</filename>. If it is, queries will not be optimized as desired. </para> </listitem> @@ -3666,8 +3666,8 @@ ALTER TABLE measurement_y2008m02 INHERIT measurement; <para> The schemes shown here assume that the partition key column(s) of a row never change, or at least do not change enough to require - it to move to another partition. An <command>UPDATE</> that attempts - to do that will fail because of the <literal>CHECK</> constraints. + it to move to another partition. An <command>UPDATE</command> that attempts + to do that will fail because of the <literal>CHECK</literal> constraints. 
If you need to handle such cases, you can put suitable update triggers on the partition tables, but it makes management of the structure much more complicated. @@ -3688,8 +3688,8 @@ ANALYZE measurement; <listitem> <para> - <command>INSERT</command> statements with <literal>ON CONFLICT</> - clauses are unlikely to work as expected, as the <literal>ON CONFLICT</> + <command>INSERT</command> statements with <literal>ON CONFLICT</literal> + clauses are unlikely to work as expected, as the <literal>ON CONFLICT</literal> action is only taken in case of unique violations on the specified target relation, not its child relations. </para> @@ -3717,7 +3717,7 @@ ANALYZE measurement; </indexterm> <para> - <firstterm>Constraint exclusion</> is a query optimization technique + <firstterm>Constraint exclusion</firstterm> is a query optimization technique that improves performance for partitioned tables defined in the fashion described above (both declaratively partitioned tables and those implemented using inheritance). As an example: @@ -3728,17 +3728,17 @@ SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; </programlisting> Without constraint exclusion, the above query would scan each of - the partitions of the <structname>measurement</> table. With constraint + the partitions of the <structname>measurement</structname> table. With constraint exclusion enabled, the planner will examine the constraints of each partition and try to prove that the partition need not be scanned because it could not contain any rows meeting the query's - <literal>WHERE</> clause. When the planner can prove this, it + <literal>WHERE</literal> clause. When the planner can prove this, it excludes the partition from the query plan. 
</para> <para> - You can use the <command>EXPLAIN</> command to show the difference - between a plan with <varname>constraint_exclusion</> on and a plan + You can use the <command>EXPLAIN</command> command to show the difference + between a plan with <varname>constraint_exclusion</varname> on and a plan with it off. A typical unoptimized plan for this type of table setup is: <programlisting> @@ -3783,7 +3783,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; </para> <para> - Note that constraint exclusion is driven only by <literal>CHECK</> + Note that constraint exclusion is driven only by <literal>CHECK</literal> constraints, not by the presence of indexes. Therefore it isn't necessary to define indexes on the key columns. Whether an index needs to be created for a given partition depends on whether you @@ -3795,11 +3795,11 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; <para> The default (and recommended) setting of <xref linkend="guc-constraint-exclusion"> is actually neither - <literal>on</> nor <literal>off</>, but an intermediate setting - called <literal>partition</>, which causes the technique to be + <literal>on</literal> nor <literal>off</literal>, but an intermediate setting + called <literal>partition</literal>, which causes the technique to be applied only to queries that are likely to be working on partitioned - tables. The <literal>on</> setting causes the planner to examine - <literal>CHECK</> constraints in all queries, even simple ones that + tables. The <literal>on</literal> setting causes the planner to examine + <literal>CHECK</literal> constraints in all queries, even simple ones that are unlikely to benefit. 
</para> @@ -3810,7 +3810,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; <itemizedlist> <listitem> <para> - Constraint exclusion only works when the query's <literal>WHERE</> + Constraint exclusion only works when the query's <literal>WHERE</literal> clause contains constants (or externally supplied parameters). For example, a comparison against a non-immutable function such as <function>CURRENT_TIMESTAMP</function> cannot be optimized, since the @@ -3867,7 +3867,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; <productname>PostgreSQL</productname> implements portions of the SQL/MED specification, allowing you to access data that resides outside PostgreSQL using regular SQL queries. Such data is referred to as - <firstterm>foreign data</>. (Note that this usage is not to be confused + <firstterm>foreign data</firstterm>. (Note that this usage is not to be confused with foreign keys, which are a type of constraint within the database.) </para> @@ -3876,7 +3876,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; <firstterm>foreign data wrapper</firstterm>. A foreign data wrapper is a library that can communicate with an external data source, hiding the details of connecting to the data source and obtaining data from it. - There are some foreign data wrappers available as <filename>contrib</> + There are some foreign data wrappers available as <filename>contrib</filename> modules; see <xref linkend="contrib">. Other kinds of foreign data wrappers might be found as third party products. 
If none of the existing foreign data wrappers suit your needs, you can write your own; see <xref @@ -3884,7 +3884,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; </para> <para> - To access foreign data, you need to create a <firstterm>foreign server</> + To access foreign data, you need to create a <firstterm>foreign server</firstterm> object, which defines how to connect to a particular external data source according to the set of options used by its supporting foreign data wrapper. Then you need to create one or more <firstterm>foreign @@ -3899,7 +3899,7 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01'; <para> Accessing remote data may require authenticating to the external data source. This information can be provided by a - <firstterm>user mapping</>, which can provide additional data + <firstterm>user mapping</firstterm>, which can provide additional data such as user names and passwords based on the current <productname>PostgreSQL</productname> role. </para> @@ -4002,13 +4002,13 @@ DROP TABLE products CASCADE; that depend on them, recursively. In this case, it doesn't remove the orders table, it only removes the foreign key constraint. It stops there because nothing depends on the foreign key constraint. - (If you want to check what <command>DROP ... CASCADE</> will do, - run <command>DROP</> without <literal>CASCADE</> and read the - <literal>DETAIL</> output.) + (If you want to check what <command>DROP ... CASCADE</command> will do, + run <command>DROP</command> without <literal>CASCADE</literal> and read the + <literal>DETAIL</literal> output.) </para> <para> - Almost all <command>DROP</> commands in <productname>PostgreSQL</> support + Almost all <command>DROP</command> commands in <productname>PostgreSQL</productname> support specifying <literal>CASCADE</literal>. Of course, the nature of the possible dependencies varies with the type of the object. 
You can also write <literal>RESTRICT</literal> instead of @@ -4020,7 +4020,7 @@ DROP TABLE products CASCADE; <para> According to the SQL standard, specifying either <literal>RESTRICT</literal> or <literal>CASCADE</literal> is - required in a <command>DROP</> command. No database system actually + required in a <command>DROP</command> command. No database system actually enforces that rule, but whether the default behavior is <literal>RESTRICT</literal> or <literal>CASCADE</literal> varies across systems. @@ -4028,18 +4028,18 @@ DROP TABLE products CASCADE; </note> <para> - If a <command>DROP</> command lists multiple + If a <command>DROP</command> command lists multiple objects, <literal>CASCADE</literal> is only required when there are dependencies outside the specified group. For example, when saying <literal>DROP TABLE tab1, tab2</literal> the existence of a foreign - key referencing <literal>tab1</> from <literal>tab2</> would not mean + key referencing <literal>tab1</literal> from <literal>tab2</literal> would not mean that <literal>CASCADE</literal> is needed to succeed. </para> <para> For user-defined functions, <productname>PostgreSQL</productname> tracks dependencies associated with a function's externally-visible properties, - such as its argument and result types, but <emphasis>not</> dependencies + such as its argument and result types, but <emphasis>not</emphasis> dependencies that could only be known by examining the function body. As an example, consider this situation: @@ -4056,11 +4056,11 @@ CREATE FUNCTION get_color_note (rainbow) RETURNS text AS (See <xref linkend="xfunc-sql"> for an explanation of SQL-language functions.) 
<productname>PostgreSQL</productname> will be aware that - the <function>get_color_note</> function depends on the <type>rainbow</> + the <function>get_color_note</function> function depends on the <type>rainbow</type> type: dropping the type would force dropping the function, because its - argument type would no longer be defined. But <productname>PostgreSQL</> - will not consider <function>get_color_note</> to depend on - the <structname>my_colors</> table, and so will not drop the function if + argument type would no longer be defined. But <productname>PostgreSQL</productname> + will not consider <function>get_color_note</function> to depend on + the <structname>my_colors</structname> table, and so will not drop the function if the table is dropped. While there are disadvantages to this approach, there are also benefits. The function is still valid in some sense if the table is missing, though executing it would cause an error; creating a new diff --git a/doc/src/sgml/dfunc.sgml b/doc/src/sgml/dfunc.sgml index 23af270e32c..7ef996b51f7 100644 --- a/doc/src/sgml/dfunc.sgml +++ b/doc/src/sgml/dfunc.sgml @@ -9,7 +9,7 @@ C, they must be compiled and linked in a special way to produce a file that can be dynamically loaded by the server. To be precise, a <firstterm>shared library</firstterm> needs to be - created.<indexterm><primary>shared library</></indexterm> + created.<indexterm><primary>shared library</primary></indexterm> </para> @@ -30,7 +30,7 @@ executables: first the source files are compiled into object files, then the object files are linked together. The object files need to be created as <firstterm>position-independent code</firstterm> - (<acronym>PIC</acronym>),<indexterm><primary>PIC</></> which + (<acronym>PIC</acronym>),<indexterm><primary>PIC</primary></indexterm> which conceptually means that they can be placed at an arbitrary location in memory when they are loaded by the executable. 
(Object files intended for executables are usually not compiled that way.) The @@ -57,8 +57,8 @@ <variablelist> <varlistentry> <term> - <systemitem class="osname">FreeBSD</> - <indexterm><primary>FreeBSD</><secondary>shared library</></> + <systemitem class="osname">FreeBSD</systemitem> + <indexterm><primary>FreeBSD</primary><secondary>shared library</secondary></indexterm> </term> <listitem> <para> @@ -70,15 +70,15 @@ gcc -fPIC -c foo.c gcc -shared -o foo.so foo.o </programlisting> This is applicable as of version 3.0 of - <systemitem class="osname">FreeBSD</>. + <systemitem class="osname">FreeBSD</systemitem>. </para> </listitem> </varlistentry> <varlistentry> <term> - <systemitem class="osname">HP-UX</> - <indexterm><primary>HP-UX</><secondary>shared library</></> + <systemitem class="osname">HP-UX</systemitem> + <indexterm><primary>HP-UX</primary><secondary>shared library</secondary></indexterm> </term> <listitem> <para> @@ -97,7 +97,7 @@ gcc -fPIC -c foo.c <programlisting> ld -b -o foo.sl foo.o </programlisting> - <systemitem class="osname">HP-UX</> uses the extension + <systemitem class="osname">HP-UX</systemitem> uses the extension <filename>.sl</filename> for shared libraries, unlike most other systems. 
</para> @@ -106,8 +106,8 @@ ld -b -o foo.sl foo.o <varlistentry> <term> - <systemitem class="osname">Linux</> - <indexterm><primary>Linux</><secondary>shared library</></> + <systemitem class="osname">Linux</systemitem> + <indexterm><primary>Linux</primary><secondary>shared library</secondary></indexterm> </term> <listitem> <para> @@ -125,8 +125,8 @@ cc -shared -o foo.so foo.o <varlistentry> <term> - <systemitem class="osname">macOS</> - <indexterm><primary>macOS</><secondary>shared library</></> + <systemitem class="osname">macOS</systemitem> + <indexterm><primary>macOS</primary><secondary>shared library</secondary></indexterm> </term> <listitem> <para> @@ -141,8 +141,8 @@ cc -bundle -flat_namespace -undefined suppress -o foo.so foo.o <varlistentry> <term> - <systemitem class="osname">NetBSD</> - <indexterm><primary>NetBSD</><secondary>shared library</></> + <systemitem class="osname">NetBSD</systemitem> + <indexterm><primary>NetBSD</primary><secondary>shared library</secondary></indexterm> </term> <listitem> <para> @@ -161,8 +161,8 @@ gcc -shared -o foo.so foo.o <varlistentry> <term> - <systemitem class="osname">OpenBSD</> - <indexterm><primary>OpenBSD</><secondary>shared library</></> + <systemitem class="osname">OpenBSD</systemitem> + <indexterm><primary>OpenBSD</primary><secondary>shared library</secondary></indexterm> </term> <listitem> <para> @@ -179,17 +179,17 @@ ld -Bshareable -o foo.so foo.o <varlistentry> <term> - <systemitem class="osname">Solaris</> - <indexterm><primary>Solaris</><secondary>shared library</></> + <systemitem class="osname">Solaris</systemitem> + <indexterm><primary>Solaris</primary><secondary>shared library</secondary></indexterm> </term> <listitem> <para> The compiler flag to create <acronym>PIC</acronym> is <option>-KPIC</option> with the Sun compiler and - <option>-fPIC</option> with <application>GCC</>. To + <option>-fPIC</option> with <application>GCC</application>. 
To link shared libraries, the compiler option is <option>-G</option> with either compiler or alternatively - <option>-shared</option> with <application>GCC</>. + <option>-shared</option> with <application>GCC</application>. <programlisting> cc -KPIC -c foo.c cc -G -o foo.so foo.o diff --git a/doc/src/sgml/dict-int.sgml b/doc/src/sgml/dict-int.sgml index d49f3e2a3a3..04cf14a73d9 100644 --- a/doc/src/sgml/dict-int.sgml +++ b/doc/src/sgml/dict-int.sgml @@ -8,7 +8,7 @@ </indexterm> <para> - <filename>dict_int</> is an example of an add-on dictionary template + <filename>dict_int</filename> is an example of an add-on dictionary template for full-text search. The motivation for this example dictionary is to control the indexing of integers (signed and unsigned), allowing such numbers to be indexed while preventing excessive growth in the number of @@ -25,17 +25,17 @@ <itemizedlist> <listitem> <para> - The <literal>maxlen</> parameter specifies the maximum number of + The <literal>maxlen</literal> parameter specifies the maximum number of digits allowed in an integer word. The default value is 6. </para> </listitem> <listitem> <para> - The <literal>rejectlong</> parameter specifies whether an overlength - integer should be truncated or ignored. If <literal>rejectlong</> is - <literal>false</> (the default), the dictionary returns the first - <literal>maxlen</> digits of the integer. If <literal>rejectlong</> is - <literal>true</>, the dictionary treats an overlength integer as a stop + The <literal>rejectlong</literal> parameter specifies whether an overlength + integer should be truncated or ignored. If <literal>rejectlong</literal> is + <literal>false</literal> (the default), the dictionary returns the first + <literal>maxlen</literal> digits of the integer. If <literal>rejectlong</literal> is + <literal>true</literal>, the dictionary treats an overlength integer as a stop word, so that it will not be indexed. 
Note that this also means that such an integer cannot be searched for. </para> @@ -47,8 +47,8 @@ <title>Usage</title> <para> - Installing the <literal>dict_int</> extension creates a text search - template <literal>intdict_template</> and a dictionary <literal>intdict</> + Installing the <literal>dict_int</literal> extension creates a text search + template <literal>intdict_template</literal> and a dictionary <literal>intdict</literal> based on it, with the default parameters. You can alter the parameters, for example diff --git a/doc/src/sgml/dict-xsyn.sgml b/doc/src/sgml/dict-xsyn.sgml index 42362ffbc8d..bf4965c36fd 100644 --- a/doc/src/sgml/dict-xsyn.sgml +++ b/doc/src/sgml/dict-xsyn.sgml @@ -8,7 +8,7 @@ </indexterm> <para> - <filename>dict_xsyn</> (Extended Synonym Dictionary) is an example of an + <filename>dict_xsyn</filename> (Extended Synonym Dictionary) is an example of an add-on dictionary template for full-text search. This dictionary type replaces words with groups of their synonyms, and so makes it possible to search for a word using any of its synonyms. @@ -18,41 +18,41 @@ <title>Configuration</title> <para> - A <literal>dict_xsyn</> dictionary accepts the following options: + A <literal>dict_xsyn</literal> dictionary accepts the following options: </para> <itemizedlist> <listitem> <para> - <literal>matchorig</> controls whether the original word is accepted by - the dictionary. Default is <literal>true</>. + <literal>matchorig</literal> controls whether the original word is accepted by + the dictionary. Default is <literal>true</literal>. </para> </listitem> <listitem> <para> - <literal>matchsynonyms</> controls whether the synonyms are - accepted by the dictionary. Default is <literal>false</>. + <literal>matchsynonyms</literal> controls whether the synonyms are + accepted by the dictionary. Default is <literal>false</literal>. 
</para> </listitem> <listitem> <para> - <literal>keeporig</> controls whether the original word is included in - the dictionary's output. Default is <literal>true</>. + <literal>keeporig</literal> controls whether the original word is included in + the dictionary's output. Default is <literal>true</literal>. </para> </listitem> <listitem> <para> - <literal>keepsynonyms</> controls whether the synonyms are included in - the dictionary's output. Default is <literal>true</>. + <literal>keepsynonyms</literal> controls whether the synonyms are included in + the dictionary's output. Default is <literal>true</literal>. </para> </listitem> <listitem> <para> - <literal>rules</> is the base name of the file containing the list of + <literal>rules</literal> is the base name of the file containing the list of synonyms. This file must be stored in - <filename>$SHAREDIR/tsearch_data/</> (where <literal>$SHAREDIR</> means - the <productname>PostgreSQL</> installation's shared-data directory). - Its name must end in <literal>.rules</> (which is not to be included in - the <literal>rules</> parameter). + <filename>$SHAREDIR/tsearch_data/</filename> (where <literal>$SHAREDIR</literal> means + the <productname>PostgreSQL</productname> installation's shared-data directory). + Its name must end in <literal>.rules</literal> (which is not to be included in + the <literal>rules</literal> parameter). </para> </listitem> </itemizedlist> @@ -71,15 +71,15 @@ word syn1 syn2 syn3 </listitem> <listitem> <para> - The sharp (<literal>#</>) sign is a comment delimiter. It may appear at + The sharp (<literal>#</literal>) sign is a comment delimiter. It may appear at any position in a line. The rest of the line will be skipped. </para> </listitem> </itemizedlist> <para> - Look at <filename>xsyn_sample.rules</>, which is installed in - <filename>$SHAREDIR/tsearch_data/</>, for an example. 
+ Look at <filename>xsyn_sample.rules</filename>, which is installed in + <filename>$SHAREDIR/tsearch_data/</filename>, for an example. </para> </sect2> @@ -87,8 +87,8 @@ word syn1 syn2 syn3 <title>Usage</title> <para> - Installing the <literal>dict_xsyn</> extension creates a text search - template <literal>xsyn_template</> and a dictionary <literal>xsyn</> + Installing the <literal>dict_xsyn</literal> extension creates a text search + template <literal>xsyn_template</literal> and a dictionary <literal>xsyn</literal> based on it, with default parameters. You can alter the parameters, for example diff --git a/doc/src/sgml/diskusage.sgml b/doc/src/sgml/diskusage.sgml index 461deb9dbad..ba230843549 100644 --- a/doc/src/sgml/diskusage.sgml +++ b/doc/src/sgml/diskusage.sgml @@ -5,7 +5,7 @@ <para> This chapter discusses how to monitor the disk usage of a - <productname>PostgreSQL</> database system. + <productname>PostgreSQL</productname> database system. </para> <sect1 id="disk-usage"> @@ -18,10 +18,10 @@ <para> Each table has a primary heap disk file where most of the data is stored. If the table has any columns with potentially-wide values, - there also might be a <acronym>TOAST</> file associated with the table, + there also might be a <acronym>TOAST</acronym> file associated with the table, which is used to store values too wide to fit comfortably in the main table (see <xref linkend="storage-toast">). There will be one valid index - on the <acronym>TOAST</> table, if present. There also might be indexes + on the <acronym>TOAST</acronym> table, if present. There also might be indexes associated with the base table. Each table and index is stored in a separate disk file — possibly more than one file, if the file would exceed one gigabyte. 
Naming conventions for these files are described @@ -39,7 +39,7 @@ </para> <para> - Using <application>psql</> on a recently vacuumed or analyzed database, + Using <application>psql</application> on a recently vacuumed or analyzed database, you can issue queries to see the disk usage of any table: <programlisting> SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'customer'; @@ -49,14 +49,14 @@ SELECT pg_relation_filepath(oid), relpages FROM pg_class WHERE relname = 'custom base/16384/16806 | 60 (1 row) </programlisting> - Each page is typically 8 kilobytes. (Remember, <structfield>relpages</> - is only updated by <command>VACUUM</>, <command>ANALYZE</>, and - a few DDL commands such as <command>CREATE INDEX</>.) The file path name + Each page is typically 8 kilobytes. (Remember, <structfield>relpages</structfield> + is only updated by <command>VACUUM</command>, <command>ANALYZE</command>, and + a few DDL commands such as <command>CREATE INDEX</command>.) The file path name is of interest if you want to examine the table's disk file directly. </para> <para> - To show the space used by <acronym>TOAST</> tables, use a query + To show the space used by <acronym>TOAST</acronym> tables, use a query like the following: <programlisting> SELECT relname, relpages diff --git a/doc/src/sgml/dml.sgml b/doc/src/sgml/dml.sgml index 071cdb610f0..bc016d3cae6 100644 --- a/doc/src/sgml/dml.sgml +++ b/doc/src/sgml/dml.sgml @@ -285,42 +285,42 @@ DELETE FROM products; <para> Sometimes it is useful to obtain data from modified rows while they are - being manipulated. The <command>INSERT</>, <command>UPDATE</>, - and <command>DELETE</> commands all have an - optional <literal>RETURNING</> clause that supports this. Use - of <literal>RETURNING</> avoids performing an extra database query to + being manipulated. 
The <command>INSERT</command>, <command>UPDATE</command>, + and <command>DELETE</command> commands all have an + optional <literal>RETURNING</literal> clause that supports this. Use + of <literal>RETURNING</literal> avoids performing an extra database query to collect the data, and is especially valuable when it would otherwise be difficult to identify the modified rows reliably. </para> <para> - The allowed contents of a <literal>RETURNING</> clause are the same as - a <command>SELECT</> command's output list + The allowed contents of a <literal>RETURNING</literal> clause are the same as + a <command>SELECT</command> command's output list (see <xref linkend="queries-select-lists">). It can contain column names of the command's target table, or value expressions using those - columns. A common shorthand is <literal>RETURNING *</>, which selects + columns. A common shorthand is <literal>RETURNING *</literal>, which selects all columns of the target table in order. </para> <para> - In an <command>INSERT</>, the data available to <literal>RETURNING</> is + In an <command>INSERT</command>, the data available to <literal>RETURNING</literal> is the row as it was inserted. This is not so useful in trivial inserts, since it would just repeat the data provided by the client. But it can be very handy when relying on computed default values. For example, - when using a <link linkend="datatype-serial"><type>serial</></link> - column to provide unique identifiers, <literal>RETURNING</> can return + when using a <link linkend="datatype-serial"><type>serial</type></link> + column to provide unique identifiers, <literal>RETURNING</literal> can return the ID assigned to a new row: <programlisting> CREATE TABLE users (firstname text, lastname text, id serial primary key); INSERT INTO users (firstname, lastname) VALUES ('Joe', 'Cool') RETURNING id; </programlisting> - The <literal>RETURNING</> clause is also very useful - with <literal>INSERT ... SELECT</>. 
+ The <literal>RETURNING</literal> clause is also very useful + with <literal>INSERT ... SELECT</literal>. </para> <para> - In an <command>UPDATE</>, the data available to <literal>RETURNING</> is + In an <command>UPDATE</command>, the data available to <literal>RETURNING</literal> is the new content of the modified row. For example: <programlisting> UPDATE products SET price = price * 1.10 @@ -330,7 +330,7 @@ UPDATE products SET price = price * 1.10 </para> <para> - In a <command>DELETE</>, the data available to <literal>RETURNING</> is + In a <command>DELETE</command>, the data available to <literal>RETURNING</literal> is the content of the deleted row. For example: <programlisting> DELETE FROM products @@ -341,9 +341,9 @@ DELETE FROM products <para> If there are triggers (<xref linkend="triggers">) on the target table, - the data available to <literal>RETURNING</> is the row as modified by + the data available to <literal>RETURNING</literal> is the row as modified by the triggers. Thus, inspecting columns computed by triggers is another - common use-case for <literal>RETURNING</>. + common use-case for <literal>RETURNING</literal>. </para> </sect1> diff --git a/doc/src/sgml/docguide.sgml b/doc/src/sgml/docguide.sgml index ff58a173356..3a5b88ca1ca 100644 --- a/doc/src/sgml/docguide.sgml +++ b/doc/src/sgml/docguide.sgml @@ -449,7 +449,7 @@ checking for fop... 
fop <para> To produce HTML documentation with the stylesheet used on <ulink - url="https://www.postgresql.org/docs/current">postgresql.org</> instead of the + url="https://www.postgresql.org/docs/current">postgresql.org</ulink> instead of the default simple style use: <screen> <prompt>doc/src/sgml$ </prompt><userinput>make STYLE=website html</userinput> diff --git a/doc/src/sgml/earthdistance.sgml b/doc/src/sgml/earthdistance.sgml index 6dedc4a5f49..1bdcf64629f 100644 --- a/doc/src/sgml/earthdistance.sgml +++ b/doc/src/sgml/earthdistance.sgml @@ -8,18 +8,18 @@ </indexterm> <para> - The <filename>earthdistance</> module provides two different approaches to + The <filename>earthdistance</filename> module provides two different approaches to calculating great circle distances on the surface of the Earth. The one - described first depends on the <filename>cube</> module (which - <emphasis>must</> be installed before <filename>earthdistance</> can be - installed). The second one is based on the built-in <type>point</> data type, + described first depends on the <filename>cube</filename> module (which + <emphasis>must</emphasis> be installed before <filename>earthdistance</filename> can be + installed). The second one is based on the built-in <type>point</type> data type, using longitude and latitude for the coordinates. </para> <para> In this module, the Earth is assumed to be perfectly spherical. (If that's too inaccurate for you, you might want to look at the - <application><ulink url="http://postgis.net/">PostGIS</ulink></> + <application><ulink url="http://postgis.net/">PostGIS</ulink></application> project.) </para> @@ -29,13 +29,13 @@ <para> Data is stored in cubes that are points (both corners are the same) using 3 coordinates representing the x, y, and z distance from the center of the - Earth. A domain <type>earth</> over <type>cube</> is provided, which + Earth. 
A domain <type>earth</type> over <type>cube</type> is provided, which includes constraint checks that the value meets these restrictions and is reasonably close to the actual surface of the Earth. </para> <para> - The radius of the Earth is obtained from the <function>earth()</> + The radius of the Earth is obtained from the <function>earth()</function> function. It is given in meters. But by changing this one function you can change the module to use some other units, or to use a different value of the radius that you feel is more appropriate. @@ -43,8 +43,8 @@ <para> This package has applications to astronomical databases as well. - Astronomers will probably want to change <function>earth()</> to return a - radius of <literal>180/pi()</> so that distances are in degrees. + Astronomers will probably want to change <function>earth()</function> to return a + radius of <literal>180/pi()</literal> so that distances are in degrees. </para> <para> @@ -123,11 +123,11 @@ <entry><function>earth_box(earth, float8)</function><indexterm><primary>earth_box</primary></indexterm></entry> <entry><type>cube</type></entry> <entry>Returns a box suitable for an indexed search using the cube - <literal>@></> + <literal>@></literal> operator for points within a given great circle distance of a location. Some points in this box are further than the specified great circle distance from the location, so a second check using - <function>earth_distance</> should be included in the query. + <function>earth_distance</function> should be included in the query. </entry> </row> </tbody> @@ -141,7 +141,7 @@ <para> The second part of the module relies on representing Earth locations as - values of type <type>point</>, in which the first component is taken to + values of type <type>point</type>, in which the first component is taken to represent longitude in degrees, and the second component is taken to represent latitude in degrees. 
Points are taken as (longitude, latitude) and not vice versa because longitude is closer to the intuitive idea of @@ -165,7 +165,7 @@ </thead> <tbody> <row> - <entry><type>point</> <literal><@></literal> <type>point</></entry> + <entry><type>point</type> <literal><@></literal> <type>point</type></entry> <entry><type>float8</type></entry> <entry>Gives the distance in statute miles between two points on the Earth's surface. @@ -176,15 +176,15 @@ </table> <para> - Note that unlike the <type>cube</>-based part of the module, units - are hardwired here: changing the <function>earth()</> function will + Note that unlike the <type>cube</type>-based part of the module, units + are hardwired here: changing the <function>earth()</function> function will not affect the results of this operator. </para> <para> One disadvantage of the longitude/latitude representation is that you need to be careful about the edge conditions near the poles - and near +/- 180 degrees of longitude. The <type>cube</>-based + and near +/- 180 degrees of longitude. The <type>cube</type>-based representation avoids these discontinuities. </para> diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml index 716a101838a..0f9ff3a8eb8 100644 --- a/doc/src/sgml/ecpg.sgml +++ b/doc/src/sgml/ecpg.sgml @@ -46,7 +46,7 @@ correctness. Third, embedded <acronym>SQL</acronym> in C is specified in the <acronym>SQL</acronym> standard and supported by many other <acronym>SQL</acronym> database systems. 
The - <productname>PostgreSQL</> implementation is designed to match this + <productname>PostgreSQL</productname> implementation is designed to match this standard as much as possible, and it is usually possible to port embedded <acronym>SQL</acronym> programs written for other SQL databases to <productname>PostgreSQL</productname> with relative @@ -97,19 +97,19 @@ EXEC SQL CONNECT TO <replaceable>target</replaceable> <optional>AS <replaceable> <itemizedlist> <listitem> <simpara> - <literal><replaceable>dbname</><optional>@<replaceable>hostname</></optional><optional>:<replaceable>port</></optional></literal> + <literal><replaceable>dbname</replaceable><optional>@<replaceable>hostname</replaceable></optional><optional>:<replaceable>port</replaceable></optional></literal> </simpara> </listitem> <listitem> <simpara> - <literal>tcp:postgresql://<replaceable>hostname</><optional>:<replaceable>port</></optional><optional>/<replaceable>dbname</></optional><optional>?<replaceable>options</></optional></literal> + <literal>tcp:postgresql://<replaceable>hostname</replaceable><optional>:<replaceable>port</replaceable></optional><optional>/<replaceable>dbname</replaceable></optional><optional>?<replaceable>options</replaceable></optional></literal> </simpara> </listitem> <listitem> <simpara> - <literal>unix:postgresql://<replaceable>hostname</><optional>:<replaceable>port</></optional><optional>/<replaceable>dbname</></optional><optional>?<replaceable>options</></optional></literal> + <literal>unix:postgresql://<replaceable>hostname</replaceable><optional>:<replaceable>port</replaceable></optional><optional>/<replaceable>dbname</replaceable></optional><optional>?<replaceable>options</replaceable></optional></literal> </simpara> </listitem> @@ -475,7 +475,7 @@ EXEC SQL COMMIT; In the default mode, statements are committed only when <command>EXEC SQL COMMIT</command> is issued. 
The embedded SQL interface also supports autocommit of transactions (similar to - <application>psql</>'s default behavior) via the <option>-t</option> + <application>psql</application>'s default behavior) via the <option>-t</option> command-line option to <command>ecpg</command> (see <xref linkend="app-ecpg">) or via the <literal>EXEC SQL SET AUTOCOMMIT TO ON</literal> statement. In autocommit mode, each command is @@ -507,7 +507,7 @@ EXEC SQL COMMIT; </varlistentry> <varlistentry> - <term><literal>EXEC SQL PREPARE TRANSACTION </literal><replaceable class="parameter">transaction_id</></term> + <term><literal>EXEC SQL PREPARE TRANSACTION </literal><replaceable class="parameter">transaction_id</replaceable></term> <listitem> <para> Prepare the current transaction for two-phase commit. @@ -516,7 +516,7 @@ EXEC SQL COMMIT; </varlistentry> <varlistentry> - <term><literal>EXEC SQL COMMIT PREPARED </literal><replaceable class="parameter">transaction_id</></term> + <term><literal>EXEC SQL COMMIT PREPARED </literal><replaceable class="parameter">transaction_id</replaceable></term> <listitem> <para> Commit a transaction that is in prepared state. @@ -525,7 +525,7 @@ EXEC SQL COMMIT; </varlistentry> <varlistentry> - <term><literal>EXEC SQL ROLLBACK PREPARED </literal><replaceable class="parameter">transaction_id</></term> + <term><literal>EXEC SQL ROLLBACK PREPARED </literal><replaceable class="parameter">transaction_id</replaceable></term> <listitem> <para> Roll back a transaction that is in prepared state. @@ -720,7 +720,7 @@ EXEC SQL int i = 4; <para> The definition of a structure or union also must be listed inside - a <literal>DECLARE</> section. Otherwise the preprocessor cannot + a <literal>DECLARE</literal> section. Otherwise the preprocessor cannot handle these types since it does not know the definition. 
</para> </sect2> @@ -890,8 +890,8 @@ do </row> <row> - <entry><type>character(<replaceable>n</>)</type>, <type>varchar(<replaceable>n</>)</type>, <type>text</type></entry> - <entry><type>char[<replaceable>n</>+1]</type>, <type>VARCHAR[<replaceable>n</>+1]</type><footnote><para>declared in <filename>ecpglib.h</filename></para></footnote></entry> + <entry><type>character(<replaceable>n</replaceable>)</type>, <type>varchar(<replaceable>n</replaceable>)</type>, <type>text</type></entry> + <entry><type>char[<replaceable>n</replaceable>+1]</type>, <type>VARCHAR[<replaceable>n</replaceable>+1]</type><footnote><para>declared in <filename>ecpglib.h</filename></para></footnote></entry> </row> <row> @@ -955,7 +955,7 @@ EXEC SQL END DECLARE SECTION; The other way is using the <type>VARCHAR</type> type, which is a special type provided by ECPG. The definition on an array of type <type>VARCHAR</type> is converted into a - named <type>struct</> for every variable. A declaration like: + named <type>struct</type> for every variable. A declaration like: <programlisting> VARCHAR var[180]; </programlisting> @@ -994,10 +994,10 @@ struct varchar_var { int len; char arr[180]; } var; ECPG contains some special types that help you to interact easily with some special data types from the PostgreSQL server. In particular, it has implemented support for the - <type>numeric</>, <type>decimal</type>, <type>date</>, <type>timestamp</>, - and <type>interval</> types. These data types cannot usefully be + <type>numeric</type>, <type>decimal</type>, <type>date</type>, <type>timestamp</type>, + and <type>interval</type> types. These data types cannot usefully be mapped to primitive host variable types (such - as <type>int</>, <type>long long int</type>, + as <type>int</type>, <type>long long int</type>, or <type>char[]</type>), because they have a complex internal structure. 
Applications deal with these types by declaring host variables in special types and accessing them using functions in @@ -1942,10 +1942,10 @@ free(out); <para> The numeric type offers to do calculations with arbitrary precision. See <xref linkend="datatype-numeric"> for the equivalent type in the - <productname>PostgreSQL</> server. Because of the arbitrary precision this + <productname>PostgreSQL</productname> server. Because of the arbitrary precision this variable needs to be able to expand and shrink dynamically. That's why you can only create numeric variables on the heap, by means of the - <function>PGTYPESnumeric_new</> and <function>PGTYPESnumeric_free</> + <function>PGTYPESnumeric_new</function> and <function>PGTYPESnumeric_free</function> functions. The decimal type, which is similar but limited in precision, can be created on the stack as well as on the heap. </para> @@ -2092,17 +2092,17 @@ int PGTYPESnumeric_cmp(numeric *var1, numeric *var2) <itemizedlist> <listitem> <para> - 1, if <literal>var1</> is bigger than <literal>var2</> + 1, if <literal>var1</literal> is bigger than <literal>var2</literal> </para> </listitem> <listitem> <para> - -1, if <literal>var1</> is smaller than <literal>var2</> + -1, if <literal>var1</literal> is smaller than <literal>var2</literal> </para> </listitem> <listitem> <para> - 0, if <literal>var1</> and <literal>var2</> are equal + 0, if <literal>var1</literal> and <literal>var2</literal> are equal </para> </listitem> </itemizedlist> @@ -2119,7 +2119,7 @@ int PGTYPESnumeric_cmp(numeric *var1, numeric *var2) int PGTYPESnumeric_from_int(signed int int_val, numeric *var); </synopsis> This function accepts a variable of type signed int and stores it - in the numeric variable <literal>var</>. Upon success, 0 is returned and + in the numeric variable <literal>var</literal>. Upon success, 0 is returned and -1 in case of a failure. 
</para> </listitem> @@ -2134,7 +2134,7 @@ int PGTYPESnumeric_from_int(signed int int_val, numeric *var); int PGTYPESnumeric_from_long(signed long int long_val, numeric *var); </synopsis> This function accepts a variable of type signed long int and stores it - in the numeric variable <literal>var</>. Upon success, 0 is returned and + in the numeric variable <literal>var</literal>. Upon success, 0 is returned and -1 in case of a failure. </para> </listitem> @@ -2149,7 +2149,7 @@ int PGTYPESnumeric_from_long(signed long int long_val, numeric *var); int PGTYPESnumeric_copy(numeric *src, numeric *dst); </synopsis> This function copies over the value of the variable that - <literal>src</literal> points to into the variable that <literal>dst</> + <literal>src</literal> points to into the variable that <literal>dst</literal> points to. It returns 0 on success and -1 if an error occurs. </para> </listitem> @@ -2164,7 +2164,7 @@ int PGTYPESnumeric_copy(numeric *src, numeric *dst); int PGTYPESnumeric_from_double(double d, numeric *dst); </synopsis> This function accepts a variable of type double and stores the result - in the variable that <literal>dst</> points to. It returns 0 on success + in the variable that <literal>dst</literal> points to. It returns 0 on success and -1 if an error occurs. </para> </listitem> @@ -2179,10 +2179,10 @@ int PGTYPESnumeric_from_double(double d, numeric *dst); int PGTYPESnumeric_to_double(numeric *nv, double *dp) </synopsis> The function converts the numeric value from the variable that - <literal>nv</> points to into the double variable that <literal>dp</> points + <literal>nv</literal> points to into the double variable that <literal>dp</literal> points to. It returns 0 on success and -1 if an error occurs, including - overflow. On overflow, the global variable <literal>errno</> will be set - to <literal>PGTYPES_NUM_OVERFLOW</> additionally. + overflow. 
On overflow, the global variable <literal>errno</literal> will be set + to <literal>PGTYPES_NUM_OVERFLOW</literal> additionally. </para> </listitem> </varlistentry> @@ -2196,10 +2196,10 @@ int PGTYPESnumeric_to_double(numeric *nv, double *dp) int PGTYPESnumeric_to_int(numeric *nv, int *ip); </synopsis> The function converts the numeric value from the variable that - <literal>nv</> points to into the integer variable that <literal>ip</> + <literal>nv</literal> points to into the integer variable that <literal>ip</literal> points to. It returns 0 on success and -1 if an error occurs, including - overflow. On overflow, the global variable <literal>errno</> will be set - to <literal>PGTYPES_NUM_OVERFLOW</> additionally. + overflow. On overflow, the global variable <literal>errno</literal> will be set + to <literal>PGTYPES_NUM_OVERFLOW</literal> additionally. </para> </listitem> </varlistentry> @@ -2213,10 +2213,10 @@ int PGTYPESnumeric_to_int(numeric *nv, int *ip); int PGTYPESnumeric_to_long(numeric *nv, long *lp); </synopsis> The function converts the numeric value from the variable that - <literal>nv</> points to into the long integer variable that - <literal>lp</> points to. It returns 0 on success and -1 if an error + <literal>nv</literal> points to into the long integer variable that + <literal>lp</literal> points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable - <literal>errno</> will be set to <literal>PGTYPES_NUM_OVERFLOW</> + <literal>errno</literal> will be set to <literal>PGTYPES_NUM_OVERFLOW</literal> additionally. </para> </listitem> @@ -2231,10 +2231,10 @@ int PGTYPESnumeric_to_long(numeric *nv, long *lp); int PGTYPESnumeric_to_decimal(numeric *src, decimal *dst); </synopsis> The function converts the numeric value from the variable that - <literal>src</> points to into the decimal variable that - <literal>dst</> points to. 
It returns 0 on success and -1 if an error + <literal>src</literal> points to into the decimal variable that + <literal>dst</literal> points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable - <literal>errno</> will be set to <literal>PGTYPES_NUM_OVERFLOW</> + <literal>errno</literal> will be set to <literal>PGTYPES_NUM_OVERFLOW</literal> additionally. </para> </listitem> @@ -2249,8 +2249,8 @@ int PGTYPESnumeric_to_decimal(numeric *src, decimal *dst); int PGTYPESnumeric_from_decimal(decimal *src, numeric *dst); </synopsis> The function converts the decimal value from the variable that - <literal>src</> points to into the numeric variable that - <literal>dst</> points to. It returns 0 on success and -1 if an error + <literal>src</literal> points to into the numeric variable that + <literal>dst</literal> points to. It returns 0 on success and -1 if an error occurs. Since the decimal type is implemented as a limited version of the numeric type, overflow cannot occur with this conversion. </para> @@ -2265,7 +2265,7 @@ int PGTYPESnumeric_from_decimal(decimal *src, numeric *dst); <para> The date type in C enables your programs to deal with data of the SQL type date. See <xref linkend="datatype-datetime"> for the equivalent type in the - <productname>PostgreSQL</> server. + <productname>PostgreSQL</productname> server. </para> <para> The following functions can be used to work with the date type: @@ -2292,8 +2292,8 @@ date PGTYPESdate_from_timestamp(timestamp dt); <synopsis> date PGTYPESdate_from_asc(char *str, char **endptr); </synopsis> - The function receives a C char* string <literal>str</> and a pointer to - a C char* string <literal>endptr</>. At the moment ECPG always parses + The function receives a C char* string <literal>str</literal> and a pointer to + a C char* string <literal>endptr</literal>. 
At the moment ECPG always parses the complete string and so it currently does not support to store the address of the first invalid character in <literal>*endptr</literal>. You can safely set <literal>endptr</literal> to NULL. @@ -2397,9 +2397,9 @@ date PGTYPESdate_from_asc(char *str, char **endptr); <synopsis> char *PGTYPESdate_to_asc(date dDate); </synopsis> - The function receives the date <literal>dDate</> as its only parameter. - It will output the date in the form <literal>1999-01-18</>, i.e., in the - <literal>YYYY-MM-DD</> format. + The function receives the date <literal>dDate</literal> as its only parameter. + It will output the date in the form <literal>1999-01-18</literal>, i.e., in the + <literal>YYYY-MM-DD</literal> format. </para> </listitem> </varlistentry> @@ -2414,11 +2414,11 @@ char *PGTYPESdate_to_asc(date dDate); void PGTYPESdate_julmdy(date d, int *mdy); </synopsis> <!-- almost same description as for rjulmdy() --> - The function receives the date <literal>d</> and a pointer to an array - of 3 integer values <literal>mdy</>. The variable name indicates - the sequential order: <literal>mdy[0]</> will be set to contain the - number of the month, <literal>mdy[1]</> will be set to the value of the - day and <literal>mdy[2]</> will contain the year. + The function receives the date <literal>d</literal> and a pointer to an array + of 3 integer values <literal>mdy</literal>. The variable name indicates + the sequential order: <literal>mdy[0]</literal> will be set to contain the + number of the month, <literal>mdy[1]</literal> will be set to the value of the + day and <literal>mdy[2]</literal> will contain the year. 
</para> </listitem> </varlistentry> @@ -2432,7 +2432,7 @@ void PGTYPESdate_julmdy(date d, int *mdy); <synopsis> void PGTYPESdate_mdyjul(int *mdy, date *jdate); </synopsis> - The function receives the array of the 3 integers (<literal>mdy</>) as + The function receives the array of the 3 integers (<literal>mdy</literal>) as its first argument and as its second argument a pointer to a variable of type date that should hold the result of the operation. </para> @@ -2447,7 +2447,7 @@ void PGTYPESdate_mdyjul(int *mdy, date *jdate); <synopsis> int PGTYPESdate_dayofweek(date d); </synopsis> - The function receives the date variable <literal>d</> as its only + The function receives the date variable <literal>d</literal> as its only argument and returns an integer that indicates the day of the week for this date. <itemizedlist> @@ -2499,7 +2499,7 @@ int PGTYPESdate_dayofweek(date d); <synopsis> void PGTYPESdate_today(date *d); </synopsis> - The function receives a pointer to a date variable (<literal>d</>) + The function receives a pointer to a date variable (<literal>d</literal>) that it sets to the current date. </para> </listitem> @@ -2514,9 +2514,9 @@ void PGTYPESdate_today(date *d); <synopsis> int PGTYPESdate_fmt_asc(date dDate, char *fmtstring, char *outbuf); </synopsis> - The function receives the date to convert (<literal>dDate</>), the - format mask (<literal>fmtstring</>) and the string that will hold the - textual representation of the date (<literal>outbuf</>). + The function receives the date to convert (<literal>dDate</literal>), the + format mask (<literal>fmtstring</literal>) and the string that will hold the + textual representation of the date (<literal>outbuf</literal>). </para> <para> On success, 0 is returned and a negative value if an error occurred. 
@@ -2637,9 +2637,9 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); </synopsis> <!-- same description as rdefmtdate --> The function receives a pointer to the date value that should hold the - result of the operation (<literal>d</>), the format mask to use for - parsing the date (<literal>fmt</>) and the C char* string containing - the textual representation of the date (<literal>str</>). The textual + result of the operation (<literal>d</literal>), the format mask to use for + parsing the date (<literal>fmt</literal>) and the C char* string containing + the textual representation of the date (<literal>str</literal>). The textual representation is expected to match the format mask. However you do not need to have a 1:1 mapping of the string to the format mask. The function only analyzes the sequential order and looks for the literals @@ -2742,7 +2742,7 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); <para> The timestamp type in C enables your programs to deal with data of the SQL type timestamp. See <xref linkend="datatype-datetime"> for the equivalent - type in the <productname>PostgreSQL</> server. + type in the <productname>PostgreSQL</productname> server. </para> <para> The following functions can be used to work with the timestamp type: @@ -2756,8 +2756,8 @@ int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str); <synopsis> timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); </synopsis> - The function receives the string to parse (<literal>str</>) and a - pointer to a C char* (<literal>endptr</>). + The function receives the string to parse (<literal>str</literal>) and a + pointer to a C char* (<literal>endptr</literal>). At the moment ECPG always parses the complete string and so it currently does not support to store the address of the first invalid character in <literal>*endptr</literal>. 
@@ -2765,15 +2765,15 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); </para> <para> The function returns the parsed timestamp on success. On error, - <literal>PGTYPESInvalidTimestamp</literal> is returned and <varname>errno</> is - set to <literal>PGTYPES_TS_BAD_TIMESTAMP</>. See <xref linkend="PGTYPESInvalidTimestamp"> for important notes on this value. + <literal>PGTYPESInvalidTimestamp</literal> is returned and <varname>errno</varname> is + set to <literal>PGTYPES_TS_BAD_TIMESTAMP</literal>. See <xref linkend="PGTYPESInvalidTimestamp"> for important notes on this value. </para> <para> In general, the input string can contain any combination of an allowed date specification, a whitespace character and an allowed time specification. Note that time zones are not supported by ECPG. It can parse them but does not apply any calculation as the - <productname>PostgreSQL</> server does for example. Timezone + <productname>PostgreSQL</productname> server does for example. Timezone specifiers are silently discarded. </para> <para> @@ -2819,7 +2819,7 @@ timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr); <synopsis> char *PGTYPEStimestamp_to_asc(timestamp tstamp); </synopsis> - The function receives the timestamp <literal>tstamp</> as + The function receives the timestamp <literal>tstamp</literal> as its only argument and returns an allocated string that contains the textual representation of the timestamp. </para> @@ -2835,7 +2835,7 @@ char *PGTYPEStimestamp_to_asc(timestamp tstamp); void PGTYPEStimestamp_current(timestamp *ts); </synopsis> The function retrieves the current timestamp and saves it into the - timestamp variable that <literal>ts</> points to. + timestamp variable that <literal>ts</literal> points to. 
</para> </listitem> </varlistentry> @@ -2849,8 +2849,8 @@ void PGTYPEStimestamp_current(timestamp *ts); int PGTYPEStimestamp_fmt_asc(timestamp *ts, char *output, int str_len, char *fmtstr); </synopsis> The function receives a pointer to the timestamp to convert as its - first argument (<literal>ts</>), a pointer to the output buffer - (<literal>output</>), the maximal length that has been allocated for + first argument (<literal>ts</literal>), a pointer to the output buffer + (<literal>output</literal>), the maximal length that has been allocated for the output buffer (<literal>str_len</literal>) and the format mask to use for the conversion (<literal>fmtstr</literal>). </para> @@ -2861,7 +2861,7 @@ int PGTYPEStimestamp_fmt_asc(timestamp *ts, char *output, int str_len, char *fmt <para> You can use the following format specifiers for the format mask. The format specifiers are the same ones that are used in the - <function>strftime</> function in <productname>libc</productname>. Any + <function>strftime</function> function in <productname>libc</productname>. Any non-format specifier will be copied into the output buffer. <!-- This is from the FreeBSD man page: http://www.freebsd.org/cgi/man.cgi?query=strftime&apropos=0&sektion=3&manpath=FreeBSD+7.0-current&format=html @@ -3184,9 +3184,9 @@ int PGTYPEStimestamp_fmt_asc(timestamp *ts, char *output, int str_len, char *fmt <synopsis> int PGTYPEStimestamp_sub(timestamp *ts1, timestamp *ts2, interval *iv); </synopsis> - The function will subtract the timestamp variable that <literal>ts2</> - points to from the timestamp variable that <literal>ts1</> points to - and will store the result in the interval variable that <literal>iv</> + The function will subtract the timestamp variable that <literal>ts2</literal> + points to from the timestamp variable that <literal>ts1</literal> points to + and will store the result in the interval variable that <literal>iv</literal> points to. 
</para> <para> @@ -3206,12 +3206,12 @@ int PGTYPEStimestamp_sub(timestamp *ts1, timestamp *ts2, interval *iv); int PGTYPEStimestamp_defmt_asc(char *str, char *fmt, timestamp *d); </synopsis> The function receives the textual representation of a timestamp in the - variable <literal>str</> as well as the formatting mask to use in the - variable <literal>fmt</>. The result will be stored in the variable - that <literal>d</> points to. + variable <literal>str</literal> as well as the formatting mask to use in the + variable <literal>fmt</literal>. The result will be stored in the variable + that <literal>d</literal> points to. </para> <para> - If the formatting mask <literal>fmt</> is NULL, the function will fall + If the formatting mask <literal>fmt</literal> is NULL, the function will fall back to the default formatting mask which is <literal>%Y-%m-%d %H:%M:%S</literal>. </para> @@ -3231,10 +3231,10 @@ int PGTYPEStimestamp_defmt_asc(char *str, char *fmt, timestamp *d); <synopsis> int PGTYPEStimestamp_add_interval(timestamp *tin, interval *span, timestamp *tout); </synopsis> - The function receives a pointer to a timestamp variable <literal>tin</> - and a pointer to an interval variable <literal>span</>. It adds the + The function receives a pointer to a timestamp variable <literal>tin</literal> + and a pointer to an interval variable <literal>span</literal>. It adds the interval to the timestamp and saves the resulting timestamp in the - variable that <literal>tout</> points to. + variable that <literal>tout</literal> points to. 
</para> <para> Upon success, the function returns 0 and a negative value if an @@ -3251,9 +3251,9 @@ int PGTYPEStimestamp_add_interval(timestamp *tin, interval *span, timestamp *tou <synopsis> int PGTYPEStimestamp_sub_interval(timestamp *tin, interval *span, timestamp *tout); </synopsis> - The function subtracts the interval variable that <literal>span</> - points to from the timestamp variable that <literal>tin</> points to - and saves the result into the variable that <literal>tout</> points + The function subtracts the interval variable that <literal>span</literal> + points to from the timestamp variable that <literal>tin</literal> points to + and saves the result into the variable that <literal>tout</literal> points to. </para> <para> @@ -3271,7 +3271,7 @@ int PGTYPEStimestamp_sub_interval(timestamp *tin, interval *span, timestamp *tou <para> The interval type in C enables your programs to deal with data of the SQL type interval. See <xref linkend="datatype-datetime"> for the equivalent - type in the <productname>PostgreSQL</> server. + type in the <productname>PostgreSQL</productname> server. </para> <para> The following functions can be used to work with the interval type: @@ -3309,7 +3309,7 @@ void PGTYPESinterval_new(interval *intvl); <synopsis> interval *PGTYPESinterval_from_asc(char *str, char **endptr); </synopsis> - The function parses the input string <literal>str</> and returns a + The function parses the input string <literal>str</literal> and returns a pointer to an allocated interval variable. At the moment ECPG always parses the complete string and so it currently does not support to store the @@ -3327,7 +3327,7 @@ interval *PGTYPESinterval_from_asc(char *str, char **endptr); <synopsis> char *PGTYPESinterval_to_asc(interval *span); </synopsis> - The function converts the interval variable that <literal>span</> + The function converts the interval variable that <literal>span</literal> points to into a C char*. 
The output looks like this example: <literal>@ 1 day 12 hours 59 mins 10 secs</literal>. </para> @@ -3342,8 +3342,8 @@ char *PGTYPESinterval_to_asc(interval *span); <synopsis> int PGTYPESinterval_copy(interval *intvlsrc, interval *intvldest); </synopsis> - The function copies the interval variable that <literal>intvlsrc</> - points to into the variable that <literal>intvldest</> points to. Note + The function copies the interval variable that <literal>intvlsrc</literal> + points to into the variable that <literal>intvldest</literal> points to. Note that you need to allocate the memory for the destination variable before. </para> @@ -3360,15 +3360,15 @@ int PGTYPESinterval_copy(interval *intvlsrc, interval *intvldest); a maximum precision of 30 significant digits. In contrast to the numeric type which can be created on the heap only, the decimal type can be created either on the stack or on the heap (by means of the functions - <function>PGTYPESdecimal_new</> and - <function>PGTYPESdecimal_free</>). + <function>PGTYPESdecimal_new</function> and + <function>PGTYPESdecimal_free</function>). There are a lot of other functions that deal with the decimal type in the <productname>Informix</productname> compatibility mode described in <xref linkend="ecpg-informix-compat">. </para> <para> The following functions can be used to work with the decimal type and are - not only contained in the <literal>libcompat</> library. + not only contained in the <literal>libcompat</literal> library. <variablelist> <varlistentry> <term><function>PGTYPESdecimal_new</function></term> @@ -3548,15 +3548,15 @@ void PGTYPESdecimal_free(decimal *var); <listitem> <para> A value of type timestamp representing an invalid time stamp. This is - returned by the function <function>PGTYPEStimestamp_from_asc</> on + returned by the function <function>PGTYPEStimestamp_from_asc</function> on parse error. 
Note that due to the internal representation of the <type>timestamp</type> data type, <literal>PGTYPESInvalidTimestamp</literal> is also a valid timestamp at - the same time. It is set to <literal>1899-12-31 23:59:59</>. In order + the same time. It is set to <literal>1899-12-31 23:59:59</literal>. In order to detect errors, make sure that your application does not only test for <literal>PGTYPESInvalidTimestamp</literal> but also for - <literal>errno != 0</> after each call to - <function>PGTYPEStimestamp_from_asc</>. + <literal>errno != 0</literal> after each call to + <function>PGTYPEStimestamp_from_asc</function>. </para> </listitem> </varlistentry> @@ -3927,7 +3927,7 @@ typedef struct sqlda_struct sqlda_t; <variablelist> <varlistentry> - <term><literal>sqldaid</></term> + <term><literal>sqldaid</literal></term> <listitem> <para> It contains the literal string <literal>"SQLDA "</literal>. @@ -3936,7 +3936,7 @@ typedef struct sqlda_struct sqlda_t; </varlistentry> <varlistentry> - <term><literal>sqldabc</></term> + <term><literal>sqldabc</literal></term> <listitem> <para> It contains the size of the allocated space in bytes. @@ -3945,7 +3945,7 @@ typedef struct sqlda_struct sqlda_t; </varlistentry> <varlistentry> - <term><literal>sqln</></term> + <term><literal>sqln</literal></term> <listitem> <para> It contains the number of input parameters for a parameterized query in @@ -3960,7 +3960,7 @@ typedef struct sqlda_struct sqlda_t; </varlistentry> <varlistentry> - <term><literal>sqld</></term> + <term><literal>sqld</literal></term> <listitem> <para> It contains the number of fields in a result set. 
@@ -3969,17 +3969,17 @@ typedef struct sqlda_struct sqlda_t; </varlistentry> <varlistentry> - <term><literal>desc_next</></term> + <term><literal>desc_next</literal></term> <listitem> <para> If the query returns more than one record, multiple linked - SQLDA structures are returned, and <literal>desc_next</> holds + SQLDA structures are returned, and <literal>desc_next</literal> holds a pointer to the next entry in the list. </para> </listitem> </varlistentry> <varlistentry> - <term><literal>sqlvar</></term> + <term><literal>sqlvar</literal></term> <listitem> <para> This is the array of the columns in the result set. @@ -4015,7 +4015,7 @@ typedef struct sqlvar_struct sqlvar_t; <variablelist> <varlistentry> - <term><literal>sqltype</></term> + <term><literal>sqltype</literal></term> <listitem> <para> Contains the type identifier of the field. For values, @@ -4025,7 +4025,7 @@ typedef struct sqlvar_struct sqlvar_t; </varlistentry> <varlistentry> - <term><literal>sqllen</></term> + <term><literal>sqllen</literal></term> <listitem> <para> Contains the binary length of the field. e.g. 4 bytes for <type>ECPGt_int</type>. @@ -4034,7 +4034,7 @@ typedef struct sqlvar_struct sqlvar_t; </varlistentry> <varlistentry> - <term><literal>sqldata</></term> + <term><literal>sqldata</literal></term> <listitem> <para> Points to the data. The format of the data is described @@ -4044,7 +4044,7 @@ typedef struct sqlvar_struct sqlvar_t; </varlistentry> <varlistentry> - <term><literal>sqlind</></term> + <term><literal>sqlind</literal></term> <listitem> <para> Points to the null indicator. 0 means not null, -1 means @@ -4054,7 +4054,7 @@ typedef struct sqlvar_struct sqlvar_t; </varlistentry> <varlistentry> - <term><literal>sqlname</></term> + <term><literal>sqlname</literal></term> <listitem> <para> The name of the field. 
@@ -4084,7 +4084,7 @@ struct sqlname The meaning of the fields is: <variablelist> <varlistentry> - <term><literal>length</></term> + <term><literal>length</literal></term> <listitem> <para> Contains the length of the field name. @@ -4092,7 +4092,7 @@ struct sqlname </listitem> </varlistentry> <varlistentry> - <term><literal>data</></term> + <term><literal>data</literal></term> <listitem> <para> Contains the actual field name. @@ -4113,10 +4113,10 @@ struct sqlname SQLDA are: </para> <step><simpara>Declare an <type>sqlda_t</type> structure to receive the result set.</simpara></step> - <step><simpara>Execute <command>FETCH</>/<command>EXECUTE</>/<command>DESCRIBE</> commands to process a query specifying the declared SQLDA.</simpara></step> - <step><simpara>Check the number of records in the result set by looking at <structfield>sqln</>, a member of the <type>sqlda_t</type> structure.</simpara></step> - <step><simpara>Get the values of each column from <literal>sqlvar[0]</>, <literal>sqlvar[1]</>, etc., members of the <type>sqlda_t</type> structure.</simpara></step> - <step><simpara>Go to next row (<type>sqlda_t</type> structure) by following the <structfield>desc_next</> pointer, a member of the <type>sqlda_t</type> structure.</simpara></step> + <step><simpara>Execute <command>FETCH</command>/<command>EXECUTE</command>/<command>DESCRIBE</command> commands to process a query specifying the declared SQLDA.</simpara></step> + <step><simpara>Check the number of records in the result set by looking at <structfield>sqln</structfield>, a member of the <type>sqlda_t</type> structure.</simpara></step> + <step><simpara>Get the values of each column from <literal>sqlvar[0]</literal>, <literal>sqlvar[1]</literal>, etc., members of the <type>sqlda_t</type> structure.</simpara></step> + <step><simpara>Go to next row (<type>sqlda_t</type> structure) by following the <structfield>desc_next</structfield> pointer, a member of the <type>sqlda_t</type> structure.</simpara></step> 
<step><simpara>Repeat the above steps as needed.</simpara></step> </procedure> @@ -4133,7 +4133,7 @@ sqlda_t *sqlda1; <para> Next, specify the SQLDA in a command. This is - a <command>FETCH</> command example. + a <command>FETCH</command> command example. <programlisting> EXEC SQL FETCH NEXT FROM cur1 INTO DESCRIPTOR sqlda1; </programlisting> @@ -4168,10 +4168,10 @@ for (i = 0; i < cur_sqlda->sqld; i++) </para> <para> - To get a column value, check the <structfield>sqltype</> value, + To get a column value, check the <structfield>sqltype</structfield> value, a member of the <type>sqlvar_t</type> structure. Then, switch to an appropriate way, depending on the column type, to copy - data from the <structfield>sqlvar</> field to a host variable. + data from the <structfield>sqlvar</structfield> field to a host variable. <programlisting> char var_buf[1024]; @@ -4225,7 +4225,7 @@ EXEC SQL PREPARE stmt1 FROM :query; <para> Next, allocate memory for an SQLDA, and set the number of input - parameters in <structfield>sqln</>, a member variable of + parameters in <structfield>sqln</structfield>, a member variable of the <type>sqlda_t</type> structure. When two or more input parameters are required for the prepared query, the application has to allocate additional memory space which is calculated by @@ -4386,8 +4386,8 @@ main(void) <para> Read each column in the first record. The number of columns is - stored in <structfield>sqld</>, the actual data of the first - column is stored in <literal>sqlvar[0]</>, both members of + stored in <structfield>sqld</structfield>, the actual data of the first + column is stored in <literal>sqlvar[0]</literal>, both members of the <type>sqlda_t</type> structure. <programlisting> @@ -4404,9 +4404,9 @@ main(void) </para> <para> - Now, the column data is stored in the variable <varname>v</>. + Now, the column data is stored in the variable <varname>v</varname>. 
Copy every datum into host variables, looking - at <literal>v.sqltype</> for the type of the column. + at <literal>v.sqltype</literal> for the type of the column. <programlisting> switch (v.sqltype) { int intval; @@ -4947,7 +4947,7 @@ struct </para> <para> - Here is one example that combines the use of <literal>WHENEVER</> + Here is one example that combines the use of <literal>WHENEVER</literal> and <varname>sqlca</varname>, printing out the contents of <varname>sqlca</varname> when an error occurs. This is perhaps useful for debugging or prototyping applications, before @@ -5227,8 +5227,8 @@ while (1) <listitem> <para> This means the host variable is of type <type>bool</type> and - the datum in the database is neither <literal>'t'</> nor - <literal>'f'</>. (SQLSTATE 42804) + the datum in the database is neither <literal>'t'</literal> nor + <literal>'f'</literal>. (SQLSTATE 42804) </para> </listitem> </varlistentry> @@ -5575,8 +5575,8 @@ EXEC SQL INCLUDE "<replaceable>filename</replaceable>"; Similar to the directive <literal>#define</literal> that is known from C, embedded SQL has a similar concept: <programlisting> -EXEC SQL DEFINE <replaceable>name</>; -EXEC SQL DEFINE <replaceable>name</> <replaceable>value</>; +EXEC SQL DEFINE <replaceable>name</replaceable>; +EXEC SQL DEFINE <replaceable>name</replaceable> <replaceable>value</replaceable>; </programlisting> So you can define a name: <programlisting> @@ -5587,7 +5587,7 @@ EXEC SQL DEFINE HAVE_FEATURE; EXEC SQL DEFINE MYNUMBER 12; EXEC SQL DEFINE MYSTRING 'abc'; </programlisting> - Use <literal>undef</> to remove a previous definition: + Use <literal>undef</literal> to remove a previous definition: <programlisting> EXEC SQL UNDEF MYNUMBER; </programlisting> @@ -5597,15 +5597,15 @@ EXEC SQL UNDEF MYNUMBER; Of course you can continue to use the C versions <literal>#define</literal> and <literal>#undef</literal> in your embedded SQL program. The difference is where your defined values get evaluated. 
If you use <literal>EXEC SQL - DEFINE</> then the <command>ecpg</> preprocessor evaluates the defines and substitutes + DEFINE</literal> then the <command>ecpg</command> preprocessor evaluates the defines and substitutes the values. For example if you write: <programlisting> EXEC SQL DEFINE MYNUMBER 12; ... EXEC SQL UPDATE Tbl SET col = MYNUMBER; </programlisting> - then <command>ecpg</> will already do the substitution and your C compiler will never - see any name or identifier <literal>MYNUMBER</>. Note that you cannot use + then <command>ecpg</command> will already do the substitution and your C compiler will never + see any name or identifier <literal>MYNUMBER</literal>. Note that you cannot use <literal>#define</literal> for a constant that you are going to use in an embedded SQL query because in this case the embedded SQL precompiler is not able to see this declaration. @@ -5619,23 +5619,23 @@ EXEC SQL UPDATE Tbl SET col = MYNUMBER; <variablelist> <varlistentry> - <term><literal>EXEC SQL ifdef <replaceable>name</>;</literal></term> + <term><literal>EXEC SQL ifdef <replaceable>name</replaceable>;</literal></term> <listitem> <para> - Checks a <replaceable>name</> and processes subsequent lines if - <replaceable>name</> has been created with <literal>EXEC SQL define - <replaceable>name</></literal>. + Checks a <replaceable>name</replaceable> and processes subsequent lines if + <replaceable>name</replaceable> has been created with <literal>EXEC SQL define + <replaceable>name</replaceable></literal>. </para> </listitem> </varlistentry> <varlistentry> - <term><literal>EXEC SQL ifndef <replaceable>name</>;</literal></term> + <term><literal>EXEC SQL ifndef <replaceable>name</replaceable>;</literal></term> <listitem> <para> - Checks a <replaceable>name</> and processes subsequent lines if - <replaceable>name</> has <emphasis>not</emphasis> been created with - <literal>EXEC SQL define <replaceable>name</></literal>. 
+ Checks a <replaceable>name</replaceable> and processes subsequent lines if + <replaceable>name</replaceable> has <emphasis>not</emphasis> been created with + <literal>EXEC SQL define <replaceable>name</replaceable></literal>. </para> </listitem> </varlistentry> @@ -5645,19 +5645,19 @@ EXEC SQL UPDATE Tbl SET col = MYNUMBER; <listitem> <para> Starts processing an alternative section to a section introduced by - either <literal>EXEC SQL ifdef <replaceable>name</></literal> or - <literal>EXEC SQL ifndef <replaceable>name</></literal>. + either <literal>EXEC SQL ifdef <replaceable>name</replaceable></literal> or + <literal>EXEC SQL ifndef <replaceable>name</replaceable></literal>. </para> </listitem> </varlistentry> <varlistentry> - <term><literal>EXEC SQL elif <replaceable>name</>;</literal></term> + <term><literal>EXEC SQL elif <replaceable>name</replaceable>;</literal></term> <listitem> <para> - Checks <replaceable>name</> and starts an alternative section if - <replaceable>name</> has been created with <literal>EXEC SQL define - <replaceable>name</></literal>. + Checks <replaceable>name</replaceable> and starts an alternative section if + <replaceable>name</replaceable> has been created with <literal>EXEC SQL define + <replaceable>name</replaceable></literal>. </para> </listitem> </varlistentry> @@ -5707,7 +5707,7 @@ EXEC SQL endif; <para> The preprocessor program is called <filename>ecpg</filename> and is - included in a normal <productname>PostgreSQL</> installation. + included in a normal <productname>PostgreSQL</productname> installation. Embedded SQL programs are typically named with an extension <filename>.pgc</filename>. 
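The <literal>DEFINE</literal> and <literal>ifdef</literal>/<literal>elif</literal>/<literal>endif</literal> directives described above combine as in the following <filename>.pgc</filename> sketch. The table and column names are invented for illustration, and the file is only meaningful after being run through <command>ecpg</command>:

```c
/* sketch.pgc — hypothetical names; preprocess with: ecpg sketch.pgc */
EXEC SQL DEFINE MYNUMBER 12;
EXEC SQL DEFINE HAVE_FEATURE;

void
update_col(void)
{
EXEC SQL ifdef HAVE_FEATURE;
    /* processed: HAVE_FEATURE was created with EXEC SQL DEFINE */
    EXEC SQL UPDATE tbl SET col = MYNUMBER;  /* ecpg substitutes 12 here */
EXEC SQL elif OTHER_FEATURE;
    EXEC SQL UPDATE tbl SET col = 0;
EXEC SQL endif;
}
```

Because <command>ecpg</command> performs the substitution, the C compiler never sees the identifier <literal>MYNUMBER</literal>; this is also why such constants cannot come from a C <literal>#define</literal>, which the precompiler does not evaluate.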
If you have a program file called <filename>prog1.pgc</filename>, you can preprocess it by simply @@ -5727,8 +5727,8 @@ ecpg prog1.pgc cc -c prog1.c </programlisting> The generated C source files include header files from the - <productname>PostgreSQL</> installation, so if you installed - <productname>PostgreSQL</> in a location that is not searched by + <productname>PostgreSQL</productname> installation, so if you installed + <productname>PostgreSQL</productname> in a location that is not searched by default, you have to add an option such as <literal>-I/usr/local/pgsql/include</literal> to the compilation command line. @@ -5803,10 +5803,10 @@ ECPG = ecpg </para> <note> <para> - On Windows, if the <application>ecpg</> libraries and an application are + On Windows, if the <application>ecpg</application> libraries and an application are compiled with different flags, this function call will crash the application because the internal representation of the - <literal>FILE</> pointers differ. Specifically, + <literal>FILE</literal> pointers differ. Specifically, multithreaded/single-threaded, release/debug, and static/dynamic flags should be the same for the library and all applications using that library. @@ -5844,7 +5844,7 @@ ECPG = ecpg <function>ECPGstatus(int <replaceable>lineno</replaceable>, const char* <replaceable>connection_name</replaceable>)</function> returns true if you are connected to a database and false if not. - <replaceable>connection_name</replaceable> can be <literal>NULL</> + <replaceable>connection_name</replaceable> can be <literal>NULL</literal> if a single connection is being used. </para> </listitem> @@ -6217,10 +6217,10 @@ main(void) <para> To build the application, proceed as follows. 
Convert - <filename>test_mod.pgc</> into <filename>test_mod.c</> by + <filename>test_mod.pgc</filename> into <filename>test_mod.c</filename> by running <command>ecpg</command>, and generate - <filename>test_mod.o</> by compiling - <filename>test_mod.c</> with the C compiler: + <filename>test_mod.o</filename> by compiling + <filename>test_mod.c</filename> with the C compiler: <programlisting> ecpg -o test_mod.c test_mod.pgc cc -c test_mod.c -o test_mod.o @@ -6228,16 +6228,16 @@ cc -c test_mod.c -o test_mod.o </para> <para> - Next, generate <filename>test_cpp.o</> by compiling - <filename>test_cpp.cpp</> with the C++ compiler: + Next, generate <filename>test_cpp.o</filename> by compiling + <filename>test_cpp.cpp</filename> with the C++ compiler: <programlisting> c++ -c test_cpp.cpp -o test_cpp.o </programlisting> </para> <para> - Finally, link these object files, <filename>test_cpp.o</> - and <filename>test_mod.o</>, into one executable, using the C++ + Finally, link these object files, <filename>test_cpp.o</filename> + and <filename>test_mod.o</filename>, into one executable, using the C++ compiler driver: <programlisting> c++ test_cpp.o test_mod.o -lecpg -o test_cpp @@ -7101,7 +7101,7 @@ EXEC SQL GET DESCRIPTOR d VALUE 2 :d_data = DATA; <para> Here is an example for a whole procedure of - executing <literal>SELECT current_database();</> and showing the number of + executing <literal>SELECT current_database();</literal> and showing the number of columns, the column data length, and the column data: <programlisting> int @@ -7866,10 +7866,10 @@ main(void) <sect1 id="ecpg-informix-compat"> <title><productname>Informix</productname> Compatibility Mode</title> <para> - <command>ecpg</command> can be run in a so-called <firstterm>Informix compatibility mode</>. If + <command>ecpg</command> can be run in a so-called <firstterm>Informix compatibility mode</firstterm>. 
If this mode is active, it tries to behave as if it were the <productname>Informix</productname> precompiler for <productname>Informix</productname> E/SQL. Generally speaking, this will allow you to use - the dollar sign instead of the <literal>EXEC SQL</> primitive to introduce + the dollar sign instead of the <literal>EXEC SQL</literal> primitive to introduce embedded SQL commands: <programlisting> $int j = 3; @@ -7891,11 +7891,11 @@ $COMMIT; </note> <para> - There are two compatibility modes: <literal>INFORMIX</>, <literal>INFORMIX_SE</> + There are two compatibility modes: <literal>INFORMIX</literal>, <literal>INFORMIX_SE</literal> </para> <para> When linking programs that use this compatibility mode, remember to link - against <literal>libcompat</> that is shipped with ECPG. + against <literal>libcompat</literal> that is shipped with ECPG. </para> <para> Besides the previously explained syntactic sugar, the <productname>Informix</productname> compatibility @@ -7913,7 +7913,7 @@ $COMMIT; no drop-in replacement if you are using <productname>Informix</productname> at the moment. Moreover, some of the data types are different. For example, <productname>PostgreSQL's</productname> datetime and interval types do not - know about ranges like for example <literal>YEAR TO MINUTE</> so you won't + know about ranges like for example <literal>YEAR TO MINUTE</literal> so you won't find support in ECPG for that either. </para> @@ -7938,11 +7938,11 @@ EXEC SQL FETCH MYCUR INTO :userid; <para> <variablelist> <varlistentry> - <term><literal>CLOSE DATABASE</></term> + <term><literal>CLOSE DATABASE</literal></term> <listitem> <para> This statement closes the current connection. 
In fact, this is a - synonym for ECPG's <literal>DISCONNECT CURRENT</>: + synonym for ECPG's <literal>DISCONNECT CURRENT</literal>: <programlisting> $CLOSE DATABASE; /* close the current connection */ EXEC SQL CLOSE DATABASE; @@ -7951,12 +7951,12 @@ EXEC SQL CLOSE DATABASE; </listitem> </varlistentry> <varlistentry> - <term><literal>FREE cursor_name</></term> + <term><literal>FREE cursor_name</literal></term> <listitem> <para> Due to the differences in how ECPG works compared to Informix's ESQL/C (i.e. which steps are purely grammar transformations and which steps rely on the underlying run-time library) - there is no <literal>FREE cursor_name</> statement in ECPG. This is because in ECPG, + there is no <literal>FREE cursor_name</literal> statement in ECPG. This is because in ECPG, <literal>DECLARE CURSOR</literal> doesn't translate to a function call into the run-time library that uses the cursor name. This means that there's no run-time bookkeeping of SQL cursors in the ECPG run-time library, only in the PostgreSQL server. @@ -7964,10 +7964,10 @@ EXEC SQL CLOSE DATABASE; </listitem> </varlistentry> <varlistentry> - <term><literal>FREE statement_name</></term> + <term><literal>FREE statement_name</literal></term> <listitem> <para> - <literal>FREE statement_name</> is a synonym for <literal>DEALLOCATE PREPARE statement_name</>. + <literal>FREE statement_name</literal> is a synonym for <literal>DEALLOCATE PREPARE statement_name</literal>. </para> </listitem> </varlistentry> @@ -8024,16 +8024,16 @@ typedef struct sqlda_compat sqlda_t; <variablelist> <varlistentry> - <term><literal>sqld</></term> + <term><literal>sqld</literal></term> <listitem> <para> - The number of fields in the <literal>SQLDA</> descriptor. + The number of fields in the <literal>SQLDA</literal> descriptor. </para> </listitem> </varlistentry> <varlistentry> - <term><literal>sqlvar</></term> + <term><literal>sqlvar</literal></term> <listitem> <para> Pointer to the per-field properties. 
@@ -8042,7 +8042,7 @@ typedef struct sqlda_compat sqlda_t; </varlistentry> <varlistentry> - <term><literal>desc_name</></term> + <term><literal>desc_name</literal></term> <listitem> <para> Unused, filled with zero-bytes. @@ -8051,7 +8051,7 @@ typedef struct sqlda_compat sqlda_t; </varlistentry> <varlistentry> - <term><literal>desc_occ</></term> + <term><literal>desc_occ</literal></term> <listitem> <para> Size of the allocated structure. @@ -8060,7 +8060,7 @@ typedef struct sqlda_compat sqlda_t; </varlistentry> <varlistentry> - <term><literal>desc_next</></term> + <term><literal>desc_next</literal></term> <listitem> <para> Pointer to the next SQLDA structure if the result set contains more than one record. @@ -8069,7 +8069,7 @@ typedef struct sqlda_compat sqlda_t; </varlistentry> <varlistentry> - <term><literal>reserved</></term> + <term><literal>reserved</literal></term> <listitem> <para> Unused pointer, contains NULL. Kept for Informix-compatibility. @@ -8084,7 +8084,7 @@ typedef struct sqlda_compat sqlda_t; <variablelist> <varlistentry> - <term><literal>sqltype</></term> + <term><literal>sqltype</literal></term> <listitem> <para> Type of the field. Constants are in <literal>sqltypes.h</literal> @@ -8093,7 +8093,7 @@ typedef struct sqlda_compat sqlda_t; </varlistentry> <varlistentry> - <term><literal>sqllen</></term> + <term><literal>sqllen</literal></term> <listitem> <para> Length of the field data. @@ -8102,7 +8102,7 @@ typedef struct sqlda_compat sqlda_t; </varlistentry> <varlistentry> - <term><literal>sqldata</></term> + <term><literal>sqldata</literal></term> <listitem> <para> Pointer to the field data. The pointer is of <literal>char *</literal> type, @@ -8123,7 +8123,7 @@ switch (sqldata->sqlvar[i].sqltype) </varlistentry> <varlistentry> - <term><literal>sqlind</></term> + <term><literal>sqlind</literal></term> <listitem> <para> Pointer to the NULL indicator. If returned by DESCRIBE or FETCH then it's always a valid pointer. 
@@ -8139,7 +8139,7 @@ if (*(int2 *)sqldata->sqlvar[i].sqlind != 0) </varlistentry> <varlistentry> - <term><literal>sqlname</></term> + <term><literal>sqlname</literal></term> <listitem> <para> Name of the field. 0-terminated string. @@ -8148,16 +8148,16 @@ if (*(int2 *)sqldata->sqlvar[i].sqlind != 0) </varlistentry> <varlistentry> - <term><literal>sqlformat</></term> + <term><literal>sqlformat</literal></term> <listitem> <para> - Reserved in Informix, value of <function>PQfformat()</> for the field. + Reserved in Informix, value of <function>PQfformat()</function> for the field. </para> </listitem> </varlistentry> <varlistentry> - <term><literal>sqlitype</></term> + <term><literal>sqlitype</literal></term> <listitem> <para> Type of the NULL indicator data. It's always SQLSMINT when returning data from the server. @@ -8168,7 +8168,7 @@ if (*(int2 *)sqldata->sqlvar[i].sqlind != 0) </varlistentry> <varlistentry> - <term><literal>sqlilen</></term> + <term><literal>sqlilen</literal></term> <listitem> <para> Length of the NULL indicator data. @@ -8177,23 +8177,23 @@ if (*(int2 *)sqldata->sqlvar[i].sqlind != 0) </varlistentry> <varlistentry> - <term><literal>sqlxid</></term> + <term><literal>sqlxid</literal></term> <listitem> <para> - Extended type of the field, result of <function>PQftype()</>. + Extended type of the field, result of <function>PQftype()</function>. 
</para> </listitem> </varlistentry> <varlistentry> - <term><literal>sqltypename</></term> - <term><literal>sqltypelen</></term> - <term><literal>sqlownerlen</></term> - <term><literal>sqlsourcetype</></term> - <term><literal>sqlownername</></term> - <term><literal>sqlsourceid</></term> - <term><literal>sqlflags</></term> - <term><literal>sqlreserved</></term> + <term><literal>sqltypename</literal></term> + <term><literal>sqltypelen</literal></term> + <term><literal>sqlownerlen</literal></term> + <term><literal>sqlsourcetype</literal></term> + <term><literal>sqlownername</literal></term> + <term><literal>sqlsourceid</literal></term> + <term><literal>sqlflags</literal></term> + <term><literal>sqlreserved</literal></term> <listitem> <para> Unused. @@ -8202,7 +8202,7 @@ if (*(int2 *)sqldata->sqlvar[i].sqlind != 0) </varlistentry> <varlistentry> - <term><literal>sqlilongdata</></term> + <term><literal>sqlilongdata</literal></term> <listitem> <para> It equals to <literal>sqldata</literal> if <literal>sqllen</literal> is larger than 32kB. @@ -8247,7 +8247,7 @@ EXEC SQL INCLUDE sqlda.h; free(sqlda); /* The main structure is all to be free(), * sqlda and sqlda->sqlvar is in one allocated area */ </programlisting> - For more information, see the <literal>sqlda.h</> header and the + For more information, see the <literal>sqlda.h</literal> header and the <literal>src/interfaces/ecpg/test/compat_informix/sqlda.pgc</literal> regression test. </para> </sect2> @@ -8257,7 +8257,7 @@ EXEC SQL INCLUDE sqlda.h; <para> <variablelist> <varlistentry> - <term><function>decadd</></term> + <term><function>decadd</function></term> <listitem> <para> Add two decimal type values. 
@@ -8265,19 +8265,19 @@ EXEC SQL INCLUDE sqlda.h; int decadd(decimal *arg1, decimal *arg2, decimal *sum); </synopsis> The function receives a pointer to the first operand of type decimal - (<literal>arg1</>), a pointer to the second operand of type decimal - (<literal>arg2</>) and a pointer to a value of type decimal that will - contain the sum (<literal>sum</>). On success, the function returns 0. - <symbol>ECPG_INFORMIX_NUM_OVERFLOW</> is returned in case of overflow and - <symbol>ECPG_INFORMIX_NUM_UNDERFLOW</> in case of underflow. -1 is returned for - other failures and <varname>errno</> is set to the respective <varname>errno</> number of the + (<literal>arg1</literal>), a pointer to the second operand of type decimal + (<literal>arg2</literal>) and a pointer to a value of type decimal that will + contain the sum (<literal>sum</literal>). On success, the function returns 0. + <symbol>ECPG_INFORMIX_NUM_OVERFLOW</symbol> is returned in case of overflow and + <symbol>ECPG_INFORMIX_NUM_UNDERFLOW</symbol> in case of underflow. -1 is returned for + other failures and <varname>errno</varname> is set to the respective <varname>errno</varname> number of the pgtypeslib. </para> </listitem> </varlistentry> <varlistentry> - <term><function>deccmp</></term> + <term><function>deccmp</function></term> <listitem> <para> Compare two variables of type decimal. @@ -8285,25 +8285,25 @@ int decadd(decimal *arg1, decimal *arg2, decimal *sum); int deccmp(decimal *arg1, decimal *arg2); </synopsis> The function receives a pointer to the first decimal value - (<literal>arg1</>), a pointer to the second decimal value - (<literal>arg2</>) and returns an integer value that indicates which is + (<literal>arg1</literal>), a pointer to the second decimal value + (<literal>arg2</literal>) and returns an integer value that indicates which is the bigger value. 
<itemizedlist> <listitem> <para> - 1, if the value that <literal>arg1</> points to is bigger than the - value that <literal>var2</> points to + 1, if the value that <literal>arg1</literal> points to is bigger than the + value that <literal>var2</literal> points to </para> </listitem> <listitem> <para> - -1, if the value that <literal>arg1</> points to is smaller than the - value that <literal>arg2</> points to </para> + -1, if the value that <literal>arg1</literal> points to is smaller than the + value that <literal>arg2</literal> points to </para> </listitem> <listitem> <para> - 0, if the value that <literal>arg1</> points to and the value that - <literal>arg2</> points to are equal + 0, if the value that <literal>arg1</literal> points to and the value that + <literal>arg2</literal> points to are equal </para> </listitem> </itemizedlist> @@ -8312,7 +8312,7 @@ int deccmp(decimal *arg1, decimal *arg2); </varlistentry> <varlistentry> - <term><function>deccopy</></term> + <term><function>deccopy</function></term> <listitem> <para> Copy a decimal value. @@ -8320,15 +8320,15 @@ int deccmp(decimal *arg1, decimal *arg2); void deccopy(decimal *src, decimal *target); </synopsis> The function receives a pointer to the decimal value that should be - copied as the first argument (<literal>src</>) and a pointer to the - target structure of type decimal (<literal>target</>) as the second + copied as the first argument (<literal>src</literal>) and a pointer to the + target structure of type decimal (<literal>target</literal>) as the second argument. </para> </listitem> </varlistentry> <varlistentry> - <term><function>deccvasc</></term> + <term><function>deccvasc</function></term> <listitem> <para> Convert a value from its ASCII representation into a decimal type. 
@@ -8336,8 +8336,8 @@ void deccopy(decimal *src, decimal *target); int deccvasc(char *cp, int len, decimal *np); </synopsis> The function receives a pointer to string that contains the string - representation of the number to be converted (<literal>cp</>) as well - as its length <literal>len</>. <literal>np</> is a pointer to the + representation of the number to be converted (<literal>cp</literal>) as well + as its length <literal>len</literal>. <literal>np</literal> is a pointer to the decimal value that saves the result of the operation. </para> <para> @@ -8350,18 +8350,18 @@ int deccvasc(char *cp, int len, decimal *np); </para> <para> The function returns 0 on success. If overflow or underflow occurred, - <literal>ECPG_INFORMIX_NUM_OVERFLOW</> or - <literal>ECPG_INFORMIX_NUM_UNDERFLOW</> is returned. If the ASCII + <literal>ECPG_INFORMIX_NUM_OVERFLOW</literal> or + <literal>ECPG_INFORMIX_NUM_UNDERFLOW</literal> is returned. If the ASCII representation could not be parsed, - <literal>ECPG_INFORMIX_BAD_NUMERIC</> is returned or - <literal>ECPG_INFORMIX_BAD_EXPONENT</> if this problem occurred while + <literal>ECPG_INFORMIX_BAD_NUMERIC</literal> is returned or + <literal>ECPG_INFORMIX_BAD_EXPONENT</literal> if this problem occurred while parsing the exponent. </para> </listitem> </varlistentry> <varlistentry> - <term><function>deccvdbl</></term> + <term><function>deccvdbl</function></term> <listitem> <para> Convert a value of type double to a value of type decimal. @@ -8369,8 +8369,8 @@ int deccvasc(char *cp, int len, decimal *np); int deccvdbl(double dbl, decimal *np); </synopsis> The function receives the variable of type double that should be - converted as its first argument (<literal>dbl</>). As the second - argument (<literal>np</>), the function receives a pointer to the + converted as its first argument (<literal>dbl</literal>). 
As the second + argument (<literal>np</literal>), the function receives a pointer to the decimal variable that should hold the result of the operation. </para> <para> @@ -8381,7 +8381,7 @@ int deccvdbl(double dbl, decimal *np); </varlistentry> <varlistentry> - <term><function>deccvint</></term> + <term><function>deccvint</function></term> <listitem> <para> Convert a value of type int to a value of type decimal. @@ -8389,8 +8389,8 @@ int deccvdbl(double dbl, decimal *np); int deccvint(int in, decimal *np); </synopsis> The function receives the variable of type int that should be - converted as its first argument (<literal>in</>). As the second - argument (<literal>np</>), the function receives a pointer to the + converted as its first argument (<literal>in</literal>). As the second + argument (<literal>np</literal>), the function receives a pointer to the decimal variable that should hold the result of the operation. </para> <para> @@ -8401,7 +8401,7 @@ int deccvint(int in, decimal *np); </varlistentry> <varlistentry> - <term><function>deccvlong</></term> + <term><function>deccvlong</function></term> <listitem> <para> Convert a value of type long to a value of type decimal. @@ -8409,8 +8409,8 @@ int deccvint(int in, decimal *np); int deccvlong(long lng, decimal *np); </synopsis> The function receives the variable of type long that should be - converted as its first argument (<literal>lng</>). As the second - argument (<literal>np</>), the function receives a pointer to the + converted as its first argument (<literal>lng</literal>). As the second + argument (<literal>np</literal>), the function receives a pointer to the decimal variable that should hold the result of the operation. </para> <para> @@ -8421,7 +8421,7 @@ int deccvlong(long lng, decimal *np); </varlistentry> <varlistentry> - <term><function>decdiv</></term> + <term><function>decdiv</function></term> <listitem> <para> Divide two variables of type decimal. 
@@ -8429,15 +8429,15 @@ int deccvlong(long lng, decimal *np); int decdiv(decimal *n1, decimal *n2, decimal *result); </synopsis> The function receives pointers to the variables that are the first - (<literal>n1</>) and the second (<literal>n2</>) operands and - calculates <literal>n1</>/<literal>n2</>. <literal>result</> is a + (<literal>n1</literal>) and the second (<literal>n2</literal>) operands and + calculates <literal>n1</literal>/<literal>n2</literal>. <literal>result</literal> is a pointer to the variable that should hold the result of the operation. </para> <para> On success, 0 is returned and a negative value if the division fails. If overflow or underflow occurred, the function returns - <literal>ECPG_INFORMIX_NUM_OVERFLOW</> or - <literal>ECPG_INFORMIX_NUM_UNDERFLOW</> respectively. If an attempt to + <literal>ECPG_INFORMIX_NUM_OVERFLOW</literal> or + <literal>ECPG_INFORMIX_NUM_UNDERFLOW</literal> respectively. If an attempt to divide by zero is observed, the function returns <literal>ECPG_INFORMIX_DIVIDE_ZERO</literal>. </para> @@ -8445,7 +8445,7 @@ int decdiv(decimal *n1, decimal *n2, decimal *result); </varlistentry> <varlistentry> - <term><function>decmul</></term> + <term><function>decmul</function></term> <listitem> <para> Multiply two decimal values. @@ -8453,21 +8453,21 @@ int decdiv(decimal *n1, decimal *n2, decimal *result); int decmul(decimal *n1, decimal *n2, decimal *result); </synopsis> The function receives pointers to the variables that are the first - (<literal>n1</>) and the second (<literal>n2</>) operands and - calculates <literal>n1</>*<literal>n2</>. <literal>result</> is a + (<literal>n1</literal>) and the second (<literal>n2</literal>) operands and + calculates <literal>n1</literal>*<literal>n2</literal>. <literal>result</literal> is a pointer to the variable that should hold the result of the operation. </para> <para> On success, 0 is returned and a negative value if the multiplication fails. 
If overflow or underflow occurred, the function returns - <literal>ECPG_INFORMIX_NUM_OVERFLOW</> or - <literal>ECPG_INFORMIX_NUM_UNDERFLOW</> respectively. + <literal>ECPG_INFORMIX_NUM_OVERFLOW</literal> or + <literal>ECPG_INFORMIX_NUM_UNDERFLOW</literal> respectively. </para> </listitem> </varlistentry> <varlistentry> - <term><function>decsub</></term> + <term><function>decsub</function></term> <listitem> <para> Subtract one decimal value from another. @@ -8475,21 +8475,21 @@ int decmul(decimal *n1, decimal *n2, decimal *result); int decsub(decimal *n1, decimal *n2, decimal *result); </synopsis> The function receives pointers to the variables that are the first - (<literal>n1</>) and the second (<literal>n2</>) operands and - calculates <literal>n1</>-<literal>n2</>. <literal>result</> is a + (<literal>n1</literal>) and the second (<literal>n2</literal>) operands and + calculates <literal>n1</literal>-<literal>n2</literal>. <literal>result</literal> is a pointer to the variable that should hold the result of the operation. </para> <para> On success, 0 is returned and a negative value if the subtraction fails. If overflow or underflow occurred, the function returns - <literal>ECPG_INFORMIX_NUM_OVERFLOW</> or - <literal>ECPG_INFORMIX_NUM_UNDERFLOW</> respectively. + <literal>ECPG_INFORMIX_NUM_OVERFLOW</literal> or + <literal>ECPG_INFORMIX_NUM_UNDERFLOW</literal> respectively. </para> </listitem> </varlistentry> <varlistentry> - <term><function>dectoasc</></term> + <term><function>dectoasc</function></term> <listitem> <para> Convert a variable of type decimal to its ASCII representation in a C @@ -8498,28 +8498,28 @@ int decsub(decimal *n1, decimal *n2, decimal *result); int dectoasc(decimal *np, char *cp, int len, int right) </synopsis> The function receives a pointer to a variable of type decimal - (<literal>np</>) that it converts to its textual representation. - <literal>cp</> is the buffer that should hold the result of the - operation. 
The parameter <literal>right</> specifies how many digits + (<literal>np</literal>) that it converts to its textual representation. + <literal>cp</literal> is the buffer that should hold the result of the + operation. The parameter <literal>right</literal> specifies how many digits right of the decimal point should be included in the output. The result will be rounded to this number of decimal digits. Setting - <literal>right</> to -1 indicates that all available decimal digits + <literal>right</literal> to -1 indicates that all available decimal digits should be included in the output. If the length of the output buffer, - which is indicated by <literal>len</>, is not sufficient to hold the + which is indicated by <literal>len</literal>, is not sufficient to hold the textual representation including the trailing zero byte, only a - single <literal>*</> character is stored in the result and -1 is + single <literal>*</literal> character is stored in the result and -1 is returned. </para> <para> - The function returns either -1 if the buffer <literal>cp</> was too - small or <literal>ECPG_INFORMIX_OUT_OF_MEMORY</> if memory was + The function returns either -1 if the buffer <literal>cp</literal> was too + small or <literal>ECPG_INFORMIX_OUT_OF_MEMORY</literal> if memory was exhausted. </para> </listitem> </varlistentry> <varlistentry> - <term><function>dectodbl</></term> + <term><function>dectodbl</function></term> <listitem> <para> Convert a variable of type decimal to a double. @@ -8527,8 +8527,8 @@ int dectoasc(decimal *np, char *cp, int len, int right) int dectodbl(decimal *np, double *dblp); </synopsis> The function receives a pointer to the decimal value to convert - (<literal>np</>) and a pointer to the double variable that - should hold the result of the operation (<literal>dblp</>). + (<literal>np</literal>) and a pointer to the double variable that + should hold the result of the operation (<literal>dblp</literal>).
</para> <para> On success, 0 is returned and a negative value if the conversion @@ -8538,7 +8538,7 @@ int dectodbl(decimal *np, double *dblp); </varlistentry> <varlistentry> - <term><function>dectoint</></term> + <term><function>dectoint</function></term> <listitem> <para> Convert a variable of type decimal to an integer. @@ -8546,25 +8546,25 @@ int dectodbl(decimal *np, double *dblp); int dectoint(decimal *np, int *ip); </synopsis> The function receives a pointer to the decimal value to convert - (<literal>np</>) and a pointer to the integer variable that - should hold the result of the operation (<literal>ip</>). + (<literal>np</literal>) and a pointer to the integer variable that + should hold the result of the operation (<literal>ip</literal>). </para> <para> On success, 0 is returned and a negative value if the conversion - failed. If an overflow occurred, <literal>ECPG_INFORMIX_NUM_OVERFLOW</> + failed. If an overflow occurred, <literal>ECPG_INFORMIX_NUM_OVERFLOW</literal> is returned. </para> <para> Note that the ECPG implementation differs from the <productname>Informix</productname> implementation. <productname>Informix</productname> limits an integer to the range from -32767 to 32767, while the limits in the ECPG implementation depend on the - architecture (<literal>-INT_MAX .. INT_MAX</>). + architecture (<literal>-INT_MAX .. INT_MAX</literal>). </para> </listitem> </varlistentry> <varlistentry> - <term><function>dectolong</></term> + <term><function>dectolong</function></term> <listitem> <para> Convert a variable of type decimal to a long integer. @@ -8572,12 +8572,12 @@ int dectoint(decimal *np, int *ip); int dectolong(decimal *np, long *lngp); </synopsis> The function receives a pointer to the decimal value to convert - (<literal>np</>) and a pointer to the long variable that - should hold the result of the operation (<literal>lngp</>).
+ (<literal>np</literal>) and a pointer to the long variable that + should hold the result of the operation (<literal>lngp</literal>). </para> <para> On success, 0 is returned and a negative value if the conversion - failed. If an overflow occurred, <literal>ECPG_INFORMIX_NUM_OVERFLOW</> + failed. If an overflow occurred, <literal>ECPG_INFORMIX_NUM_OVERFLOW</literal> is returned. </para> <para> @@ -8585,13 +8585,13 @@ int dectolong(decimal *np, long *lngp); implementation. <productname>Informix</productname> limits a long integer to the range from -2,147,483,647 to 2,147,483,647, while the limits in the ECPG implementation depend on the architecture (<literal>-LONG_MAX .. - LONG_MAX</>). + LONG_MAX</literal>). </para> </listitem> </varlistentry> <varlistentry> - <term><function>rdatestr</></term> + <term><function>rdatestr</function></term> <listitem> <para> Converts a date to a C char* string. @@ -8599,8 +8599,8 @@ int dectolong(decimal *np, long *lngp); int rdatestr(date d, char *str); </synopsis> The function receives two arguments, the first one is the date to - convert (<literal>d</>) and the second one is a pointer to the target - string. The output format is always <literal>yyyy-mm-dd</>, so you need + convert (<literal>d</literal>) and the second one is a pointer to the target + string. The output format is always <literal>yyyy-mm-dd</literal>, so you need to allocate at least 11 bytes (including the zero-byte terminator) for the string. </para> @@ -8618,7 +8618,7 @@ int rdatestr(date d, char *str); </varlistentry> <varlistentry> - <term><function>rstrdate</></term> + <term><function>rstrdate</function></term> <listitem> <para> Parse the textual representation of a date. @@ -8626,30 +8626,30 @@ int rdatestr(date d, char *str); int rstrdate(char *str, date *d); </synopsis> The function receives the textual representation of the date to convert - (<literal>str</>) and a pointer to a variable of type date - (<literal>d</>). 
This function does not allow you to specify a format + (<literal>str</literal>) and a pointer to a variable of type date + (<literal>d</literal>). This function does not allow you to specify a format mask. It uses the default format mask of <productname>Informix</productname> which is - <literal>mm/dd/yyyy</>. Internally, this function is implemented by - means of <function>rdefmtdate</>. Therefore, <function>rstrdate</> is + <literal>mm/dd/yyyy</literal>. Internally, this function is implemented by + means of <function>rdefmtdate</function>. Therefore, <function>rstrdate</function> is not faster and if you have the choice you should opt for - <function>rdefmtdate</> which allows you to specify the format mask + <function>rdefmtdate</function> which allows you to specify the format mask explicitly. </para> <para> - The function returns the same values as <function>rdefmtdate</>. + The function returns the same values as <function>rdefmtdate</function>. </para> </listitem> </varlistentry> <varlistentry> - <term><function>rtoday</></term> + <term><function>rtoday</function></term> <listitem> <para> Get the current date. <synopsis> void rtoday(date *d); </synopsis> - The function receives a pointer to a date variable (<literal>d</>) + The function receives a pointer to a date variable (<literal>d</literal>) that it sets to the current date. </para> <para> @@ -8660,7 +8660,7 @@ void rtoday(date *d); </varlistentry> <varlistentry> - <term><function>rjulmdy</></term> + <term><function>rjulmdy</function></term> <listitem> <para> Extract the values for the day, the month and the year from a variable @@ -8668,11 +8668,11 @@ void rtoday(date *d); <synopsis> int rjulmdy(date d, short mdy[3]); </synopsis> - The function receives the date <literal>d</> and a pointer to an array - of 3 short integer values <literal>mdy</>. 
The variable name indicates - the sequential order: <literal>mdy[0]</> will be set to contain the - number of the month, <literal>mdy[1]</> will be set to the value of the - day and <literal>mdy[2]</> will contain the year. + The function receives the date <literal>d</literal> and a pointer to an array + of 3 short integer values <literal>mdy</literal>. The variable name indicates + the sequential order: <literal>mdy[0]</literal> will be set to contain the + number of the month, <literal>mdy[1]</literal> will be set to the value of the + day and <literal>mdy[2]</literal> will contain the year. </para> <para> The function always returns 0 at the moment. @@ -8685,7 +8685,7 @@ int rjulmdy(date d, short mdy[3]); </varlistentry> <varlistentry> - <term><function>rdefmtdate</></term> + <term><function>rdefmtdate</function></term> <listitem> <para> Use a format mask to convert a character string to a value of type @@ -8694,9 +8694,9 @@ int rjulmdy(date d, short mdy[3]); int rdefmtdate(date *d, char *fmt, char *str); </synopsis> The function receives a pointer to the date value that should hold the - result of the operation (<literal>d</>), the format mask to use for - parsing the date (<literal>fmt</>) and the C char* string containing - the textual representation of the date (<literal>str</>). The textual + result of the operation (<literal>d</literal>), the format mask to use for + parsing the date (<literal>fmt</literal>) and the C char* string containing + the textual representation of the date (<literal>str</literal>). The textual representation is expected to match the format mask. However you do not need to have a 1:1 mapping of the string to the format mask. 
The function only analyzes the sequential order and looks for the literals @@ -8715,32 +8715,32 @@ int rdefmtdate(date *d, char *fmt, char *str); </listitem> <listitem> <para> - <literal>ECPG_INFORMIX_ENOSHORTDATE</> - The date does not contain + <literal>ECPG_INFORMIX_ENOSHORTDATE</literal> - The date does not contain delimiters between day, month and year. In this case the input string must be exactly 6 or 8 bytes long but isn't. </para> </listitem> <listitem> <para> - <literal>ECPG_INFORMIX_ENOTDMY</> - The format string did not + <literal>ECPG_INFORMIX_ENOTDMY</literal> - The format string did not correctly indicate the sequential order of year, month and day. </para> </listitem> <listitem> <para> - <literal>ECPG_INFORMIX_BAD_DAY</> - The input string does not + <literal>ECPG_INFORMIX_BAD_DAY</literal> - The input string does not contain a valid day. </para> </listitem> <listitem> <para> - <literal>ECPG_INFORMIX_BAD_MONTH</> - The input string does not + <literal>ECPG_INFORMIX_BAD_MONTH</literal> - The input string does not contain a valid month. </para> </listitem> <listitem> <para> - <literal>ECPG_INFORMIX_BAD_YEAR</> - The input string does not + <literal>ECPG_INFORMIX_BAD_YEAR</literal> - The input string does not contain a valid year. </para> </listitem> @@ -8755,7 +8755,7 @@ int rdefmtdate(date *d, char *fmt, char *str); </varlistentry> <varlistentry> - <term><function>rfmtdate</></term> + <term><function>rfmtdate</function></term> <listitem> <para> Convert a variable of type date to its textual representation using a @@ -8763,9 +8763,9 @@ int rdefmtdate(date *d, char *fmt, char *str); <synopsis> int rfmtdate(date d, char *fmt, char *str); </synopsis> - The function receives the date to convert (<literal>d</>), the format - mask (<literal>fmt</>) and the string that will hold the textual - representation of the date (<literal>str</>). 
+ The function receives the date to convert (<literal>d</literal>), the format + mask (<literal>fmt</literal>) and the string that will hold the textual + representation of the date (<literal>str</literal>). </para> <para> On success, 0 is returned and a negative value if an error occurred. @@ -8778,7 +8778,7 @@ int rfmtdate(date d, char *fmt, char *str); </varlistentry> <varlistentry> - <term><function>rmdyjul</></term> + <term><function>rmdyjul</function></term> <listitem> <para> Create a date value from an array of 3 short integers that specify the @@ -8787,7 +8787,7 @@ int rfmtdate(date d, char *fmt, char *str); int rmdyjul(short mdy[3], date *d); </synopsis> The function receives the array of the 3 short integers - (<literal>mdy</>) and a pointer to a variable of type date that should + (<literal>mdy</literal>) and a pointer to a variable of type date that should hold the result of the operation. </para> <para> @@ -8801,14 +8801,14 @@ int rmdyjul(short mdy[3], date *d); </varlistentry> <varlistentry> - <term><function>rdayofweek</></term> + <term><function>rdayofweek</function></term> <listitem> <para> Return a number representing the day of the week for a date value. <synopsis> int rdayofweek(date d); </synopsis> - The function receives the date variable <literal>d</> as its only + The function receives the date variable <literal>d</literal> as its only argument and returns an integer that indicates the day of the week for this date. <itemizedlist> @@ -8857,7 +8857,7 @@ int rdayofweek(date d); </varlistentry> <varlistentry> - <term><function>dtcurrent</></term> + <term><function>dtcurrent</function></term> <listitem> <para> Retrieve the current timestamp. @@ -8865,13 +8865,13 @@ int rdayofweek(date d); void dtcurrent(timestamp *ts); </synopsis> The function retrieves the current timestamp and saves it into the - timestamp variable that <literal>ts</> points to. + timestamp variable that <literal>ts</literal> points to. 
</para> </listitem> </varlistentry> <varlistentry> - <term><function>dtcvasc</></term> + <term><function>dtcvasc</function></term> <listitem> <para> Parses a timestamp from its textual representation @@ -8879,9 +8879,9 @@ void dtcurrent(timestamp *ts); <synopsis> int dtcvasc(char *str, timestamp *ts); </synopsis> - The function receives the string to parse (<literal>str</>) and a + The function receives the string to parse (<literal>str</literal>) and a pointer to the timestamp variable that should hold the result of the - operation (<literal>ts</>). + operation (<literal>ts</literal>). </para> <para> The function returns 0 on success and a negative value in case of @@ -8896,7 +8896,7 @@ int dtcvasc(char *str, timestamp *ts); </varlistentry> <varlistentry> - <term><function>dtcvfmtasc</></term> + <term><function>dtcvfmtasc</function></term> <listitem> <para> Parses a timestamp from its textual representation @@ -8904,10 +8904,10 @@ int dtcvasc(char *str, timestamp *ts); <synopsis> dtcvfmtasc(char *inbuf, char *fmtstr, timestamp *dtvalue) </synopsis> - The function receives the string to parse (<literal>inbuf</>), the - format mask to use (<literal>fmtstr</>) and a pointer to the timestamp + The function receives the string to parse (<literal>inbuf</literal>), the + format mask to use (<literal>fmtstr</literal>) and a pointer to the timestamp variable that should hold the result of the operation - (<literal>dtvalue</>). + (<literal>dtvalue</literal>). 
</para> <para> This function is implemented by means of the <xref @@ -8922,7 +8922,7 @@ dtcvfmtasc(char *inbuf, char *fmtstr, timestamp *dtvalue) </varlistentry> <varlistentry> - <term><function>dtsub</></term> + <term><function>dtsub</function></term> <listitem> <para> Subtract one timestamp from another and return a variable of type @@ -8930,9 +8930,9 @@ dtcvfmtasc(char *inbuf, char *fmtstr, timestamp *dtvalue) <synopsis> int dtsub(timestamp *ts1, timestamp *ts2, interval *iv); </synopsis> - The function will subtract the timestamp variable that <literal>ts2</> - points to from the timestamp variable that <literal>ts1</> points to - and will store the result in the interval variable that <literal>iv</> + The function will subtract the timestamp variable that <literal>ts2</literal> + points to from the timestamp variable that <literal>ts1</literal> points to - and will store the result in the interval variable that <literal>iv</literal> points to. </para> <para> @@ -8943,7 +8943,7 @@ int dtsub(timestamp *ts1, timestamp *ts2, interval *iv); </varlistentry> <varlistentry> - <term><function>dttoasc</></term> + <term><function>dttoasc</function></term> <listitem> <para> Convert a timestamp variable to a C char* string. @@ -8951,8 +8951,8 @@ int dtsub(timestamp *ts1, timestamp *ts2, interval *iv); int dttoasc(timestamp *ts, char *output); </synopsis> The function receives a pointer to the timestamp variable to convert - (<literal>ts</>) and the string that should hold the result of the - operation (<literal>output</>). It converts <literal>ts</> to its + (<literal>ts</literal>) and the string that should hold the result of the + operation (<literal>output</literal>). It converts <literal>ts</literal> to its textual representation according to the SQL standard, which is <literal>YYYY-MM-DD HH:MM:SS</literal>.
</para> @@ -8964,7 +8964,7 @@ int dttoasc(timestamp *ts, char *output); </varlistentry> <varlistentry> - <term><function>dttofmtasc</></term> + <term><function>dttofmtasc</function></term> <listitem> <para> Convert a timestamp variable to a C char* using a format mask. @@ -8972,8 +8972,8 @@ int dttoasc(timestamp *ts, char *output); int dttofmtasc(timestamp *ts, char *output, int str_len, char *fmtstr); </synopsis> The function receives a pointer to the timestamp to convert as its - first argument (<literal>ts</>), a pointer to the output buffer - (<literal>output</>), the maximal length that has been allocated for + first argument (<literal>ts</literal>), a pointer to the output buffer + (<literal>output</literal>), the maximal length that has been allocated for the output buffer (<literal>str_len</literal>) and the format mask to use for the conversion (<literal>fmtstr</literal>). </para> @@ -8990,7 +8990,7 @@ int dttofmtasc(timestamp *ts, char *output, int str_len, char *fmtstr); </varlistentry> <varlistentry> - <term><function>intoasc</></term> + <term><function>intoasc</function></term> <listitem> <para> Convert an interval variable to a C char* string. @@ -8998,8 +8998,8 @@ int dttofmtasc(timestamp *ts, char *output, int str_len, char *fmtstr); int intoasc(interval *i, char *str); </synopsis> The function receives a pointer to the interval variable to convert - (<literal>i</>) and the string that should hold the result of the - operation (<literal>str</>). It converts <literal>i</> to its + (<literal>i</literal>) and the string that should hold the result of the + operation (<literal>str</literal>). It converts <literal>i</literal> to its textual representation according to the SQL standard, which is <literal>YYYY-MM-DD HH:MM:SS</literal>.
</para> @@ -9011,7 +9011,7 @@ int intoasc(interval *i, char *str); </varlistentry> <varlistentry> - <term><function>rfmtlong</></term> + <term><function>rfmtlong</function></term> <listitem> <para> Convert a long integer value to its textual representation using a @@ -9019,9 +9019,9 @@ int intoasc(interval *i, char *str); <synopsis> int rfmtlong(long lng_val, char *fmt, char *outbuf); </synopsis> - The function receives the long value <literal>lng_val</>, the format - mask <literal>fmt</> and a pointer to the output buffer - <literal>outbuf</>. It converts the long value according to the format + The function receives the long value <literal>lng_val</literal>, the format + mask <literal>fmt</literal> and a pointer to the output buffer + <literal>outbuf</literal>. It converts the long value according to the format mask to its textual representation. </para> <para> @@ -9097,7 +9097,7 @@ int rfmtlong(long lng_val, char *fmt, char *outbuf); </varlistentry> <varlistentry> - <term><function>rupshift</></term> + <term><function>rupshift</function></term> <listitem> <para> Convert a string to upper case. @@ -9111,7 +9111,7 @@ void rupshift(char *str); </varlistentry> <varlistentry> - <term><function>byleng</></term> + <term><function>byleng</function></term> <listitem> <para> Return the number of characters in a string without counting trailing @@ -9120,15 +9120,15 @@ void rupshift(char *str); int byleng(char *str, int len); </synopsis> The function expects a fixed-length string as its first argument - (<literal>str</>) and its length as its second argument - (<literal>len</>). It returns the number of significant characters, + (<literal>str</literal>) and its length as its second argument + (<literal>len</literal>). It returns the number of significant characters, that is the length of the string without trailing blanks. 
</para> </listitem> </varlistentry> <varlistentry> - <term><function>ldchar</></term> + <term><function>ldchar</function></term> <listitem> <para> Copy a fixed-length string into a null-terminated string. @@ -9136,10 +9136,10 @@ int byleng(char *str, int len); void ldchar(char *src, int len, char *dest); </synopsis> The function receives the fixed-length string to copy - (<literal>src</>), its length (<literal>len</>) and a pointer to the - destination memory (<literal>dest</>). Note that you need to reserve at - least <literal>len+1</> bytes for the string that <literal>dest</> - points to. The function copies at most <literal>len</> bytes to the new + (<literal>src</literal>), its length (<literal>len</literal>) and a pointer to the + destination memory (<literal>dest</literal>). Note that you need to reserve at + least <literal>len+1</literal> bytes for the string that <literal>dest</literal> + points to. The function copies at most <literal>len</literal> bytes to the new location (less if the source string has trailing blanks) and adds the null-terminator. 
</para> @@ -9147,7 +9147,7 @@ void ldchar(char *src, int len, char *dest); </varlistentry> <varlistentry> - <term><function>rgetmsg</></term> + <term><function>rgetmsg</function></term> <listitem> <para> <synopsis> @@ -9159,7 +9159,7 @@ int rgetmsg(int msgnum, char *s, int maxsize); </varlistentry> <varlistentry> - <term><function>rtypalign</></term> + <term><function>rtypalign</function></term> <listitem> <para> <synopsis> @@ -9171,7 +9171,7 @@ int rtypalign(int offset, int type); </varlistentry> <varlistentry> - <term><function>rtypmsize</></term> + <term><function>rtypmsize</function></term> <listitem> <para> <synopsis> @@ -9183,7 +9183,7 @@ int rtypmsize(int type, int len); </varlistentry> <varlistentry> - <term><function>rtypwidth</></term> + <term><function>rtypwidth</function></term> <listitem> <para> <synopsis> @@ -9195,7 +9195,7 @@ int rtypwidth(int sqltype, int sqllen); </varlistentry> <varlistentry id="rsetnull"> - <term><function>rsetnull</></term> + <term><function>rsetnull</function></term> <listitem> <para> Set a variable to NULL. @@ -9279,15 +9279,15 @@ rsetnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><function>risnull</></term> + <term><function>risnull</function></term> <listitem> <para> Test if a variable is NULL. <synopsis> int risnull(int t, char *ptr); </synopsis> - The function receives the type of the variable to test (<literal>t</>) - as well as a pointer to this variable (<literal>ptr</>). Note that the + The function receives the type of the variable to test (<literal>t</literal>) + as well as a pointer to this variable (<literal>ptr</literal>). Note that the latter needs to be cast to a char*. See the function <xref linkend="rsetnull"> for a list of possible variable types. </para> @@ -9321,7 +9321,7 @@ risnull(CINTTYPE, (char *) &i); values.
<variablelist> <varlistentry> - <term><literal>ECPG_INFORMIX_NUM_OVERFLOW</></term> + <term><literal>ECPG_INFORMIX_NUM_OVERFLOW</literal></term> <listitem> <para> Functions return this value if an overflow occurred in a @@ -9332,7 +9332,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_NUM_UNDERFLOW</></term> + <term><literal>ECPG_INFORMIX_NUM_UNDERFLOW</literal></term> <listitem> <para> Functions return this value if an underflow occurred in a calculation. @@ -9342,7 +9342,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_DIVIDE_ZERO</></term> + <term><literal>ECPG_INFORMIX_DIVIDE_ZERO</literal></term> <listitem> <para> Functions return this value if an attempt to divide by zero is @@ -9352,7 +9352,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_BAD_YEAR</></term> + <term><literal>ECPG_INFORMIX_BAD_YEAR</literal></term> <listitem> <para> Functions return this value if a bad value for a year was found while @@ -9363,7 +9363,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_BAD_MONTH</></term> + <term><literal>ECPG_INFORMIX_BAD_MONTH</literal></term> <listitem> <para> Functions return this value if a bad value for a month was found while @@ -9374,7 +9374,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_BAD_DAY</></term> + <term><literal>ECPG_INFORMIX_BAD_DAY</literal></term> <listitem> <para> Functions return this value if a bad value for a day was found while @@ -9385,7 +9385,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_ENOSHORTDATE</></term> + <term><literal>ECPG_INFORMIX_ENOSHORTDATE</literal></term> <listitem> <para> Functions return this value if a parsing routine needs a short date @@ -9396,7 +9396,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> 
<varlistentry> - <term><literal>ECPG_INFORMIX_DATE_CONVERT</></term> + <term><literal>ECPG_INFORMIX_DATE_CONVERT</literal></term> <listitem> <para> Functions return this value if an error occurred during date @@ -9407,7 +9407,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_OUT_OF_MEMORY</></term> + <term><literal>ECPG_INFORMIX_OUT_OF_MEMORY</literal></term> <listitem> <para> Functions return this value if memory was exhausted during @@ -9418,18 +9418,18 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_ENOTDMY</></term> + <term><literal>ECPG_INFORMIX_ENOTDMY</literal></term> <listitem> <para> Functions return this value if a parsing routine was supposed to get a - format mask (like <literal>mmddyy</>) but not all fields were listed + format mask (like <literal>mmddyy</literal>) but not all fields were listed correctly. Internally it is defined as -1212 (the <productname>Informix</productname> definition). 
</para> </listitem> </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_BAD_NUMERIC</></term> + <term><literal>ECPG_INFORMIX_BAD_NUMERIC</literal></term> <listitem> <para> Functions return this value either if a parsing routine cannot parse @@ -9442,7 +9442,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_BAD_EXPONENT</></term> + <term><literal>ECPG_INFORMIX_BAD_EXPONENT</literal></term> <listitem> <para> Functions return this value if a parsing routine cannot parse @@ -9453,7 +9453,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_BAD_DATE</></term> + <term><literal>ECPG_INFORMIX_BAD_DATE</literal></term> <listitem> <para> Functions return this value if a parsing routine cannot parse @@ -9464,7 +9464,7 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><literal>ECPG_INFORMIX_EXTRA_CHARS</></term> + <term><literal>ECPG_INFORMIX_EXTRA_CHARS</literal></term> <listitem> <para> Functions return this value if a parsing routine is passed extra @@ -9507,7 +9507,7 @@ risnull(CINTTYPE, (char *) &i); Variable substitution occurs when a symbol starts with a colon (<literal>:</literal>). The variable with that name is looked up among the variables that were previously declared within a - <literal>EXEC SQL DECLARE</> section. + <literal>EXEC SQL DECLARE</literal> section. </para> <para> @@ -9555,10 +9555,10 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><parameter>ECPGt_EOIT</></term> + <term><parameter>ECPGt_EOIT</parameter></term> <listitem> <para> - An <type>enum</> telling that there are no more input + An <type>enum</type> telling that there are no more input variables. 
</para> </listitem> @@ -9575,10 +9575,10 @@ risnull(CINTTYPE, (char *) &i); </varlistentry> <varlistentry> - <term><parameter>ECPGt_EORT</></term> + <term><parameter>ECPGt_EORT</parameter></term> <listitem> <para> - An <type>enum</> telling that there are no more variables. + An <type>enum</type> telling that there are no more variables. </para> </listitem> </varlistentry> @@ -9660,7 +9660,7 @@ risnull(CINTTYPE, (char *) &i); EXEC SQL OPEN <replaceable>cursor</replaceable>; </programlisting> is not copied to the output. Instead, the cursor's - <command>DECLARE</> command is used at the position of the <command>OPEN</> command + <command>DECLARE</command> command is used at the position of the <command>OPEN</command> command because it indeed opens the cursor. </para> diff --git a/doc/src/sgml/errcodes.sgml b/doc/src/sgml/errcodes.sgml index 40b4191c104..61ad3e00e91 100644 --- a/doc/src/sgml/errcodes.sgml +++ b/doc/src/sgml/errcodes.sgml @@ -11,13 +11,13 @@ <para> All messages emitted by the <productname>PostgreSQL</productname> server are assigned five-character error codes that follow the SQL - standard's conventions for <quote>SQLSTATE</> codes. Applications + standard's conventions for <quote>SQLSTATE</quote> codes. Applications that need to know which error condition has occurred should usually test the error code, rather than looking at the textual error message. The error codes are less likely to change across - <productname>PostgreSQL</> releases, and also are not subject to + <productname>PostgreSQL</productname> releases, and also are not subject to change due to localization of error messages. Note that some, but - not all, of the error codes produced by <productname>PostgreSQL</> + not all, of the error codes produced by <productname>PostgreSQL</productname> are defined by the SQL standard; some additional error codes for conditions not defined by the standard have been invented or borrowed from other databases. 
@@ -36,16 +36,16 @@ <productname>PostgreSQL</productname> &version;. (Some are not actually used at present, but are defined by the SQL standard.) The error classes are also shown. For each error class there is a - <quote>standard</> error code having the last three characters - <literal>000</>. This code is used only for error conditions that fall + <quote>standard</quote> error code having the last three characters + <literal>000</literal>. This code is used only for error conditions that fall within the class but do not have any more-specific code assigned. </para> <para> The symbol shown in the column <quote>Condition Name</quote> is - the condition name to use in <application>PL/pgSQL</>. Condition + the condition name to use in <application>PL/pgSQL</application>. Condition names can be written in either upper or lower case. (Note that - <application>PL/pgSQL</> does not recognize warning, as opposed to error, + <application>PL/pgSQL</application> does not recognize warning, as opposed to error, condition names; those are classes 00, 01, and 02.) </para> @@ -53,10 +53,10 @@ For some types of errors, the server reports the name of a database object (a table, table column, data type, or constraint) associated with the error; for example, the name of the unique constraint that caused a - <symbol>unique_violation</> error. Such names are supplied in separate + <symbol>unique_violation</symbol> error. Such names are supplied in separate fields of the error report message so that applications need not try to extract them from the possibly-localized human-readable text of the message. - As of <productname>PostgreSQL</> 9.3, complete coverage for this feature + As of <productname>PostgreSQL</productname> 9.3, complete coverage for this feature exists only for errors in SQLSTATE class 23 (integrity constraint violation), but this is likely to be expanded in future. 
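The condition names that the quoted errcodes section describes can be used directly in a PL/pgSQL exception clause instead of matching on a raw SQLSTATE. A minimal sketch (the `accounts` table and its unique constraint are illustrative, not from the patch):

```sql
DO $$
BEGIN
    INSERT INTO accounts (id) VALUES (1);
EXCEPTION
    WHEN unique_violation THEN   -- condition name for SQLSTATE 23505
        RAISE NOTICE 'duplicate id ignored';
END;
$$;
```

As the documentation notes, testing the condition name (or SQLSTATE) is preferable to matching the error message text, which may change across releases or be localized.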
</para> diff --git a/doc/src/sgml/event-trigger.sgml b/doc/src/sgml/event-trigger.sgml index c7b880d7c9c..e19571b8eb7 100644 --- a/doc/src/sgml/event-trigger.sgml +++ b/doc/src/sgml/event-trigger.sgml @@ -9,7 +9,7 @@ <para> To supplement the trigger mechanism discussed in <xref linkend="triggers">, - <productname>PostgreSQL</> also provides event triggers. Unlike regular + <productname>PostgreSQL</productname> also provides event triggers. Unlike regular triggers, which are attached to a single table and capture only DML events, event triggers are global to a particular database and are capable of capturing DDL events. @@ -28,67 +28,67 @@ An event trigger fires whenever the event with which it is associated occurs in the database in which it is defined. Currently, the only supported events are - <literal>ddl_command_start</>, - <literal>ddl_command_end</>, - <literal>table_rewrite</> - and <literal>sql_drop</>. + <literal>ddl_command_start</literal>, + <literal>ddl_command_end</literal>, + <literal>table_rewrite</literal> + and <literal>sql_drop</literal>. Support for additional events may be added in future releases. </para> <para> - The <literal>ddl_command_start</> event occurs just before the - execution of a <literal>CREATE</>, <literal>ALTER</>, <literal>DROP</>, - <literal>SECURITY LABEL</>, - <literal>COMMENT</>, <literal>GRANT</> or <literal>REVOKE</> + The <literal>ddl_command_start</literal> event occurs just before the + execution of a <literal>CREATE</literal>, <literal>ALTER</literal>, <literal>DROP</literal>, + <literal>SECURITY LABEL</literal>, + <literal>COMMENT</literal>, <literal>GRANT</literal> or <literal>REVOKE</literal> command. No check whether the affected object exists or doesn't exist is performed before the event trigger fires. As an exception, however, this event does not occur for DDL commands targeting shared objects — databases, roles, and tablespaces — or for commands targeting event triggers themselves. 
The event trigger mechanism does not support these object types. - <literal>ddl_command_start</> also occurs just before the execution of a + <literal>ddl_command_start</literal> also occurs just before the execution of a <literal>SELECT INTO</literal> command, since this is equivalent to <literal>CREATE TABLE AS</literal>. </para> <para> - The <literal>ddl_command_end</> event occurs just after the execution of - this same set of commands. To obtain more details on the <acronym>DDL</> + The <literal>ddl_command_end</literal> event occurs just after the execution of + this same set of commands. To obtain more details on the <acronym>DDL</acronym> operations that took place, use the set-returning function - <literal>pg_event_trigger_ddl_commands()</> from the - <literal>ddl_command_end</> event trigger code (see + <literal>pg_event_trigger_ddl_commands()</literal> from the + <literal>ddl_command_end</literal> event trigger code (see <xref linkend="functions-event-triggers">). Note that the trigger fires after the actions have taken place (but before the transaction commits), and thus the system catalogs can be read as already changed. </para> <para> - The <literal>sql_drop</> event occurs just before the - <literal>ddl_command_end</> event trigger for any operation that drops + The <literal>sql_drop</literal> event occurs just before the + <literal>ddl_command_end</literal> event trigger for any operation that drops database objects. To list the objects that have been dropped, use the - set-returning function <literal>pg_event_trigger_dropped_objects()</> from the - <literal>sql_drop</> event trigger code (see + set-returning function <literal>pg_event_trigger_dropped_objects()</literal> from the + <literal>sql_drop</literal> event trigger code (see <xref linkend="functions-event-triggers">). Note that the trigger is executed after the objects have been deleted from the system catalogs, so it's not possible to look them up anymore. 
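The `sql_drop` behavior described above can be sketched as a small PL/pgSQL event trigger that lists dropped objects via `pg_event_trigger_dropped_objects()` (function and trigger names are illustrative; the syntax matches the release this patch targets):

```sql
CREATE FUNCTION log_drops() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    -- The objects are already gone from the catalogs at this point,
    -- so the set-returning function is the only way to identify them.
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        RAISE NOTICE 'dropped: % %', obj.object_type, obj.object_identity;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER log_drops ON sql_drop
    EXECUTE PROCEDURE log_drops();
```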
</para> <para> - The <literal>table_rewrite</> event occurs just before a table is - rewritten by some actions of the commands <literal>ALTER TABLE</> and - <literal>ALTER TYPE</>. While other + The <literal>table_rewrite</literal> event occurs just before a table is + rewritten by some actions of the commands <literal>ALTER TABLE</literal> and + <literal>ALTER TYPE</literal>. While other control statements are available to rewrite a table, like <literal>CLUSTER</literal> and <literal>VACUUM</literal>, - the <literal>table_rewrite</> event is not triggered by them. + the <literal>table_rewrite</literal> event is not triggered by them. </para> <para> Event triggers (like other functions) cannot be executed in an aborted transaction. Thus, if a DDL command fails with an error, any associated - <literal>ddl_command_end</> triggers will not be executed. Conversely, - if a <literal>ddl_command_start</> trigger fails with an error, no + <literal>ddl_command_end</literal> triggers will not be executed. Conversely, + if a <literal>ddl_command_start</literal> trigger fails with an error, no further event triggers will fire, and no attempt will be made to execute - the command itself. Similarly, if a <literal>ddl_command_end</> trigger + the command itself. Similarly, if a <literal>ddl_command_end</literal> trigger fails with an error, the effects of the DDL statement will be rolled back, just as they would be in any other case where the containing transaction aborts. @@ -879,14 +879,14 @@ </para> <para> - Event trigger functions must use the <quote>version 1</> function + Event trigger functions must use the <quote>version 1</quote> function manager interface. </para> <para> When a function is called by the event trigger manager, it is not passed - any normal arguments, but it is passed a <quote>context</> pointer - pointing to a <structname>EventTriggerData</> structure. 
C functions can + any normal arguments, but it is passed a <quote>context</quote> pointer + pointing to a <structname>EventTriggerData</structname> structure. C functions can check whether they were called from the event trigger manager or not by executing the macro: <programlisting> @@ -897,10 +897,10 @@ CALLED_AS_EVENT_TRIGGER(fcinfo) ((fcinfo)->context != NULL && IsA((fcinfo)->context, EventTriggerData)) </programlisting> If this returns true, then it is safe to cast - <literal>fcinfo->context</> to type <literal>EventTriggerData + <literal>fcinfo->context</literal> to type <literal>EventTriggerData *</literal> and make use of the pointed-to - <structname>EventTriggerData</> structure. The function must - <emphasis>not</emphasis> alter the <structname>EventTriggerData</> + <structname>EventTriggerData</structname> structure. The function must + <emphasis>not</emphasis> alter the <structname>EventTriggerData</structname> structure or any of the data it points to. </para> @@ -922,7 +922,7 @@ typedef struct EventTriggerData <variablelist> <varlistentry> - <term><structfield>type</></term> + <term><structfield>type</structfield></term> <listitem> <para> Always <literal>T_EventTriggerData</literal>. @@ -931,7 +931,7 @@ typedef struct EventTriggerData </varlistentry> <varlistentry> - <term><structfield>event</></term> + <term><structfield>event</structfield></term> <listitem> <para> Describes the event for which the function is called, one of @@ -944,7 +944,7 @@ typedef struct EventTriggerData </varlistentry> <varlistentry> - <term><structfield>parsetree</></term> + <term><structfield>parsetree</structfield></term> <listitem> <para> A pointer to the parse tree of the command. 
Check the PostgreSQL @@ -955,7 +955,7 @@ typedef struct EventTriggerData </varlistentry> <varlistentry> - <term><structfield>tag</></term> + <term><structfield>tag</structfield></term> <listitem> <para> The command tag associated with the event for which the event trigger @@ -967,8 +967,8 @@ typedef struct EventTriggerData </para> <para> - An event trigger function must return a <symbol>NULL</> pointer - (<emphasis>not</> an SQL null value, that is, do not + An event trigger function must return a <symbol>NULL</symbol> pointer + (<emphasis>not</emphasis> an SQL null value, that is, do not set <parameter>isNull</parameter> true). </para> </sect1> @@ -983,7 +983,7 @@ typedef struct EventTriggerData </para> <para> - The function <function>noddl</> raises an exception each time it is called. + The function <function>noddl</function> raises an exception each time it is called. The event trigger definition associated the function with the <literal>ddl_command_start</literal> event. The effect is that all DDL commands (with the exceptions mentioned @@ -1068,7 +1068,7 @@ COMMIT; <title>A Table Rewrite Event Trigger Example</title> <para> - Thanks to the <literal>table_rewrite</> event, it is possible to implement + Thanks to the <literal>table_rewrite</literal> event, it is possible to implement a table rewriting policy only allowing the rewrite in maintenance windows. </para> diff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml index b96ef389a28..c1bd03ad4c9 100644 --- a/doc/src/sgml/extend.sgml +++ b/doc/src/sgml/extend.sgml @@ -116,7 +116,7 @@ <para> Base types are those, like <type>int4</type>, that are - implemented below the level of the <acronym>SQL</> language + implemented below the level of the <acronym>SQL</acronym> language (typically in a low-level language such as C). They generally correspond to what are often known as abstract data types. 
<productname>PostgreSQL</productname> can only operate on such @@ -136,11 +136,11 @@ Composite types, or row types, are created whenever the user creates a table. It is also possible to use <xref linkend="sql-createtype"> to - define a <quote>stand-alone</> composite type with no associated + define a <quote>stand-alone</quote> composite type with no associated table. A composite type is simply a list of types with associated field names. A value of a composite type is a row or record of field values. The user can access the component fields - from <acronym>SQL</> queries. Refer to <xref linkend="rowtypes"> + from <acronym>SQL</acronym> queries. Refer to <xref linkend="rowtypes"> for more information on composite types. </para> </sect2> @@ -156,7 +156,7 @@ </para> <para> - Domains can be created using the <acronym>SQL</> command + Domains can be created using the <acronym>SQL</acronym> command <xref linkend="sql-createdomain">. Their creation and use is not discussed in this chapter. </para> @@ -166,7 +166,7 @@ <title>Pseudo-Types</title> <para> - There are a few <quote>pseudo-types</> for special purposes. + There are a few <quote>pseudo-types</quote> for special purposes. Pseudo-types cannot appear as columns of tables or attributes of composite types, but they can be used to declare the argument and result types of functions. This provides a mechanism within the @@ -198,12 +198,12 @@ </indexterm> <para> - Five pseudo-types of special interest are <type>anyelement</>, - <type>anyarray</>, <type>anynonarray</>, <type>anyenum</>, - and <type>anyrange</>, - which are collectively called <firstterm>polymorphic types</>. + Five pseudo-types of special interest are <type>anyelement</type>, + <type>anyarray</type>, <type>anynonarray</type>, <type>anyenum</type>, + and <type>anyrange</type>, + which are collectively called <firstterm>polymorphic types</firstterm>. Any function declared using these types is said to be - a <firstterm>polymorphic function</>. 
A polymorphic function can + a <firstterm>polymorphic function</firstterm>. A polymorphic function can operate on many different data types, with the specific data type(s) being determined by the data types actually passed to it in a particular call. @@ -228,10 +228,10 @@ and others declared <type>anyelement</type>, the actual range type in the <type>anyrange</type> positions must be a range whose subtype is the same type appearing in the <type>anyelement</type> positions. - <type>anynonarray</> is treated exactly the same as <type>anyelement</>, + <type>anynonarray</type> is treated exactly the same as <type>anyelement</type>, but adds the additional constraint that the actual type must not be an array type. - <type>anyenum</> is treated exactly the same as <type>anyelement</>, + <type>anyenum</type> is treated exactly the same as <type>anyelement</type>, but adds the additional constraint that the actual type must be an enum type. </para> @@ -240,7 +240,7 @@ Thus, when more than one argument position is declared with a polymorphic type, the net effect is that only certain combinations of actual argument types are allowed. For example, a function declared as - <literal>equal(anyelement, anyelement)</> will take any two input values, + <literal>equal(anyelement, anyelement)</literal> will take any two input values, so long as they are of the same data type. </para> @@ -251,19 +251,19 @@ result type for that call. For example, if there were not already an array subscripting mechanism, one could define a function that implements subscripting as <literal>subscript(anyarray, integer) - returns anyelement</>. This declaration constrains the actual first + returns anyelement</literal>. This declaration constrains the actual first argument to be an array type, and allows the parser to infer the correct result type from the actual first argument's type. 
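The `equal(anyelement, anyelement)` declaration mentioned above can be written as a one-line SQL function; a sketch, assuming the actual argument type has an equality operator:

```sql
CREATE FUNCTION equal(anyelement, anyelement) RETURNS boolean
    AS 'SELECT $1 = $2' LANGUAGE SQL;

-- Each call resolves the polymorphic arguments to one concrete type:
SELECT equal(1, 2);            -- both integer
SELECT equal('a'::text, 'a');  -- both text
```

A call mixing types, such as `equal(1, 'a'::text)`, is rejected at parse time because both `anyelement` positions must resolve to the same type.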
Another example - is that a function declared as <literal>f(anyarray) returns anyenum</> + is that a function declared as <literal>f(anyarray) returns anyenum</literal> will only accept arrays of enum types. </para> <para> - Note that <type>anynonarray</> and <type>anyenum</> do not represent + Note that <type>anynonarray</type> and <type>anyenum</type> do not represent separate type variables; they are the same type as <type>anyelement</type>, just with an additional constraint. For - example, declaring a function as <literal>f(anyelement, anyenum)</> - is equivalent to declaring it as <literal>f(anyenum, anyenum)</>: + example, declaring a function as <literal>f(anyelement, anyenum)</literal> + is equivalent to declaring it as <literal>f(anyenum, anyenum)</literal>: both actual arguments have to be the same enum type. </para> @@ -271,10 +271,10 @@ A variadic function (one taking a variable number of arguments, as in <xref linkend="xfunc-sql-variadic-functions">) can be polymorphic: this is accomplished by declaring its last parameter as - <literal>VARIADIC</> <type>anyarray</>. For purposes of argument + <literal>VARIADIC</literal> <type>anyarray</type>. For purposes of argument matching and determining the actual result type, such a function behaves the same as if you had written the appropriate number of - <type>anynonarray</> parameters. + <type>anynonarray</type> parameters. </para> </sect2> </sect1> @@ -294,15 +294,15 @@ </indexterm> <para> - A useful extension to <productname>PostgreSQL</> typically includes + A useful extension to <productname>PostgreSQL</productname> typically includes multiple SQL objects; for example, a new data type will require new functions, new operators, and probably new index operator classes. It is helpful to collect all these objects into a single package - to simplify database management. <productname>PostgreSQL</> calls - such a package an <firstterm>extension</>. 
To define an extension, - you need at least a <firstterm>script file</> that contains the - <acronym>SQL</> commands to create the extension's objects, and a - <firstterm>control file</> that specifies a few basic properties + to simplify database management. <productname>PostgreSQL</productname> calls + such a package an <firstterm>extension</firstterm>. To define an extension, + you need at least a <firstterm>script file</firstterm> that contains the + <acronym>SQL</acronym> commands to create the extension's objects, and a + <firstterm>control file</firstterm> that specifies a few basic properties of the extension itself. If the extension includes C code, there will typically also be a shared library file into which the C code has been built. Once you have these files, a simple @@ -312,14 +312,14 @@ <para> The main advantage of using an extension, rather than just running the - <acronym>SQL</> script to load a bunch of <quote>loose</> objects - into your database, is that <productname>PostgreSQL</> will then + <acronym>SQL</acronym> script to load a bunch of <quote>loose</quote> objects + into your database, is that <productname>PostgreSQL</productname> will then understand that the objects of the extension go together. You can drop all the objects with a single <xref linkend="sql-dropextension"> - command (no need to maintain a separate <quote>uninstall</> script). - Even more useful, <application>pg_dump</> knows that it should not + command (no need to maintain a separate <quote>uninstall</quote> script). + Even more useful, <application>pg_dump</application> knows that it should not dump the individual member objects of the extension — it will - just include a <command>CREATE EXTENSION</> command in dumps, instead. + just include a <command>CREATE EXTENSION</command> command in dumps, instead. This vastly simplifies migration to a new version of the extension that might contain more or different objects than the old version. 
Note however that you must have the extension's control, script, and @@ -327,12 +327,12 @@ </para> <para> - <productname>PostgreSQL</> will not let you drop an individual object + <productname>PostgreSQL</productname> will not let you drop an individual object contained in an extension, except by dropping the whole extension. Also, while you can change the definition of an extension member object (for example, via <command>CREATE OR REPLACE FUNCTION</command> for a function), bear in mind that the modified definition will not be dumped - by <application>pg_dump</>. Such a change is usually only sensible if + by <application>pg_dump</application>. Such a change is usually only sensible if you concurrently make the same change in the extension's script file. (But there are special provisions for tables containing configuration data; see <xref linkend="extend-extensions-config-tables">.) @@ -346,19 +346,19 @@ statements. The final set of privileges for each object (if any are set) will be stored in the <link linkend="catalog-pg-init-privs"><structname>pg_init_privs</structname></link> - system catalog. When <application>pg_dump</> is used, the - <command>CREATE EXTENSION</> command will be included in the dump, followed + system catalog. When <application>pg_dump</application> is used, the + <command>CREATE EXTENSION</command> command will be included in the dump, followed by the set of <command>GRANT</command> and <command>REVOKE</command> statements necessary to set the privileges on the objects to what they were at the time the dump was taken. </para> <para> - <productname>PostgreSQL</> does not currently support extension scripts + <productname>PostgreSQL</productname> does not currently support extension scripts issuing <command>CREATE POLICY</command> or <command>SECURITY LABEL</command> statements. These are expected to be set after the extension has been created. 
All RLS policies and security labels on extension objects will be - included in dumps created by <application>pg_dump</>. + included in dumps created by <application>pg_dump</application>. </para> <para> @@ -366,8 +366,8 @@ scripts that adjust the definitions of the SQL objects contained in an extension. For example, if version 1.1 of an extension adds one function and changes the body of another function compared to 1.0, the extension - author can provide an <firstterm>update script</> that makes just those - two changes. The <command>ALTER EXTENSION UPDATE</> command can then + author can provide an <firstterm>update script</firstterm> that makes just those + two changes. The <command>ALTER EXTENSION UPDATE</command> command can then be used to apply these changes and track which version of the extension is actually installed in a given database. </para> @@ -384,7 +384,7 @@ considered members of the extension. Another important point is that schemas can belong to extensions, but not vice versa: an extension as such has an unqualified name and does not - exist <quote>within</> any schema. The extension's member objects, + exist <quote>within</quote> any schema. The extension's member objects, however, will belong to schemas whenever appropriate for their object types. It may or may not be appropriate for an extension to own the schema(s) its member objects are within. @@ -409,23 +409,23 @@ <para> The <xref linkend="sql-createextension"> command relies on a control file for each extension, which must be named the same as the extension - with a suffix of <literal>.control</>, and must be placed in the + with a suffix of <literal>.control</literal>, and must be placed in the installation's <literal>SHAREDIR/extension</literal> directory. 
There - must also be at least one <acronym>SQL</> script file, which follows the + must also be at least one <acronym>SQL</acronym> script file, which follows the naming pattern - <literal><replaceable>extension</>--<replaceable>version</>.sql</literal> - (for example, <literal>foo--1.0.sql</> for version <literal>1.0</> of - extension <literal>foo</>). By default, the script file(s) are also + <literal><replaceable>extension</replaceable>--<replaceable>version</replaceable>.sql</literal> + (for example, <literal>foo--1.0.sql</literal> for version <literal>1.0</literal> of + extension <literal>foo</literal>). By default, the script file(s) are also placed in the <literal>SHAREDIR/extension</literal> directory; but the control file can specify a different directory for the script file(s). </para> <para> The file format for an extension control file is the same as for the - <filename>postgresql.conf</> file, namely a list of - <replaceable>parameter_name</> <literal>=</> <replaceable>value</> + <filename>postgresql.conf</filename> file, namely a list of + <replaceable>parameter_name</replaceable> <literal>=</literal> <replaceable>value</replaceable> assignments, one per line. Blank lines and comments introduced by - <literal>#</> are allowed. Be sure to quote any value that is not + <literal>#</literal> are allowed. Be sure to quote any value that is not a single word or number. </para> @@ -438,11 +438,11 @@ <term><varname>directory</varname> (<type>string</type>)</term> <listitem> <para> - The directory containing the extension's <acronym>SQL</> script + The directory containing the extension's <acronym>SQL</acronym> script file(s). Unless an absolute path is given, the name is relative to the installation's <literal>SHAREDIR</literal> directory. The default behavior is equivalent to specifying - <literal>directory = 'extension'</>. + <literal>directory = 'extension'</literal>. 
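Putting the control-file parameters described above together, a minimal `foo.control` might look like this (the extension name `foo` and all values are illustrative):

```
# foo.control — hypothetical extension control file
comment = 'example extension'
default_version = '1.0'
module_pathname = '$libdir/foo'
relocatable = true
```

Per the quoted text, this file would live in `SHAREDIR/extension`, alongside a script file named `foo--1.0.sql` unless `directory` points elsewhere.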
</para> </listitem> </varlistentry> @@ -452,9 +452,9 @@ <listitem> <para> The default version of the extension (the one that will be installed - if no version is specified in <command>CREATE EXTENSION</>). Although - this can be omitted, that will result in <command>CREATE EXTENSION</> - failing if no <literal>VERSION</> option appears, so you generally + if no version is specified in <command>CREATE EXTENSION</command>). Although + this can be omitted, that will result in <command>CREATE EXTENSION</command> + failing if no <literal>VERSION</literal> option appears, so you generally don't want to do that. </para> </listitem> @@ -489,11 +489,11 @@ <listitem> <para> The value of this parameter will be substituted for each occurrence - of <literal>MODULE_PATHNAME</> in the script file(s). If it is not + of <literal>MODULE_PATHNAME</literal> in the script file(s). If it is not set, no substitution is made. Typically, this is set to - <literal>$libdir/<replaceable>shared_library_name</></literal> and - then <literal>MODULE_PATHNAME</> is used in <command>CREATE - FUNCTION</> commands for C-language functions, so that the script + <literal>$libdir/<replaceable>shared_library_name</replaceable></literal> and + then <literal>MODULE_PATHNAME</literal> is used in <command>CREATE + FUNCTION</command> commands for C-language functions, so that the script files do not need to hard-wire the name of the shared library. </para> </listitem> @@ -514,9 +514,9 @@ <term><varname>superuser</varname> (<type>boolean</type>)</term> <listitem> <para> - If this parameter is <literal>true</> (which is the default), + If this parameter is <literal>true</literal> (which is the default), only superusers can create the extension or update it to a new - version. If it is set to <literal>false</>, just the privileges + version. If it is set to <literal>false</literal>, just the privileges required to execute the commands in the installation or update script are required. 
</para> @@ -527,9 +527,9 @@ <term><varname>relocatable</varname> (<type>boolean</type>)</term> <listitem> <para> - An extension is <firstterm>relocatable</> if it is possible to move + An extension is <firstterm>relocatable</firstterm> if it is possible to move its contained objects into a different schema after initial creation - of the extension. The default is <literal>false</>, i.e. the + of the extension. The default is <literal>false</literal>, i.e. the extension is not relocatable. See <xref linkend="extend-extensions-relocation"> for more information. </para> @@ -553,45 +553,45 @@ <para> In addition to the primary control file - <literal><replaceable>extension</>.control</literal>, + <literal><replaceable>extension</replaceable>.control</literal>, an extension can have secondary control files named in the style - <literal><replaceable>extension</>--<replaceable>version</>.control</literal>. + <literal><replaceable>extension</replaceable>--<replaceable>version</replaceable>.control</literal>. If supplied, these must be located in the script file directory. Secondary control files follow the same format as the primary control file. Any parameters set in a secondary control file override the primary control file when installing or updating to that version of - the extension. However, the parameters <varname>directory</> and - <varname>default_version</> cannot be set in a secondary control file. + the extension. However, the parameters <varname>directory</varname> and + <varname>default_version</varname> cannot be set in a secondary control file. </para> <para> - An extension's <acronym>SQL</> script files can contain any SQL commands, - except for transaction control commands (<command>BEGIN</>, - <command>COMMIT</>, etc) and commands that cannot be executed inside a - transaction block (such as <command>VACUUM</>). 
This is because the + An extension's <acronym>SQL</acronym> script files can contain any SQL commands, + except for transaction control commands (<command>BEGIN</command>, + <command>COMMIT</command>, etc) and commands that cannot be executed inside a + transaction block (such as <command>VACUUM</command>). This is because the script files are implicitly executed within a transaction block. </para> <para> - An extension's <acronym>SQL</> script files can also contain lines - beginning with <literal>\echo</>, which will be ignored (treated as + An extension's <acronym>SQL</acronym> script files can also contain lines + beginning with <literal>\echo</literal>, which will be ignored (treated as comments) by the extension mechanism. This provision is commonly used - to throw an error if the script file is fed to <application>psql</> - rather than being loaded via <command>CREATE EXTENSION</> (see example + to throw an error if the script file is fed to <application>psql</application> + rather than being loaded via <command>CREATE EXTENSION</command> (see example script in <xref linkend="extend-extensions-example">). Without that, users might accidentally load the - extension's contents as <quote>loose</> objects rather than as an + extension's contents as <quote>loose</quote> objects rather than as an extension, a state of affairs that's a bit tedious to recover from. </para> <para> While the script files can contain any characters allowed by the specified encoding, control files should contain only plain ASCII, because there - is no way for <productname>PostgreSQL</> to know what encoding a + is no way for <productname>PostgreSQL</productname> to know what encoding a control file is in. In practice this is only an issue if you want to use non-ASCII characters in the extension's comment. 
Recommended - practice in that case is to not use the control file <varname>comment</> - parameter, but instead use <command>COMMENT ON EXTENSION</> + practice in that case is to not use the control file <varname>comment</varname> + parameter, but instead use <command>COMMENT ON EXTENSION</command> within a script file to set the comment. </para> @@ -611,14 +611,14 @@ <para> A fully relocatable extension can be moved into another schema at any time, even after it's been loaded into a database. - This is done with the <command>ALTER EXTENSION SET SCHEMA</> + This is done with the <command>ALTER EXTENSION SET SCHEMA</command> command, which automatically renames all the member objects into the new schema. Normally, this is only possible if the extension contains no internal assumptions about what schema any of its objects are in. Also, the extension's objects must all be in one schema to begin with (ignoring objects that do not belong to any schema, such as procedural languages). Mark a fully relocatable - extension by setting <literal>relocatable = true</> in its control + extension by setting <literal>relocatable = true</literal> in its control file. </para> </listitem> @@ -628,26 +628,26 @@ An extension might be relocatable during installation but not afterwards. This is typically the case if the extension's script file needs to reference the target schema explicitly, for example - in setting <literal>search_path</> properties for SQL functions. - For such an extension, set <literal>relocatable = false</> in its - control file, and use <literal>@extschema@</> to refer to the target + in setting <literal>search_path</literal> properties for SQL functions. + For such an extension, set <literal>relocatable = false</literal> in its + control file, and use <literal>@extschema@</literal> to refer to the target schema in the script file. All occurrences of this string will be replaced by the actual target schema's name before the script is executed. 
The user can set the target schema using the - <literal>SCHEMA</> option of <command>CREATE EXTENSION</>. + <literal>SCHEMA</literal> option of <command>CREATE EXTENSION</command>. </para> </listitem> <listitem> <para> If the extension does not support relocation at all, set - <literal>relocatable = false</> in its control file, and also set - <literal>schema</> to the name of the intended target schema. This - will prevent use of the <literal>SCHEMA</> option of <command>CREATE - EXTENSION</>, unless it specifies the same schema named in the control + <literal>relocatable = false</literal> in its control file, and also set + <literal>schema</literal> to the name of the intended target schema. This + will prevent use of the <literal>SCHEMA</literal> option of <command>CREATE + EXTENSION</command>, unless it specifies the same schema named in the control file. This choice is typically necessary if the extension contains internal assumptions about schema names that can't be replaced by - uses of <literal>@extschema@</>. The <literal>@extschema@</> + uses of <literal>@extschema@</literal>. The <literal>@extschema@</literal> substitution mechanism is available in this case too, although it is of limited use since the schema name is determined by the control file. </para> @@ -657,23 +657,23 @@ <para> In all cases, the script file will be executed with <xref linkend="guc-search-path"> initially set to point to the target - schema; that is, <command>CREATE EXTENSION</> does the equivalent of + schema; that is, <command>CREATE EXTENSION</command> does the equivalent of this: <programlisting> SET LOCAL search_path TO @extschema@; </programlisting> This allows the objects created by the script file to go into the target - schema. The script file can change <varname>search_path</> if it wishes, - but that is generally undesirable. <varname>search_path</> is restored - to its previous setting upon completion of <command>CREATE EXTENSION</>. + schema. 
The script file can change <varname>search_path</varname> if it wishes, + but that is generally undesirable. <varname>search_path</varname> is restored + to its previous setting upon completion of <command>CREATE EXTENSION</command>. </para> <para> - The target schema is determined by the <varname>schema</> parameter in - the control file if that is given, otherwise by the <literal>SCHEMA</> - option of <command>CREATE EXTENSION</> if that is given, otherwise the + The target schema is determined by the <varname>schema</varname> parameter in + the control file if that is given, otherwise by the <literal>SCHEMA</literal> + option of <command>CREATE EXTENSION</command> if that is given, otherwise the current default object creation schema (the first one in the caller's - <varname>search_path</>). When the control file <varname>schema</> + <varname>search_path</varname>). When the control file <varname>schema</varname> parameter is used, the target schema will be created if it doesn't already exist, but in the other two cases it must already exist. </para> @@ -681,7 +681,7 @@ SET LOCAL search_path TO @extschema@; <para> If any prerequisite extensions are listed in <varname>requires</varname> in the control file, their target schemas are appended to the initial - setting of <varname>search_path</>. This allows their objects to be + setting of <varname>search_path</varname>. This allows their objects to be visible to the new extension's script file. </para> @@ -690,7 +690,7 @@ SET LOCAL search_path TO @extschema@; multiple schemas, it is usually desirable to place all the objects meant for external use into a single schema, which is considered the extension's target schema. Such an arrangement works conveniently with the default - setting of <varname>search_path</> during creation of dependent + setting of <varname>search_path</varname> during creation of dependent extensions. 
</para> </sect2> @@ -703,7 +703,7 @@ SET LOCAL search_path TO @extschema@; might be added or changed by the user after installation of the extension. Ordinarily, if a table is part of an extension, neither the table's definition nor its content will be dumped by - <application>pg_dump</>. But that behavior is undesirable for a + <application>pg_dump</application>. But that behavior is undesirable for a configuration table; any data changes made by the user need to be included in dumps, or the extension will behave differently after a dump and reload. @@ -716,9 +716,9 @@ SET LOCAL search_path TO @extschema@; <para> To solve this problem, an extension's script file can mark a table or a sequence it has created as a configuration relation, which will - cause <application>pg_dump</> to include the table's or the sequence's + cause <application>pg_dump</application> to include the table's or the sequence's contents (not its definition) in dumps. To do that, call the function - <function>pg_extension_config_dump(regclass, text)</> after creating the + <function>pg_extension_config_dump(regclass, text)</function> after creating the table or the sequence, for example <programlisting> CREATE TABLE my_config (key text, value text); @@ -728,30 +728,30 @@ SELECT pg_catalog.pg_extension_config_dump('my_config', ''); SELECT pg_catalog.pg_extension_config_dump('my_config_seq', ''); </programlisting> Any number of tables or sequences can be marked this way. Sequences - associated with <type>serial</> or <type>bigserial</> columns can + associated with <type>serial</type> or <type>bigserial</type> columns can be marked as well. </para> <para> - When the second argument of <function>pg_extension_config_dump</> is + When the second argument of <function>pg_extension_config_dump</function> is an empty string, the entire contents of the table are dumped by - <application>pg_dump</>. This is usually only correct if the table + <application>pg_dump</application>. 
This is usually only correct if the table is initially empty as created by the extension script. If there is a mixture of initial data and user-provided data in the table, - the second argument of <function>pg_extension_config_dump</> provides - a <literal>WHERE</> condition that selects the data to be dumped. + the second argument of <function>pg_extension_config_dump</function> provides + a <literal>WHERE</literal> condition that selects the data to be dumped. For example, you might do <programlisting> CREATE TABLE my_config (key text, value text, standard_entry boolean); SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT standard_entry'); </programlisting> - and then make sure that <structfield>standard_entry</> is true only + and then make sure that <structfield>standard_entry</structfield> is true only in the rows created by the extension's script. </para> <para> - For sequences, the second argument of <function>pg_extension_config_dump</> + For sequences, the second argument of <function>pg_extension_config_dump</function> has no effect. </para> @@ -763,10 +763,10 @@ SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT standard_entr <para> You can alter the filter condition associated with a configuration table - by calling <function>pg_extension_config_dump</> again. (This would + by calling <function>pg_extension_config_dump</function> again. (This would typically be useful in an extension update script.) The only way to mark a table as no longer a configuration table is to dissociate it from the - extension with <command>ALTER EXTENSION ... DROP TABLE</>. + extension with <command>ALTER EXTENSION ... DROP TABLE</command>. 
</para> <para> @@ -781,7 +781,7 @@ SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT standard_entr </para> <para> - Sequences associated with <type>serial</> or <type>bigserial</> columns + Sequences associated with <type>serial</type> or <type>bigserial</type> columns need to be directly marked to dump their state. Marking their parent relation is not enough for this purpose. </para> @@ -797,20 +797,20 @@ SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT standard_entr each released version of the extension's installation script. In addition, if you want users to be able to update their databases dynamically from one version to the next, you should provide - <firstterm>update scripts</> that make the necessary changes to go from + <firstterm>update scripts</firstterm> that make the necessary changes to go from one version to the next. Update scripts have names following the pattern - <literal><replaceable>extension</>--<replaceable>oldversion</>--<replaceable>newversion</>.sql</literal> - (for example, <literal>foo--1.0--1.1.sql</> contains the commands to modify - version <literal>1.0</> of extension <literal>foo</> into version - <literal>1.1</>). + <literal><replaceable>extension</replaceable>--<replaceable>oldversion</replaceable>--<replaceable>newversion</replaceable>.sql</literal> + (for example, <literal>foo--1.0--1.1.sql</literal> contains the commands to modify + version <literal>1.0</literal> of extension <literal>foo</literal> into version + <literal>1.1</literal>). </para> <para> Given that a suitable update script is available, the command - <command>ALTER EXTENSION UPDATE</> will update an installed extension + <command>ALTER EXTENSION UPDATE</command> will update an installed extension to the specified new version. 
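
The update-script naming convention in this hunk (`extension--oldversion--newversion.sql`, with version names containing no `--` and no leading or trailing `-`) can be modeled with a small parser. This is an illustrative sketch under those documented rules, not the server's actual filename handling; `parse_update_script` is a hypothetical helper.

```python
# Illustrative parser (an assumption, not PostgreSQL code) for extension
# script file names.  Update scripts look like foo--1.0--1.1.sql, while
# foo--1.0.sql is a plain installation script.  Per the documentation, a
# version name is any string without "--" and without leading/trailing "-".

def valid_version(v: str) -> bool:
    return bool(v) and "--" not in v and not v.startswith("-") and not v.endswith("-")

def parse_update_script(filename: str):
    """Return (extension, old_version, new_version), or None if this
    is not a well-formed update-script name."""
    if not filename.endswith(".sql"):
        return None
    parts = filename[: -len(".sql")].split("--")
    if len(parts) != 3:
        return None  # e.g. foo--1.0.sql: an install script, not an update script
    ext, old, new = parts
    if valid_version(old) and valid_version(new):
        return ext, old, new
    return None

print(parse_update_script("foo--1.0--1.1.sql"))        # ('foo', '1.0', '1.1')
print(parse_update_script("foo--unpackaged--1.0.sql")) # dummy source versions work too
```

Note that `unpackaged` parses like any other version name, matching the convention for converting loose objects into an extension.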
The update script is run in the same - environment that <command>CREATE EXTENSION</> provides for installation - scripts: in particular, <varname>search_path</> is set up in the same + environment that <command>CREATE EXTENSION</command> provides for installation + scripts: in particular, <varname>search_path</varname> is set up in the same way, and any new objects created by the script are automatically added to the extension. Also, if the script chooses to drop extension member objects, they are automatically dissociated from the extension. @@ -824,56 +824,56 @@ SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT standard_entr <para> The update mechanism can be used to solve an important special case: - converting a <quote>loose</> collection of objects into an extension. + converting a <quote>loose</quote> collection of objects into an extension. Before the extension mechanism was added to <productname>PostgreSQL</productname> (in 9.1), many people wrote extension modules that simply created assorted unpackaged objects. Given an existing database containing such objects, how can we convert the objects into a properly packaged extension? Dropping them and then - doing a plain <command>CREATE EXTENSION</> is one way, but it's not + doing a plain <command>CREATE EXTENSION</command> is one way, but it's not desirable if the objects have dependencies (for example, if there are table columns of a data type created by the extension). The way to fix this situation is to create an empty extension, then use <command>ALTER - EXTENSION ADD</> to attach each pre-existing object to the extension, + EXTENSION ADD</command> to attach each pre-existing object to the extension, then finally create any new objects that are in the current extension version but were not in the unpackaged release. 
<command>CREATE - EXTENSION</> supports this case with its <literal>FROM</> <replaceable - class="parameter">old_version</> option, which causes it to not run the + EXTENSION</command> supports this case with its <literal>FROM</literal> <replaceable + class="parameter">old_version</replaceable> option, which causes it to not run the normal installation script for the target version, but instead the update script named - <literal><replaceable>extension</>--<replaceable>old_version</>--<replaceable>target_version</>.sql</literal>. + <literal><replaceable>extension</replaceable>--<replaceable>old_version</replaceable>--<replaceable>target_version</replaceable>.sql</literal>. The choice of the dummy version name to use as <replaceable - class="parameter">old_version</> is up to the extension author, though - <literal>unpackaged</> is a common convention. If you have multiple + class="parameter">old_version</replaceable> is up to the extension author, though + <literal>unpackaged</literal> is a common convention. If you have multiple prior versions you need to be able to update into extension style, use multiple dummy version names to identify them. </para> <para> - <command>ALTER EXTENSION</> is able to execute sequences of update + <command>ALTER EXTENSION</command> is able to execute sequences of update script files to achieve a requested update. For example, if only - <literal>foo--1.0--1.1.sql</> and <literal>foo--1.1--2.0.sql</> are - available, <command>ALTER EXTENSION</> will apply them in sequence if an - update to version <literal>2.0</> is requested when <literal>1.0</> is + <literal>foo--1.0--1.1.sql</literal> and <literal>foo--1.1--2.0.sql</literal> are + available, <command>ALTER EXTENSION</command> will apply them in sequence if an + update to version <literal>2.0</literal> is requested when <literal>1.0</literal> is currently installed. 
</para> <para> - <productname>PostgreSQL</> doesn't assume anything about the properties - of version names: for example, it does not know whether <literal>1.1</> - follows <literal>1.0</>. It just matches up the available version names + <productname>PostgreSQL</productname> doesn't assume anything about the properties + of version names: for example, it does not know whether <literal>1.1</literal> + follows <literal>1.0</literal>. It just matches up the available version names and follows the path that requires applying the fewest update scripts. (A version name can actually be any string that doesn't contain - <literal>--</> or leading or trailing <literal>-</>.) + <literal>--</literal> or leading or trailing <literal>-</literal>.) </para> <para> - Sometimes it is useful to provide <quote>downgrade</> scripts, for - example <literal>foo--1.1--1.0.sql</> to allow reverting the changes - associated with version <literal>1.1</>. If you do that, be careful + Sometimes it is useful to provide <quote>downgrade</quote> scripts, for + example <literal>foo--1.1--1.0.sql</literal> to allow reverting the changes + associated with version <literal>1.1</literal>. If you do that, be careful of the possibility that a downgrade script might unexpectedly get applied because it yields a shorter path. The risky case is where - there is a <quote>fast path</> update script that jumps ahead several + there is a <quote>fast path</quote> update script that jumps ahead several versions as well as a downgrade script to the fast path's start point. It might take fewer steps to apply the downgrade and then the fast path than to move ahead one version at a time. 
If the downgrade script @@ -883,14 +883,14 @@ SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT standard_entr <para> To check for unexpected update paths, use this command: <programlisting> -SELECT * FROM pg_extension_update_paths('<replaceable>extension_name</>'); +SELECT * FROM pg_extension_update_paths('<replaceable>extension_name</replaceable>'); </programlisting> This shows each pair of distinct known version names for the specified extension, together with the update path sequence that would be taken to - get from the source version to the target version, or <literal>NULL</> if + get from the source version to the target version, or <literal>NULL</literal> if there is no available update path. The path is shown in textual form - with <literal>--</> separators. You can use - <literal>regexp_split_to_array(path,'--')</> if you prefer an array + with <literal>--</literal> separators. You can use + <literal>regexp_split_to_array(path,'--')</literal> if you prefer an array format. </para> </sect2> @@ -901,24 +901,24 @@ SELECT * FROM pg_extension_update_paths('<replaceable>extension_name</>'); <para> An extension that has been around for awhile will probably exist in several versions, for which the author will need to write update scripts. - For example, if you have released a <literal>foo</> extension in - versions <literal>1.0</>, <literal>1.1</>, and <literal>1.2</>, there - should be update scripts <filename>foo--1.0--1.1.sql</> - and <filename>foo--1.1--1.2.sql</>. - Before <productname>PostgreSQL</> 10, it was necessary to also create - new script files <filename>foo--1.1.sql</> and <filename>foo--1.2.sql</> + For example, if you have released a <literal>foo</literal> extension in + versions <literal>1.0</literal>, <literal>1.1</literal>, and <literal>1.2</literal>, there + should be update scripts <filename>foo--1.0--1.1.sql</filename> + and <filename>foo--1.1--1.2.sql</filename>. 
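
The path-selection behavior described in these hunks, where the available update scripts form edges between version names and the server follows the path applying the fewest scripts, can be sketched as a breadth-first search. This is a Python model of the documented behavior, not the server's C code; `update_path` is a hypothetical function, and its output mimics the `--`-separated form shown by `pg_extension_update_paths`.

```python
from collections import deque

# Sketch (assumed behavior, modeled in Python) of update-path selection:
# each available script extension--old--new.sql is a directed edge
# (old, new); the chosen path is the one with the fewest scripts.

def update_path(edges, source, target):
    """BFS over (old, new) script edges; returns 'v1--v2--...' or None
    when no update path exists (reported as NULL by the server)."""
    frontier = deque([[source]])
    seen = {source}
    while frontier:
        path = frontier.popleft()
        if path[-1] == target:
            return "--".join(path)
        for old, new in edges:
            if old == path[-1] and new not in seen:
                seen.add(new)
                frontier.append(path + [new])
    return None

scripts = [("1.0", "1.1"), ("1.1", "2.0")]
print(update_path(scripts, "1.0", "2.0"))  # 1.0--1.1--2.0

# The documented risk: a downgrade script plus a "fast path" script can
# yield a shorter path than stepping forward one version at a time.
risky = [("1.1", "1.2"), ("1.2", "2.0"), ("2.0", "3.0"),
         ("1.1", "1.0"), ("1.0", "3.0")]
print(update_path(risky, "1.1", "3.0"))  # 1.1--1.0--3.0 (downgrade applied!)
```

The second example shows why the documentation warns about downgrade scripts: the two-script route through the downgrade beats the three forward steps.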
+ Before <productname>PostgreSQL</productname> 10, it was necessary to also create + new script files <filename>foo--1.1.sql</filename> and <filename>foo--1.2.sql</filename> that directly build the newer extension versions, or else the newer versions could not be installed directly, only by - installing <literal>1.0</> and then updating. That was tedious and + installing <literal>1.0</literal> and then updating. That was tedious and duplicative, but now it's unnecessary, because <command>CREATE - EXTENSION</> can follow update chains automatically. + EXTENSION</command> can follow update chains automatically. For example, if only the script - files <filename>foo--1.0.sql</>, <filename>foo--1.0--1.1.sql</>, - and <filename>foo--1.1--1.2.sql</> are available then a request to - install version <literal>1.2</> is honored by running those three + files <filename>foo--1.0.sql</filename>, <filename>foo--1.0--1.1.sql</filename>, + and <filename>foo--1.1--1.2.sql</filename> are available then a request to + install version <literal>1.2</literal> is honored by running those three scripts in sequence. The processing is the same as if you'd first - installed <literal>1.0</> and then updated to <literal>1.2</>. - (As with <command>ALTER EXTENSION UPDATE</>, if multiple pathways are + installed <literal>1.0</literal> and then updated to <literal>1.2</literal>. + (As with <command>ALTER EXTENSION UPDATE</command>, if multiple pathways are available then the shortest is preferred.) Arranging an extension's script files in this style can reduce the amount of maintenance effort needed to produce small updates. @@ -929,10 +929,10 @@ SELECT * FROM pg_extension_update_paths('<replaceable>extension_name</>'); maintained in this style, keep in mind that each version needs a control file even if it has no stand-alone installation script, as that control file will determine how the implicit update to that version is performed. 
- For example, if <filename>foo--1.0.control</> specifies <literal>requires - = 'bar'</> but <literal>foo</>'s other control files do not, the - extension's dependency on <literal>bar</> will be dropped when updating - from <literal>1.0</> to another version. + For example, if <filename>foo--1.0.control</filename> specifies <literal>requires + = 'bar'</literal> but <literal>foo</literal>'s other control files do not, the + extension's dependency on <literal>bar</literal> will be dropped when updating + from <literal>1.0</literal> to another version. </para> </sect2> @@ -940,14 +940,14 @@ SELECT * FROM pg_extension_update_paths('<replaceable>extension_name</>'); <title>Extension Example</title> <para> - Here is a complete example of an <acronym>SQL</>-only + Here is a complete example of an <acronym>SQL</acronym>-only extension, a two-element composite type that can store any type of value - in its slots, which are named <quote>k</> and <quote>v</>. Non-text + in its slots, which are named <quote>k</quote> and <quote>v</quote>. Non-text values are automatically coerced to text for storage. 
</para> <para> - The script file <filename>pair--1.0.sql</> looks like this: + The script file <filename>pair--1.0.sql</filename> looks like this: <programlisting><![CDATA[ -- complain if script is sourced in psql, rather than via CREATE EXTENSION @@ -976,7 +976,7 @@ CREATE OPERATOR ~> (LEFTARG = text, RIGHTARG = text, PROCEDURE = pair); </para> <para> - The control file <filename>pair.control</> looks like this: + The control file <filename>pair.control</filename> looks like this: <programlisting> # pair extension @@ -988,7 +988,7 @@ relocatable = true <para> While you hardly need a makefile to install these two files into the - correct directory, you could use a <filename>Makefile</> containing this: + correct directory, you could use a <filename>Makefile</filename> containing this: <programlisting> EXTENSION = pair @@ -1000,9 +1000,9 @@ include $(PGXS) </programlisting> This makefile relies on <acronym>PGXS</acronym>, which is described - in <xref linkend="extend-pgxs">. The command <literal>make install</> + in <xref linkend="extend-pgxs">. The command <literal>make install</literal> will install the control and script files into the correct - directory as reported by <application>pg_config</>. + directory as reported by <application>pg_config</application>. </para> <para> @@ -1022,16 +1022,16 @@ include $(PGXS) <para> If you are thinking about distributing your - <productname>PostgreSQL</> extension modules, setting up a + <productname>PostgreSQL</productname> extension modules, setting up a portable build system for them can be fairly difficult. Therefore - the <productname>PostgreSQL</> installation provides a build + the <productname>PostgreSQL</productname> installation provides a build infrastructure for extensions, called <acronym>PGXS</acronym>, so that simple extension modules can be built simply against an already installed server. 
<acronym>PGXS</acronym> is mainly intended for extensions that include C code, although it can be used for pure-SQL extensions too. Note that <acronym>PGXS</acronym> is not intended to be a universal build system framework that can be used - to build any software interfacing to <productname>PostgreSQL</>; + to build any software interfacing to <productname>PostgreSQL</productname>; it simply automates common build rules for simple server extension modules. For more complicated packages, you might need to write your own build system. @@ -1115,7 +1115,7 @@ include $(PGXS) <term><varname>MODULEDIR</varname></term> <listitem> <para> - subdirectory of <literal><replaceable>prefix</>/share</literal> + subdirectory of <literal><replaceable>prefix</replaceable>/share</literal> into which DATA and DOCS files should be installed (if not set, default is <literal>extension</literal> if <varname>EXTENSION</varname> is set, @@ -1198,7 +1198,7 @@ include $(PGXS) <term><varname>REGRESS_OPTS</varname></term> <listitem> <para> - additional switches to pass to <application>pg_regress</> + additional switches to pass to <application>pg_regress</application> </para> </listitem> </varlistentry> @@ -1252,10 +1252,10 @@ include $(PGXS) <term><varname>PG_CONFIG</varname></term> <listitem> <para> - path to <application>pg_config</> program for the + path to <application>pg_config</application> program for the <productname>PostgreSQL</productname> installation to build against - (typically just <literal>pg_config</> to use the first one in your - <varname>PATH</>) + (typically just <literal>pg_config</literal> to use the first one in your + <varname>PATH</varname>) </para> </listitem> </varlistentry> @@ -1270,7 +1270,7 @@ include $(PGXS) compiled and installed for the <productname>PostgreSQL</productname> installation that corresponds to the first <command>pg_config</command> program - found in your <varname>PATH</>. 
You can use a different installation by + found in your <varname>PATH</varname>. You can use a different installation by setting <varname>PG_CONFIG</varname> to point to its <command>pg_config</command> program, either within the makefile or on the <literal>make</literal> command line. @@ -1293,7 +1293,7 @@ make -f /path/to/extension/source/tree/Makefile install <para> Alternatively, you can set up a directory for a VPATH build in a similar way to how it is done for the core code. One way to do this is using the - core script <filename>config/prep_buildtree</>. Once this has been done + core script <filename>config/prep_buildtree</filename>. Once this has been done you can build by setting the <literal>make</literal> variable <varname>VPATH</varname> like this: <programlisting> @@ -1304,18 +1304,18 @@ make VPATH=/path/to/extension/source/tree install </para> <para> - The scripts listed in the <varname>REGRESS</> variable are used for + The scripts listed in the <varname>REGRESS</varname> variable are used for regression testing of your module, which can be invoked by <literal>make - installcheck</literal> after doing <literal>make install</>. For this to + installcheck</literal> after doing <literal>make install</literal>. For this to work you must have a running <productname>PostgreSQL</productname> server. - The script files listed in <varname>REGRESS</> must appear in a + The script files listed in <varname>REGRESS</varname> must appear in a subdirectory named <literal>sql/</literal> in your extension's directory. These files must have extension <literal>.sql</literal>, which must not be included in the <varname>REGRESS</varname> list in the makefile. For each test there should also be a file containing the expected output in a subdirectory named <literal>expected/</literal>, with the same stem and extension <literal>.out</literal>. 
<literal>make installcheck</literal> - executes each test script with <application>psql</>, and compares the + executes each test script with <application>psql</application>, and compares the resulting output to the matching expected file. Any differences will be written to the file <literal>regression.diffs</literal> in <command>diff -c</command> format. Note that trying to run a test that is missing its diff --git a/doc/src/sgml/external-projects.sgml b/doc/src/sgml/external-projects.sgml index 82eaf4a3554..03fd18aeb80 100644 --- a/doc/src/sgml/external-projects.sgml +++ b/doc/src/sgml/external-projects.sgml @@ -42,7 +42,7 @@ All other language interfaces are external projects and are distributed separately. <xref linkend="language-interface-table"> includes a list of some of these projects. Note that some of these packages might not be - released under the same license as <productname>PostgreSQL</>. For more + released under the same license as <productname>PostgreSQL</productname>. For more information on each language interface, including licensing terms, refer to its website and documentation. </para> @@ -145,8 +145,8 @@ <para> There are several administration tools available for - <productname>PostgreSQL</>. The most popular is - <application><ulink url="http://www.pgadmin.org/">pgAdmin III</ulink></>, + <productname>PostgreSQL</productname>. The most popular is + <application><ulink url="http://www.pgadmin.org/">pgAdmin III</ulink></application>, and there are several commercially available ones as well. </para> </sect1> @@ -172,7 +172,7 @@ and maintained outside the core <productname>PostgreSQL</productname> distribution. <xref linkend="pl-language-table"> lists some of these packages. Note that some of these projects might not be released under the same - license as <productname>PostgreSQL</>. For more information on each + license as <productname>PostgreSQL</productname>. 
For more information on each procedural language, including licensing information, refer to its website and documentation. </para> @@ -233,17 +233,17 @@ </indexterm> <para> - <productname>PostgreSQL</> is designed to be easily extensible. For + <productname>PostgreSQL</productname> is designed to be easily extensible. For this reason, extensions loaded into the database can function just like features that are built in. The - <filename>contrib/</> directory shipped with the source code + <filename>contrib/</filename> directory shipped with the source code contains several extensions, which are described in <xref linkend="contrib">. Other extensions are developed independently, like <application><ulink - url="http://postgis.net/">PostGIS</ulink></>. Even - <productname>PostgreSQL</> replication solutions can be developed + url="http://postgis.net/">PostGIS</ulink></application>. Even + <productname>PostgreSQL</productname> replication solutions can be developed externally. For example, <application> <ulink - url="http://www.slony.info">Slony-I</ulink></> is a popular + url="http://www.slony.info">Slony-I</ulink></application> is a popular master/standby replication solution that is developed independently from the core project. </para> diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml index e63e29fd96f..4250a03f16e 100644 --- a/doc/src/sgml/fdwhandler.sgml +++ b/doc/src/sgml/fdwhandler.sgml @@ -21,7 +21,7 @@ <para> The foreign data wrappers included in the standard distribution are good references when trying to write your own. Look into the - <filename>contrib</> subdirectory of the source tree. + <filename>contrib</filename> subdirectory of the source tree. The <xref linkend="sql-createforeigndatawrapper"> reference page also has some useful details. 
</para> @@ -70,10 +70,10 @@ representing the type of object the options are associated with (in the form of the OID of the system catalog the object would be stored in, either - <literal>ForeignDataWrapperRelationId</>, - <literal>ForeignServerRelationId</>, - <literal>UserMappingRelationId</>, - or <literal>ForeignTableRelationId</>). + <literal>ForeignDataWrapperRelationId</literal>, + <literal>ForeignServerRelationId</literal>, + <literal>UserMappingRelationId</literal>, + or <literal>ForeignTableRelationId</literal>). If no validator function is supplied, options are not checked at object creation time or object alteration time. </para> @@ -84,14 +84,14 @@ <title>Foreign Data Wrapper Callback Routines</title> <para> - The FDW handler function returns a palloc'd <structname>FdwRoutine</> + The FDW handler function returns a palloc'd <structname>FdwRoutine</structname> struct containing pointers to the callback functions described below. The scan-related functions are required, the rest are optional. </para> <para> - The <structname>FdwRoutine</> struct type is declared in - <filename>src/include/foreign/fdwapi.h</>, which see for additional + The <structname>FdwRoutine</structname> struct type is declared in + <filename>src/include/foreign/fdwapi.h</filename>, which see for additional details. </para> @@ -108,20 +108,20 @@ GetForeignRelSize(PlannerInfo *root, Obtain relation size estimates for a foreign table. This is called at the beginning of planning for a query that scans a foreign table. - <literal>root</> is the planner's global information about the query; - <literal>baserel</> is the planner's information about this table; and - <literal>foreigntableid</> is the <structname>pg_class</> OID of the - foreign table. 
(<literal>foreigntableid</> could be obtained from the + <literal>root</literal> is the planner's global information about the query; + <literal>baserel</literal> is the planner's information about this table; and + <literal>foreigntableid</literal> is the <structname>pg_class</structname> OID of the + foreign table. (<literal>foreigntableid</literal> could be obtained from the planner data structures, but it's passed explicitly to save effort.) </para> <para> - This function should update <literal>baserel->rows</> to be the + This function should update <literal>baserel->rows</literal> to be the expected number of rows returned by the table scan, after accounting for the filtering done by the restriction quals. The initial value of - <literal>baserel->rows</> is just a constant default estimate, which + <literal>baserel->rows</literal> is just a constant default estimate, which should be replaced if at all possible. The function may also choose to - update <literal>baserel->width</> if it can compute a better estimate + update <literal>baserel->width</literal> if it can compute a better estimate of the average result row width. </para> @@ -139,18 +139,18 @@ GetForeignPaths(PlannerInfo *root, Create possible access paths for a scan on a foreign table. This is called during query planning. - The parameters are the same as for <function>GetForeignRelSize</>, + The parameters are the same as for <function>GetForeignRelSize</function>, which has already been called. </para> <para> This function must generate at least one access path - (<structname>ForeignPath</> node) for a scan on the foreign table and - must call <function>add_path</> to add each such path to - <literal>baserel->pathlist</>. It's recommended to use - <function>create_foreignscan_path</> to build the - <structname>ForeignPath</> nodes. 
The function can generate multiple - access paths, e.g., a path which has valid <literal>pathkeys</> to + (<structname>ForeignPath</structname> node) for a scan on the foreign table and + must call <function>add_path</function> to add each such path to + <literal>baserel->pathlist</literal>. It's recommended to use + <function>create_foreignscan_path</function> to build the + <structname>ForeignPath</structname> nodes. The function can generate multiple + access paths, e.g., a path which has valid <literal>pathkeys</literal> to represent a pre-sorted result. Each access path must contain cost estimates, and can contain any FDW-private information that is needed to identify the specific scan method intended. @@ -172,24 +172,24 @@ GetForeignPlan(PlannerInfo *root, Plan *outer_plan); </programlisting> - Create a <structname>ForeignScan</> plan node from the selected foreign + Create a <structname>ForeignScan</structname> plan node from the selected foreign access path. This is called at the end of query planning. - The parameters are as for <function>GetForeignRelSize</>, plus - the selected <structname>ForeignPath</> (previously produced by - <function>GetForeignPaths</>, <function>GetForeignJoinPaths</>, - or <function>GetForeignUpperPaths</>), + The parameters are as for <function>GetForeignRelSize</function>, plus + the selected <structname>ForeignPath</structname> (previously produced by + <function>GetForeignPaths</function>, <function>GetForeignJoinPaths</function>, + or <function>GetForeignUpperPaths</function>), the target list to be emitted by the plan node, the restriction clauses to be enforced by the plan node, - and the outer subplan of the <structname>ForeignScan</>, - which is used for rechecks performed by <function>RecheckForeignScan</>. + and the outer subplan of the <structname>ForeignScan</structname>, + which is used for rechecks performed by <function>RecheckForeignScan</function>. 
(If the path is for a join rather than a base - relation, <literal>foreigntableid</> is <literal>InvalidOid</>.) + relation, <literal>foreigntableid</literal> is <literal>InvalidOid</literal>.) </para> <para> - This function must create and return a <structname>ForeignScan</> plan - node; it's recommended to use <function>make_foreignscan</> to build the - <structname>ForeignScan</> node. + This function must create and return a <structname>ForeignScan</structname> plan + node; it's recommended to use <function>make_foreignscan</function> to build the + <structname>ForeignScan</structname> node. </para> <para> @@ -206,22 +206,22 @@ BeginForeignScan(ForeignScanState *node, Begin executing a foreign scan. This is called during executor startup. It should perform any initialization needed before the scan can start, but not start executing the actual scan (that should be done upon the - first call to <function>IterateForeignScan</>). - The <structname>ForeignScanState</> node has already been created, but - its <structfield>fdw_state</> field is still NULL. Information about + first call to <function>IterateForeignScan</function>). + The <structname>ForeignScanState</structname> node has already been created, but + its <structfield>fdw_state</structfield> field is still NULL. Information about the table to scan is accessible through the - <structname>ForeignScanState</> node (in particular, from the underlying - <structname>ForeignScan</> plan node, which contains any FDW-private - information provided by <function>GetForeignPlan</>). - <literal>eflags</> contains flag bits describing the executor's + <structname>ForeignScanState</structname> node (in particular, from the underlying + <structname>ForeignScan</structname> plan node, which contains any FDW-private + information provided by <function>GetForeignPlan</function>). + <literal>eflags</literal> contains flag bits describing the executor's operating mode for this plan node. 
</para> <para> - Note that when <literal>(eflags & EXEC_FLAG_EXPLAIN_ONLY)</> is + Note that when <literal>(eflags & EXEC_FLAG_EXPLAIN_ONLY)</literal> is true, this function should not perform any externally-visible actions; it should only do the minimum required to make the node state valid - for <function>ExplainForeignScan</> and <function>EndForeignScan</>. + for <function>ExplainForeignScan</function> and <function>EndForeignScan</function>. </para> <para> @@ -231,22 +231,22 @@ IterateForeignScan(ForeignScanState *node); </programlisting> Fetch one row from the foreign source, returning it in a tuple table slot - (the node's <structfield>ScanTupleSlot</> should be used for this + (the node's <structfield>ScanTupleSlot</structfield> should be used for this purpose). Return NULL if no more rows are available. The tuple table slot infrastructure allows either a physical or virtual tuple to be returned; in most cases the latter choice is preferable from a performance standpoint. Note that this is called in a short-lived memory context that will be reset between invocations. Create a memory context - in <function>BeginForeignScan</> if you need longer-lived storage, or use - the <structfield>es_query_cxt</> of the node's <structname>EState</>. + in <function>BeginForeignScan</function> if you need longer-lived storage, or use + the <structfield>es_query_cxt</structfield> of the node's <structname>EState</structname>. </para> <para> - The rows returned must match the <structfield>fdw_scan_tlist</> target + The rows returned must match the <structfield>fdw_scan_tlist</structfield> target list if one was supplied, otherwise they must match the row type of the foreign table being scanned. 
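The <literal>EXEC_FLAG_EXPLAIN_ONLY</literal> guard described above can be sketched as follows. The flag value and the "open a connection" side effect are both stubs (the real flag bit is defined in the server's executor headers); only the guard pattern itself is the point.

```c
#include <assert.h>
#include <stdbool.h>

/* Stub flag bit; the real EXEC_FLAG_EXPLAIN_ONLY lives in
 * src/include/executor/executor.h. */
#define STUB_EXEC_FLAG_EXPLAIN_ONLY 0x0001

/* Records whether a (hypothetical) remote connection was opened. */
static bool connection_opened = false;

/* Sketch of the guard in BeginForeignScan: under EXPLAIN without
 * ANALYZE, skip externally-visible actions and do only the minimum
 * needed to make the node state valid. */
static void
stub_BeginForeignScan(int eflags)
{
    if (eflags & STUB_EXEC_FLAG_EXPLAIN_ONLY)
        return;                 /* minimal node setup only */
    connection_opened = true;   /* a real FDW would connect here */
}
```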
If you choose to optimize away fetching columns that are not needed, you should insert nulls in those column - positions, or else generate a <structfield>fdw_scan_tlist</> list with + positions, or else generate a <structfield>fdw_scan_tlist</structfield> list with those columns omitted. </para> @@ -307,11 +307,11 @@ GetForeignJoinPaths(PlannerInfo *root, Create possible access paths for a join of two (or more) foreign tables that all belong to the same foreign server. This optional function is called during query planning. As - with <function>GetForeignPaths</>, this function should - generate <structname>ForeignPath</> path(s) for the - supplied <literal>joinrel</>, and call <function>add_path</> to add these + with <function>GetForeignPaths</function>, this function should + generate <structname>ForeignPath</structname> path(s) for the + supplied <literal>joinrel</literal>, and call <function>add_path</function> to add these paths to the set of paths considered for the join. But unlike - <function>GetForeignPaths</>, it is not necessary that this function + <function>GetForeignPaths</function>, it is not necessary that this function succeed in creating at least one path, since paths involving local joining are always possible. </para> @@ -323,20 +323,20 @@ GetForeignJoinPaths(PlannerInfo *root, </para> <para> - If a <structname>ForeignPath</> path is chosen for the join, it will + If a <structname>ForeignPath</structname> path is chosen for the join, it will represent the entire join process; paths generated for the component tables and subsidiary joins will not be used. Subsequent processing of the join path proceeds much as it does for a path scanning a single - foreign table. One difference is that the <structfield>scanrelid</> of - the resulting <structname>ForeignScan</> plan node should be set to zero, + foreign table. 
One difference is that the <structfield>scanrelid</structfield> of + the resulting <structname>ForeignScan</structname> plan node should be set to zero, since there is no single relation that it represents; instead, - the <structfield>fs_relids</> field of the <structname>ForeignScan</> + the <structfield>fs_relids</structfield> field of the <structname>ForeignScan</structname> node represents the set of relations that were joined. (The latter field is set up automatically by the core planner code, and need not be filled by the FDW.) Another difference is that, because the column list for a remote join cannot be found from the system catalogs, the FDW must - fill <structfield>fdw_scan_tlist</> with an appropriate list - of <structfield>TargetEntry</> nodes, representing the set of columns + fill <structfield>fdw_scan_tlist</structfield> with an appropriate list + of <structfield>TargetEntry</structfield> nodes, representing the set of columns it will supply at run time in the tuples it returns. </para> @@ -361,27 +361,27 @@ GetForeignUpperPaths(PlannerInfo *root, RelOptInfo *input_rel, RelOptInfo *output_rel); </programlisting> - Create possible access paths for <firstterm>upper relation</> processing, + Create possible access paths for <firstterm>upper relation</firstterm> processing, which is the planner's term for all post-scan/join query processing, such as aggregation, window functions, sorting, and table updates. This optional function is called during query planning. Currently, it is called only if all base relation(s) involved in the query belong to the - same FDW. This function should generate <structname>ForeignPath</> + same FDW. This function should generate <structname>ForeignPath</structname> path(s) for any post-scan/join processing that the FDW knows how to - perform remotely, and call <function>add_path</> to add these paths to - the indicated upper relation. 
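The join-path convention described above — <structfield>scanrelid</structfield> zero, with <structfield>fs_relids</structfield> carrying the set of joined relations — can be modeled with a toy bitmask. In the server <structfield>fs_relids</structfield> is a `Bitmapset` of range-table indexes; the plain `uint64_t` below is purely illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for the planner's Bitmapset of range-table indexes. */
typedef uint64_t StubRelids;

static StubRelids
stub_add_relid(StubRelids set, int rtindex)
{
    return set | ((uint64_t) 1 << rtindex);
}

/* Minimal model of the two ForeignScan fields discussed above. */
typedef struct StubForeignScan
{
    int        scanrelid;   /* zero for a join, per the text */
    StubRelids fs_relids;   /* which relations were joined */
} StubForeignScan;
```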
As with <function>GetForeignJoinPaths</>, + perform remotely, and call <function>add_path</function> to add these paths to + the indicated upper relation. As with <function>GetForeignJoinPaths</function>, it is not necessary that this function succeed in creating any paths, since paths involving local processing are always possible. </para> <para> - The <literal>stage</> parameter identifies which post-scan/join step is - currently being considered. <literal>output_rel</> is the upper relation + The <literal>stage</literal> parameter identifies which post-scan/join step is + currently being considered. <literal>output_rel</literal> is the upper relation that should receive paths representing computation of this step, - and <literal>input_rel</> is the relation representing the input to this - step. (Note that <structname>ForeignPath</> paths added - to <literal>output_rel</> would typically not have any direct dependency - on paths of the <literal>input_rel</>, since their processing is expected + and <literal>input_rel</literal> is the relation representing the input to this + step. (Note that <structname>ForeignPath</structname> paths added + to <literal>output_rel</literal> would typically not have any direct dependency + on paths of the <literal>input_rel</literal>, since their processing is expected to be done externally. However, examining paths previously generated for the previous processing step can be useful to avoid redundant planning work.) @@ -409,25 +409,25 @@ AddForeignUpdateTargets(Query *parsetree, Relation target_relation); </programlisting> - <command>UPDATE</> and <command>DELETE</> operations are performed + <command>UPDATE</command> and <command>DELETE</command> operations are performed against rows previously fetched by the table-scanning functions. The FDW may need extra information, such as a row ID or the values of primary-key columns, to ensure that it can identify the exact row to update or delete. 
To support that, this function can add extra hidden, - or <quote>junk</>, target columns to the list of columns that are to be - retrieved from the foreign table during an <command>UPDATE</> or - <command>DELETE</>. + or <quote>junk</quote>, target columns to the list of columns that are to be + retrieved from the foreign table during an <command>UPDATE</command> or + <command>DELETE</command>. </para> <para> - To do that, add <structname>TargetEntry</> items to - <literal>parsetree->targetList</>, containing expressions for the + To do that, add <structname>TargetEntry</structname> items to + <literal>parsetree->targetList</literal>, containing expressions for the extra values to be fetched. Each such entry must be marked - <structfield>resjunk</> = <literal>true</>, and must have a distinct - <structfield>resname</> that will identify it at execution time. - Avoid using names matching <literal>ctid<replaceable>N</></literal>, + <structfield>resjunk</structfield> = <literal>true</literal>, and must have a distinct + <structfield>resname</structfield> that will identify it at execution time. + Avoid using names matching <literal>ctid<replaceable>N</replaceable></literal>, <literal>wholerow</literal>, or - <literal>wholerow<replaceable>N</></literal>, as the core system can + <literal>wholerow<replaceable>N</replaceable></literal>, as the core system can generate junk columns of these names. </para> @@ -435,16 +435,16 @@ AddForeignUpdateTargets(Query *parsetree, This function is called in the rewriter, not the planner, so the information available is a bit different from that available to the planning routines. - <literal>parsetree</> is the parse tree for the <command>UPDATE</> or - <command>DELETE</> command, while <literal>target_rte</> and - <literal>target_relation</> describe the target foreign table. 
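The naming rule above — avoid <literal>ctid<replaceable>N</replaceable></literal>, <literal>wholerow</literal>, and <literal>wholerow<replaceable>N</replaceable></literal> — can be checked mechanically. The helper below is hypothetical (no such function exists in the server); it merely encodes the reserved patterns the text lists.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <ctype.h>

/* Hypothetical helper: true if a proposed resname collides with the
 * junk-column names the core system can generate, i.e. "ctid<N>",
 * "wholerow", or "wholerow<N>" as described above. */
static bool
stub_is_reserved_junk_name(const char *name)
{
    const char *rest;

    if (strcmp(name, "wholerow") == 0)
        return true;                    /* bare "wholerow" is reserved */
    if (strncmp(name, "ctid", 4) == 0)
        rest = name + 4;
    else if (strncmp(name, "wholerow", 8) == 0)
        rest = name + 8;
    else
        return false;

    if (*rest == '\0')
        return false;                   /* bare "ctid" is not in the list */
    while (*rest)                       /* require an all-digit suffix */
        if (!isdigit((unsigned char) *rest++))
            return false;
    return true;
}
```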
+ <literal>parsetree</literal> is the parse tree for the <command>UPDATE</command> or + <command>DELETE</command> command, while <literal>target_rte</literal> and + <literal>target_relation</literal> describe the target foreign table. </para> <para> - If the <function>AddForeignUpdateTargets</> pointer is set to - <literal>NULL</>, no extra target expressions are added. - (This will make it impossible to implement <command>DELETE</> - operations, though <command>UPDATE</> may still be feasible if the FDW + If the <function>AddForeignUpdateTargets</function> pointer is set to + <literal>NULL</literal>, no extra target expressions are added. + (This will make it impossible to implement <command>DELETE</command> + operations, though <command>UPDATE</command> may still be feasible if the FDW relies on an unchanging primary key to identify rows.) </para> @@ -459,21 +459,21 @@ PlanForeignModify(PlannerInfo *root, Perform any additional planning actions needed for an insert, update, or delete on a foreign table. This function generates the FDW-private - information that will be attached to the <structname>ModifyTable</> plan + information that will be attached to the <structname>ModifyTable</structname> plan node that performs the update action. This private information must - have the form of a <literal>List</>, and will be delivered to - <function>BeginForeignModify</> during the execution stage. + have the form of a <literal>List</literal>, and will be delivered to + <function>BeginForeignModify</function> during the execution stage. </para> <para> - <literal>root</> is the planner's global information about the query. - <literal>plan</> is the <structname>ModifyTable</> plan node, which is - complete except for the <structfield>fdwPrivLists</> field. - <literal>resultRelation</> identifies the target foreign table by its - range table index. 
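A convention that recurs throughout these routines is that a <literal>NULL</literal> function pointer selects a default behavior. A cut-down sketch of the caller-side pattern, with a one-callback stand-in for <structname>FdwRoutine</structname> and an `int` standing in for the `List` of private data (0 modeling NIL):

```c
#include <assert.h>
#include <stddef.h>

/* Cut-down stand-in for FdwRoutine: a single optional callback. */
typedef struct StubFdwRoutine
{
    int (*PlanForeignModify)(int resultRelation);   /* may be NULL */
} StubFdwRoutine;

/* Caller-side convention described above: a NULL pointer means "no
 * additional plan-time actions", and the private data delivered to
 * BeginForeignModify is empty (0 here models a NIL list). */
static int
stub_plan_modify(const StubFdwRoutine *routine, int resultRelation)
{
    if (routine->PlanForeignModify == NULL)
        return 0;                       /* nothing generated */
    return routine->PlanForeignModify(resultRelation);
}

/* Example callback producing some fdw_private payload. */
static int
stub_plan_impl(int resultRelation)
{
    return resultRelation * 10;
}
```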
<literal>subplan_index</> identifies which target of - the <structname>ModifyTable</> plan node this is, counting from zero; - use this if you want to index into <literal>plan->plans</> or other - substructure of the <literal>plan</> node. + <literal>root</literal> is the planner's global information about the query. + <literal>plan</literal> is the <structname>ModifyTable</structname> plan node, which is + complete except for the <structfield>fdwPrivLists</structfield> field. + <literal>resultRelation</literal> identifies the target foreign table by its + range table index. <literal>subplan_index</literal> identifies which target of + the <structname>ModifyTable</structname> plan node this is, counting from zero; + use this if you want to index into <literal>plan->plans</literal> or other + substructure of the <literal>plan</literal> node. </para> <para> @@ -481,10 +481,10 @@ PlanForeignModify(PlannerInfo *root, </para> <para> - If the <function>PlanForeignModify</> pointer is set to - <literal>NULL</>, no additional plan-time actions are taken, and the - <literal>fdw_private</> list delivered to - <function>BeginForeignModify</> will be NIL. + If the <function>PlanForeignModify</function> pointer is set to + <literal>NULL</literal>, no additional plan-time actions are taken, and the + <literal>fdw_private</literal> list delivered to + <function>BeginForeignModify</function> will be NIL. </para> <para> @@ -500,37 +500,37 @@ BeginForeignModify(ModifyTableState *mtstate, Begin executing a foreign table modification operation. This routine is called during executor startup. It should perform any initialization needed prior to the actual table modifications. 
Subsequently, - <function>ExecForeignInsert</>, <function>ExecForeignUpdate</> or - <function>ExecForeignDelete</> will be called for each tuple to be + <function>ExecForeignInsert</function>, <function>ExecForeignUpdate</function> or + <function>ExecForeignDelete</function> will be called for each tuple to be inserted, updated, or deleted. </para> <para> - <literal>mtstate</> is the overall state of the - <structname>ModifyTable</> plan node being executed; global data about + <literal>mtstate</literal> is the overall state of the + <structname>ModifyTable</structname> plan node being executed; global data about the plan and execution state is available via this structure. - <literal>rinfo</> is the <structname>ResultRelInfo</> struct describing - the target foreign table. (The <structfield>ri_FdwState</> field of - <structname>ResultRelInfo</> is available for the FDW to store any + <literal>rinfo</literal> is the <structname>ResultRelInfo</structname> struct describing + the target foreign table. (The <structfield>ri_FdwState</structfield> field of + <structname>ResultRelInfo</structname> is available for the FDW to store any private state it needs for this operation.) - <literal>fdw_private</> contains the private data generated by - <function>PlanForeignModify</>, if any. - <literal>subplan_index</> identifies which target of - the <structname>ModifyTable</> plan node this is. - <literal>eflags</> contains flag bits describing the executor's + <literal>fdw_private</literal> contains the private data generated by + <function>PlanForeignModify</function>, if any. + <literal>subplan_index</literal> identifies which target of + the <structname>ModifyTable</structname> plan node this is. + <literal>eflags</literal> contains flag bits describing the executor's operating mode for this plan node. 
</para> <para> - Note that when <literal>(eflags & EXEC_FLAG_EXPLAIN_ONLY)</> is + Note that when <literal>(eflags & EXEC_FLAG_EXPLAIN_ONLY)</literal> is true, this function should not perform any externally-visible actions; it should only do the minimum required to make the node state valid - for <function>ExplainForeignModify</> and <function>EndForeignModify</>. + for <function>ExplainForeignModify</function> and <function>EndForeignModify</function>. </para> <para> - If the <function>BeginForeignModify</> pointer is set to - <literal>NULL</>, no action is taken during executor startup. + If the <function>BeginForeignModify</function> pointer is set to + <literal>NULL</literal>, no action is taken during executor startup. </para> <para> @@ -543,16 +543,16 @@ ExecForeignInsert(EState *estate, </programlisting> Insert one tuple into the foreign table. - <literal>estate</> is global execution state for the query. - <literal>rinfo</> is the <structname>ResultRelInfo</> struct describing + <literal>estate</literal> is global execution state for the query. + <literal>rinfo</literal> is the <structname>ResultRelInfo</structname> struct describing the target foreign table. - <literal>slot</> contains the tuple to be inserted; it will match the + <literal>slot</literal> contains the tuple to be inserted; it will match the row-type definition of the foreign table. - <literal>planSlot</> contains the tuple that was generated by the - <structname>ModifyTable</> plan node's subplan; it differs from - <literal>slot</> in possibly containing additional <quote>junk</> - columns. (The <literal>planSlot</> is typically of little interest - for <command>INSERT</> cases, but is provided for completeness.) + <literal>planSlot</literal> contains the tuple that was generated by the + <structname>ModifyTable</structname> plan node's subplan; it differs from + <literal>slot</literal> in possibly containing additional <quote>junk</quote> + columns. 
(The <literal>planSlot</literal> is typically of little interest + for <command>INSERT</command> cases, but is provided for completeness.) </para> <para> @@ -560,22 +560,22 @@ ExecForeignInsert(EState *estate, inserted (this might differ from the data supplied, for example as a result of trigger actions), or NULL if no row was actually inserted (again, typically as a result of triggers). The passed-in - <literal>slot</> can be re-used for this purpose. + <literal>slot</literal> can be re-used for this purpose. </para> <para> - The data in the returned slot is used only if the <command>INSERT</> - query has a <literal>RETURNING</> clause or the foreign table has - an <literal>AFTER ROW</> trigger. Triggers require all columns, but the + The data in the returned slot is used only if the <command>INSERT</command> + query has a <literal>RETURNING</literal> clause or the foreign table has + an <literal>AFTER ROW</literal> trigger. Triggers require all columns, but the FDW could choose to optimize away returning some or all columns depending - on the contents of the <literal>RETURNING</> clause. Regardless, some + on the contents of the <literal>RETURNING</literal> clause. Regardless, some slot must be returned to indicate success, or the query's reported row count will be wrong. </para> <para> - If the <function>ExecForeignInsert</> pointer is set to - <literal>NULL</>, attempts to insert into the foreign table will fail + If the <function>ExecForeignInsert</function> pointer is set to + <literal>NULL</literal>, attempts to insert into the foreign table will fail with an error message. </para> @@ -589,16 +589,16 @@ ExecForeignUpdate(EState *estate, </programlisting> Update one tuple in the foreign table. - <literal>estate</> is global execution state for the query. - <literal>rinfo</> is the <structname>ResultRelInfo</> struct describing + <literal>estate</literal> is global execution state for the query. 
+ <literal>rinfo</literal> is the <structname>ResultRelInfo</structname> struct describing the target foreign table. - <literal>slot</> contains the new data for the tuple; it will match the + <literal>slot</literal> contains the new data for the tuple; it will match the row-type definition of the foreign table. - <literal>planSlot</> contains the tuple that was generated by the - <structname>ModifyTable</> plan node's subplan; it differs from - <literal>slot</> in possibly containing additional <quote>junk</> + <literal>planSlot</literal> contains the tuple that was generated by the + <structname>ModifyTable</structname> plan node's subplan; it differs from + <literal>slot</literal> in possibly containing additional <quote>junk</quote> columns. In particular, any junk columns that were requested by - <function>AddForeignUpdateTargets</> will be available from this slot. + <function>AddForeignUpdateTargets</function> will be available from this slot. </para> <para> @@ -606,22 +606,22 @@ ExecForeignUpdate(EState *estate, updated (this might differ from the data supplied, for example as a result of trigger actions), or NULL if no row was actually updated (again, typically as a result of triggers). The passed-in - <literal>slot</> can be re-used for this purpose. + <literal>slot</literal> can be re-used for this purpose. </para> <para> - The data in the returned slot is used only if the <command>UPDATE</> - query has a <literal>RETURNING</> clause or the foreign table has - an <literal>AFTER ROW</> trigger. Triggers require all columns, but the + The data in the returned slot is used only if the <command>UPDATE</command> + query has a <literal>RETURNING</literal> clause or the foreign table has + an <literal>AFTER ROW</literal> trigger. Triggers require all columns, but the FDW could choose to optimize away returning some or all columns depending - on the contents of the <literal>RETURNING</> clause. 
Regardless, some + on the contents of the <literal>RETURNING</literal> clause. Regardless, some slot must be returned to indicate success, or the query's reported row count will be wrong. </para> <para> - If the <function>ExecForeignUpdate</> pointer is set to - <literal>NULL</>, attempts to update the foreign table will fail + If the <function>ExecForeignUpdate</function> pointer is set to + <literal>NULL</literal>, attempts to update the foreign table will fail with an error message. </para> @@ -635,37 +635,37 @@ ExecForeignDelete(EState *estate, </programlisting> Delete one tuple from the foreign table. - <literal>estate</> is global execution state for the query. - <literal>rinfo</> is the <structname>ResultRelInfo</> struct describing + <literal>estate</literal> is global execution state for the query. + <literal>rinfo</literal> is the <structname>ResultRelInfo</structname> struct describing the target foreign table. - <literal>slot</> contains nothing useful upon call, but can be used to + <literal>slot</literal> contains nothing useful upon call, but can be used to hold the returned tuple. - <literal>planSlot</> contains the tuple that was generated by the - <structname>ModifyTable</> plan node's subplan; in particular, it will + <literal>planSlot</literal> contains the tuple that was generated by the + <structname>ModifyTable</structname> plan node's subplan; in particular, it will carry any junk columns that were requested by - <function>AddForeignUpdateTargets</>. The junk column(s) must be used + <function>AddForeignUpdateTargets</function>. The junk column(s) must be used to identify the tuple to be deleted. </para> <para> The return value is either a slot containing the row that was deleted, or NULL if no row was deleted (typically as a result of triggers). The - passed-in <literal>slot</> can be used to hold the tuple to be returned. + passed-in <literal>slot</literal> can be used to hold the tuple to be returned. 
</para> <para> - The data in the returned slot is used only if the <command>DELETE</> - query has a <literal>RETURNING</> clause or the foreign table has - an <literal>AFTER ROW</> trigger. Triggers require all columns, but the + The data in the returned slot is used only if the <command>DELETE</command> + query has a <literal>RETURNING</literal> clause or the foreign table has + an <literal>AFTER ROW</literal> trigger. Triggers require all columns, but the FDW could choose to optimize away returning some or all columns depending - on the contents of the <literal>RETURNING</> clause. Regardless, some + on the contents of the <literal>RETURNING</literal> clause. Regardless, some slot must be returned to indicate success, or the query's reported row count will be wrong. </para> <para> - If the <function>ExecForeignDelete</> pointer is set to - <literal>NULL</>, attempts to delete from the foreign table will fail + If the <function>ExecForeignDelete</function> pointer is set to + <literal>NULL</literal>, attempts to delete from the foreign table will fail with an error message. </para> @@ -682,8 +682,8 @@ EndForeignModify(EState *estate, </para> <para> - If the <function>EndForeignModify</> pointer is set to - <literal>NULL</>, no action is taken during executor shutdown. + If the <function>EndForeignModify</function> pointer is set to + <literal>NULL</literal>, no action is taken during executor shutdown. </para> <para> @@ -695,22 +695,22 @@ IsForeignRelUpdatable(Relation rel); Report which update operations the specified foreign table supports. The return value should be a bit mask of rule event numbers indicating which operations are supported by the foreign table, using the - <literal>CmdType</> enumeration; that is, - <literal>(1 << CMD_UPDATE) = 4</> for <command>UPDATE</>, - <literal>(1 << CMD_INSERT) = 8</> for <command>INSERT</>, and - <literal>(1 << CMD_DELETE) = 16</> for <command>DELETE</>. 
+ <literal>CmdType</literal> enumeration; that is, + <literal>(1 << CMD_UPDATE) = 4</literal> for <command>UPDATE</command>, + <literal>(1 << CMD_INSERT) = 8</literal> for <command>INSERT</command>, and + <literal>(1 << CMD_DELETE) = 16</literal> for <command>DELETE</command>. </para> <para> - If the <function>IsForeignRelUpdatable</> pointer is set to - <literal>NULL</>, foreign tables are assumed to be insertable, updatable, - or deletable if the FDW provides <function>ExecForeignInsert</>, - <function>ExecForeignUpdate</>, or <function>ExecForeignDelete</> + If the <function>IsForeignRelUpdatable</function> pointer is set to + <literal>NULL</literal>, foreign tables are assumed to be insertable, updatable, + or deletable if the FDW provides <function>ExecForeignInsert</function>, + <function>ExecForeignUpdate</function>, or <function>ExecForeignDelete</function> respectively. This function is only needed if the FDW supports some tables that are updatable and some that are not. (Even then, it's permissible to throw an error in the execution routine instead of checking in this function. However, this function is used to determine - updatability for display in the <literal>information_schema</> views.) + updatability for display in the <literal>information_schema</literal> views.) </para> <para> @@ -736,26 +736,26 @@ PlanDirectModify(PlannerInfo *root, </programlisting> Decide whether it is safe to execute a direct modification - on the remote server. If so, return <literal>true</> after performing - planning actions needed for that. Otherwise, return <literal>false</>. + on the remote server. If so, return <literal>true</literal> after performing + planning actions needed for that. Otherwise, return <literal>false</literal>. This optional function is called during query planning. 
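The bit-mask arithmetic quoted above follows directly from the ordering of the <literal>CmdType</literal> enumeration. The sketch below mirrors that ordering with a stub enum (the real one is in the server's node headers) and shows a return value for a hypothetical FDW that supports <command>INSERT</command> and <command>DELETE</command> but not <command>UPDATE</command>.

```c
#include <assert.h>

/* Mirror of the CmdType ordering implied by the text; the real enum
 * lives in src/include/nodes/nodes.h. */
typedef enum StubCmdType
{
    STUB_CMD_UNKNOWN,
    STUB_CMD_SELECT,
    STUB_CMD_UPDATE,    /* = 2, so (1 << CMD_UPDATE) = 4 */
    STUB_CMD_INSERT,    /* = 3, so (1 << CMD_INSERT) = 8 */
    STUB_CMD_DELETE     /* = 4, so (1 << CMD_DELETE) = 16 */
} StubCmdType;

/* Hypothetical IsForeignRelUpdatable result: insertable and
 * deletable, but not updatable. */
static int
stub_IsForeignRelUpdatable(void)
{
    return (1 << STUB_CMD_INSERT) | (1 << STUB_CMD_DELETE);
}
```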
- If this function succeeds, <function>BeginDirectModify</>, - <function>IterateDirectModify</> and <function>EndDirectModify</> will + If this function succeeds, <function>BeginDirectModify</function>, + <function>IterateDirectModify</function> and <function>EndDirectModify</function> will be called at the execution stage, instead. Otherwise, the table modification will be executed using the table-updating functions described above. - The parameters are the same as for <function>PlanForeignModify</>. + The parameters are the same as for <function>PlanForeignModify</function>. </para> <para> To execute the direct modification on the remote server, this function - must rewrite the target subplan with a <structname>ForeignScan</> plan + must rewrite the target subplan with a <structname>ForeignScan</structname> plan node that executes the direct modification on the remote server. The - <structfield>operation</> field of the <structname>ForeignScan</> must - be set to the <literal>CmdType</> enumeration appropriately; that is, - <literal>CMD_UPDATE</> for <command>UPDATE</>, - <literal>CMD_INSERT</> for <command>INSERT</>, and - <literal>CMD_DELETE</> for <command>DELETE</>. + <structfield>operation</structfield> field of the <structname>ForeignScan</structname> must + be set to the <literal>CmdType</literal> enumeration appropriately; that is, + <literal>CMD_UPDATE</literal> for <command>UPDATE</command>, + <literal>CMD_INSERT</literal> for <command>INSERT</command>, and + <literal>CMD_DELETE</literal> for <command>DELETE</command>. </para> <para> @@ -763,8 +763,8 @@ PlanDirectModify(PlannerInfo *root, </para> <para> - If the <function>PlanDirectModify</> pointer is set to - <literal>NULL</>, no attempts to execute a direct modification on the + If the <function>PlanDirectModify</function> pointer is set to + <literal>NULL</literal>, no attempts to execute a direct modification on the remote server are taken. 
</para> @@ -778,27 +778,27 @@ BeginDirectModify(ForeignScanState *node, Prepare to execute a direct modification on the remote server. This is called during executor startup. It should perform any initialization needed prior to the direct modification (that should be - done upon the first call to <function>IterateDirectModify</>). - The <structname>ForeignScanState</> node has already been created, but - its <structfield>fdw_state</> field is still NULL. Information about + done upon the first call to <function>IterateDirectModify</function>). + The <structname>ForeignScanState</structname> node has already been created, but + its <structfield>fdw_state</structfield> field is still NULL. Information about the table to modify is accessible through the - <structname>ForeignScanState</> node (in particular, from the underlying - <structname>ForeignScan</> plan node, which contains any FDW-private - information provided by <function>PlanDirectModify</>). - <literal>eflags</> contains flag bits describing the executor's + <structname>ForeignScanState</structname> node (in particular, from the underlying + <structname>ForeignScan</structname> plan node, which contains any FDW-private + information provided by <function>PlanDirectModify</function>). + <literal>eflags</literal> contains flag bits describing the executor's operating mode for this plan node. </para> <para> - Note that when <literal>(eflags & EXEC_FLAG_EXPLAIN_ONLY)</> is + Note that when <literal>(eflags & EXEC_FLAG_EXPLAIN_ONLY)</literal> is true, this function should not perform any externally-visible actions; it should only do the minimum required to make the node state valid - for <function>ExplainDirectModify</> and <function>EndDirectModify</>. + for <function>ExplainDirectModify</function> and <function>EndDirectModify</function>. 
</para> <para> - If the <function>BeginDirectModify</> pointer is set to - <literal>NULL</>, no attempts to execute a direct modification on the + If the <function>BeginDirectModify</function> pointer is set to + <literal>NULL</literal>, no attempts to execute a direct modification on the remote server are taken. </para> @@ -808,43 +808,43 @@ TupleTableSlot * IterateDirectModify(ForeignScanState *node); </programlisting> - When the <command>INSERT</>, <command>UPDATE</> or <command>DELETE</> - query doesn't have a <literal>RETURNING</> clause, just return NULL + When the <command>INSERT</command>, <command>UPDATE</command> or <command>DELETE</command> + query doesn't have a <literal>RETURNING</literal> clause, just return NULL after a direct modification on the remote server. When the query has the clause, fetch one result containing the data - needed for the <literal>RETURNING</> calculation, returning it in a - tuple table slot (the node's <structfield>ScanTupleSlot</> should be + needed for the <literal>RETURNING</literal> calculation, returning it in a + tuple table slot (the node's <structfield>ScanTupleSlot</structfield> should be used for this purpose). The data that was actually inserted, updated or deleted must be stored in the - <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</> - of the node's <structname>EState</>. + <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</literal> + of the node's <structname>EState</structname>. Return NULL if no more rows are available. Note that this is called in a short-lived memory context that will be reset between invocations. Create a memory context in - <function>BeginDirectModify</> if you need longer-lived storage, or use - the <structfield>es_query_cxt</> of the node's <structname>EState</>. 
+ <function>BeginDirectModify</function> if you need longer-lived storage, or use + the <structfield>es_query_cxt</structfield> of the node's <structname>EState</structname>. </para> <para> - The rows returned must match the <structfield>fdw_scan_tlist</> target + The rows returned must match the <structfield>fdw_scan_tlist</structfield> target list if one was supplied, otherwise they must match the row type of the foreign table being updated. If you choose to optimize away fetching - columns that are not needed for the <literal>RETURNING</> calculation, + columns that are not needed for the <literal>RETURNING</literal> calculation, you should insert nulls in those column positions, or else generate a - <structfield>fdw_scan_tlist</> list with those columns omitted. + <structfield>fdw_scan_tlist</structfield> list with those columns omitted. </para> <para> Whether the query has the clause or not, the query's reported row count must be incremented by the FDW itself. When the query doesn't have the clause, the FDW must also increment the row count for the - <structname>ForeignScanState</> node in the <command>EXPLAIN ANALYZE</> + <structname>ForeignScanState</structname> node in the <command>EXPLAIN ANALYZE</command> case. </para> <para> - If the <function>IterateDirectModify</> pointer is set to - <literal>NULL</>, no attempts to execute a direct modification on the + If the <function>IterateDirectModify</function> pointer is set to + <literal>NULL</literal>, no attempts to execute a direct modification on the remote server are taken. </para> @@ -860,8 +860,8 @@ EndDirectModify(ForeignScanState *node); </para> <para> - If the <function>EndDirectModify</> pointer is set to - <literal>NULL</>, no attempts to execute a direct modification on the + If the <function>EndDirectModify</function> pointer is set to + <literal>NULL</literal>, no attempts to execute a direct modification on the remote server are taken. 
</para> @@ -871,7 +871,7 @@ EndDirectModify(ForeignScanState *node); <title>FDW Routines For Row Locking</title> <para> - If an FDW wishes to support <firstterm>late row locking</> (as described + If an FDW wishes to support <firstterm>late row locking</firstterm> (as described in <xref linkend="fdw-row-locking">), it must provide the following callback functions: </para> @@ -884,23 +884,23 @@ GetForeignRowMarkType(RangeTblEntry *rte, </programlisting> Report which row-marking option to use for a foreign table. - <literal>rte</> is the <structname>RangeTblEntry</> node for the table - and <literal>strength</> describes the lock strength requested by the - relevant <literal>FOR UPDATE/SHARE</> clause, if any. The result must be - a member of the <literal>RowMarkType</> enum type. + <literal>rte</literal> is the <structname>RangeTblEntry</structname> node for the table + and <literal>strength</literal> describes the lock strength requested by the + relevant <literal>FOR UPDATE/SHARE</literal> clause, if any. The result must be + a member of the <literal>RowMarkType</literal> enum type. </para> <para> This function is called during query planning for each foreign table that - appears in an <command>UPDATE</>, <command>DELETE</>, or <command>SELECT - FOR UPDATE/SHARE</> query and is not the target of <command>UPDATE</> - or <command>DELETE</>. + appears in an <command>UPDATE</command>, <command>DELETE</command>, or <command>SELECT + FOR UPDATE/SHARE</command> query and is not the target of <command>UPDATE</command> + or <command>DELETE</command>. </para> <para> - If the <function>GetForeignRowMarkType</> pointer is set to - <literal>NULL</>, the <literal>ROW_MARK_COPY</> option is always used. - (This implies that <function>RefetchForeignRow</> will never be called, + If the <function>GetForeignRowMarkType</function> pointer is set to + <literal>NULL</literal>, the <literal>ROW_MARK_COPY</literal> option is always used. 
+ (This implies that <function>RefetchForeignRow</function> will never be called, so it need not be provided either.) </para> @@ -918,48 +918,48 @@ RefetchForeignRow(EState *estate, </programlisting> Re-fetch one tuple from the foreign table, after locking it if required. - <literal>estate</> is global execution state for the query. - <literal>erm</> is the <structname>ExecRowMark</> struct describing + <literal>estate</literal> is global execution state for the query. + <literal>erm</literal> is the <structname>ExecRowMark</structname> struct describing the target foreign table and the row lock type (if any) to acquire. - <literal>rowid</> identifies the tuple to be fetched. - <literal>updated</> is an output parameter. + <literal>rowid</literal> identifies the tuple to be fetched. + <literal>updated</literal> is an output parameter. </para> <para> This function should return a palloc'ed copy of the fetched tuple, - or <literal>NULL</> if the row lock couldn't be obtained. The row lock - type to acquire is defined by <literal>erm->markType</>, which is the - value previously returned by <function>GetForeignRowMarkType</>. - (<literal>ROW_MARK_REFERENCE</> means to just re-fetch the tuple without - acquiring any lock, and <literal>ROW_MARK_COPY</> will never be seen by + or <literal>NULL</literal> if the row lock couldn't be obtained. The row lock + type to acquire is defined by <literal>erm->markType</literal>, which is the + value previously returned by <function>GetForeignRowMarkType</function>. + (<literal>ROW_MARK_REFERENCE</literal> means to just re-fetch the tuple without + acquiring any lock, and <literal>ROW_MARK_COPY</literal> will never be seen by this routine.) </para> <para> - In addition, <literal>*updated</> should be set to <literal>true</> + In addition, <literal>*updated</literal> should be set to <literal>true</literal> if what was fetched was an updated version of the tuple rather than the same version previously obtained. 
(If the FDW cannot be sure about - this, always returning <literal>true</> is recommended.) + this, always returning <literal>true</literal> is recommended.) </para> <para> Note that by default, failure to acquire a row lock should result in - raising an error; a <literal>NULL</> return is only appropriate if - the <literal>SKIP LOCKED</> option is specified - by <literal>erm->waitPolicy</>. + raising an error; a <literal>NULL</literal> return is only appropriate if + the <literal>SKIP LOCKED</literal> option is specified + by <literal>erm->waitPolicy</literal>. </para> <para> - The <literal>rowid</> is the <structfield>ctid</> value previously read - for the row to be re-fetched. Although the <literal>rowid</> value is - passed as a <type>Datum</>, it can currently only be a <type>tid</>. The + The <literal>rowid</literal> is the <structfield>ctid</structfield> value previously read + for the row to be re-fetched. Although the <literal>rowid</literal> value is + passed as a <type>Datum</type>, it can currently only be a <type>tid</type>. The function API is chosen in hopes that it may be possible to allow other data types for row IDs in future. </para> <para> - If the <function>RefetchForeignRow</> pointer is set to - <literal>NULL</>, attempts to re-fetch rows will fail + If the <function>RefetchForeignRow</function> pointer is set to + <literal>NULL</literal>, attempts to re-fetch rows will fail with an error message. </para> @@ -976,13 +976,13 @@ RecheckForeignScan(ForeignScanState *node, Recheck that a previously-returned tuple still matches the relevant scan and join qualifiers, and possibly provide a modified version of the tuple. For foreign data wrappers which do not perform join pushdown, - it will typically be more convenient to set this to <literal>NULL</> and + it will typically be more convenient to set this to <literal>NULL</literal> and instead set <structfield>fdw_recheck_quals</structfield> appropriately. 
When outer joins are pushed down, however, it isn't sufficient to reapply the checks relevant to all the base tables to the result tuple, even if all needed attributes are present, because failure to match some qualifier might result in some attributes going to NULL, rather than in - no tuple being returned. <literal>RecheckForeignScan</> can recheck + no tuple being returned. <literal>RecheckForeignScan</literal> can recheck qualifiers and return true if they are still satisfied and false otherwise, but it can also store a replacement tuple into the supplied slot. @@ -992,13 +992,13 @@ RecheckForeignScan(ForeignScanState *node, To implement join pushdown, a foreign data wrapper will typically construct an alternative local join plan which is used only for rechecks; this will become the outer subplan of the - <literal>ForeignScan</>. When a recheck is required, this subplan + <literal>ForeignScan</literal>. When a recheck is required, this subplan can be executed and the resulting tuple can be stored in the slot. This plan need not be efficient since no base table will return more than one row; for example, it may implement all joins as nested loops. - The function <literal>GetExistingLocalJoinPath</> may be used to search + The function <literal>GetExistingLocalJoinPath</literal> may be used to search existing paths for a suitable local join path, which can be used as the - alternative local join plan. <literal>GetExistingLocalJoinPath</> + alternative local join plan. <literal>GetExistingLocalJoinPath</literal> searches for an unparameterized path in the path list of the specified join relation. 
(If it does not find such a path, it returns NULL, in which case a foreign data wrapper may build the local path by itself or @@ -1007,7 +1007,7 @@ RecheckForeignScan(ForeignScanState *node, </sect2> <sect2 id="fdw-callbacks-explain"> - <title>FDW Routines for <command>EXPLAIN</></title> + <title>FDW Routines for <command>EXPLAIN</command></title> <para> <programlisting> @@ -1016,19 +1016,19 @@ ExplainForeignScan(ForeignScanState *node, ExplainState *es); </programlisting> - Print additional <command>EXPLAIN</> output for a foreign table scan. - This function can call <function>ExplainPropertyText</> and - related functions to add fields to the <command>EXPLAIN</> output. - The flag fields in <literal>es</> can be used to determine what to - print, and the state of the <structname>ForeignScanState</> node + Print additional <command>EXPLAIN</command> output for a foreign table scan. + This function can call <function>ExplainPropertyText</function> and + related functions to add fields to the <command>EXPLAIN</command> output. + The flag fields in <literal>es</literal> can be used to determine what to + print, and the state of the <structname>ForeignScanState</structname> node can be inspected to provide run-time statistics in the <command>EXPLAIN - ANALYZE</> case. + ANALYZE</command> case. </para> <para> - If the <function>ExplainForeignScan</> pointer is set to - <literal>NULL</>, no additional information is printed during - <command>EXPLAIN</>. + If the <function>ExplainForeignScan</function> pointer is set to + <literal>NULL</literal>, no additional information is printed during + <command>EXPLAIN</command>. </para> <para> @@ -1041,20 +1041,20 @@ ExplainForeignModify(ModifyTableState *mtstate, struct ExplainState *es); </programlisting> - Print additional <command>EXPLAIN</> output for a foreign table update. - This function can call <function>ExplainPropertyText</> and - related functions to add fields to the <command>EXPLAIN</> output. 
- The flag fields in <literal>es</> can be used to determine what to - print, and the state of the <structname>ModifyTableState</> node + Print additional <command>EXPLAIN</command> output for a foreign table update. + This function can call <function>ExplainPropertyText</function> and + related functions to add fields to the <command>EXPLAIN</command> output. + The flag fields in <literal>es</literal> can be used to determine what to + print, and the state of the <structname>ModifyTableState</structname> node can be inspected to provide run-time statistics in the <command>EXPLAIN - ANALYZE</> case. The first four arguments are the same as for - <function>BeginForeignModify</>. + ANALYZE</command> case. The first four arguments are the same as for + <function>BeginForeignModify</function>. </para> <para> - If the <function>ExplainForeignModify</> pointer is set to - <literal>NULL</>, no additional information is printed during - <command>EXPLAIN</>. + If the <function>ExplainForeignModify</function> pointer is set to + <literal>NULL</literal>, no additional information is printed during + <command>EXPLAIN</command>. </para> <para> @@ -1064,26 +1064,26 @@ ExplainDirectModify(ForeignScanState *node, ExplainState *es); </programlisting> - Print additional <command>EXPLAIN</> output for a direct modification + Print additional <command>EXPLAIN</command> output for a direct modification on the remote server. - This function can call <function>ExplainPropertyText</> and - related functions to add fields to the <command>EXPLAIN</> output. - The flag fields in <literal>es</> can be used to determine what to - print, and the state of the <structname>ForeignScanState</> node + This function can call <function>ExplainPropertyText</function> and + related functions to add fields to the <command>EXPLAIN</command> output. 
+ The flag fields in <literal>es</literal> can be used to determine what to + print, and the state of the <structname>ForeignScanState</structname> node can be inspected to provide run-time statistics in the <command>EXPLAIN - ANALYZE</> case. + ANALYZE</command> case. </para> <para> - If the <function>ExplainDirectModify</> pointer is set to - <literal>NULL</>, no additional information is printed during - <command>EXPLAIN</>. + If the <function>ExplainDirectModify</function> pointer is set to + <literal>NULL</literal>, no additional information is printed during + <command>EXPLAIN</command>. </para> </sect2> <sect2 id="fdw-callbacks-analyze"> - <title>FDW Routines for <command>ANALYZE</></title> + <title>FDW Routines for <command>ANALYZE</command></title> <para> <programlisting> @@ -1095,15 +1095,15 @@ AnalyzeForeignTable(Relation relation, This function is called when <xref linkend="sql-analyze"> is executed on a foreign table. If the FDW can collect statistics for this - foreign table, it should return <literal>true</>, and provide a pointer + foreign table, it should return <literal>true</literal>, and provide a pointer to a function that will collect sample rows from the table in - <parameter>func</>, plus the estimated size of the table in pages in - <parameter>totalpages</>. Otherwise, return <literal>false</>. + <parameter>func</parameter>, plus the estimated size of the table in pages in + <parameter>totalpages</parameter>. Otherwise, return <literal>false</literal>. </para> <para> If the FDW does not support collecting statistics for any tables, the - <function>AnalyzeForeignTable</> pointer can be set to <literal>NULL</>. + <function>AnalyzeForeignTable</function> pointer can be set to <literal>NULL</literal>. 
</para> <para> @@ -1118,19 +1118,19 @@ AcquireSampleRowsFunc(Relation relation, double *totaldeadrows); </programlisting> - A random sample of up to <parameter>targrows</> rows should be collected - from the table and stored into the caller-provided <parameter>rows</> + A random sample of up to <parameter>targrows</parameter> rows should be collected + from the table and stored into the caller-provided <parameter>rows</parameter> array. The actual number of rows collected must be returned. In addition, store estimates of the total numbers of live and dead rows in - the table into the output parameters <parameter>totalrows</> and - <parameter>totaldeadrows</>. (Set <parameter>totaldeadrows</> to zero + the table into the output parameters <parameter>totalrows</parameter> and + <parameter>totaldeadrows</parameter>. (Set <parameter>totaldeadrows</parameter> to zero if the FDW does not have any concept of dead rows.) </para> </sect2> <sect2 id="fdw-callbacks-import"> - <title>FDW Routines For <command>IMPORT FOREIGN SCHEMA</></title> + <title>FDW Routines For <command>IMPORT FOREIGN SCHEMA</command></title> <para> <programlisting> @@ -1147,44 +1147,44 @@ ImportForeignSchema(ImportForeignSchemaStmt *stmt, Oid serverOid); </para> <para> - Within the <structname>ImportForeignSchemaStmt</> struct, - <structfield>remote_schema</> is the name of the remote schema from + Within the <structname>ImportForeignSchemaStmt</structname> struct, + <structfield>remote_schema</structfield> is the name of the remote schema from which tables are to be imported. 
- <structfield>list_type</> identifies how to filter table names: - <literal>FDW_IMPORT_SCHEMA_ALL</> means that all tables in the remote - schema should be imported (in this case <structfield>table_list</> is - empty), <literal>FDW_IMPORT_SCHEMA_LIMIT_TO</> means to include only - tables listed in <structfield>table_list</>, - and <literal>FDW_IMPORT_SCHEMA_EXCEPT</> means to exclude the tables - listed in <structfield>table_list</>. - <structfield>options</> is a list of options used for the import process. + <structfield>list_type</structfield> identifies how to filter table names: + <literal>FDW_IMPORT_SCHEMA_ALL</literal> means that all tables in the remote + schema should be imported (in this case <structfield>table_list</structfield> is + empty), <literal>FDW_IMPORT_SCHEMA_LIMIT_TO</literal> means to include only + tables listed in <structfield>table_list</structfield>, + and <literal>FDW_IMPORT_SCHEMA_EXCEPT</literal> means to exclude the tables + listed in <structfield>table_list</structfield>. + <structfield>options</structfield> is a list of options used for the import process. The meanings of the options are up to the FDW. For example, an FDW could use an option to define whether the - <literal>NOT NULL</> attributes of columns should be imported. + <literal>NOT NULL</literal> attributes of columns should be imported. These options need not have anything to do with those supported by the FDW as database object options. </para> <para> - The FDW may ignore the <structfield>local_schema</> field of - the <structname>ImportForeignSchemaStmt</>, because the core server + The FDW may ignore the <structfield>local_schema</structfield> field of + the <structname>ImportForeignSchemaStmt</structname>, because the core server will automatically insert that name into the parsed <command>CREATE - FOREIGN TABLE</> commands. + FOREIGN TABLE</command> commands. 
</para> <para> The FDW does not have to concern itself with implementing the filtering - specified by <structfield>list_type</> and <structfield>table_list</>, + specified by <structfield>list_type</structfield> and <structfield>table_list</structfield>, either, as the core server will automatically skip any returned commands for tables excluded according to those options. However, it's often useful to avoid the work of creating commands for excluded tables in the - first place. The function <function>IsImportableForeignTable()</> may be + first place. The function <function>IsImportableForeignTable()</function> may be useful to test whether a given foreign-table name will pass the filter. </para> <para> If the FDW does not support importing table definitions, the - <function>ImportForeignSchema</> pointer can be set to <literal>NULL</>. + <function>ImportForeignSchema</function> pointer can be set to <literal>NULL</literal>. </para> </sect2> @@ -1192,8 +1192,8 @@ ImportForeignSchema(ImportForeignSchemaStmt *stmt, Oid serverOid); <sect2 id="fdw-callbacks-parallel"> <title>FDW Routines for Parallel Execution</title> <para> - A <structname>ForeignScan</> node can, optionally, support parallel - execution. A parallel <structname>ForeignScan</> will be executed + A <structname>ForeignScan</structname> node can, optionally, support parallel + execution. A parallel <structname>ForeignScan</structname> will be executed in multiple processes and must return each row exactly once across all cooperating processes. To do this, processes can coordinate through fixed-size chunks of dynamic shared memory. This shared memory is not @@ -1245,8 +1245,8 @@ InitializeDSMForeignScan(ForeignScanState *node, ParallelContext *pcxt, void *coordinate); </programlisting> Initialize the dynamic shared memory that will be required for parallel - operation. <literal>coordinate</> points to a shared memory area of - size equal to the return value of <function>EstimateDSMForeignScan</>. 
+ operation. <literal>coordinate</literal> points to a shared memory area of + size equal to the return value of <function>EstimateDSMForeignScan</function>. This function is optional, and can be omitted if not needed. </para> @@ -1260,9 +1260,9 @@ ReInitializeDSMForeignScan(ForeignScanState *node, ParallelContext *pcxt, when the foreign-scan plan node is about to be re-scanned. This function is optional, and can be omitted if not needed. Recommended practice is that this function reset only shared state, - while the <function>ReScanForeignScan</> function resets only local + while the <function>ReScanForeignScan</function> function resets only local state. Currently, this function will be called - before <function>ReScanForeignScan</>, but it's best not to rely on + before <function>ReScanForeignScan</function>, but it's best not to rely on that ordering. </para> @@ -1273,7 +1273,7 @@ InitializeWorkerForeignScan(ForeignScanState *node, shm_toc *toc, void *coordinate); </programlisting> Initialize a parallel worker's local state based on the shared state - set up by the leader during <function>InitializeDSMForeignScan</>. + set up by the leader during <function>InitializeDSMForeignScan</function>. This function is optional, and can be omitted if not needed. </para> @@ -1284,7 +1284,7 @@ ShutdownForeignScan(ForeignScanState *node); </programlisting> Release resources when it is anticipated the node will not be executed to completion. This is not called in all cases; sometimes, - <literal>EndForeignScan</> may be called without this function having + <literal>EndForeignScan</literal> may be called without this function having been called first. 
Since the DSM segment used by parallel query is destroyed just after this callback is invoked, foreign data wrappers that wish to take some action before the DSM segment goes away should implement @@ -1302,13 +1302,13 @@ ReparameterizeForeignPathByChild(PlannerInfo *root, List *fdw_private, RelOptInfo *child_rel); </programlisting> This function is called while converting a path parameterized by the - top-most parent of the given child relation <literal>child_rel</> to be + top-most parent of the given child relation <literal>child_rel</literal> to be parameterized by the child relation. The function is used to reparameterize any paths or translate any expression nodes saved in the given - <literal>fdw_private</> member of a <structname>ForeignPath</>. The - callback may use <literal>reparameterize_path_by_child</>, - <literal>adjust_appendrel_attrs</> or - <literal>adjust_appendrel_attrs_multilevel</> as required. + <literal>fdw_private</literal> member of a <structname>ForeignPath</structname>. The + callback may use <literal>reparameterize_path_by_child</literal>, + <literal>adjust_appendrel_attrs</literal> or + <literal>adjust_appendrel_attrs_multilevel</literal> as required. </para> </sect2> @@ -1360,7 +1360,7 @@ GetUserMapping(Oid userid, Oid serverid); This function returns a <structname>UserMapping</structname> object for the user mapping of the given role on the given server. (If there is no mapping for the specific user, it will return the mapping for - <literal>PUBLIC</>, or throw error if there is none.) A + <literal>PUBLIC</literal>, or throw error if there is none.) A <structname>UserMapping</structname> object contains properties of the user mapping (see <filename>foreign/foreign.h</filename> for details). 
</para> @@ -1423,25 +1423,25 @@ GetForeignServerByName(const char *name, bool missing_ok); <title>Foreign Data Wrapper Query Planning</title> <para> - The FDW callback functions <function>GetForeignRelSize</>, - <function>GetForeignPaths</>, <function>GetForeignPlan</>, - <function>PlanForeignModify</>, <function>GetForeignJoinPaths</>, - <function>GetForeignUpperPaths</>, and <function>PlanDirectModify</> - must fit into the workings of the <productname>PostgreSQL</> planner. + The FDW callback functions <function>GetForeignRelSize</function>, + <function>GetForeignPaths</function>, <function>GetForeignPlan</function>, + <function>PlanForeignModify</function>, <function>GetForeignJoinPaths</function>, + <function>GetForeignUpperPaths</function>, and <function>PlanDirectModify</function> + must fit into the workings of the <productname>PostgreSQL</productname> planner. Here are some notes about what they must do. </para> <para> - The information in <literal>root</> and <literal>baserel</> can be used + The information in <literal>root</literal> and <literal>baserel</literal> can be used to reduce the amount of information that has to be fetched from the foreign table (and therefore reduce the cost). - <literal>baserel->baserestrictinfo</> is particularly interesting, as - it contains restriction quals (<literal>WHERE</> clauses) that should be + <literal>baserel->baserestrictinfo</literal> is particularly interesting, as + it contains restriction quals (<literal>WHERE</literal> clauses) that should be used to filter the rows to be fetched. (The FDW itself is not required to enforce these quals, as the core executor can check them instead.) 
- <literal>baserel->reltarget->exprs</> can be used to determine which + <literal>baserel->reltarget->exprs</literal> can be used to determine which columns need to be fetched; but note that it only lists columns that - have to be emitted by the <structname>ForeignScan</> plan node, not + have to be emitted by the <structname>ForeignScan</structname> plan node, not columns that are used in qual evaluation but not output by the query. </para> @@ -1452,49 +1452,49 @@ GetForeignServerByName(const char *name, bool missing_ok); </para> <para> - <literal>baserel->fdw_private</> is a <type>void</> pointer that is + <literal>baserel->fdw_private</literal> is a <type>void</type> pointer that is available for FDW planning functions to store information relevant to the particular foreign table. The core planner does not touch it except - to initialize it to NULL when the <literal>RelOptInfo</> node is created. + to initialize it to NULL when the <literal>RelOptInfo</literal> node is created. It is useful for passing information forward from - <function>GetForeignRelSize</> to <function>GetForeignPaths</> and/or - <function>GetForeignPaths</> to <function>GetForeignPlan</>, thereby + <function>GetForeignRelSize</function> to <function>GetForeignPaths</function> and/or + <function>GetForeignPaths</function> to <function>GetForeignPlan</function>, thereby avoiding recalculation. </para> <para> - <function>GetForeignPaths</> can identify the meaning of different + <function>GetForeignPaths</function> can identify the meaning of different access paths by storing private information in the - <structfield>fdw_private</> field of <structname>ForeignPath</> nodes. - <structfield>fdw_private</> is declared as a <type>List</> pointer, but + <structfield>fdw_private</structfield> field of <structname>ForeignPath</structname> nodes. 
+ <structfield>fdw_private</structfield> is declared as a <type>List</type> pointer, but could actually contain anything since the core planner does not touch it. However, best practice is to use a representation that's dumpable - by <function>nodeToString</>, for use with debugging support available + by <function>nodeToString</function>, for use with debugging support available in the backend. </para> <para> - <function>GetForeignPlan</> can examine the <structfield>fdw_private</> - field of the selected <structname>ForeignPath</> node, and can generate - <structfield>fdw_exprs</> and <structfield>fdw_private</> lists to be - placed in the <structname>ForeignScan</> plan node, where they will be + <function>GetForeignPlan</function> can examine the <structfield>fdw_private</structfield> + field of the selected <structname>ForeignPath</structname> node, and can generate + <structfield>fdw_exprs</structfield> and <structfield>fdw_private</structfield> lists to be + placed in the <structname>ForeignScan</structname> plan node, where they will be available at execution time. Both of these lists must be - represented in a form that <function>copyObject</> knows how to copy. - The <structfield>fdw_private</> list has no other restrictions and is + represented in a form that <function>copyObject</function> knows how to copy. + The <structfield>fdw_private</structfield> list has no other restrictions and is not interpreted by the core backend in any way. The - <structfield>fdw_exprs</> list, if not NIL, is expected to contain + <structfield>fdw_exprs</structfield> list, if not NIL, is expected to contain expression trees that are intended to be executed at run time. These trees will undergo post-processing by the planner to make them fully executable. </para> <para> - In <function>GetForeignPlan</>, generally the passed-in target list can - be copied into the plan node as-is. 
The passed <literal>scan_clauses</> list - contains the same clauses as <literal>baserel->baserestrictinfo</>, + In <function>GetForeignPlan</function>, generally the passed-in target list can + be copied into the plan node as-is. The passed <literal>scan_clauses</literal> list + contains the same clauses as <literal>baserel->baserestrictinfo</literal>, but may be re-ordered for better execution efficiency. In simple cases - the FDW can just strip <structname>RestrictInfo</> nodes from the - <literal>scan_clauses</> list (using <function>extract_actual_clauses</>) and put + the FDW can just strip <structname>RestrictInfo</structname> nodes from the + <literal>scan_clauses</literal> list (using <function>extract_actual_clauses</function>) and put all the clauses into the plan node's qual list, which means that all the clauses will be checked by the executor at run time. More complex FDWs may be able to check some of the clauses internally, in which case those @@ -1504,54 +1504,54 @@ GetForeignServerByName(const char *name, bool missing_ok); <para> As an example, the FDW might identify some restriction clauses of the - form <replaceable>foreign_variable</> <literal>=</> - <replaceable>sub_expression</>, which it determines can be executed on + form <replaceable>foreign_variable</replaceable> <literal>=</literal> + <replaceable>sub_expression</replaceable>, which it determines can be executed on the remote server given the locally-evaluated value of the - <replaceable>sub_expression</>. The actual identification of such a - clause should happen during <function>GetForeignPaths</>, since it would + <replaceable>sub_expression</replaceable>. The actual identification of such a + clause should happen during <function>GetForeignPaths</function>, since it would affect the cost estimate for the path. The path's - <structfield>fdw_private</> field would probably include a pointer to - the identified clause's <structname>RestrictInfo</> node. 
Then - <function>GetForeignPlan</> would remove that clause from <literal>scan_clauses</>, - but add the <replaceable>sub_expression</> to <structfield>fdw_exprs</> + <structfield>fdw_private</structfield> field would probably include a pointer to + the identified clause's <structname>RestrictInfo</structname> node. Then + <function>GetForeignPlan</function> would remove that clause from <literal>scan_clauses</literal>, + but add the <replaceable>sub_expression</replaceable> to <structfield>fdw_exprs</structfield> to ensure that it gets massaged into executable form. It would probably also put control information into the plan node's - <structfield>fdw_private</> field to tell the execution functions what + <structfield>fdw_private</structfield> field to tell the execution functions what to do at run time. The query transmitted to the remote server would - involve something like <literal>WHERE <replaceable>foreign_variable</> = + involve something like <literal>WHERE <replaceable>foreign_variable</replaceable> = $1</literal>, with the parameter value obtained at run time from - evaluation of the <structfield>fdw_exprs</> expression tree. + evaluation of the <structfield>fdw_exprs</structfield> expression tree. </para> <para> Any clauses removed from the plan node's qual list must instead be added - to <literal>fdw_recheck_quals</> or rechecked by - <literal>RecheckForeignScan</> in order to ensure correct behavior - at the <literal>READ COMMITTED</> isolation level. When a concurrent + to <literal>fdw_recheck_quals</literal> or rechecked by + <literal>RecheckForeignScan</literal> in order to ensure correct behavior + at the <literal>READ COMMITTED</literal> isolation level. When a concurrent update occurs for some other table involved in the query, the executor may need to verify that all of the original quals are still satisfied for the tuple, possibly against a different set of parameter values. 
Using - <literal>fdw_recheck_quals</> is typically easier than implementing checks - inside <literal>RecheckForeignScan</>, but this method will be + <literal>fdw_recheck_quals</literal> is typically easier than implementing checks + inside <literal>RecheckForeignScan</literal>, but this method will be insufficient when outer joins have been pushed down, since the join tuples in that case might have some fields go to NULL without rejecting the tuple entirely. </para> <para> - Another <structname>ForeignScan</> field that can be filled by FDWs - is <structfield>fdw_scan_tlist</>, which describes the tuples returned by + Another <structname>ForeignScan</structname> field that can be filled by FDWs + is <structfield>fdw_scan_tlist</structfield>, which describes the tuples returned by the FDW for this plan node. For simple foreign table scans this can be - set to <literal>NIL</>, implying that the returned tuples have the + set to <literal>NIL</literal>, implying that the returned tuples have the row type declared for the foreign table. A non-<symbol>NIL</symbol> value must be a - target list (list of <structname>TargetEntry</>s) containing Vars and/or + target list (list of <structname>TargetEntry</structname>s) containing Vars and/or expressions representing the returned columns. This might be used, for example, to show that the FDW has omitted some columns that it noticed won't be needed for the query. Also, if the FDW can compute expressions used by the query more cheaply than can be done locally, it could add - those expressions to <structfield>fdw_scan_tlist</>. Note that join - plans (created from paths made by <function>GetForeignJoinPaths</>) must - always supply <structfield>fdw_scan_tlist</> to describe the set of + those expressions to <structfield>fdw_scan_tlist</structfield>. 
Note that join + plans (created from paths made by <function>GetForeignJoinPaths</function>) must + always supply <structfield>fdw_scan_tlist</structfield> to describe the set of columns they will return. </para> @@ -1559,87 +1559,87 @@ GetForeignServerByName(const char *name, bool missing_ok); The FDW should always construct at least one path that depends only on the table's restriction clauses. In join queries, it might also choose to construct path(s) that depend on join clauses, for example - <replaceable>foreign_variable</> <literal>=</> - <replaceable>local_variable</>. Such clauses will not be found in - <literal>baserel->baserestrictinfo</> but must be sought in the + <replaceable>foreign_variable</replaceable> <literal>=</literal> + <replaceable>local_variable</replaceable>. Such clauses will not be found in + <literal>baserel->baserestrictinfo</literal> but must be sought in the relation's join lists. A path using such a clause is called a - <quote>parameterized path</>. It must identify the other relations + <quote>parameterized path</quote>. It must identify the other relations used in the selected join clause(s) with a suitable value of - <literal>param_info</>; use <function>get_baserel_parampathinfo</> - to compute that value. In <function>GetForeignPlan</>, the - <replaceable>local_variable</> portion of the join clause would be added - to <structfield>fdw_exprs</>, and then at run time the case works the + <literal>param_info</literal>; use <function>get_baserel_parampathinfo</function> + to compute that value. In <function>GetForeignPlan</function>, the + <replaceable>local_variable</replaceable> portion of the join clause would be added + to <structfield>fdw_exprs</structfield>, and then at run time the case works the same as for an ordinary restriction clause. 
</para> <para> - If an FDW supports remote joins, <function>GetForeignJoinPaths</> should - produce <structname>ForeignPath</>s for potential remote joins in much - the same way as <function>GetForeignPaths</> works for base tables. + If an FDW supports remote joins, <function>GetForeignJoinPaths</function> should + produce <structname>ForeignPath</structname>s for potential remote joins in much + the same way as <function>GetForeignPaths</function> works for base tables. Information about the intended join can be passed forward - to <function>GetForeignPlan</> in the same ways described above. - However, <structfield>baserestrictinfo</> is not relevant for join + to <function>GetForeignPlan</function> in the same ways described above. + However, <structfield>baserestrictinfo</structfield> is not relevant for join relations; instead, the relevant join clauses for a particular join are - passed to <function>GetForeignJoinPaths</> as a separate parameter - (<literal>extra->restrictlist</>). + passed to <function>GetForeignJoinPaths</function> as a separate parameter + (<literal>extra->restrictlist</literal>). </para> <para> An FDW might additionally support direct execution of some plan actions that are above the level of scans and joins, such as grouping or aggregation. To offer such options, the FDW should generate paths and - insert them into the appropriate <firstterm>upper relation</>. For + insert them into the appropriate <firstterm>upper relation</firstterm>. For example, a path representing remote aggregation should be inserted into - the <literal>UPPERREL_GROUP_AGG</> relation, using <function>add_path</>. + the <literal>UPPERREL_GROUP_AGG</literal> relation, using <function>add_path</function>. This path will be compared on a cost basis with local aggregation performed by reading a simple scan path for the foreign relation (note that such a path must also be supplied, else there will be an error at plan time). 
If the remote-aggregation path wins, which it usually would, it will be converted into a plan in the usual way, by - calling <function>GetForeignPlan</>. The recommended place to generate - such paths is in the <function>GetForeignUpperPaths</> + calling <function>GetForeignPlan</function>. The recommended place to generate + such paths is in the <function>GetForeignUpperPaths</function> callback function, which is called for each upper relation (i.e., each post-scan/join processing step), if all the base relations of the query come from the same FDW. </para> <para> - <function>PlanForeignModify</> and the other callbacks described in + <function>PlanForeignModify</function> and the other callbacks described in <xref linkend="fdw-callbacks-update"> are designed around the assumption that the foreign relation will be scanned in the usual way and then - individual row updates will be driven by a local <literal>ModifyTable</> + individual row updates will be driven by a local <literal>ModifyTable</literal> plan node. This approach is necessary for the general case where an update requires reading local tables as well as foreign tables. However, if the operation could be executed entirely by the foreign server, the FDW could generate a path representing that and insert it - into the <literal>UPPERREL_FINAL</> upper relation, where it would - compete against the <literal>ModifyTable</> approach. This approach - could also be used to implement remote <literal>SELECT FOR UPDATE</>, + into the <literal>UPPERREL_FINAL</literal> upper relation, where it would + compete against the <literal>ModifyTable</literal> approach. This approach + could also be used to implement remote <literal>SELECT FOR UPDATE</literal>, rather than using the row locking callbacks described in <xref linkend="fdw-callbacks-row-locking">. Keep in mind that a path - inserted into <literal>UPPERREL_FINAL</> is responsible for - implementing <emphasis>all</> behavior of the query. 
+ inserted into <literal>UPPERREL_FINAL</literal> is responsible for + implementing <emphasis>all</emphasis> behavior of the query. </para> <para> - When planning an <command>UPDATE</> or <command>DELETE</>, - <function>PlanForeignModify</> and <function>PlanDirectModify</> - can look up the <structname>RelOptInfo</> + When planning an <command>UPDATE</command> or <command>DELETE</command>, + <function>PlanForeignModify</function> and <function>PlanDirectModify</function> + can look up the <structname>RelOptInfo</structname> struct for the foreign table and make use of the - <literal>baserel->fdw_private</> data previously created by the - scan-planning functions. However, in <command>INSERT</> the target - table is not scanned so there is no <structname>RelOptInfo</> for it. - The <structname>List</> returned by <function>PlanForeignModify</> has - the same restrictions as the <structfield>fdw_private</> list of a - <structname>ForeignScan</> plan node, that is it must contain only - structures that <function>copyObject</> knows how to copy. + <literal>baserel->fdw_private</literal> data previously created by the + scan-planning functions. However, in <command>INSERT</command> the target + table is not scanned so there is no <structname>RelOptInfo</structname> for it. + The <structname>List</structname> returned by <function>PlanForeignModify</function> has + the same restrictions as the <structfield>fdw_private</structfield> list of a + <structname>ForeignScan</structname> plan node, that is it must contain only + structures that <function>copyObject</function> knows how to copy. </para> <para> - <command>INSERT</> with an <literal>ON CONFLICT</> clause does not + <command>INSERT</command> with an <literal>ON CONFLICT</literal> clause does not support specifying the conflict target, as unique constraints or exclusion constraints on remote tables are not locally known. 
This - in turn implies that <literal>ON CONFLICT DO UPDATE</> is not supported, + in turn implies that <literal>ON CONFLICT DO UPDATE</literal> is not supported, since the specification is mandatory there. </para> @@ -1653,13 +1653,13 @@ GetForeignServerByName(const char *name, bool missing_ok); individual rows to prevent concurrent updates of those rows, it is usually worthwhile for the FDW to perform row-level locking with as close an approximation as practical to the semantics used in - ordinary <productname>PostgreSQL</> tables. There are multiple + ordinary <productname>PostgreSQL</productname> tables. There are multiple considerations involved in this. </para> <para> One key decision to be made is whether to perform <firstterm>early - locking</> or <firstterm>late locking</>. In early locking, a row is + locking</firstterm> or <firstterm>late locking</firstterm>. In early locking, a row is locked when it is first retrieved from the underlying store, while in late locking, the row is locked only when it is known that it needs to be locked. (The difference arises because some rows may be discarded by @@ -1669,25 +1669,25 @@ GetForeignServerByName(const char *name, bool missing_ok); concurrency or even unexpected deadlocks. Also, late locking is only possible if the row to be locked can be uniquely re-identified later. Preferably the row identifier should identify a specific version of the - row, as <productname>PostgreSQL</> TIDs do. + row, as <productname>PostgreSQL</productname> TIDs do. </para> <para> - By default, <productname>PostgreSQL</> ignores locking considerations + By default, <productname>PostgreSQL</productname> ignores locking considerations when interfacing to FDWs, but an FDW can perform early locking without any explicit support from the core code. 
The API functions described in <xref linkend="fdw-callbacks-row-locking">, which were added - in <productname>PostgreSQL</> 9.5, allow an FDW to use late locking if + in <productname>PostgreSQL</productname> 9.5, allow an FDW to use late locking if it wishes. </para> <para> - An additional consideration is that in <literal>READ COMMITTED</> - isolation mode, <productname>PostgreSQL</> may need to re-check + An additional consideration is that in <literal>READ COMMITTED</literal> + isolation mode, <productname>PostgreSQL</productname> may need to re-check restriction and join conditions against an updated version of some target tuple. Rechecking join conditions requires re-obtaining copies of the non-target rows that were previously joined to the target tuple. - When working with standard <productname>PostgreSQL</> tables, this is + When working with standard <productname>PostgreSQL</productname> tables, this is done by including the TIDs of the non-target tables in the column list projected through the join, and then re-fetching non-target rows when required. This approach keeps the join data set compact, but it @@ -1702,56 +1702,56 @@ GetForeignServerByName(const char *name, bool missing_ok); </para> <para> - For an <command>UPDATE</> or <command>DELETE</> on a foreign table, it - is recommended that the <literal>ForeignScan</> operation on the target + For an <command>UPDATE</command> or <command>DELETE</command> on a foreign table, it + is recommended that the <literal>ForeignScan</literal> operation on the target table perform early locking on the rows that it fetches, perhaps via the - equivalent of <command>SELECT FOR UPDATE</>. An FDW can detect whether - a table is an <command>UPDATE</>/<command>DELETE</> target at plan time - by comparing its relid to <literal>root->parse->resultRelation</>, - or at execution time by using <function>ExecRelationIsTargetRelation()</>. + equivalent of <command>SELECT FOR UPDATE</command>. 
An FDW can detect whether + a table is an <command>UPDATE</command>/<command>DELETE</command> target at plan time + by comparing its relid to <literal>root->parse->resultRelation</literal>, + or at execution time by using <function>ExecRelationIsTargetRelation()</function>. An alternative possibility is to perform late locking within the - <function>ExecForeignUpdate</> or <function>ExecForeignDelete</> + <function>ExecForeignUpdate</function> or <function>ExecForeignDelete</function> callback, but no special support is provided for this. </para> <para> For foreign tables that are specified to be locked by a <command>SELECT - FOR UPDATE/SHARE</> command, the <literal>ForeignScan</> operation can + FOR UPDATE/SHARE</command> command, the <literal>ForeignScan</literal> operation can again perform early locking by fetching tuples with the equivalent - of <command>SELECT FOR UPDATE/SHARE</>. To perform late locking + of <command>SELECT FOR UPDATE/SHARE</command>. To perform late locking instead, provide the callback functions defined in <xref linkend="fdw-callbacks-row-locking">. - In <function>GetForeignRowMarkType</>, select rowmark option - <literal>ROW_MARK_EXCLUSIVE</>, <literal>ROW_MARK_NOKEYEXCLUSIVE</>, - <literal>ROW_MARK_SHARE</>, or <literal>ROW_MARK_KEYSHARE</> depending + In <function>GetForeignRowMarkType</function>, select rowmark option + <literal>ROW_MARK_EXCLUSIVE</literal>, <literal>ROW_MARK_NOKEYEXCLUSIVE</literal>, + <literal>ROW_MARK_SHARE</literal>, or <literal>ROW_MARK_KEYSHARE</literal> depending on the requested lock strength. (The core code will act the same regardless of which of these four options you choose.) 
Elsewhere, you can detect whether a foreign table was specified to be - locked by this type of command by using <function>get_plan_rowmark</> at - plan time, or <function>ExecFindRowMark</> at execution time; you must + locked by this type of command by using <function>get_plan_rowmark</function> at + plan time, or <function>ExecFindRowMark</function> at execution time; you must check not only whether a non-null rowmark struct is returned, but that - its <structfield>strength</> field is not <literal>LCS_NONE</>. + its <structfield>strength</structfield> field is not <literal>LCS_NONE</literal>. </para> <para> - Lastly, for foreign tables that are used in an <command>UPDATE</>, - <command>DELETE</> or <command>SELECT FOR UPDATE/SHARE</> command but + Lastly, for foreign tables that are used in an <command>UPDATE</command>, + <command>DELETE</command> or <command>SELECT FOR UPDATE/SHARE</command> command but are not specified to be row-locked, you can override the default choice - to copy entire rows by having <function>GetForeignRowMarkType</> select - option <literal>ROW_MARK_REFERENCE</> when it sees lock strength - <literal>LCS_NONE</>. This will cause <function>RefetchForeignRow</> to - be called with that value for <structfield>markType</>; it should then + to copy entire rows by having <function>GetForeignRowMarkType</function> select + option <literal>ROW_MARK_REFERENCE</literal> when it sees lock strength + <literal>LCS_NONE</literal>. This will cause <function>RefetchForeignRow</function> to + be called with that value for <structfield>markType</structfield>; it should then re-fetch the row without acquiring any new lock. (If you have - a <function>GetForeignRowMarkType</> function but don't wish to re-fetch - unlocked rows, select option <literal>ROW_MARK_COPY</> - for <literal>LCS_NONE</>.) 
+ a <function>GetForeignRowMarkType</function> function but don't wish to re-fetch + unlocked rows, select option <literal>ROW_MARK_COPY</literal> + for <literal>LCS_NONE</literal>.) </para> <para> - See <filename>src/include/nodes/lockoptions.h</>, the comments - for <type>RowMarkType</> and <type>PlanRowMark</> - in <filename>src/include/nodes/plannodes.h</>, and the comments for - <type>ExecRowMark</> in <filename>src/include/nodes/execnodes.h</> for + See <filename>src/include/nodes/lockoptions.h</filename>, the comments + for <type>RowMarkType</type> and <type>PlanRowMark</type> + in <filename>src/include/nodes/plannodes.h</filename>, and the comments for + <type>ExecRowMark</type> in <filename>src/include/nodes/execnodes.h</filename> for additional information. </para> diff --git a/doc/src/sgml/file-fdw.sgml b/doc/src/sgml/file-fdw.sgml index 74941a6f1ec..88aefb8ef07 100644 --- a/doc/src/sgml/file-fdw.sgml +++ b/doc/src/sgml/file-fdw.sgml @@ -8,7 +8,7 @@ </indexterm> <para> - The <filename>file_fdw</> module provides the foreign-data wrapper + The <filename>file_fdw</filename> module provides the foreign-data wrapper <function>file_fdw</function>, which can be used to access data files in the server's file system, or to execute programs on the server and read their output. The data file or program output must be in a format @@ -41,7 +41,7 @@ <listitem> <para> Specifies the command to be executed. The standard output of this - command will be read as though <command>COPY FROM PROGRAM</> were used. + command will be read as though <command>COPY FROM PROGRAM</command> were used. Either <literal>program</literal> or <literal>filename</literal> must be specified, but not both. </para> @@ -54,7 +54,7 @@ <listitem> <para> Specifies the data format, - the same as <command>COPY</>'s <literal>FORMAT</literal> option. + the same as <command>COPY</command>'s <literal>FORMAT</literal> option. 
</para> </listitem> </varlistentry> @@ -65,7 +65,7 @@ <listitem> <para> Specifies whether the data has a header line, - the same as <command>COPY</>'s <literal>HEADER</literal> option. + the same as <command>COPY</command>'s <literal>HEADER</literal> option. </para> </listitem> </varlistentry> @@ -76,7 +76,7 @@ <listitem> <para> Specifies the data delimiter character, - the same as <command>COPY</>'s <literal>DELIMITER</literal> option. + the same as <command>COPY</command>'s <literal>DELIMITER</literal> option. </para> </listitem> </varlistentry> @@ -87,7 +87,7 @@ <listitem> <para> Specifies the data quote character, - the same as <command>COPY</>'s <literal>QUOTE</literal> option. + the same as <command>COPY</command>'s <literal>QUOTE</literal> option. </para> </listitem> </varlistentry> @@ -98,7 +98,7 @@ <listitem> <para> Specifies the data escape character, - the same as <command>COPY</>'s <literal>ESCAPE</literal> option. + the same as <command>COPY</command>'s <literal>ESCAPE</literal> option. </para> </listitem> </varlistentry> @@ -109,7 +109,7 @@ <listitem> <para> Specifies the data null string, - the same as <command>COPY</>'s <literal>NULL</literal> option. + the same as <command>COPY</command>'s <literal>NULL</literal> option. </para> </listitem> </varlistentry> @@ -120,7 +120,7 @@ <listitem> <para> Specifies the data encoding, - the same as <command>COPY</>'s <literal>ENCODING</literal> option. + the same as <command>COPY</command>'s <literal>ENCODING</literal> option. </para> </listitem> </varlistentry> @@ -128,10 +128,10 @@ </variablelist> <para> - Note that while <command>COPY</> allows options such as <literal>HEADER</> + Note that while <command>COPY</command> allows options such as <literal>HEADER</literal> to be specified without a corresponding value, the foreign table option syntax requires a value to be present in all cases. 
To activate - <command>COPY</> options typically written without a value, you can pass + <command>COPY</command> options typically written without a value, you can pass the value TRUE, since all such options are Booleans. </para> @@ -150,7 +150,7 @@ This is a Boolean option. If true, it specifies that values of the column should not be matched against the null string (that is, the table-level <literal>null</literal> option). This has the same effect - as listing the column in <command>COPY</>'s + as listing the column in <command>COPY</command>'s <literal>FORCE_NOT_NULL</literal> option. </para> </listitem> @@ -162,11 +162,11 @@ <listitem> <para> This is a Boolean option. If true, it specifies that values of the - column which match the null string are returned as <literal>NULL</> + column which match the null string are returned as <literal>NULL</literal> even if the value is quoted. Without this option, only unquoted - values matching the null string are returned as <literal>NULL</>. + values matching the null string are returned as <literal>NULL</literal>. This has the same effect as listing the column in - <command>COPY</>'s <literal>FORCE_NULL</literal> option. + <command>COPY</command>'s <literal>FORCE_NULL</literal> option. </para> </listitem> </varlistentry> @@ -174,14 +174,14 @@ </variablelist> <para> - <command>COPY</>'s <literal>OIDS</literal> and + <command>COPY</command>'s <literal>OIDS</literal> and <literal>FORCE_QUOTE</literal> options are currently not supported by - <literal>file_fdw</>. + <literal>file_fdw</literal>. </para> <para> These options can only be specified for a foreign table or its columns, not - in the options of the <literal>file_fdw</> foreign-data wrapper, nor in the + in the options of the <literal>file_fdw</literal> foreign-data wrapper, nor in the options of a server or user mapping using the wrapper. 
</para> @@ -193,7 +193,7 @@ </para> <para> - When specifying the <literal>program</> option, keep in mind that the option + When specifying the <literal>program</literal> option, keep in mind that the option string is executed by the shell. If you need to pass any arguments to the command that come from an untrusted source, you must be careful to strip or escape any characters that might have special meaning to the shell. @@ -202,9 +202,9 @@ </para> <para> - For a foreign table using <literal>file_fdw</>, <command>EXPLAIN</> shows + For a foreign table using <literal>file_fdw</literal>, <command>EXPLAIN</command> shows the name of the file to be read or program to be run. - For a file, unless <literal>COSTS OFF</> is + For a file, unless <literal>COSTS OFF</literal> is specified, the file size (in bytes) is shown as well. </para> @@ -212,10 +212,10 @@ <title id="csvlog-fdw">Create a Foreign Table for PostgreSQL CSV Logs</title> <para> - One of the obvious uses for <literal>file_fdw</> is to make + One of the obvious uses for <literal>file_fdw</literal> is to make the PostgreSQL activity log available as a table for querying. To do this, first you must be logging to a CSV file, which here we - will call <literal>pglog.csv</>. First, install <literal>file_fdw</> + will call <literal>pglog.csv</literal>. First, install <literal>file_fdw</literal> as an extension: </para> @@ -233,7 +233,7 @@ CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw; <para> Now you are ready to create the foreign data table. 
Using the - <command>CREATE FOREIGN TABLE</> command, you will need to define + <command>CREATE FOREIGN TABLE</command> command, you will need to define the columns for the table, the CSV file name, and its format: <programlisting> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index b52407822dd..c672988cc51 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -77,13 +77,13 @@ </indexterm> <simplelist> - <member><literal>AND</></member> - <member><literal>OR</></member> - <member><literal>NOT</></member> + <member><literal>AND</literal></member> + <member><literal>OR</literal></member> + <member><literal>NOT</literal></member> </simplelist> <acronym>SQL</acronym> uses a three-valued logic system with true, - false, and <literal>null</>, which represents <quote>unknown</quote>. + false, and <literal>null</literal>, which represents <quote>unknown</quote>. Observe the following truth tables: <informaltable> @@ -274,82 +274,82 @@ <tbody> <row> - <entry> <replaceable>a</> <literal>BETWEEN</> <replaceable>x</> <literal>AND</> <replaceable>y</> </entry> + <entry> <replaceable>a</replaceable> <literal>BETWEEN</literal> <replaceable>x</replaceable> <literal>AND</literal> <replaceable>y</replaceable> </entry> <entry>between</entry> </row> <row> - <entry> <replaceable>a</> <literal>NOT BETWEEN</> <replaceable>x</> <literal>AND</> <replaceable>y</> </entry> + <entry> <replaceable>a</replaceable> <literal>NOT BETWEEN</literal> <replaceable>x</replaceable> <literal>AND</literal> <replaceable>y</replaceable> </entry> <entry>not between</entry> </row> <row> - <entry> <replaceable>a</> <literal>BETWEEN SYMMETRIC</> <replaceable>x</> <literal>AND</> <replaceable>y</> </entry> + <entry> <replaceable>a</replaceable> <literal>BETWEEN SYMMETRIC</literal> <replaceable>x</replaceable> <literal>AND</literal> <replaceable>y</replaceable> </entry> <entry>between, after sorting the comparison values</entry> </row> <row> - <entry> <replaceable>a</> <literal>NOT 
BETWEEN SYMMETRIC</> <replaceable>x</> <literal>AND</> <replaceable>y</> </entry> + <entry> <replaceable>a</replaceable> <literal>NOT BETWEEN SYMMETRIC</literal> <replaceable>x</replaceable> <literal>AND</literal> <replaceable>y</replaceable> </entry> <entry>not between, after sorting the comparison values</entry> </row> <row> - <entry> <replaceable>a</> <literal>IS DISTINCT FROM</> <replaceable>b</> </entry> + <entry> <replaceable>a</replaceable> <literal>IS DISTINCT FROM</literal> <replaceable>b</replaceable> </entry> <entry>not equal, treating null like an ordinary value</entry> </row> <row> - <entry><replaceable>a</> <literal>IS NOT DISTINCT FROM</> <replaceable>b</></entry> + <entry><replaceable>a</replaceable> <literal>IS NOT DISTINCT FROM</literal> <replaceable>b</replaceable></entry> <entry>equal, treating null like an ordinary value</entry> </row> <row> - <entry> <replaceable>expression</> <literal>IS NULL</> </entry> + <entry> <replaceable>expression</replaceable> <literal>IS NULL</literal> </entry> <entry>is null</entry> </row> <row> - <entry> <replaceable>expression</> <literal>IS NOT NULL</> </entry> + <entry> <replaceable>expression</replaceable> <literal>IS NOT NULL</literal> </entry> <entry>is not null</entry> </row> <row> - <entry> <replaceable>expression</> <literal>ISNULL</> </entry> + <entry> <replaceable>expression</replaceable> <literal>ISNULL</literal> </entry> <entry>is null (nonstandard syntax)</entry> </row> <row> - <entry> <replaceable>expression</> <literal>NOTNULL</> </entry> + <entry> <replaceable>expression</replaceable> <literal>NOTNULL</literal> </entry> <entry>is not null (nonstandard syntax)</entry> </row> <row> - <entry> <replaceable>boolean_expression</> <literal>IS TRUE</> </entry> + <entry> <replaceable>boolean_expression</replaceable> <literal>IS TRUE</literal> </entry> <entry>is true</entry> </row> <row> - <entry> <replaceable>boolean_expression</> <literal>IS NOT TRUE</> </entry> + <entry> 
<replaceable>boolean_expression</replaceable> <literal>IS NOT TRUE</literal> </entry> <entry>is false or unknown</entry> </row> <row> - <entry> <replaceable>boolean_expression</> <literal>IS FALSE</> </entry> + <entry> <replaceable>boolean_expression</replaceable> <literal>IS FALSE</literal> </entry> <entry>is false</entry> </row> <row> - <entry> <replaceable>boolean_expression</> <literal>IS NOT FALSE</> </entry> + <entry> <replaceable>boolean_expression</replaceable> <literal>IS NOT FALSE</literal> </entry> <entry>is true or unknown</entry> </row> <row> - <entry> <replaceable>boolean_expression</> <literal>IS UNKNOWN</> </entry> + <entry> <replaceable>boolean_expression</replaceable> <literal>IS UNKNOWN</literal> </entry> <entry>is unknown</entry> </row> <row> - <entry> <replaceable>boolean_expression</> <literal>IS NOT UNKNOWN</> </entry> + <entry> <replaceable>boolean_expression</replaceable> <literal>IS NOT UNKNOWN</literal> </entry> <entry>is true or false</entry> </row> </tbody> @@ -381,9 +381,9 @@ <indexterm> <primary>BETWEEN SYMMETRIC</primary> </indexterm> - <literal>BETWEEN SYMMETRIC</> is like <literal>BETWEEN</> + <literal>BETWEEN SYMMETRIC</literal> is like <literal>BETWEEN</literal> except there is no requirement that the argument to the left of - <literal>AND</> be less than or equal to the argument on the right. + <literal>AND</literal> be less than or equal to the argument on the right. If it is not, those two arguments are automatically swapped, so that a nonempty range is always implied. </para> @@ -395,23 +395,23 @@ <indexterm> <primary>IS NOT DISTINCT FROM</primary> </indexterm> - Ordinary comparison operators yield null (signifying <quote>unknown</>), + Ordinary comparison operators yield null (signifying <quote>unknown</quote>), not true or false, when either input is null. For example, - <literal>7 = NULL</> yields null, as does <literal>7 <> NULL</>. When + <literal>7 = NULL</literal> yields null, as does <literal>7 <> NULL</literal>. 
When this behavior is not suitable, use the - <literal>IS <optional> NOT </> DISTINCT FROM</literal> predicates: + <literal>IS <optional> NOT </optional> DISTINCT FROM</literal> predicates: <synopsis> <replaceable>a</replaceable> IS DISTINCT FROM <replaceable>b</replaceable> <replaceable>a</replaceable> IS NOT DISTINCT FROM <replaceable>b</replaceable> </synopsis> For non-null inputs, <literal>IS DISTINCT FROM</literal> is - the same as the <literal><></> operator. However, if both + the same as the <literal><></literal> operator. However, if both inputs are null it returns false, and if only one input is null it returns true. Similarly, <literal>IS NOT DISTINCT FROM</literal> is identical to <literal>=</literal> for non-null inputs, but it returns true when both inputs are null, and false when only one input is null. Thus, these predicates effectively act as though null - were a normal data value, rather than <quote>unknown</>. + were a normal data value, rather than <quote>unknown</quote>. </para> <para> @@ -443,8 +443,8 @@ <para> Do <emphasis>not</emphasis> write <literal><replaceable>expression</replaceable> = NULL</literal> - because <literal>NULL</> is not <quote>equal to</quote> - <literal>NULL</>. (The null value represents an unknown value, + because <literal>NULL</literal> is not <quote>equal to</quote> + <literal>NULL</literal>. (The null value represents an unknown value, and it is not known whether two unknown values are equal.) </para> @@ -464,16 +464,16 @@ <para> If the <replaceable>expression</replaceable> is row-valued, then - <literal>IS NULL</> is true when the row expression itself is null + <literal>IS NULL</literal> is true when the row expression itself is null or when all the row's fields are null, while - <literal>IS NOT NULL</> is true when the row expression itself is non-null + <literal>IS NOT NULL</literal> is true when the row expression itself is non-null and all the row's fields are non-null. 
Because of this behavior, - <literal>IS NULL</> and <literal>IS NOT NULL</> do not always return + <literal>IS NULL</literal> and <literal>IS NOT NULL</literal> do not always return inverse results for row-valued expressions; in particular, a row-valued expression that contains both null and non-null fields will return false for both tests. In some cases, it may be preferable to - write <replaceable>row</replaceable> <literal>IS DISTINCT FROM NULL</> - or <replaceable>row</replaceable> <literal>IS NOT DISTINCT FROM NULL</>, + write <replaceable>row</replaceable> <literal>IS DISTINCT FROM NULL</literal> + or <replaceable>row</replaceable> <literal>IS NOT DISTINCT FROM NULL</literal>, which will simply check whether the overall row value is null without any additional tests on the row fields. </para> @@ -508,8 +508,8 @@ </synopsis> These will always return true or false, never a null value, even when the operand is null. - A null input is treated as the logical value <quote>unknown</>. - Notice that <literal>IS UNKNOWN</> and <literal>IS NOT UNKNOWN</> are + A null input is treated as the logical value <quote>unknown</quote>. + Notice that <literal>IS UNKNOWN</literal> and <literal>IS NOT UNKNOWN</literal> are effectively the same as <literal>IS NULL</literal> and <literal>IS NOT NULL</literal>, respectively, except that the input expression must be of Boolean type. 
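The three-valued comparison rules documented in this hunk (ordinary `=` yields null on null input, while `IS DISTINCT FROM` treats null as an ordinary value) can be sketched in Python, using `None` for SQL NULL. The function names here are illustrative only, not anything PostgreSQL or its drivers expose:

```python
def sql_eq(a, b):
    """Model the ordinary SQL = operator: any null (None) input
    yields null (None), signifying 'unknown'."""
    if a is None or b is None:
        return None
    return a == b

def is_distinct_from(a, b):
    """Model IS DISTINCT FROM: null is treated as a normal,
    comparable value rather than 'unknown'."""
    if a is None and b is None:
        return False   # both null -> not distinct
    if a is None or b is None:
        return True    # exactly one null -> distinct
    return a != b      # ordinary inequality otherwise
```

So `sql_eq(7, None)` is `None` (unknown), matching the doc's point that `7 = NULL` yields null, while `is_distinct_from(7, None)` is `True` and `is_distinct_from(None, None)` is `False`.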
@@ -835,10 +835,10 @@ <indexterm> <primary>div</primary> </indexterm> - <literal><function>div(<parameter>y</parameter> <type>numeric</>, - <parameter>x</parameter> <type>numeric</>)</function></literal> + <literal><function>div(<parameter>y</parameter> <type>numeric</type>, + <parameter>x</parameter> <type>numeric</type>)</function></literal> </entry> - <entry><type>numeric</></entry> + <entry><type>numeric</type></entry> <entry>integer quotient of <parameter>y</parameter>/<parameter>x</parameter></entry> <entry><literal>div(9,4)</literal></entry> <entry><literal>2</literal></entry> @@ -941,7 +941,7 @@ <parameter>b</parameter> <type>dp</type>)</function></literal> </entry> <entry><type>dp</type></entry> - <entry><parameter>a</> raised to the power of <parameter>b</parameter></entry> + <entry><parameter>a</parameter> raised to the power of <parameter>b</parameter></entry> <entry><literal>power(9.0, 3.0)</literal></entry> <entry><literal>729</literal></entry> </row> @@ -950,7 +950,7 @@ <entry><literal><function>power(<parameter>a</parameter> <type>numeric</type>, <parameter>b</parameter> <type>numeric</type>)</function></literal></entry> <entry><type>numeric</type></entry> - <entry><parameter>a</> raised to the power of <parameter>b</parameter></entry> + <entry><parameter>a</parameter> raised to the power of <parameter>b</parameter></entry> <entry><literal>power(9.0, 3.0)</literal></entry> <entry><literal>729</literal></entry> </row> @@ -1056,10 +1056,10 @@ </indexterm> <literal><function>width_bucket(<parameter>operand</parameter> <type>dp</type>, <parameter>b1</parameter> <type>dp</type>, <parameter>b2</parameter> <type>dp</type>, <parameter>count</parameter> <type>int</type>)</function></literal></entry> <entry><type>int</type></entry> - <entry>return the bucket number to which <parameter>operand</> would - be assigned in a histogram having <parameter>count</> equal-width - buckets spanning the range <parameter>b1</> to <parameter>b2</>; - returns <literal>0</> 
or <literal><parameter>count</>+1</literal> for + <entry>return the bucket number to which <parameter>operand</parameter> would + be assigned in a histogram having <parameter>count</parameter> equal-width + buckets spanning the range <parameter>b1</parameter> to <parameter>b2</parameter>; + returns <literal>0</literal> or <literal><parameter>count</parameter>+1</literal> for an input outside the range</entry> <entry><literal>width_bucket(5.35, 0.024, 10.06, 5)</literal></entry> <entry><literal>3</literal></entry> @@ -1068,10 +1068,10 @@ <row> <entry><literal><function>width_bucket(<parameter>operand</parameter> <type>numeric</type>, <parameter>b1</parameter> <type>numeric</type>, <parameter>b2</parameter> <type>numeric</type>, <parameter>count</parameter> <type>int</type>)</function></literal></entry> <entry><type>int</type></entry> - <entry>return the bucket number to which <parameter>operand</> would - be assigned in a histogram having <parameter>count</> equal-width - buckets spanning the range <parameter>b1</> to <parameter>b2</>; - returns <literal>0</> or <literal><parameter>count</>+1</literal> for + <entry>return the bucket number to which <parameter>operand</parameter> would + be assigned in a histogram having <parameter>count</parameter> equal-width + buckets spanning the range <parameter>b1</parameter> to <parameter>b2</parameter>; + returns <literal>0</literal> or <literal><parameter>count</parameter>+1</literal> for an input outside the range</entry> <entry><literal>width_bucket(5.35, 0.024, 10.06, 5)</literal></entry> <entry><literal>3</literal></entry> @@ -1080,10 +1080,10 @@ <row> <entry><literal><function>width_bucket(<parameter>operand</parameter> <type>anyelement</type>, <parameter>thresholds</parameter> <type>anyarray</type>)</function></literal></entry> <entry><type>int</type></entry> - <entry>return the bucket number to which <parameter>operand</> would + <entry>return the bucket number to which <parameter>operand</parameter> would be assigned 
given an array listing the lower bounds of the buckets; - returns <literal>0</> for an input less than the first lower bound; - the <parameter>thresholds</> array <emphasis>must be sorted</>, + returns <literal>0</literal> for an input less than the first lower bound; + the <parameter>thresholds</parameter> array <emphasis>must be sorted</emphasis>, smallest first, or unexpected results will be obtained</entry> <entry><literal>width_bucket(now(), array['yesterday', 'today', 'tomorrow']::timestamptz[])</literal></entry> <entry><literal>2</literal></entry> @@ -1303,7 +1303,7 @@ and <literal><function>degrees()</function></literal> shown earlier. However, using the degree-based trigonometric functions is preferred, as that way avoids round-off error for special cases such - as <literal>sind(30)</>. + as <literal>sind(30)</literal>. </para> </note> @@ -1329,7 +1329,7 @@ key words, rather than commas, to separate arguments. Details are in <xref linkend="functions-string-sql">. - <productname>PostgreSQL</> also provides versions of these functions + <productname>PostgreSQL</productname> also provides versions of these functions that use the regular function invocation syntax (see <xref linkend="functions-string-other">). </para> @@ -1339,12 +1339,12 @@ Before <productname>PostgreSQL</productname> 8.3, these functions would silently accept values of several non-string data types as well, due to the presence of implicit coercions from those data types to - <type>text</>. Those coercions have been removed because they frequently + <type>text</type>. Those coercions have been removed because they frequently caused surprising behaviors. However, the string concatenation operator - (<literal>||</>) still accepts non-string input, so long as at least one + (<literal>||</literal>) still accepts non-string input, so long as at least one input is of a string type, as shown in <xref linkend="functions-string-sql">. 
For other cases, insert an explicit - coercion to <type>text</> if you need to duplicate the previous behavior. + coercion to <type>text</type> if you need to duplicate the previous behavior. </para> </note> @@ -1536,7 +1536,7 @@ <entry> Remove the longest string containing only characters from <parameter>characters</parameter> (a space by default) from the - start, end, or both ends (<literal>both</> is the default) + start, end, or both ends (<literal>both</literal> is the default) of <parameter>string</parameter> </entry> <entry><literal>trim(both 'xyz' from 'yxTomxx')</literal></entry> @@ -1553,7 +1553,7 @@ </entry> <entry><type>text</type></entry> <entry> - Non-standard syntax for <function>trim()</> + Non-standard syntax for <function>trim()</function> </entry> <entry><literal>trim(both from 'yxTomxx', 'xyz')</literal></entry> <entry><literal>Tom</literal></entry> @@ -1753,8 +1753,8 @@ </entry> <entry><type>bytea</type></entry> <entry> - Decode binary data from textual representation in <parameter>string</>. - Options for <parameter>format</> are same as in <function>encode</>. + Decode binary data from textual representation in <parameter>string</parameter>. + Options for <parameter>format</parameter> are same as in <function>encode</function>. </entry> <entry><literal>decode('MTIzAAE=', 'base64')</literal></entry> <entry><literal>\x3132330001</literal></entry> @@ -1771,9 +1771,9 @@ <entry><type>text</type></entry> <entry> Encode binary data into a textual representation. Supported - formats are: <literal>base64</>, <literal>hex</>, <literal>escape</>. - <literal>escape</> converts zero bytes and high-bit-set bytes to - octal sequences (<literal>\</><replaceable>nnn</>) and + formats are: <literal>base64</literal>, <literal>hex</literal>, <literal>escape</literal>. + <literal>escape</literal> converts zero bytes and high-bit-set bytes to + octal sequences (<literal>\</literal><replaceable>nnn</replaceable>) and doubles backslashes. 
</entry> <entry><literal>encode(E'123\\000\\001', 'base64')</literal></entry> @@ -1791,7 +1791,7 @@ <entry><type>text</type></entry> <entry> Format arguments according to a format string. - This function is similar to the C function <function>sprintf</>. + This function is similar to the C function <function>sprintf</function>. See <xref linkend="functions-string-format">. </entry> <entry><literal>format('Hello %s, %1$s', 'World')</literal></entry> @@ -1825,8 +1825,8 @@ </entry> <entry><type>text</type></entry> <entry> - Return first <replaceable>n</> characters in the string. When <replaceable>n</> - is negative, return all but last |<replaceable>n</>| characters. + Return first <replaceable>n</replaceable> characters in the string. When <replaceable>n</replaceable> + is negative, return all but last |<replaceable>n</replaceable>| characters. </entry> <entry><literal>left('abcde', 2)</literal></entry> <entry><literal>ab</literal></entry> @@ -1929,11 +1929,11 @@ Split <parameter>qualified_identifier</parameter> into an array of identifiers, removing any quoting of individual identifiers. By default, extra characters after the last identifier are considered an - error; but if the second parameter is <literal>false</>, then such + error; but if the second parameter is <literal>false</literal>, then such extra characters are ignored. (This behavior is useful for parsing names for objects like functions.) Note that this function does not truncate over-length identifiers. If you want truncation you can cast - the result to <type>name[]</>. + the result to <type>name[]</type>. </entry> <entry><literal>parse_ident('"SomeSchema".someTable')</literal></entry> <entry><literal>{SomeSchema,sometable}</literal></entry> @@ -2017,7 +2017,7 @@ <entry> Return the given string suitably quoted to be used as a string literal in an <acronym>SQL</acronym> statement string; or, if the argument - is null, return <literal>NULL</>. + is null, return <literal>NULL</literal>. 
Embedded single-quotes and backslashes are properly doubled. See also <xref linkend="plpgsql-quote-literal-example">. </entry> @@ -2030,7 +2030,7 @@ <entry><type>text</type></entry> <entry> Coerce the given value to text and then quote it as a literal; - or, if the argument is null, return <literal>NULL</>. + or, if the argument is null, return <literal>NULL</literal>. Embedded single-quotes and backslashes are properly doubled. </entry> <entry><literal>quote_nullable(42.5)</literal></entry> @@ -2177,8 +2177,8 @@ </entry> <entry><type>text</type></entry> <entry> - Return last <replaceable>n</> characters in the string. When <replaceable>n</> - is negative, return all but first |<replaceable>n</>| characters. + Return last <replaceable>n</replaceable> characters in the string. When <replaceable>n</replaceable> + is negative, return all but first |<replaceable>n</replaceable>| characters. </entry> <entry><literal>right('abcde', 2)</literal></entry> <entry><literal>de</literal></entry> @@ -2285,8 +2285,8 @@ <entry><type>text</type></entry> <entry> Convert <parameter>string</parameter> to <acronym>ASCII</acronym> from another encoding - (only supports conversion from <literal>LATIN1</>, <literal>LATIN2</>, <literal>LATIN9</>, - and <literal>WIN1250</> encodings) + (only supports conversion from <literal>LATIN1</literal>, <literal>LATIN2</literal>, <literal>LATIN9</literal>, + and <literal>WIN1250</literal> encodings) </entry> <entry><literal>to_ascii('Karel')</literal></entry> <entry><literal>Karel</literal></entry> @@ -3154,30 +3154,30 @@ </indexterm> <para> - The function <function>format</> produces output formatted according to + The function <function>format</function> produces output formatted according to a format string, in a style similar to the C function - <function>sprintf</>. + <function>sprintf</function>. </para> <para> <synopsis> -<function>format</>(<parameter>formatstr</> <type>text</> [, <parameter>formatarg</> <type>"any"</> [, ...] 
]) +<function>format</function>(<parameter>formatstr</parameter> <type>text</type> [, <parameter>formatarg</parameter> <type>"any"</type> [, ...] ]) </synopsis> - <replaceable>formatstr</> is a format string that specifies how the + <replaceable>formatstr</replaceable> is a format string that specifies how the result should be formatted. Text in the format string is copied - directly to the result, except where <firstterm>format specifiers</> are + directly to the result, except where <firstterm>format specifiers</firstterm> are used. Format specifiers act as placeholders in the string, defining how subsequent function arguments should be formatted and inserted into the - result. Each <replaceable>formatarg</> argument is converted to text + result. Each <replaceable>formatarg</replaceable> argument is converted to text according to the usual output rules for its data type, and then formatted and inserted into the result string according to the format specifier(s). </para> <para> - Format specifiers are introduced by a <literal>%</> character and have + Format specifiers are introduced by a <literal>%</literal> character and have the form <synopsis> -%[<replaceable>position</>][<replaceable>flags</>][<replaceable>width</>]<replaceable>type</> +%[<replaceable>position</replaceable>][<replaceable>flags</replaceable>][<replaceable>width</replaceable>]<replaceable>type</replaceable> </synopsis> where the component fields are: @@ -3186,10 +3186,10 @@ <term><replaceable>position</replaceable> (optional)</term> <listitem> <para> - A string of the form <literal><replaceable>n</>$</> where - <replaceable>n</> is the index of the argument to print. + A string of the form <literal><replaceable>n</replaceable>$</literal> where + <replaceable>n</replaceable> is the index of the argument to print. Index 1 means the first argument after - <replaceable>formatstr</>. If the <replaceable>position</> is + <replaceable>formatstr</replaceable>. 
If the <replaceable>position</replaceable> is omitted, the default is to use the next argument in sequence. </para> </listitem> @@ -3201,8 +3201,8 @@ <para> Additional options controlling how the format specifier's output is formatted. Currently the only supported flag is a minus sign - (<literal>-</>) which will cause the format specifier's output to be - left-justified. This has no effect unless the <replaceable>width</> + (<literal>-</literal>) which will cause the format specifier's output to be + left-justified. This has no effect unless the <replaceable>width</replaceable> field is also specified. </para> </listitem> @@ -3212,23 +3212,23 @@ <term><replaceable>width</replaceable> (optional)</term> <listitem> <para> - Specifies the <emphasis>minimum</> number of characters to use to + Specifies the <emphasis>minimum</emphasis> number of characters to use to display the format specifier's output. The output is padded on the - left or right (depending on the <literal>-</> flag) with spaces as + left or right (depending on the <literal>-</literal> flag) with spaces as needed to fill the width. A too-small width does not cause truncation of the output, but is simply ignored. The width may be specified using any of the following: a positive integer; an - asterisk (<literal>*</>) to use the next function argument as the - width; or a string of the form <literal>*<replaceable>n</>$</> to - use the <replaceable>n</>th function argument as the width. + asterisk (<literal>*</literal>) to use the next function argument as the + width; or a string of the form <literal>*<replaceable>n</replaceable>$</literal> to + use the <replaceable>n</replaceable>th function argument as the width. </para> <para> If the width comes from a function argument, that argument is consumed before the argument that is used for the format specifier's value. 
If the width argument is negative, the result is left - aligned (as if the <literal>-</> flag had been specified) within a - field of length <function>abs</>(<replaceable>width</replaceable>). + aligned (as if the <literal>-</literal> flag had been specified) within a + field of length <function>abs</function>(<replaceable>width</replaceable>). </para> </listitem> </varlistentry> @@ -3251,13 +3251,13 @@ <literal>I</literal> treats the argument value as an SQL identifier, double-quoting it if necessary. It is an error for the value to be null (equivalent to - <function>quote_ident</>). + <function>quote_ident</function>). </para> </listitem> <listitem> <para> <literal>L</literal> quotes the argument value as an SQL literal. - A null value is displayed as the string <literal>NULL</>, without + A null value is displayed as the string <literal>NULL</literal>, without quotes (equivalent to <function>quote_nullable</function>). </para> </listitem> @@ -3270,7 +3270,7 @@ <para> In addition to the format specifiers described above, the special sequence - <literal>%%</> may be used to output a literal <literal>%</> character. + <literal>%%</literal> may be used to output a literal <literal>%</literal> character. 
</para> <para> @@ -3281,7 +3281,7 @@ SELECT format('Hello %s', 'World'); <lineannotation>Result: </lineannotation><computeroutput>Hello World</computeroutput> SELECT format('Testing %s, %s, %s, %%', 'one', 'two', 'three'); -<lineannotation>Result: </><computeroutput>Testing one, two, three, %</> +<lineannotation>Result: </lineannotation><computeroutput>Testing one, two, three, %</computeroutput> SELECT format('INSERT INTO %I VALUES(%L)', 'Foo bar', E'O\'Reilly'); <lineannotation>Result: </lineannotation><computeroutput>INSERT INTO "Foo bar" VALUES('O''Reilly')</computeroutput> @@ -3293,63 +3293,63 @@ SELECT format('INSERT INTO %I VALUES(%L)', 'locations', E'C:\\Program Files'); <para> Here are examples using <replaceable>width</replaceable> fields - and the <literal>-</> flag: + and the <literal>-</literal> flag: <screen> SELECT format('|%10s|', 'foo'); -<lineannotation>Result: </><computeroutput>| foo|</> +<lineannotation>Result: </lineannotation><computeroutput>| foo|</computeroutput> SELECT format('|%-10s|', 'foo'); -<lineannotation>Result: </><computeroutput>|foo |</> +<lineannotation>Result: </lineannotation><computeroutput>|foo |</computeroutput> SELECT format('|%*s|', 10, 'foo'); -<lineannotation>Result: </><computeroutput>| foo|</> +<lineannotation>Result: </lineannotation><computeroutput>| foo|</computeroutput> SELECT format('|%*s|', -10, 'foo'); -<lineannotation>Result: </><computeroutput>|foo |</> +<lineannotation>Result: </lineannotation><computeroutput>|foo |</computeroutput> SELECT format('|%-*s|', 10, 'foo'); -<lineannotation>Result: </><computeroutput>|foo |</> +<lineannotation>Result: </lineannotation><computeroutput>|foo |</computeroutput> SELECT format('|%-*s|', -10, 'foo'); -<lineannotation>Result: </><computeroutput>|foo |</> +<lineannotation>Result: </lineannotation><computeroutput>|foo |</computeroutput> </screen> </para> <para> - These examples show use of <replaceable>position</> fields: + These examples show use of 
<replaceable>position</replaceable> fields: <screen> SELECT format('Testing %3$s, %2$s, %1$s', 'one', 'two', 'three'); -<lineannotation>Result: </><computeroutput>Testing three, two, one</> +<lineannotation>Result: </lineannotation><computeroutput>Testing three, two, one</computeroutput> SELECT format('|%*2$s|', 'foo', 10, 'bar'); -<lineannotation>Result: </><computeroutput>| bar|</> +<lineannotation>Result: </lineannotation><computeroutput>| bar|</computeroutput> SELECT format('|%1$*2$s|', 'foo', 10, 'bar'); -<lineannotation>Result: </><computeroutput>| foo|</> +<lineannotation>Result: </lineannotation><computeroutput>| foo|</computeroutput> </screen> </para> <para> - Unlike the standard C function <function>sprintf</>, - <productname>PostgreSQL</>'s <function>format</> function allows format - specifiers with and without <replaceable>position</> fields to be mixed + Unlike the standard C function <function>sprintf</function>, + <productname>PostgreSQL</productname>'s <function>format</function> function allows format + specifiers with and without <replaceable>position</replaceable> fields to be mixed in the same format string. A format specifier without a - <replaceable>position</> field always uses the next argument after the + <replaceable>position</replaceable> field always uses the next argument after the last argument consumed. - In addition, the <function>format</> function does not require all + In addition, the <function>format</function> function does not require all function arguments to be used in the format string. 
For example: <screen> SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); -<lineannotation>Result: </><computeroutput>Testing three, two, three</> +<lineannotation>Result: </lineannotation><computeroutput>Testing three, two, three</computeroutput> </screen> </para> <para> - The <literal>%I</> and <literal>%L</> format specifiers are particularly + The <literal>%I</literal> and <literal>%L</literal> format specifiers are particularly useful for safely constructing dynamic SQL statements. See <xref linkend="plpgsql-quote-literal-example">. </para> @@ -3376,7 +3376,7 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); key words, rather than commas, to separate arguments. Details are in <xref linkend="functions-binarystring-sql">. - <productname>PostgreSQL</> also provides versions of these functions + <productname>PostgreSQL</productname> also provides versions of these functions that use the regular function invocation syntax (see <xref linkend="functions-binarystring-other">). </para> @@ -3384,7 +3384,7 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); <note> <para> The sample results shown on this page assume that the server parameter - <link linkend="guc-bytea-output"><varname>bytea_output</></link> is set + <link linkend="guc-bytea-output"><varname>bytea_output</varname></link> is set to <literal>escape</literal> (the traditional PostgreSQL format). </para> </note> @@ -3546,8 +3546,8 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); </entry> <entry><type>bytea</type></entry> <entry> - Decode binary data from textual representation in <parameter>string</>. - Options for <parameter>format</> are same as in <function>encode</>. + Decode binary data from textual representation in <parameter>string</parameter>. + Options for <parameter>format</parameter> are same as in <function>encode</function>. 
</entry> <entry><literal>decode(E'123\\000456', 'escape')</literal></entry> <entry><literal>123\000456</literal></entry> @@ -3564,9 +3564,9 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); <entry><type>text</type></entry> <entry> Encode binary data into a textual representation. Supported - formats are: <literal>base64</>, <literal>hex</>, <literal>escape</>. - <literal>escape</> converts zero bytes and high-bit-set bytes to - octal sequences (<literal>\</><replaceable>nnn</>) and + formats are: <literal>base64</literal>, <literal>hex</literal>, <literal>escape</literal>. + <literal>escape</literal> converts zero bytes and high-bit-set bytes to + octal sequences (<literal>\</literal><replaceable>nnn</replaceable>) and doubles backslashes. </entry> <entry><literal>encode(E'123\\000456'::bytea, 'escape')</literal></entry> @@ -3649,7 +3649,7 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); <primary>set_bit</primary> </indexterm> <literal><function>set_bit(<parameter>string</parameter>, - <parameter>offset</parameter>, <parameter>newvalue</>)</function></literal> + <parameter>offset</parameter>, <parameter>newvalue</parameter>)</function></literal> </entry> <entry><type>bytea</type></entry> <entry> @@ -3665,7 +3665,7 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); <primary>set_byte</primary> </indexterm> <literal><function>set_byte(<parameter>string</parameter>, - <parameter>offset</parameter>, <parameter>newvalue</>)</function></literal> + <parameter>offset</parameter>, <parameter>newvalue</parameter>)</function></literal> </entry> <entry><type>bytea</type></entry> <entry> @@ -3679,9 +3679,9 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); </table> <para> - <function>get_byte</> and <function>set_byte</> number the first byte + <function>get_byte</function> and <function>set_byte</function> number the first byte of a binary string as byte 0. 
- <function>get_bit</> and <function>set_bit</> number bits from the + <function>get_bit</function> and <function>set_bit</function> number bits from the right within each byte; for example bit 0 is the least significant bit of the first byte, and bit 15 is the most significant bit of the second byte. </para> @@ -3802,7 +3802,7 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); <para> In addition, it is possible to cast integral values to and from type - <type>bit</>. + <type>bit</type>. Some examples: <programlisting> 44::bit(10) <lineannotation>0000101100</lineannotation> @@ -3810,15 +3810,15 @@ SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three'); cast(-44 as bit(12)) <lineannotation>111111010100</lineannotation> '1110'::bit(4)::integer <lineannotation>14</lineannotation> </programlisting> - Note that casting to just <quote>bit</> means casting to - <literal>bit(1)</>, and so will deliver only the least significant + Note that casting to just <quote>bit</quote> means casting to + <literal>bit(1)</literal>, and so will deliver only the least significant bit of the integer. </para> <note> <para> - Casting an integer to <type>bit(n)</> copies the rightmost - <literal>n</> bits. Casting an integer to a bit string width wider + Casting an integer to <type>bit(n)</type> copies the rightmost + <literal>n</literal> bits. Casting an integer to a bit string width wider than the integer itself will sign-extend on the left. </para> </note> @@ -3840,7 +3840,7 @@ cast(-44 as bit(12)) <lineannotation>111111010100</lineannotation> more recent <function>SIMILAR TO</function> operator (added in SQL:1999), and <acronym>POSIX</acronym>-style regular expressions. Aside from the basic <quote>does this string match - this pattern?</> operators, functions are available to extract + this pattern?</quote> operators, functions are available to extract or replace matching substrings and to split a string at matching locations. 
</para> @@ -4004,9 +4004,9 @@ cast(-44 as bit(12)) <lineannotation>111111010100</lineannotation> can match any part of the string. Also like <function>LIKE</function>, <function>SIMILAR TO</function> uses - <literal>_</> and <literal>%</> as wildcard characters denoting + <literal>_</literal> and <literal>%</literal> as wildcard characters denoting any single character and any string, respectively (these are - comparable to <literal>.</> and <literal>.*</> in POSIX regular + comparable to <literal>.</literal> and <literal>.*</literal> in POSIX regular expressions). </para> @@ -4041,21 +4041,21 @@ cast(-44 as bit(12)) <lineannotation>111111010100</lineannotation> </listitem> <listitem> <para> - <literal>{</><replaceable>m</><literal>}</literal> denotes repetition - of the previous item exactly <replaceable>m</> times. + <literal>{</literal><replaceable>m</replaceable><literal>}</literal> denotes repetition + of the previous item exactly <replaceable>m</replaceable> times. </para> </listitem> <listitem> <para> - <literal>{</><replaceable>m</><literal>,}</literal> denotes repetition - of the previous item <replaceable>m</> or more times. + <literal>{</literal><replaceable>m</replaceable><literal>,}</literal> denotes repetition + of the previous item <replaceable>m</replaceable> or more times. </para> </listitem> <listitem> <para> - <literal>{</><replaceable>m</><literal>,</><replaceable>n</><literal>}</> - denotes repetition of the previous item at least <replaceable>m</> and - not more than <replaceable>n</> times. + <literal>{</literal><replaceable>m</replaceable><literal>,</literal><replaceable>n</replaceable><literal>}</literal> + denotes repetition of the previous item at least <replaceable>m</replaceable> and + not more than <replaceable>n</replaceable> times. 
</para> </listitem> <listitem> @@ -4072,14 +4072,14 @@ cast(-44 as bit(12)) <lineannotation>111111010100</lineannotation> </listitem> </itemizedlist> - Notice that the period (<literal>.</>) is not a metacharacter - for <function>SIMILAR TO</>. + Notice that the period (<literal>.</literal>) is not a metacharacter + for <function>SIMILAR TO</function>. </para> <para> - As with <function>LIKE</>, a backslash disables the special meaning + As with <function>LIKE</function>, a backslash disables the special meaning of any of these metacharacters; or a different escape character can - be specified with <literal>ESCAPE</>. + be specified with <literal>ESCAPE</literal>. </para> <para> @@ -4093,23 +4093,23 @@ cast(-44 as bit(12)) <lineannotation>111111010100</lineannotation> </para> <para> - The <function>substring</> function with three parameters, + The <function>substring</function> function with three parameters, <function>substring(<replaceable>string</replaceable> from <replaceable>pattern</replaceable> for <replaceable>escape-character</replaceable>)</function>, provides extraction of a substring that matches an SQL - regular expression pattern. As with <literal>SIMILAR TO</>, the + regular expression pattern. As with <literal>SIMILAR TO</literal>, the specified pattern must match the entire data string, or else the function fails and returns null. To indicate the part of the pattern that should be returned on success, the pattern must contain two occurrences of the escape character followed by a double quote - (<literal>"</>). <!-- " font-lock sanity --> + (<literal>"</literal>). <!-- " font-lock sanity --> The text matching the portion of the pattern between these markers is returned. 
</para> <para> - Some examples, with <literal>#"</> delimiting the return string: + Some examples, with <literal>#"</literal> delimiting the return string: <programlisting> substring('foobar' from '%#"o_b#"%' for '#') <lineannotation>oob</lineannotation> substring('foobar' from '#"o_b#"%' for '#') <lineannotation>NULL</lineannotation> @@ -4191,7 +4191,7 @@ substring('foobar' from '#"o_b#"%' for '#') <lineannotation>NULL</lineannotat <para> <acronym>POSIX</acronym> regular expressions provide a more powerful means for pattern matching than the <function>LIKE</function> and - <function>SIMILAR TO</> operators. + <function>SIMILAR TO</function> operators. Many Unix tools such as <command>egrep</command>, <command>sed</command>, or <command>awk</command> use a pattern matching language that is similar to the one described here. @@ -4228,7 +4228,7 @@ substring('foobar' from '#"o_b#"%' for '#') <lineannotation>NULL</lineannotat </para> <para> - The <function>substring</> function with two parameters, + The <function>substring</function> function with two parameters, <function>substring(<replaceable>string</replaceable> from <replaceable>pattern</replaceable>)</function>, provides extraction of a substring @@ -4253,30 +4253,30 @@ substring('foobar' from 'o(.)b') <lineannotation>o</lineannotation> </para> <para> - The <function>regexp_replace</> function provides substitution of + The <function>regexp_replace</function> function provides substitution of new text for substrings that match POSIX regular expression patterns. It has the syntax - <function>regexp_replace</function>(<replaceable>source</>, - <replaceable>pattern</>, <replaceable>replacement</> - <optional>, <replaceable>flags</> </optional>). - The <replaceable>source</> string is returned unchanged if - there is no match to the <replaceable>pattern</>. If there is a - match, the <replaceable>source</> string is returned with the - <replaceable>replacement</> string substituted for the matching - substring. 
The <replaceable>replacement</> string can contain - <literal>\</><replaceable>n</>, where <replaceable>n</> is 1 + <function>regexp_replace</function>(<replaceable>source</replaceable>, + <replaceable>pattern</replaceable>, <replaceable>replacement</replaceable> + <optional>, <replaceable>flags</replaceable> </optional>). + The <replaceable>source</replaceable> string is returned unchanged if + there is no match to the <replaceable>pattern</replaceable>. If there is a + match, the <replaceable>source</replaceable> string is returned with the + <replaceable>replacement</replaceable> string substituted for the matching + substring. The <replaceable>replacement</replaceable> string can contain + <literal>\</literal><replaceable>n</replaceable>, where <replaceable>n</replaceable> is 1 through 9, to indicate that the source substring matching the - <replaceable>n</>'th parenthesized subexpression of the pattern should be - inserted, and it can contain <literal>\&</> to indicate that the + <replaceable>n</replaceable>'th parenthesized subexpression of the pattern should be + inserted, and it can contain <literal>\&</literal> to indicate that the substring matching the entire pattern should be inserted. Write - <literal>\\</> if you need to put a literal backslash in the replacement + <literal>\\</literal> if you need to put a literal backslash in the replacement text. - The <replaceable>flags</> parameter is an optional text + The <replaceable>flags</replaceable> parameter is an optional text string containing zero or more single-letter flags that change the - function's behavior. Flag <literal>i</> specifies case-insensitive - matching, while flag <literal>g</> specifies replacement of each matching + function's behavior. Flag <literal>i</literal> specifies case-insensitive + matching, while flag <literal>g</literal> specifies replacement of each matching substring rather than only the first one. 
Supported flags (though - not <literal>g</>) are + not <literal>g</literal>) are described in <xref linkend="posix-embedded-options-table">. </para> @@ -4293,22 +4293,22 @@ regexp_replace('foobarbaz', 'b(..)', E'X\\1Y', 'g') </para> <para> - The <function>regexp_match</> function returns a text array of + The <function>regexp_match</function> function returns a text array of captured substring(s) resulting from the first match of a POSIX regular expression pattern to a string. It has the syntax - <function>regexp_match</function>(<replaceable>string</>, - <replaceable>pattern</> <optional>, <replaceable>flags</> </optional>). - If there is no match, the result is <literal>NULL</>. - If a match is found, and the <replaceable>pattern</> contains no + <function>regexp_match</function>(<replaceable>string</replaceable>, + <replaceable>pattern</replaceable> <optional>, <replaceable>flags</replaceable> </optional>). + If there is no match, the result is <literal>NULL</literal>. + If a match is found, and the <replaceable>pattern</replaceable> contains no parenthesized subexpressions, then the result is a single-element text array containing the substring matching the whole pattern. - If a match is found, and the <replaceable>pattern</> contains + If a match is found, and the <replaceable>pattern</replaceable> contains parenthesized subexpressions, then the result is a text array - whose <replaceable>n</>'th element is the substring matching - the <replaceable>n</>'th parenthesized subexpression of - the <replaceable>pattern</> (not counting <quote>non-capturing</> + whose <replaceable>n</replaceable>'th element is the substring matching + the <replaceable>n</replaceable>'th parenthesized subexpression of + the <replaceable>pattern</replaceable> (not counting <quote>non-capturing</quote> parentheses; see below for details). 
- The <replaceable>flags</> parameter is an optional text string + The <replaceable>flags</replaceable> parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Supported flags are described in <xref linkend="posix-embedded-options-table">. @@ -4330,7 +4330,7 @@ SELECT regexp_match('foobarbequebaz', '(bar)(beque)'); (1 row) </programlisting> In the common case where you just want the whole matching substring - or <literal>NULL</> for no match, write something like + or <literal>NULL</literal> for no match, write something like <programlisting> SELECT (regexp_match('foobarbequebaz', 'bar.*que'))[1]; regexp_match @@ -4341,20 +4341,20 @@ SELECT (regexp_match('foobarbequebaz', 'bar.*que'))[1]; </para> <para> - The <function>regexp_matches</> function returns a set of text arrays + The <function>regexp_matches</function> function returns a set of text arrays of captured substring(s) resulting from matching a POSIX regular expression pattern to a string. It has the same syntax as <function>regexp_match</function>. This function returns no rows if there is no match, one row if there is - a match and the <literal>g</> flag is not given, or <replaceable>N</> - rows if there are <replaceable>N</> matches and the <literal>g</> flag + a match and the <literal>g</literal> flag is not given, or <replaceable>N</replaceable> + rows if there are <replaceable>N</replaceable> matches and the <literal>g</literal> flag is given. Each returned row is a text array containing the whole matched substring or the substrings matching parenthesized - subexpressions of the <replaceable>pattern</>, just as described above + subexpressions of the <replaceable>pattern</replaceable>, just as described above for <function>regexp_match</function>. 
- <function>regexp_matches</> accepts all the flags shown + <function>regexp_matches</function> accepts all the flags shown in <xref linkend="posix-embedded-options-table">, plus - the <literal>g</> flag which commands it to return all matches, not + the <literal>g</literal> flag which commands it to return all matches, not just the first one. </para> @@ -4377,46 +4377,46 @@ SELECT regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)', 'g'); <tip> <para> - In most cases <function>regexp_matches()</> should be used with - the <literal>g</> flag, since if you only want the first match, it's - easier and more efficient to use <function>regexp_match()</>. - However, <function>regexp_match()</> only exists - in <productname>PostgreSQL</> version 10 and up. When working in older - versions, a common trick is to place a <function>regexp_matches()</> + In most cases <function>regexp_matches()</function> should be used with + the <literal>g</literal> flag, since if you only want the first match, it's + easier and more efficient to use <function>regexp_match()</function>. + However, <function>regexp_match()</function> only exists + in <productname>PostgreSQL</productname> version 10 and up. When working in older + versions, a common trick is to place a <function>regexp_matches()</function> call in a sub-select, for example: <programlisting> SELECT col1, (SELECT regexp_matches(col2, '(bar)(beque)')) FROM tab; </programlisting> - This produces a text array if there's a match, or <literal>NULL</> if - not, the same as <function>regexp_match()</> would do. Without the + This produces a text array if there's a match, or <literal>NULL</literal> if + not, the same as <function>regexp_match()</function> would do. Without the sub-select, this query would produce no output at all for table rows without a match, which is typically not the desired behavior. 
</para> </tip> <para> - The <function>regexp_split_to_table</> function splits a string using a POSIX + The <function>regexp_split_to_table</function> function splits a string using a POSIX regular expression pattern as a delimiter. It has the syntax - <function>regexp_split_to_table</function>(<replaceable>string</>, <replaceable>pattern</> - <optional>, <replaceable>flags</> </optional>). - If there is no match to the <replaceable>pattern</>, the function returns the - <replaceable>string</>. If there is at least one match, for each match it returns + <function>regexp_split_to_table</function>(<replaceable>string</replaceable>, <replaceable>pattern</replaceable> + <optional>, <replaceable>flags</replaceable> </optional>). + If there is no match to the <replaceable>pattern</replaceable>, the function returns the + <replaceable>string</replaceable>. If there is at least one match, for each match it returns the text from the end of the last match (or the beginning of the string) to the beginning of the match. When there are no more matches, it returns the text from the end of the last match to the end of the string. - The <replaceable>flags</> parameter is an optional text string containing + The <replaceable>flags</replaceable> parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. <function>regexp_split_to_table</function> supports the flags described in <xref linkend="posix-embedded-options-table">. </para> <para> - The <function>regexp_split_to_array</> function behaves the same as - <function>regexp_split_to_table</>, except that <function>regexp_split_to_array</> - returns its result as an array of <type>text</>. It has the syntax - <function>regexp_split_to_array</function>(<replaceable>string</>, <replaceable>pattern</> - <optional>, <replaceable>flags</> </optional>). - The parameters are the same as for <function>regexp_split_to_table</>. 
+ The <function>regexp_split_to_array</function> function behaves the same as + <function>regexp_split_to_table</function>, except that <function>regexp_split_to_array</function> + returns its result as an array of <type>text</type>. It has the syntax + <function>regexp_split_to_array</function>(<replaceable>string</replaceable>, <replaceable>pattern</replaceable> + <optional>, <replaceable>flags</replaceable> </optional>). + The parameters are the same as for <function>regexp_split_to_table</function>. </para> <para> @@ -4471,8 +4471,8 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; zero-length matches that occur at the start or end of the string or immediately after a previous match. This is contrary to the strict definition of regexp matching that is implemented by - <function>regexp_match</> and - <function>regexp_matches</>, but is usually the most convenient behavior + <function>regexp_match</function> and + <function>regexp_matches</function>, but is usually the most convenient behavior in practice. Other software systems such as Perl use similar definitions. </para> @@ -4491,16 +4491,16 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <para> Regular expressions (<acronym>RE</acronym>s), as defined in <acronym>POSIX</acronym> 1003.2, come in two forms: - <firstterm>extended</> <acronym>RE</acronym>s or <acronym>ERE</>s + <firstterm>extended</firstterm> <acronym>RE</acronym>s or <acronym>ERE</acronym>s (roughly those of <command>egrep</command>), and - <firstterm>basic</> <acronym>RE</acronym>s or <acronym>BRE</>s + <firstterm>basic</firstterm> <acronym>RE</acronym>s or <acronym>BRE</acronym>s (roughly those of <command>ed</command>). <productname>PostgreSQL</productname> supports both forms, and also implements some extensions that are not in the POSIX standard, but have become widely used due to their availability in programming languages such as Perl and Tcl. 
<acronym>RE</acronym>s using these non-POSIX extensions are called - <firstterm>advanced</> <acronym>RE</acronym>s or <acronym>ARE</>s + <firstterm>advanced</firstterm> <acronym>RE</acronym>s or <acronym>ARE</acronym>s in this documentation. AREs are almost an exact superset of EREs, but BREs have several notational incompatibilities (as well as being much more limited). @@ -4510,9 +4510,9 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <note> <para> - <productname>PostgreSQL</> always initially presumes that a regular + <productname>PostgreSQL</productname> always initially presumes that a regular expression follows the ARE rules. However, the more limited ERE or - BRE rules can be chosen by prepending an <firstterm>embedded option</> + BRE rules can be chosen by prepending an <firstterm>embedded option</firstterm> to the RE pattern, as described in <xref linkend="posix-metasyntax">. This can be useful for compatibility with applications that expect exactly the <acronym>POSIX</acronym> 1003.2 rules. @@ -4527,15 +4527,15 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; </para> <para> - A branch is zero or more <firstterm>quantified atoms</> or - <firstterm>constraints</>, concatenated. + A branch is zero or more <firstterm>quantified atoms</firstterm> or + <firstterm>constraints</firstterm>, concatenated. It matches a match for the first, followed by a match for the second, etc; an empty branch matches the empty string. </para> <para> - A quantified atom is an <firstterm>atom</> possibly followed - by a single <firstterm>quantifier</>. + A quantified atom is an <firstterm>atom</firstterm> possibly followed + by a single <firstterm>quantifier</firstterm>. Without a quantifier, it matches a match for the atom. With a quantifier, it can match some number of matches of the atom. 
An <firstterm>atom</firstterm> can be any of the possibilities @@ -4545,7 +4545,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; </para> <para> - A <firstterm>constraint</> matches an empty string, but matches only when + A <firstterm>constraint</firstterm> matches an empty string, but matches only when specific conditions are met. A constraint can be used where an atom could be used, except it cannot be followed by a quantifier. The simple constraints are shown in @@ -4567,57 +4567,57 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <tbody> <row> - <entry> <literal>(</><replaceable>re</><literal>)</> </entry> - <entry> (where <replaceable>re</> is any regular expression) + <entry> <literal>(</literal><replaceable>re</replaceable><literal>)</literal> </entry> + <entry> (where <replaceable>re</replaceable> is any regular expression) matches a match for - <replaceable>re</>, with the match noted for possible reporting </entry> + <replaceable>re</replaceable>, with the match noted for possible reporting </entry> </row> <row> - <entry> <literal>(?:</><replaceable>re</><literal>)</> </entry> + <entry> <literal>(?:</literal><replaceable>re</replaceable><literal>)</literal> </entry> <entry> as above, but the match is not noted for reporting - (a <quote>non-capturing</> set of parentheses) + (a <quote>non-capturing</quote> set of parentheses) (AREs only) </entry> </row> <row> - <entry> <literal>.</> </entry> + <entry> <literal>.</literal> </entry> <entry> matches any single character </entry> </row> <row> - <entry> <literal>[</><replaceable>chars</><literal>]</> </entry> - <entry> a <firstterm>bracket expression</>, - matching any one of the <replaceable>chars</> (see + <entry> <literal>[</literal><replaceable>chars</replaceable><literal>]</literal> </entry> + <entry> a <firstterm>bracket expression</firstterm>, + matching any one of the <replaceable>chars</replaceable> (see <xref 
linkend="posix-bracket-expressions"> for more detail) </entry> </row> <row> - <entry> <literal>\</><replaceable>k</> </entry> - <entry> (where <replaceable>k</> is a non-alphanumeric character) + <entry> <literal>\</literal><replaceable>k</replaceable> </entry> + <entry> (where <replaceable>k</replaceable> is a non-alphanumeric character) matches that character taken as an ordinary character, - e.g., <literal>\\</> matches a backslash character </entry> + e.g., <literal>\\</literal> matches a backslash character </entry> </row> <row> - <entry> <literal>\</><replaceable>c</> </entry> - <entry> where <replaceable>c</> is alphanumeric + <entry> <literal>\</literal><replaceable>c</replaceable> </entry> + <entry> where <replaceable>c</replaceable> is alphanumeric (possibly followed by other characters) - is an <firstterm>escape</>, see <xref linkend="posix-escape-sequences"> - (AREs only; in EREs and BREs, this matches <replaceable>c</>) </entry> + is an <firstterm>escape</firstterm>, see <xref linkend="posix-escape-sequences"> + (AREs only; in EREs and BREs, this matches <replaceable>c</replaceable>) </entry> </row> <row> - <entry> <literal>{</> </entry> + <entry> <literal>{</literal> </entry> <entry> when followed by a character other than a digit, - matches the left-brace character <literal>{</>; + matches the left-brace character <literal>{</literal>; when followed by a digit, it is the beginning of a - <replaceable>bound</> (see below) </entry> + <replaceable>bound</replaceable> (see below) </entry> </row> <row> - <entry> <replaceable>x</> </entry> - <entry> where <replaceable>x</> is a single character with no other + <entry> <replaceable>x</replaceable> </entry> + <entry> where <replaceable>x</replaceable> is a single character with no other significance, matches that character </entry> </row> </tbody> @@ -4625,7 +4625,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; </table> <para> - An RE cannot end with a backslash 
(<literal>\</>). + An RE cannot end with a backslash (<literal>\</literal>). </para> <note> @@ -4649,82 +4649,82 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <tbody> <row> - <entry> <literal>*</> </entry> + <entry> <literal>*</literal> </entry> <entry> a sequence of 0 or more matches of the atom </entry> </row> <row> - <entry> <literal>+</> </entry> + <entry> <literal>+</literal> </entry> <entry> a sequence of 1 or more matches of the atom </entry> </row> <row> - <entry> <literal>?</> </entry> + <entry> <literal>?</literal> </entry> <entry> a sequence of 0 or 1 matches of the atom </entry> </row> <row> - <entry> <literal>{</><replaceable>m</><literal>}</> </entry> - <entry> a sequence of exactly <replaceable>m</> matches of the atom </entry> + <entry> <literal>{</literal><replaceable>m</replaceable><literal>}</literal> </entry> + <entry> a sequence of exactly <replaceable>m</replaceable> matches of the atom </entry> </row> <row> - <entry> <literal>{</><replaceable>m</><literal>,}</> </entry> - <entry> a sequence of <replaceable>m</> or more matches of the atom </entry> + <entry> <literal>{</literal><replaceable>m</replaceable><literal>,}</literal> </entry> + <entry> a sequence of <replaceable>m</replaceable> or more matches of the atom </entry> </row> <row> <entry> - <literal>{</><replaceable>m</><literal>,</><replaceable>n</><literal>}</> </entry> - <entry> a sequence of <replaceable>m</> through <replaceable>n</> - (inclusive) matches of the atom; <replaceable>m</> cannot exceed - <replaceable>n</> </entry> + <literal>{</literal><replaceable>m</replaceable><literal>,</literal><replaceable>n</replaceable><literal>}</literal> </entry> + <entry> a sequence of <replaceable>m</replaceable> through <replaceable>n</replaceable> + (inclusive) matches of the atom; <replaceable>m</replaceable> cannot exceed + <replaceable>n</replaceable> </entry> </row> <row> - <entry> <literal>*?</> </entry> - <entry> non-greedy version of <literal>*</> 
</entry> + <entry> <literal>*?</literal> </entry> + <entry> non-greedy version of <literal>*</literal> </entry> </row> <row> - <entry> <literal>+?</> </entry> - <entry> non-greedy version of <literal>+</> </entry> + <entry> <literal>+?</literal> </entry> + <entry> non-greedy version of <literal>+</literal> </entry> </row> <row> - <entry> <literal>??</> </entry> - <entry> non-greedy version of <literal>?</> </entry> + <entry> <literal>??</literal> </entry> + <entry> non-greedy version of <literal>?</literal> </entry> </row> <row> - <entry> <literal>{</><replaceable>m</><literal>}?</> </entry> - <entry> non-greedy version of <literal>{</><replaceable>m</><literal>}</> </entry> + <entry> <literal>{</literal><replaceable>m</replaceable><literal>}?</literal> </entry> + <entry> non-greedy version of <literal>{</literal><replaceable>m</replaceable><literal>}</literal> </entry> </row> <row> - <entry> <literal>{</><replaceable>m</><literal>,}?</> </entry> - <entry> non-greedy version of <literal>{</><replaceable>m</><literal>,}</> </entry> + <entry> <literal>{</literal><replaceable>m</replaceable><literal>,}?</literal> </entry> + <entry> non-greedy version of <literal>{</literal><replaceable>m</replaceable><literal>,}</literal> </entry> </row> <row> <entry> - <literal>{</><replaceable>m</><literal>,</><replaceable>n</><literal>}?</> </entry> - <entry> non-greedy version of <literal>{</><replaceable>m</><literal>,</><replaceable>n</><literal>}</> </entry> + <literal>{</literal><replaceable>m</replaceable><literal>,</literal><replaceable>n</replaceable><literal>}?</literal> </entry> + <entry> non-greedy version of <literal>{</literal><replaceable>m</replaceable><literal>,</literal><replaceable>n</replaceable><literal>}</literal> </entry> </row> </tbody> </tgroup> </table> <para> - The forms using <literal>{</><replaceable>...</><literal>}</> - are known as <firstterm>bounds</>. 
- The numbers <replaceable>m</> and <replaceable>n</> within a bound are + The forms using <literal>{</literal><replaceable>...</replaceable><literal>}</literal> + are known as <firstterm>bounds</firstterm>. + The numbers <replaceable>m</replaceable> and <replaceable>n</replaceable> within a bound are unsigned decimal integers with permissible values from 0 to 255 inclusive. </para> <para> - <firstterm>Non-greedy</> quantifiers (available in AREs only) match the - same possibilities as their corresponding normal (<firstterm>greedy</>) + <firstterm>Non-greedy</firstterm> quantifiers (available in AREs only) match the + same possibilities as their corresponding normal (<firstterm>greedy</firstterm>) counterparts, but prefer the smallest number rather than the largest number of matches. See <xref linkend="posix-matching-rules"> for more detail. @@ -4733,7 +4733,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <note> <para> A quantifier cannot immediately follow another quantifier, e.g., - <literal>**</> is invalid. + <literal>**</literal> is invalid. A quantifier cannot begin an expression or subexpression or follow <literal>^</literal> or <literal>|</literal>. 
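The greedy vs. non-greedy quantifier semantics described in the hunk above can be observed in any engine that supports both forms. A minimal sketch using Python's `re` module — an assumption made purely for illustration here; its syntax overlaps with, but is not identical to, PostgreSQL's ARE engine:

```python
import re

# Both quantifiers begin matching at the same point (the first 'b'),
# but the greedy form extends to the last reachable 'e' while the
# non-greedy form stops at the first one.
greedy = re.search(r'b.*e', 'foobarbequebaz').group()   # longest match: 'barbeque'
lazy = re.search(r'b.*?e', 'foobarbequebaz').group()    # shortest match: 'barbe'

# A bound {m,n}: two to three repetitions, greedy by default,
# so it takes three of the four available 'a's.
bounded = re.search(r'a{2,3}', 'aaaa').group()          # 'aaa'
```

The same preference rules (longest vs. shortest from the same starting point) are what the table's `*?`, `+?`, and `{m,n}?` rows describe for AREs.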
@@ -4753,40 +4753,40 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <tbody> <row> - <entry> <literal>^</> </entry> + <entry> <literal>^</literal> </entry> <entry> matches at the beginning of the string </entry> </row> <row> - <entry> <literal>$</> </entry> + <entry> <literal>$</literal> </entry> <entry> matches at the end of the string </entry> </row> <row> - <entry> <literal>(?=</><replaceable>re</><literal>)</> </entry> - <entry> <firstterm>positive lookahead</> matches at any point - where a substring matching <replaceable>re</> begins + <entry> <literal>(?=</literal><replaceable>re</replaceable><literal>)</literal> </entry> + <entry> <firstterm>positive lookahead</firstterm> matches at any point + where a substring matching <replaceable>re</replaceable> begins (AREs only) </entry> </row> <row> - <entry> <literal>(?!</><replaceable>re</><literal>)</> </entry> - <entry> <firstterm>negative lookahead</> matches at any point - where no substring matching <replaceable>re</> begins + <entry> <literal>(?!</literal><replaceable>re</replaceable><literal>)</literal> </entry> + <entry> <firstterm>negative lookahead</firstterm> matches at any point + where no substring matching <replaceable>re</replaceable> begins (AREs only) </entry> </row> <row> - <entry> <literal>(?<=</><replaceable>re</><literal>)</> </entry> - <entry> <firstterm>positive lookbehind</> matches at any point - where a substring matching <replaceable>re</> ends + <entry> <literal>(?<=</literal><replaceable>re</replaceable><literal>)</literal> </entry> + <entry> <firstterm>positive lookbehind</firstterm> matches at any point + where a substring matching <replaceable>re</replaceable> ends (AREs only) </entry> </row> <row> - <entry> <literal>(?<!</><replaceable>re</><literal>)</> </entry> - <entry> <firstterm>negative lookbehind</> matches at any point - where no substring matching <replaceable>re</> ends + <entry> 
<literal>(?<!</literal><replaceable>re</replaceable><literal>)</literal> </entry> + <entry> <firstterm>negative lookbehind</firstterm> matches at any point + where no substring matching <replaceable>re</replaceable> ends (AREs only) </entry> </row> </tbody> @@ -4795,7 +4795,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <para> Lookahead and lookbehind constraints cannot contain <firstterm>back - references</> (see <xref linkend="posix-escape-sequences">), + references</firstterm> (see <xref linkend="posix-escape-sequences">), and all parentheses within them are considered non-capturing. </para> </sect3> @@ -4808,7 +4808,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; characters enclosed in <literal>[]</literal>. It normally matches any single character from the list (but see below). If the list begins with <literal>^</literal>, it matches any single character - <emphasis>not</> from the rest of the list. + <emphasis>not</emphasis> from the rest of the list. If two characters in the list are separated by <literal>-</literal>, this is shorthand for the full range of characters between those two @@ -4853,7 +4853,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <note> <para> - <productname>PostgreSQL</> currently does not support multi-character collating + <productname>PostgreSQL</productname> currently does not support multi-character collating elements. This information describes possible future behavior. </para> </note> @@ -4861,7 +4861,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <para> Within a bracket expression, a collating element enclosed in <literal>[=</literal> and <literal>=]</literal> is an <firstterm>equivalence - class</>, standing for the sequences of characters of all collating + class</firstterm>, standing for the sequences of characters of all collating elements equivalent to that one, including itself. 
(If there are no other equivalent collating elements, the treatment is as if the enclosing delimiters were <literal>[.</literal> and @@ -4896,7 +4896,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; matching empty strings at the beginning and end of a word respectively. A word is defined as a sequence of word characters that is neither preceded nor followed by word - characters. A word character is an <literal>alnum</> character (as + characters. A word character is an <literal>alnum</literal> character (as defined by <citerefentry><refentrytitle>ctype</refentrytitle><manvolnum>3</manvolnum></citerefentry>) or an underscore. This is an extension, compatible with but not @@ -4911,44 +4911,44 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <title>Regular Expression Escapes</title> <para> - <firstterm>Escapes</> are special sequences beginning with <literal>\</> + <firstterm>Escapes</firstterm> are special sequences beginning with <literal>\</literal> followed by an alphanumeric character. Escapes come in several varieties: character entry, class shorthands, constraint escapes, and back references. - A <literal>\</> followed by an alphanumeric character but not constituting + A <literal>\</literal> followed by an alphanumeric character but not constituting a valid escape is illegal in AREs. In EREs, there are no escapes: outside a bracket expression, - a <literal>\</> followed by an alphanumeric character merely stands for + a <literal>\</literal> followed by an alphanumeric character merely stands for that character as an ordinary character, and inside a bracket expression, - <literal>\</> is an ordinary character. + <literal>\</literal> is an ordinary character. (The latter is the one actual incompatibility between EREs and AREs.) 
</para> <para> - <firstterm>Character-entry escapes</> exist to make it easier to specify + <firstterm>Character-entry escapes</firstterm> exist to make it easier to specify non-printing and other inconvenient characters in REs. They are shown in <xref linkend="posix-character-entry-escapes-table">. </para> <para> - <firstterm>Class-shorthand escapes</> provide shorthands for certain + <firstterm>Class-shorthand escapes</firstterm> provide shorthands for certain commonly-used character classes. They are shown in <xref linkend="posix-class-shorthand-escapes-table">. </para> <para> - A <firstterm>constraint escape</> is a constraint, + A <firstterm>constraint escape</firstterm> is a constraint, matching the empty string if specific conditions are met, written as an escape. They are shown in <xref linkend="posix-constraint-escapes-table">. </para> <para> - A <firstterm>back reference</> (<literal>\</><replaceable>n</>) matches the + A <firstterm>back reference</firstterm> (<literal>\</literal><replaceable>n</replaceable>) matches the same string matched by the previous parenthesized subexpression specified - by the number <replaceable>n</> + by the number <replaceable>n</replaceable> (see <xref linkend="posix-constraint-backref-table">). For example, - <literal>([bc])\1</> matches <literal>bb</> or <literal>cc</> - but not <literal>bc</> or <literal>cb</>. + <literal>([bc])\1</literal> matches <literal>bb</literal> or <literal>cc</literal> + but not <literal>bc</literal> or <literal>cb</literal>. The subexpression must entirely precede the back reference in the RE. Subexpressions are numbered in the order of their leading parentheses. Non-capturing parentheses do not define subexpressions. 
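The back-reference example in the hunk above (`([bc])\1` matching `bb` or `cc` but not `bc` or `cb`) carries over directly to other engines. A short sketch in Python's `re`, assumed here only for illustration — PostgreSQL evaluates these patterns server-side:

```python
import re

# \1 must repeat exactly the text the first capture group matched,
# so only doubled characters qualify.
pat = re.compile(r'([bc])\1')

matches = [s for s in ('bb', 'cc', 'bc', 'cb') if pat.fullmatch(s)]
```

As the documentation notes, the subexpression must precede the back reference, and non-capturing groups do not receive numbers.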
@@ -4967,122 +4967,122 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <tbody> <row> - <entry> <literal>\a</> </entry> + <entry> <literal>\a</literal> </entry> <entry> alert (bell) character, as in C </entry> </row> <row> - <entry> <literal>\b</> </entry> + <entry> <literal>\b</literal> </entry> <entry> backspace, as in C </entry> </row> <row> - <entry> <literal>\B</> </entry> - <entry> synonym for backslash (<literal>\</>) to help reduce the need for backslash + <entry> <literal>\B</literal> </entry> + <entry> synonym for backslash (<literal>\</literal>) to help reduce the need for backslash doubling </entry> </row> <row> - <entry> <literal>\c</><replaceable>X</> </entry> - <entry> (where <replaceable>X</> is any character) the character whose + <entry> <literal>\c</literal><replaceable>X</replaceable> </entry> + <entry> (where <replaceable>X</replaceable> is any character) the character whose low-order 5 bits are the same as those of - <replaceable>X</>, and whose other bits are all zero </entry> + <replaceable>X</replaceable>, and whose other bits are all zero </entry> </row> <row> - <entry> <literal>\e</> </entry> + <entry> <literal>\e</literal> </entry> <entry> the character whose collating-sequence name - is <literal>ESC</>, - or failing that, the character with octal value <literal>033</> </entry> + is <literal>ESC</literal>, + or failing that, the character with octal value <literal>033</literal> </entry> </row> <row> - <entry> <literal>\f</> </entry> + <entry> <literal>\f</literal> </entry> <entry> form feed, as in C </entry> </row> <row> - <entry> <literal>\n</> </entry> + <entry> <literal>\n</literal> </entry> <entry> newline, as in C </entry> </row> <row> - <entry> <literal>\r</> </entry> + <entry> <literal>\r</literal> </entry> <entry> carriage return, as in C </entry> </row> <row> - <entry> <literal>\t</> </entry> + <entry> <literal>\t</literal> </entry> <entry> horizontal tab, as in C </entry> </row> <row> - <entry> 
<literal>\u</><replaceable>wxyz</> </entry> - <entry> (where <replaceable>wxyz</> is exactly four hexadecimal digits) + <entry> <literal>\u</literal><replaceable>wxyz</replaceable> </entry> + <entry> (where <replaceable>wxyz</replaceable> is exactly four hexadecimal digits) the character whose hexadecimal value is - <literal>0x</><replaceable>wxyz</> + <literal>0x</literal><replaceable>wxyz</replaceable> </entry> </row> <row> - <entry> <literal>\U</><replaceable>stuvwxyz</> </entry> - <entry> (where <replaceable>stuvwxyz</> is exactly eight hexadecimal + <entry> <literal>\U</literal><replaceable>stuvwxyz</replaceable> </entry> + <entry> (where <replaceable>stuvwxyz</replaceable> is exactly eight hexadecimal digits) the character whose hexadecimal value is - <literal>0x</><replaceable>stuvwxyz</> + <literal>0x</literal><replaceable>stuvwxyz</replaceable> </entry> </row> <row> - <entry> <literal>\v</> </entry> + <entry> <literal>\v</literal> </entry> <entry> vertical tab, as in C </entry> </row> <row> - <entry> <literal>\x</><replaceable>hhh</> </entry> - <entry> (where <replaceable>hhh</> is any sequence of hexadecimal + <entry> <literal>\x</literal><replaceable>hhh</replaceable> </entry> + <entry> (where <replaceable>hhh</replaceable> is any sequence of hexadecimal digits) the character whose hexadecimal value is - <literal>0x</><replaceable>hhh</> + <literal>0x</literal><replaceable>hhh</replaceable> (a single character no matter how many hexadecimal digits are used) </entry> </row> <row> - <entry> <literal>\0</> </entry> - <entry> the character whose value is <literal>0</> (the null byte)</entry> + <entry> <literal>\0</literal> </entry> + <entry> the character whose value is <literal>0</literal> (the null byte)</entry> </row> <row> - <entry> <literal>\</><replaceable>xy</> </entry> - <entry> (where <replaceable>xy</> is exactly two octal digits, - and is not a <firstterm>back reference</>) + <entry> <literal>\</literal><replaceable>xy</replaceable> </entry> + 
<entry> (where <replaceable>xy</replaceable> is exactly two octal digits, + and is not a <firstterm>back reference</firstterm>) the character whose octal value is - <literal>0</><replaceable>xy</> </entry> + <literal>0</literal><replaceable>xy</replaceable> </entry> </row> <row> - <entry> <literal>\</><replaceable>xyz</> </entry> - <entry> (where <replaceable>xyz</> is exactly three octal digits, - and is not a <firstterm>back reference</>) + <entry> <literal>\</literal><replaceable>xyz</replaceable> </entry> + <entry> (where <replaceable>xyz</replaceable> is exactly three octal digits, + and is not a <firstterm>back reference</firstterm>) the character whose octal value is - <literal>0</><replaceable>xyz</> </entry> + <literal>0</literal><replaceable>xyz</replaceable> </entry> </row> </tbody> </tgroup> </table> <para> - Hexadecimal digits are <literal>0</>-<literal>9</>, - <literal>a</>-<literal>f</>, and <literal>A</>-<literal>F</>. - Octal digits are <literal>0</>-<literal>7</>. + Hexadecimal digits are <literal>0</literal>-<literal>9</literal>, + <literal>a</literal>-<literal>f</literal>, and <literal>A</literal>-<literal>F</literal>. + Octal digits are <literal>0</literal>-<literal>7</literal>. </para> <para> Numeric character-entry escapes specifying values outside the ASCII range (0-127) have meanings dependent on the database encoding. When the encoding is UTF-8, escape values are equivalent to Unicode code points, - for example <literal>\u1234</> means the character <literal>U+1234</>. + for example <literal>\u1234</literal> means the character <literal>U+1234</literal>. For other multibyte encodings, character-entry escapes usually just specify the concatenation of the byte values for the character. If the escape value does not correspond to any legal character in the database @@ -5091,8 +5091,8 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <para> The character-entry escapes are always taken as ordinary characters. 
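The hexadecimal character-entry escapes tabulated above denote a single character no matter how the digits are written. Python's `re` supports a similar escape and can illustrate the idea, with the caveat (an assumption worth flagging) that Python's `\x` form requires exactly two hex digits, while the ARE `\x`<replaceable>hhh</replaceable> form accepts a variable number.

```python
import re

# \x41 is the character whose hexadecimal value is 0x41, i.e. "A".
assert re.fullmatch(r"\x41", "A")

# The escape denotes one character; it does not match the literal
# four-character string "\x41".
assert re.fullmatch(r"\x41", r"\x41") is None
print("ok")
```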
- For example, <literal>\135</> is <literal>]</> in ASCII, but - <literal>\135</> does not terminate a bracket expression. + For example, <literal>\135</literal> is <literal>]</literal> in ASCII, but + <literal>\135</literal> does not terminate a bracket expression. </para> <table id="posix-class-shorthand-escapes-table"> @@ -5108,34 +5108,34 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <tbody> <row> - <entry> <literal>\d</> </entry> - <entry> <literal>[[:digit:]]</> </entry> + <entry> <literal>\d</literal> </entry> + <entry> <literal>[[:digit:]]</literal> </entry> </row> <row> - <entry> <literal>\s</> </entry> - <entry> <literal>[[:space:]]</> </entry> + <entry> <literal>\s</literal> </entry> + <entry> <literal>[[:space:]]</literal> </entry> </row> <row> - <entry> <literal>\w</> </entry> - <entry> <literal>[[:alnum:]_]</> + <entry> <literal>\w</literal> </entry> + <entry> <literal>[[:alnum:]_]</literal> (note underscore is included) </entry> </row> <row> - <entry> <literal>\D</> </entry> - <entry> <literal>[^[:digit:]]</> </entry> + <entry> <literal>\D</literal> </entry> + <entry> <literal>[^[:digit:]]</literal> </entry> </row> <row> - <entry> <literal>\S</> </entry> - <entry> <literal>[^[:space:]]</> </entry> + <entry> <literal>\S</literal> </entry> + <entry> <literal>[^[:space:]]</literal> </entry> </row> <row> - <entry> <literal>\W</> </entry> - <entry> <literal>[^[:alnum:]_]</> + <entry> <literal>\W</literal> </entry> + <entry> <literal>[^[:alnum:]_]</literal> (note underscore is included) </entry> </row> </tbody> @@ -5143,13 +5143,13 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; </table> <para> - Within bracket expressions, <literal>\d</>, <literal>\s</>, - and <literal>\w</> lose their outer brackets, - and <literal>\D</>, <literal>\S</>, and <literal>\W</> are illegal. - (So, for example, <literal>[a-c\d]</> is equivalent to - <literal>[a-c[:digit:]]</>. 
- Also, <literal>[a-c\D]</>, which is equivalent to - <literal>[a-c^[:digit:]]</>, is illegal.) + Within bracket expressions, <literal>\d</literal>, <literal>\s</literal>, + and <literal>\w</literal> lose their outer brackets, + and <literal>\D</literal>, <literal>\S</literal>, and <literal>\W</literal> are illegal. + (So, for example, <literal>[a-c\d]</literal> is equivalent to + <literal>[a-c[:digit:]]</literal>. + Also, <literal>[a-c\D]</literal>, which is equivalent to + <literal>[a-c^[:digit:]]</literal>, is illegal.) </para> <table id="posix-constraint-escapes-table"> @@ -5165,38 +5165,38 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <tbody> <row> - <entry> <literal>\A</> </entry> + <entry> <literal>\A</literal> </entry> <entry> matches only at the beginning of the string (see <xref linkend="posix-matching-rules"> for how this differs from - <literal>^</>) </entry> + <literal>^</literal>) </entry> </row> <row> - <entry> <literal>\m</> </entry> + <entry> <literal>\m</literal> </entry> <entry> matches only at the beginning of a word </entry> </row> <row> - <entry> <literal>\M</> </entry> + <entry> <literal>\M</literal> </entry> <entry> matches only at the end of a word </entry> </row> <row> - <entry> <literal>\y</> </entry> + <entry> <literal>\y</literal> </entry> <entry> matches only at the beginning or end of a word </entry> </row> <row> - <entry> <literal>\Y</> </entry> + <entry> <literal>\Y</literal> </entry> <entry> matches only at a point that is not the beginning or end of a word </entry> </row> <row> - <entry> <literal>\Z</> </entry> + <entry> <literal>\Z</literal> </entry> <entry> matches only at the end of the string (see <xref linkend="posix-matching-rules"> for how this differs from - <literal>$</>) </entry> + <literal>$</literal>) </entry> </row> </tbody> </tgroup> @@ -5204,7 +5204,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <para> A word is defined as in the specification of 
- <literal>[[:<:]]</> and <literal>[[:>:]]</> above. + <literal>[[:<:]]</literal> and <literal>[[:>:]]</literal> above. Constraint escapes are illegal within bracket expressions. </para> @@ -5221,18 +5221,18 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <tbody> <row> - <entry> <literal>\</><replaceable>m</> </entry> - <entry> (where <replaceable>m</> is a nonzero digit) - a back reference to the <replaceable>m</>'th subexpression </entry> + <entry> <literal>\</literal><replaceable>m</replaceable> </entry> + <entry> (where <replaceable>m</replaceable> is a nonzero digit) + a back reference to the <replaceable>m</replaceable>'th subexpression </entry> </row> <row> - <entry> <literal>\</><replaceable>mnn</> </entry> - <entry> (where <replaceable>m</> is a nonzero digit, and - <replaceable>nn</> is some more digits, and the decimal value - <replaceable>mnn</> is not greater than the number of closing capturing + <entry> <literal>\</literal><replaceable>mnn</replaceable> </entry> + <entry> (where <replaceable>m</replaceable> is a nonzero digit, and + <replaceable>nn</replaceable> is some more digits, and the decimal value + <replaceable>mnn</replaceable> is not greater than the number of closing capturing parentheses seen so far) - a back reference to the <replaceable>mnn</>'th subexpression </entry> + a back reference to the <replaceable>mnn</replaceable>'th subexpression </entry> </row> </tbody> </tgroup> @@ -5263,29 +5263,29 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; </para> <para> - An RE can begin with one of two special <firstterm>director</> prefixes. - If an RE begins with <literal>***:</>, + An RE can begin with one of two special <firstterm>director</firstterm> prefixes. + If an RE begins with <literal>***:</literal>, the rest of the RE is taken as an ARE. 
(This normally has no effect in - <productname>PostgreSQL</>, since REs are assumed to be AREs; + <productname>PostgreSQL</productname>, since REs are assumed to be AREs; but it does have an effect if ERE or BRE mode had been specified by - the <replaceable>flags</> parameter to a regex function.) - If an RE begins with <literal>***=</>, + the <replaceable>flags</replaceable> parameter to a regex function.) + If an RE begins with <literal>***=</literal>, the rest of the RE is taken to be a literal string, with all characters considered ordinary characters. </para> <para> - An ARE can begin with <firstterm>embedded options</>: - a sequence <literal>(?</><replaceable>xyz</><literal>)</> - (where <replaceable>xyz</> is one or more alphabetic characters) + An ARE can begin with <firstterm>embedded options</firstterm>: + a sequence <literal>(?</literal><replaceable>xyz</replaceable><literal>)</literal> + (where <replaceable>xyz</replaceable> is one or more alphabetic characters) specifies options affecting the rest of the RE. These options override any previously determined options — in particular, they can override the case-sensitivity behavior implied by - a regex operator, or the <replaceable>flags</> parameter to a regex + a regex operator, or the <replaceable>flags</replaceable> parameter to a regex function. The available option letters are shown in <xref linkend="posix-embedded-options-table">. - Note that these same option letters are used in the <replaceable>flags</> + Note that these same option letters are used in the <replaceable>flags</replaceable> parameters of regex functions. 
</para> @@ -5302,67 +5302,67 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <tbody> <row> - <entry> <literal>b</> </entry> + <entry> <literal>b</literal> </entry> <entry> rest of RE is a BRE </entry> </row> <row> - <entry> <literal>c</> </entry> + <entry> <literal>c</literal> </entry> <entry> case-sensitive matching (overrides operator type) </entry> </row> <row> - <entry> <literal>e</> </entry> + <entry> <literal>e</literal> </entry> <entry> rest of RE is an ERE </entry> </row> <row> - <entry> <literal>i</> </entry> + <entry> <literal>i</literal> </entry> <entry> case-insensitive matching (see <xref linkend="posix-matching-rules">) (overrides operator type) </entry> </row> <row> - <entry> <literal>m</> </entry> - <entry> historical synonym for <literal>n</> </entry> + <entry> <literal>m</literal> </entry> + <entry> historical synonym for <literal>n</literal> </entry> </row> <row> - <entry> <literal>n</> </entry> + <entry> <literal>n</literal> </entry> <entry> newline-sensitive matching (see <xref linkend="posix-matching-rules">) </entry> </row> <row> - <entry> <literal>p</> </entry> + <entry> <literal>p</literal> </entry> <entry> partial newline-sensitive matching (see <xref linkend="posix-matching-rules">) </entry> </row> <row> - <entry> <literal>q</> </entry> - <entry> rest of RE is a literal (<quote>quoted</>) string, all ordinary + <entry> <literal>q</literal> </entry> + <entry> rest of RE is a literal (<quote>quoted</quote>) string, all ordinary characters </entry> </row> <row> - <entry> <literal>s</> </entry> + <entry> <literal>s</literal> </entry> <entry> non-newline-sensitive matching (default) </entry> </row> <row> - <entry> <literal>t</> </entry> + <entry> <literal>t</literal> </entry> <entry> tight syntax (default; see below) </entry> </row> <row> - <entry> <literal>w</> </entry> - <entry> inverse partial newline-sensitive (<quote>weird</>) matching + <entry> <literal>w</literal> </entry> + <entry> inverse partial 
newline-sensitive (<quote>weird</quote>) matching (see <xref linkend="posix-matching-rules">) </entry> </row> <row> - <entry> <literal>x</> </entry> + <entry> <literal>x</literal> </entry> <entry> expanded syntax (see below) </entry> </row> </tbody> @@ -5370,18 +5370,18 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; </table> <para> - Embedded options take effect at the <literal>)</> terminating the sequence. + Embedded options take effect at the <literal>)</literal> terminating the sequence. They can appear only at the start of an ARE (after the - <literal>***:</> director if any). + <literal>***:</literal> director if any). </para> <para> - In addition to the usual (<firstterm>tight</>) RE syntax, in which all - characters are significant, there is an <firstterm>expanded</> syntax, - available by specifying the embedded <literal>x</> option. + In addition to the usual (<firstterm>tight</firstterm>) RE syntax, in which all + characters are significant, there is an <firstterm>expanded</firstterm> syntax, + available by specifying the embedded <literal>x</literal> option. In the expanded syntax, white-space characters in the RE are ignored, as are - all characters between a <literal>#</> + all characters between a <literal>#</literal> and the following newline (or the end of the RE). This permits paragraphing and commenting a complex RE. 
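The expanded (`x`) syntax described above, with its three exceptions, has a close analogue in Python's `re.VERBOSE` flag: whitespace is ignored and `#` starts a comment, unless either is backslash-escaped or appears inside a bracket expression. A minimal sketch using that analogue (not PostgreSQL's own engine):

```python
import re

# Expanded syntax: whitespace is ignored and "#" begins a comment,
# except when escaped with "\" or placed inside a bracket expression.
phone = re.compile(r"""
    \d{3}      # three-digit prefix
    [-\ ]      # separator; the space inside the brackets is retained
    \d{4}      # four-digit line number
""", re.VERBOSE)

print(bool(phone.fullmatch("555-0123")))  # True
print(bool(phone.fullmatch("555 0123")))  # True
print(bool(phone.fullmatch("5550123")))   # False: separator required
```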
There are three exceptions to that basic rule: @@ -5389,41 +5389,41 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <itemizedlist> <listitem> <para> - a white-space character or <literal>#</> preceded by <literal>\</> is + a white-space character or <literal>#</literal> preceded by <literal>\</literal> is retained </para> </listitem> <listitem> <para> - white space or <literal>#</> within a bracket expression is retained + white space or <literal>#</literal> within a bracket expression is retained </para> </listitem> <listitem> <para> white space and comments cannot appear within multi-character symbols, - such as <literal>(?:</> + such as <literal>(?:</literal> </para> </listitem> </itemizedlist> For this purpose, white-space characters are blank, tab, newline, and - any character that belongs to the <replaceable>space</> character class. + any character that belongs to the <replaceable>space</replaceable> character class. </para> <para> Finally, in an ARE, outside bracket expressions, the sequence - <literal>(?#</><replaceable>ttt</><literal>)</> - (where <replaceable>ttt</> is any text not containing a <literal>)</>) + <literal>(?#</literal><replaceable>ttt</replaceable><literal>)</literal> + (where <replaceable>ttt</replaceable> is any text not containing a <literal>)</literal>) is a comment, completely ignored. Again, this is not allowed between the characters of - multi-character symbols, like <literal>(?:</>. + multi-character symbols, like <literal>(?:</literal>. Such comments are more a historical artifact than a useful facility, and their use is deprecated; use the expanded syntax instead. </para> <para> - <emphasis>None</> of these metasyntax extensions is available if - an initial <literal>***=</> director + <emphasis>None</emphasis> of these metasyntax extensions is available if + an initial <literal>***=</literal> director has specified that the user's input be treated as a literal string rather than as an RE. 
</para> @@ -5437,8 +5437,8 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; string, the RE matches the one starting earliest in the string. If the RE could match more than one substring starting at that point, either the longest possible match or the shortest possible match will - be taken, depending on whether the RE is <firstterm>greedy</> or - <firstterm>non-greedy</>. + be taken, depending on whether the RE is <firstterm>greedy</firstterm> or + <firstterm>non-greedy</firstterm>. </para> <para> @@ -5458,39 +5458,39 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; <listitem> <para> A quantified atom with a fixed-repetition quantifier - (<literal>{</><replaceable>m</><literal>}</> + (<literal>{</literal><replaceable>m</replaceable><literal>}</literal> or - <literal>{</><replaceable>m</><literal>}?</>) + <literal>{</literal><replaceable>m</replaceable><literal>}?</literal>) has the same greediness (possibly none) as the atom itself. </para> </listitem> <listitem> <para> A quantified atom with other normal quantifiers (including - <literal>{</><replaceable>m</><literal>,</><replaceable>n</><literal>}</> - with <replaceable>m</> equal to <replaceable>n</>) + <literal>{</literal><replaceable>m</replaceable><literal>,</literal><replaceable>n</replaceable><literal>}</literal> + with <replaceable>m</replaceable> equal to <replaceable>n</replaceable>) is greedy (prefers longest match). </para> </listitem> <listitem> <para> A quantified atom with a non-greedy quantifier (including - <literal>{</><replaceable>m</><literal>,</><replaceable>n</><literal>}?</> - with <replaceable>m</> equal to <replaceable>n</>) + <literal>{</literal><replaceable>m</replaceable><literal>,</literal><replaceable>n</replaceable><literal>}?</literal> + with <replaceable>m</replaceable> equal to <replaceable>n</replaceable>) is non-greedy (prefers shortest match). 
</para> </listitem> <listitem> <para> A branch — that is, an RE that has no top-level - <literal>|</> operator — has the same greediness as the first + <literal>|</literal> operator — has the same greediness as the first quantified atom in it that has a greediness attribute. </para> </listitem> <listitem> <para> An RE consisting of two or more branches connected by the - <literal>|</> operator is always greedy. + <literal>|</literal> operator is always greedy. </para> </listitem> </itemizedlist> @@ -5501,7 +5501,7 @@ SELECT foo FROM regexp_split_to_table('the quick brown fox', E'\\s*') AS foo; quantified atoms, but with branches and entire REs that contain quantified atoms. What that means is that the matching is done in such a way that the branch, or whole RE, matches the longest or shortest possible - substring <emphasis>as a whole</>. Once the length of the entire match + substring <emphasis>as a whole</emphasis>. Once the length of the entire match is determined, the part of it that matches any particular subexpression is determined on the basis of the greediness attribute of that subexpression, with subexpressions starting earlier in the RE taking @@ -5516,16 +5516,16 @@ SELECT SUBSTRING('XY1234Z', 'Y*([0-9]{1,3})'); SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})'); <lineannotation>Result: </lineannotation><computeroutput>1</computeroutput> </screen> - In the first case, the RE as a whole is greedy because <literal>Y*</> - is greedy. It can match beginning at the <literal>Y</>, and it matches - the longest possible string starting there, i.e., <literal>Y123</>. - The output is the parenthesized part of that, or <literal>123</>. - In the second case, the RE as a whole is non-greedy because <literal>Y*?</> - is non-greedy. It can match beginning at the <literal>Y</>, and it matches - the shortest possible string starting there, i.e., <literal>Y1</>. 
- The subexpression <literal>[0-9]{1,3}</> is greedy but it cannot change + In the first case, the RE as a whole is greedy because <literal>Y*</literal> + is greedy. It can match beginning at the <literal>Y</literal>, and it matches + the longest possible string starting there, i.e., <literal>Y123</literal>. + The output is the parenthesized part of that, or <literal>123</literal>. + In the second case, the RE as a whole is non-greedy because <literal>Y*?</literal> + is non-greedy. It can match beginning at the <literal>Y</literal>, and it matches + the shortest possible string starting there, i.e., <literal>Y1</literal>. + The subexpression <literal>[0-9]{1,3}</literal> is greedy but it cannot change the decision as to the overall match length; so it is forced to match - just <literal>1</>. + just <literal>1</literal>. </para> <para> @@ -5533,11 +5533,11 @@ SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})'); the total match length is either as long as possible or as short as possible, according to the attribute assigned to the whole RE. The attributes assigned to the subexpressions only affect how much of that - match they are allowed to <quote>eat</> relative to each other. + match they are allowed to <quote>eat</quote> relative to each other. </para> <para> - The quantifiers <literal>{1,1}</> and <literal>{1,1}?</> + The quantifiers <literal>{1,1}</literal> and <literal>{1,1}?</literal> can be used to force greediness or non-greediness, respectively, on a subexpression or a whole RE. 
This is useful when you need the whole RE to have a greediness attribute @@ -5549,8 +5549,8 @@ SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})'); SELECT regexp_match('abc01234xyz', '(.*)(\d+)(.*)'); <lineannotation>Result: </lineannotation><computeroutput>{abc0123,4,xyz}</computeroutput> </screen> - That didn't work: the first <literal>.*</> is greedy so - it <quote>eats</> as much as it can, leaving the <literal>\d+</> to + That didn't work: the first <literal>.*</literal> is greedy so + it <quote>eats</quote> as much as it can, leaving the <literal>\d+</literal> to match at the last possible place, the last digit. We might try to fix that by making it non-greedy: <screen> @@ -5573,14 +5573,14 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); match lengths are measured in characters, not collating elements. An empty string is considered longer than no match at all. For example: - <literal>bb*</> - matches the three middle characters of <literal>abbbc</>; - <literal>(week|wee)(night|knights)</> - matches all ten characters of <literal>weeknights</>; - when <literal>(.*).*</> - is matched against <literal>abc</> the parenthesized subexpression + <literal>bb*</literal> + matches the three middle characters of <literal>abbbc</literal>; + <literal>(week|wee)(night|knights)</literal> + matches all ten characters of <literal>weeknights</literal>; + when <literal>(.*).*</literal> + is matched against <literal>abc</literal> the parenthesized subexpression matches all three characters; and when - <literal>(a*)*</> is matched against <literal>bc</> + <literal>(a*)*</literal> is matched against <literal>bc</literal> both the whole RE and the parenthesized subexpression match an empty string. 
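The `regexp_match('abc01234xyz', ...)` results shown above can be reproduced with Python's `re` module for these particular patterns. This is a hedged analogue: Python's backtracking engine happens to divide the match the same way here, but POSIX AREs decide the overall match length first (longest or shortest for the RE as a whole), which Perl-style engines do not, so the two disagree on other inputs.

```python
import re

# The first greedy .* eats as much as it can while still letting the
# rest match, leaving \d+ only the final digit -- the same division
# PostgreSQL reports as {abc0123,4,xyz}.
greedy = re.fullmatch(r"(.*)(\d+)(.*)", "abc01234xyz")
print(greedy.groups())     # ('abc0123', '4', 'xyz')

# Making the first quantifier non-greedy lets \d+ claim all the
# digits, matching PostgreSQL's {abc,01234,xyz}.
lazy = re.fullmatch(r"(.*?)(\d+)(.*)", "abc01234xyz")
print(lazy.groups())       # ('abc', '01234', 'xyz')
```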
</para> @@ -5592,38 +5592,38 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); When an alphabetic that exists in multiple cases appears as an ordinary character outside a bracket expression, it is effectively transformed into a bracket expression containing both cases, - e.g., <literal>x</> becomes <literal>[xX]</>. + e.g., <literal>x</literal> becomes <literal>[xX]</literal>. When it appears inside a bracket expression, all case counterparts of it are added to the bracket expression, e.g., - <literal>[x]</> becomes <literal>[xX]</> - and <literal>[^x]</> becomes <literal>[^xX]</>. + <literal>[x]</literal> becomes <literal>[xX]</literal> + and <literal>[^x]</literal> becomes <literal>[^xX]</literal>. </para> <para> - If newline-sensitive matching is specified, <literal>.</> - and bracket expressions using <literal>^</> + If newline-sensitive matching is specified, <literal>.</literal> + and bracket expressions using <literal>^</literal> will never match the newline character (so that matches will never cross newlines unless the RE explicitly arranges it) - and <literal>^</> and <literal>$</> + and <literal>^</literal> and <literal>$</literal> will match the empty string after and before a newline respectively, in addition to matching at beginning and end of string respectively. - But the ARE escapes <literal>\A</> and <literal>\Z</> - continue to match beginning or end of string <emphasis>only</>. + But the ARE escapes <literal>\A</literal> and <literal>\Z</literal> + continue to match beginning or end of string <emphasis>only</emphasis>. </para> <para> If partial newline-sensitive matching is specified, - this affects <literal>.</> and bracket expressions - as with newline-sensitive matching, but not <literal>^</> - and <literal>$</>. + this affects <literal>.</literal> and bracket expressions + as with newline-sensitive matching, but not <literal>^</literal> + and <literal>$</literal>. 
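The case-insensitivity rule above (an ordinary `x` effectively becomes `[xX]`, and `[^x]` becomes `[^xX]`) is easy to observe in any engine; Python is shown as an assumed analogue. Note one default that differs between the engines, worth keeping in mind when translating patterns: PostgreSQL AREs let `.` match newline unless newline-sensitive matching is selected, whereas Python's `.` never matches newline unless `re.DOTALL` is given.

```python
import re

# Case-insensitive matching treats each alphabetic as a bracket
# expression containing both cases: "x" acts like "[xX]", and the
# negated class "[^x]" acts like "[^xX]", so it rejects "X" too.
assert re.fullmatch("x", "X", re.IGNORECASE)
assert re.fullmatch("[^x]", "X", re.IGNORECASE) is None

# Default newline handling differs from PostgreSQL: here "." needs
# re.DOTALL before it will cross a newline.
assert re.fullmatch("a.b", "a\nb") is None
assert re.fullmatch("a.b", "a\nb", re.DOTALL)
print("ok")
```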
</para> <para> If inverse partial newline-sensitive matching is specified, - this affects <literal>^</> and <literal>$</> - as with newline-sensitive matching, but not <literal>.</> + this affects <literal>^</literal> and <literal>$</literal> + as with newline-sensitive matching, but not <literal>.</literal> and bracket expressions. This isn't very useful but is provided for symmetry. </para> @@ -5642,18 +5642,18 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <para> The only feature of AREs that is actually incompatible with - POSIX EREs is that <literal>\</> does not lose its special + POSIX EREs is that <literal>\</literal> does not lose its special significance inside bracket expressions. All other ARE features use syntax which is illegal or has undefined or unspecified effects in POSIX EREs; - the <literal>***</> syntax of directors likewise is outside the POSIX + the <literal>***</literal> syntax of directors likewise is outside the POSIX syntax for both BREs and EREs. </para> <para> Many of the ARE extensions are borrowed from Perl, but some have been changed to clean them up, and a few Perl extensions are not present. 
- Incompatibilities of note include <literal>\b</>, <literal>\B</>, + Incompatibilities of note include <literal>\b</literal>, <literal>\B</literal>, the lack of special treatment for a trailing newline, the addition of complemented bracket expressions to the things affected by newline-sensitive matching, @@ -5664,12 +5664,12 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <para> Two significant incompatibilities exist between AREs and the ERE syntax - recognized by pre-7.4 releases of <productname>PostgreSQL</>: + recognized by pre-7.4 releases of <productname>PostgreSQL</productname>: <itemizedlist> <listitem> <para> - In AREs, <literal>\</> followed by an alphanumeric character is either + In AREs, <literal>\</literal> followed by an alphanumeric character is either an escape or an error, while in previous releases, it was just another way of writing the alphanumeric. This should not be much of a problem because there was no reason to @@ -5678,9 +5678,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); </listitem> <listitem> <para> - In AREs, <literal>\</> remains a special character within - <literal>[]</>, so a literal <literal>\</> within a bracket - expression must be written <literal>\\</>. + In AREs, <literal>\</literal> remains a special character within + <literal>[]</literal>, so a literal <literal>\</literal> within a bracket + expression must be written <literal>\\</literal>. </para> </listitem> </itemizedlist> @@ -5692,27 +5692,27 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <para> BREs differ from EREs in several respects. - In BREs, <literal>|</>, <literal>+</>, and <literal>?</> + In BREs, <literal>|</literal>, <literal>+</literal>, and <literal>?</literal> are ordinary characters and there is no equivalent for their functionality. 
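Of the Perl incompatibilities noted above, `\b` is the clearest: in Perl-derived engines (Python shown as an assumed analogue) `\b` is a word-boundary constraint, whereas in a PostgreSQL ARE `\b` is the backspace character and word boundaries are written `\y`.

```python
import re

# In Perl-derived engines, \b asserts a word boundary...
assert re.search(r"\bcat\b", "the cat sat")
assert re.search(r"\bcat\b", "concatenate") is None

# ...whereas in a PostgreSQL ARE that constraint is spelled \y, and
# \b instead denotes the backspace character, as in C.  Python's
# plain *string* escape \b shows the character an ARE would match:
assert "\b" == chr(8)
print("ok")
```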
The delimiters for bounds are - <literal>\{</> and <literal>\}</>, - with <literal>{</> and <literal>}</> + <literal>\{</literal> and <literal>\}</literal>, + with <literal>{</literal> and <literal>}</literal> by themselves ordinary characters. The parentheses for nested subexpressions are - <literal>\(</> and <literal>\)</>, - with <literal>(</> and <literal>)</> by themselves ordinary characters. - <literal>^</> is an ordinary character except at the beginning of the + <literal>\(</literal> and <literal>\)</literal>, + with <literal>(</literal> and <literal>)</literal> by themselves ordinary characters. + <literal>^</literal> is an ordinary character except at the beginning of the RE or the beginning of a parenthesized subexpression, - <literal>$</> is an ordinary character except at the end of the + <literal>$</literal> is an ordinary character except at the end of the RE or the end of a parenthesized subexpression, - and <literal>*</> is an ordinary character if it appears at the beginning + and <literal>*</literal> is an ordinary character if it appears at the beginning of the RE or the beginning of a parenthesized subexpression - (after a possible leading <literal>^</>). + (after a possible leading <literal>^</literal>). Finally, single-digit back references are available, and - <literal>\<</> and <literal>\></> + <literal>\<</literal> and <literal>\></literal> are synonyms for - <literal>[[:<:]]</> and <literal>[[:>:]]</> + <literal>[[:<:]]</literal> and <literal>[[:>:]]</literal> respectively; no other escapes are available in BREs. </para> </sect3> @@ -5839,13 +5839,13 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); exist to handle input formats that cannot be converted by simple casting. For most standard date/time formats, simply casting the source string to the required data type works, and is much easier. 
- Similarly, <function>to_number</> is unnecessary for standard numeric + Similarly, <function>to_number</function> is unnecessary for standard numeric representations. </para> </tip> <para> - In a <function>to_char</> output template string, there are certain + In a <function>to_char</function> output template string, there are certain patterns that are recognized and replaced with appropriately-formatted data based on the given value. Any text that is not a template pattern is simply copied verbatim. Similarly, in an input template string (for the @@ -6022,11 +6022,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); </row> <row> <entry><literal>D</literal></entry> - <entry>day of the week, Sunday (<literal>1</>) to Saturday (<literal>7</>)</entry> + <entry>day of the week, Sunday (<literal>1</literal>) to Saturday (<literal>7</literal>)</entry> </row> <row> <entry><literal>ID</literal></entry> - <entry>ISO 8601 day of the week, Monday (<literal>1</>) to Sunday (<literal>7</>)</entry> + <entry>ISO 8601 day of the week, Monday (<literal>1</literal>) to Sunday (<literal>7</literal>)</entry> </row> <row> <entry><literal>W</literal></entry> @@ -6063,17 +6063,17 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <row> <entry><literal>TZ</literal></entry> <entry>upper case time-zone abbreviation - (only supported in <function>to_char</>)</entry> + (only supported in <function>to_char</function>)</entry> </row> <row> <entry><literal>tz</literal></entry> <entry>lower case time-zone abbreviation - (only supported in <function>to_char</>)</entry> + (only supported in <function>to_char</function>)</entry> </row> <row> <entry><literal>OF</literal></entry> <entry>time-zone offset from UTC - (only supported in <function>to_char</>)</entry> + (only supported in <function>to_char</function>)</entry> </row> </tbody> </tgroup> @@ -6107,12 +6107,12 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <row> <entry><literal>TH</literal> 
suffix</entry> <entry>upper case ordinal number suffix</entry> - <entry><literal>DDTH</literal>, e.g., <literal>12TH</></entry> + <entry><literal>DDTH</literal>, e.g., <literal>12TH</literal></entry> </row> <row> <entry><literal>th</literal> suffix</entry> <entry>lower case ordinal number suffix</entry> - <entry><literal>DDth</literal>, e.g., <literal>12th</></entry> + <entry><literal>DDth</literal>, e.g., <literal>12th</literal></entry> </row> <row> <entry><literal>FX</literal> prefix</entry> @@ -6153,7 +6153,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <listitem> <para> <literal>TM</literal> does not include trailing blanks. - <function>to_timestamp</> and <function>to_date</> ignore + <function>to_timestamp</function> and <function>to_date</function> ignore the <literal>TM</literal> modifier. </para> </listitem> @@ -6179,9 +6179,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); even if it contains pattern key words. For example, in <literal>'"Hello Year "YYYY'</literal>, the <literal>YYYY</literal> will be replaced by the year data, but the single <literal>Y</literal> in <literal>Year</literal> - will not be. In <function>to_date</>, <function>to_number</>, - and <function>to_timestamp</>, double-quoted strings skip the number of - input characters contained in the string, e.g. <literal>"XX"</> + will not be. In <function>to_date</function>, <function>to_number</function>, + and <function>to_timestamp</function>, double-quoted strings skip the number of + input characters contained in the string, e.g. <literal>"XX"</literal> skips two input characters. </para> </listitem> @@ -6198,9 +6198,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <para> In <function>to_timestamp</function> and <function>to_date</function>, if the year format specification is less than four digits, e.g. 
- <literal>YYY</>, and the supplied year is less than four digits, + <literal>YYY</literal>, and the supplied year is less than four digits, the year will be adjusted to be nearest to the year 2020, e.g. - <literal>95</> becomes 1995. + <literal>95</literal> becomes 1995. </para> </listitem> @@ -6269,7 +6269,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); Attempting to enter a date using a mixture of ISO 8601 week-numbering fields and Gregorian date fields is nonsensical, and will cause an error. In the context of an ISO 8601 week-numbering year, the - concept of a <quote>month</> or <quote>day of month</> has no + concept of a <quote>month</quote> or <quote>day of month</quote> has no meaning. In the context of a Gregorian year, the ISO week has no meaning. </para> @@ -6278,8 +6278,8 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); While <function>to_date</function> will reject a mixture of Gregorian and ISO week-numbering date fields, <function>to_char</function> will not, since output format - specifications like <literal>YYYY-MM-DD (IYYY-IDDD)</> can be - useful. But avoid writing something like <literal>IYYY-MM-DD</>; + specifications like <literal>YYYY-MM-DD (IYYY-IDDD)</literal> can be + useful. But avoid writing something like <literal>IYYY-MM-DD</literal>; that would yield surprising results near the start of the year. (See <xref linkend="functions-datetime-extract"> for more information.) 
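The nearest-to-2020 adjustment for short year fields described above can be sketched as follows; the function name and the candidate-century approach are hypothetical, chosen only to mirror the documented rule.

```python
def adjust_short_year(yy):
    # Hypothetical sketch of the documented rule for years supplied with
    # fewer than four digits: pick the century that places the resulting
    # year nearest to 2020.
    candidates = (1900 + yy, 2000 + yy, 2100 + yy)
    return min(candidates, key=lambda y: abs(y - 2020))

print(adjust_short_year(95))  # 1995, matching the example in the docs
```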
@@ -6323,11 +6323,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <listitem> <para> - <function>to_char(interval)</function> formats <literal>HH</> and - <literal>HH12</> as shown on a 12-hour clock, for example zero hours - and 36 hours both output as <literal>12</>, while <literal>HH24</> + <function>to_char(interval)</function> formats <literal>HH</literal> and + <literal>HH12</literal> as shown on a 12-hour clock, for example zero hours + and 36 hours both output as <literal>12</literal>, while <literal>HH24</literal> outputs the full hour value, which can exceed 23 in - an <type>interval</> value. + an <type>interval</type> value. </para> </listitem> @@ -6423,19 +6423,19 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <itemizedlist> <listitem> <para> - <literal>0</> specifies a digit position that will always be printed, - even if it contains a leading/trailing zero. <literal>9</> also + <literal>0</literal> specifies a digit position that will always be printed, + even if it contains a leading/trailing zero. <literal>9</literal> also specifies a digit position, but if it is a leading zero then it will be replaced by a space, while if it is a trailing zero and fill mode - is specified then it will be deleted. (For <function>to_number()</>, + is specified then it will be deleted. (For <function>to_number()</function>, these two pattern characters are equivalent.) </para> </listitem> <listitem> <para> - The pattern characters <literal>S</>, <literal>L</>, <literal>D</>, - and <literal>G</> represent the sign, currency symbol, decimal point, + The pattern characters <literal>S</literal>, <literal>L</literal>, <literal>D</literal>, + and <literal>G</literal> represent the sign, currency symbol, decimal point, and thousands separator characters defined by the current locale (see <xref linkend="guc-lc-monetary"> and <xref linkend="guc-lc-numeric">). 
The pattern characters period @@ -6447,9 +6447,9 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <listitem> <para> If no explicit provision is made for a sign - in <function>to_char()</>'s pattern, one column will be reserved for + in <function>to_char()</function>'s pattern, one column will be reserved for the sign, and it will be anchored to (appear just left of) the - number. If <literal>S</> appears just left of some <literal>9</>'s, + number. If <literal>S</literal> appears just left of some <literal>9</literal>'s, it will likewise be anchored to the number. </para> </listitem> @@ -6742,7 +6742,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); inputs actually come in two variants: one that takes <type>time with time zone</type> or <type>timestamp with time zone</type>, and one that takes <type>time without time zone</type> or <type>timestamp without time zone</type>. For brevity, these variants are not shown separately. Also, the - <literal>+</> and <literal>*</> operators come in commutative pairs (for + <literal>+</literal> and <literal>*</literal> operators come in commutative pairs (for example both date + integer and integer + date); we show only one of each such pair. 
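The `0`-versus-`9` digit-position behavior described in the list above can be approximated with a toy sketch; it deliberately ignores the sign column, fill mode, and locale characters (`S`, `L`, `D`, `G`), and assumes the value fits the pattern.

```python
def fill_digits(n, pattern):
    # Toy sketch of to_char-style digit positions: '0' always prints a
    # digit (zero-filling), while '9' leaves a space for a leading zero.
    # Sign handling, fill mode, and locale characters are ignored.
    padded = str(n).rjust(len(pattern))
    return "".join(
        ("0" if p == "0" else " ") if d == " " else d
        for p, d in zip(pattern, padded)
    )

print(repr(fill_digits(12, "9999")))  # '  12'
print(repr(fill_digits(12, "0000")))  # '0012'
```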
</para> @@ -6899,7 +6899,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <literal><function>age(<type>timestamp</type>, <type>timestamp</type>)</function></literal> </entry> <entry><type>interval</type></entry> - <entry>Subtract arguments, producing a <quote>symbolic</> result that + <entry>Subtract arguments, producing a <quote>symbolic</quote> result that uses years and months, rather than just days</entry> <entry><literal>age(timestamp '2001-04-10', timestamp '1957-06-13')</literal></entry> <entry><literal>43 years 9 mons 27 days</literal></entry> @@ -7109,7 +7109,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <literal><function>justify_interval(<type>interval</type>)</function></literal> </entry> <entry><type>interval</type></entry> - <entry>Adjust interval using <function>justify_days</> and <function>justify_hours</>, with additional sign adjustments</entry> + <entry>Adjust interval using <function>justify_days</function> and <function>justify_hours</function>, with additional sign adjustments</entry> <entry><literal>justify_interval(interval '1 mon -1 hour')</literal></entry> <entry><literal>29 days 23:00:00</literal></entry> </row> @@ -7302,7 +7302,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); </entry> <entry><type>text</type></entry> <entry>Current date and time - (like <function>clock_timestamp</>, but as a <type>text</> string); + (like <function>clock_timestamp</function>, but as a <type>text</type> string); see <xref linkend="functions-datetime-current"> </entry> <entry></entry> @@ -7344,7 +7344,7 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); <indexterm> <primary>OVERLAPS</primary> </indexterm> - In addition to these functions, the SQL <literal>OVERLAPS</> operator is + In addition to these functions, the SQL <literal>OVERLAPS</literal> operator is supported: <synopsis> (<replaceable>start1</replaceable>, <replaceable>end1</replaceable>) OVERLAPS 
(<replaceable>start2</replaceable>, <replaceable>end2</replaceable>) @@ -7355,11 +7355,11 @@ SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}'); can be specified as pairs of dates, times, or time stamps; or as a date, time, or time stamp followed by an interval. When a pair of values is provided, either the start or the end can be written - first; <literal>OVERLAPS</> automatically takes the earlier value + first; <literal>OVERLAPS</literal> automatically takes the earlier value of the pair as the start. Each time period is considered to - represent the half-open interval <replaceable>start</> <literal><=</> - <replaceable>time</> <literal><</> <replaceable>end</>, unless - <replaceable>start</> and <replaceable>end</> are equal in which case it + represent the half-open interval <replaceable>start</replaceable> <literal><=</literal> + <replaceable>time</replaceable> <literal><</literal> <replaceable>end</replaceable>, unless + <replaceable>start</replaceable> and <replaceable>end</replaceable> are equal in which case it represents that single time instant. This means for instance that two time periods with only an endpoint in common do not overlap. </para> @@ -7398,31 +7398,31 @@ SELECT (DATE '2001-10-30', DATE '2001-10-30') OVERLAPS </para> <para> - Note there can be ambiguity in the <literal>months</> field returned by - <function>age</> because different months have different numbers of - days. <productname>PostgreSQL</>'s approach uses the month from the + Note there can be ambiguity in the <literal>months</literal> field returned by + <function>age</function> because different months have different numbers of + days. <productname>PostgreSQL</productname>'s approach uses the month from the earlier of the two dates when calculating partial months. 
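The partial-month rule just described — use the month of the earlier date — can be cross-checked with a small sketch; this is an illustrative re-derivation of the documented behavior, not PostgreSQL's `age` code.

```python
from calendar import monthrange
from datetime import date

def age_months_days(later, earlier):
    # Sketch of the documented rule: when the day-of-month difference is
    # negative, borrow one month and add the length of the *earlier*
    # date's month. Returns (total_months, days).
    months = (later.year - earlier.year) * 12 + (later.month - earlier.month)
    days = later.day - earlier.day
    if days < 0:
        months -= 1
        days += monthrange(earlier.year, earlier.month)[1]
    return months, days

# April (30 days) is used, yielding 1 mon 1 day as in the docs:
print(age_months_days(date(2004, 6, 1), date(2004, 4, 30)))
```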
For example, - <literal>age('2004-06-01', '2004-04-30')</> uses April to yield - <literal>1 mon 1 day</>, while using May would yield <literal>1 mon 2 - days</> because May has 31 days, while April has only 30. + <literal>age('2004-06-01', '2004-04-30')</literal> uses April to yield + <literal>1 mon 1 day</literal>, while using May would yield <literal>1 mon 2 + days</literal> because May has 31 days, while April has only 30. </para> <para> Subtraction of dates and timestamps can also be complex. One conceptually simple way to perform subtraction is to convert each value to a number - of seconds using <literal>EXTRACT(EPOCH FROM ...)</>, then subtract the + of seconds using <literal>EXTRACT(EPOCH FROM ...)</literal>, then subtract the results; this produces the - number of <emphasis>seconds</> between the two values. This will adjust + number of <emphasis>seconds</emphasis> between the two values. This will adjust for the number of days in each month, timezone changes, and daylight saving time adjustments. Subtraction of date or timestamp - values with the <quote><literal>-</></quote> operator + values with the <quote><literal>-</literal></quote> operator returns the number of days (24-hours) and hours/minutes/seconds - between the values, making the same adjustments. The <function>age</> + between the values, making the same adjustments. The <function>age</function> function returns years, months, days, and hours/minutes/seconds, performing field-by-field subtraction and then adjusting for negative field values. The following queries illustrate the differences in these approaches. 
The sample results were produced with <literal>timezone - = 'US/Eastern'</>; there is a daylight saving time change between the + = 'US/Eastern'</literal>; there is a daylight saving time change between the two dates used: </para> @@ -7534,8 +7534,8 @@ SELECT EXTRACT(DECADE FROM TIMESTAMP '2001-02-16 20:38:40'); <term><literal>dow</literal></term> <listitem> <para> - The day of the week as Sunday (<literal>0</>) to - Saturday (<literal>6</>) + The day of the week as Sunday (<literal>0</literal>) to + Saturday (<literal>6</literal>) </para> <screen> @@ -7587,7 +7587,7 @@ SELECT EXTRACT(EPOCH FROM INTERVAL '5 days 3 hours'); <para> You can convert an epoch value back to a time stamp - with <function>to_timestamp</>: + with <function>to_timestamp</function>: </para> <screen> SELECT to_timestamp(982384720.12); @@ -7614,8 +7614,8 @@ SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40'); <term><literal>isodow</literal></term> <listitem> <para> - The day of the week as Monday (<literal>1</>) to - Sunday (<literal>7</>) + The day of the week as Monday (<literal>1</literal>) to + Sunday (<literal>7</literal>) </para> <screen> @@ -7623,8 +7623,8 @@ SELECT EXTRACT(ISODOW FROM TIMESTAMP '2001-02-18 20:38:40'); <lineannotation>Result: </lineannotation><computeroutput>7</computeroutput> </screen> <para> - This is identical to <literal>dow</> except for Sunday. This - matches the <acronym>ISO</> 8601 day of the week numbering. + This is identical to <literal>dow</literal> except for Sunday. This + matches the <acronym>ISO</acronym> 8601 day of the week numbering. </para> </listitem> @@ -7819,11 +7819,11 @@ SELECT EXTRACT(SECOND FROM TIME '17:12:28.5'); In the ISO week-numbering system, it is possible for early-January dates to be part of the 52nd or 53rd week of the previous year, and for late-December dates to be part of the first week of the next year. 
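The ISO week-numbering edge cases cited in this hunk can be verified independently with Python's `date.isocalendar()`, which implements the same ISO 8601 rules.

```python
from datetime import date

# Early-January dates can belong to the previous ISO year, and
# late-December dates to the next one:
print(tuple(date(2005, 1, 1).isocalendar())[:2])    # (2004, 53)
print(tuple(date(2006, 1, 1).isocalendar())[:2])    # (2005, 52)
print(tuple(date(2012, 12, 31).isocalendar())[:2])  # (2013, 1)
```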
- For example, <literal>2005-01-01</> is part of the 53rd week of year - 2004, and <literal>2006-01-01</> is part of the 52nd week of year - 2005, while <literal>2012-12-31</> is part of the first week of 2013. - It's recommended to use the <literal>isoyear</> field together with - <literal>week</> to get consistent results. + For example, <literal>2005-01-01</literal> is part of the 53rd week of year + 2004, and <literal>2006-01-01</literal> is part of the 52nd week of year + 2005, while <literal>2012-12-31</literal> is part of the first week of 2013. + It's recommended to use the <literal>isoyear</literal> field together with + <literal>week</literal> to get consistent results. </para> <screen> @@ -7837,8 +7837,8 @@ SELECT EXTRACT(WEEK FROM TIMESTAMP '2001-02-16 20:38:40'); <term><literal>year</literal></term> <listitem> <para> - The year field. Keep in mind there is no <literal>0 AD</>, so subtracting - <literal>BC</> years from <literal>AD</> years should be done with care. + The year field. Keep in mind there is no <literal>0 AD</literal>, so subtracting + <literal>BC</literal> years from <literal>AD</literal> years should be done with care. </para> <screen> @@ -7853,11 +7853,11 @@ SELECT EXTRACT(YEAR FROM TIMESTAMP '2001-02-16 20:38:40'); <note> <para> - When the input value is +/-Infinity, <function>extract</> returns - +/-Infinity for monotonically-increasing fields (<literal>epoch</>, - <literal>julian</>, <literal>year</>, <literal>isoyear</>, - <literal>decade</>, <literal>century</>, and <literal>millennium</>). - For other fields, NULL is returned. <productname>PostgreSQL</> + When the input value is +/-Infinity, <function>extract</function> returns + +/-Infinity for monotonically-increasing fields (<literal>epoch</literal>, + <literal>julian</literal>, <literal>year</literal>, <literal>isoyear</literal>, + <literal>decade</literal>, <literal>century</literal>, and <literal>millennium</literal>). + For other fields, NULL is returned. 
<productname>PostgreSQL</productname> versions before 9.6 returned zero for all cases of infinite input. </para> </note> @@ -7908,13 +7908,13 @@ SELECT date_part('hour', INTERVAL '4 hours 3 minutes'); date_trunc('<replaceable>field</replaceable>', <replaceable>source</replaceable>) </synopsis> <replaceable>source</replaceable> is a value expression of type - <type>timestamp</type> or <type>interval</>. + <type>timestamp</type> or <type>interval</type>. (Values of type <type>date</type> and <type>time</type> are cast automatically to <type>timestamp</type> or - <type>interval</>, respectively.) + <type>interval</type>, respectively.) <replaceable>field</replaceable> selects to which precision to truncate the input value. The return value is of type - <type>timestamp</type> or <type>interval</> + <type>timestamp</type> or <type>interval</type> with all fields that are less significant than the selected one set to zero (or one, for day and month). </para> @@ -7983,34 +7983,34 @@ SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40'); <tbody> <row> <entry> - <literal><type>timestamp without time zone</type> AT TIME ZONE <replaceable>zone</></literal> + <literal><type>timestamp without time zone</type> AT TIME ZONE <replaceable>zone</replaceable></literal> </entry> <entry><type>timestamp with time zone</type></entry> - <entry>Treat given time stamp <emphasis>without time zone</> as located in the specified time zone</entry> + <entry>Treat given time stamp <emphasis>without time zone</emphasis> as located in the specified time zone</entry> </row> <row> <entry> - <literal><type>timestamp with time zone</type> AT TIME ZONE <replaceable>zone</></literal> + <literal><type>timestamp with time zone</type> AT TIME ZONE <replaceable>zone</replaceable></literal> </entry> <entry><type>timestamp without time zone</type></entry> - <entry>Convert given time stamp <emphasis>with time zone</> to the new time + <entry>Convert given time stamp <emphasis>with time zone</emphasis> to 
the new time zone, with no time zone designation</entry> </row> <row> <entry> - <literal><type>time with time zone</type> AT TIME ZONE <replaceable>zone</></literal> + <literal><type>time with time zone</type> AT TIME ZONE <replaceable>zone</replaceable></literal> </entry> <entry><type>time with time zone</type></entry> - <entry>Convert given time <emphasis>with time zone</> to the new time zone</entry> + <entry>Convert given time <emphasis>with time zone</emphasis> to the new time zone</entry> </row> </tbody> </tgroup> </table> <para> - In these expressions, the desired time zone <replaceable>zone</> can be + In these expressions, the desired time zone <replaceable>zone</replaceable> can be specified either as a text string (e.g., <literal>'PST'</literal>) or as an interval (e.g., <literal>INTERVAL '-08:00'</literal>). In the text case, a time zone name can be specified in any of the ways @@ -8018,7 +8018,7 @@ SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40'); </para> <para> - Examples (assuming the local time zone is <literal>PST8PDT</>): + Examples (assuming the local time zone is <literal>PST8PDT</literal>): <screen> SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST'; <lineannotation>Result: </lineannotation><computeroutput>2001-02-16 19:38:40-08</computeroutput> @@ -8032,10 +8032,10 @@ SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'MST'; </para> <para> - The function <literal><function>timezone</function>(<replaceable>zone</>, - <replaceable>timestamp</>)</literal> is equivalent to the SQL-conforming construct - <literal><replaceable>timestamp</> AT TIME ZONE - <replaceable>zone</></literal>. + The function <literal><function>timezone</function>(<replaceable>zone</replaceable>, + <replaceable>timestamp</replaceable>)</literal> is equivalent to the SQL-conforming construct + <literal><replaceable>timestamp</replaceable> AT TIME ZONE + <replaceable>zone</replaceable></literal>. 
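The first `AT TIME ZONE` example above (session zone `PST8PDT`) can be mimicked with `zoneinfo`; the IANA keys `Etc/GMT+7` (standing in for `MST`) and `America/Los_Angeles` (standing in for `PST8PDT`) are substitutions chosen for portability, and this is an analogy rather than PostgreSQL's conversion code.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST':
# treat the zone-less timestamp as located in MST, then view the
# resulting instant in the session time zone.
naive = datetime(2001, 2, 16, 20, 38, 40)
located = naive.replace(tzinfo=ZoneInfo("Etc/GMT+7"))        # as MST (UTC-7)
viewed = located.astimezone(ZoneInfo("America/Los_Angeles"))  # session zone

print(viewed.isoformat())  # 2001-02-16T19:38:40-08:00, as in the docs
```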
</para> </sect2> @@ -8140,23 +8140,23 @@ now() </para> <para> - <function>transaction_timestamp()</> is equivalent to + <function>transaction_timestamp()</function> is equivalent to <function>CURRENT_TIMESTAMP</function>, but is named to clearly reflect what it returns. - <function>statement_timestamp()</> returns the start time of the current + <function>statement_timestamp()</function> returns the start time of the current statement (more specifically, the time of receipt of the latest command message from the client). - <function>statement_timestamp()</> and <function>transaction_timestamp()</> + <function>statement_timestamp()</function> and <function>transaction_timestamp()</function> return the same value during the first command of a transaction, but might differ during subsequent commands. - <function>clock_timestamp()</> returns the actual current time, and + <function>clock_timestamp()</function> returns the actual current time, and therefore its value changes even within a single SQL command. - <function>timeofday()</> is a historical + <function>timeofday()</function> is a historical <productname>PostgreSQL</productname> function. Like - <function>clock_timestamp()</>, it returns the actual current time, - but as a formatted <type>text</> string rather than a <type>timestamp - with time zone</> value. - <function>now()</> is a traditional <productname>PostgreSQL</productname> + <function>clock_timestamp()</function>, it returns the actual current time, + but as a formatted <type>text</type> string rather than a <type>timestamp + with time zone</type> value. + <function>now()</function> is a traditional <productname>PostgreSQL</productname> equivalent to <function>transaction_timestamp()</function>. 
</para> @@ -8174,7 +8174,7 @@ SELECT TIMESTAMP 'now'; -- incorrect for use with DEFAULT <tip> <para> - You do not want to use the third form when specifying a <literal>DEFAULT</> + You do not want to use the third form when specifying a <literal>DEFAULT</literal> clause while creating a table. The system will convert <literal>now</literal> to a <type>timestamp</type> as soon as the constant is parsed, so that when the default value is needed, @@ -8210,16 +8210,16 @@ SELECT TIMESTAMP 'now'; -- incorrect for use with DEFAULT process: <synopsis> pg_sleep(<replaceable>seconds</replaceable>) -pg_sleep_for(<type>interval</>) -pg_sleep_until(<type>timestamp with time zone</>) +pg_sleep_for(<type>interval</type>) +pg_sleep_until(<type>timestamp with time zone</type>) </synopsis> <function>pg_sleep</function> makes the current session's process sleep until <replaceable>seconds</replaceable> seconds have elapsed. <replaceable>seconds</replaceable> is a value of type - <type>double precision</>, so fractional-second delays can be specified. + <type>double precision</type>, so fractional-second delays can be specified. <function>pg_sleep_for</function> is a convenience function for larger - sleep times specified as an <type>interval</>. + sleep times specified as an <type>interval</type>. <function>pg_sleep_until</function> is a convenience function for when a specific wake-up time is desired. For example: @@ -8341,7 +8341,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple </table> <para> - Notice that except for the two-argument form of <function>enum_range</>, + Notice that except for the two-argument form of <function>enum_range</function>, these functions disregard the specific value passed to them; they care only about its declared data type. Either null or a specific value of the type can be passed, with the same result. 
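The `pg_sleep_until` convenience described above — sleep until an absolute wake-up time rather than for a duration — can be sketched in a few lines; the helper name and structure are hypothetical, not taken from the server's implementation.

```python
import time
from datetime import datetime, timedelta, timezone

def sleep_until(when):
    # Sketch of a pg_sleep_until-style helper: compute the remaining
    # delay to the absolute wake-up time and sleep that long (fractional
    # seconds are supported, as with pg_sleep).
    remaining = (when - datetime.now(timezone.utc)).total_seconds()
    if remaining > 0:
        time.sleep(remaining)

sleep_until(datetime.now(timezone.utc) + timedelta(seconds=0.1))
```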
It is more common to @@ -8365,13 +8365,13 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <caution> <para> - Note that the <quote>same as</> operator, <literal>~=</>, represents + Note that the <quote>same as</quote> operator, <literal>~=</literal>, represents the usual notion of equality for the <type>point</type>, <type>box</type>, <type>polygon</type>, and <type>circle</type> types. - Some of these types also have an <literal>=</> operator, but - <literal>=</> compares - for equal <emphasis>areas</> only. The other scalar comparison operators - (<literal><=</> and so on) likewise compare areas for these types. + Some of these types also have an <literal>=</literal> operator, but + <literal>=</literal> compares + for equal <emphasis>areas</emphasis> only. The other scalar comparison operators + (<literal><=</literal> and so on) likewise compare areas for these types. </para> </caution> @@ -8548,8 +8548,8 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <note> <para> Before <productname>PostgreSQL</productname> 8.2, the containment - operators <literal>@></> and <literal><@</> were respectively - called <literal>~</> and <literal>@</>. These names are still + operators <literal>@></literal> and <literal><@</literal> were respectively + called <literal>~</literal> and <literal>@</literal>. These names are still available, but are deprecated and will eventually be removed. 
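The caution above — that `=` on geometric types compares *areas*, not shapes — is worth a concrete check; the boxes-as-corner-pairs representation below is an illustration of the pitfall, not PostgreSQL's geometry code.

```python
def box_area(b):
    # A box as a pair of opposite corners ((x1, y1), (x2, y2)).
    (x1, y1), (x2, y2) = b
    return abs(x2 - x1) * abs(y2 - y1)

b1 = ((0, 0), (2, 1))   # a 2x1 box at the origin
b2 = ((5, 5), (6, 7))   # a 1x2 box somewhere else entirely

# An area-based '=' would call these distinct boxes "equal":
print(box_area(b1) == box_area(b2))  # True
```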
</para> </note> @@ -8604,67 +8604,67 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple </thead> <tbody> <row> - <entry><literal><function>area(<replaceable>object</>)</function></literal></entry> + <entry><literal><function>area(<replaceable>object</replaceable>)</function></literal></entry> <entry><type>double precision</type></entry> <entry>area</entry> <entry><literal>area(box '((0,0),(1,1))')</literal></entry> </row> <row> - <entry><literal><function>center(<replaceable>object</>)</function></literal></entry> + <entry><literal><function>center(<replaceable>object</replaceable>)</function></literal></entry> <entry><type>point</type></entry> <entry>center</entry> <entry><literal>center(box '((0,0),(1,2))')</literal></entry> </row> <row> - <entry><literal><function>diameter(<type>circle</>)</function></literal></entry> + <entry><literal><function>diameter(<type>circle</type>)</function></literal></entry> <entry><type>double precision</type></entry> <entry>diameter of circle</entry> <entry><literal>diameter(circle '((0,0),2.0)')</literal></entry> </row> <row> - <entry><literal><function>height(<type>box</>)</function></literal></entry> + <entry><literal><function>height(<type>box</type>)</function></literal></entry> <entry><type>double precision</type></entry> <entry>vertical size of box</entry> <entry><literal>height(box '((0,0),(1,1))')</literal></entry> </row> <row> - <entry><literal><function>isclosed(<type>path</>)</function></literal></entry> + <entry><literal><function>isclosed(<type>path</type>)</function></literal></entry> <entry><type>boolean</type></entry> <entry>a closed path?</entry> <entry><literal>isclosed(path '((0,0),(1,1),(2,0))')</literal></entry> </row> <row> - <entry><literal><function>isopen(<type>path</>)</function></literal></entry> + <entry><literal><function>isopen(<type>path</type>)</function></literal></entry> <entry><type>boolean</type></entry> <entry>an open path?</entry> <entry><literal>isopen(path 
'[(0,0),(1,1),(2,0)]')</literal></entry> </row> <row> - <entry><literal><function>length(<replaceable>object</>)</function></literal></entry> + <entry><literal><function>length(<replaceable>object</replaceable>)</function></literal></entry> <entry><type>double precision</type></entry> <entry>length</entry> <entry><literal>length(path '((-1,0),(1,0))')</literal></entry> </row> <row> - <entry><literal><function>npoints(<type>path</>)</function></literal></entry> + <entry><literal><function>npoints(<type>path</type>)</function></literal></entry> <entry><type>int</type></entry> <entry>number of points</entry> <entry><literal>npoints(path '[(0,0),(1,1),(2,0)]')</literal></entry> </row> <row> - <entry><literal><function>npoints(<type>polygon</>)</function></literal></entry> + <entry><literal><function>npoints(<type>polygon</type>)</function></literal></entry> <entry><type>int</type></entry> <entry>number of points</entry> <entry><literal>npoints(polygon '((1,1),(0,0))')</literal></entry> </row> <row> - <entry><literal><function>pclose(<type>path</>)</function></literal></entry> + <entry><literal><function>pclose(<type>path</type>)</function></literal></entry> <entry><type>path</type></entry> <entry>convert path to closed</entry> <entry><literal>pclose(path '[(0,0),(1,1),(2,0)]')</literal></entry> </row> <row> - <entry><literal><function>popen(<type>path</>)</function></literal></entry> + <entry><literal><function>popen(<type>path</type>)</function></literal></entry> <entry><type>path</type></entry> <entry>convert path to open</entry> <entry><literal>popen(path '((0,0),(1,1),(2,0))')</literal></entry> @@ -8676,7 +8676,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <entry><literal>radius(circle '((0,0),2.0)')</literal></entry> </row> <row> - <entry><literal><function>width(<type>box</>)</function></literal></entry> + <entry><literal><function>width(<type>box</type>)</function></literal></entry> <entry><type>double 
precision</type></entry> <entry>horizontal size of box</entry> <entry><literal>width(box '((0,0),(1,1))')</literal></entry> @@ -8859,13 +8859,13 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple </table> <para> - It is possible to access the two component numbers of a <type>point</> + It is possible to access the two component numbers of a <type>point</type> as though the point were an array with indexes 0 and 1. For example, if - <literal>t.p</> is a <type>point</> column then - <literal>SELECT p[0] FROM t</> retrieves the X coordinate and - <literal>UPDATE t SET p[1] = ...</> changes the Y coordinate. - In the same way, a value of type <type>box</> or <type>lseg</> can be treated - as an array of two <type>point</> values. + <literal>t.p</literal> is a <type>point</type> column then + <literal>SELECT p[0] FROM t</literal> retrieves the X coordinate and + <literal>UPDATE t SET p[1] = ...</literal> changes the Y coordinate. + In the same way, a value of type <type>box</type> or <type>lseg</type> can be treated + as an array of two <type>point</type> values. </para> <para> @@ -9188,19 +9188,19 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple </table> <para> - Any <type>cidr</> value can be cast to <type>inet</> implicitly + Any <type>cidr</type> value can be cast to <type>inet</type> implicitly or explicitly; therefore, the functions shown above as operating on - <type>inet</> also work on <type>cidr</> values. (Where there are - separate functions for <type>inet</> and <type>cidr</>, it is because + <type>inet</type> also work on <type>cidr</type> values. (Where there are + separate functions for <type>inet</type> and <type>cidr</type>, it is because the behavior should be different for the two cases.) - Also, it is permitted to cast an <type>inet</> value to <type>cidr</>. + Also, it is permitted to cast an <type>inet</type> value to <type>cidr</type>. 
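The `inet`-to-`cidr` cast described here — zeroing any bits to the right of the netmask to produce a valid network value — has a close analogue in Python's `ipaddress` module, shown as a cross-check rather than as PostgreSQL's behavior.

```python
import ipaddress

# An inet-style value may carry host bits to the right of the netmask:
host = ipaddress.ip_interface("192.168.100.17/24")

# Reducing it to its network zeroes those bits, like casting inet
# to cidr:
print(host.network)  # 192.168.100.0/24
```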
When this is done, any bits to the right of the netmask are silently zeroed - to create a valid <type>cidr</> value. + to create a valid <type>cidr</type> value. In addition, - you can cast a text value to <type>inet</> or <type>cidr</> + you can cast a text value to <type>inet</type> or <type>cidr</type> using normal casting syntax: for example, - <literal>inet(<replaceable>expression</>)</literal> or - <literal><replaceable>colname</>::cidr</literal>. + <literal>inet(<replaceable>expression</replaceable>)</literal> or + <literal><replaceable>colname</replaceable>::cidr</literal>. </para> <para> @@ -9345,64 +9345,64 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <tbody> <row> <entry> <literal>@@</literal> </entry> - <entry><type>boolean</></entry> - <entry><type>tsvector</> matches <type>tsquery</> ?</entry> + <entry><type>boolean</type></entry> + <entry><type>tsvector</type> matches <type>tsquery</type> ?</entry> <entry><literal>to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')</literal></entry> <entry><literal>t</literal></entry> </row> <row> <entry> <literal>@@@</literal> </entry> - <entry><type>boolean</></entry> - <entry>deprecated synonym for <literal>@@</></entry> + <entry><type>boolean</type></entry> + <entry>deprecated synonym for <literal>@@</literal></entry> <entry><literal>to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat')</literal></entry> <entry><literal>t</literal></entry> </row> <row> <entry> <literal>||</literal> </entry> - <entry><type>tsvector</></entry> - <entry>concatenate <type>tsvector</>s</entry> + <entry><type>tsvector</type></entry> + <entry>concatenate <type>tsvector</type>s</entry> <entry><literal>'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector</literal></entry> <entry><literal>'a':1 'b':2,5 'c':3 'd':4</literal></entry> </row> <row> <entry> <literal>&&</literal> </entry> - <entry><type>tsquery</></entry> - <entry>AND <type>tsquery</>s together</entry> + 
<entry><type>tsquery</type></entry> + <entry>AND <type>tsquery</type>s together</entry> <entry><literal>'fat | rat'::tsquery && 'cat'::tsquery</literal></entry> <entry><literal>( 'fat' | 'rat' ) & 'cat'</literal></entry> </row> <row> <entry> <literal>||</literal> </entry> - <entry><type>tsquery</></entry> - <entry>OR <type>tsquery</>s together</entry> + <entry><type>tsquery</type></entry> + <entry>OR <type>tsquery</type>s together</entry> <entry><literal>'fat | rat'::tsquery || 'cat'::tsquery</literal></entry> <entry><literal>( 'fat' | 'rat' ) | 'cat'</literal></entry> </row> <row> <entry> <literal>!!</literal> </entry> - <entry><type>tsquery</></entry> - <entry>negate a <type>tsquery</></entry> + <entry><type>tsquery</type></entry> + <entry>negate a <type>tsquery</type></entry> <entry><literal>!! 'cat'::tsquery</literal></entry> <entry><literal>!'cat'</literal></entry> </row> <row> <entry> <literal><-></literal> </entry> - <entry><type>tsquery</></entry> - <entry><type>tsquery</> followed by <type>tsquery</></entry> + <entry><type>tsquery</type></entry> + <entry><type>tsquery</type> followed by <type>tsquery</type></entry> <entry><literal>to_tsquery('fat') <-> to_tsquery('rat')</literal></entry> <entry><literal>'fat' <-> 'rat'</literal></entry> </row> <row> <entry> <literal>@></literal> </entry> - <entry><type>boolean</></entry> - <entry><type>tsquery</> contains another ?</entry> + <entry><type>boolean</type></entry> + <entry><type>tsquery</type> contains another ?</entry> <entry><literal>'cat'::tsquery @> 'cat & rat'::tsquery</literal></entry> <entry><literal>f</literal></entry> </row> <row> <entry> <literal><@</literal> </entry> - <entry><type>boolean</></entry> - <entry><type>tsquery</> is contained in ?</entry> + <entry><type>boolean</type></entry> + <entry><type>tsquery</type> is contained in ?</entry> <entry><literal>'cat'::tsquery <@ 'cat & rat'::tsquery</literal></entry> <entry><literal>t</literal></entry> </row> @@ -9412,15 +9412,15 @@ CREATE TYPE 
rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <note> <para> - The <type>tsquery</> containment operators consider only the lexemes + The <type>tsquery</type> containment operators consider only the lexemes listed in the two queries, ignoring the combining operators. </para> </note> <para> In addition to the operators shown in the table, the ordinary B-tree - comparison operators (<literal>=</>, <literal><</>, etc) are defined - for types <type>tsvector</> and <type>tsquery</>. These are not very + comparison operators (<literal>=</literal>, <literal><</literal>, etc) are defined + for types <type>tsvector</type> and <type>tsquery</type>. These are not very useful for text searching but allow, for example, unique indexes to be built on columns of these types. </para> @@ -9443,7 +9443,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>array_to_tsvector</primary> </indexterm> - <literal><function>array_to_tsvector(<type>text[]</>)</function></literal> + <literal><function>array_to_tsvector(<type>text[]</type>)</function></literal> </entry> <entry><type>tsvector</type></entry> <entry>convert array of lexemes to <type>tsvector</type></entry> @@ -9467,10 +9467,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>length</primary> </indexterm> - <literal><function>length(<type>tsvector</>)</function></literal> + <literal><function>length(<type>tsvector</type>)</function></literal> </entry> <entry><type>integer</type></entry> - <entry>number of lexemes in <type>tsvector</></entry> + <entry>number of lexemes in <type>tsvector</type></entry> <entry><literal>length('fat:2,4 cat:3 rat:5A'::tsvector)</literal></entry> <entry><literal>3</literal></entry> </row> @@ -9479,10 +9479,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>numnode</primary> </indexterm> - 
<literal><function>numnode(<type>tsquery</>)</function></literal> + <literal><function>numnode(<type>tsquery</type>)</function></literal> </entry> <entry><type>integer</type></entry> - <entry>number of lexemes plus operators in <type>tsquery</></entry> + <entry>number of lexemes plus operators in <type>tsquery</type></entry> <entry><literal> numnode('(fat & rat) | cat'::tsquery)</literal></entry> <entry><literal>5</literal></entry> </row> @@ -9491,10 +9491,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>plainto_tsquery</primary> </indexterm> - <literal><function>plainto_tsquery(<optional> <replaceable class="parameter">config</> <type>regconfig</> , </optional> <replaceable class="parameter">query</> <type>text</type>)</function></literal> + <literal><function>plainto_tsquery(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type> , </optional> <replaceable class="parameter">query</replaceable> <type>text</type>)</function></literal> </entry> <entry><type>tsquery</type></entry> - <entry>produce <type>tsquery</> ignoring punctuation</entry> + <entry>produce <type>tsquery</type> ignoring punctuation</entry> <entry><literal>plainto_tsquery('english', 'The Fat Rats')</literal></entry> <entry><literal>'fat' & 'rat'</literal></entry> </row> @@ -9503,10 +9503,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>phraseto_tsquery</primary> </indexterm> - <literal><function>phraseto_tsquery(<optional> <replaceable class="parameter">config</> <type>regconfig</> , </optional> <replaceable class="parameter">query</> <type>text</type>)</function></literal> + <literal><function>phraseto_tsquery(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type> , </optional> <replaceable class="parameter">query</replaceable> <type>text</type>)</function></literal> </entry> <entry><type>tsquery</type></entry> - 
<entry>produce <type>tsquery</> that searches for a phrase, + <entry>produce <type>tsquery</type> that searches for a phrase, ignoring punctuation</entry> <entry><literal>phraseto_tsquery('english', 'The Fat Rats')</literal></entry> <entry><literal>'fat' <-> 'rat'</literal></entry> @@ -9516,10 +9516,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>querytree</primary> </indexterm> - <literal><function>querytree(<replaceable class="parameter">query</replaceable> <type>tsquery</>)</function></literal> + <literal><function>querytree(<replaceable class="parameter">query</replaceable> <type>tsquery</type>)</function></literal> </entry> <entry><type>text</type></entry> - <entry>get indexable part of a <type>tsquery</></entry> + <entry>get indexable part of a <type>tsquery</type></entry> <entry><literal>querytree('foo & ! bar'::tsquery)</literal></entry> <entry><literal>'foo'</literal></entry> </row> @@ -9528,7 +9528,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>setweight</primary> </indexterm> - <literal><function>setweight(<replaceable class="parameter">vector</replaceable> <type>tsvector</>, <replaceable class="parameter">weight</replaceable> <type>"char"</>)</function></literal> + <literal><function>setweight(<replaceable class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">weight</replaceable> <type>"char"</type>)</function></literal> </entry> <entry><type>tsvector</type></entry> <entry>assign <replaceable class="parameter">weight</replaceable> to each element of <replaceable class="parameter">vector</replaceable></entry> @@ -9541,7 +9541,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <primary>setweight</primary> <secondary>setweight for specific lexeme(s)</secondary> </indexterm> - <literal><function>setweight(<replaceable class="parameter">vector</replaceable> 
<type>tsvector</>, <replaceable class="parameter">weight</replaceable> <type>"char"</>, <replaceable class="parameter">lexemes</replaceable> <type>text[]</>)</function></literal> + <literal><function>setweight(<replaceable class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">weight</replaceable> <type>"char"</type>, <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)</function></literal> </entry> <entry><type>tsvector</type></entry> <entry>assign <replaceable class="parameter">weight</replaceable> to elements of <replaceable class="parameter">vector</replaceable> that are listed in <replaceable class="parameter">lexemes</replaceable></entry> @@ -9553,10 +9553,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>strip</primary> </indexterm> - <literal><function>strip(<type>tsvector</>)</function></literal> + <literal><function>strip(<type>tsvector</type>)</function></literal> </entry> <entry><type>tsvector</type></entry> - <entry>remove positions and weights from <type>tsvector</></entry> + <entry>remove positions and weights from <type>tsvector</type></entry> <entry><literal>strip('fat:2,4 cat:3 rat:5A'::tsvector)</literal></entry> <entry><literal>'cat' 'fat' 'rat'</literal></entry> </row> @@ -9565,10 +9565,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>to_tsquery</primary> </indexterm> - <literal><function>to_tsquery(<optional> <replaceable class="parameter">config</> <type>regconfig</> , </optional> <replaceable class="parameter">query</> <type>text</type>)</function></literal> + <literal><function>to_tsquery(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type> , </optional> <replaceable class="parameter">query</replaceable> <type>text</type>)</function></literal> </entry> <entry><type>tsquery</type></entry> - <entry>normalize words and convert to 
<type>tsquery</></entry> + <entry>normalize words and convert to <type>tsquery</type></entry> <entry><literal>to_tsquery('english', 'The & Fat & Rats')</literal></entry> <entry><literal>'fat' & 'rat'</literal></entry> </row> @@ -9577,21 +9577,21 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>to_tsvector</primary> </indexterm> - <literal><function>to_tsvector(<optional> <replaceable class="parameter">config</> <type>regconfig</> , </optional> <replaceable class="parameter">document</> <type>text</type>)</function></literal> + <literal><function>to_tsvector(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type> , </optional> <replaceable class="parameter">document</replaceable> <type>text</type>)</function></literal> </entry> <entry><type>tsvector</type></entry> - <entry>reduce document text to <type>tsvector</></entry> + <entry>reduce document text to <type>tsvector</type></entry> <entry><literal>to_tsvector('english', 'The Fat Rats')</literal></entry> <entry><literal>'fat':2 'rat':3</literal></entry> </row> <row> <entry> - <literal><function>to_tsvector(<optional> <replaceable class="parameter">config</> <type>regconfig</> , </optional> <replaceable class="parameter">document</> <type>json(b)</type>)</function></literal> + <literal><function>to_tsvector(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type> , </optional> <replaceable class="parameter">document</replaceable> <type>json(b)</type>)</function></literal> </entry> <entry><type>tsvector</type></entry> <entry> - reduce each string value in the document to a <type>tsvector</>, and then - concatenate those in document order to produce a single <type>tsvector</> + reduce each string value in the document to a <type>tsvector</type>, and then + concatenate those in document order to produce a single <type>tsvector</type> </entry> <entry><literal>to_tsvector('english', '{"a": "The Fat 
Rats"}'::json)</literal></entry> <entry><literal>'fat':2 'rat':3</literal></entry> @@ -9601,7 +9601,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_delete</primary> </indexterm> - <literal><function>ts_delete(<replaceable class="parameter">vector</replaceable> <type>tsvector</>, <replaceable class="parameter">lexeme</replaceable> <type>text</>)</function></literal> + <literal><function>ts_delete(<replaceable class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">lexeme</replaceable> <type>text</type>)</function></literal> </entry> <entry><type>tsvector</type></entry> <entry>remove given <replaceable class="parameter">lexeme</replaceable> from <replaceable class="parameter">vector</replaceable></entry> @@ -9611,7 +9611,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <row> <entry> <!-- previous indexterm entry covers this too --> - <literal><function>ts_delete(<replaceable class="parameter">vector</replaceable> <type>tsvector</>, <replaceable class="parameter">lexemes</replaceable> <type>text[]</>)</function></literal> + <literal><function>ts_delete(<replaceable class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)</function></literal> </entry> <entry><type>tsvector</type></entry> <entry>remove any occurrence of lexemes in <replaceable class="parameter">lexemes</replaceable> from <replaceable class="parameter">vector</replaceable></entry> @@ -9623,7 +9623,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_filter</primary> </indexterm> - <literal><function>ts_filter(<replaceable class="parameter">vector</replaceable> <type>tsvector</>, <replaceable class="parameter">weights</replaceable> <type>"char"[]</>)</function></literal> + <literal><function>ts_filter(<replaceable 
class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">weights</replaceable> <type>"char"[]</type>)</function></literal> </entry> <entry><type>tsvector</type></entry> <entry>select only elements with given <replaceable class="parameter">weights</replaceable> from <replaceable class="parameter">vector</replaceable></entry> @@ -9635,7 +9635,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_headline</primary> </indexterm> - <literal><function>ts_headline(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</>, </optional> <replaceable class="parameter">document</replaceable> <type>text</>, <replaceable class="parameter">query</replaceable> <type>tsquery</> <optional>, <replaceable class="parameter">options</replaceable> <type>text</> </optional>)</function></literal> + <literal><function>ts_headline(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">document</replaceable> <type>text</type>, <replaceable class="parameter">query</replaceable> <type>tsquery</type> <optional>, <replaceable class="parameter">options</replaceable> <type>text</type> </optional>)</function></literal> </entry> <entry><type>text</type></entry> <entry>display a query match</entry> @@ -9644,7 +9644,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple </row> <row> <entry> - <literal><function>ts_headline(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</>, </optional> <replaceable class="parameter">document</replaceable> <type>json(b)</>, <replaceable class="parameter">query</replaceable> <type>tsquery</> <optional>, <replaceable class="parameter">options</replaceable> <type>text</> </optional>)</function></literal> + <literal><function>ts_headline(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, 
</optional> <replaceable class="parameter">document</replaceable> <type>json(b)</type>, <replaceable class="parameter">query</replaceable> <type>tsquery</type> <optional>, <replaceable class="parameter">options</replaceable> <type>text</type> </optional>)</function></literal> </entry> <entry><type>text</type></entry> <entry>display a query match</entry> @@ -9656,7 +9656,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_rank</primary> </indexterm> - <literal><function>ts_rank(<optional> <replaceable class="parameter">weights</replaceable> <type>float4[]</>, </optional> <replaceable class="parameter">vector</replaceable> <type>tsvector</>, <replaceable class="parameter">query</replaceable> <type>tsquery</> <optional>, <replaceable class="parameter">normalization</replaceable> <type>integer</> </optional>)</function></literal> + <literal><function>ts_rank(<optional> <replaceable class="parameter">weights</replaceable> <type>float4[]</type>, </optional> <replaceable class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">query</replaceable> <type>tsquery</type> <optional>, <replaceable class="parameter">normalization</replaceable> <type>integer</type> </optional>)</function></literal> </entry> <entry><type>float4</type></entry> <entry>rank document for query</entry> @@ -9668,7 +9668,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_rank_cd</primary> </indexterm> - <literal><function>ts_rank_cd(<optional> <replaceable class="parameter">weights</replaceable> <type>float4[]</>, </optional> <replaceable class="parameter">vector</replaceable> <type>tsvector</>, <replaceable class="parameter">query</replaceable> <type>tsquery</> <optional>, <replaceable class="parameter">normalization</replaceable> <type>integer</> </optional>)</function></literal> + <literal><function>ts_rank_cd(<optional> <replaceable 
class="parameter">weights</replaceable> <type>float4[]</type>, </optional> <replaceable class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">query</replaceable> <type>tsquery</type> <optional>, <replaceable class="parameter">normalization</replaceable> <type>integer</type> </optional>)</function></literal> </entry> <entry><type>float4</type></entry> <entry>rank document for query using cover density</entry> @@ -9680,18 +9680,18 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_rewrite</primary> </indexterm> - <literal><function>ts_rewrite(<replaceable class="parameter">query</replaceable> <type>tsquery</>, <replaceable class="parameter">target</replaceable> <type>tsquery</>, <replaceable class="parameter">substitute</replaceable> <type>tsquery</>)</function></literal> + <literal><function>ts_rewrite(<replaceable class="parameter">query</replaceable> <type>tsquery</type>, <replaceable class="parameter">target</replaceable> <type>tsquery</type>, <replaceable class="parameter">substitute</replaceable> <type>tsquery</type>)</function></literal> </entry> <entry><type>tsquery</type></entry> - <entry>replace <replaceable>target</> with <replaceable>substitute</> + <entry>replace <replaceable>target</replaceable> with <replaceable>substitute</replaceable> within query</entry> <entry><literal>ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo|bar'::tsquery)</literal></entry> <entry><literal>'b' & ( 'foo' | 'bar' )</literal></entry> </row> <row> - <entry><literal><function>ts_rewrite(<replaceable class="parameter">query</replaceable> <type>tsquery</>, <replaceable class="parameter">select</replaceable> <type>text</>)</function></literal></entry> + <entry><literal><function>ts_rewrite(<replaceable class="parameter">query</replaceable> <type>tsquery</type>, <replaceable class="parameter">select</replaceable> <type>text</type>)</function></literal></entry> 
<entry><type>tsquery</type></entry> - <entry>replace using targets and substitutes from a <command>SELECT</> command</entry> + <entry>replace using targets and substitutes from a <command>SELECT</command> command</entry> <entry><literal>SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases')</literal></entry> <entry><literal>'b' & ( 'foo' | 'bar' )</literal></entry> </row> @@ -9700,22 +9700,22 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>tsquery_phrase</primary> </indexterm> - <literal><function>tsquery_phrase(<replaceable class="parameter">query1</replaceable> <type>tsquery</>, <replaceable class="parameter">query2</replaceable> <type>tsquery</>)</function></literal> + <literal><function>tsquery_phrase(<replaceable class="parameter">query1</replaceable> <type>tsquery</type>, <replaceable class="parameter">query2</replaceable> <type>tsquery</type>)</function></literal> </entry> <entry><type>tsquery</type></entry> - <entry>make query that searches for <replaceable>query1</> followed - by <replaceable>query2</> (same as <literal><-></> + <entry>make query that searches for <replaceable>query1</replaceable> followed + by <replaceable>query2</replaceable> (same as <literal><-></literal> operator)</entry> <entry><literal>tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'))</literal></entry> <entry><literal>'fat' <-> 'cat'</literal></entry> </row> <row> <entry> - <literal><function>tsquery_phrase(<replaceable class="parameter">query1</replaceable> <type>tsquery</>, <replaceable class="parameter">query2</replaceable> <type>tsquery</>, <replaceable class="parameter">distance</replaceable> <type>integer</>)</function></literal> + <literal><function>tsquery_phrase(<replaceable class="parameter">query1</replaceable> <type>tsquery</type>, <replaceable class="parameter">query2</replaceable> <type>tsquery</type>, <replaceable class="parameter">distance</replaceable> <type>integer</type>)</function></literal> 
</entry> <entry><type>tsquery</type></entry> - <entry>make query that searches for <replaceable>query1</> followed by - <replaceable>query2</> at distance <replaceable>distance</></entry> + <entry>make query that searches for <replaceable>query1</replaceable> followed by + <replaceable>query2</replaceable> at distance <replaceable>distance</replaceable></entry> <entry><literal>tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10)</literal></entry> <entry><literal>'fat' <10> 'cat'</literal></entry> </row> @@ -9724,10 +9724,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>tsvector_to_array</primary> </indexterm> - <literal><function>tsvector_to_array(<type>tsvector</>)</function></literal> + <literal><function>tsvector_to_array(<type>tsvector</type>)</function></literal> </entry> <entry><type>text[]</type></entry> - <entry>convert <type>tsvector</> to array of lexemes</entry> + <entry>convert <type>tsvector</type> to array of lexemes</entry> <entry><literal>tsvector_to_array('fat:2,4 cat:3 rat:5A'::tsvector)</literal></entry> <entry><literal>{cat,fat,rat}</literal></entry> </row> @@ -9739,7 +9739,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <literal><function>tsvector_update_trigger()</function></literal> </entry> <entry><type>trigger</type></entry> - <entry>trigger function for automatic <type>tsvector</> column update</entry> + <entry>trigger function for automatic <type>tsvector</type> column update</entry> <entry><literal>CREATE TRIGGER ... 
tsvector_update_trigger(tsvcol, 'pg_catalog.swedish', title, body)</literal></entry> <entry><literal></literal></entry> </row> @@ -9751,7 +9751,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <literal><function>tsvector_update_trigger_column()</function></literal> </entry> <entry><type>trigger</type></entry> - <entry>trigger function for automatic <type>tsvector</> column update</entry> + <entry>trigger function for automatic <type>tsvector</type> column update</entry> <entry><literal>CREATE TRIGGER ... tsvector_update_trigger_column(tsvcol, configcol, title, body)</literal></entry> <entry><literal></literal></entry> </row> @@ -9761,7 +9761,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <primary>unnest</primary> <secondary>for tsvector</secondary> </indexterm> - <literal><function>unnest(<type>tsvector</>, OUT <replaceable class="parameter">lexeme</> <type>text</>, OUT <replaceable class="parameter">positions</> <type>smallint[]</>, OUT <replaceable class="parameter">weights</> <type>text</>)</function></literal> + <literal><function>unnest(<type>tsvector</type>, OUT <replaceable class="parameter">lexeme</replaceable> <type>text</type>, OUT <replaceable class="parameter">positions</replaceable> <type>smallint[]</type>, OUT <replaceable class="parameter">weights</replaceable> <type>text</type>)</function></literal> </entry> <entry><type>setof record</type></entry> <entry>expand a <type>tsvector</type> to a set of rows</entry> @@ -9774,7 +9774,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <note> <para> - All the text search functions that accept an optional <type>regconfig</> + All the text search functions that accept an optional <type>regconfig</type> argument will use the configuration specified by <xref linkend="guc-default-text-search-config"> when that argument is omitted. 
@@ -9807,7 +9807,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_debug</primary> </indexterm> - <literal><function>ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</>, </optional> <replaceable class="parameter">document</replaceable> <type>text</>, OUT <replaceable class="parameter">alias</> <type>text</>, OUT <replaceable class="parameter">description</> <type>text</>, OUT <replaceable class="parameter">token</> <type>text</>, OUT <replaceable class="parameter">dictionaries</> <type>regdictionary[]</>, OUT <replaceable class="parameter">dictionary</> <type>regdictionary</>, OUT <replaceable class="parameter">lexemes</> <type>text[]</>)</function></literal> + <literal><function>ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">document</replaceable> <type>text</type>, OUT <replaceable class="parameter">alias</replaceable> <type>text</type>, OUT <replaceable class="parameter">description</replaceable> <type>text</type>, OUT <replaceable class="parameter">token</replaceable> <type>text</type>, OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>, OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>, OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)</function></literal> </entry> <entry><type>setof record</type></entry> <entry>test a configuration</entry> @@ -9819,7 +9819,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_lexize</primary> </indexterm> - <literal><function>ts_lexize(<replaceable class="parameter">dict</replaceable> <type>regdictionary</>, <replaceable class="parameter">token</replaceable> <type>text</>)</function></literal> + <literal><function>ts_lexize(<replaceable class="parameter">dict</replaceable> 
<type>regdictionary</type>, <replaceable class="parameter">token</replaceable> <type>text</type>)</function></literal> </entry> <entry><type>text[]</type></entry> <entry>test a dictionary</entry> @@ -9831,7 +9831,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_parse</primary> </indexterm> - <literal><function>ts_parse(<replaceable class="parameter">parser_name</replaceable> <type>text</>, <replaceable class="parameter">document</replaceable> <type>text</>, OUT <replaceable class="parameter">tokid</> <type>integer</>, OUT <replaceable class="parameter">token</> <type>text</>)</function></literal> + <literal><function>ts_parse(<replaceable class="parameter">parser_name</replaceable> <type>text</type>, <replaceable class="parameter">document</replaceable> <type>text</type>, OUT <replaceable class="parameter">tokid</replaceable> <type>integer</type>, OUT <replaceable class="parameter">token</replaceable> <type>text</type>)</function></literal> </entry> <entry><type>setof record</type></entry> <entry>test a parser</entry> @@ -9839,7 +9839,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <entry><literal>(1,foo) ...</literal></entry> </row> <row> - <entry><literal><function>ts_parse(<replaceable class="parameter">parser_oid</replaceable> <type>oid</>, <replaceable class="parameter">document</replaceable> <type>text</>, OUT <replaceable class="parameter">tokid</> <type>integer</>, OUT <replaceable class="parameter">token</> <type>text</>)</function></literal></entry> + <entry><literal><function>ts_parse(<replaceable class="parameter">parser_oid</replaceable> <type>oid</type>, <replaceable class="parameter">document</replaceable> <type>text</type>, OUT <replaceable class="parameter">tokid</replaceable> <type>integer</type>, OUT <replaceable class="parameter">token</replaceable> <type>text</type>)</function></literal></entry> <entry><type>setof record</type></entry> <entry>test 
a parser</entry> <entry><literal>ts_parse(3722, 'foo - bar')</literal></entry> @@ -9850,7 +9850,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_token_type</primary> </indexterm> - <literal><function>ts_token_type(<replaceable class="parameter">parser_name</> <type>text</>, OUT <replaceable class="parameter">tokid</> <type>integer</>, OUT <replaceable class="parameter">alias</> <type>text</>, OUT <replaceable class="parameter">description</> <type>text</>)</function></literal> + <literal><function>ts_token_type(<replaceable class="parameter">parser_name</replaceable> <type>text</type>, OUT <replaceable class="parameter">tokid</replaceable> <type>integer</type>, OUT <replaceable class="parameter">alias</replaceable> <type>text</type>, OUT <replaceable class="parameter">description</replaceable> <type>text</type>)</function></literal> </entry> <entry><type>setof record</type></entry> <entry>get token types defined by parser</entry> @@ -9858,7 +9858,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <entry><literal>(1,asciiword,"Word, all ASCII") ...</literal></entry> </row> <row> - <entry><literal><function>ts_token_type(<replaceable class="parameter">parser_oid</> <type>oid</>, OUT <replaceable class="parameter">tokid</> <type>integer</>, OUT <replaceable class="parameter">alias</> <type>text</>, OUT <replaceable class="parameter">description</> <type>text</>)</function></literal></entry> + <entry><literal><function>ts_token_type(<replaceable class="parameter">parser_oid</replaceable> <type>oid</type>, OUT <replaceable class="parameter">tokid</replaceable> <type>integer</type>, OUT <replaceable class="parameter">alias</replaceable> <type>text</type>, OUT <replaceable class="parameter">description</replaceable> <type>text</type>)</function></literal></entry> <entry><type>setof record</type></entry> <entry>get token types defined by parser</entry> 
<entry><literal>ts_token_type(3722)</literal></entry> @@ -9869,10 +9869,10 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple <indexterm> <primary>ts_stat</primary> </indexterm> - <literal><function>ts_stat(<replaceable class="parameter">sqlquery</replaceable> <type>text</>, <optional> <replaceable class="parameter">weights</replaceable> <type>text</>, </optional> OUT <replaceable class="parameter">word</replaceable> <type>text</>, OUT <replaceable class="parameter">ndoc</replaceable> <type>integer</>, OUT <replaceable class="parameter">nentry</replaceable> <type>integer</>)</function></literal> + <literal><function>ts_stat(<replaceable class="parameter">sqlquery</replaceable> <type>text</type>, <optional> <replaceable class="parameter">weights</replaceable> <type>text</type>, </optional> OUT <replaceable class="parameter">word</replaceable> <type>text</type>, OUT <replaceable class="parameter">ndoc</replaceable> <type>integer</type>, OUT <replaceable class="parameter">nentry</replaceable> <type>integer</type>)</function></literal> </entry> <entry><type>setof record</type></entry> - <entry>get statistics of a <type>tsvector</> column</entry> + <entry>get statistics of a <type>tsvector</type> column</entry> <entry><literal>ts_stat('SELECT vector from apod')</literal></entry> <entry><literal>(foo,10,15) ...</literal></entry> </row> @@ -9894,7 +9894,7 @@ CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple and <function>xmlserialize</function> for converting to and from type <type>xml</type> are not repeated here. Use of most of these functions requires the installation to have been built - with <command>configure --with-libxml</>. + with <command>configure --with-libxml</command>. 
</para> <sect2 id="functions-producing-xml"> @@ -10246,7 +10246,7 @@ SELECT xmlagg(x) FROM test; </para> <para> - To determine the order of the concatenation, an <literal>ORDER BY</> + To determine the order of the concatenation, an <literal>ORDER BY</literal> clause may be added to the aggregate call as described in <xref linkend="syntax-aggregates">. For example: @@ -10365,18 +10365,18 @@ SELECT xmlexists('//town[text() = ''Toronto'']' PASSING BY REF '<towns><town>Tor </synopsis> <para> - These functions check whether a <type>text</> string is well-formed XML, + These functions check whether a <type>text</type> string is well-formed XML, returning a Boolean result. <function>xml_is_well_formed_document</function> checks for a well-formed document, while <function>xml_is_well_formed_content</function> checks for well-formed content. <function>xml_is_well_formed</function> does the former if the <xref linkend="guc-xmloption"> configuration - parameter is set to <literal>DOCUMENT</>, or the latter if it is set to - <literal>CONTENT</>. This means that + parameter is set to <literal>DOCUMENT</literal>, or the latter if it is set to + <literal>CONTENT</literal>. This means that <function>xml_is_well_formed</function> is useful for seeing whether - a simple cast to type <type>xml</> will succeed, whereas the other two + a simple cast to type <type>xml</type> will succeed, whereas the other two functions are useful for seeing whether the corresponding variants of - <function>XMLPARSE</> will succeed. + <function>XMLPARSE</function> will succeed. </para> <para> @@ -10446,7 +10446,7 @@ SELECT xml_is_well_formed_document('<pg:foo xmlns:pg="http://postgresql.org/stuf <para> The function <function>xpath</function> evaluates the XPath - expression <replaceable>xpath</replaceable> (a <type>text</> value) + expression <replaceable>xpath</replaceable> (a <type>text</type> value) against the XML value <replaceable>xml</replaceable>. 
It returns an array of XML values corresponding to the node set produced by the XPath expression. @@ -10461,14 +10461,14 @@ SELECT xml_is_well_formed_document('<pg:foo xmlns:pg="http://postgresql.org/stuf <para> The optional third argument of the function is an array of namespace - mappings. This array should be a two-dimensional <type>text</> array with + mappings. This array should be a two-dimensional <type>text</type> array with the length of the second axis being equal to 2 (i.e., it should be an array of arrays, each of which consists of exactly 2 elements). The first element of each array entry is the namespace name (alias), the second the namespace URI. It is not required that aliases provided in this array be the same as those being used in the XML document itself (in other words, both in the XML document and in the <function>xpath</function> - function context, aliases are <emphasis>local</>). + function context, aliases are <emphasis>local</emphasis>). </para> <para> @@ -10514,7 +10514,7 @@ SELECT xpath('//mydefns:b/text()', '<a xmlns="http://example.com"><b>test</b></a of the <function>xpath</function> function. Instead of returning the individual XML values that satisfy the XPath, this function returns a Boolean indicating whether the query was satisfied or not. This - function is equivalent to the standard <literal>XMLEXISTS</> predicate, + function is equivalent to the standard <literal>XMLEXISTS</literal> predicate, except that it also offers support for a namespace mapping argument. </para> @@ -10560,21 +10560,21 @@ SELECT xpath_exists('/my:a/text()', '<my:a xmlns:my="http://example.com">test</m </para> <para> - The optional <literal>XMLNAMESPACES</> clause is a comma-separated + The optional <literal>XMLNAMESPACES</literal> clause is a comma-separated list of namespaces. It specifies the XML namespaces used in the document and their aliases. A default namespace specification is not currently supported. 
</para> <para> - The required <replaceable>row_expression</> argument is an XPath + The required <replaceable>row_expression</replaceable> argument is an XPath expression that is evaluated against the supplied XML document to obtain an ordered sequence of XML nodes. This sequence is what - <function>xmltable</> transforms into output rows. + <function>xmltable</function> transforms into output rows. </para> <para> - <replaceable>document_expression</> provides the XML document to + <replaceable>document_expression</replaceable> provides the XML document to operate on. The <literal>BY REF</literal> clauses have no effect in PostgreSQL, but are allowed for SQL conformance and compatibility with other @@ -10586,9 +10586,9 @@ SELECT xpath_exists('/my:a/text()', '<my:a xmlns:my="http://example.com">test</m <para> The mandatory <literal>COLUMNS</literal> clause specifies the list of columns in the output table. - If the <literal>COLUMNS</> clause is omitted, the rows in the result - set contain a single column of type <literal>xml</> containing the - data matched by <replaceable>row_expression</>. + If the <literal>COLUMNS</literal> clause is omitted, the rows in the result + set contain a single column of type <literal>xml</literal> containing the + data matched by <replaceable>row_expression</replaceable>. If <literal>COLUMNS</literal> is specified, each entry describes a single column. See the syntax summary above for the format. @@ -10604,10 +10604,10 @@ SELECT xpath_exists('/my:a/text()', '<my:a xmlns:my="http://example.com">test</m </para> <para> - The <literal>column_expression</> for a column is an XPath expression + The <literal>column_expression</literal> for a column is an XPath expression that is evaluated for each row, relative to the result of the - <replaceable>row_expression</>, to find the value of the column. - If no <literal>column_expression</> is given, then the column name + <replaceable>row_expression</replaceable>, to find the value of the column. 
+ If no <literal>column_expression</literal> is given, then the column name is used as an implicit path. </para> @@ -10615,55 +10615,55 @@ SELECT xpath_exists('/my:a/text()', '<my:a xmlns:my="http://example.com">test</m If a column's XPath expression returns multiple elements, an error is raised. If the expression matches an empty tag, the result is an - empty string (not <literal>NULL</>). - Any <literal>xsi:nil</> attributes are ignored. + empty string (not <literal>NULL</literal>). + Any <literal>xsi:nil</literal> attributes are ignored. </para> <para> - The text body of the XML matched by the <replaceable>column_expression</> + The text body of the XML matched by the <replaceable>column_expression</replaceable> is used as the column value. Multiple <literal>text()</literal> nodes within an element are concatenated in order. Any child elements, processing instructions, and comments are ignored, but the text contents of child elements are concatenated to the result. - Note that the whitespace-only <literal>text()</> node between two non-text - elements is preserved, and that leading whitespace on a <literal>text()</> + Note that the whitespace-only <literal>text()</literal> node between two non-text + elements is preserved, and that leading whitespace on a <literal>text()</literal> node is not flattened. </para> <para> If the path expression does not match for a given row but - <replaceable>default_expression</> is specified, the value resulting + <replaceable>default_expression</replaceable> is specified, the value resulting from evaluating that expression is used. - If no <literal>DEFAULT</> clause is given for the column, - the field will be set to <literal>NULL</>. - It is possible for a <replaceable>default_expression</> to reference + If no <literal>DEFAULT</literal> clause is given for the column, + the field will be set to <literal>NULL</literal>. 
+ It is possible for a <replaceable>default_expression</replaceable> to reference the value of output columns that appear prior to it in the column list, so the default of one column may be based on the value of another column. </para> <para> - Columns may be marked <literal>NOT NULL</>. If the - <replaceable>column_expression</> for a <literal>NOT NULL</> column - does not match anything and there is no <literal>DEFAULT</> or the - <replaceable>default_expression</> also evaluates to null, an error + Columns may be marked <literal>NOT NULL</literal>. If the + <replaceable>column_expression</replaceable> for a <literal>NOT NULL</literal> column + does not match anything and there is no <literal>DEFAULT</literal> or the + <replaceable>default_expression</replaceable> also evaluates to null, an error is reported. </para> <para> - Unlike regular PostgreSQL functions, <replaceable>column_expression</> - and <replaceable>default_expression</> are not evaluated to a simple + Unlike regular PostgreSQL functions, <replaceable>column_expression</replaceable> + and <replaceable>default_expression</replaceable> are not evaluated to a simple value before calling the function. - <replaceable>column_expression</> is normally evaluated - exactly once per input row, and <replaceable>default_expression</> + <replaceable>column_expression</replaceable> is normally evaluated + exactly once per input row, and <replaceable>default_expression</replaceable> is evaluated each time a default is needed for a field. If the expression qualifies as stable or immutable the repeat evaluation may be skipped. - Effectively <function>xmltable</> behaves more like a subquery than a + Effectively <function>xmltable</function> behaves more like a subquery than a function call. 
This means that you can usefully use volatile functions like - <function>nextval</> in <replaceable>default_expression</>, and - <replaceable>column_expression</> may depend on other parts of the + <function>nextval</function> in <replaceable>default_expression</replaceable>, and + <replaceable>column_expression</replaceable> may depend on other parts of the XML document. </para> @@ -11029,7 +11029,7 @@ table2-mapping </para> <table id="functions-json-op-table"> - <title><type>json</> and <type>jsonb</> Operators</title> + <title><type>json</type> and <type>jsonb</type> Operators</title> <tgroup cols="5"> <thead> <row> @@ -11059,14 +11059,14 @@ table2-mapping <row> <entry><literal>->></literal></entry> <entry><type>int</type></entry> - <entry>Get JSON array element as <type>text</></entry> + <entry>Get JSON array element as <type>text</type></entry> <entry><literal>'[1,2,3]'::json->>2</literal></entry> <entry><literal>3</literal></entry> </row> <row> <entry><literal>->></literal></entry> <entry><type>text</type></entry> - <entry>Get JSON object field as <type>text</></entry> + <entry>Get JSON object field as <type>text</type></entry> <entry><literal>'{"a":1,"b":2}'::json->>'b'</literal></entry> <entry><literal>2</literal></entry> </row> @@ -11080,7 +11080,7 @@ table2-mapping <row> <entry><literal>#>></literal></entry> <entry><type>text[]</type></entry> - <entry>Get JSON object at specified path as <type>text</></entry> + <entry>Get JSON object at specified path as <type>text</type></entry> <entry><literal>'{"a":[1,2,3],"b":[4,5,6]}'::json#>>'{a,2}'</literal></entry> <entry><literal>3</literal></entry> </row> @@ -11095,7 +11095,7 @@ table2-mapping The field/element/path extraction operators return the same type as their left-hand input (either <type>json</type> or <type>jsonb</type>), except for those specified as - returning <type>text</>, which coerce the value to text. + returning <type>text</type>, which coerce the value to text. 
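The distinction the table draws between the JSON-returning and text-returning extraction operators can be sketched outside PostgreSQL. The Python functions below (illustrative names, not part of any real API) mimic the documented behavior of `->` and `->>` for a single field or array element — this is an emulation of the described semantics, not the server's implementation:

```python
import json

def json_get(doc: str, key):
    """Sketch of the -> operator: extract a field/element, keeping JSON form.

    Like the documented operators, return None (SQL NULL) rather than
    failing when the requested field or element does not exist.
    """
    try:
        val = json.loads(doc)[key]
    except (KeyError, IndexError, TypeError):
        return None
    return json.dumps(val)            # a string stays quoted: '"x"'

def json_get_text(doc: str, key):
    """Sketch of the ->> operator: extract and coerce the value to text."""
    try:
        val = json.loads(doc)[key]
    except (KeyError, IndexError, TypeError):
        return None
    if val is None:
        return None                   # JSON null comes back as SQL NULL
    if isinstance(val, bool):
        return "true" if val else "false"
    if isinstance(val, (dict, list)):
        return json.dumps(val)        # objects/arrays keep their JSON text
    return str(val)                   # strings lose their quotes, numbers print plainly
```

For example, `json_get('{"a":1,"b":"x"}', 'b')` yields `'"x"'` (still JSON), while `json_get_text('{"a":1,"b":"x"}', 'b')` yields plain `'x'`, matching the table's `->` versus `->>` rows.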
The field/element/path extraction operators return NULL, rather than failing, if the JSON input does not have the right structure to match the request; for example if no such element exists. The @@ -11115,14 +11115,14 @@ table2-mapping Some further operators also exist only for <type>jsonb</type>, as shown in <xref linkend="functions-jsonb-op-table">. Many of these operators can be indexed by - <type>jsonb</> operator classes. For a full description of - <type>jsonb</> containment and existence semantics, see <xref + <type>jsonb</type> operator classes. For a full description of + <type>jsonb</type> containment and existence semantics, see <xref linkend="json-containment">. <xref linkend="json-indexing"> describes how these operators can be used to effectively index - <type>jsonb</> data. + <type>jsonb</type> data. </para> <table id="functions-jsonb-op-table"> - <title>Additional <type>jsonb</> Operators</title> + <title>Additional <type>jsonb</type> Operators</title> <tgroup cols="4"> <thead> <row> @@ -11211,7 +11211,7 @@ table2-mapping <note> <para> - The <literal>||</> operator concatenates the elements at the top level of + The <literal>||</literal> operator concatenates the elements at the top level of each of its operands. It does not operate recursively. For example, if both operands are objects with a common key field name, the value of the field in the result will just be the value from the right hand operand. @@ -11221,8 +11221,8 @@ table2-mapping <para> <xref linkend="functions-json-creation-table"> shows the functions that are available for creating <type>json</type> and <type>jsonb</type> values. - (There are no equivalent functions for <type>jsonb</>, of the <literal>row_to_json</> - and <literal>array_to_json</> functions. However, the <literal>to_jsonb</> + (There are no equivalent functions for <type>jsonb</type>, of the <literal>row_to_json</literal> + and <literal>array_to_json</literal> functions. 
However, the <literal>to_jsonb</literal> function supplies much the same functionality as these functions would.) </para> @@ -11274,14 +11274,14 @@ table2-mapping </para><para><literal>to_jsonb(anyelement)</literal> </para></entry> <entry> - Returns the value as <type>json</> or <type>jsonb</>. + Returns the value as <type>json</type> or <type>jsonb</type>. Arrays and composites are converted (recursively) to arrays and objects; otherwise, if there is a cast from the type to <type>json</type>, the cast function will be used to perform the conversion; otherwise, a scalar value is produced. For any scalar type other than a number, a Boolean, or a null value, the text representation will be used, in such a fashion that it is a - valid <type>json</> or <type>jsonb</> value. + valid <type>json</type> or <type>jsonb</type> value. </entry> <entry><literal>to_json('Fred said "Hi."'::text)</literal></entry> <entry><literal>"Fred said \"Hi.\""</literal></entry> @@ -11343,8 +11343,8 @@ table2-mapping such that each inner array has exactly two elements, which are taken as a key/value pair. </entry> - <entry><para><literal>json_object('{a, 1, b, "def", c, 3.5}')</></para> - <para><literal>json_object('{{a, 1},{b, "def"},{c, 3.5}}')</></para></entry> + <entry><para><literal>json_object('{a, 1, b, "def", c, 3.5}')</literal></para> + <para><literal>json_object('{{a, 1},{b, "def"},{c, 3.5}}')</literal></para></entry> <entry><literal>{"a": "1", "b": "def", "c": "3.5"}</literal></entry> </row> <row> @@ -11352,7 +11352,7 @@ table2-mapping </para><para><literal>jsonb_object(keys text[], values text[])</literal> </para></entry> <entry> - This form of <function>json_object</> takes keys and values pairwise from two separate + This form of <function>json_object</function> takes keys and values pairwise from two separate arrays. In all other respects it is identical to the one-argument form. 
</entry> <entry><literal>json_object('{a, b}', '{1,2}')</literal></entry> @@ -11364,9 +11364,9 @@ table2-mapping <note> <para> - <function>array_to_json</> and <function>row_to_json</> have the same - behavior as <function>to_json</> except for offering a pretty-printing - option. The behavior described for <function>to_json</> likewise applies + <function>array_to_json</function> and <function>row_to_json</function> have the same + behavior as <function>to_json</function> except for offering a pretty-printing + option. The behavior described for <function>to_json</function> likewise applies to each individual value converted by the other JSON creation functions. </para> </note> @@ -11530,7 +11530,7 @@ table2-mapping <entry><type>setof key text, value text</type></entry> <entry> Expands the outermost JSON object into a set of key/value pairs. The - returned values will be of type <type>text</>. + returned values will be of type <type>text</type>. </entry> <entry><literal>select * from json_each_text('{"a":"foo", "b":"bar"}')</literal></entry> <entry> @@ -11562,7 +11562,7 @@ table2-mapping <entry><type>text</type></entry> <entry> Returns JSON value pointed to by <replaceable>path_elems</replaceable> - as <type>text</> + as <type>text</type> (equivalent to <literal>#>></literal> operator). </entry> <entry><literal>json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6')</literal></entry> @@ -11593,7 +11593,7 @@ table2-mapping <entry><type>anyelement</type></entry> <entry> Expands the object in <replaceable>from_json</replaceable> to a row - whose columns match the record type defined by <replaceable>base</> + whose columns match the record type defined by <replaceable>base</replaceable> (see note below). 
</entry> <entry><literal>select * from json_populate_record(null::myrowtype, '{"a": 1, "b": ["2", "a b"], "c": {"d": 4, "e": "a b c"}}')</literal></entry> @@ -11613,7 +11613,7 @@ table2-mapping <entry> Expands the outermost array of objects in <replaceable>from_json</replaceable> to a set of rows whose - columns match the record type defined by <replaceable>base</> (see + columns match the record type defined by <replaceable>base</replaceable> (see note below). </entry> <entry><literal>select * from json_populate_recordset(null::myrowtype, '[{"a":1,"b":2},{"a":3,"b":4}]')</literal></entry> @@ -11653,7 +11653,7 @@ table2-mapping </para></entry> <entry><type>setof text</type></entry> <entry> - Expands a JSON array to a set of <type>text</> values. + Expands a JSON array to a set of <type>text</type> values. </entry> <entry><literal>select * from json_array_elements_text('["foo", "bar"]')</literal></entry> <entry> @@ -11673,8 +11673,8 @@ table2-mapping <entry> Returns the type of the outermost JSON value as a text string. Possible types are - <literal>object</>, <literal>array</>, <literal>string</>, <literal>number</>, - <literal>boolean</>, and <literal>null</>. + <literal>object</literal>, <literal>array</literal>, <literal>string</literal>, <literal>number</literal>, + <literal>boolean</literal>, and <literal>null</literal>. </entry> <entry><literal>json_typeof('-123.4')</literal></entry> <entry><literal>number</literal></entry> @@ -11686,8 +11686,8 @@ table2-mapping <entry><type>record</type></entry> <entry> Builds an arbitrary record from a JSON object (see note below). As - with all functions returning <type>record</>, the caller must - explicitly define the structure of the record with an <literal>AS</> + with all functions returning <type>record</type>, the caller must + explicitly define the structure of the record with an <literal>AS</literal> clause. 
</entry> <entry><literal>select * from json_to_record('{"a":1,"b":[1,2,3],"c":[1,2,3],"e":"bar","r": {"a": 123, "b": "a b c"}}') as x(a int, b text, c int[], d text, r myrowtype) </literal></entry> @@ -11706,9 +11706,9 @@ table2-mapping <entry><type>setof record</type></entry> <entry> Builds an arbitrary set of records from a JSON array of objects (see - note below). As with all functions returning <type>record</>, the + note below). As with all functions returning <type>record</type>, the caller must explicitly define the structure of the record with - an <literal>AS</> clause. + an <literal>AS</literal> clause. </entry> <entry><literal>select * from json_to_recordset('[{"a":1,"b":"foo"},{"a":"2","c":"bar"}]') as x(a int, b text);</literal></entry> <entry> @@ -11743,7 +11743,7 @@ table2-mapping replaced by <replaceable>new_value</replaceable>, or with <replaceable>new_value</replaceable> added if <replaceable>create_missing</replaceable> is true ( default is - <literal>true</>) and the item + <literal>true</literal>) and the item designated by <replaceable>path</replaceable> does not exist. As with the path orientated operators, negative integers that appear in <replaceable>path</replaceable> count from the end @@ -11770,7 +11770,7 @@ table2-mapping <replaceable>path</replaceable> is in a JSONB array, <replaceable>new_value</replaceable> will be inserted before target or after if <replaceable>insert_after</replaceable> is true (default is - <literal>false</>). If <replaceable>target</replaceable> section + <literal>false</literal>). If <replaceable>target</replaceable> section designated by <replaceable>path</replaceable> is in JSONB object, <replaceable>new_value</replaceable> will be inserted only if <replaceable>target</replaceable> does not exist. As with the path @@ -11820,17 +11820,17 @@ table2-mapping <para> Many of these functions and operators will convert Unicode escapes in JSON strings to the appropriate single character. 
This is a non-issue - if the input is type <type>jsonb</>, because the conversion was already - done; but for <type>json</> input, this may result in throwing an error, + if the input is type <type>jsonb</type>, because the conversion was already + done; but for <type>json</type> input, this may result in throwing an error, as noted in <xref linkend="datatype-json">. </para> </note> <note> <para> - In <function>json_populate_record</>, <function>json_populate_recordset</>, - <function>json_to_record</> and <function>json_to_recordset</>, - type coercion from the JSON is <quote>best effort</> and may not result + In <function>json_populate_record</function>, <function>json_populate_recordset</function>, + <function>json_to_record</function> and <function>json_to_recordset</function>, + type coercion from the JSON is <quote>best effort</quote> and may not result in desired values for some types. JSON keys are matched to identical column names in the target row type. JSON fields that do not appear in the target row type will be omitted from the output, and @@ -11840,18 +11840,18 @@ table2-mapping <note> <para> - All the items of the <literal>path</> parameter of <literal>jsonb_set</> - as well as <literal>jsonb_insert</> except the last item must be present - in the <literal>target</>. If <literal>create_missing</> is false, all - items of the <literal>path</> parameter of <literal>jsonb_set</> must be - present. If these conditions are not met the <literal>target</> is + All the items of the <literal>path</literal> parameter of <literal>jsonb_set</literal> + as well as <literal>jsonb_insert</literal> except the last item must be present + in the <literal>target</literal>. If <literal>create_missing</literal> is false, all + items of the <literal>path</literal> parameter of <literal>jsonb_set</literal> must be + present. If these conditions are not met the <literal>target</literal> is returned unchanged. 
</para> <para> If the last path item is an object key, it will be created if it is absent and given the new value. If the last path item is an array index, if it is positive the item to set is found by counting from - the left, and if negative by counting from the right - <literal>-1</> + the left, and if negative by counting from the right - <literal>-1</literal> designates the rightmost element, and so on. If the item is out of the range -array_length .. array_length -1, and create_missing is true, the new value is added at the beginning @@ -11862,20 +11862,20 @@ table2-mapping <note> <para> - The <literal>json_typeof</> function's <literal>null</> return value + The <literal>json_typeof</literal> function's <literal>null</literal> return value should not be confused with a SQL NULL. While - calling <literal>json_typeof('null'::json)</> will - return <literal>null</>, calling <literal>json_typeof(NULL::json)</> + calling <literal>json_typeof('null'::json)</literal> will + return <literal>null</literal>, calling <literal>json_typeof(NULL::json)</literal> will return a SQL NULL. </para> </note> <note> <para> - If the argument to <literal>json_strip_nulls</> contains duplicate + If the argument to <literal>json_strip_nulls</literal> contains duplicate field names in any object, the result could be semantically somewhat different, depending on the order in which they occur. This is not an - issue for <literal>jsonb_strip_nulls</> since <type>jsonb</type> values never have + issue for <literal>jsonb_strip_nulls</literal> since <type>jsonb</type> values never have duplicate object field names. </para> </note> @@ -11886,7 +11886,7 @@ table2-mapping values as JSON, and the aggregate function <function>json_object_agg</function> which aggregates pairs of values into a JSON object, and their <type>jsonb</type> equivalents, - <function>jsonb_agg</> and <function>jsonb_object_agg</>. + <function>jsonb_agg</function> and <function>jsonb_object_agg</function>. 
</para> </sect1> @@ -11963,52 +11963,52 @@ table2-mapping <para> The sequence to be operated on by a sequence function is specified by - a <type>regclass</> argument, which is simply the OID of the sequence in the - <structname>pg_class</> system catalog. You do not have to look up the - OID by hand, however, since the <type>regclass</> data type's input + a <type>regclass</type> argument, which is simply the OID of the sequence in the + <structname>pg_class</structname> system catalog. You do not have to look up the + OID by hand, however, since the <type>regclass</type> data type's input converter will do the work for you. Just write the sequence name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary <acronym>SQL</acronym> names, the string will be converted to lower case unless it contains double quotes around the sequence name. Thus: <programlisting> -nextval('foo') <lineannotation>operates on sequence <literal>foo</literal></> -nextval('FOO') <lineannotation>operates on sequence <literal>foo</literal></> -nextval('"Foo"') <lineannotation>operates on sequence <literal>Foo</literal></> +nextval('foo') <lineannotation>operates on sequence <literal>foo</literal></lineannotation> +nextval('FOO') <lineannotation>operates on sequence <literal>foo</literal></lineannotation> +nextval('"Foo"') <lineannotation>operates on sequence <literal>Foo</literal></lineannotation> </programlisting> The sequence name can be schema-qualified if necessary: <programlisting> -nextval('myschema.foo') <lineannotation>operates on <literal>myschema.foo</literal></> +nextval('myschema.foo') <lineannotation>operates on <literal>myschema.foo</literal></lineannotation> nextval('"myschema".foo') <lineannotation>same as above</lineannotation> -nextval('foo') <lineannotation>searches search path for <literal>foo</literal></> +nextval('foo') <lineannotation>searches search path for <literal>foo</literal></lineannotation> 
</programlisting> See <xref linkend="datatype-oid"> for more information about - <type>regclass</>. + <type>regclass</type>. </para> <note> <para> Before <productname>PostgreSQL</productname> 8.1, the arguments of the - sequence functions were of type <type>text</>, not <type>regclass</>, and + sequence functions were of type <type>text</type>, not <type>regclass</type>, and the above-described conversion from a text string to an OID value would happen at run time during each call. For backward compatibility, this facility still exists, but internally it is now handled as an implicit - coercion from <type>text</> to <type>regclass</> before the function is + coercion from <type>text</type> to <type>regclass</type> before the function is invoked. </para> <para> When you write the argument of a sequence function as an unadorned - literal string, it becomes a constant of type <type>regclass</>. + literal string, it becomes a constant of type <type>regclass</type>. Since this is really just an OID, it will track the originally identified sequence despite later renaming, schema reassignment, - etc. This <quote>early binding</> behavior is usually desirable for + etc. This <quote>early binding</quote> behavior is usually desirable for sequence references in column defaults and views. But sometimes you might - want <quote>late binding</> where the sequence reference is resolved + want <quote>late binding</quote> where the sequence reference is resolved at run time. 
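The early-versus-late binding distinction described above can be sketched outside PostgreSQL. In this Python sketch, the `registry` dictionary and the class names are illustrative stand-ins for the `pg_class` catalog and for the `regclass` and `text` argument forms; the OID values are made up:

```python
# Hypothetical name registry standing in for pg_class: name -> OID.
registry = {"foo": 16384}

class EarlyBound:
    """Sketch of a regclass constant: the name is resolved to an OID once,
    at binding time, so later renames do not affect the reference."""
    def __init__(self, name):
        self.oid = registry[name]      # resolved immediately, kept forever

    def target(self):
        return self.oid

class LateBound:
    """Sketch of a text constant: the name itself is stored and is looked
    up again on every use."""
    def __init__(self, name):
        self.name = name

    def target(self):
        return registry[self.name]     # resolved at run time
```

Renaming the sequence and creating a new one under the old name shows the difference: the early-bound reference keeps tracking the original object, while the late-bound one resolves to whatever currently bears the name.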
To get late-binding behavior, force the constant to be - stored as a <type>text</> constant instead of <type>regclass</>: + stored as a <type>text</type> constant instead of <type>regclass</type>: <programlisting> -nextval('foo'::text) <lineannotation><literal>foo</literal> is looked up at runtime</> +nextval('foo'::text) <lineannotation><literal>foo</literal> is looked up at runtime</lineannotation> </programlisting> Note that late binding was the only behavior supported in <productname>PostgreSQL</productname> releases before 8.1, so you @@ -12051,14 +12051,14 @@ nextval('foo'::text) <lineannotation><literal>foo</literal> is looked up at rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends - up not using the value. For example an <command>INSERT</> with - an <literal>ON CONFLICT</> clause will compute the to-be-inserted + up not using the value. For example an <command>INSERT</command> with + an <literal>ON CONFLICT</literal> clause will compute the to-be-inserted tuple, including doing any required <function>nextval</function> calls, before detecting any conflict that would cause it to follow - the <literal>ON CONFLICT</> rule instead. Such cases will leave + the <literal>ON CONFLICT</literal> rule instead. Such cases will leave unused <quote>holes</quote> in the sequence of assigned values. - Thus, <productname>PostgreSQL</> sequence objects <emphasis>cannot - be used to obtain <quote>gapless</> sequences</emphasis>. + Thus, <productname>PostgreSQL</productname> sequence objects <emphasis>cannot + be used to obtain <quote>gapless</quote> sequences</emphasis>. </para> </important> @@ -12094,7 +12094,7 @@ nextval('foo'::text) <lineannotation><literal>foo</literal> is looked up at <listitem> <para> Return the value most recently returned by - <function>nextval</> in the current session. 
This function is + <function>nextval</function> in the current session. This function is identical to <function>currval</function>, except that instead of taking the sequence name as an argument it refers to whichever sequence <function>nextval</function> was most recently applied to @@ -12119,20 +12119,20 @@ nextval('foo'::text) <lineannotation><literal>foo</literal> is looked up at specified value and sets its <literal>is_called</literal> field to <literal>true</literal>, meaning that the next <function>nextval</function> will advance the sequence before - returning a value. The value reported by <function>currval</> is + returning a value. The value reported by <function>currval</function> is also set to the specified value. In the three-parameter form, <literal>is_called</literal> can be set to either <literal>true</literal> - or <literal>false</literal>. <literal>true</> has the same effect as + or <literal>false</literal>. <literal>true</literal> has the same effect as the two-parameter form. If it is set to <literal>false</literal>, the next <function>nextval</function> will return exactly the specified value, and sequence advancement commences with the following <function>nextval</function>. Furthermore, the value reported by - <function>currval</> is not changed in this case. For example, + <function>currval</function> is not changed in this case. 
For example, <screen> -SELECT setval('foo', 42); <lineannotation>Next <function>nextval</> will return 43</lineannotation> +SELECT setval('foo', 42); <lineannotation>Next <function>nextval</function> will return 43</lineannotation> SELECT setval('foo', 42, true); <lineannotation>Same as above</lineannotation> -SELECT setval('foo', 42, false); <lineannotation>Next <function>nextval</> will return 42</lineannotation> +SELECT setval('foo', 42, false); <lineannotation>Next <function>nextval</function> will return 42</lineannotation> </screen> The result returned by <function>setval</function> is just the value of its @@ -12183,7 +12183,7 @@ SELECT setval('foo', 42, false); <lineannotation>Next <function>nextval</> wi </tip> <sect2 id="functions-case"> - <title><literal>CASE</></title> + <title><literal>CASE</literal></title> <para> The <acronym>SQL</acronym> <token>CASE</token> expression is a @@ -12206,7 +12206,7 @@ END condition's result is not true, any subsequent <token>WHEN</token> clauses are examined in the same manner. If no <token>WHEN</token> <replaceable>condition</replaceable> yields true, the value of the - <token>CASE</> expression is the <replaceable>result</replaceable> of the + <token>CASE</token> expression is the <replaceable>result</replaceable> of the <token>ELSE</token> clause. If the <token>ELSE</token> clause is omitted and no condition is true, the result is null. </para> @@ -12245,7 +12245,7 @@ SELECT a, </para> <para> - There is a <quote>simple</> form of <token>CASE</token> expression + There is a <quote>simple</quote> form of <token>CASE</token> expression that is a variant of the general form above: <synopsis> @@ -12299,7 +12299,7 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; situations in which subexpressions of an expression are evaluated at different times, so that the principle that <quote><token>CASE</token> evaluates only necessary subexpressions</quote> is not ironclad. 
For - example a constant <literal>1/0</> subexpression will usually result in + example a constant <literal>1/0</literal> subexpression will usually result in a division-by-zero failure at planning time, even if it's within a <token>CASE</token> arm that would never be entered at run time. </para> @@ -12307,7 +12307,7 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; </sect2> <sect2 id="functions-coalesce-nvl-ifnull"> - <title><literal>COALESCE</></title> + <title><literal>COALESCE</literal></title> <indexterm> <primary>COALESCE</primary> @@ -12333,8 +12333,8 @@ SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END; <programlisting> SELECT COALESCE(description, short_description, '(none)') ... </programlisting> - This returns <varname>description</> if it is not null, otherwise - <varname>short_description</> if it is not null, otherwise <literal>(none)</>. + This returns <varname>description</varname> if it is not null, otherwise + <varname>short_description</varname> if it is not null, otherwise <literal>(none)</literal>. </para> <para> @@ -12342,13 +12342,13 @@ SELECT COALESCE(description, short_description, '(none)') ... evaluates the arguments that are needed to determine the result; that is, arguments to the right of the first non-null argument are not evaluated. This SQL-standard function provides capabilities similar - to <function>NVL</> and <function>IFNULL</>, which are used in some other + to <function>NVL</function> and <function>IFNULL</function>, which are used in some other database systems. </para> </sect2> <sect2 id="functions-nullif"> - <title><literal>NULLIF</></title> + <title><literal>NULLIF</literal></title> <indexterm> <primary>NULLIF</primary> @@ -12369,7 +12369,7 @@ SELECT NULLIF(value, '(none)') ... 
</programlisting> </para> <para> - In this example, if <literal>value</literal> is <literal>(none)</>, + In this example, if <literal>value</literal> is <literal>(none)</literal>, null is returned, otherwise the value of <literal>value</literal> is returned. </para> @@ -12394,7 +12394,7 @@ SELECT NULLIF(value, '(none)') ... </synopsis> <para> - The <function>GREATEST</> and <function>LEAST</> functions select the + The <function>GREATEST</function> and <function>LEAST</function> functions select the largest or smallest value from a list of any number of expressions. The expressions must all be convertible to a common data type, which will be the type of the result @@ -12404,7 +12404,7 @@ SELECT NULLIF(value, '(none)') ... </para> <para> - Note that <function>GREATEST</> and <function>LEAST</> are not in + Note that <function>GREATEST</function> and <function>LEAST</function> are not in the SQL standard, but are a common extension. Some other databases make them return NULL if any argument is NULL, rather than only when all are NULL. @@ -12534,7 +12534,7 @@ SELECT NULLIF(value, '(none)') ... If the contents of two arrays are equal but the dimensionality is different, the first difference in the dimensionality information determines the sort order. (This is a change from versions of - <productname>PostgreSQL</> prior to 8.2: older versions would claim + <productname>PostgreSQL</productname> prior to 8.2: older versions would claim that two arrays with the same contents were equal, even if the number of dimensions or subscript ranges were different.) </para> @@ -12833,7 +12833,7 @@ NULL baz</literallayout>(3 rows)</entry> </table> <para> - In <function>array_position</function> and <function>array_positions</>, + In <function>array_position</function> and <function>array_positions</function>, each array element is compared to the searched value using <literal>IS NOT DISTINCT FROM</literal> semantics. 
</para> @@ -12868,8 +12868,8 @@ NULL baz</literallayout>(3 rows)</entry> <note> <para> - There are two differences in the behavior of <function>string_to_array</> - from pre-9.1 versions of <productname>PostgreSQL</>. + There are two differences in the behavior of <function>string_to_array</function> + from pre-9.1 versions of <productname>PostgreSQL</productname>. First, it will return an empty (zero-element) array rather than NULL when the input string is of zero length. Second, if the delimiter string is NULL, the function splits the input into individual characters, rather @@ -13198,7 +13198,7 @@ NULL baz</literallayout>(3 rows)</entry> </table> <para> - The <function>lower</> and <function>upper</> functions return null + The <function>lower</function> and <function>upper</function> functions return null if the range is empty or the requested bound is infinite. The <function>lower_inc</function>, <function>upper_inc</function>, <function>lower_inf</function>, and <function>upper_inf</function> @@ -13550,7 +13550,7 @@ NULL baz</literallayout>(3 rows)</entry> <type>smallint</type>, <type>int</type>, <type>bigint</type>, <type>real</type>, <type>double precision</type>, <type>numeric</type>, - <type>interval</type>, or <type>money</> + <type>interval</type>, or <type>money</type> </entry> <entry> <type>bigint</type> for <type>smallint</type> or @@ -13647,7 +13647,7 @@ SELECT count(*) FROM sometable; aggregate functions, produce meaningfully different result values depending on the order of the input values. This ordering is unspecified by default, but can be controlled by writing an - <literal>ORDER BY</> clause within the aggregate call, as shown in + <literal>ORDER BY</literal> clause within the aggregate call, as shown in <xref linkend="syntax-aggregates">. Alternatively, supplying the input values from a sorted subquery will usually work. 
For example: @@ -14082,9 +14082,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <para> <xref linkend="functions-orderedset-table"> shows some - aggregate functions that use the <firstterm>ordered-set aggregate</> + aggregate functions that use the <firstterm>ordered-set aggregate</firstterm> syntax. These functions are sometimes referred to as <quote>inverse - distribution</> functions. + distribution</quote> functions. </para> <indexterm> @@ -14249,7 +14249,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; window function of the same name defined in <xref linkend="functions-window">. In each case, the aggregate result is the value that the associated window function would have - returned for the <quote>hypothetical</> row constructed from + returned for the <quote>hypothetical</quote> row constructed from <replaceable>args</replaceable>, if such a row had been added to the sorted group of rows computed from the <replaceable>sorted_args</replaceable>. 
</para> @@ -14280,10 +14280,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <function>rank(<replaceable class="parameter">args</replaceable>) WITHIN GROUP (ORDER BY <replaceable class="parameter">sorted_args</replaceable>)</function> </entry> <entry> - <literal>VARIADIC</> <type>"any"</type> + <literal>VARIADIC</literal> <type>"any"</type> </entry> <entry> - <literal>VARIADIC</> <type>"any"</type> + <literal>VARIADIC</literal> <type>"any"</type> </entry> <entry> <type>bigint</type> @@ -14303,10 +14303,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <function>dense_rank(<replaceable class="parameter">args</replaceable>) WITHIN GROUP (ORDER BY <replaceable class="parameter">sorted_args</replaceable>)</function> </entry> <entry> - <literal>VARIADIC</> <type>"any"</type> + <literal>VARIADIC</literal> <type>"any"</type> </entry> <entry> - <literal>VARIADIC</> <type>"any"</type> + <literal>VARIADIC</literal> <type>"any"</type> </entry> <entry> <type>bigint</type> @@ -14326,10 +14326,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <function>percent_rank(<replaceable class="parameter">args</replaceable>) WITHIN GROUP (ORDER BY <replaceable class="parameter">sorted_args</replaceable>)</function> </entry> <entry> - <literal>VARIADIC</> <type>"any"</type> + <literal>VARIADIC</literal> <type>"any"</type> </entry> <entry> - <literal>VARIADIC</> <type>"any"</type> + <literal>VARIADIC</literal> <type>"any"</type> </entry> <entry> <type>double precision</type> @@ -14349,10 +14349,10 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <function>cume_dist(<replaceable class="parameter">args</replaceable>) WITHIN GROUP (ORDER BY <replaceable class="parameter">sorted_args</replaceable>)</function> </entry> <entry> - <literal>VARIADIC</> <type>"any"</type> + <literal>VARIADIC</literal> <type>"any"</type> </entry> <entry> - <literal>VARIADIC</> <type>"any"</type> + <literal>VARIADIC</literal> 
<type>"any"</type> </entry> <entry> <type>double precision</type> @@ -14360,7 +14360,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <entry>No</entry> <entry> relative rank of the hypothetical row, ranging from - 1/<replaceable>N</> to 1 + 1/<replaceable>N</replaceable> to 1 </entry> </row> @@ -14374,7 +14374,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; the aggregated arguments given in <replaceable>sorted_args</replaceable>. Unlike most built-in aggregates, these aggregates are not strict, that is they do not drop input rows containing nulls. Null values sort according - to the rule specified in the <literal>ORDER BY</> clause. + to the rule specified in the <literal>ORDER BY</literal> clause. </para> <table id="functions-grouping-table"> @@ -14413,14 +14413,14 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <para> Grouping operations are used in conjunction with grouping sets (see <xref linkend="queries-grouping-sets">) to distinguish result rows. The - arguments to the <literal>GROUPING</> operation are not actually evaluated, - but they must match exactly expressions given in the <literal>GROUP BY</> + arguments to the <literal>GROUPING</literal> operation are not actually evaluated, + but they must match exactly expressions given in the <literal>GROUP BY</literal> clause of the associated query level. Bits are assigned with the rightmost argument being the least-significant bit; each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the result row, and 1 if it is not. 
For example: <screen> -<prompt>=></> <userinput>SELECT * FROM items_sold;</> +<prompt>=></prompt> <userinput>SELECT * FROM items_sold;</userinput> make | model | sales -------+-------+------- Foo | GT | 10 @@ -14429,7 +14429,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; Bar | Sport | 5 (4 rows) -<prompt>=></> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</> +<prompt>=></prompt> <userinput>SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);</userinput> make | model | grouping | sum -------+-------+----------+----- Foo | GT | 0 | 10 @@ -14464,8 +14464,8 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <para> The built-in window functions are listed in <xref linkend="functions-window-table">. Note that these functions - <emphasis>must</> be invoked using window function syntax, i.e., an - <literal>OVER</> clause is required. + <emphasis>must</emphasis> be invoked using window function syntax, i.e., an + <literal>OVER</literal> clause is required. </para> <para> @@ -14474,7 +14474,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; aggregate (i.e., not ordered-set or hypothetical-set aggregates) can be used as a window function; see <xref linkend="functions-aggregate"> for a list of the built-in aggregates. - Aggregate functions act as window functions only when an <literal>OVER</> + Aggregate functions act as window functions only when an <literal>OVER</literal> clause follows the call; otherwise they act as non-window aggregates and return a single row for the entire set. 
</para> @@ -14515,7 +14515,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <entry> <type>bigint</type> </entry> - <entry>rank of the current row with gaps; same as <function>row_number</> of its first peer</entry> + <entry>rank of the current row with gaps; same as <function>row_number</function> of its first peer</entry> </row> <row> @@ -14541,7 +14541,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <entry> <type>double precision</type> </entry> - <entry>relative rank of the current row: (<function>rank</> - 1) / (total partition rows - 1)</entry> + <entry>relative rank of the current row: (<function>rank</function> - 1) / (total partition rows - 1)</entry> </row> <row> @@ -14562,7 +14562,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <indexterm> <primary>ntile</primary> </indexterm> - <function>ntile(<replaceable class="parameter">num_buckets</replaceable> <type>integer</>)</function> + <function>ntile(<replaceable class="parameter">num_buckets</replaceable> <type>integer</type>)</function> </entry> <entry> <type>integer</type> @@ -14577,9 +14577,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <primary>lag</primary> </indexterm> <function> - lag(<replaceable class="parameter">value</replaceable> <type>anyelement</> - [, <replaceable class="parameter">offset</replaceable> <type>integer</> - [, <replaceable class="parameter">default</replaceable> <type>anyelement</> ]]) + lag(<replaceable class="parameter">value</replaceable> <type>anyelement</type> + [, <replaceable class="parameter">offset</replaceable> <type>integer</type> + [, <replaceable class="parameter">default</replaceable> <type>anyelement</type> ]]) </function> </entry> <entry> @@ -14606,9 +14606,9 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <primary>lead</primary> </indexterm> <function> - lead(<replaceable class="parameter">value</replaceable> <type>anyelement</> - [, <replaceable 
class="parameter">offset</replaceable> <type>integer</> - [, <replaceable class="parameter">default</replaceable> <type>anyelement</> ]]) + lead(<replaceable class="parameter">value</replaceable> <type>anyelement</type> + [, <replaceable class="parameter">offset</replaceable> <type>integer</type> + [, <replaceable class="parameter">default</replaceable> <type>anyelement</type> ]]) </function> </entry> <entry> @@ -14634,7 +14634,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <indexterm> <primary>first_value</primary> </indexterm> - <function>first_value(<replaceable class="parameter">value</replaceable> <type>any</>)</function> + <function>first_value(<replaceable class="parameter">value</replaceable> <type>any</type>)</function> </entry> <entry> <type>same type as <replaceable class="parameter">value</replaceable></type> @@ -14650,7 +14650,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <indexterm> <primary>last_value</primary> </indexterm> - <function>last_value(<replaceable class="parameter">value</replaceable> <type>any</>)</function> + <function>last_value(<replaceable class="parameter">value</replaceable> <type>any</type>)</function> </entry> <entry> <type>same type as <replaceable class="parameter">value</replaceable></type> @@ -14667,7 +14667,7 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <primary>nth_value</primary> </indexterm> <function> - nth_value(<replaceable class="parameter">value</replaceable> <type>any</>, <replaceable class="parameter">nth</replaceable> <type>integer</>) + nth_value(<replaceable class="parameter">value</replaceable> <type>any</type>, <replaceable class="parameter">nth</replaceable> <type>integer</type>) </function> </entry> <entry> @@ -14686,22 +14686,22 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <para> All of the functions listed in <xref linkend="functions-window-table"> depend on the sort ordering - specified by the 
<literal>ORDER BY</> clause of the associated window + specified by the <literal>ORDER BY</literal> clause of the associated window definition. Rows that are not distinct when considering only the - <literal>ORDER BY</> columns are said to be <firstterm>peers</>. - The four ranking functions (including <function>cume_dist</>) are + <literal>ORDER BY</literal> columns are said to be <firstterm>peers</firstterm>. + The four ranking functions (including <function>cume_dist</function>) are defined so that they give the same answer for all peer rows. </para> <para> - Note that <function>first_value</>, <function>last_value</>, and - <function>nth_value</> consider only the rows within the <quote>window - frame</>, which by default contains the rows from the start of the + Note that <function>first_value</function>, <function>last_value</function>, and + <function>nth_value</function> consider only the rows within the <quote>window + frame</quote>, which by default contains the rows from the start of the partition through the last peer of the current row. This is - likely to give unhelpful results for <function>last_value</> and - sometimes also <function>nth_value</>. You can redefine the frame by - adding a suitable frame specification (<literal>RANGE</> or - <literal>ROWS</>) to the <literal>OVER</> clause. + likely to give unhelpful results for <function>last_value</function> and + sometimes also <function>nth_value</function>. You can redefine the frame by + adding a suitable frame specification (<literal>RANGE</literal> or + <literal>ROWS</literal>) to the <literal>OVER</literal> clause. See <xref linkend="syntax-window-functions"> for more information about frame specifications. </para> @@ -14709,34 +14709,34 @@ SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab; <para> When an aggregate function is used as a window function, it aggregates over the rows within the current row's window frame. 
- An aggregate used with <literal>ORDER BY</> and the default window frame - definition produces a <quote>running sum</> type of behavior, which may or + An aggregate used with <literal>ORDER BY</literal> and the default window frame + definition produces a <quote>running sum</quote> type of behavior, which may or may not be what's wanted. To obtain - aggregation over the whole partition, omit <literal>ORDER BY</> or use - <literal>ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING</>. + aggregation over the whole partition, omit <literal>ORDER BY</literal> or use + <literal>ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING</literal>. Other frame specifications can be used to obtain other effects. </para> <note> <para> - The SQL standard defines a <literal>RESPECT NULLS</> or - <literal>IGNORE NULLS</> option for <function>lead</>, <function>lag</>, - <function>first_value</>, <function>last_value</>, and - <function>nth_value</>. This is not implemented in + The SQL standard defines a <literal>RESPECT NULLS</literal> or + <literal>IGNORE NULLS</literal> option for <function>lead</function>, <function>lag</function>, + <function>first_value</function>, <function>last_value</function>, and + <function>nth_value</function>. This is not implemented in <productname>PostgreSQL</productname>: the behavior is always the - same as the standard's default, namely <literal>RESPECT NULLS</>. - Likewise, the standard's <literal>FROM FIRST</> or <literal>FROM LAST</> - option for <function>nth_value</> is not implemented: only the - default <literal>FROM FIRST</> behavior is supported. (You can achieve - the result of <literal>FROM LAST</> by reversing the <literal>ORDER BY</> + same as the standard's default, namely <literal>RESPECT NULLS</literal>. 
+ Likewise, the standard's <literal>FROM FIRST</literal> or <literal>FROM LAST</literal> + option for <function>nth_value</function> is not implemented: only the + default <literal>FROM FIRST</literal> behavior is supported. (You can achieve + the result of <literal>FROM LAST</literal> by reversing the <literal>ORDER BY</literal> ordering.) </para> </note> <para> - <function>cume_dist</> computes the fraction of partition rows that + <function>cume_dist</function> computes the fraction of partition rows that are less than or equal to the current row and its peers, while - <function>percent_rank</> computes the fraction of partition rows that + <function>percent_rank</function> computes the fraction of partition rows that are less than the current row, assuming the current row does not exist in the partition. </para> @@ -14789,12 +14789,12 @@ EXISTS (<replaceable>subquery</replaceable>) </synopsis> <para> - The argument of <token>EXISTS</token> is an arbitrary <command>SELECT</> statement, + The argument of <token>EXISTS</token> is an arbitrary <command>SELECT</command> statement, or <firstterm>subquery</firstterm>. The subquery is evaluated to determine whether it returns any rows. If it returns at least one row, the result of <token>EXISTS</token> is - <quote>true</>; if the subquery returns no rows, the result of <token>EXISTS</token> - is <quote>false</>. + <quote>true</quote>; if the subquery returns no rows, the result of <token>EXISTS</token> + is <quote>false</quote>. </para> <para> @@ -14814,15 +14814,15 @@ EXISTS (<replaceable>subquery</replaceable>) Since the result depends only on whether any rows are returned, and not on the contents of those rows, the output list of the subquery is normally unimportant. A common coding convention is - to write all <literal>EXISTS</> tests in the form + to write all <literal>EXISTS</literal> tests in the form <literal>EXISTS(SELECT 1 WHERE ...)</literal>. 
There are exceptions to this rule however, such as subqueries that use <token>INTERSECT</token>. </para> <para> - This simple example is like an inner join on <literal>col2</>, but - it produces at most one output row for each <literal>tab1</> row, - even if there are several matching <literal>tab2</> rows: + This simple example is like an inner join on <literal>col2</literal>, but + it produces at most one output row for each <literal>tab1</literal> row, + even if there are several matching <literal>tab2</literal> rows: <screen> SELECT col1 FROM tab1 @@ -14842,8 +14842,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. - The result of <token>IN</token> is <quote>true</> if any equal subquery row is found. - The result is <quote>false</> if no equal row is found (including the + The result of <token>IN</token> is <quote>true</quote> if any equal subquery row is found. + The result is <quote>false</quote> if no equal row is found (including the case where the subquery returns no rows). </para> @@ -14871,8 +14871,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. - The result of <token>IN</token> is <quote>true</> if any equal subquery row is found. - The result is <quote>false</> if no equal row is found (including the + The result of <token>IN</token> is <quote>true</quote> if any equal subquery row is found. + The result is <quote>false</quote> if no equal row is found (including the case where the subquery returns no rows). 
</para> @@ -14898,9 +14898,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. - The result of <token>NOT IN</token> is <quote>true</> if only unequal subquery rows + The result of <token>NOT IN</token> is <quote>true</quote> if only unequal subquery rows are found (including the case where the subquery returns no rows). - The result is <quote>false</> if any equal row is found. + The result is <quote>false</quote> if any equal row is found. </para> <para> @@ -14927,9 +14927,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. - The result of <token>NOT IN</token> is <quote>true</> if only unequal subquery rows + The result of <token>NOT IN</token> is <quote>true</quote> if only unequal subquery rows are found (including the case where the subquery returns no rows). - The result is <quote>false</> if any equal row is found. + The result is <quote>false</quote> if any equal row is found. </para> <para> @@ -14957,8 +14957,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); is evaluated and compared to each row of the subquery result using the given <replaceable>operator</replaceable>, which must yield a Boolean result. - The result of <token>ANY</token> is <quote>true</> if any true result is obtained. - The result is <quote>false</> if no true result is found (including the + The result of <token>ANY</token> is <quote>true</quote> if any true result is obtained. + The result is <quote>false</quote> if no true result is found (including the case where the subquery returns no rows). 
</para> @@ -14981,8 +14981,8 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); </para> <synopsis> -<replaceable>row_constructor</replaceable> <replaceable>operator</> ANY (<replaceable>subquery</replaceable>) -<replaceable>row_constructor</replaceable> <replaceable>operator</> SOME (<replaceable>subquery</replaceable>) +<replaceable>row_constructor</replaceable> <replaceable>operator</replaceable> ANY (<replaceable>subquery</replaceable>) +<replaceable>row_constructor</replaceable> <replaceable>operator</replaceable> SOME (<replaceable>subquery</replaceable>) </synopsis> <para> @@ -14993,9 +14993,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given <replaceable>operator</replaceable>. - The result of <token>ANY</token> is <quote>true</> if the comparison + The result of <token>ANY</token> is <quote>true</quote> if the comparison returns true for any subquery row. - The result is <quote>false</> if the comparison returns false for every + The result is <quote>false</quote> if the comparison returns false for every subquery row (including the case where the subquery returns no rows). The result is NULL if the comparison does not return true for any row, @@ -15021,9 +15021,9 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); is evaluated and compared to each row of the subquery result using the given <replaceable>operator</replaceable>, which must yield a Boolean result. - The result of <token>ALL</token> is <quote>true</> if all rows yield true + The result of <token>ALL</token> is <quote>true</quote> if all rows yield true (including the case where the subquery returns no rows). - The result is <quote>false</> if any false result is found. + The result is <quote>false</quote> if any false result is found. 
The result is NULL if the comparison does not return false for any row, and it returns NULL for at least one row. </para> @@ -15049,10 +15049,10 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given <replaceable>operator</replaceable>. - The result of <token>ALL</token> is <quote>true</> if the comparison + The result of <token>ALL</token> is <quote>true</quote> if the comparison returns true for all subquery rows (including the case where the subquery returns no rows). - The result is <quote>false</> if the comparison returns false for any + The result is <quote>false</quote> if the comparison returns false for any subquery row. The result is NULL if the comparison does not return false for any subquery row, and it returns NULL for at least one row. @@ -15165,7 +15165,7 @@ WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2); <para> The right-hand side is a parenthesized list - of scalar expressions. The result is <quote>true</> if the left-hand expression's + of scalar expressions. The result is <quote>true</quote> if the left-hand expression's result is equal to any of the right-hand expressions. This is a shorthand notation for @@ -15243,8 +15243,8 @@ AND is evaluated and compared to each element of the array using the given <replaceable>operator</replaceable>, which must yield a Boolean result. - The result of <token>ANY</token> is <quote>true</> if any true result is obtained. - The result is <quote>false</> if no true result is found (including the + The result of <token>ANY</token> is <quote>true</quote> if any true result is obtained. + The result is <quote>false</quote> if no true result is found (including the case where the array has zero elements). 
</para> @@ -15279,9 +15279,9 @@ AND is evaluated and compared to each element of the array using the given <replaceable>operator</replaceable>, which must yield a Boolean result. - The result of <token>ALL</token> is <quote>true</> if all comparisons yield true + The result of <token>ALL</token> is <quote>true</quote> if all comparisons yield true (including the case where the array has zero elements). - The result is <quote>false</> if any false result is found. + The result is <quote>false</quote> if any false result is found. </para> <para> @@ -15310,12 +15310,12 @@ AND The two row values must have the same number of fields. Each side is evaluated and they are compared row-wise. Row constructor comparisons are allowed when the <replaceable>operator</replaceable> is - <literal>=</>, - <literal><></>, - <literal><</>, - <literal><=</>, - <literal>></> or - <literal>>=</>. + <literal>=</literal>, + <literal><></literal>, + <literal><</literal>, + <literal><=</literal>, + <literal>></literal> or + <literal>>=</literal>. Every row element must be of a type which has a default B-tree operator class or the attempted comparison may generate an error. </para> @@ -15328,7 +15328,7 @@ AND </note> <para> - The <literal>=</> and <literal><></> cases work slightly differently + The <literal>=</literal> and <literal><></literal> cases work slightly differently from the others. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; @@ -15336,13 +15336,13 @@ AND </para> <para> - For the <literal><</>, <literal><=</>, <literal>></> and - <literal>>=</> cases, the row elements are compared left-to-right, + For the <literal><</literal>, <literal><=</literal>, <literal>></literal> and + <literal>>=</literal> cases, the row elements are compared left-to-right, stopping as soon as an unequal or null pair of elements is found. 
If either of this pair of elements is null, the result of the row comparison is unknown (null); otherwise comparison of this pair of elements determines the result. For example, - <literal>ROW(1,2,NULL) < ROW(1,3,0)</> + <literal>ROW(1,2,NULL) < ROW(1,3,0)</literal> yields true, not null, because the third pair of elements are not considered. </para> @@ -15350,13 +15350,13 @@ AND <note> <para> Prior to <productname>PostgreSQL</productname> 8.2, the - <literal><</>, <literal><=</>, <literal>></> and <literal>>=</> + <literal><</literal>, <literal><=</literal>, <literal>></literal> and <literal>>=</literal> cases were not handled per SQL specification. A comparison like - <literal>ROW(a,b) < ROW(c,d)</> + <literal>ROW(a,b) < ROW(c,d)</literal> was implemented as - <literal>a < c AND b < d</> + <literal>a < c AND b < d</literal> whereas the correct behavior is equivalent to - <literal>a < c OR (a = c AND b < d)</>. + <literal>a < c OR (a = c AND b < d)</literal>. </para> </note> @@ -15409,15 +15409,15 @@ AND <para> Each side is evaluated and they are compared row-wise. Composite type comparisons are allowed when the <replaceable>operator</replaceable> is - <literal>=</>, - <literal><></>, - <literal><</>, - <literal><=</>, - <literal>></> or - <literal>>=</>, + <literal>=</literal>, + <literal><></literal>, + <literal><</literal>, + <literal><=</literal>, + <literal>></literal> or + <literal>>=</literal>, or has semantics similar to one of these. (To be specific, an operator can be a row comparison operator if it is a member of a B-tree operator - class, or is the negator of the <literal>=</> member of a B-tree operator + class, or is the negator of the <literal>=</literal> member of a B-tree operator class.) The default behavior of the above operators is the same as for <literal>IS [ NOT ] DISTINCT FROM</literal> for row constructors (see <xref linkend="row-wise-comparison">). 
@@ -15427,12 +15427,12 @@ AND To support matching of rows which include elements without a default B-tree operator class, the following operators are defined for composite type comparison: - <literal>*=</>, - <literal>*<></>, - <literal>*<</>, - <literal>*<=</>, - <literal>*></>, and - <literal>*>=</>. + <literal>*=</literal>, + <literal>*<></literal>, + <literal>*<</literal>, + <literal>*<=</literal>, + <literal>*></literal>, and + <literal>*>=</literal>. These operators compare the internal binary representation of the two rows. Two rows might have a different binary representation even though comparisons of the two rows with the equality operator is true. @@ -15501,7 +15501,7 @@ AND </row> <row> - <entry><literal><function>generate_series(<parameter>start</parameter>, <parameter>stop</parameter>, <parameter>step</parameter> <type>interval</>)</function></literal></entry> + <entry><literal><function>generate_series(<parameter>start</parameter>, <parameter>stop</parameter>, <parameter>step</parameter> <type>interval</type>)</function></literal></entry> <entry><type>timestamp</type> or <type>timestamp with time zone</type></entry> <entry><type>setof timestamp</type> or <type>setof timestamp with time zone</type> (same as argument type)</entry> <entry> @@ -15616,7 +15616,7 @@ SELECT * FROM generate_series('2008-03-01 00:00'::timestamp, </indexterm> <para> - <function>generate_subscripts</> is a convenience function that generates + <function>generate_subscripts</function> is a convenience function that generates the set of valid subscripts for the specified dimension of the given array. Zero rows are returned for arrays that do not have the requested dimension, @@ -15681,7 +15681,7 @@ SELECT * FROM unnest2(ARRAY[[1,2],[3,4]]); by <literal>WITH ORDINALITY</literal>, a <type>bigint</type> column is appended to the output which starts from 1 and increments by 1 for each row of the function's output. 
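The `generate_series(start, stop, step interval)` variant touched in this hunk has simple inclusive-range semantics that can be sketched as a Python generator (an illustration of the documented behavior, not the server's code):

```python
from datetime import datetime, timedelta

def generate_series(start, stop, step):
    """Sketch of generate_series(start, stop, step interval):
    yields timestamps from start up to and including stop, step apart."""
    current = start
    while current <= stop:
        yield current
        current += step

series = list(generate_series(datetime(2008, 3, 1, 0, 0),
                              datetime(2008, 3, 1, 3, 0),
                              timedelta(hours=1)))
print(len(series))  # 4 rows: 00:00, 01:00, 02:00, 03:00
```

Note the endpoint is included when it lies exactly on a step, matching the set-returning behavior the documentation describes.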
This is most useful in the case of set returning - functions such as <function>unnest()</>. + functions such as <function>unnest()</function>. <programlisting> -- set returning function WITH ORDINALITY @@ -15825,7 +15825,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); </row> <row> - <entry><literal><function>pg_current_logfile(<optional><type>text</></optional>)</function></literal></entry> + <entry><literal><function>pg_current_logfile(<optional><type>text</type></optional>)</function></literal></entry> <entry><type>text</type></entry> <entry>Primary log file name, or log in the requested format, currently in use by the logging collector</entry> @@ -15870,7 +15870,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); <row> <entry><literal><function>pg_trigger_depth()</function></literal></entry> <entry><type>int</type></entry> - <entry>current nesting level of <productname>PostgreSQL</> triggers + <entry>current nesting level of <productname>PostgreSQL</productname> triggers (0 if not called, directly or indirectly, from inside a trigger)</entry> </row> @@ -15889,7 +15889,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); <row> <entry><literal><function>version()</function></literal></entry> <entry><type>text</type></entry> - <entry><productname>PostgreSQL</> version information. See also <xref linkend="guc-server-version-num"> for a machine-readable version.</entry> + <entry><productname>PostgreSQL</productname> version information. See also <xref linkend="guc-server-version-num"> for a machine-readable version.</entry> </row> </tbody> </tgroup> @@ -15979,7 +15979,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); <function>current_role</function> and <function>user</function> are synonyms for <function>current_user</function>. 
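The `WITH ORDINALITY` behavior described above (a `bigint` column appended to the function's output, starting at 1) is essentially 1-based enumeration; a minimal sketch:

```python
def unnest_with_ordinality(array):
    """Model of: SELECT * FROM unnest(array) WITH ORDINALITY --
    each output row is paired with a counter starting from 1."""
    return [(elem, n) for n, elem in enumerate(array, start=1)]

print(unnest_with_ordinality(['a', 'b', 'c']))
# [('a', 1), ('b', 2), ('c', 3)]
```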
(The SQL standard draws a distinction between <function>current_role</function> - and <function>current_user</function>, but <productname>PostgreSQL</> + and <function>current_user</function>, but <productname>PostgreSQL</productname> does not, since it unifies users and roles into a single kind of entity.) </para> @@ -15990,7 +15990,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); other named objects that are created without specifying a target schema. <function>current_schemas(boolean)</function> returns an array of the names of all schemas presently in the search path. The Boolean option determines whether or not - implicitly included system schemas such as <literal>pg_catalog</> are included in the + implicitly included system schemas such as <literal>pg_catalog</literal> are included in the returned search path. </para> @@ -15998,7 +15998,7 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n); <para> The search path can be altered at run time. The command is: <programlisting> -SET search_path TO <replaceable>schema</> <optional>, <replaceable>schema</>, ...</optional> +SET search_path TO <replaceable>schema</replaceable> <optional>, <replaceable>schema</replaceable>, ...</optional> </programlisting> </para> </note> @@ -16043,7 +16043,7 @@ SET search_path TO <replaceable>schema</> <optional>, <replaceable>schema</>, .. waiting for a lock that would conflict with the blocked process's lock request and is ahead of it in the wait queue (soft block). When using parallel queries the result always lists client-visible process IDs (that - is, <function>pg_backend_pid</> results) even if the actual lock is held + is, <function>pg_backend_pid</function> results) even if the actual lock is held or awaited by a child worker process. As a result of that, there may be duplicated PIDs in the result. 
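The search-path resolution that `current_schemas` and `SET search_path` deal with, as discussed in the surrounding hunks, follows a first-match rule: the earliest schema in the path containing the name wins. A sketch with a hypothetical in-memory `catalog` standing in for the system catalogs:

```python
def resolve(name, search_path, catalog):
    """First schema in search_path that contains `name` shadows later ones."""
    for schema in search_path:
        if name in catalog.get(schema, ()):
            return f"{schema}.{name}"
    return None  # not visible in the current search path

catalog = {
    "pg_catalog": {"lower", "upper"},
    "myschema": {"widget"},
    "public": {"widget"},
}
print(resolve("widget", ["pg_catalog", "myschema", "public"], catalog))
# myschema.widget -- the earlier schema shadows public.widget
```

This is also the sense in which a later hunk calls an object <quote>visible</quote>: its schema is in the path and nothing earlier in the path shadows it.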
Also note that when a prepared transaction holds a conflicting lock, it will be represented by a zero process ID in @@ -16095,15 +16095,15 @@ SET search_path TO <replaceable>schema</> <optional>, <replaceable>schema</>, .. is <literal>NULL</literal>. When multiple log files exist, each in a different format, <function>pg_current_logfile</function> called without arguments returns the path of the file having the first format - found in the ordered list: <systemitem>stderr</>, <systemitem>csvlog</>. + found in the ordered list: <systemitem>stderr</systemitem>, <systemitem>csvlog</systemitem>. <literal>NULL</literal> is returned when no log file has any of these formats. To request a specific file format supply, as <type>text</type>, - either <systemitem>csvlog</> or <systemitem>stderr</> as the value of the + either <systemitem>csvlog</systemitem> or <systemitem>stderr</systemitem> as the value of the optional parameter. The return value is <literal>NULL</literal> when the log format requested is not a configured <xref linkend="guc-log-destination">. The <function>pg_current_logfiles</function> reflects the contents of the - <filename>current_logfiles</> file. + <filename>current_logfiles</filename> file. </para> <indexterm> @@ -16460,7 +16460,7 @@ SET search_path TO <replaceable>schema</> <optional>, <replaceable>schema</>, .. <function>has_table_privilege</function> checks whether a user can access a table in a particular way. The user can be specified by name, by OID (<literal>pg_authid.oid</literal>), - <literal>public</> to indicate the PUBLIC pseudo-role, or if the argument is + <literal>public</literal> to indicate the PUBLIC pseudo-role, or if the argument is omitted <function>current_user</function> is assumed. The table can be specified by name or by OID. (Thus, there are actually six variants of @@ -16470,12 +16470,12 @@ SET search_path TO <replaceable>schema</> <optional>, <replaceable>schema</>, .. 
The desired access privilege type is specified by a text string, which must evaluate to one of the values <literal>SELECT</literal>, <literal>INSERT</literal>, - <literal>UPDATE</literal>, <literal>DELETE</literal>, <literal>TRUNCATE</>, + <literal>UPDATE</literal>, <literal>DELETE</literal>, <literal>TRUNCATE</literal>, <literal>REFERENCES</literal>, or <literal>TRIGGER</literal>. Optionally, - <literal>WITH GRANT OPTION</> can be added to a privilege type to test + <literal>WITH GRANT OPTION</literal> can be added to a privilege type to test whether the privilege is held with grant option. Also, multiple privilege types can be listed separated by commas, in which case the result will - be <literal>true</> if any of the listed privileges is held. + be <literal>true</literal> if any of the listed privileges is held. (Case of the privilege string is not significant, and extra whitespace is allowed between but not within privilege names.) Some examples: @@ -16499,7 +16499,7 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') <function>has_any_column_privilege</function> checks whether a user can access any column of a table in a particular way. Its argument possibilities - are analogous to <function>has_table_privilege</>, + are analogous to <function>has_table_privilege</function>, except that the desired access privilege type must evaluate to some combination of <literal>SELECT</literal>, @@ -16508,8 +16508,8 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') <literal>REFERENCES</literal>. Note that having any of these privileges at the table level implicitly grants it for each column of the table, so <function>has_any_column_privilege</function> will always return - <literal>true</> if <function>has_table_privilege</> does for the same - arguments. 
But <function>has_any_column_privilege</> also succeeds if + <literal>true</literal> if <function>has_table_privilege</function> does for the same + arguments. But <function>has_any_column_privilege</function> also succeeds if there is a column-level grant of the privilege for at least one column. </para> @@ -16547,7 +16547,7 @@ SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION') Its argument possibilities are analogous to <function>has_table_privilege</function>. When specifying a function by a text string rather than by OID, - the allowed input is the same as for the <type>regprocedure</> data type + the allowed input is the same as for the <type>regprocedure</type> data type (see <xref linkend="datatype-oid">). The desired access privilege type must evaluate to <literal>EXECUTE</literal>. @@ -16609,7 +16609,7 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); Its argument possibilities are analogous to <function>has_table_privilege</function>. When specifying a type by a text string rather than by OID, - the allowed input is the same as for the <type>regtype</> data type + the allowed input is the same as for the <type>regtype</type> data type (see <xref linkend="datatype-oid">). The desired access privilege type must evaluate to <literal>USAGE</literal>. @@ -16620,14 +16620,14 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); can access a role in a particular way. Its argument possibilities are analogous to <function>has_table_privilege</function>, - except that <literal>public</> is not allowed as a user name. + except that <literal>public</literal> is not allowed as a user name. The desired access privilege type must evaluate to some combination of <literal>MEMBER</literal> or <literal>USAGE</literal>. 
<literal>MEMBER</literal> denotes direct or indirect membership in - the role (that is, the right to do <command>SET ROLE</>), while + the role (that is, the right to do <command>SET ROLE</command>), while <literal>USAGE</literal> denotes whether the privileges of the role - are immediately available without doing <command>SET ROLE</>. + are immediately available without doing <command>SET ROLE</command>. </para> <para> @@ -16639,7 +16639,7 @@ SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute'); <para> <xref linkend="functions-info-schema-table"> shows functions that - determine whether a certain object is <firstterm>visible</> in the + determine whether a certain object is <firstterm>visible</firstterm> in the current schema search path. For example, a table is said to be visible if its containing schema is in the search path and no table of the same @@ -16793,16 +16793,16 @@ SELECT relname FROM pg_class WHERE pg_table_is_visible(oid); <function>pg_type_is_visible</function> can also be used with domains. For functions and operators, an object in the search path is visible if there is no object of the same name - <emphasis>and argument data type(s)</> earlier in the path. For operator + <emphasis>and argument data type(s)</emphasis> earlier in the path. For operator classes, both name and associated index access method are considered. </para> <para> All these functions require object OIDs to identify the object to be checked. 
If you want to test an object by name, it is convenient to use - the OID alias types (<type>regclass</>, <type>regtype</>, - <type>regprocedure</>, <type>regoperator</>, <type>regconfig</>, - or <type>regdictionary</>), + the OID alias types (<type>regclass</type>, <type>regtype</type>, + <type>regprocedure</type>, <type>regoperator</type>, <type>regconfig</type>, + or <type>regdictionary</type>), for example: <programlisting> SELECT pg_type_is_visible('myschema.widget'::regtype); @@ -16949,7 +16949,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); <tbody> <row> - <entry><literal><function>format_type(<parameter>type_oid</parameter>, <parameter>typemod</>)</function></literal></entry> + <entry><literal><function>format_type(<parameter>type_oid</parameter>, <parameter>typemod</parameter>)</function></literal></entry> <entry><type>text</type></entry> <entry>get SQL name of a data type</entry> </row> @@ -16959,18 +16959,18 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); <entry>get definition of a constraint</entry> </row> <row> - <entry><literal><function>pg_get_constraintdef(<parameter>constraint_oid</parameter>, <parameter>pretty_bool</>)</function></literal></entry> + <entry><literal><function>pg_get_constraintdef(<parameter>constraint_oid</parameter>, <parameter>pretty_bool</parameter>)</function></literal></entry> <entry><type>text</type></entry> <entry>get definition of a constraint</entry> </row> <row> - <entry><literal><function>pg_get_expr(<parameter>pg_node_tree</parameter>, <parameter>relation_oid</>)</function></literal></entry> + <entry><literal><function>pg_get_expr(<parameter>pg_node_tree</parameter>, <parameter>relation_oid</parameter>)</function></literal></entry> <entry><type>text</type></entry> <entry>decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter</entry> </row> <row> - <entry><literal><function>pg_get_expr(<parameter>pg_node_tree</parameter>, 
<parameter>relation_oid</>, <parameter>pretty_bool</>)</function></literal></entry> + <entry><literal><function>pg_get_expr(<parameter>pg_node_tree</parameter>, <parameter>relation_oid</parameter>, <parameter>pretty_bool</parameter>)</function></literal></entry> <entry><type>text</type></entry> <entry>decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter</entry> @@ -16993,19 +16993,19 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); <row> <entry><literal><function>pg_get_function_result(<parameter>func_oid</parameter>)</function></literal></entry> <entry><type>text</type></entry> - <entry>get <literal>RETURNS</> clause for function</entry> + <entry>get <literal>RETURNS</literal> clause for function</entry> </row> <row> <entry><literal><function>pg_get_indexdef(<parameter>index_oid</parameter>)</function></literal></entry> <entry><type>text</type></entry> - <entry>get <command>CREATE INDEX</> command for index</entry> + <entry>get <command>CREATE INDEX</command> command for index</entry> </row> <row> - <entry><literal><function>pg_get_indexdef(<parameter>index_oid</parameter>, <parameter>column_no</>, <parameter>pretty_bool</>)</function></literal></entry> + <entry><literal><function>pg_get_indexdef(<parameter>index_oid</parameter>, <parameter>column_no</parameter>, <parameter>pretty_bool</parameter>)</function></literal></entry> <entry><type>text</type></entry> - <entry>get <command>CREATE INDEX</> command for index, + <entry>get <command>CREATE INDEX</command> command for index, or definition of just one index column when - <parameter>column_no</> is not zero</entry> + <parameter>column_no</parameter> is not zero</entry> </row> <row> <entry><literal><function>pg_get_keywords()</function></literal></entry> @@ -17015,12 +17015,12 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); <row> 
<entry><literal><function>pg_get_ruledef(<parameter>rule_oid</parameter>)</function></literal></entry> <entry><type>text</type></entry> - <entry>get <command>CREATE RULE</> command for rule</entry> + <entry>get <command>CREATE RULE</command> command for rule</entry> </row> <row> - <entry><literal><function>pg_get_ruledef(<parameter>rule_oid</parameter>, <parameter>pretty_bool</>)</function></literal></entry> + <entry><literal><function>pg_get_ruledef(<parameter>rule_oid</parameter>, <parameter>pretty_bool</parameter>)</function></literal></entry> <entry><type>text</type></entry> - <entry>get <command>CREATE RULE</> command for rule</entry> + <entry>get <command>CREATE RULE</command> command for rule</entry> </row> <row> <entry><literal><function>pg_get_serial_sequence(<parameter>table_name</parameter>, <parameter>column_name</parameter>)</function></literal></entry> @@ -17030,17 +17030,17 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); <row> <entry><literal><function>pg_get_statisticsobjdef(<parameter>statobj_oid</parameter>)</function></literal></entry> <entry><type>text</type></entry> - <entry>get <command>CREATE STATISTICS</> command for extended statistics object</entry> + <entry>get <command>CREATE STATISTICS</command> command for extended statistics object</entry> </row> <row> <entry><function>pg_get_triggerdef</function>(<parameter>trigger_oid</parameter>)</entry> <entry><type>text</type></entry> - <entry>get <command>CREATE [ CONSTRAINT ] TRIGGER</> command for trigger</entry> + <entry>get <command>CREATE [ CONSTRAINT ] TRIGGER</command> command for trigger</entry> </row> <row> - <entry><function>pg_get_triggerdef</function>(<parameter>trigger_oid</parameter>, <parameter>pretty_bool</>)</entry> + <entry><function>pg_get_triggerdef</function>(<parameter>trigger_oid</parameter>, <parameter>pretty_bool</parameter>)</entry> <entry><type>text</type></entry> - <entry>get <command>CREATE [ CONSTRAINT ] TRIGGER</> command for trigger</entry> + <entry>get 
<command>CREATE [ CONSTRAINT ] TRIGGER</command> command for trigger</entry> </row> <row> <entry><literal><function>pg_get_userbyid(<parameter>role_oid</parameter>)</function></literal></entry> @@ -17053,7 +17053,7 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); <entry>get underlying <command>SELECT</command> command for view or materialized view (<emphasis>deprecated</emphasis>)</entry> </row> <row> - <entry><literal><function>pg_get_viewdef(<parameter>view_name</parameter>, <parameter>pretty_bool</>)</function></literal></entry> + <entry><literal><function>pg_get_viewdef(<parameter>view_name</parameter>, <parameter>pretty_bool</parameter>)</function></literal></entry> <entry><type>text</type></entry> <entry>get underlying <command>SELECT</command> command for view or materialized view (<emphasis>deprecated</emphasis>)</entry> </row> @@ -17063,29 +17063,29 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); <entry>get underlying <command>SELECT</command> command for view or materialized view</entry> </row> <row> - <entry><literal><function>pg_get_viewdef(<parameter>view_oid</parameter>, <parameter>pretty_bool</>)</function></literal></entry> + <entry><literal><function>pg_get_viewdef(<parameter>view_oid</parameter>, <parameter>pretty_bool</parameter>)</function></literal></entry> <entry><type>text</type></entry> <entry>get underlying <command>SELECT</command> command for view or materialized view</entry> </row> <row> - <entry><literal><function>pg_get_viewdef(<parameter>view_oid</parameter>, <parameter>wrap_column_int</>)</function></literal></entry> + <entry><literal><function>pg_get_viewdef(<parameter>view_oid</parameter>, <parameter>wrap_column_int</parameter>)</function></literal></entry> <entry><type>text</type></entry> <entry>get underlying <command>SELECT</command> command for view or materialized view; lines with fields are wrapped to specified number of columns, pretty-printing is implied</entry> </row> <row> - 
<entry><literal><function>pg_index_column_has_property(<parameter>index_oid</parameter>, <parameter>column_no</>, <parameter>prop_name</>)</function></literal></entry> + <entry><literal><function>pg_index_column_has_property(<parameter>index_oid</parameter>, <parameter>column_no</parameter>, <parameter>prop_name</parameter>)</function></literal></entry> <entry><type>boolean</type></entry> <entry>test whether an index column has a specified property</entry> </row> <row> - <entry><literal><function>pg_index_has_property(<parameter>index_oid</parameter>, <parameter>prop_name</>)</function></literal></entry> + <entry><literal><function>pg_index_has_property(<parameter>index_oid</parameter>, <parameter>prop_name</parameter>)</function></literal></entry> <entry><type>boolean</type></entry> <entry>test whether an index has a specified property</entry> </row> <row> - <entry><literal><function>pg_indexam_has_property(<parameter>am_oid</parameter>, <parameter>prop_name</>)</function></literal></entry> + <entry><literal><function>pg_indexam_has_property(<parameter>am_oid</parameter>, <parameter>prop_name</parameter>)</function></literal></entry> <entry><type>boolean</type></entry> <entry>test whether an index access method has a specified property</entry> </row> @@ -17166,11 +17166,11 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); <para> <function>pg_get_keywords</function> returns a set of records describing - the SQL keywords recognized by the server. The <structfield>word</> column - contains the keyword. The <structfield>catcode</> column contains a - category code: <literal>U</> for unreserved, <literal>C</> for column name, - <literal>T</> for type or function name, or <literal>R</> for reserved. - The <structfield>catdesc</> column contains a possibly-localized string + the SQL keywords recognized by the server. The <structfield>word</structfield> column + contains the keyword. 
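The `catcode` column of `pg_get_keywords()` described in this hunk maps a one-letter code to a category; a sketch of that row shape (illustrative data, not the server's keyword list):

```python
# Category codes as listed in the documentation hunk above.
CATDESC = {
    "U": "unreserved",
    "C": "column name",
    "T": "type or function name",
    "R": "reserved",
}

def keyword_row(word, catcode):
    """Return a (word, catcode, catdesc) row like pg_get_keywords()."""
    return (word, catcode, CATDESC[catcode])

print(keyword_row("select", "R"))  # ('select', 'R', 'reserved')
```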
The <structfield>catcode</structfield> column contains a + category code: <literal>U</literal> for unreserved, <literal>C</literal> for column name, + <literal>T</literal> for type or function name, or <literal>R</literal> for reserved. + The <structfield>catdesc</structfield> column contains a possibly-localized string describing the category. </para> @@ -17187,26 +17187,26 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); catalogs. If the expression might contain Vars, specify the OID of the relation they refer to as the second parameter; if no Vars are expected, zero is sufficient. <function>pg_get_viewdef</function> reconstructs the - <command>SELECT</> query that defines a view. Most of these functions come - in two variants, one of which can optionally <quote>pretty-print</> the + <command>SELECT</command> query that defines a view. Most of these functions come + in two variants, one of which can optionally <quote>pretty-print</quote> the result. The pretty-printed format is more readable, but the default format is more likely to be interpreted the same way by future versions of - <productname>PostgreSQL</>; avoid using pretty-printed output for dump - purposes. Passing <literal>false</> for the pretty-print parameter yields + <productname>PostgreSQL</productname>; avoid using pretty-printed output for dump + purposes. Passing <literal>false</literal> for the pretty-print parameter yields the same result as the variant that does not have the parameter at all. </para> <para> - <function>pg_get_functiondef</> returns a complete - <command>CREATE OR REPLACE FUNCTION</> statement for a function. + <function>pg_get_functiondef</function> returns a complete + <command>CREATE OR REPLACE FUNCTION</command> statement for a function. <function>pg_get_function_arguments</function> returns the argument list of a function, in the form it would need to appear in within - <command>CREATE FUNCTION</>. + <command>CREATE FUNCTION</command>. 
<function>pg_get_function_result</function> similarly returns the - appropriate <literal>RETURNS</> clause for the function. + appropriate <literal>RETURNS</literal> clause for the function. <function>pg_get_function_identity_arguments</function> returns the argument list necessary to identify a function, in the form it - would need to appear in within <command>ALTER FUNCTION</>, for + would need to appear in within <command>ALTER FUNCTION</command>, for instance. This form omits default values. </para> @@ -17219,10 +17219,10 @@ SELECT pg_type_is_visible('myschema.widget'::regtype); (<type>serial</type>, <type>smallserial</type>, <type>bigserial</type>), it is the sequence created for that serial column definition. In the latter case, this association can be modified or removed with <command>ALTER - SEQUENCE OWNED BY</>. (The function probably should have been called + SEQUENCE OWNED BY</command>. (The function probably should have been called <function>pg_get_owned_sequence</function>; its current name reflects the - fact that it has typically been used with <type>serial</> - or <type>bigserial</> columns.) The first input parameter is a table name + fact that it has typically been used with <type>serial</type> + or <type>bigserial</type> columns.) The first input parameter is a table name with optional schema, and the second parameter is a column name. Because the first parameter is potentially a schema and table, it is not treated as a double-quoted identifier, meaning it is lower cased by default, while the @@ -17290,8 +17290,8 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); </row> <row> <entry><literal>distance_orderable</literal></entry> - <entry>Can the column be scanned in order by a <quote>distance</> - operator, for example <literal>ORDER BY col <-> constant</> ? + <entry>Can the column be scanned in order by a <quote>distance</quote> + operator, for example <literal>ORDER BY col <-> constant</literal> ? 
</entry> </row> <row> @@ -17301,14 +17301,14 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); </row> <row> <entry><literal>search_array</literal></entry> - <entry>Does the column natively support <literal>col = ANY(array)</> + <entry>Does the column natively support <literal>col = ANY(array)</literal> searches? </entry> </row> <row> <entry><literal>search_nulls</literal></entry> - <entry>Does the column support <literal>IS NULL</> and - <literal>IS NOT NULL</> searches? + <entry>Does the column support <literal>IS NULL</literal> and + <literal>IS NOT NULL</literal> searches? </entry> </row> </tbody> @@ -17324,7 +17324,7 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); <tbody> <row> <entry><literal>clusterable</literal></entry> - <entry>Can the index be used in a <literal>CLUSTER</> command? + <entry>Can the index be used in a <literal>CLUSTER</literal> command? </entry> </row> <row> @@ -17355,9 +17355,9 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); <tbody> <row> <entry><literal>can_order</literal></entry> - <entry>Does the access method support <literal>ASC</>, - <literal>DESC</> and related keywords in - <literal>CREATE INDEX</>? + <entry>Does the access method support <literal>ASC</literal>, + <literal>DESC</literal> and related keywords in + <literal>CREATE INDEX</literal>? </entry> </row> <row> @@ -17382,9 +17382,9 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); <para> <function>pg_options_to_table</function> returns the set of storage option name/value pairs - (<replaceable>option_name</>/<replaceable>option_value</>) when passed - <structname>pg_class</>.<structfield>reloptions</> or - <structname>pg_attribute</>.<structfield>attoptions</>. + (<replaceable>option_name</replaceable>/<replaceable>option_value</replaceable>) when passed + <structname>pg_class</structname>.<structfield>reloptions</structfield> or + <structname>pg_attribute</structname>.<structfield>attoptions</structfield>. 
</para> <para> @@ -17394,14 +17394,14 @@ SELECT currval(pg_get_serial_sequence('sometable', 'id')); empty and cannot be dropped. To display the specific objects populating the tablespace, you will need to connect to the databases identified by <function>pg_tablespace_databases</function> and query their - <structname>pg_class</> catalogs. + <structname>pg_class</structname> catalogs. </para> <para> <function>pg_typeof</function> returns the OID of the data type of the value that is passed to it. This can be helpful for troubleshooting or dynamically constructing SQL queries. The function is declared as - returning <type>regtype</>, which is an OID alias type (see + returning <type>regtype</type>, which is an OID alias type (see <xref linkend="datatype-oid">); this means that it is the same as an OID for comparison purposes but displays as a type name. For example: <programlisting> @@ -17447,10 +17447,10 @@ SELECT collation for ('foo' COLLATE "de_DE"); <function>to_regoperator</function>, <function>to_regtype</function>, <function>to_regnamespace</function>, and <function>to_regrole</function> functions translate relation, function, operator, type, schema, and role - names (given as <type>text</>) to objects of - type <type>regclass</>, <type>regproc</>, <type>regprocedure</type>, - <type>regoper</>, <type>regoperator</type>, <type>regtype</>, - <type>regnamespace</>, and <type>regrole</> + names (given as <type>text</type>) to objects of + type <type>regclass</type>, <type>regproc</type>, <type>regprocedure</type>, + <type>regoper</type>, <type>regoperator</type>, <type>regtype</type>, + <type>regnamespace</type>, and <type>regrole</type> respectively. 
These functions differ from a cast from text in that they don't accept a numeric OID, and that they return null rather than throwing an error if the name is not found (or, for @@ -17493,18 +17493,18 @@ SELECT collation for ('foo' COLLATE "de_DE"); <entry>get description of a database object</entry> </row> <row> - <entry><literal><function>pg_identify_object(<parameter>catalog_id</parameter> <type>oid</>, <parameter>object_id</parameter> <type>oid</>, <parameter>object_sub_id</parameter> <type>integer</>)</function></literal></entry> - <entry><parameter>type</> <type>text</>, <parameter>schema</> <type>text</>, <parameter>name</> <type>text</>, <parameter>identity</> <type>text</></entry> + <entry><literal><function>pg_identify_object(<parameter>catalog_id</parameter> <type>oid</type>, <parameter>object_id</parameter> <type>oid</type>, <parameter>object_sub_id</parameter> <type>integer</type>)</function></literal></entry> + <entry><parameter>type</parameter> <type>text</type>, <parameter>schema</parameter> <type>text</type>, <parameter>name</parameter> <type>text</type>, <parameter>identity</parameter> <type>text</type></entry> <entry>get identity of a database object</entry> </row> <row> - <entry><literal><function>pg_identify_object_as_address(<parameter>catalog_id</parameter> <type>oid</>, <parameter>object_id</parameter> <type>oid</>, <parameter>object_sub_id</parameter> <type>integer</>)</function></literal></entry> - <entry><parameter>type</> <type>text</>, <parameter>name</> <type>text[]</>, <parameter>args</> <type>text[]</></entry> + <entry><literal><function>pg_identify_object_as_address(<parameter>catalog_id</parameter> <type>oid</type>, <parameter>object_id</parameter> <type>oid</type>, <parameter>object_sub_id</parameter> <type>integer</type>)</function></literal></entry> + <entry><parameter>type</parameter> <type>text</type>, <parameter>name</parameter> <type>text[]</type>, <parameter>args</parameter> <type>text[]</type></entry> <entry>get external 
representation of a database object's address</entry> </row> <row> - <entry><literal><function>pg_get_object_address(<parameter>type</parameter> <type>text</>, <parameter>name</parameter> <type>text[]</>, <parameter>args</parameter> <type>text[]</>)</function></literal></entry> - <entry><parameter>catalog_id</> <type>oid</>, <parameter>object_id</> <type>oid</>, <parameter>object_sub_id</> <type>int32</></entry> + <entry><literal><function>pg_get_object_address(<parameter>type</parameter> <type>text</type>, <parameter>name</parameter> <type>text[]</type>, <parameter>args</parameter> <type>text[]</type>)</function></literal></entry> + <entry><parameter>catalog_id</parameter> <type>oid</type>, <parameter>object_id</parameter> <type>oid</type>, <parameter>object_sub_id</parameter> <type>int32</type></entry> <entry>get address of a database object, from its external representation</entry> </row> </tbody> @@ -17525,13 +17525,13 @@ SELECT collation for ('foo' COLLATE "de_DE"); to uniquely identify the database object specified by catalog OID, object OID and a (possibly zero) sub-object ID. This information is intended to be machine-readable, and is never translated. 
- <parameter>type</> identifies the type of database object; - <parameter>schema</> is the schema name that the object belongs in, or - <literal>NULL</> for object types that do not belong to schemas; - <parameter>name</> is the name of the object, quoted if necessary, only + <parameter>type</parameter> identifies the type of database object; + <parameter>schema</parameter> is the schema name that the object belongs in, or + <literal>NULL</literal> for object types that do not belong to schemas; + <parameter>name</parameter> is the name of the object, quoted if necessary, only present if it can be used (alongside schema name, if pertinent) as a unique - identifier of the object, otherwise <literal>NULL</>; - <parameter>identity</> is the complete object identity, with the precise format + identifier of the object, otherwise <literal>NULL</literal>; + <parameter>identity</parameter> is the complete object identity, with the precise format depending on object type, and each part within the format being schema-qualified and quoted as necessary. </para> @@ -17542,10 +17542,10 @@ SELECT collation for ('foo' COLLATE "de_DE"); catalog OID, object OID and a (possibly zero) sub-object ID. The returned information is independent of the current server, that is, it could be used to identify an identically named object in another server. - <parameter>type</> identifies the type of database object; - <parameter>name</> and <parameter>args</> are text arrays that together + <parameter>type</parameter> identifies the type of database object; + <parameter>name</parameter> and <parameter>args</parameter> are text arrays that together form a reference to the object. These three columns can be passed to - <function>pg_get_object_address</> to obtain the internal address + <function>pg_get_object_address</function> to obtain the internal address of the object. This function is the inverse of <function>pg_get_object_address</function>. 
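A round trip through these object-identity functions might look like the following sketch, assuming a hypothetical table `public.mytable` (1259 is the OID of the `pg_class` catalog):

```sql
-- Human-oriented identity of the table's pg_class entry:
SELECT * FROM pg_identify_object(1259, 'mytable'::regclass, 0);

-- Server-independent address (type, name[], args[]):
SELECT * FROM pg_identify_object_as_address(1259, 'mytable'::regclass, 0);

-- Map that address back to catalog OID / object OID / sub-ID:
SELECT * FROM pg_get_object_address('table', '{public,mytable}', '{}');
```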
</para> @@ -17554,13 +17554,13 @@ SELECT collation for ('foo' COLLATE "de_DE"); <function>pg_get_object_address</function> returns a row containing enough information to uniquely identify the database object specified by its type and object name and argument arrays. The returned values are the - ones that would be used in system catalogs such as <structname>pg_depend</> + ones that would be used in system catalogs such as <structname>pg_depend</structname> and can be passed to other system functions such as - <function>pg_identify_object</> or <function>pg_describe_object</>. - <parameter>catalog_id</> is the OID of the system catalog containing the + <function>pg_identify_object</function> or <function>pg_describe_object</function>. + <parameter>catalog_id</parameter> is the OID of the system catalog containing the object; - <parameter>object_id</> is the OID of the object itself, and - <parameter>object_sub_id</> is the object sub-ID, or zero if none. + <parameter>object_id</parameter> is the OID of the object itself, and + <parameter>object_sub_id</parameter> is the object sub-ID, or zero if none. This function is the inverse of <function>pg_identify_object_as_address</function>. </para> @@ -17739,9 +17739,9 @@ SELECT collation for ('foo' COLLATE "de_DE"); </table> <para> - The internal transaction ID type (<type>xid</>) is 32 bits wide and + The internal transaction ID type (<type>xid</type>) is 32 bits wide and wraps around every 4 billion transactions. However, these functions - export a 64-bit format that is extended with an <quote>epoch</> counter + export a 64-bit format that is extended with an <quote>epoch</quote> counter so it will not wrap around during the life of an installation. The data type used by these functions, <type>txid_snapshot</type>, stores information about transaction ID @@ -17782,9 +17782,9 @@ SELECT collation for ('foo' COLLATE "de_DE"); <entry><type>xip_list</type></entry> <entry> Active txids at the time of the snapshot. 
The list - includes only those active txids between <literal>xmin</> - and <literal>xmax</>; there might be active txids higher - than <literal>xmax</>. A txid that is <literal>xmin <= txid < + includes only those active txids between <literal>xmin</literal> + and <literal>xmax</literal>; there might be active txids higher + than <literal>xmax</literal>. A txid that is <literal>xmin <= txid < xmax</literal> and not in this list was already completed at the time of the snapshot, and thus either visible or dead according to its commit status. The list does not @@ -17797,27 +17797,27 @@ SELECT collation for ('foo' COLLATE "de_DE"); </table> <para> - <type>txid_snapshot</>'s textual representation is - <literal><replaceable>xmin</>:<replaceable>xmax</>:<replaceable>xip_list</></literal>. + <type>txid_snapshot</type>'s textual representation is + <literal><replaceable>xmin</replaceable>:<replaceable>xmax</replaceable>:<replaceable>xip_list</replaceable></literal>. For example <literal>10:20:10,14,15</literal> means <literal>xmin=10, xmax=20, xip_list=10, 14, 15</literal>. </para> <para> - <function>txid_status(bigint)</> reports the commit status of a recent + <function>txid_status(bigint)</function> reports the commit status of a recent transaction. Applications may use it to determine whether a transaction committed or aborted when the application and database server become disconnected while a <literal>COMMIT</literal> is in progress. The status of a transaction will be reported as either - <literal>in progress</>, - <literal>committed</>, or <literal>aborted</>, provided that the + <literal>in progress</literal>, + <literal>committed</literal>, or <literal>aborted</literal>, provided that the transaction is recent enough that the system retains the commit status of that transaction. If it is old enough that no references to that transaction survive in the system and the commit status information has been discarded, this function will return NULL.
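The intended use of <function>txid_status</function> can be sketched as follows; the txid value <literal>1234</literal> is a hypothetical placeholder for a value the client recorded earlier:

```sql
-- Before issuing COMMIT, note the transaction's ID on the client side:
BEGIN;
SELECT txid_current();   -- e.g. the client remembers this value
COMMIT;

-- After a dropped connection, ask the server what became of it:
SELECT txid_status(1234);  -- 'committed', 'aborted', 'in progress', or NULL
```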
Note that prepared - transactions are reported as <literal>in progress</>; applications must + transactions are reported as <literal>in progress</literal>; applications must check <link - linkend="view-pg-prepared-xacts"><literal>pg_prepared_xacts</></> if they + linkend="view-pg-prepared-xacts"><literal>pg_prepared_xacts</literal></link> if they need to determine whether the txid is a prepared transaction. </para> @@ -17852,7 +17852,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); <indexterm><primary>pg_last_committed_xact</primary></indexterm> <literal><function>pg_last_committed_xact()</function></literal> </entry> - <entry><parameter>xid</> <type>xid</>, <parameter>timestamp</> <type>timestamp with time zone</></entry> + <entry><parameter>xid</parameter> <type>xid</type>, <parameter>timestamp</parameter> <type>timestamp with time zone</type></entry> <entry>get transaction ID and commit timestamp of latest committed transaction</entry> </row> </tbody> @@ -17861,7 +17861,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); <para> The functions shown in <xref linkend="functions-controldata"> - print information initialized during <command>initdb</>, such + print information initialized during <command>initdb</command>, such as the catalog version. They also show information about write-ahead logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. 
They provide most of the same @@ -17927,12 +17927,12 @@ SELECT collation for ('foo' COLLATE "de_DE"); </table> <para> - <function>pg_control_checkpoint</> returns a record, shown in + <function>pg_control_checkpoint</function> returns a record, shown in <xref linkend="functions-pg-control-checkpoint"> </para> <table id="functions-pg-control-checkpoint"> - <title><function>pg_control_checkpoint</> Columns</title> + <title><function>pg_control_checkpoint</function> Columns</title> <tgroup cols="2"> <thead> <row> @@ -18043,12 +18043,12 @@ SELECT collation for ('foo' COLLATE "de_DE"); </table> <para> - <function>pg_control_system</> returns a record, shown in + <function>pg_control_system</function> returns a record, shown in <xref linkend="functions-pg-control-system"> </para> <table id="functions-pg-control-system"> - <title><function>pg_control_system</> Columns</title> + <title><function>pg_control_system</function> Columns</title> <tgroup cols="2"> <thead> <row> @@ -18084,12 +18084,12 @@ SELECT collation for ('foo' COLLATE "de_DE"); </table> <para> - <function>pg_control_init</> returns a record, shown in + <function>pg_control_init</function> returns a record, shown in <xref linkend="functions-pg-control-init"> </para> <table id="functions-pg-control-init"> - <title><function>pg_control_init</> Columns</title> + <title><function>pg_control_init</function> Columns</title> <tgroup cols="2"> <thead> <row> @@ -18165,12 +18165,12 @@ SELECT collation for ('foo' COLLATE "de_DE"); </table> <para> - <function>pg_control_recovery</> returns a record, shown in + <function>pg_control_recovery</function> returns a record, shown in <xref linkend="functions-pg-control-recovery"> </para> <table id="functions-pg-control-recovery"> - <title><function>pg_control_recovery</> Columns</title> + <title><function>pg_control_recovery</function> Columns</title> <tgroup cols="2"> <thead> <row> @@ -18217,7 +18217,7 @@ SELECT collation for ('foo' COLLATE "de_DE"); <para> The functions 
described in this section are used to control and - monitor a <productname>PostgreSQL</> installation. + monitor a <productname>PostgreSQL</productname> installation. </para> <sect2 id="functions-admin-set"> @@ -18357,7 +18357,7 @@ SELECT set_config('log_statement_stats', 'off', false); <tbody> <row> <entry> - <literal><function>pg_cancel_backend(<parameter>pid</parameter> <type>int</>)</function></literal> + <literal><function>pg_cancel_backend(<parameter>pid</parameter> <type>int</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Cancel a backend's current query. This is also allowed if the @@ -18382,7 +18382,7 @@ SELECT set_config('log_statement_stats', 'off', false); </row> <row> <entry> - <literal><function>pg_terminate_backend(<parameter>pid</parameter> <type>int</>)</function></literal> + <literal><function>pg_terminate_backend(<parameter>pid</parameter> <type>int</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Terminate a backend. This is also allowed if the calling role @@ -18401,28 +18401,28 @@ SELECT set_config('log_statement_stats', 'off', false); </para> <para> - <function>pg_cancel_backend</> and <function>pg_terminate_backend</> - send signals (<systemitem>SIGINT</> or <systemitem>SIGTERM</> + <function>pg_cancel_backend</function> and <function>pg_terminate_backend</function> + send signals (<systemitem>SIGINT</systemitem> or <systemitem>SIGTERM</systemitem> respectively) to backend processes identified by process ID. The process ID of an active backend can be found from the <structfield>pid</structfield> column of the <structname>pg_stat_activity</structname> view, or by listing the <command>postgres</command> processes on the server (using - <application>ps</> on Unix or the <application>Task - Manager</> on <productname>Windows</>). + <application>ps</application> on Unix or the <application>Task + Manager</application> on <productname>Windows</productname>). 
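Combining <structname>pg_stat_activity</structname> with <function>pg_cancel_backend</function>, as described above, allows targeted cancellation; the five-minute threshold below is an arbitrary illustrative choice:

```sql
-- Cancel the current query of backends that have been active too long:
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
  AND pid <> pg_backend_pid();   -- never signal our own backend
```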
The role of an active backend can be found from the <structfield>usename</structfield> column of the <structname>pg_stat_activity</structname> view. </para> <para> - <function>pg_reload_conf</> sends a <systemitem>SIGHUP</> signal + <function>pg_reload_conf</function> sends a <systemitem>SIGHUP</systemitem> signal to the server, causing configuration files to be reloaded by all server processes. </para> <para> - <function>pg_rotate_logfile</> signals the log-file manager to switch + <function>pg_rotate_logfile</function> signals the log-file manager to switch to a new output file immediately. This works only when the built-in log collector is running, since otherwise there is no log-file manager subprocess. @@ -18492,7 +18492,7 @@ SELECT set_config('log_statement_stats', 'off', false); <tbody> <row> <entry> - <literal><function>pg_create_restore_point(<parameter>name</> <type>text</>)</function></literal> + <literal><function>pg_create_restore_point(<parameter>name</parameter> <type>text</type>)</function></literal> </entry> <entry><type>pg_lsn</type></entry> <entry>Create a named point for performing restore (restricted to superusers by default, but other users can be granted EXECUTE to run the function)</entry> @@ -18520,7 +18520,7 @@ SELECT set_config('log_statement_stats', 'off', false); </row> <row> <entry> - <literal><function>pg_start_backup(<parameter>label</> <type>text</> <optional>, <parameter>fast</> <type>boolean</> <optional>, <parameter>exclusive</> <type>boolean</> </optional></optional>)</function></literal> + <literal><function>pg_start_backup(<parameter>label</parameter> <type>text</type> <optional>, <parameter>fast</parameter> <type>boolean</type> <optional>, <parameter>exclusive</parameter> <type>boolean</type> </optional></optional>)</function></literal> </entry> <entry><type>pg_lsn</type></entry> <entry>Prepare for performing on-line backup (restricted to superusers by default, but other users can be granted EXECUTE to run the 
function)</entry> @@ -18534,7 +18534,7 @@ SELECT set_config('log_statement_stats', 'off', false); </row> <row> <entry> - <literal><function>pg_stop_backup(<parameter>exclusive</> <type>boolean</> <optional>, <parameter>wait_for_archive</> <type>boolean</> </optional>)</function></literal> + <literal><function>pg_stop_backup(<parameter>exclusive</parameter> <type>boolean</type> <optional>, <parameter>wait_for_archive</parameter> <type>boolean</type> </optional>)</function></literal> </entry> <entry><type>setof record</type></entry> <entry>Finish performing exclusive or non-exclusive on-line backup (restricted to superusers by default, but other users can be granted EXECUTE to run the function)</entry> @@ -18562,23 +18562,23 @@ SELECT set_config('log_statement_stats', 'off', false); </row> <row> <entry> - <literal><function>pg_walfile_name(<parameter>lsn</> <type>pg_lsn</>)</function></literal> + <literal><function>pg_walfile_name(<parameter>lsn</parameter> <type>pg_lsn</type>)</function></literal> </entry> <entry><type>text</type></entry> <entry>Convert write-ahead log location to file name</entry> </row> <row> <entry> - <literal><function>pg_walfile_name_offset(<parameter>lsn</> <type>pg_lsn</>)</function></literal> + <literal><function>pg_walfile_name_offset(<parameter>lsn</parameter> <type>pg_lsn</type>)</function></literal> </entry> - <entry><type>text</>, <type>integer</></entry> + <entry><type>text</type>, <type>integer</type></entry> <entry>Convert write-ahead log location to file name and decimal byte offset within file</entry> </row> <row> <entry> - <literal><function>pg_wal_lsn_diff(<parameter>lsn</> <type>pg_lsn</>, <parameter>lsn</> <type>pg_lsn</>)</function></literal> + <literal><function>pg_wal_lsn_diff(<parameter>lsn</parameter> <type>pg_lsn</type>, <parameter>lsn</parameter> <type>pg_lsn</type>)</function></literal> </entry> - <entry><type>numeric</></entry> + <entry><type>numeric</type></entry> <entry>Calculate the difference between two 
write-ahead log locations</entry> </row> </tbody> @@ -18586,17 +18586,17 @@ SELECT set_config('log_statement_stats', 'off', false); </table> <para> - <function>pg_start_backup</> accepts an arbitrary user-defined label for + <function>pg_start_backup</function> accepts an arbitrary user-defined label for the backup. (Typically this would be the name under which the backup dump file will be stored.) When used in exclusive mode, the function writes a - backup label file (<filename>backup_label</>) and, if there are any links - in the <filename>pg_tblspc/</> directory, a tablespace map file - (<filename>tablespace_map</>) into the database cluster's data directory, + backup label file (<filename>backup_label</filename>) and, if there are any links + in the <filename>pg_tblspc/</filename> directory, a tablespace map file + (<filename>tablespace_map</filename>) into the database cluster's data directory, performs a checkpoint, and then returns the backup's starting write-ahead log location as text. The user can ignore this result value, but it is provided in case it is useful. When used in non-exclusive mode, the contents of these files are instead returned by the - <function>pg_stop_backup</> function, and should be written to the backup + <function>pg_stop_backup</function> function, and should be written to the backup by the caller. <programlisting> @@ -18606,29 +18606,29 @@ postgres=# select pg_start_backup('label_goes_here'); 0/D4445B8 (1 row) </programlisting> - There is an optional second parameter of type <type>boolean</type>. If <literal>true</>, - it specifies executing <function>pg_start_backup</> as quickly as + There is an optional second parameter of type <type>boolean</type>. If <literal>true</literal>, + it specifies executing <function>pg_start_backup</function> as quickly as possible. This forces an immediate checkpoint which will cause a spike in I/O operations, slowing any concurrently executing queries. 
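For example, a non-exclusive backup with an immediate checkpoint might be started like this (the label <literal>nightly_base_backup</literal> is a hypothetical name):

```sql
-- Second argument true = fast checkpoint; third argument false = non-exclusive mode:
SELECT pg_start_backup('nightly_base_backup', true, false);
```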
</para> <para> - In an exclusive backup, <function>pg_stop_backup</> removes the label file - and, if it exists, the <filename>tablespace_map</> file created by - <function>pg_start_backup</>. In a non-exclusive backup, the contents of - the <filename>backup_label</> and <filename>tablespace_map</> are returned + In an exclusive backup, <function>pg_stop_backup</function> removes the label file + and, if it exists, the <filename>tablespace_map</filename> file created by + <function>pg_start_backup</function>. In a non-exclusive backup, the contents of + the <filename>backup_label</filename> and <filename>tablespace_map</filename> are returned in the result of the function, and should be written to files in the backup (and not in the data directory). There is an optional second - parameter of type <type>boolean</type>. If false, the <function>pg_stop_backup</> + parameter of type <type>boolean</type>. If false, the <function>pg_stop_backup</function> will return immediately after the backup is completed without waiting for WAL to be archived. This behavior is only useful for backup software which independently monitors WAL archiving. Otherwise, WAL required to make the backup consistent might be missing and make the backup - useless. When this parameter is set to true, <function>pg_stop_backup</> + useless. When this parameter is set to true, <function>pg_stop_backup</function> will wait for WAL to be archived when archiving is enabled; on the standby, - this means that it will wait only when <varname>archive_mode = always</>. + this means that it will wait only when <varname>archive_mode = always</varname>. If write activity on the primary is low, it may be useful to run - <function>pg_switch_wal</> on the primary in order to trigger + <function>pg_switch_wal</function> on the primary in order to trigger an immediate segment switch. 
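Finishing the non-exclusive case described above could look like this sketch; the column names match the function's record result:

```sql
-- Wait for WAL archiving (second argument true); labelfile and spcmapfile
-- must be written into the backup itself, not into the data directory:
SELECT lsn, labelfile, spcmapfile FROM pg_stop_backup(false, true);
```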
</para> @@ -18636,7 +18636,7 @@ postgres=# select pg_start_backup('label_goes_here'); When executed on a primary, the function also creates a backup history file in the write-ahead log archive area. The history file includes the label given to - <function>pg_start_backup</>, the starting and ending write-ahead log locations for + <function>pg_start_backup</function>, the starting and ending write-ahead log locations for the backup, and the starting and ending times of the backup. The return value is the backup's ending write-ahead log location (which again can be ignored). After recording the ending location, the current @@ -18646,16 +18646,16 @@ postgres=# select pg_start_backup('label_goes_here'); </para> <para> - <function>pg_switch_wal</> moves to the next write-ahead log file, allowing the + <function>pg_switch_wal</function> moves to the next write-ahead log file, allowing the current file to be archived (assuming you are using continuous archiving). The return value is the ending write-ahead log location + 1 within the just-completed write-ahead log file. If there has been no write-ahead log activity since the last write-ahead log switch, - <function>pg_switch_wal</> does nothing and returns the start location + <function>pg_switch_wal</function> does nothing and returns the start location of the write-ahead log file currently in use. </para> <para> - <function>pg_create_restore_point</> creates a named write-ahead log + <function>pg_create_restore_point</function> creates a named write-ahead log record that can be used as recovery target, and returns the corresponding write-ahead log location. 
The given name can then be used with <xref linkend="recovery-target-name"> to specify the point up to which @@ -18665,11 +18665,11 @@ postgres=# select pg_start_backup('label_goes_here'); </para> <para> - <function>pg_current_wal_lsn</> displays the current write-ahead log write + <function>pg_current_wal_lsn</function> displays the current write-ahead log write location in the same format used by the above functions. Similarly, - <function>pg_current_wal_insert_lsn</> displays the current write-ahead log - insertion location and <function>pg_current_wal_flush_lsn</> displays the - current write-ahead log flush location. The insertion location is the <quote>logical</> + <function>pg_current_wal_insert_lsn</function> displays the current write-ahead log + insertion location and <function>pg_current_wal_flush_lsn</function> displays the + current write-ahead log flush location. The insertion location is the <quote>logical</quote> end of the write-ahead log at any instant, while the write location is the end of what has actually been written out from the server's internal buffers and flush location is the location guaranteed to be written to durable storage. The write @@ -18681,7 +18681,7 @@ postgres=# select pg_start_backup('label_goes_here'); </para> <para> - You can use <function>pg_walfile_name_offset</> to extract the + You can use <function>pg_walfile_name_offset</function> to extract the corresponding write-ahead log file name and byte offset from the results of any of the above functions. For example: <programlisting> @@ -18691,7 +18691,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); 00000001000000000000000D | 4039624 (1 row) </programlisting> - Similarly, <function>pg_walfile_name</> extracts just the write-ahead log file name. + Similarly, <function>pg_walfile_name</function> extracts just the write-ahead log file name. 
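The file-name and difference functions above can be combined with the current-location functions, for example:

```sql
-- WAL file name and byte offset for the current write location:
SELECT * FROM pg_walfile_name_offset(pg_current_wal_lsn());

-- Approximate replication lag in bytes for each connected standby:
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```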
When the given write-ahead log location is exactly at a write-ahead log file boundary, both these functions return the name of the preceding write-ahead log file. This is usually the desired behavior for managing write-ahead log archiving @@ -18700,7 +18700,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </para> <para> - <function>pg_wal_lsn_diff</> calculates the difference in bytes + <function>pg_wal_lsn_diff</function> calculates the difference in bytes between two write-ahead log locations. It can be used with <structname>pg_stat_replication</structname> or some functions shown in <xref linkend="functions-admin-backup-table"> to get the replication lag. @@ -18878,21 +18878,21 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </indexterm> <para> - <productname>PostgreSQL</> allows database sessions to synchronize their - snapshots. A <firstterm>snapshot</> determines which data is visible to the + <productname>PostgreSQL</productname> allows database sessions to synchronize their + snapshots. A <firstterm>snapshot</firstterm> determines which data is visible to the transaction that is using the snapshot. Synchronized snapshots are necessary when two or more sessions need to see identical content in the database. If two sessions just start their transactions independently, there is always a possibility that some third transaction commits - between the executions of the two <command>START TRANSACTION</> commands, + between the executions of the two <command>START TRANSACTION</command> commands, so that one session sees the effects of that transaction and the other does not. </para> <para> - To solve this problem, <productname>PostgreSQL</> allows a transaction to - <firstterm>export</> the snapshot it is using. 
As long as the exporting - transaction remains open, other transactions can <firstterm>import</> its + To solve this problem, <productname>PostgreSQL</productname> allows a transaction to + <firstterm>export</firstterm> the snapshot it is using. As long as the exporting + transaction remains open, other transactions can <firstterm>import</firstterm> its snapshot, and thereby be guaranteed that they see exactly the same view of the database that the first transaction sees. But note that any database changes made by any one of these transactions remain invisible @@ -18902,7 +18902,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </para> <para> - Snapshots are exported with the <function>pg_export_snapshot</> function, + Snapshots are exported with the <function>pg_export_snapshot</function> function, shown in <xref linkend="functions-snapshot-synchronization-table">, and imported with the <xref linkend="sql-set-transaction"> command. </para> @@ -18928,13 +18928,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </table> <para> - The function <function>pg_export_snapshot</> saves the current snapshot - and returns a <type>text</> string identifying the snapshot. This string + The function <function>pg_export_snapshot</function> saves the current snapshot + and returns a <type>text</type> string identifying the snapshot. This string must be passed (outside the database) to clients that want to import the snapshot. The snapshot is available for import only until the end of the transaction that exported it. A transaction can export more than one snapshot, if needed. Note that doing so is only useful in <literal>READ - COMMITTED</> transactions, since in <literal>REPEATABLE READ</> and + COMMITTED</literal> transactions, since in <literal>REPEATABLE READ</literal> and higher isolation levels, transactions use the same snapshot throughout their lifetime. 
Once a transaction has exported any snapshots, it cannot be prepared with <xref linkend="sql-prepare-transaction">. @@ -18989,7 +18989,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <indexterm> <primary>pg_create_physical_replication_slot</primary> </indexterm> - <literal><function>pg_create_physical_replication_slot(<parameter>slot_name</parameter> <type>name</type> <optional>, <parameter>immediately_reserve</> <type>boolean</>, <parameter>temporary</> <type>boolean</></optional>)</function></literal> + <literal><function>pg_create_physical_replication_slot(<parameter>slot_name</parameter> <type>name</type> <optional>, <parameter>immediately_reserve</parameter> <type>boolean</type>, <parameter>temporary</parameter> <type>boolean</type></optional>)</function></literal> </entry> <entry> (<parameter>slot_name</parameter> <type>name</type>, <parameter>lsn</parameter> <type>pg_lsn</type>) @@ -18997,13 +18997,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <entry> Creates a new physical replication slot named <parameter>slot_name</parameter>. The optional second parameter, - when <literal>true</>, specifies that the <acronym>LSN</> for this + when <literal>true</literal>, specifies that the <acronym>LSN</acronym> for this replication slot be reserved immediately; otherwise - the <acronym>LSN</> is reserved on first connection from a streaming + the <acronym>LSN</acronym> is reserved on first connection from a streaming replication client. Streaming changes from a physical slot is only possible with the streaming-replication protocol — see <xref linkend="protocol-replication">. The optional third - parameter, <parameter>temporary</>, when set to true, specifies that + parameter, <parameter>temporary</parameter>, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by current session. Temporary slots are also released upon any error. 
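Typical calls to the slot-management functions documented here might look like the following; the slot names are hypothetical, and <literal>test_decoding</literal> is the example output plugin shipped in contrib:

```sql
-- Physical slot, reserving WAL immediately (second argument true):
SELECT * FROM pg_create_physical_replication_slot('standby_1', true);

-- Temporary logical slot, released at session end or on error:
SELECT * FROM pg_create_logical_replication_slot('tmp_slot', 'test_decoding', true);

-- Drop a slot once it is no longer needed, so it stops retaining WAL:
SELECT pg_drop_replication_slot('standby_1');
```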
This function corresponds @@ -19024,7 +19024,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <entry> Drops the physical or logical replication slot named <parameter>slot_name</parameter>. Same as replication protocol - command <literal>DROP_REPLICATION_SLOT</>. For logical slots, this must + command <literal>DROP_REPLICATION_SLOT</literal>. For logical slots, this must be called when connected to the same database the slot was created on. </entry> </row> @@ -19034,7 +19034,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <indexterm> <primary>pg_create_logical_replication_slot</primary> </indexterm> - <literal><function>pg_create_logical_replication_slot(<parameter>slot_name</parameter> <type>name</type>, <parameter>plugin</parameter> <type>name</type> <optional>, <parameter>temporary</> <type>boolean</></optional>)</function></literal> + <literal><function>pg_create_logical_replication_slot(<parameter>slot_name</parameter> <type>name</type>, <parameter>plugin</parameter> <type>name</type> <optional>, <parameter>temporary</parameter> <type>boolean</type></optional>)</function></literal> </entry> <entry> (<parameter>slot_name</parameter> <type>name</type>, <parameter>lsn</parameter> <type>pg_lsn</type>) @@ -19043,7 +19043,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); Creates a new logical (decoding) replication slot named <parameter>slot_name</parameter> using the output plugin <parameter>plugin</parameter>. The optional third - parameter, <parameter>temporary</>, when set to true, specifies that + parameter, <parameter>temporary</parameter>, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by current session. Temporary slots are also released upon any error. 
A call to this function has the same @@ -19065,9 +19065,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <entry> Returns changes in the slot <parameter>slot_name</parameter>, starting from the point at which changes have last been consumed. If - <parameter>upto_lsn</> and <parameter>upto_nchanges</> are NULL, + <parameter>upto_lsn</parameter> and <parameter>upto_nchanges</parameter> are NULL, logical decoding will continue until end of WAL. If - <parameter>upto_lsn</> is non-NULL, decoding will include only + <parameter>upto_lsn</parameter> is non-NULL, decoding will include only those transactions which commit prior to the specified LSN. If <parameter>upto_nchanges</parameter> is non-NULL, decoding will stop when the number of rows produced by decoding exceeds @@ -19155,7 +19155,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <literal><function>pg_replication_origin_drop(<parameter>node_name</parameter> <type>text</type>)</function></literal> </entry> <entry> - <type>void</> + <type>void</type> </entry> <entry> Delete a previously created replication origin, including any @@ -19187,7 +19187,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <literal><function>pg_replication_origin_session_setup(<parameter>node_name</parameter> <type>text</type>)</function></literal> </entry> <entry> - <type>void</> + <type>void</type> </entry> <entry> Mark the current session as replaying from the given @@ -19205,7 +19205,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <literal><function>pg_replication_origin_session_reset()</function></literal> </entry> <entry> - <type>void</> + <type>void</type> </entry> <entry> Cancel the effects @@ -19254,7 +19254,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <literal><function>pg_replication_origin_xact_setup(<parameter>origin_lsn</parameter> <type>pg_lsn</type>, <parameter>origin_timestamp</parameter>
<type>timestamptz</type>)</function></literal> </entry> <entry> - <type>void</> + <type>void</type> </entry> <entry> Mark the current transaction as replaying a transaction that has @@ -19273,7 +19273,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <literal><function>pg_replication_origin_xact_reset()</function></literal> </entry> <entry> - <type>void</> + <type>void</type> </entry> <entry> Cancel the effects of @@ -19289,7 +19289,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <literal>pg_replication_origin_advance<function>(<parameter>node_name</parameter> <type>text</type>, <parameter>lsn</parameter> <type>pg_lsn</type>)</function></literal> </entry> <entry> - <type>void</> + <type>void</type> </entry> <entry> Set replication progress for the given node to the given @@ -19446,7 +19446,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <entry><type>bigint</type></entry> <entry> Disk space used by the specified fork (<literal>'main'</literal>, - <literal>'fsm'</literal>, <literal>'vm'</>, or <literal>'init'</>) + <literal>'fsm'</literal>, <literal>'vm'</literal>, or <literal>'init'</literal>) of the specified table or index </entry> </row> @@ -19519,7 +19519,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <entry><type>bigint</type></entry> <entry> Total disk space used by the specified table, - including all indexes and <acronym>TOAST</> data + including all indexes and <acronym>TOAST</acronym> data </entry> </row> </tbody> @@ -19527,48 +19527,48 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </table> <para> - <function>pg_column_size</> shows the space used to store any individual + <function>pg_column_size</function> shows the space used to store any individual data value. 
</para> <para> - <function>pg_total_relation_size</> accepts the OID or name of a + <function>pg_total_relation_size</function> accepts the OID or name of a table or toast table, and returns the total on-disk space used for that table, including all associated indexes. This function is equivalent to <function>pg_table_size</function> - <literal>+</> <function>pg_indexes_size</function>. + <literal>+</literal> <function>pg_indexes_size</function>. </para> <para> - <function>pg_table_size</> accepts the OID or name of a table and + <function>pg_table_size</function> accepts the OID or name of a table and returns the disk space needed for that table, exclusive of indexes. (TOAST space, free space map, and visibility map are included.) </para> <para> - <function>pg_indexes_size</> accepts the OID or name of a table and + <function>pg_indexes_size</function> accepts the OID or name of a table and returns the total disk space used by all the indexes attached to that table. </para> <para> - <function>pg_database_size</function> and <function>pg_tablespace_size</> + <function>pg_database_size</function> and <function>pg_tablespace_size</function> accept the OID or name of a database or tablespace, and return the total disk space used therein. To use <function>pg_database_size</function>, - you must have <literal>CONNECT</> permission on the specified database - (which is granted by default), or be a member of the <literal>pg_read_all_stats</> - role. To use <function>pg_tablespace_size</>, you must have - <literal>CREATE</> permission on the specified tablespace, or be a member - of the <literal>pg_read_all_stats</> role unless it is the default tablespace for + you must have <literal>CONNECT</literal> permission on the specified database + (which is granted by default), or be a member of the <literal>pg_read_all_stats</literal> + role. 
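The stated identity between the size functions can be checked directly. A sketch against a hypothetical table <literal>my_table</literal>:

```sql
-- The two columns should report the same number of bytes:
-- total = table size (incl. TOAST, FSM, VM) + size of all its indexes.
SELECT pg_total_relation_size('my_table') AS total,
       pg_table_size('my_table') + pg_indexes_size('my_table') AS summed;
```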
To use <function>pg_tablespace_size</function>, you must have + <literal>CREATE</literal> permission on the specified tablespace, or be a member + of the <literal>pg_read_all_stats</literal> role unless it is the default tablespace for the current database. </para> <para> - <function>pg_relation_size</> accepts the OID or name of a table, index + <function>pg_relation_size</function> accepts the OID or name of a table, index or toast table, and returns the on-disk size in bytes of one fork of that relation. (Note that for most purposes it is more convenient to - use the higher-level functions <function>pg_total_relation_size</> - or <function>pg_table_size</>, which sum the sizes of all forks.) + use the higher-level functions <function>pg_total_relation_size</function> + or <function>pg_table_size</function>, which sum the sizes of all forks.) With one argument, it returns the size of the main data fork of the relation. The second argument can be provided to specify which fork to examine: @@ -19601,13 +19601,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </para> <para> - <function>pg_size_pretty</> can be used to format the result of one of + <function>pg_size_pretty</function> can be used to format the result of one of the other functions in a human-readable way, using bytes, kB, MB, GB or TB as appropriate. </para> <para> - <function>pg_size_bytes</> can be used to get the size in bytes from a + <function>pg_size_bytes</function> can be used to get the size in bytes from a string in human-readable format. The input may have units of bytes, kB, MB, GB or TB, and is parsed case-insensitively. If no units are specified, bytes are assumed. 
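The per-fork behavior of <function>pg_relation_size</function> can be seen by naming each fork explicitly; <literal>my_table</literal> is a hypothetical name:

```sql
SELECT pg_relation_size('my_table')         AS main,  -- main data fork (the default)
       pg_relation_size('my_table', 'fsm')  AS fsm,   -- free space map
       pg_relation_size('my_table', 'vm')   AS vm,    -- visibility map
       pg_relation_size('my_table', 'init') AS init;  -- initialization fork
```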
@@ -19616,17 +19616,17 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <note> <para> The units kB, MB, GB and TB used by the functions - <function>pg_size_pretty</> and <function>pg_size_bytes</> are defined + <function>pg_size_pretty</function> and <function>pg_size_bytes</function> are defined using powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is - 1024<superscript>2</> = 1048576 bytes, and so on. + 1024<superscript>2</superscript> = 1048576 bytes, and so on. </para> </note> <para> The functions above that operate on tables or indexes accept a - <type>regclass</> argument, which is simply the OID of the table or index - in the <structname>pg_class</> system catalog. You do not have to look up - the OID by hand, however, since the <type>regclass</> data type's input + <type>regclass</type> argument, which is simply the OID of the table or index + in the <structname>pg_class</structname> system catalog. You do not have to look up + the OID by hand, however, since the <type>regclass</type> data type's input converter will do the work for you. Just write the table name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary <acronym>SQL</acronym> names, the string @@ -19695,28 +19695,28 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </table> <para> - <function>pg_relation_filenode</> accepts the OID or name of a table, - index, sequence, or toast table, and returns the <quote>filenode</> number + <function>pg_relation_filenode</function> accepts the OID or name of a table, + index, sequence, or toast table, and returns the <quote>filenode</quote> number currently assigned to it. The filenode is the base component of the file name(s) used for the relation (see <xref linkend="storage-file-layout"> for more information). 
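The binary units and the <type>regclass</type> input convention combine naturally; <literal>my_table</literal> is a hypothetical name:

```sql
SELECT pg_size_bytes('1MB');  -- 1048576, since 1MB is 1024^2 bytes here

-- The quoted table name is converted by the regclass input routine,
-- so it can be written like a literal constant.
SELECT pg_size_pretty(pg_table_size('my_table'));
```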
For most tables the result is the same as - <structname>pg_class</>.<structfield>relfilenode</>, but for certain - system catalogs <structfield>relfilenode</> is zero and this function must + <structname>pg_class</structname>.<structfield>relfilenode</structfield>, but for certain + system catalogs <structfield>relfilenode</structfield> is zero and this function must be used to get the correct value. The function returns NULL if passed a relation that does not have storage, such as a view. </para> <para> - <function>pg_relation_filepath</> is similar to - <function>pg_relation_filenode</>, but it returns the entire file path name - (relative to the database cluster's data directory <varname>PGDATA</>) of + <function>pg_relation_filepath</function> is similar to + <function>pg_relation_filenode</function>, but it returns the entire file path name + (relative to the database cluster's data directory <varname>PGDATA</varname>) of the relation. </para> <para> - <function>pg_filenode_relation</> is the reverse of - <function>pg_relation_filenode</>. Given a <quote>tablespace</> OID and - a <quote>filenode</>, it returns the associated relation's OID. For a table + <function>pg_filenode_relation</function> is the reverse of + <function>pg_relation_filenode</function>. Given a <quote>tablespace</quote> OID and + a <quote>filenode</quote>, it returns the associated relation's OID. For a table in the database's default tablespace, the tablespace can be specified as 0. 
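These three functions round-trip as described; a sketch with a hypothetical table in the database's default tablespace:

```sql
-- File path relative to PGDATA, for inspection.
SELECT pg_relation_filepath('my_table');

-- Map the filenode back to the relation; 0 selects the default tablespace.
SELECT pg_filenode_relation(0, pg_relation_filenode('my_table'));
```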
</para> @@ -19736,7 +19736,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <row> <entry> <indexterm><primary>pg_collation_actual_version</primary></indexterm> - <literal><function>pg_collation_actual_version(<type>oid</>)</function></literal> + <literal><function>pg_collation_actual_version(<type>oid</type>)</function></literal> </entry> <entry><type>text</type></entry> <entry>Return actual version of collation from operating system</entry> @@ -19744,7 +19744,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <row> <entry> <indexterm><primary>pg_import_system_collations</primary></indexterm> - <literal><function>pg_import_system_collations(<parameter>schema</> <type>regnamespace</>)</function></literal> + <literal><function>pg_import_system_collations(<parameter>schema</parameter> <type>regnamespace</type>)</function></literal> </entry> <entry><type>integer</type></entry> <entry>Import operating system collations</entry> @@ -19763,7 +19763,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </para> <para> - <function>pg_import_system_collations</> adds collations to the system + <function>pg_import_system_collations</function> adds collations to the system catalog <literal>pg_collation</literal> based on all the locales it finds in the operating system. 
This is what <command>initdb</command> uses; @@ -19818,28 +19818,28 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <tbody> <row> <entry> - <literal><function>brin_summarize_new_values(<parameter>index</> <type>regclass</>)</function></literal> + <literal><function>brin_summarize_new_values(<parameter>index</parameter> <type>regclass</type>)</function></literal> </entry> <entry><type>integer</type></entry> <entry>summarize page ranges not already summarized</entry> </row> <row> <entry> - <literal><function>brin_summarize_range(<parameter>index</> <type>regclass</>, <parameter>blockNumber</> <type>bigint</type>)</function></literal> + <literal><function>brin_summarize_range(<parameter>index</parameter> <type>regclass</type>, <parameter>blockNumber</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>integer</type></entry> <entry>summarize the page range covering the given block, if not already summarized</entry> </row> <row> <entry> - <literal><function>brin_desummarize_range(<parameter>index</> <type>regclass</>, <parameter>blockNumber</> <type>bigint</type>)</function></literal> + <literal><function>brin_desummarize_range(<parameter>index</parameter> <type>regclass</type>, <parameter>blockNumber</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>integer</type></entry> <entry>de-summarize the page range covering the given block, if summarized</entry> </row> <row> <entry> - <literal><function>gin_clean_pending_list(<parameter>index</> <type>regclass</>)</function></literal> + <literal><function>gin_clean_pending_list(<parameter>index</parameter> <type>regclass</type>)</function></literal> </entry> <entry><type>bigint</type></entry> <entry>move GIN pending list entries into main index structure</entry> @@ -19849,25 +19849,25 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </table> <para> - <function>brin_summarize_new_values</> accepts the OID or name of a + 
<function>brin_summarize_new_values</function> accepts the OID or name of a BRIN index and inspects the index to find page ranges in the base table that are not currently summarized by the index; for any such range it creates a new summary index tuple by scanning the table pages. It returns the number of new page range summaries that were inserted - into the index. <function>brin_summarize_range</> does the same, except + into the index. <function>brin_summarize_range</function> does the same, except it only summarizes the range that covers the given block number. </para> <para> - <function>gin_clean_pending_list</> accepts the OID or name of + <function>gin_clean_pending_list</function> accepts the OID or name of a GIN index and cleans up the pending list of the specified index by moving entries in it to the main GIN data structure in bulk. It returns the number of pages removed from the pending list. Note that if the argument is a GIN index built with - the <literal>fastupdate</> option disabled, no cleanup happens and the + the <literal>fastupdate</literal> option disabled, no cleanup happens and the return value is 0, because the index doesn't have a pending list. Please see <xref linkend="gin-fast-update"> and <xref linkend="gin-tips"> - for details of the pending list and <literal>fastupdate</> option. + for details of the pending list and <literal>fastupdate</literal> option. </para> </sect2> @@ -19879,9 +19879,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); The functions shown in <xref linkend="functions-admin-genfile-table"> provide native access to files on the machine hosting the server. Only files within the - database cluster directory and the <varname>log_directory</> can be + database cluster directory and the <varname>log_directory</varname> can be accessed. 
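The index-maintenance functions above can be sketched as follows, with hypothetical index names:

```sql
SELECT brin_summarize_new_values('my_brin_idx');  -- number of new range summaries inserted
SELECT brin_desummarize_range('my_brin_idx', 0);  -- drop the summary covering block 0
SELECT gin_clean_pending_list('my_gin_idx');      -- pages removed from the pending list
```

With <literal>fastupdate</literal> disabled on the GIN index, the last call returns 0, since there is no pending list to flush.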
Use a relative path for files in the cluster directory, - and a path matching the <varname>log_directory</> configuration setting + and a path matching the <varname>log_directory</varname> configuration setting for log files. Use of these functions is restricted to superusers except where stated otherwise. </para> @@ -19897,7 +19897,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <tbody> <row> <entry> - <literal><function>pg_ls_dir(<parameter>dirname</> <type>text</> [, <parameter>missing_ok</> <type>boolean</>, <parameter>include_dot_dirs</> <type>boolean</>])</function></literal> + <literal><function>pg_ls_dir(<parameter>dirname</parameter> <type>text</type> [, <parameter>missing_ok</parameter> <type>boolean</type>, <parameter>include_dot_dirs</parameter> <type>boolean</type>])</function></literal> </entry> <entry><type>setof text</type></entry> <entry> @@ -19911,7 +19911,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <entry><type>setof record</type></entry> <entry> List the name, size, and last modification time of files in the log - directory. Access is granted to members of the <literal>pg_monitor</> + directory. Access is granted to members of the <literal>pg_monitor</literal> role and may be granted to other non-superuser roles. </entry> </row> @@ -19922,13 +19922,13 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <entry><type>setof record</type></entry> <entry> List the name, size, and last modification time of files in the WAL - directory. Access is granted to members of the <literal>pg_monitor</> + directory. Access is granted to members of the <literal>pg_monitor</literal> role and may be granted to other non-superuser roles. 
</entry> </row> <row> <entry> - <literal><function>pg_read_file(<parameter>filename</> <type>text</> [, <parameter>offset</> <type>bigint</>, <parameter>length</> <type>bigint</> [, <parameter>missing_ok</> <type>boolean</>] ])</function></literal> + <literal><function>pg_read_file(<parameter>filename</parameter> <type>text</type> [, <parameter>offset</parameter> <type>bigint</type>, <parameter>length</parameter> <type>bigint</type> [, <parameter>missing_ok</parameter> <type>boolean</type>] ])</function></literal> </entry> <entry><type>text</type></entry> <entry> @@ -19937,7 +19937,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </row> <row> <entry> - <literal><function>pg_read_binary_file(<parameter>filename</> <type>text</> [, <parameter>offset</> <type>bigint</>, <parameter>length</> <type>bigint</> [, <parameter>missing_ok</> <type>boolean</>] ])</function></literal> + <literal><function>pg_read_binary_file(<parameter>filename</parameter> <type>text</type> [, <parameter>offset</parameter> <type>bigint</type>, <parameter>length</parameter> <type>bigint</type> [, <parameter>missing_ok</parameter> <type>boolean</type>] ])</function></literal> </entry> <entry><type>bytea</type></entry> <entry> @@ -19946,7 +19946,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </row> <row> <entry> - <literal><function>pg_stat_file(<parameter>filename</> <type>text</>[, <parameter>missing_ok</> <type>boolean</type>])</function></literal> + <literal><function>pg_stat_file(<parameter>filename</parameter> <type>text</type>[, <parameter>missing_ok</parameter> <type>boolean</type>])</function></literal> </entry> <entry><type>record</type></entry> <entry> @@ -19958,23 +19958,23 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); </table> <para> - Some of these functions take an optional <parameter>missing_ok</> parameter, + Some of these functions take an optional <parameter>missing_ok</parameter> parameter, which specifies 
the behavior when the file or directory does not exist. If <literal>true</literal>, the function returns NULL (except - <function>pg_ls_dir</>, which returns an empty result set). If - <literal>false</>, an error is raised. The default is <literal>false</>. + <function>pg_ls_dir</function>, which returns an empty result set). If + <literal>false</literal>, an error is raised. The default is <literal>false</literal>. </para> <indexterm> <primary>pg_ls_dir</primary> </indexterm> <para> - <function>pg_ls_dir</> returns the names of all files (and directories + <function>pg_ls_dir</function> returns the names of all files (and directories and other special files) in the specified directory. The <parameter> - include_dot_dirs</> indicates whether <quote>.</> and <quote>..</> are + include_dot_dirs</parameter> indicates whether <quote>.</quote> and <quote>..</quote> are included in the result set. The default is to exclude them - (<literal>false</>), but including them can be useful when - <parameter>missing_ok</> is <literal>true</literal>, to distinguish an + (<literal>false</literal>), but including them can be useful when + <parameter>missing_ok</parameter> is <literal>true</literal>, to distinguish an empty directory from a non-existent directory. </para> @@ -19982,9 +19982,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <primary>pg_ls_logdir</primary> </indexterm> <para> - <function>pg_ls_logdir</> returns the name, size, and last modified time + <function>pg_ls_logdir</function> returns the name, size, and last modified time (mtime) of each file in the log directory. By default, only superusers - and members of the <literal>pg_monitor</> role can use this function. + and members of the <literal>pg_monitor</literal> role can use this function. Access may be granted to others using <command>GRANT</command>.
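The interaction of <parameter>missing_ok</parameter> and <parameter>include_dot_dirs</parameter> can be sketched with a hypothetical directory name:

```sql
-- With missing_ok = true and include_dot_dirs = true, a missing directory
-- yields an empty result set, while an empty-but-present directory still
-- yields the rows '.' and '..'.
SELECT pg_ls_dir('some_dir', true, true);
```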
</para> @@ -19992,9 +19992,9 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <primary>pg_ls_waldir</primary> </indexterm> <para> - <function>pg_ls_waldir</> returns the name, size, and last modified time + <function>pg_ls_waldir</function> returns the name, size, and last modified time (mtime) of each file in the write ahead log (WAL) directory. By - default only superusers and members of the <literal>pg_monitor</> role + default only superusers and members of the <literal>pg_monitor</literal> role can use this function. Access may be granted to others using <command>GRANT</command>. </para> @@ -20003,11 +20003,11 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <primary>pg_read_file</primary> </indexterm> <para> - <function>pg_read_file</> returns part of a text file, starting - at the given <parameter>offset</>, returning at most <parameter>length</> - bytes (less if the end of file is reached first). If <parameter>offset</> + <function>pg_read_file</function> returns part of a text file, starting + at the given <parameter>offset</parameter>, returning at most <parameter>length</parameter> + bytes (less if the end of file is reached first). If <parameter>offset</parameter> is negative, it is relative to the end of the file. - If <parameter>offset</> and <parameter>length</> are omitted, the entire + If <parameter>offset</parameter> and <parameter>length</parameter> are omitted, the entire file is returned. The bytes read from the file are interpreted as a string in the server encoding; an error is thrown if they are not valid in that encoding. 
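The offset semantics can be sketched with a hypothetical file path (relative to the data directory):

```sql
-- Negative offset counts from the end of the file: read its last 64 bytes
-- (fewer if the file is shorter).
SELECT pg_read_file('some_file.txt', -64, 64);
```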
@@ -20017,10 +20017,10 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup()); <primary>pg_read_binary_file</primary> </indexterm> <para> - <function>pg_read_binary_file</> is similar to - <function>pg_read_file</>, except that the result is a <type>bytea</type> value; + <function>pg_read_binary_file</function> is similar to + <function>pg_read_file</function>, except that the result is a <type>bytea</type> value; accordingly, no encoding checks are performed. - In combination with the <function>convert_from</> function, this function + In combination with the <function>convert_from</function> function, this function can be used to read a file in a specified encoding: <programlisting> SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8'); @@ -20031,7 +20031,7 @@ SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8'); <primary>pg_stat_file</primary> </indexterm> <para> - <function>pg_stat_file</> returns a record containing the file + <function>pg_stat_file</function> returns a record containing the file size, last accessed time stamp, last modified time stamp, last file status change time stamp (Unix platforms only), file creation time stamp (Windows only), and a <type>boolean</type> @@ -20064,42 +20064,42 @@ SELECT (pg_stat_file('filename')).modification; <tbody> <row> <entry> - <literal><function>pg_advisory_lock(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_advisory_lock(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>void</type></entry> <entry>Obtain exclusive session level advisory lock</entry> </row> <row> <entry> - <literal><function>pg_advisory_lock(<parameter>key1</> <type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_advisory_lock(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>void</type></entry> <entry>Obtain 
exclusive session level advisory lock</entry> </row> <row> <entry> - <literal><function>pg_advisory_lock_shared(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_advisory_lock_shared(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>void</type></entry> <entry>Obtain shared session level advisory lock</entry> </row> <row> <entry> - <literal><function>pg_advisory_lock_shared(<parameter>key1</> <type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_advisory_lock_shared(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>void</type></entry> <entry>Obtain shared session level advisory lock</entry> </row> <row> <entry> - <literal><function>pg_advisory_unlock(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_advisory_unlock(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Release an exclusive session level advisory lock</entry> </row> <row> <entry> - <literal><function>pg_advisory_unlock(<parameter>key1</> <type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_advisory_unlock(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Release an exclusive session level advisory lock</entry> @@ -20113,98 +20113,98 @@ SELECT (pg_stat_file('filename')).modification; </row> <row> <entry> - <literal><function>pg_advisory_unlock_shared(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_advisory_unlock_shared(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Release a shared session level advisory lock</entry> </row> <row> <entry> - 
<literal><function>pg_advisory_unlock_shared(<parameter>key1</> <type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_advisory_unlock_shared(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Release a shared session level advisory lock</entry> </row> <row> <entry> - <literal><function>pg_advisory_xact_lock(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_advisory_xact_lock(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>void</type></entry> <entry>Obtain exclusive transaction level advisory lock</entry> </row> <row> <entry> - <literal><function>pg_advisory_xact_lock(<parameter>key1</> <type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_advisory_xact_lock(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>void</type></entry> <entry>Obtain exclusive transaction level advisory lock</entry> </row> <row> <entry> - <literal><function>pg_advisory_xact_lock_shared(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_advisory_xact_lock_shared(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>void</type></entry> <entry>Obtain shared transaction level advisory lock</entry> </row> <row> <entry> - <literal><function>pg_advisory_xact_lock_shared(<parameter>key1</> <type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_advisory_xact_lock_shared(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>void</type></entry> <entry>Obtain shared transaction level advisory lock</entry> </row> <row> <entry> - 
<literal><function>pg_try_advisory_lock(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_try_advisory_lock(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Obtain exclusive session level advisory lock if available</entry> </row> <row> <entry> - <literal><function>pg_try_advisory_lock(<parameter>key1</> <type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_try_advisory_lock(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Obtain exclusive session level advisory lock if available</entry> </row> <row> <entry> - <literal><function>pg_try_advisory_lock_shared(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_try_advisory_lock_shared(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Obtain shared session level advisory lock if available</entry> </row> <row> <entry> - <literal><function>pg_try_advisory_lock_shared(<parameter>key1</> <type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_try_advisory_lock_shared(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Obtain shared session level advisory lock if available</entry> </row> <row> <entry> - <literal><function>pg_try_advisory_xact_lock(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_try_advisory_xact_lock(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Obtain exclusive transaction level advisory lock if available</entry> </row> <row> <entry> - <literal><function>pg_try_advisory_xact_lock(<parameter>key1</> 
<type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_try_advisory_xact_lock(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Obtain exclusive transaction level advisory lock if available</entry> </row> <row> <entry> - <literal><function>pg_try_advisory_xact_lock_shared(<parameter>key</> <type>bigint</>)</function></literal> + <literal><function>pg_try_advisory_xact_lock_shared(<parameter>key</parameter> <type>bigint</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Obtain shared transaction level advisory lock if available</entry> </row> <row> <entry> - <literal><function>pg_try_advisory_xact_lock_shared(<parameter>key1</> <type>int</>, <parameter>key2</> <type>int</>)</function></literal> + <literal><function>pg_try_advisory_xact_lock_shared(<parameter>key1</parameter> <type>int</type>, <parameter>key2</parameter> <type>int</type>)</function></literal> </entry> <entry><type>boolean</type></entry> <entry>Obtain shared transaction level advisory lock if available</entry> @@ -20217,7 +20217,7 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_advisory_lock</primary> </indexterm> <para> - <function>pg_advisory_lock</> locks an application-defined resource, + <function>pg_advisory_lock</function> locks an application-defined resource, which can be identified either by a single 64-bit key value or two 32-bit key values (note that these two key spaces do not overlap). 
If another session already holds a lock on the same resource identifier, @@ -20231,8 +20231,8 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_advisory_lock_shared</primary> </indexterm> <para> - <function>pg_advisory_lock_shared</> works the same as - <function>pg_advisory_lock</>, + <function>pg_advisory_lock_shared</function> works the same as + <function>pg_advisory_lock</function>, except the lock can be shared with other sessions requesting shared locks. Only would-be exclusive lockers are locked out. </para> @@ -20241,10 +20241,10 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_try_advisory_lock</primary> </indexterm> <para> - <function>pg_try_advisory_lock</> is similar to - <function>pg_advisory_lock</>, except the function will not wait for the + <function>pg_try_advisory_lock</function> is similar to + <function>pg_advisory_lock</function>, except the function will not wait for the lock to become available. It will either obtain the lock immediately and - return <literal>true</>, or return <literal>false</> if the lock cannot be + return <literal>true</literal>, or return <literal>false</literal> if the lock cannot be acquired immediately. </para> @@ -20252,8 +20252,8 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_try_advisory_lock_shared</primary> </indexterm> <para> - <function>pg_try_advisory_lock_shared</> works the same as - <function>pg_try_advisory_lock</>, except it attempts to acquire + <function>pg_try_advisory_lock_shared</function> works the same as + <function>pg_try_advisory_lock</function>, except it attempts to acquire a shared rather than an exclusive lock. </para> @@ -20261,10 +20261,10 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_advisory_unlock</primary> </indexterm> <para> - <function>pg_advisory_unlock</> will release a previously-acquired + <function>pg_advisory_unlock</function> will release a previously-acquired exclusive session level advisory lock. 
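The two non-overlapping key spaces mean the calls below acquire three distinct locks; a minimal sketch:

```sql
SELECT pg_advisory_lock(42);         -- one 64-bit key
SELECT pg_advisory_lock(42, 0);      -- two 32-bit keys: a separate key space
SELECT pg_try_advisory_lock(0, 42);  -- non-blocking; returns true or false

-- Release session-level locks explicitly; each returns true on success.
SELECT pg_advisory_unlock(42),
       pg_advisory_unlock(42, 0),
       pg_advisory_unlock(0, 42);
```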
It - returns <literal>true</> if the lock is successfully released. - If the lock was not held, it will return <literal>false</>, + returns <literal>true</literal> if the lock is successfully released. + If the lock was not held, it will return <literal>false</literal>, and in addition, an SQL warning will be reported by the server. </para> @@ -20272,8 +20272,8 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_advisory_unlock_shared</primary> </indexterm> <para> - <function>pg_advisory_unlock_shared</> works the same as - <function>pg_advisory_unlock</>, + <function>pg_advisory_unlock_shared</function> works the same as + <function>pg_advisory_unlock</function>, except it releases a shared session level advisory lock. </para> @@ -20281,7 +20281,7 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_advisory_unlock_all</primary> </indexterm> <para> - <function>pg_advisory_unlock_all</> will release all session level advisory + <function>pg_advisory_unlock_all</function> will release all session level advisory locks held by the current session. (This function is implicitly invoked at session end, even if the client disconnects ungracefully.) </para> @@ -20290,8 +20290,8 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_advisory_xact_lock</primary> </indexterm> <para> - <function>pg_advisory_xact_lock</> works the same as - <function>pg_advisory_lock</>, except the lock is automatically released + <function>pg_advisory_xact_lock</function> works the same as + <function>pg_advisory_lock</function>, except the lock is automatically released at the end of the current transaction and cannot be released explicitly. 
</para> @@ -20299,8 +20299,8 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_advisory_xact_lock_shared</primary> </indexterm> <para> - <function>pg_advisory_xact_lock_shared</> works the same as - <function>pg_advisory_lock_shared</>, except the lock is automatically released + <function>pg_advisory_xact_lock_shared</function> works the same as + <function>pg_advisory_lock_shared</function>, except the lock is automatically released at the end of the current transaction and cannot be released explicitly. </para> @@ -20308,8 +20308,8 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_try_advisory_xact_lock</primary> </indexterm> <para> - <function>pg_try_advisory_xact_lock</> works the same as - <function>pg_try_advisory_lock</>, except the lock, if acquired, + <function>pg_try_advisory_xact_lock</function> works the same as + <function>pg_try_advisory_lock</function>, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly. </para> @@ -20318,8 +20318,8 @@ SELECT (pg_stat_file('filename')).modification; <primary>pg_try_advisory_xact_lock_shared</primary> </indexterm> <para> - <function>pg_try_advisory_xact_lock_shared</> works the same as - <function>pg_try_advisory_lock_shared</>, except the lock, if acquired, + <function>pg_try_advisory_xact_lock_shared</function> works the same as + <function>pg_try_advisory_lock_shared</function>, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly. 
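The session-level semantics described above (shared locks coexist, would-be exclusive lockers are locked out, the `try_` variants never wait, and unlocking a lock you do not hold returns false) can be illustrated with a toy in-memory model. This is purely a sketch of the documented behavior: PostgreSQL implements advisory locks in shared memory with blocking waits and deadlock detection, and the class and method names here are invented for illustration.

```python
# Toy in-memory model of the try/unlock semantics of session-level
# advisory locks (illustration only -- not how PostgreSQL implements them).

class AdvisoryLockTable:
    def __init__(self):
        # key -> ("exclusive", session) or ("shared", set_of_sessions)
        self.held = {}

    def try_lock(self, session, key, shared=False):
        """Like pg_try_advisory_lock(_shared): never waits."""
        entry = self.held.get(key)
        if entry is None:
            self.held[key] = ("shared", {session}) if shared else ("exclusive", session)
            return True
        mode, holders = entry
        if shared and mode == "shared":
            holders.add(session)       # shared locks can coexist
            return True
        return False                   # would-be exclusive lockers are locked out

    def unlock(self, session, key, shared=False):
        """Like pg_advisory_unlock(_shared): returns False if the lock
        was not held (PostgreSQL also raises an SQL warning)."""
        entry = self.held.get(key)
        if entry is None:
            return False
        mode, holders = entry
        if shared and mode == "shared" and session in holders:
            holders.discard(session)
            if not holders:
                del self.held[key]
            return True
        if not shared and mode == "exclusive" and holders == session:
            del self.held[key]
            return True
        return False

locks = AdvisoryLockTable()
print(locks.try_lock("s1", 42, shared=True))   # True
print(locks.try_lock("s2", 42, shared=True))   # True: shared coexists
print(locks.try_lock("s3", 42))                # False: exclusive blocked
print(locks.unlock("s3", 42))                  # False: was not held
```

Note that the two documented keyspaces (one 64-bit key vs. a pair of 32-bit keys) could be modeled by tagging the key, e.g. `("single", 42)` vs. `("pair", 4, 2)`, so they never collide.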
</para> @@ -20336,8 +20336,8 @@ SELECT (pg_stat_file('filename')).modification; </indexterm> <para> - Currently <productname>PostgreSQL</> provides one built in trigger - function, <function>suppress_redundant_updates_trigger</>, + Currently <productname>PostgreSQL</productname> provides one built in trigger + function, <function>suppress_redundant_updates_trigger</function>, which will prevent any update that does not actually change the data in the row from taking place, in contrast to the normal behavior which always performs the update @@ -20354,7 +20354,7 @@ SELECT (pg_stat_file('filename')).modification; However, detecting such situations in client code is not always easy, or even possible, and writing expressions to detect them can be error-prone. An alternative is to use - <function>suppress_redundant_updates_trigger</>, which will skip + <function>suppress_redundant_updates_trigger</function>, which will skip updates that don't change the data. You should use this with care, however. The trigger takes a small but non-trivial time for each record, so if most of the records affected by an update are actually changed, @@ -20362,7 +20362,7 @@ SELECT (pg_stat_file('filename')).modification; </para> <para> - The <function>suppress_redundant_updates_trigger</> function can be + The <function>suppress_redundant_updates_trigger</function> function can be added to a table like this: <programlisting> CREATE TRIGGER z_min_update @@ -20384,7 +20384,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); <title>Event Trigger Functions</title> <para> - <productname>PostgreSQL</> provides these helper functions + <productname>PostgreSQL</productname> provides these helper functions to retrieve information from event triggers. 
</para> @@ -20401,12 +20401,12 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); </indexterm> <para> - <function>pg_event_trigger_ddl_commands</> returns a list of + <function>pg_event_trigger_ddl_commands</function> returns a list of <acronym>DDL</acronym> commands executed by each user action, when invoked in a function attached to a - <literal>ddl_command_end</> event trigger. If called in any other + <literal>ddl_command_end</literal> event trigger. If called in any other context, an error is raised. - <function>pg_event_trigger_ddl_commands</> returns one row for each + <function>pg_event_trigger_ddl_commands</function> returns one row for each base command executed; some commands that are a single SQL sentence may return more than one row. This function returns the following columns: @@ -20451,7 +20451,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); <entry><literal>schema_name</literal></entry> <entry><type>text</type></entry> <entry> - Name of the schema the object belongs in, if any; otherwise <literal>NULL</>. + Name of the schema the object belongs in, if any; otherwise <literal>NULL</literal>. No quoting is applied. </entry> </row> @@ -20492,11 +20492,11 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); </indexterm> <para> - <function>pg_event_trigger_dropped_objects</> returns a list of all objects - dropped by the command in whose <literal>sql_drop</> event it is called. + <function>pg_event_trigger_dropped_objects</function> returns a list of all objects + dropped by the command in whose <literal>sql_drop</literal> event it is called. If called in any other context, - <function>pg_event_trigger_dropped_objects</> raises an error. - <function>pg_event_trigger_dropped_objects</> returns the following columns: + <function>pg_event_trigger_dropped_objects</function> raises an error. 
+ <function>pg_event_trigger_dropped_objects</function> returns the following columns: <informaltable> <tgroup cols="3"> @@ -20553,7 +20553,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); <entry><literal>schema_name</literal></entry> <entry><type>text</type></entry> <entry> - Name of the schema the object belonged in, if any; otherwise <literal>NULL</>. + Name of the schema the object belonged in, if any; otherwise <literal>NULL</literal>. No quoting is applied. </entry> </row> @@ -20562,7 +20562,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); <entry><type>text</type></entry> <entry> Name of the object, if the combination of schema and name can be - used as a unique identifier for the object; otherwise <literal>NULL</>. + used as a unique identifier for the object; otherwise <literal>NULL</literal>. No quoting is applied, and name is never schema-qualified. </entry> </row> @@ -20598,7 +20598,7 @@ FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger(); </para> <para> - The <function>pg_event_trigger_dropped_objects</> function can be used + The <function>pg_event_trigger_dropped_objects</function> function can be used in an event trigger like this: <programlisting> CREATE FUNCTION test_event_trigger_for_drops() @@ -20631,7 +20631,7 @@ CREATE EVENT TRIGGER test_event_trigger_for_drops The functions shown in <xref linkend="functions-event-trigger-table-rewrite"> provide information about a table for which a - <literal>table_rewrite</> event has just been called. + <literal>table_rewrite</literal> event has just been called. If called in any other context, an error is raised. 
</para> @@ -20668,7 +20668,7 @@ CREATE EVENT TRIGGER test_event_trigger_for_drops </table> <para> - The <function>pg_event_trigger_table_rewrite_oid</> function can be used + The <function>pg_event_trigger_table_rewrite_oid</function> function can be used in an event trigger like this: <programlisting> CREATE FUNCTION test_event_trigger_table_rewrite_oid() diff --git a/doc/src/sgml/fuzzystrmatch.sgml b/doc/src/sgml/fuzzystrmatch.sgml index ff5bc08fea8..373ac4891df 100644 --- a/doc/src/sgml/fuzzystrmatch.sgml +++ b/doc/src/sgml/fuzzystrmatch.sgml @@ -8,14 +8,14 @@ </indexterm> <para> - The <filename>fuzzystrmatch</> module provides several + The <filename>fuzzystrmatch</filename> module provides several functions to determine similarities and distance between strings. </para> <caution> <para> - At present, the <function>soundex</>, <function>metaphone</>, - <function>dmetaphone</>, and <function>dmetaphone_alt</> functions do + At present, the <function>soundex</function>, <function>metaphone</function>, + <function>dmetaphone</function>, and <function>dmetaphone_alt</function> functions do not work well with multibyte encodings (such as UTF-8). </para> </caution> @@ -31,7 +31,7 @@ </para> <para> - The <filename>fuzzystrmatch</> module provides two functions + The <filename>fuzzystrmatch</filename> module provides two functions for working with Soundex codes: </para> @@ -49,12 +49,12 @@ difference(text, text) returns int </synopsis> <para> - The <function>soundex</> function converts a string to its Soundex code. - The <function>difference</> function converts two strings to their Soundex + The <function>soundex</function> function converts a string to its Soundex code. + The <function>difference</function> function converts two strings to their Soundex codes and then reports the number of matching code positions. Since Soundex codes have four characters, the result ranges from zero to four, with zero being no match and four being an exact match. 
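The Soundex coding and the `difference` comparison described above can be sketched compactly. This follows the classic American Soundex rules (including the detail that H and W do not break a run of equal codes); the `fuzzystrmatch` C implementation may differ on edge cases such as non-alphabetic or multibyte input, and the Python function names simply mirror the SQL ones for illustration.

```python
# Sketch of classic Soundex and the difference() comparison.

def soundex(s):
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    s = "".join(c for c in s.upper() if c.isalpha())
    if not s:
        return ""
    out = s[0]                       # first letter is kept verbatim
    prev = codes.get(s[0], "")
    for c in s[1:]:
        code = codes.get(c, "")
        if code and code != prev:    # collapse runs of the same code
            out += code
        if c not in "HW":            # H and W do not break a run
            prev = code
    return (out + "000")[:4]         # pad with zeros to four characters

def difference(a, b):
    # Number of positions (0..4) where the two Soundex codes agree.
    return sum(x == y for x, y in zip(soundex(a), soundex(b)))

print(soundex("Robert"))        # R163
print(difference("Anne", "Ann"))  # 4: identical codes (A500)
```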
(Thus, the - function is misnamed — <function>similarity</> would have been + function is misnamed — <function>similarity</function> would have been a better name.) </para> @@ -115,10 +115,10 @@ levenshtein_less_equal(text source, text target, int max_d) returns int <para> <function>levenshtein_less_equal</function> is an accelerated version of the Levenshtein function for use when only small distances are of interest. - If the actual distance is less than or equal to <literal>max_d</>, + If the actual distance is less than or equal to <literal>max_d</literal>, then <function>levenshtein_less_equal</function> returns the correct - distance; otherwise it returns some value greater than <literal>max_d</>. - If <literal>max_d</> is negative then the behavior is the same as + distance; otherwise it returns some value greater than <literal>max_d</literal>. + If <literal>max_d</literal> is negative then the behavior is the same as <function>levenshtein</function>. </para> @@ -198,9 +198,9 @@ test=# SELECT metaphone('GUMBO', 4); <title>Double Metaphone</title> <para> - The Double Metaphone system computes two <quote>sounds like</> strings - for a given input string — a <quote>primary</> and an - <quote>alternate</>. In most cases they are the same, but for non-English + The Double Metaphone system computes two <quote>sounds like</quote> strings + for a given input string — a <quote>primary</quote> and an + <quote>alternate</quote>. In most cases they are the same, but for non-English names especially they can be a bit different, depending on pronunciation. 
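The contract of `levenshtein_less_equal` described earlier (exact answer when the true distance is at most `max_d`, some value greater than `max_d` otherwise, and plain `levenshtein` behavior for negative `max_d`) can be sketched as follows. This toy honors the contract with only a cheap length-based cutoff; the real C implementation gets its speedup by additionally restricting the dynamic-programming matrix to a band, which this sketch does not attempt.

```python
# Sketch of levenshtein() and the bounded-variant contract.

def levenshtein(source, target):
    # Classic dynamic-programming edit distance with unit costs.
    prev = list(range(len(target) + 1))
    for i, sc in enumerate(source, 1):
        cur = [i]
        for j, tc in enumerate(target, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (sc != tc)))   # substitution
        prev = cur
    return prev[-1]

def levenshtein_less_equal(source, target, max_d):
    if max_d < 0:
        return levenshtein(source, target)   # documented fallback
    if abs(len(source) - len(target)) > max_d:
        return max_d + 1                     # cheap length-based cutoff
    d = levenshtein(source, target)
    # Contract: exact if d <= max_d, otherwise any value > max_d.
    return d if d <= max_d else max_d + 1

print(levenshtein("kitten", "sitting"))              # 3
print(levenshtein_less_equal("kitten", "sitting", 2))  # 3: some value > max_d
```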
These functions compute the primary and alternate codes: </para> diff --git a/doc/src/sgml/generate-errcodes-table.pl b/doc/src/sgml/generate-errcodes-table.pl index 01fc6166bf4..e655703b5b5 100644 --- a/doc/src/sgml/generate-errcodes-table.pl +++ b/doc/src/sgml/generate-errcodes-table.pl @@ -30,12 +30,12 @@ while (<$errcodes>) s/-/—/; # Wrap PostgreSQL in <productname/> - s/PostgreSQL/<productname>PostgreSQL<\/>/g; + s/PostgreSQL/<productname>PostgreSQL<\/productname>/g; print "\n\n"; print "<row>\n"; print "<entry spanname=\"span12\">"; - print "<emphasis role=\"bold\">$_</></entry>\n"; + print "<emphasis role=\"bold\">$_</emphasis></entry>\n"; print "</row>\n"; next; diff --git a/doc/src/sgml/generic-wal.sgml b/doc/src/sgml/generic-wal.sgml index dfa78c5ca21..7a0284994c9 100644 --- a/doc/src/sgml/generic-wal.sgml +++ b/doc/src/sgml/generic-wal.sgml @@ -13,8 +13,8 @@ <para> The API for constructing generic WAL records is defined in - <filename>access/generic_xlog.h</> and implemented - in <filename>access/transam/generic_xlog.c</>. + <filename>access/generic_xlog.h</filename> and implemented + in <filename>access/transam/generic_xlog.c</filename>. </para> <para> @@ -24,24 +24,24 @@ <orderedlist> <listitem> <para> - <function>state = GenericXLogStart(relation)</> — start + <function>state = GenericXLogStart(relation)</function> — start construction of a generic WAL record for the given relation. </para> </listitem> <listitem> <para> - <function>page = GenericXLogRegisterBuffer(state, buffer, flags)</> + <function>page = GenericXLogRegisterBuffer(state, buffer, flags)</function> — register a buffer to be modified within the current generic WAL record. This function returns a pointer to a temporary copy of the buffer's page, where modifications should be made. (Do not modify the buffer's contents directly.) The third argument is a bit mask of flags applicable to the operation. 
Currently the only such flag is - <literal>GENERIC_XLOG_FULL_IMAGE</>, which indicates that a full-page + <literal>GENERIC_XLOG_FULL_IMAGE</literal>, which indicates that a full-page image rather than a delta update should be included in the WAL record. Typically this flag would be set if the page is new or has been rewritten completely. - <function>GenericXLogRegisterBuffer</> can be repeated if the + <function>GenericXLogRegisterBuffer</function> can be repeated if the WAL-logged action needs to modify multiple pages. </para> </listitem> @@ -54,7 +54,7 @@ <listitem> <para> - <function>GenericXLogFinish(state)</> — apply the changes to + <function>GenericXLogFinish(state)</function> — apply the changes to the buffers and emit the generic WAL record. </para> </listitem> @@ -63,7 +63,7 @@ <para> WAL record construction can be canceled between any of the above steps by - calling <function>GenericXLogAbort(state)</>. This will discard all + calling <function>GenericXLogAbort(state)</function>. This will discard all changes to the page image copies. </para> @@ -75,13 +75,13 @@ <listitem> <para> No direct modifications of buffers are allowed! All modifications must - be done in copies acquired from <function>GenericXLogRegisterBuffer()</>. + be done in copies acquired from <function>GenericXLogRegisterBuffer()</function>. In other words, code that makes generic WAL records should never call - <function>BufferGetPage()</> for itself. However, it remains the + <function>BufferGetPage()</function> for itself. However, it remains the caller's responsibility to pin/unpin and lock/unlock the buffers at appropriate times. Exclusive lock must be held on each target buffer - from before <function>GenericXLogRegisterBuffer()</> until after - <function>GenericXLogFinish()</>. + from before <function>GenericXLogRegisterBuffer()</function> until after + <function>GenericXLogFinish()</function>. 
</para> </listitem> @@ -97,7 +97,7 @@ <listitem> <para> The maximum number of buffers that can be registered for a generic WAL - record is <literal>MAX_GENERIC_XLOG_PAGES</>. An error will be thrown + record is <literal>MAX_GENERIC_XLOG_PAGES</literal>. An error will be thrown if this limit is exceeded. </para> </listitem> @@ -106,26 +106,26 @@ <para> Generic WAL assumes that the pages to be modified have standard layout, and in particular that there is no useful data between - <structfield>pd_lower</> and <structfield>pd_upper</>. + <structfield>pd_lower</structfield> and <structfield>pd_upper</structfield>. </para> </listitem> <listitem> <para> Since you are modifying copies of buffer - pages, <function>GenericXLogStart()</> does not start a critical + pages, <function>GenericXLogStart()</function> does not start a critical section. Thus, you can safely do memory allocation, error throwing, - etc. between <function>GenericXLogStart()</> and - <function>GenericXLogFinish()</>. The only actual critical section is - present inside <function>GenericXLogFinish()</>. There is no need to - worry about calling <function>GenericXLogAbort()</> during an error + etc. between <function>GenericXLogStart()</function> and + <function>GenericXLogFinish()</function>. The only actual critical section is + present inside <function>GenericXLogFinish()</function>. There is no need to + worry about calling <function>GenericXLogAbort()</function> during an error exit, either. </para> </listitem> <listitem> <para> - <function>GenericXLogFinish()</> takes care of marking buffers dirty + <function>GenericXLogFinish()</function> takes care of marking buffers dirty and setting their LSNs. You do not need to do this explicitly. 
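The register-a-copy / apply-on-finish discipline described in these notes can be modeled in a few lines. This is an illustrative analogue only: the real API is C, the registered-page copies feed WAL record construction (full image or byte-by-byte delta), and `finish` here stands in for the critical section inside `GenericXLogFinish()`; all names below are invented for the sketch.

```python
# Toy model of the generic-WAL flow: register a buffer to get a scratch
# copy, modify only the copy, and make changes visible atomically on
# finish; abort simply discards the copies.

class GenericXLogState:
    def __init__(self, relation):
        self.relation = relation
        self.pages = {}                # buffer id -> scratch copy

    def register_buffer(self, buffers, buf_id):
        # Hand back a modifiable copy; the shared buffer is untouched,
        # mirroring "never modify the buffer's contents directly".
        self.pages[buf_id] = bytearray(buffers[buf_id])
        return self.pages[buf_id]

    def finish(self, buffers):
        # "Critical section": copy every scratch page back at once
        # (the real code also marks buffers dirty and sets LSNs).
        for buf_id, copy in self.pages.items():
            buffers[buf_id][:] = copy
        self.pages.clear()

    def abort(self):
        self.pages.clear()             # discard all changes to the copies

buffers = {1: bytearray(b"old page")}
state = GenericXLogState("my_rel")
page = state.register_buffer(buffers, 1)
page[:3] = b"new"
print(buffers[1])                      # still b'old page': copy was modified
state.finish(buffers)
print(buffers[1])                      # b'new page': applied on finish
```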
</para> </listitem> @@ -148,7 +148,7 @@ <listitem> <para> - If <literal>GENERIC_XLOG_FULL_IMAGE</> is not specified for a + If <literal>GENERIC_XLOG_FULL_IMAGE</literal> is not specified for a registered buffer, the generic WAL record contains a delta between the old and the new page images. This delta is based on byte-by-byte comparison. This is not very compact for the case of moving data diff --git a/doc/src/sgml/geqo.sgml b/doc/src/sgml/geqo.sgml index e0f8adcd6ed..99ee3ebca01 100644 --- a/doc/src/sgml/geqo.sgml +++ b/doc/src/sgml/geqo.sgml @@ -88,7 +88,7 @@ </para> <para> - According to the <systemitem class="resource">comp.ai.genetic</> <acronym>FAQ</acronym> it cannot be stressed too + According to the <systemitem class="resource">comp.ai.genetic</systemitem> <acronym>FAQ</acronym> it cannot be stressed too strongly that a <acronym>GA</acronym> is not a pure random search for a solution to a problem. A <acronym>GA</acronym> uses stochastic processes, but the result is distinctly non-random (better than random). @@ -222,7 +222,7 @@ are considered; and all the initially-determined relation scan plans are available. The estimated cost is the cheapest of these possibilities.) Join sequences with lower estimated cost are considered - <quote>more fit</> than those with higher cost. The genetic algorithm + <quote>more fit</quote> than those with higher cost. The genetic algorithm discards the least fit candidates. Then new candidates are generated by combining genes of more-fit candidates — that is, by using randomly-chosen portions of known low-cost join sequences to create @@ -235,20 +235,20 @@ <para> This process is inherently nondeterministic, because of the randomized choices made during both the initial population selection and subsequent - <quote>mutation</> of the best candidates. To avoid surprising changes + <quote>mutation</quote> of the best candidates. 
To avoid surprising changes of the selected plan, each run of the GEQO algorithm restarts its random number generator with the current <xref linkend="guc-geqo-seed"> - parameter setting. As long as <varname>geqo_seed</> and the other + parameter setting. As long as <varname>geqo_seed</varname> and the other GEQO parameters are kept fixed, the same plan will be generated for a given query (and other planner inputs such as statistics). To experiment - with different search paths, try changing <varname>geqo_seed</>. + with different search paths, try changing <varname>geqo_seed</varname>. </para> </sect2> <sect2 id="geqo-future"> <title>Future Implementation Tasks for - <productname>PostgreSQL</> <acronym>GEQO</acronym></title> + <productname>PostgreSQL</productname> <acronym>GEQO</acronym></title> <para> Work is still needed to improve the genetic algorithm parameter diff --git a/doc/src/sgml/gin.sgml b/doc/src/sgml/gin.sgml index 7c2321ec3c3..873627a210b 100644 --- a/doc/src/sgml/gin.sgml +++ b/doc/src/sgml/gin.sgml @@ -21,15 +21,15 @@ </para> <para> - We use the word <firstterm>item</> to refer to a composite value that - is to be indexed, and the word <firstterm>key</> to refer to an element + We use the word <firstterm>item</firstterm> to refer to a composite value that + is to be indexed, and the word <firstterm>key</firstterm> to refer to an element value. <acronym>GIN</acronym> always stores and searches for keys, not item values per se. </para> <para> A <acronym>GIN</acronym> index stores a set of (key, posting list) pairs, - where a <firstterm>posting list</> is a set of row IDs in which the key + where a <firstterm>posting list</firstterm> is a set of row IDs in which the key occurs. The same row ID can appear in multiple posting lists, since an item can contain more than one key. 
Each key value is stored only once, so a <acronym>GIN</acronym> index is very compact for cases @@ -66,7 +66,7 @@ <title>Built-in Operator Classes</title> <para> - The core <productname>PostgreSQL</> distribution + The core <productname>PostgreSQL</productname> distribution includes the <acronym>GIN</acronym> operator classes shown in <xref linkend="gin-builtin-opclasses-table">. (Some of the optional modules described in <xref linkend="contrib"> @@ -85,38 +85,38 @@ </thead> <tbody> <row> - <entry><literal>array_ops</></entry> - <entry><type>anyarray</></entry> + <entry><literal>array_ops</literal></entry> + <entry><type>anyarray</type></entry> <entry> - <literal>&&</> - <literal><@</> - <literal>=</> - <literal>@></> + <literal>&&</literal> + <literal><@</literal> + <literal>=</literal> + <literal>@></literal> </entry> </row> <row> - <entry><literal>jsonb_ops</></entry> - <entry><type>jsonb</></entry> + <entry><literal>jsonb_ops</literal></entry> + <entry><type>jsonb</type></entry> <entry> - <literal>?</> - <literal>?&</> - <literal>?|</> - <literal>@></> + <literal>?</literal> + <literal>?&</literal> + <literal>?|</literal> + <literal>@></literal> </entry> </row> <row> - <entry><literal>jsonb_path_ops</></entry> - <entry><type>jsonb</></entry> + <entry><literal>jsonb_path_ops</literal></entry> + <entry><type>jsonb</type></entry> <entry> - <literal>@></> + <literal>@></literal> </entry> </row> <row> - <entry><literal>tsvector_ops</></entry> - <entry><type>tsvector</></entry> + <entry><literal>tsvector_ops</literal></entry> + <entry><type>tsvector</type></entry> <entry> - <literal>@@</> - <literal>@@@</> + <literal>@@</literal> + <literal>@@@</literal> </entry> </row> </tbody> @@ -124,8 +124,8 @@ </table> <para> - Of the two operator classes for type <type>jsonb</>, <literal>jsonb_ops</> - is the default. <literal>jsonb_path_ops</> supports fewer operators but + Of the two operator classes for type <type>jsonb</type>, <literal>jsonb_ops</literal> + is the default. 
<literal>jsonb_path_ops</literal> supports fewer operators but offers better performance for those operators. See <xref linkend="json-indexing"> for details. </para> @@ -157,15 +157,15 @@ <variablelist> <varlistentry> <term><function>Datum *extractValue(Datum itemValue, int32 *nkeys, - bool **nullFlags)</></term> + bool **nullFlags)</function></term> <listitem> <para> Returns a palloc'd array of keys given an item to be indexed. The - number of returned keys must be stored into <literal>*nkeys</>. + number of returned keys must be stored into <literal>*nkeys</literal>. If any of the keys can be null, also palloc an array of - <literal>*nkeys</> <type>bool</type> fields, store its address at - <literal>*nullFlags</>, and set these null flags as needed. - <literal>*nullFlags</> can be left <symbol>NULL</symbol> (its initial value) + <literal>*nkeys</literal> <type>bool</type> fields, store its address at + <literal>*nullFlags</literal>, and set these null flags as needed. + <literal>*nullFlags</literal> can be left <symbol>NULL</symbol> (its initial value) if all keys are non-null. The return value can be <symbol>NULL</symbol> if the item contains no keys. </para> @@ -175,40 +175,40 @@ <varlistentry> <term><function>Datum *extractQuery(Datum query, int32 *nkeys, StrategyNumber n, bool **pmatch, Pointer **extra_data, - bool **nullFlags, int32 *searchMode)</></term> + bool **nullFlags, int32 *searchMode)</function></term> <listitem> <para> Returns a palloc'd array of keys given a value to be queried; that is, - <literal>query</> is the value on the right-hand side of an + <literal>query</literal> is the value on the right-hand side of an indexable operator whose left-hand side is the indexed column. - <literal>n</> is the strategy number of the operator within the + <literal>n</literal> is the strategy number of the operator within the operator class (see <xref linkend="xindex-strategies">). 
- Often, <function>extractQuery</> will need - to consult <literal>n</> to determine the data type of - <literal>query</> and the method it should use to extract key values. - The number of returned keys must be stored into <literal>*nkeys</>. + Often, <function>extractQuery</function> will need + to consult <literal>n</literal> to determine the data type of + <literal>query</literal> and the method it should use to extract key values. + The number of returned keys must be stored into <literal>*nkeys</literal>. If any of the keys can be null, also palloc an array of - <literal>*nkeys</> <type>bool</type> fields, store its address at - <literal>*nullFlags</>, and set these null flags as needed. - <literal>*nullFlags</> can be left <symbol>NULL</symbol> (its initial value) + <literal>*nkeys</literal> <type>bool</type> fields, store its address at + <literal>*nullFlags</literal>, and set these null flags as needed. + <literal>*nullFlags</literal> can be left <symbol>NULL</symbol> (its initial value) if all keys are non-null. - The return value can be <symbol>NULL</symbol> if the <literal>query</> contains no keys. + The return value can be <symbol>NULL</symbol> if the <literal>query</literal> contains no keys. </para> <para> - <literal>searchMode</> is an output argument that allows - <function>extractQuery</> to specify details about how the search + <literal>searchMode</literal> is an output argument that allows + <function>extractQuery</function> to specify details about how the search will be done. - If <literal>*searchMode</> is set to - <literal>GIN_SEARCH_MODE_DEFAULT</> (which is the value it is + If <literal>*searchMode</literal> is set to + <literal>GIN_SEARCH_MODE_DEFAULT</literal> (which is the value it is initialized to before call), only items that match at least one of the returned keys are considered candidate matches. 
- If <literal>*searchMode</> is set to - <literal>GIN_SEARCH_MODE_INCLUDE_EMPTY</>, then in addition to items + If <literal>*searchMode</literal> is set to + <literal>GIN_SEARCH_MODE_INCLUDE_EMPTY</literal>, then in addition to items containing at least one matching key, items that contain no keys at all are considered candidate matches. (This mode is useful for implementing is-subset-of operators, for example.) - If <literal>*searchMode</> is set to <literal>GIN_SEARCH_MODE_ALL</>, + If <literal>*searchMode</literal> is set to <literal>GIN_SEARCH_MODE_ALL</literal>, then all non-null items in the index are considered candidate matches, whether they match any of the returned keys or not. (This mode is much slower than the other two choices, since it requires @@ -217,33 +217,33 @@ in most cases is probably not a good candidate for a GIN operator class.) The symbols to use for setting this mode are defined in - <filename>access/gin.h</>. + <filename>access/gin.h</filename>. </para> <para> - <literal>pmatch</> is an output argument for use when partial match - is supported. To use it, <function>extractQuery</> must allocate - an array of <literal>*nkeys</> booleans and store its address at - <literal>*pmatch</>. Each element of the array should be set to TRUE + <literal>pmatch</literal> is an output argument for use when partial match + is supported. To use it, <function>extractQuery</function> must allocate + an array of <literal>*nkeys</literal> booleans and store its address at + <literal>*pmatch</literal>. Each element of the array should be set to TRUE if the corresponding key requires partial match, FALSE if not. - If <literal>*pmatch</> is set to <symbol>NULL</symbol> then GIN assumes partial match + If <literal>*pmatch</literal> is set to <symbol>NULL</symbol> then GIN assumes partial match is not required. 
The variable is initialized to <symbol>NULL</symbol> before call, so this argument can simply be ignored by operator classes that do not support partial match. </para> <para> - <literal>extra_data</> is an output argument that allows - <function>extractQuery</> to pass additional data to the - <function>consistent</> and <function>comparePartial</> methods. - To use it, <function>extractQuery</> must allocate - an array of <literal>*nkeys</> pointers and store its address at - <literal>*extra_data</>, then store whatever it wants to into the + <literal>extra_data</literal> is an output argument that allows + <function>extractQuery</function> to pass additional data to the + <function>consistent</function> and <function>comparePartial</function> methods. + To use it, <function>extractQuery</function> must allocate + an array of <literal>*nkeys</literal> pointers and store its address at + <literal>*extra_data</literal>, then store whatever it wants to into the individual pointers. The variable is initialized to <symbol>NULL</symbol> before call, so this argument can simply be ignored by operator classes that - do not require extra data. If <literal>*extra_data</> is set, the - whole array is passed to the <function>consistent</> method, and - the appropriate element to the <function>comparePartial</> method. + do not require extra data. If <literal>*extra_data</literal> is set, the + whole array is passed to the <function>consistent</function> method, and + the appropriate element to the <function>comparePartial</function> method. </para> </listitem> @@ -251,10 +251,10 @@ </variablelist> An operator class must also provide a function to check if an indexed item - matches the query. It comes in two flavors, a boolean <function>consistent</> - function, and a ternary <function>triConsistent</> function. - <function>triConsistent</> covers the functionality of both, so providing - <function>triConsistent</> alone is sufficient. 
However, if the boolean + matches the query. It comes in two flavors, a boolean <function>consistent</function> + function, and a ternary <function>triConsistent</function> function. + <function>triConsistent</function> covers the functionality of both, so providing + <function>triConsistent</function> alone is sufficient. However, if the boolean variant is significantly cheaper to calculate, it can be advantageous to provide both. If only the boolean variant is provided, some optimizations that depend on refuting index items before fetching all the keys are @@ -264,48 +264,48 @@ <varlistentry> <term><function>bool consistent(bool check[], StrategyNumber n, Datum query, int32 nkeys, Pointer extra_data[], bool *recheck, - Datum queryKeys[], bool nullFlags[])</></term> + Datum queryKeys[], bool nullFlags[])</function></term> <listitem> <para> Returns TRUE if an indexed item satisfies the query operator with - strategy number <literal>n</> (or might satisfy it, if the recheck + strategy number <literal>n</literal> (or might satisfy it, if the recheck indication is returned). This function does not have direct access to the indexed item's value, since <acronym>GIN</acronym> does not store items explicitly. Rather, what is available is knowledge about which key values extracted from the query appear in a given - indexed item. The <literal>check</> array has length - <literal>nkeys</>, which is the same as the number of keys previously - returned by <function>extractQuery</> for this <literal>query</> datum. + indexed item. The <literal>check</literal> array has length + <literal>nkeys</literal>, which is the same as the number of keys previously + returned by <function>extractQuery</function> for this <literal>query</literal> datum. 
Each element of the - <literal>check</> array is TRUE if the indexed item contains the + <literal>check</literal> array is TRUE if the indexed item contains the corresponding query key, i.e., if (check[i] == TRUE) the i-th key of the - <function>extractQuery</> result array is present in the indexed item. - The original <literal>query</> datum is - passed in case the <function>consistent</> method needs to consult it, - and so are the <literal>queryKeys[]</> and <literal>nullFlags[]</> - arrays previously returned by <function>extractQuery</>. - <literal>extra_data</> is the extra-data array returned by - <function>extractQuery</>, or <symbol>NULL</symbol> if none. + <function>extractQuery</function> result array is present in the indexed item. + The original <literal>query</literal> datum is + passed in case the <function>consistent</function> method needs to consult it, + and so are the <literal>queryKeys[]</literal> and <literal>nullFlags[]</literal> + arrays previously returned by <function>extractQuery</function>. + <literal>extra_data</literal> is the extra-data array returned by + <function>extractQuery</function>, or <symbol>NULL</symbol> if none. </para> <para> - When <function>extractQuery</> returns a null key in - <literal>queryKeys[]</>, the corresponding <literal>check[]</> element + When <function>extractQuery</function> returns a null key in + <literal>queryKeys[]</literal>, the corresponding <literal>check[]</literal> element is TRUE if the indexed item contains a null key; that is, the - semantics of <literal>check[]</> are like <literal>IS NOT DISTINCT - FROM</>. The <function>consistent</> function can examine the - corresponding <literal>nullFlags[]</> element if it needs to tell + semantics of <literal>check[]</literal> are like <literal>IS NOT DISTINCT + FROM</literal>. 
The <function>consistent</function> function can examine the + corresponding <literal>nullFlags[]</literal> element if it needs to tell the difference between a regular value match and a null match. </para> <para> - On success, <literal>*recheck</> should be set to TRUE if the heap + On success, <literal>*recheck</literal> should be set to TRUE if the heap tuple needs to be rechecked against the query operator, or FALSE if the index test is exact. That is, a FALSE return value guarantees that the heap tuple does not match the query; a TRUE return value with - <literal>*recheck</> set to FALSE guarantees that the heap tuple does + <literal>*recheck</literal> set to FALSE guarantees that the heap tuple does match the query; and a TRUE return value with - <literal>*recheck</> set to TRUE means that the heap tuple might match + <literal>*recheck</literal> set to TRUE means that the heap tuple might match the query, so it needs to be fetched and rechecked by evaluating the query operator directly against the originally indexed item. </para> @@ -315,30 +315,30 @@ <varlistentry> <term><function>GinTernaryValue triConsistent(GinTernaryValue check[], StrategyNumber n, Datum query, int32 nkeys, Pointer extra_data[], - Datum queryKeys[], bool nullFlags[])</></term> + Datum queryKeys[], bool nullFlags[])</function></term> <listitem> <para> - <function>triConsistent</> is similar to <function>consistent</>, - but instead of booleans in the <literal>check</> vector, there are + <function>triConsistent</function> is similar to <function>consistent</function>, + but instead of booleans in the <literal>check</literal> vector, there are three possible values for each - key: <literal>GIN_TRUE</>, <literal>GIN_FALSE</> and - <literal>GIN_MAYBE</>. <literal>GIN_FALSE</> and <literal>GIN_TRUE</> + key: <literal>GIN_TRUE</literal>, <literal>GIN_FALSE</literal> and + <literal>GIN_MAYBE</literal>. 
<literal>GIN_FALSE</literal> and <literal>GIN_TRUE</literal> have the same meaning as regular boolean values, while - <literal>GIN_MAYBE</> means that the presence of that key is not known. - When <literal>GIN_MAYBE</> values are present, the function should only - return <literal>GIN_TRUE</> if the item certainly matches whether or + <literal>GIN_MAYBE</literal> means that the presence of that key is not known. + When <literal>GIN_MAYBE</literal> values are present, the function should only + return <literal>GIN_TRUE</literal> if the item certainly matches whether or not the index item contains the corresponding query keys. Likewise, the - function must return <literal>GIN_FALSE</> only if the item certainly - does not match, whether or not it contains the <literal>GIN_MAYBE</> - keys. If the result depends on the <literal>GIN_MAYBE</> entries, i.e., + function must return <literal>GIN_FALSE</literal> only if the item certainly + does not match, whether or not it contains the <literal>GIN_MAYBE</literal> + keys. If the result depends on the <literal>GIN_MAYBE</literal> entries, i.e., the match cannot be confirmed or refuted based on the known query keys, - the function must return <literal>GIN_MAYBE</>. + the function must return <literal>GIN_MAYBE</literal>. </para> <para> - When there are no <literal>GIN_MAYBE</> values in the <literal>check</> - vector, a <literal>GIN_MAYBE</> return value is the equivalent of - setting the <literal>recheck</> flag in the - boolean <function>consistent</> function. + When there are no <literal>GIN_MAYBE</literal> values in the <literal>check</literal> + vector, a <literal>GIN_MAYBE</literal> return value is the equivalent of + setting the <literal>recheck</literal> flag in the + boolean <function>consistent</function> function. 
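To make the boolean/ternary correspondence concrete, here is a standalone sketch, not PostgreSQL source: a hypothetical "item contains all query keys" predicate (AND over the `check` vector), first in the boolean `consistent` style and then in the ternary `triConsistent` style. `GinTernaryValue` is modeled with a plain enum and the real function signatures are simplified away.

```c
#include <stdbool.h>

typedef enum { GIN_FALSE = 0, GIN_TRUE = 1, GIN_MAYBE = 2 } GinTernaryValue;

/*
 * Boolean flavor: every check[i] must be true.  *recheck stays false
 * because, in this simplified model, key presence alone decides the
 * match exactly.
 */
static bool
all_keys_consistent(const bool check[], int nkeys, bool *recheck)
{
    *recheck = false;
    for (int i = 0; i < nkeys; i++)
        if (!check[i])
            return false;
    return true;
}

/*
 * Ternary flavor of the same predicate: GIN_FALSE if any key is
 * certainly absent, GIN_MAYBE if the outcome hinges on an unknown key,
 * GIN_TRUE only when every key is certainly present.
 */
static GinTernaryValue
all_keys_tri_consistent(const GinTernaryValue check[], int nkeys)
{
    GinTernaryValue result = GIN_TRUE;
    for (int i = 0; i < nkeys; i++)
    {
        if (check[i] == GIN_FALSE)
            return GIN_FALSE;   /* certainly does not match */
        if (check[i] == GIN_MAYBE)
            result = GIN_MAYBE; /* match depends on unknown keys */
    }
    return result;
}
```

Note how a `GIN_MAYBE` result here plays the role that `*recheck = true` plays in the boolean flavor, as the text above describes.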
</para> </listitem> </varlistentry> @@ -352,7 +352,7 @@ <variablelist> <varlistentry> - <term><function>int compare(Datum a, Datum b)</></term> + <term><function>int compare(Datum a, Datum b)</function></term> <listitem> <para> Compares two keys (not indexed items!) and returns an integer less than @@ -364,13 +364,13 @@ </varlistentry> </variablelist> - Alternatively, if the operator class does not provide a <function>compare</> + Alternatively, if the operator class does not provide a <function>compare</function> method, GIN will look up the default btree operator class for the index key data type, and use its comparison function. It is recommended to specify the comparison function in a GIN operator class that is meant for just one data type, as looking up the btree operator class costs a few cycles. However, polymorphic GIN operator classes (such - as <literal>array_ops</>) typically cannot specify a single comparison + as <literal>array_ops</literal>) typically cannot specify a single comparison function. </para> @@ -381,7 +381,7 @@ <variablelist> <varlistentry> <term><function>int comparePartial(Datum partial_key, Datum key, StrategyNumber n, - Pointer extra_data)</></term> + Pointer extra_data)</function></term> <listitem> <para> Compare a partial-match query key to an index key. Returns an integer @@ -389,11 +389,11 @@ does not match the query, but the index scan should continue; zero means that the index key does match the query; greater than zero indicates that the index scan should stop because no more matches - are possible. The strategy number <literal>n</> of the operator + are possible. The strategy number <literal>n</literal> of the operator that generated the partial match query is provided, in case its semantics are needed to determine when to end the scan. Also, - <literal>extra_data</> is the corresponding element of the extra-data - array made by <function>extractQuery</>, or <symbol>NULL</symbol> if none. 
+ <literal>extra_data</literal> is the corresponding element of the extra-data + array made by <function>extractQuery</function>, or <symbol>NULL</symbol> if none. Null keys are never passed to this function. </para> </listitem> @@ -402,25 +402,25 @@ </para> <para> - To support <quote>partial match</> queries, an operator class must - provide the <function>comparePartial</> method, and its - <function>extractQuery</> method must set the <literal>pmatch</> + To support <quote>partial match</quote> queries, an operator class must + provide the <function>comparePartial</function> method, and its + <function>extractQuery</function> method must set the <literal>pmatch</literal> parameter when a partial-match query is encountered. See <xref linkend="gin-partial-match"> for details. </para> <para> - The actual data types of the various <literal>Datum</> values mentioned + The actual data types of the various <literal>Datum</literal> values mentioned above vary depending on the operator class. The item values passed to - <function>extractValue</> are always of the operator class's input type, and - all key values must be of the class's <literal>STORAGE</> type. The type of - the <literal>query</> argument passed to <function>extractQuery</>, - <function>consistent</> and <function>triConsistent</> is whatever is the + <function>extractValue</function> are always of the operator class's input type, and + all key values must be of the class's <literal>STORAGE</literal> type. The type of + the <literal>query</literal> argument passed to <function>extractQuery</function>, + <function>consistent</function> and <function>triConsistent</function> is whatever is the right-hand input type of the class member operator identified by the strategy number. This need not be the same as the indexed type, so long as key values of the correct type can be extracted from it. 
However, it is recommended that the SQL declarations of these three support functions use - the opclass's indexed data type for the <literal>query</> argument, even + the opclass's indexed data type for the <literal>query</literal> argument, even though the actual type might be something else depending on the operator. </para> @@ -434,8 +434,8 @@ constructed over keys, where each key is an element of one or more indexed items (a member of an array, for example) and where each tuple in a leaf page contains either a pointer to a B-tree of heap pointers (a - <quote>posting tree</>), or a simple list of heap pointers (a <quote>posting - list</>) when the list is small enough to fit into a single index tuple along + <quote>posting tree</quote>), or a simple list of heap pointers (a <quote>posting + list</quote>) when the list is small enough to fit into a single index tuple along with the key value. </para> @@ -443,7 +443,7 @@ As of <productname>PostgreSQL</productname> 9.1, null key values can be included in the index. Also, placeholder nulls are included in the index for indexed items that are null or contain no keys according to - <function>extractValue</>. This allows searches that should find empty + <function>extractValue</function>. This allows searches that should find empty items to do so. </para> @@ -461,7 +461,7 @@ intrinsic nature of inverted indexes: inserting or updating one heap row can cause many inserts into the index (one for each key extracted from the indexed item). As of <productname>PostgreSQL</productname> 8.4, - <acronym>GIN</> is capable of postponing much of this work by inserting + <acronym>GIN</acronym> is capable of postponing much of this work by inserting new tuples into a temporary, unsorted list of pending entries. 
When the table is vacuumed or autoanalyzed, or when the <function>gin_clean_pending_list</function> function is called, or if the @@ -479,7 +479,7 @@ of pending entries in addition to searching the regular index, and so a large list of pending entries will slow searches significantly. Another disadvantage is that, while most updates are fast, an update - that causes the pending list to become <quote>too large</> will incur an + that causes the pending list to become <quote>too large</quote> will incur an immediate cleanup cycle and thus be much slower than other updates. Proper use of autovacuum can minimize both of these problems. </para> @@ -497,15 +497,15 @@ <title>Partial Match Algorithm</title> <para> - GIN can support <quote>partial match</> queries, in which the query + GIN can support <quote>partial match</quote> queries, in which the query does not determine an exact match for one or more keys, but the possible matches fall within a reasonably narrow range of key values (within the - key sorting order determined by the <function>compare</> support method). - The <function>extractQuery</> method, instead of returning a key value + key sorting order determined by the <function>compare</function> support method). + The <function>extractQuery</function> method, instead of returning a key value to be matched exactly, returns a key value that is the lower bound of - the range to be searched, and sets the <literal>pmatch</> flag true. - The key range is then scanned using the <function>comparePartial</> - method. <function>comparePartial</> must return zero for a matching + the range to be searched, and sets the <literal>pmatch</literal> flag true. + The key range is then scanned using the <function>comparePartial</function> + method. <function>comparePartial</function> must return zero for a matching index key, less than zero for a non-match that is still within the range to be searched, or greater than zero if the index key is past the range that could match. 
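The scan protocol just described can be sketched in standalone C. This is a hypothetical prefix-match operator class on plain strings, not PostgreSQL source: the prefix itself serves as the lower-bound key, and a simple loop over a sorted key array stands in for walking the key B-tree.

```c
#include <string.h>

/*
 * Hypothetical comparePartial for string prefix search:
 *    0 -> index key matches (starts with the prefix)
 *   <0 -> no match, but the scan should continue
 *   >0 -> index key is past the range, stop the scan
 */
static int
prefix_compare_partial(const char *partial_key, const char *key)
{
    int cmp = strncmp(key, partial_key, strlen(partial_key));

    if (cmp == 0)
        return 0;               /* key begins with the prefix: match */
    return (cmp < 0) ? -1 : 1;
}

/*
 * Driver mimicking the index scan: keys[] is sorted per the compare
 * method; scan from the lower bound and stop at the first result > 0.
 * Returns the number of matching keys.
 */
static int
scan_partial(const char *keys[], int nkeys, const char *partial_key)
{
    int matches = 0;

    for (int i = 0; i < nkeys; i++)
    {
        int r = prefix_compare_partial(partial_key, keys[i]);

        if (r > 0)
            break;              /* past the matching range */
        if (r == 0)
            matches++;
    }
    return matches;
}
```

The positive return value is what makes partial match efficient: because keys are scanned in `compare` order, the first key past the range proves no later key can match, so the scan can terminate early.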
@@ -542,7 +542,7 @@ <listitem> <para> Build time for a <acronym>GIN</acronym> index is very sensitive to - the <varname>maintenance_work_mem</> setting; it doesn't pay to + the <varname>maintenance_work_mem</varname> setting; it doesn't pay to skimp on work memory during index creation. </para> </listitem> @@ -553,18 +553,18 @@ <listitem> <para> During a series of insertions into an existing <acronym>GIN</acronym> - index that has <literal>fastupdate</> enabled, the system will clean up + index that has <literal>fastupdate</literal> enabled, the system will clean up the pending-entry list whenever the list grows larger than - <varname>gin_pending_list_limit</>. To avoid fluctuations in observed + <varname>gin_pending_list_limit</varname>. To avoid fluctuations in observed response time, it's desirable to have pending-list cleanup occur in the background (i.e., via autovacuum). Foreground cleanup operations - can be avoided by increasing <varname>gin_pending_list_limit</> + can be avoided by increasing <varname>gin_pending_list_limit</varname> or making autovacuum more aggressive. However, enlarging the threshold of the cleanup operation means that if a foreground cleanup does occur, it will take even longer. </para> <para> - <varname>gin_pending_list_limit</> can be overridden for individual + <varname>gin_pending_list_limit</varname> can be overridden for individual GIN indexes by changing storage parameters, which allows each GIN index to have its own cleanup threshold. For example, it's possible to increase the threshold only for the GIN @@ -616,7 +616,7 @@ <para> <acronym>GIN</acronym> assumes that indexable operators are strict. 
This - means that <function>extractValue</> will not be called at all on a null + means that <function>extractValue</function> will not be called at all on a null item value (instead, a placeholder index entry is created automatically), and <function>extractQuery</function> will not be called on a null query value either (instead, the query is presumed to be unsatisfiable). Note @@ -629,36 +629,36 @@ <title>Examples</title> <para> - The core <productname>PostgreSQL</> distribution + The core <productname>PostgreSQL</productname> distribution includes the <acronym>GIN</acronym> operator classes previously shown in <xref linkend="gin-builtin-opclasses-table">. - The following <filename>contrib</> modules also contain + The following <filename>contrib</filename> modules also contain <acronym>GIN</acronym> operator classes: <variablelist> <varlistentry> - <term><filename>btree_gin</></term> + <term><filename>btree_gin</filename></term> <listitem> <para>B-tree equivalent functionality for several data types</para> </listitem> </varlistentry> <varlistentry> - <term><filename>hstore</></term> + <term><filename>hstore</filename></term> <listitem> <para>Module for storing (key, value) pairs</para> </listitem> </varlistentry> <varlistentry> - <term><filename>intarray</></term> + <term><filename>intarray</filename></term> <listitem> <para>Enhanced support for <type>int[]</type></para> </listitem> </varlistentry> <varlistentry> - <term><filename>pg_trgm</></term> + <term><filename>pg_trgm</filename></term> <listitem> <para>Text similarity using trigram matching</para> </listitem> diff --git a/doc/src/sgml/gist.sgml b/doc/src/sgml/gist.sgml index 1648eb3672f..4e4470d439b 100644 --- a/doc/src/sgml/gist.sgml +++ b/doc/src/sgml/gist.sgml @@ -44,7 +44,7 @@ <title>Built-in Operator Classes</title> <para> - The core <productname>PostgreSQL</> distribution + The core <productname>PostgreSQL</productname> distribution includes the <acronym>GiST</acronym> operator classes shown in <xref 
linkend="gist-builtin-opclasses-table">. (Some of the optional modules described in <xref linkend="contrib"> @@ -64,142 +64,142 @@ </thead> <tbody> <row> - <entry><literal>box_ops</></entry> - <entry><type>box</></entry> + <entry><literal>box_ops</literal></entry> + <entry><type>box</type></entry> <entry> - <literal>&&</> - <literal>&></> - <literal>&<</> - <literal>&<|</> - <literal>>></> - <literal><<</> - <literal><<|</> - <literal><@</> - <literal>@></> - <literal>@</> - <literal>|&></> - <literal>|>></> - <literal>~</> - <literal>~=</> + <literal>&&</literal> + <literal>&></literal> + <literal>&<</literal> + <literal>&<|</literal> + <literal>>></literal> + <literal><<</literal> + <literal><<|</literal> + <literal><@</literal> + <literal>@></literal> + <literal>@</literal> + <literal>|&></literal> + <literal>|>></literal> + <literal>~</literal> + <literal>~=</literal> </entry> <entry> </entry> </row> <row> - <entry><literal>circle_ops</></entry> - <entry><type>circle</></entry> + <entry><literal>circle_ops</literal></entry> + <entry><type>circle</type></entry> <entry> - <literal>&&</> - <literal>&></> - <literal>&<</> - <literal>&<|</> - <literal>>></> - <literal><<</> - <literal><<|</> - <literal><@</> - <literal>@></> - <literal>@</> - <literal>|&></> - <literal>|>></> - <literal>~</> - <literal>~=</> + <literal>&&</literal> + <literal>&></literal> + <literal>&<</literal> + <literal>&<|</literal> + <literal>>></literal> + <literal><<</literal> + <literal><<|</literal> + <literal><@</literal> + <literal>@></literal> + <literal>@</literal> + <literal>|&></literal> + <literal>|>></literal> + <literal>~</literal> + <literal>~=</literal> </entry> <entry> - <literal><-></> + <literal><-></literal> </entry> </row> <row> - <entry><literal>inet_ops</></entry> - <entry><type>inet</>, <type>cidr</></entry> + <entry><literal>inet_ops</literal></entry> + <entry><type>inet</type>, <type>cidr</type></entry> <entry> - <literal>&&</> - <literal>>></> - <literal>>>=</> - 
<literal>></> - <literal>>=</> - <literal><></> - <literal><<</> - <literal><<=</> - <literal><</> - <literal><=</> - <literal>=</> + <literal>&&</literal> + <literal>>></literal> + <literal>>>=</literal> + <literal>></literal> + <literal>>=</literal> + <literal><></literal> + <literal><<</literal> + <literal><<=</literal> + <literal><</literal> + <literal><=</literal> + <literal>=</literal> </entry> <entry> </entry> </row> <row> - <entry><literal>point_ops</></entry> - <entry><type>point</></entry> + <entry><literal>point_ops</literal></entry> + <entry><type>point</type></entry> <entry> - <literal>>></> - <literal>>^</> - <literal><<</> - <literal><@</> - <literal><@</> - <literal><@</> - <literal><^</> - <literal>~=</> + <literal>>></literal> + <literal>>^</literal> + <literal><<</literal> + <literal><@</literal> + <literal><@</literal> + <literal><@</literal> + <literal><^</literal> + <literal>~=</literal> </entry> <entry> - <literal><-></> + <literal><-></literal> </entry> </row> <row> - <entry><literal>poly_ops</></entry> - <entry><type>polygon</></entry> + <entry><literal>poly_ops</literal></entry> + <entry><type>polygon</type></entry> <entry> - <literal>&&</> - <literal>&></> - <literal>&<</> - <literal>&<|</> - <literal>>></> - <literal><<</> - <literal><<|</> - <literal><@</> - <literal>@></> - <literal>@</> - <literal>|&></> - <literal>|>></> - <literal>~</> - <literal>~=</> + <literal>&&</literal> + <literal>&></literal> + <literal>&<</literal> + <literal>&<|</literal> + <literal>>></literal> + <literal><<</literal> + <literal><<|</literal> + <literal><@</literal> + <literal>@></literal> + <literal>@</literal> + <literal>|&></literal> + <literal>|>></literal> + <literal>~</literal> + <literal>~=</literal> </entry> <entry> - <literal><-></> + <literal><-></literal> </entry> </row> <row> - <entry><literal>range_ops</></entry> + <entry><literal>range_ops</literal></entry> <entry>any range type</entry> <entry> - <literal>&&</> - <literal>&></> - 
<literal>&<</> - <literal>>></> - <literal><<</> - <literal><@</> - <literal>-|-</> - <literal>=</> - <literal>@></> - <literal>@></> + <literal>&&</literal> + <literal>&></literal> + <literal>&<</literal> + <literal>>></literal> + <literal><<</literal> + <literal><@</literal> + <literal>-|-</literal> + <literal>=</literal> + <literal>@></literal> + <literal>@></literal> </entry> <entry> </entry> </row> <row> - <entry><literal>tsquery_ops</></entry> - <entry><type>tsquery</></entry> + <entry><literal>tsquery_ops</literal></entry> + <entry><type>tsquery</type></entry> <entry> - <literal><@</> - <literal>@></> + <literal><@</literal> + <literal>@></literal> </entry> <entry> </entry> </row> <row> - <entry><literal>tsvector_ops</></entry> - <entry><type>tsvector</></entry> + <entry><literal>tsvector_ops</literal></entry> + <entry><type>tsvector</type></entry> <entry> - <literal>@@</> + <literal>@@</literal> </entry> <entry> </entry> @@ -209,9 +209,9 @@ </table> <para> - For historical reasons, the <literal>inet_ops</> operator class is - not the default class for types <type>inet</> and <type>cidr</>. - To use it, mention the class name in <command>CREATE INDEX</>, + For historical reasons, the <literal>inet_ops</literal> operator class is + not the default class for types <type>inet</type> and <type>cidr</type>. + To use it, mention the class name in <command>CREATE INDEX</command>, for example <programlisting> CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops); @@ -270,53 +270,53 @@ CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops); There are five methods that an index operator class for <acronym>GiST</acronym> must provide, and four that are optional. 
Correctness of the index is ensured - by proper implementation of the <function>same</>, <function>consistent</> - and <function>union</> methods, while efficiency (size and speed) of the - index will depend on the <function>penalty</> and <function>picksplit</> + by proper implementation of the <function>same</function>, <function>consistent</function> + and <function>union</function> methods, while efficiency (size and speed) of the + index will depend on the <function>penalty</function> and <function>picksplit</function> methods. - Two optional methods are <function>compress</> and - <function>decompress</>, which allow an index to have internal tree data of + Two optional methods are <function>compress</function> and + <function>decompress</function>, which allow an index to have internal tree data of a different type than the data it indexes. The leaves are to be of the indexed data type, while the other tree nodes can be of any C struct (but - you still have to follow <productname>PostgreSQL</> data type rules here, - see about <literal>varlena</> for variable sized data). If the tree's - internal data type exists at the SQL level, the <literal>STORAGE</> option - of the <command>CREATE OPERATOR CLASS</> command can be used. - The optional eighth method is <function>distance</>, which is needed + you still have to follow <productname>PostgreSQL</productname> data type rules here, + see about <literal>varlena</literal> for variable sized data). If the tree's + internal data type exists at the SQL level, the <literal>STORAGE</literal> option + of the <command>CREATE OPERATOR CLASS</command> command can be used. + The optional eighth method is <function>distance</function>, which is needed if the operator class wishes to support ordered scans (nearest-neighbor - searches). The optional ninth method <function>fetch</> is needed if the + searches). 
The optional ninth method <function>fetch</function> is needed if the operator class wishes to support index-only scans, except when the - <function>compress</> method is omitted. + <function>compress</function> method is omitted. </para> <variablelist> <varlistentry> - <term><function>consistent</></term> + <term><function>consistent</function></term> <listitem> <para> - Given an index entry <literal>p</> and a query value <literal>q</>, + Given an index entry <literal>p</literal> and a query value <literal>q</literal>, this function determines whether the index entry is - <quote>consistent</> with the query; that is, could the predicate - <quote><replaceable>indexed_column</> - <replaceable>indexable_operator</> <literal>q</></quote> be true for + <quote>consistent</quote> with the query; that is, could the predicate + <quote><replaceable>indexed_column</replaceable> + <replaceable>indexable_operator</replaceable> <literal>q</literal></quote> be true for any row represented by the index entry? For a leaf index entry this is equivalent to testing the indexable condition, while for an internal tree node this determines whether it is necessary to scan the subtree of the index represented by the tree node. When the result is - <literal>true</>, a <literal>recheck</> flag must also be returned. + <literal>true</literal>, a <literal>recheck</literal> flag must also be returned. This indicates whether the predicate is certainly true or only possibly - true. If <literal>recheck</> = <literal>false</> then the index has - tested the predicate condition exactly, whereas if <literal>recheck</> - = <literal>true</> the row is only a candidate match. In that case the + true. If <literal>recheck</literal> = <literal>false</literal> then the index has + tested the predicate condition exactly, whereas if <literal>recheck</literal> + = <literal>true</literal> the row is only a candidate match. 
In that case the system will automatically evaluate the - <replaceable>indexable_operator</> against the actual row value to see + <replaceable>indexable_operator</replaceable> against the actual row value to see if it is really a match. This convention allows <acronym>GiST</acronym> to support both lossless and lossy index structures. </para> <para> - The <acronym>SQL</> declaration of the function must look like this: + The <acronym>SQL</acronym> declaration of the function must look like this: <programlisting> CREATE OR REPLACE FUNCTION my_consistent(internal, data_type, smallint, oid, internal) @@ -356,23 +356,23 @@ my_consistent(PG_FUNCTION_ARGS) } </programlisting> - Here, <varname>key</> is an element in the index and <varname>query</> - the value being looked up in the index. The <literal>StrategyNumber</> + Here, <varname>key</varname> is an element in the index and <varname>query</varname> + the value being looked up in the index. The <literal>StrategyNumber</literal> parameter indicates which operator of your operator class is being applied — it matches one of the operator numbers in the - <command>CREATE OPERATOR CLASS</> command. + <command>CREATE OPERATOR CLASS</command> command. </para> <para> Depending on which operators you have included in the class, the data - type of <varname>query</> could vary with the operator, since it will + type of <varname>query</varname> could vary with the operator, since it will be whatever type is on the righthand side of the operator, which might be different from the indexed data type appearing on the lefthand side. (The above code skeleton assumes that only one type is possible; if - not, fetching the <varname>query</> argument value would have to depend + not, fetching the <varname>query</varname> argument value would have to depend on the operator.) 
It is recommended that the SQL declaration of - the <function>consistent</> function use the opclass's indexed data - type for the <varname>query</> argument, even though the actual type + the <function>consistent</function> function use the opclass's indexed data + type for the <varname>query</varname> argument, even though the actual type might be something else depending on the operator. </para> @@ -380,7 +380,7 @@ my_consistent(PG_FUNCTION_ARGS) </varlistentry> <varlistentry> - <term><function>union</></term> + <term><function>union</function></term> <listitem> <para> This method consolidates information in the tree. Given a set of @@ -389,7 +389,7 @@ my_consistent(PG_FUNCTION_ARGS) </para> <para> - The <acronym>SQL</> declaration of the function must look like this: + The <acronym>SQL</acronym> declaration of the function must look like this: <programlisting> CREATE OR REPLACE FUNCTION my_union(internal, internal) @@ -439,44 +439,44 @@ my_union(PG_FUNCTION_ARGS) <para> As you can see, in this skeleton we're dealing with a data type - where <literal>union(X, Y, Z) = union(union(X, Y), Z)</>. It's easy + where <literal>union(X, Y, Z) = union(union(X, Y), Z)</literal>. It's easy enough to support data types where this is not the case, by implementing the proper union algorithm in this - <acronym>GiST</> support method. + <acronym>GiST</acronym> support method. </para> <para> - The result of the <function>union</> function must be a value of the + The result of the <function>union</function> function must be a value of the index's storage type, whatever that is (it might or might not be - different from the indexed column's type). The <function>union</> - function should return a pointer to newly <function>palloc()</>ed + different from the indexed column's type). The <function>union</function> + function should return a pointer to newly <function>palloc()</function>ed memory. You can't just return the input value as-is, even if there is no type change. 
</para> <para> - As shown above, the <function>union</> function's - first <type>internal</> argument is actually - a <structname>GistEntryVector</> pointer. The second argument is a + As shown above, the <function>union</function> function's + first <type>internal</type> argument is actually + a <structname>GistEntryVector</structname> pointer. The second argument is a pointer to an integer variable, which can be ignored. (It used to be - required that the <function>union</> function store the size of its + required that the <function>union</function> function store the size of its result value into that variable, but this is no longer necessary.) </para> </listitem> </varlistentry> <varlistentry> - <term><function>compress</></term> + <term><function>compress</function></term> <listitem> <para> Converts a data item into a format suitable for physical storage in an index page. - If the <function>compress</> method is omitted, data items are stored + If the <function>compress</function> method is omitted, data items are stored in the index without modification. </para> <para> - The <acronym>SQL</> declaration of the function must look like this: + The <acronym>SQL</acronym> declaration of the function must look like this: <programlisting> CREATE OR REPLACE FUNCTION my_compress(internal) @@ -519,7 +519,7 @@ my_compress(PG_FUNCTION_ARGS) </para> <para> - You have to adapt <replaceable>compressed_data_type</> to the specific + You have to adapt <replaceable>compressed_data_type</replaceable> to the specific type you're converting to in order to compress your leaf nodes, of course. </para> @@ -527,24 +527,24 @@ my_compress(PG_FUNCTION_ARGS) </varlistentry> <varlistentry> - <term><function>decompress</></term> + <term><function>decompress</function></term> <listitem> <para> Converts the stored representation of a data item into a format that can be manipulated by the other GiST methods in the operator class. 
- If the <function>decompress</> method is omitted, it is assumed that + If the <function>decompress</function> method is omitted, it is assumed that the other GiST methods can work directly on the stored data format. - (<function>decompress</> is not necessarily the reverse of + (<function>decompress</function> is not necessarily the reverse of the <function>compress</function> method; in particular, if <function>compress</function> is lossy then it's impossible - for <function>decompress</> to exactly reconstruct the original - data. <function>decompress</> is not necessarily equivalent - to <function>fetch</>, either, since the other GiST methods might not + for <function>decompress</function> to exactly reconstruct the original + data. <function>decompress</function> is not necessarily equivalent + to <function>fetch</function>, either, since the other GiST methods might not require full reconstruction of the data.) </para> <para> - The <acronym>SQL</> declaration of the function must look like this: + The <acronym>SQL</acronym> declaration of the function must look like this: <programlisting> CREATE OR REPLACE FUNCTION my_decompress(internal) @@ -573,7 +573,7 @@ my_decompress(PG_FUNCTION_ARGS) </varlistentry> <varlistentry> - <term><function>penalty</></term> + <term><function>penalty</function></term> <listitem> <para> Returns a value indicating the <quote>cost</quote> of inserting the new @@ -584,7 +584,7 @@ my_decompress(PG_FUNCTION_ARGS) </para> <para> - The <acronym>SQL</> declaration of the function must look like this: + The <acronym>SQL</acronym> declaration of the function must look like this: <programlisting> CREATE OR REPLACE FUNCTION my_penalty(internal, internal, internal) @@ -612,15 +612,15 @@ my_penalty(PG_FUNCTION_ARGS) } </programlisting> - For historical reasons, the <function>penalty</> function doesn't - just return a <type>float</> result; instead it has to store the value + For historical reasons, the <function>penalty</function> function 
doesn't + just return a <type>float</type> result; instead it has to store the value at the location indicated by the third argument. The return value per se is ignored, though it's conventional to pass back the address of that argument. </para> <para> - The <function>penalty</> function is crucial to good performance of + The <function>penalty</function> function is crucial to good performance of the index. It'll get used at insertion time to determine which branch to follow when choosing where to add the new entry in the tree. At query time, the more balanced the index, the quicker the lookup. @@ -629,7 +629,7 @@ my_penalty(PG_FUNCTION_ARGS) </varlistentry> <varlistentry> - <term><function>picksplit</></term> + <term><function>picksplit</function></term> <listitem> <para> When an index page split is necessary, this function decides which @@ -638,7 +638,7 @@ my_penalty(PG_FUNCTION_ARGS) </para> <para> - The <acronym>SQL</> declaration of the function must look like this: + The <acronym>SQL</acronym> declaration of the function must look like this: <programlisting> CREATE OR REPLACE FUNCTION my_picksplit(internal, internal) @@ -725,33 +725,33 @@ my_picksplit(PG_FUNCTION_ARGS) } </programlisting> - Notice that the <function>picksplit</> function's result is delivered - by modifying the passed-in <structname>v</> structure. The return + Notice that the <function>picksplit</function> function's result is delivered + by modifying the passed-in <structname>v</structname> structure. The return value per se is ignored, though it's conventional to pass back the - address of <structname>v</>. + address of <structname>v</structname>. </para> <para> - Like <function>penalty</>, the <function>picksplit</> function + Like <function>penalty</function>, the <function>picksplit</function> function is crucial to good performance of the index. 
Designing suitable - <function>penalty</> and <function>picksplit</> implementations + <function>penalty</function> and <function>picksplit</function> implementations is where the challenge of implementing well-performing - <acronym>GiST</> indexes lies. + <acronym>GiST</acronym> indexes lies. </para> </listitem> </varlistentry> <varlistentry> - <term><function>same</></term> + <term><function>same</function></term> <listitem> <para> Returns true if two index entries are identical, false otherwise. - (An <quote>index entry</> is a value of the index's storage type, + (An <quote>index entry</quote> is a value of the index's storage type, not necessarily the original indexed column's type.) </para> <para> - The <acronym>SQL</> declaration of the function must look like this: + The <acronym>SQL</acronym> declaration of the function must look like this: <programlisting> CREATE OR REPLACE FUNCTION my_same(storage_type, storage_type, internal) @@ -777,7 +777,7 @@ my_same(PG_FUNCTION_ARGS) } </programlisting> - For historical reasons, the <function>same</> function doesn't + For historical reasons, the <function>same</function> function doesn't just return a Boolean result; instead it has to store the flag at the location indicated by the third argument. The return value per se is ignored, though it's conventional to pass back the @@ -787,15 +787,15 @@ my_same(PG_FUNCTION_ARGS) </varlistentry> <varlistentry> - <term><function>distance</></term> + <term><function>distance</function></term> <listitem> <para> - Given an index entry <literal>p</> and a query value <literal>q</>, + Given an index entry <literal>p</literal> and a query value <literal>q</literal>, this function determines the index entry's - <quote>distance</> from the query value. This function must be + <quote>distance</quote> from the query value. This function must be supplied if the operator class contains any ordering operators. 
A query using the ordering operator will be implemented by returning - index entries with the smallest <quote>distance</> values first, + index entries with the smallest <quote>distance</quote> values first, so the results must be consistent with the operator's semantics. For a leaf index entry the result just represents the distance to the index entry; for an internal tree node, the result must be the @@ -803,7 +803,7 @@ my_same(PG_FUNCTION_ARGS) </para> <para> - The <acronym>SQL</> declaration of the function must look like this: + The <acronym>SQL</acronym> declaration of the function must look like this: <programlisting> CREATE OR REPLACE FUNCTION my_distance(internal, data_type, smallint, oid, internal) @@ -836,8 +836,8 @@ my_distance(PG_FUNCTION_ARGS) } </programlisting> - The arguments to the <function>distance</> function are identical to - the arguments of the <function>consistent</> function. + The arguments to the <function>distance</function> function are identical to + the arguments of the <function>consistent</function> function. </para> <para> @@ -847,31 +847,31 @@ my_distance(PG_FUNCTION_ARGS) geometric applications. For an internal tree node, the distance returned must not be greater than the distance to any of the child nodes. If the returned distance is not exact, the function must set - <literal>*recheck</> to true. (This is not necessary for internal tree + <literal>*recheck</literal> to true. (This is not necessary for internal tree nodes; for them, the calculation is always assumed to be inexact.) In this case the executor will calculate the accurate distance after fetching the tuple from the heap, and reorder the tuples if necessary. 
</para> <para> - If the distance function returns <literal>*recheck = true</> for any + If the distance function returns <literal>*recheck = true</literal> for any leaf node, the original ordering operator's return type must - be <type>float8</> or <type>float4</>, and the distance function's + be <type>float8</type> or <type>float4</type>, and the distance function's result values must be comparable to those of the original ordering operator, since the executor will sort using both distance function results and recalculated ordering-operator results. Otherwise, the - distance function's result values can be any finite <type>float8</> + distance function's result values can be any finite <type>float8</type> values, so long as the relative order of the result values matches the order returned by the ordering operator. (Infinity and minus infinity are used internally to handle cases such as nulls, so it is not - recommended that <function>distance</> functions return these values.) + recommended that <function>distance</function> functions return these values.) </para> </listitem> </varlistentry> <varlistentry> - <term><function>fetch</></term> + <term><function>fetch</function></term> <listitem> <para> Converts the compressed index representation of a data item into the @@ -880,7 +880,7 @@ my_distance(PG_FUNCTION_ARGS) </para> <para> - The <acronym>SQL</> declaration of the function must look like this: + The <acronym>SQL</acronym> declaration of the function must look like this: <programlisting> CREATE OR REPLACE FUNCTION my_fetch(internal) @@ -889,14 +889,14 @@ AS 'MODULE_PATHNAME' LANGUAGE C STRICT; </programlisting> - The argument is a pointer to a <structname>GISTENTRY</> struct. On - entry, its <structfield>key</> field contains a non-NULL leaf datum in - compressed form. 
The return value is another <structname>GISTENTRY</> - struct, whose <structfield>key</> field contains the same datum in its + The argument is a pointer to a <structname>GISTENTRY</structname> struct. On + entry, its <structfield>key</structfield> field contains a non-NULL leaf datum in + compressed form. The return value is another <structname>GISTENTRY</structname> + struct, whose <structfield>key</structfield> field contains the same datum in its original, uncompressed form. If the opclass's compress function does - nothing for leaf entries, the <function>fetch</> method can return the + nothing for leaf entries, the <function>fetch</function> method can return the argument as-is. Or, if the opclass does not have a compress function, - the <function>fetch</> method can be omitted as well, since it would + the <function>fetch</function> method can be omitted as well, since it would necessarily be a no-op. </para> @@ -933,7 +933,7 @@ my_fetch(PG_FUNCTION_ARGS) <para> If the compress method is lossy for leaf entries, the operator class cannot support index-only scans, and must not define - a <function>fetch</> function. + a <function>fetch</function> function. </para> </listitem> @@ -942,15 +942,15 @@ my_fetch(PG_FUNCTION_ARGS) <para> All the GiST support methods are normally called in short-lived memory - contexts; that is, <varname>CurrentMemoryContext</> will get reset after + contexts; that is, <varname>CurrentMemoryContext</varname> will get reset after each tuple is processed. It is therefore not very important to worry about pfree'ing everything you palloc. However, in some cases it's useful for a support method to cache data across repeated calls. To do that, allocate - the longer-lived data in <literal>fcinfo->flinfo->fn_mcxt</>, and - keep a pointer to it in <literal>fcinfo->flinfo->fn_extra</>. Such + the longer-lived data in <literal>fcinfo->flinfo->fn_mcxt</literal>, and + keep a pointer to it in <literal>fcinfo->flinfo->fn_extra</literal>. 
Such data will survive for the life of the index operation (e.g., a single GiST index scan, index build, or index tuple insertion). Be careful to pfree - the previous value when replacing a <literal>fn_extra</> value, or the leak + the previous value when replacing a <literal>fn_extra</literal> value, or the leak will accumulate for the duration of the operation. </para> @@ -974,7 +974,7 @@ my_fetch(PG_FUNCTION_ARGS) </para> <para> - However, buffering index build needs to call the <function>penalty</> + However, buffering index build needs to call the <function>penalty</function> function more often, which consumes some extra CPU resources. Also, the buffers used in the buffering build need temporary disk space, up to the size of the resulting index. Buffering can also influence the quality @@ -1002,57 +1002,57 @@ my_fetch(PG_FUNCTION_ARGS) The <productname>PostgreSQL</productname> source distribution includes several examples of index methods implemented using <acronym>GiST</acronym>. The core system currently provides text search - support (indexing for <type>tsvector</> and <type>tsquery</>) as well as + support (indexing for <type>tsvector</type> and <type>tsquery</type>) as well as R-Tree equivalent functionality for some of the built-in geometric data types - (see <filename>src/backend/access/gist/gistproc.c</>). The following - <filename>contrib</> modules also contain <acronym>GiST</acronym> + (see <filename>src/backend/access/gist/gistproc.c</filename>). 
The following + <filename>contrib</filename> modules also contain <acronym>GiST</acronym> operator classes: <variablelist> <varlistentry> - <term><filename>btree_gist</></term> + <term><filename>btree_gist</filename></term> <listitem> <para>B-tree equivalent functionality for several data types</para> </listitem> </varlistentry> <varlistentry> - <term><filename>cube</></term> + <term><filename>cube</filename></term> <listitem> <para>Indexing for multidimensional cubes</para> </listitem> </varlistentry> <varlistentry> - <term><filename>hstore</></term> + <term><filename>hstore</filename></term> <listitem> <para>Module for storing (key, value) pairs</para> </listitem> </varlistentry> <varlistentry> - <term><filename>intarray</></term> + <term><filename>intarray</filename></term> <listitem> <para>RD-Tree for one-dimensional array of int4 values</para> </listitem> </varlistentry> <varlistentry> - <term><filename>ltree</></term> + <term><filename>ltree</filename></term> <listitem> <para>Indexing for tree-like structures</para> </listitem> </varlistentry> <varlistentry> - <term><filename>pg_trgm</></term> + <term><filename>pg_trgm</filename></term> <listitem> <para>Text similarity using trigram matching</para> </listitem> </varlistentry> <varlistentry> - <term><filename>seg</></term> + <term><filename>seg</filename></term> <listitem> <para>Indexing for <quote>float ranges</quote></para> </listitem> diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml index 6c54fbd40d8..086d6abb302 100644 --- a/doc/src/sgml/high-availability.sgml +++ b/doc/src/sgml/high-availability.sgml @@ -3,12 +3,12 @@ <chapter id="high-availability"> <title>High Availability, Load Balancing, and Replication</title> - <indexterm><primary>high availability</></> - <indexterm><primary>failover</></> - <indexterm><primary>replication</></> - <indexterm><primary>load balancing</></> - <indexterm><primary>clustering</></> - <indexterm><primary>data partitioning</></> + 
<indexterm><primary>high availability</primary></indexterm> + <indexterm><primary>failover</primary></indexterm> + <indexterm><primary>replication</primary></indexterm> + <indexterm><primary>load balancing</primary></indexterm> + <indexterm><primary>clustering</primary></indexterm> + <indexterm><primary>data partitioning</primary></indexterm> <para> Database servers can work together to allow a second server to @@ -38,12 +38,12 @@ <para> Some solutions deal with synchronization by allowing only one server to modify the data. Servers that can modify data are - called read/write, <firstterm>master</> or <firstterm>primary</> servers. - Servers that track changes in the master are called <firstterm>standby</> - or <firstterm>secondary</> servers. A standby server that cannot be connected + called read/write, <firstterm>master</firstterm> or <firstterm>primary</firstterm> servers. + Servers that track changes in the master are called <firstterm>standby</firstterm> + or <firstterm>secondary</firstterm> servers. A standby server that cannot be connected to until it is promoted to a master server is called a <firstterm>warm - standby</> server, and one that can accept connections and serves read-only - queries is called a <firstterm>hot standby</> server. + standby</firstterm> server, and one that can accept connections and serves read-only + queries is called a <firstterm>hot standby</firstterm> server. </para> <para> @@ -99,7 +99,7 @@ <para> Shared hardware functionality is common in network storage devices. Using a network file system is also possible, though care must be - taken that the file system has full <acronym>POSIX</> behavior (see <xref + taken that the file system has full <acronym>POSIX</acronym> behavior (see <xref linkend="creating-cluster-nfs">). One significant limitation of this method is that if the shared disk array fails or becomes corrupt, the primary and standby servers are both nonfunctional. 
Another issue is @@ -121,7 +121,7 @@ the mirroring must be done in a way that ensures the standby server has a consistent copy of the file system — specifically, writes to the standby must be done in the same order as those on the master. - <productname>DRBD</> is a popular file system replication solution + <productname>DRBD</productname> is a popular file system replication solution for Linux. </para> @@ -143,7 +143,7 @@ protocol to make nodes agree on a serializable transactional order. <para> Warm and hot standby servers can be kept current by reading a - stream of write-ahead log (<acronym>WAL</>) + stream of write-ahead log (<acronym>WAL</acronym>) records. If the main server fails, the standby contains almost all of the data of the main server, and can be quickly made the new master database server. This can be synchronous or @@ -189,7 +189,7 @@ protocol to make nodes agree on a serializable transactional order. </para> <para> - <productname>Slony-I</> is an example of this type of replication, with per-table + <productname>Slony-I</productname> is an example of this type of replication, with per-table granularity, and support for multiple standby servers. Because it updates the standby server asynchronously (in batches), there is possible data loss during fail over. @@ -212,7 +212,7 @@ protocol to make nodes agree on a serializable transactional order. <para> If queries are simply broadcast unmodified, functions like - <function>random()</>, <function>CURRENT_TIMESTAMP</>, and + <function>random()</function>, <function>CURRENT_TIMESTAMP</function>, and sequences can have different values on different servers. This is because each server operates independently, and because SQL queries are broadcast (and not actual modified rows). If @@ -226,7 +226,7 @@ protocol to make nodes agree on a serializable transactional order. 
transactions either commit or abort on all servers, perhaps using two-phase commit (<xref linkend="sql-prepare-transaction"> and <xref linkend="sql-commit-prepared">). - <productname>Pgpool-II</> and <productname>Continuent Tungsten</> + <productname>Pgpool-II</productname> and <productname>Continuent Tungsten</productname> are examples of this type of replication. </para> </listitem> @@ -266,12 +266,12 @@ protocol to make nodes agree on a serializable transactional order. there is no need to partition workloads between master and standby servers, and because the data changes are sent from one server to another, there is no problem with non-deterministic - functions like <function>random()</>. + functions like <function>random()</function>. </para> <para> - <productname>PostgreSQL</> does not offer this type of replication, - though <productname>PostgreSQL</> two-phase commit (<xref + <productname>PostgreSQL</productname> does not offer this type of replication, + though <productname>PostgreSQL</productname> two-phase commit (<xref linkend="sql-prepare-transaction"> and <xref linkend="sql-commit-prepared">) can be used to implement this in application code or middleware. @@ -284,8 +284,8 @@ protocol to make nodes agree on a serializable transactional order. <listitem> <para> - Because <productname>PostgreSQL</> is open source and easily - extended, a number of companies have taken <productname>PostgreSQL</> + Because <productname>PostgreSQL</productname> is open source and easily + extended, a number of companies have taken <productname>PostgreSQL</productname> and created commercial closed-source solutions with unique failover, replication, and load balancing capabilities. </para> @@ -475,9 +475,9 @@ protocol to make nodes agree on a serializable transactional order. concurrently on a single query. 
It is usually accomplished by splitting the data among servers and having each server execute its part of the query and return results to a central server where they - are combined and returned to the user. <productname>Pgpool-II</> + are combined and returned to the user. <productname>Pgpool-II</productname> has this capability. Also, this can be implemented using the - <productname>PL/Proxy</> tool set. + <productname>PL/Proxy</productname> tool set. </para> </listitem> @@ -494,10 +494,10 @@ protocol to make nodes agree on a serializable transactional order. <para> Continuous archiving can be used to create a <firstterm>high - availability</> (HA) cluster configuration with one or more - <firstterm>standby servers</> ready to take over operations if the + availability</firstterm> (HA) cluster configuration with one or more + <firstterm>standby servers</firstterm> ready to take over operations if the primary server fails. This capability is widely referred to as - <firstterm>warm standby</> or <firstterm>log shipping</>. + <firstterm>warm standby</firstterm> or <firstterm>log shipping</firstterm>. </para> <para> @@ -513,7 +513,7 @@ protocol to make nodes agree on a serializable transactional order. <para> Directly moving WAL records from one database server to another - is typically described as log shipping. <productname>PostgreSQL</> + is typically described as log shipping. <productname>PostgreSQL</productname> implements file-based log shipping by transferring WAL records one file (WAL segment) at a time. WAL files (16MB) can be shipped easily and cheaply over any distance, whether it be to an @@ -597,7 +597,7 @@ protocol to make nodes agree on a serializable transactional order. <para> In general, log shipping between servers running different major - <productname>PostgreSQL</> release + <productname>PostgreSQL</productname> release levels is not possible. 
It is the policy of the PostgreSQL Global Development Group not to make changes to disk formats during minor release upgrades, so it is likely that running different minor release levels @@ -621,32 +621,32 @@ protocol to make nodes agree on a serializable transactional order. (see <xref linkend="restore-command">) or directly from the master over a TCP connection (streaming replication). The standby server will also attempt to restore any WAL found in the standby cluster's - <filename>pg_wal</> directory. That typically happens after a server + <filename>pg_wal</filename> directory. That typically happens after a server restart, when the standby replays again WAL that was streamed from the master before the restart, but you can also manually copy files to - <filename>pg_wal</> at any time to have them replayed. + <filename>pg_wal</filename> at any time to have them replayed. </para> <para> At startup, the standby begins by restoring all WAL available in the - archive location, calling <varname>restore_command</>. Once it - reaches the end of WAL available there and <varname>restore_command</> - fails, it tries to restore any WAL available in the <filename>pg_wal</> directory. + archive location, calling <varname>restore_command</varname>. Once it + reaches the end of WAL available there and <varname>restore_command</varname> + fails, it tries to restore any WAL available in the <filename>pg_wal</filename> directory. If that fails, and streaming replication has been configured, the standby tries to connect to the primary server and start streaming WAL - from the last valid record found in archive or <filename>pg_wal</>. If that fails + from the last valid record found in archive or <filename>pg_wal</filename>. If that fails or streaming replication is not configured, or if the connection is later disconnected, the standby goes back to step 1 and tries to restore the file from the archive again. 
This loop of retries from the - archive, <filename>pg_wal</>, and via streaming replication goes on until the server + archive, <filename>pg_wal</filename>, and via streaming replication goes on until the server is stopped or failover is triggered by a trigger file. </para> <para> Standby mode is exited and the server switches to normal operation - when <command>pg_ctl promote</> is run or a trigger file is found - (<varname>trigger_file</>). Before failover, - any WAL immediately available in the archive or in <filename>pg_wal</> will be + when <command>pg_ctl promote</command> is run or a trigger file is found + (<varname>trigger_file</varname>). Before failover, + any WAL immediately available in the archive or in <filename>pg_wal</filename> will be restored, but no attempt is made to connect to the master. </para> </sect2> @@ -667,8 +667,8 @@ protocol to make nodes agree on a serializable transactional order. If you want to use streaming replication, set up authentication on the primary server to allow replication connections from the standby server(s); that is, create a role and provide a suitable entry or - entries in <filename>pg_hba.conf</> with the database field set to - <literal>replication</>. Also ensure <varname>max_wal_senders</> is set + entries in <filename>pg_hba.conf</filename> with the database field set to + <literal>replication</literal>. Also ensure <varname>max_wal_senders</varname> is set to a sufficiently large value in the configuration file of the primary server. If replication slots will be used, ensure that <varname>max_replication_slots</varname> is set sufficiently @@ -687,19 +687,19 @@ protocol to make nodes agree on a serializable transactional order. <para> To set up the standby server, restore the base backup taken from primary server (see <xref linkend="backup-pitr-recovery">). Create a recovery - command file <filename>recovery.conf</> in the standby's cluster data - directory, and turn on <varname>standby_mode</>. 
Set - <varname>restore_command</> to a simple command to copy files from + command file <filename>recovery.conf</filename> in the standby's cluster data + directory, and turn on <varname>standby_mode</varname>. Set + <varname>restore_command</varname> to a simple command to copy files from the WAL archive. If you plan to have multiple standby servers for high - availability purposes, set <varname>recovery_target_timeline</> to - <literal>latest</>, to make the standby server follow the timeline change + availability purposes, set <varname>recovery_target_timeline</varname> to + <literal>latest</literal>, to make the standby server follow the timeline change that occurs at failover to another standby. </para> <note> <para> Do not use pg_standby or similar tools with the built-in standby mode - described here. <varname>restore_command</> should return immediately + described here. <varname>restore_command</varname> should return immediately if the file does not exist; the server will retry the command again if necessary. See <xref linkend="log-shipping-alternative"> for using tools like pg_standby. @@ -708,11 +708,11 @@ protocol to make nodes agree on a serializable transactional order. <para> If you want to use streaming replication, fill in - <varname>primary_conninfo</> with a libpq connection string, including + <varname>primary_conninfo</varname> with a libpq connection string, including the host name (or IP address) and any additional details needed to connect to the primary server. If the primary needs a password for authentication, the password needs to be specified in - <varname>primary_conninfo</> as well. + <varname>primary_conninfo</varname> as well. </para> <para> @@ -726,8 +726,8 @@ protocol to make nodes agree on a serializable transactional order. If you're using a WAL archive, its size can be minimized using the <xref linkend="archive-cleanup-command"> parameter to remove files that are no longer required by the standby server. 
- The <application>pg_archivecleanup</> utility is designed specifically to - be used with <varname>archive_cleanup_command</> in typical single-standby + The <application>pg_archivecleanup</application> utility is designed specifically to + be used with <varname>archive_cleanup_command</varname> in typical single-standby configurations, see <xref linkend="pgarchivecleanup">. Note however, that if you're using the archive for backup purposes, you need to retain files needed to recover from at least the latest base @@ -735,7 +735,7 @@ protocol to make nodes agree on a serializable transactional order. </para> <para> - A simple example of a <filename>recovery.conf</> is: + A simple example of a <filename>recovery.conf</filename> is: <programlisting> standby_mode = 'on' primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' @@ -746,7 +746,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' <para> You can have any number of standby servers, but if you use streaming - replication, make sure you set <varname>max_wal_senders</> high enough in + replication, make sure you set <varname>max_wal_senders</varname> high enough in the primary to allow them to be connected simultaneously. </para> @@ -773,7 +773,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' changes becoming visible in the standby. This delay is however much smaller than with file-based log shipping, typically under one second assuming the standby is powerful enough to keep up with the load. With - streaming replication, <varname>archive_timeout</> is not required to + streaming replication, <varname>archive_timeout</varname> is not required to reduce the data loss window. </para> @@ -782,7 +782,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' archiving, the server might recycle old WAL segments before the standby has received them. If this occurs, the standby will need to be reinitialized from a new base backup. 
You can avoid this by setting - <varname>wal_keep_segments</> to a value large enough to ensure that + <varname>wal_keep_segments</varname> to a value large enough to ensure that WAL segments are not recycled too early, or by configuring a replication slot for the standby. If you set up a WAL archive that's accessible from the standby, these solutions are not required, since the standby can @@ -793,11 +793,11 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' To use streaming replication, set up a file-based log-shipping standby server as described in <xref linkend="warm-standby">. The step that turns a file-based log-shipping standby into streaming replication - standby is setting <varname>primary_conninfo</> setting in the - <filename>recovery.conf</> file to point to the primary server. Set + standby is setting <varname>primary_conninfo</varname> setting in the + <filename>recovery.conf</filename> file to point to the primary server. Set <xref linkend="guc-listen-addresses"> and authentication options - (see <filename>pg_hba.conf</>) on the primary so that the standby server - can connect to the <literal>replication</> pseudo-database on the primary + (see <filename>pg_hba.conf</filename>) on the primary so that the standby server + can connect to the <literal>replication</literal> pseudo-database on the primary server (see <xref linkend="streaming-replication-authentication">). </para> @@ -815,7 +815,7 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' </para> <para> - When the standby is started and <varname>primary_conninfo</> is set + When the standby is started and <varname>primary_conninfo</varname> is set correctly, the standby will connect to the primary after replaying all WAL files available in the archive. 
If the connection is established successfully, you will see a walreceiver process in the standby, and @@ -829,20 +829,20 @@ archive_cleanup_command = 'pg_archivecleanup /path/to/archive %r' so that only trusted users can read the WAL stream, because it is easy to extract privileged information from it. Standby servers must authenticate to the primary as a superuser or an account that has the - <literal>REPLICATION</> privilege. It is recommended to create a - dedicated user account with <literal>REPLICATION</> and <literal>LOGIN</> - privileges for replication. While <literal>REPLICATION</> privilege gives + <literal>REPLICATION</literal> privilege. It is recommended to create a + dedicated user account with <literal>REPLICATION</literal> and <literal>LOGIN</literal> + privileges for replication. While <literal>REPLICATION</literal> privilege gives very high permissions, it does not allow the user to modify any data on - the primary system, which the <literal>SUPERUSER</> privilege does. + the primary system, which the <literal>SUPERUSER</literal> privilege does. </para> <para> Client authentication for replication is controlled by a - <filename>pg_hba.conf</> record specifying <literal>replication</> in the - <replaceable>database</> field. For example, if the standby is running on - host IP <literal>192.168.1.100</> and the account name for replication - is <literal>foo</>, the administrator can add the following line to the - <filename>pg_hba.conf</> file on the primary: + <filename>pg_hba.conf</filename> record specifying <literal>replication</literal> in the + <replaceable>database</replaceable> field. 
For example, if the standby is running on + host IP <literal>192.168.1.100</literal> and the account name for replication + is <literal>foo</literal>, the administrator can add the following line to the + <filename>pg_hba.conf</filename> file on the primary: <programlisting> # Allow the user "foo" from host 192.168.1.100 to connect to the primary @@ -854,14 +854,14 @@ host replication foo 192.168.1.100/32 md5 </para> <para> The host name and port number of the primary, connection user name, - and password are specified in the <filename>recovery.conf</> file. - The password can also be set in the <filename>~/.pgpass</> file on the - standby (specify <literal>replication</> in the <replaceable>database</> + and password are specified in the <filename>recovery.conf</filename> file. + The password can also be set in the <filename>~/.pgpass</filename> file on the + standby (specify <literal>replication</literal> in the <replaceable>database</replaceable> field). - For example, if the primary is running on host IP <literal>192.168.1.50</>, + For example, if the primary is running on host IP <literal>192.168.1.50</literal>, port <literal>5432</literal>, the account name for replication is - <literal>foo</>, and the password is <literal>foopass</>, the administrator - can add the following line to the <filename>recovery.conf</> file on the + <literal>foo</literal>, and the password is <literal>foopass</literal>, the administrator + can add the following line to the <filename>recovery.conf</filename> file on the standby: <programlisting> @@ -880,22 +880,22 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' standby. You can calculate this lag by comparing the current WAL write location on the primary with the last WAL location received by the standby. 
These locations can be retrieved using - <function>pg_current_wal_lsn</> on the primary and - <function>pg_last_wal_receive_lsn</> on the standby, + <function>pg_current_wal_lsn</function> on the primary and + <function>pg_last_wal_receive_lsn</function> on the standby, respectively (see <xref linkend="functions-admin-backup-table"> and <xref linkend="functions-recovery-info-table"> for details). The last WAL receive location in the standby is also displayed in the process status of the WAL receiver process, displayed using the - <command>ps</> command (see <xref linkend="monitoring-ps"> for details). + <command>ps</command> command (see <xref linkend="monitoring-ps"> for details). </para> <para> You can retrieve a list of WAL sender processes via the <link linkend="monitoring-stats-views-table"> - <literal>pg_stat_replication</></link> view. Large differences between - <function>pg_current_wal_lsn</> and the view's <literal>sent_lsn</> field + <literal>pg_stat_replication</literal></link> view. Large differences between + <function>pg_current_wal_lsn</function> and the view's <literal>sent_lsn</literal> field might indicate that the master server is under heavy load, while - differences between <literal>sent_lsn</> and - <function>pg_last_wal_receive_lsn</> on the standby might indicate + differences between <literal>sent_lsn</literal> and + <function>pg_last_wal_receive_lsn</function> on the standby might indicate network delay, or that the standby is under heavy load. </para> </sect3> @@ -911,7 +911,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' Replication slots provide an automated way to ensure that the master does not remove WAL segments until they have been received by all standbys, and that the master does not remove rows which could cause a - <link linkend="hot-standby-conflict">recovery conflict</> even when the + <link linkend="hot-standby-conflict">recovery conflict</link> even when the standby is disconnected. 
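The lag calculation described here can be reproduced by hand: a `pg_lsn` value such as `16/B374D848` is two hexadecimal numbers giving the high and low 32 bits of a 64-bit WAL position, so byte lag is simply the difference of the decoded positions. A minimal Python sketch (the function names are illustrative, not part of PostgreSQL):

```python
def lsn_to_bytes(lsn: str) -> int:
    """Decode a pg_lsn text value such as '16/B374D848' into an
    absolute byte position: high 32 bits '/' low 32 bits, both hex."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def replication_lag_bytes(primary_lsn: str, standby_lsn: str) -> int:
    """Byte distance between the primary's current WAL write position
    (pg_current_wal_lsn) and the standby's last received position
    (pg_last_wal_receive_lsn)."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(standby_lsn)

# Illustrative values, as the two functions might return them:
lag = replication_lag_bytes("16/B374D848", "16/B3749718")
```

The same subtraction is what a difference between `pg_current_wal_lsn` and the view's `sent_lsn`, or between `sent_lsn` and `pg_last_wal_receive_lsn`, represents in bytes.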
</para> <para> @@ -922,7 +922,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass' However, these methods often result in retaining more WAL segments than required, whereas replication slots retain only the number of segments known to be needed. An advantage of these methods is that they bound - the space requirement for <literal>pg_wal</>; there is currently no way + the space requirement for <literal>pg_wal</literal>; there is currently no way to do this using replication slots. </para> <para> @@ -966,8 +966,8 @@ postgres=# SELECT * FROM pg_replication_slots; node_a_slot | physical | | | f | | | (1 row) </programlisting> - To configure the standby to use this slot, <varname>primary_slot_name</> - should be configured in the standby's <filename>recovery.conf</>. + To configure the standby to use this slot, <varname>primary_slot_name</varname> + should be configured in the standby's <filename>recovery.conf</filename>. Here is a simple example: <programlisting> standby_mode = 'on' @@ -1022,7 +1022,7 @@ primary_slot_name = 'node_a_slot' <para> If an upstream standby server is promoted to become new master, downstream servers will continue to stream from the new master if - <varname>recovery_target_timeline</> is set to <literal>'latest'</>. + <varname>recovery_target_timeline</varname> is set to <literal>'latest'</literal>. </para> <para> @@ -1031,7 +1031,7 @@ primary_slot_name = 'node_a_slot' <xref linkend="guc-max-wal-senders"> and <xref linkend="guc-hot-standby">, and configure <link linkend="auth-pg-hba-conf">host-based authentication</link>). - You will also need to set <varname>primary_conninfo</> in the downstream + You will also need to set <varname>primary_conninfo</varname> in the downstream standby to point to the cascading standby. 
</para> </sect2> @@ -1044,7 +1044,7 @@ primary_slot_name = 'node_a_slot' </indexterm> <para> - <productname>PostgreSQL</> streaming replication is asynchronous by + <productname>PostgreSQL</productname> streaming replication is asynchronous by default. If the primary server crashes then some transactions that were committed may not have been replicated to the standby server, causing data loss. The amount @@ -1058,8 +1058,8 @@ primary_slot_name = 'node_a_slot' standby servers. This extends that standard level of durability offered by a transaction commit. This level of protection is referred to as 2-safe replication in computer science theory, and group-1-safe - (group-safe and 1-safe) when <varname>synchronous_commit</> is set to - <literal>remote_write</>. + (group-safe and 1-safe) when <varname>synchronous_commit</varname> is set to + <literal>remote_write</literal>. </para> <para> @@ -1104,14 +1104,14 @@ primary_slot_name = 'node_a_slot' Once streaming replication has been configured, configuring synchronous replication requires only one additional configuration step: <xref linkend="guc-synchronous-standby-names"> must be set to - a non-empty value. <varname>synchronous_commit</> must also be set to - <literal>on</>, but since this is the default value, typically no change is + a non-empty value. <varname>synchronous_commit</varname> must also be set to + <literal>on</literal>, but since this is the default value, typically no change is required. (See <xref linkend="runtime-config-wal-settings"> and <xref linkend="runtime-config-replication-master">.) This configuration will cause each commit to wait for confirmation that the standby has written the commit record to durable storage. 
- <varname>synchronous_commit</> can be set by individual + <varname>synchronous_commit</varname> can be set by individual users, so it can be configured in the configuration file, for particular users or databases, or dynamically by applications, in order to control the durability guarantee on a per-transaction basis. @@ -1121,12 +1121,12 @@ primary_slot_name = 'node_a_slot' After a commit record has been written to disk on the primary, the WAL record is then sent to the standby. The standby sends reply messages each time a new batch of WAL data is written to disk, unless - <varname>wal_receiver_status_interval</> is set to zero on the standby. - In the case that <varname>synchronous_commit</> is set to - <literal>remote_apply</>, the standby sends reply messages when the commit + <varname>wal_receiver_status_interval</varname> is set to zero on the standby. + In the case that <varname>synchronous_commit</varname> is set to + <literal>remote_apply</literal>, the standby sends reply messages when the commit record is replayed, making the transaction visible. If the standby is chosen as a synchronous standby, according to the setting - of <varname>synchronous_standby_names</> on the primary, the reply + of <varname>synchronous_standby_names</varname> on the primary, the reply messages from that standby will be considered along with those from other synchronous standbys to decide when to release transactions waiting for confirmation that the commit record has been received. These parameters @@ -1138,13 +1138,13 @@ primary_slot_name = 'node_a_slot' </para> <para> - Setting <varname>synchronous_commit</> to <literal>remote_write</> will + Setting <varname>synchronous_commit</varname> to <literal>remote_write</literal> will cause each commit to wait for confirmation that the standby has received the commit record and written it out to its own operating system, but not for the data to be flushed to disk on the standby. 
This - setting provides a weaker guarantee of durability than <literal>on</> + setting provides a weaker guarantee of durability than <literal>on</literal> does: the standby could lose the data in the event of an operating system - crash, though not a <productname>PostgreSQL</> crash. + crash, though not a <productname>PostgreSQL</productname> crash. However, it's a useful setting in practice because it can decrease the response time for the transaction. Data loss could only occur if both the primary and the standby crash and @@ -1152,7 +1152,7 @@ primary_slot_name = 'node_a_slot' </para> <para> - Setting <varname>synchronous_commit</> to <literal>remote_apply</> will + Setting <varname>synchronous_commit</varname> to <literal>remote_apply</literal> will cause each commit to wait until the current synchronous standbys report that they have replayed the transaction, making it visible to user queries. In simple cases, this allows for load balancing with causal @@ -1176,12 +1176,12 @@ primary_slot_name = 'node_a_slot' transactions will wait until all the standby servers which are considered as synchronous confirm receipt of their data. The number of synchronous standbys that transactions must wait for replies from is specified in - <varname>synchronous_standby_names</>. This parameter also specifies - a list of standby names and the method (<literal>FIRST</> and - <literal>ANY</>) to choose synchronous standbys from the listed ones. + <varname>synchronous_standby_names</varname>. This parameter also specifies + a list of standby names and the method (<literal>FIRST</literal> and + <literal>ANY</literal>) to choose synchronous standbys from the listed ones. 
</para> <para> - The method <literal>FIRST</> specifies a priority-based synchronous + The method <literal>FIRST</literal> specifies a priority-based synchronous replication and makes transaction commits wait until their WAL records are replicated to the requested number of synchronous standbys chosen based on their priorities. The standbys whose names appear earlier in the list are @@ -1192,36 +1192,36 @@ primary_slot_name = 'node_a_slot' next-highest-priority standby. </para> <para> - An example of <varname>synchronous_standby_names</> for + An example of <varname>synchronous_standby_names</varname> for a priority-based multiple synchronous standbys is: <programlisting> synchronous_standby_names = 'FIRST 2 (s1, s2, s3)' </programlisting> - In this example, if four standby servers <literal>s1</>, <literal>s2</>, - <literal>s3</> and <literal>s4</> are running, the two standbys - <literal>s1</> and <literal>s2</> will be chosen as synchronous standbys + In this example, if four standby servers <literal>s1</literal>, <literal>s2</literal>, + <literal>s3</literal> and <literal>s4</literal> are running, the two standbys + <literal>s1</literal> and <literal>s2</literal> will be chosen as synchronous standbys because their names appear early in the list of standby names. - <literal>s3</> is a potential synchronous standby and will take over - the role of synchronous standby when either of <literal>s1</> or - <literal>s2</> fails. <literal>s4</> is an asynchronous standby since + <literal>s3</literal> is a potential synchronous standby and will take over + the role of synchronous standby when either of <literal>s1</literal> or + <literal>s2</literal> fails. <literal>s4</literal> is an asynchronous standby since its name is not in the list. 
</para> <para> - The method <literal>ANY</> specifies a quorum-based synchronous + The method <literal>ANY</literal> specifies a quorum-based synchronous replication and makes transaction commits wait until their WAL records - are replicated to <emphasis>at least</> the requested number of + are replicated to <emphasis>at least</emphasis> the requested number of synchronous standbys in the list. </para> <para> - An example of <varname>synchronous_standby_names</> for + An example of <varname>synchronous_standby_names</varname> for a quorum-based multiple synchronous standbys is: <programlisting> synchronous_standby_names = 'ANY 2 (s1, s2, s3)' </programlisting> - In this example, if four standby servers <literal>s1</>, <literal>s2</>, - <literal>s3</> and <literal>s4</> are running, transaction commits will - wait for replies from at least any two standbys of <literal>s1</>, - <literal>s2</> and <literal>s3</>. <literal>s4</> is an asynchronous + In this example, if four standby servers <literal>s1</literal>, <literal>s2</literal>, + <literal>s3</literal> and <literal>s4</literal> are running, transaction commits will + wait for replies from at least any two standbys of <literal>s1</literal>, + <literal>s2</literal> and <literal>s3</literal>. <literal>s4</literal> is an asynchronous standby since its name is not in the list. </para> <para> @@ -1243,7 +1243,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' </para> <para> - <productname>PostgreSQL</> allows the application developer + <productname>PostgreSQL</productname> allows the application developer to specify the durability level required via replication. This can be specified for the system overall, though it can also be specified for specific users or connections, or even individual transactions. 
@@ -1275,10 +1275,10 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' <title>Planning for High Availability</title> <para> - <varname>synchronous_standby_names</> specifies the number and + <varname>synchronous_standby_names</varname> specifies the number and names of synchronous standbys that transaction commits made when - <varname>synchronous_commit</> is set to <literal>on</>, - <literal>remote_apply</> or <literal>remote_write</> will wait for + <varname>synchronous_commit</varname> is set to <literal>on</literal>, + <literal>remote_apply</literal> or <literal>remote_write</literal> will wait for responses from. Such transaction commits may never be completed if any one of synchronous standbys should crash. </para> @@ -1286,7 +1286,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' <para> The best solution for high availability is to ensure you keep as many synchronous standbys as requested. This can be achieved by naming multiple - potential synchronous standbys using <varname>synchronous_standby_names</>. + potential synchronous standbys using <varname>synchronous_standby_names</varname>. </para> <para> @@ -1305,14 +1305,14 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' <para> When a standby first attaches to the primary, it will not yet be properly - synchronized. This is described as <literal>catchup</> mode. Once + synchronized. This is described as <literal>catchup</literal> mode. Once the lag between standby and primary reaches zero for the first time - we move to real-time <literal>streaming</> state. + we move to real-time <literal>streaming</literal> state. The catch-up duration may be long immediately after the standby has been created. If the standby is shut down, then the catch-up period will increase according to the length of time the standby has been down. The standby is only able to become a synchronous standby - once it has reached <literal>streaming</> state. + once it has reached <literal>streaming</literal> state. 
This state can be viewed using the <structname>pg_stat_replication</structname> view. </para> @@ -1334,7 +1334,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' If you really cannot keep as many synchronous standbys as requested then you should decrease the number of synchronous standbys that transaction commits must wait for responses from - in <varname>synchronous_standby_names</> (or disable it) and + in <varname>synchronous_standby_names</varname> (or disable it) and reload the configuration file on the primary server. </para> @@ -1347,7 +1347,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' If you need to re-create a standby server while transactions are waiting, make sure that the commands pg_start_backup() and pg_stop_backup() are run in a session with - <varname>synchronous_commit</> = <literal>off</>, otherwise those + <varname>synchronous_commit</varname> = <literal>off</literal>, otherwise those requests will wait forever for the standby to appear. </para> @@ -1381,7 +1381,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' </para> <para> - If <varname>archive_mode</varname> is set to <literal>on</>, the + If <varname>archive_mode</varname> is set to <literal>on</literal>, the archiver is not enabled during recovery or standby mode. If the standby server is promoted, it will start archiving after the promotion, but will not archive any WAL it did not generate itself. To get a complete @@ -1415,7 +1415,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' If the primary server fails and the standby server becomes the new primary, and then the old primary restarts, you must have a mechanism for informing the old primary that it is no longer the primary. 
This is - sometimes known as <acronym>STONITH</> (Shoot The Other Node In The Head), which is + sometimes known as <acronym>STONITH</acronym> (Shoot The Other Node In The Head), which is necessary to avoid situations where both systems think they are the primary, which will lead to confusion and ultimately data loss. </para> @@ -1466,10 +1466,10 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' <para> To trigger failover of a log-shipping standby server, - run <command>pg_ctl promote</> or create a trigger - file with the file name and path specified by the <varname>trigger_file</> - setting in <filename>recovery.conf</>. If you're planning to use - <command>pg_ctl promote</> to fail over, <varname>trigger_file</> is + run <command>pg_ctl promote</command> or create a trigger + file with the file name and path specified by the <varname>trigger_file</varname> + setting in <filename>recovery.conf</filename>. If you're planning to use + <command>pg_ctl promote</command> to fail over, <varname>trigger_file</varname> is not required. If you're setting up the reporting servers that are only used to offload read-only queries from the primary, not for high availability purposes, you don't need to promote it. @@ -1481,9 +1481,9 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' <para> An alternative to the built-in standby mode described in the previous - sections is to use a <varname>restore_command</> that polls the archive location. + sections is to use a <varname>restore_command</varname> that polls the archive location. This was the only option available in versions 8.4 and below. In this - setup, set <varname>standby_mode</> off, because you are implementing + setup, set <varname>standby_mode</varname> off, because you are implementing the polling required for standby operation yourself. See the <xref linkend="pgstandby"> module for a reference implementation of this. 
@@ -1494,7 +1494,7 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' time, so if you use the standby server for queries (see Hot Standby), there is a delay between an action in the master and when the action becomes visible in the standby, corresponding to the time it takes - to fill up the WAL file. <varname>archive_timeout</> can be used to make that delay + to fill up the WAL file. <varname>archive_timeout</varname> can be used to make that delay shorter. Also note that you can't combine streaming replication with this method. </para> @@ -1511,25 +1511,25 @@ synchronous_standby_names = 'ANY 2 (s1, s2, s3)' <para> The magic that makes the two loosely coupled servers work together is - simply a <varname>restore_command</> used on the standby that, + simply a <varname>restore_command</varname> used on the standby that, when asked for the next WAL file, waits for it to become available from - the primary. The <varname>restore_command</> is specified in the - <filename>recovery.conf</> file on the standby server. Normal recovery + the primary. The <varname>restore_command</varname> is specified in the + <filename>recovery.conf</filename> file on the standby server. Normal recovery processing would request a file from the WAL archive, reporting failure if the file was unavailable. For standby processing it is normal for the next WAL file to be unavailable, so the standby must wait for - it to appear. For files ending in <literal>.backup</> or - <literal>.history</> there is no need to wait, and a non-zero return - code must be returned. A waiting <varname>restore_command</> can be + it to appear. For files ending in <literal>.backup</literal> or + <literal>.history</literal> there is no need to wait, and a non-zero return + code must be returned. A waiting <varname>restore_command</varname> can be written as a custom script that loops after polling for the existence of the next WAL file.
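The polling loop just described can be sketched concretely. A real `restore_command` is a shell command configured in `recovery.conf` (the <xref linkend="pgstandby"> module is the reference implementation); this Python model of its control flow is purely illustrative:

```python
import os
import shutil
import time

def waiting_restore_command(archive_dir: str, walfile: str, dest: str,
                            trigger_path: str,
                            poll_seconds: float = 1.0) -> bool:
    """Model of a waiting restore_command.

    Returns True on success (file copied to dest) and False for a
    non-zero exit: either a .backup/.history file that is absent, or
    failover was triggered, which ends recovery on the standby.
    """
    src = os.path.join(archive_dir, walfile)
    # .backup and .history files must not be waited for: report them
    # missing immediately so recovery can proceed.
    if walfile.endswith((".backup", ".history")):
        if os.path.exists(src):
            shutil.copy(src, dest)
            return True
        return False
    # Poll until the next WAL segment appears or failover is triggered.
    while not os.path.exists(src):
        if os.path.exists(trigger_path):
            return False  # break the loop; standby comes up as primary
        time.sleep(poll_seconds)
    shutil.copy(src, dest)
    return True
```

The trigger-file check inside the loop is what lets failover interrupt the wait and return a file-not-found result, ending recovery as the surrounding text requires.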
There must also be some way to trigger failover, which - should interrupt the <varname>restore_command</>, break the loop and + should interrupt the <varname>restore_command</varname>, break the loop and return a file-not-found error to the standby server. This ends recovery and the standby will then come up as a normal server. </para> <para> - Pseudocode for a suitable <varname>restore_command</> is: + Pseudocode for a suitable <varname>restore_command</varname> is: <programlisting> triggered = false; while (!NextWALFileReady() && !triggered) @@ -1544,7 +1544,7 @@ if (!triggered) </para> <para> - A working example of a waiting <varname>restore_command</> is provided + A working example of a waiting <varname>restore_command</varname> is provided in the <xref linkend="pgstandby"> module. It should be used as a reference on how to correctly implement the logic described above. It can also be extended as needed to support specific @@ -1553,14 +1553,14 @@ if (!triggered) <para> The method for triggering failover is an important part of planning - and design. One potential option is the <varname>restore_command</> + and design. One potential option is the <varname>restore_command</varname> command. It is executed once for each WAL file, but the process - running the <varname>restore_command</> is created and dies for + running the <varname>restore_command</varname> is created and dies for each file, so there is no daemon or server process, and signals or a signal handler cannot be used. Therefore, the - <varname>restore_command</> is not suitable to trigger failover. + <varname>restore_command</varname> is not suitable to trigger failover. It is possible to use a simple timeout facility, especially if - used in conjunction with a known <varname>archive_timeout</> + used in conjunction with a known <varname>archive_timeout</varname> setting on the primary. 
However, this is somewhat error prone since a network problem or busy primary server might be sufficient to initiate failover. A notification mechanism such as the explicit @@ -1579,7 +1579,7 @@ if (!triggered) <para> Set up primary and standby systems as nearly identical as possible, including two identical copies of - <productname>PostgreSQL</> at the same release level. + <productname>PostgreSQL</productname> at the same release level. </para> </listitem> <listitem> @@ -1602,8 +1602,8 @@ if (!triggered) <listitem> <para> Begin recovery on the standby server from the local WAL - archive, using a <filename>recovery.conf</> that specifies a - <varname>restore_command</> that waits as described + archive, using a <filename>recovery.conf</filename> that specifies a + <varname>restore_command</varname> that waits as described previously (see <xref linkend="backup-pitr-recovery">). </para> </listitem> @@ -1637,7 +1637,7 @@ if (!triggered) </para> <para> - An external program can call the <function>pg_walfile_name_offset()</> + An external program can call the <function>pg_walfile_name_offset()</function> function (see <xref linkend="functions-admin">) to find out the file name and the exact byte offset within it of the current end of WAL. It can then access the WAL file directly @@ -1646,17 +1646,17 @@ if (!triggered) loss is the polling cycle time of the copying program, which can be very small, and there is no wasted bandwidth from forcing partially-used segment files to be archived. Note that the standby servers' - <varname>restore_command</> scripts can only deal with whole WAL files, + <varname>restore_command</varname> scripts can only deal with whole WAL files, so the incrementally copied data is not ordinarily made available to the standby servers. It is of use only when the primary dies — then the last partial WAL file is fed to the standby before allowing it to come up. 
The correct implementation of this process requires - cooperation of the <varname>restore_command</> script with the data + cooperation of the <varname>restore_command</varname> script with the data copying program. </para> <para> - Starting with <productname>PostgreSQL</> version 9.0, you can use + Starting with <productname>PostgreSQL</productname> version 9.0, you can use streaming replication (see <xref linkend="streaming-replication">) to achieve the same benefits with less effort. </para> @@ -1716,17 +1716,17 @@ if (!triggered) <itemizedlist> <listitem> <para> - Query access - <command>SELECT</>, <command>COPY TO</> + Query access - <command>SELECT</command>, <command>COPY TO</command> </para> </listitem> <listitem> <para> - Cursor commands - <command>DECLARE</>, <command>FETCH</>, <command>CLOSE</> + Cursor commands - <command>DECLARE</command>, <command>FETCH</command>, <command>CLOSE</command> </para> </listitem> <listitem> <para> - Parameters - <command>SHOW</>, <command>SET</>, <command>RESET</> + Parameters - <command>SHOW</command>, <command>SET</command>, <command>RESET</command> </para> </listitem> <listitem> @@ -1735,17 +1735,17 @@ if (!triggered) <itemizedlist> <listitem> <para> - <command>BEGIN</>, <command>END</>, <command>ABORT</>, <command>START TRANSACTION</> + <command>BEGIN</command>, <command>END</command>, <command>ABORT</command>, <command>START TRANSACTION</command> </para> </listitem> <listitem> <para> - <command>SAVEPOINT</>, <command>RELEASE</>, <command>ROLLBACK TO SAVEPOINT</> + <command>SAVEPOINT</command>, <command>RELEASE</command>, <command>ROLLBACK TO SAVEPOINT</command> </para> </listitem> <listitem> <para> - <command>EXCEPTION</> blocks and other internal subtransactions + <command>EXCEPTION</command> blocks and other internal subtransactions </para> </listitem> </itemizedlist> @@ -1753,19 +1753,19 @@ if (!triggered) </listitem> <listitem> <para> - <command>LOCK TABLE</>, though only when explicitly in one of these modes: - 
<literal>ACCESS SHARE</>, <literal>ROW SHARE</> or <literal>ROW EXCLUSIVE</>. + <command>LOCK TABLE</command>, though only when explicitly in one of these modes: + <literal>ACCESS SHARE</literal>, <literal>ROW SHARE</literal> or <literal>ROW EXCLUSIVE</literal>. </para> </listitem> <listitem> <para> - Plans and resources - <command>PREPARE</>, <command>EXECUTE</>, - <command>DEALLOCATE</>, <command>DISCARD</> + Plans and resources - <command>PREPARE</command>, <command>EXECUTE</command>, + <command>DEALLOCATE</command>, <command>DISCARD</command> </para> </listitem> <listitem> <para> - Plugins and extensions - <command>LOAD</> + Plugins and extensions - <command>LOAD</command> </para> </listitem> </itemizedlist> @@ -1779,9 +1779,9 @@ if (!triggered) <itemizedlist> <listitem> <para> - Data Manipulation Language (DML) - <command>INSERT</>, - <command>UPDATE</>, <command>DELETE</>, <command>COPY FROM</>, - <command>TRUNCATE</>. + Data Manipulation Language (DML) - <command>INSERT</command>, + <command>UPDATE</command>, <command>DELETE</command>, <command>COPY FROM</command>, + <command>TRUNCATE</command>. Note that there are no allowed actions that result in a trigger being executed during recovery. This restriction applies even to temporary tables, because table rows cannot be read or written without @@ -1791,31 +1791,31 @@ if (!triggered) </listitem> <listitem> <para> - Data Definition Language (DDL) - <command>CREATE</>, - <command>DROP</>, <command>ALTER</>, <command>COMMENT</>. + Data Definition Language (DDL) - <command>CREATE</command>, + <command>DROP</command>, <command>ALTER</command>, <command>COMMENT</command>. This restriction applies even to temporary tables, because carrying out these operations would require updating the system catalog tables. </para> </listitem> <listitem> <para> - <command>SELECT ... FOR SHARE | UPDATE</>, because row locks cannot be + <command>SELECT ... 
FOR SHARE | UPDATE</command>, because row locks cannot be taken without updating the underlying data files. </para> </listitem> <listitem> <para> - Rules on <command>SELECT</> statements that generate DML commands. + Rules on <command>SELECT</command> statements that generate DML commands. </para> </listitem> <listitem> <para> - <command>LOCK</> that explicitly requests a mode higher than <literal>ROW EXCLUSIVE MODE</>. + <command>LOCK</command> that explicitly requests a mode higher than <literal>ROW EXCLUSIVE MODE</literal>. </para> </listitem> <listitem> <para> - <command>LOCK</> in short default form, since it requests <literal>ACCESS EXCLUSIVE MODE</>. + <command>LOCK</command> in short default form, since it requests <literal>ACCESS EXCLUSIVE MODE</literal>. </para> </listitem> <listitem> @@ -1824,19 +1824,19 @@ if (!triggered) <itemizedlist> <listitem> <para> - <command>BEGIN READ WRITE</>, - <command>START TRANSACTION READ WRITE</> + <command>BEGIN READ WRITE</command>, + <command>START TRANSACTION READ WRITE</command> </para> </listitem> <listitem> <para> - <command>SET TRANSACTION READ WRITE</>, - <command>SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE</> + <command>SET TRANSACTION READ WRITE</command>, + <command>SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE</command> </para> </listitem> <listitem> <para> - <command>SET transaction_read_only = off</> + <command>SET transaction_read_only = off</command> </para> </listitem> </itemizedlist> @@ -1844,35 +1844,35 @@ if (!triggered) </listitem> <listitem> <para> - Two-phase commit commands - <command>PREPARE TRANSACTION</>, - <command>COMMIT PREPARED</>, <command>ROLLBACK PREPARED</> + Two-phase commit commands - <command>PREPARE TRANSACTION</command>, + <command>COMMIT PREPARED</command>, <command>ROLLBACK PREPARED</command> because even read-only transactions need to write WAL in the prepare phase (the first phase of two phase commit). 
</para> </listitem> <listitem> <para> - Sequence updates - <function>nextval()</>, <function>setval()</> + Sequence updates - <function>nextval()</function>, <function>setval()</function> </para> </listitem> <listitem> <para> - <command>LISTEN</>, <command>UNLISTEN</>, <command>NOTIFY</> + <command>LISTEN</command>, <command>UNLISTEN</command>, <command>NOTIFY</command> </para> </listitem> </itemizedlist> </para> <para> - In normal operation, <quote>read-only</> transactions are allowed to - use <command>LISTEN</>, <command>UNLISTEN</>, and - <command>NOTIFY</>, so Hot Standby sessions operate under slightly tighter + In normal operation, <quote>read-only</quote> transactions are allowed to + use <command>LISTEN</command>, <command>UNLISTEN</command>, and + <command>NOTIFY</command>, so Hot Standby sessions operate under slightly tighter restrictions than ordinary read-only sessions. It is possible that some of these restrictions might be loosened in a future release. </para> <para> - During hot standby, the parameter <varname>transaction_read_only</> is always + During hot standby, the parameter <varname>transaction_read_only</varname> is always true and may not be changed. But as long as no attempt is made to modify the database, connections during hot standby will act much like any other database connection. If failover or switchover occurs, the database will @@ -1884,7 +1884,7 @@ if (!triggered) <para> Users will be able to tell whether their session is read-only by - issuing <command>SHOW transaction_read_only</>. In addition, a set of + issuing <command>SHOW transaction_read_only</command>. In addition, a set of functions (<xref linkend="functions-recovery-info-table">) allow users to access information about the standby server. These allow you to write programs that are aware of the current state of the database. These @@ -1907,7 +1907,7 @@ if (!triggered) <para> There are also additional types of conflict that can occur with Hot Standby. 
- These conflicts are <emphasis>hard conflicts</> in the sense that queries + These conflicts are <emphasis>hard conflicts</emphasis> in the sense that queries might need to be canceled and, in some cases, sessions disconnected to resolve them. The user is provided with several ways to handle these conflicts. Conflict cases include: @@ -1916,7 +1916,7 @@ if (!triggered) <listitem> <para> Access Exclusive locks taken on the primary server, including both - explicit <command>LOCK</> commands and various <acronym>DDL</> + explicit <command>LOCK</command> commands and various <acronym>DDL</acronym> actions, conflict with table accesses in standby queries. </para> </listitem> @@ -1935,7 +1935,7 @@ if (!triggered) <listitem> <para> Application of a vacuum cleanup record from WAL conflicts with - standby transactions whose snapshots can still <quote>see</> any of + standby transactions whose snapshots can still <quote>see</quote> any of the rows to be removed. </para> </listitem> @@ -1962,18 +1962,18 @@ if (!triggered) <para> An example of the problem situation is an administrator on the primary - server running <command>DROP TABLE</> on a table that is currently being + server running <command>DROP TABLE</command> on a table that is currently being queried on the standby server. Clearly the standby query cannot continue - if the <command>DROP TABLE</> is applied on the standby. If this situation - occurred on the primary, the <command>DROP TABLE</> would wait until the - other query had finished. But when <command>DROP TABLE</> is run on the + if the <command>DROP TABLE</command> is applied on the standby. If this situation + occurred on the primary, the <command>DROP TABLE</command> would wait until the + other query had finished. But when <command>DROP TABLE</command> is run on the primary, the primary doesn't have information about what queries are running on the standby, so it will not wait for any such standby queries. 
The WAL change records come through to the standby while the standby query is still running, causing a conflict. The standby server must either delay application of the WAL records (and everything after them, too) or else cancel the conflicting query so that the <command>DROP - TABLE</> can be applied. + TABLE</command> can be applied. </para> <para> @@ -1986,7 +1986,7 @@ if (!triggered) once it has taken longer than the relevant delay setting to apply any newly-received WAL data. There are two parameters so that different delay values can be specified for the case of reading WAL data from an archive - (i.e., initial recovery from a base backup or <quote>catching up</> a + (i.e., initial recovery from a base backup or <quote>catching up</quote> a standby server that has fallen far behind) versus reading WAL data via streaming replication. </para> @@ -2003,10 +2003,10 @@ if (!triggered) </para> <para> - Once the delay specified by <varname>max_standby_archive_delay</> or - <varname>max_standby_streaming_delay</> has been exceeded, conflicting + Once the delay specified by <varname>max_standby_archive_delay</varname> or + <varname>max_standby_streaming_delay</varname> has been exceeded, conflicting queries will be canceled. This usually results just in a cancellation - error, although in the case of replaying a <command>DROP DATABASE</> + error, although in the case of replaying a <command>DROP DATABASE</command> the entire conflicting session will be terminated. Also, if the conflict is over a lock held by an idle transaction, the conflicting session is terminated (this behavior might change in the future). @@ -2030,7 +2030,7 @@ if (!triggered) <para> The most common reason for conflict between standby queries and WAL replay - is <quote>early cleanup</>. Normally, <productname>PostgreSQL</> allows + is <quote>early cleanup</quote>. 
Normally, <productname>PostgreSQL</productname> allows cleanup of old row versions when there are no transactions that need to see them to ensure correct visibility of data according to MVCC rules. However, this rule can only be applied for transactions executing on the @@ -2041,7 +2041,7 @@ if (!triggered) <para> Experienced users should note that both row version cleanup and row version freezing will potentially conflict with standby queries. Running a manual - <command>VACUUM FREEZE</> is likely to cause conflicts even on tables with + <command>VACUUM FREEZE</command> is likely to cause conflicts even on tables with no updated or deleted rows. </para> @@ -2049,15 +2049,15 @@ if (!triggered) Users should be clear that tables that are regularly and heavily updated on the primary server will quickly cause cancellation of longer running queries on the standby. In such cases the setting of a finite value for - <varname>max_standby_archive_delay</> or - <varname>max_standby_streaming_delay</> can be considered similar to - setting <varname>statement_timeout</>. + <varname>max_standby_archive_delay</varname> or + <varname>max_standby_streaming_delay</varname> can be considered similar to + setting <varname>statement_timeout</varname>. </para> <para> Remedial possibilities exist if the number of standby-query cancellations is found to be unacceptable. The first option is to set the parameter - <varname>hot_standby_feedback</>, which prevents <command>VACUUM</> from + <varname>hot_standby_feedback</varname>, which prevents <command>VACUUM</command> from removing recently-dead rows and so cleanup conflicts do not occur. If you do this, you should note that this will delay cleanup of dead rows on the primary, @@ -2067,11 +2067,11 @@ if (!triggered) off-loading execution onto the standby. If standby servers connect and disconnect frequently, you might want to make adjustments to handle the period when - <varname>hot_standby_feedback</> feedback is not being provided. 
- For example, consider increasing <varname>max_standby_archive_delay</> + <varname>hot_standby_feedback</varname> feedback is not being provided. + For example, consider increasing <varname>max_standby_archive_delay</varname> so that queries are not rapidly canceled by conflicts in WAL archive files during disconnected periods. You should also consider increasing - <varname>max_standby_streaming_delay</> to avoid rapid cancellations + <varname>max_standby_streaming_delay</varname> to avoid rapid cancellations by newly-arrived streaming WAL entries after reconnection. </para> @@ -2080,16 +2080,16 @@ if (!triggered) on the primary server, so that dead rows will not be cleaned up as quickly as they normally would be. This will allow more time for queries to execute before they are canceled on the standby, without having to set - a high <varname>max_standby_streaming_delay</>. However it is + a high <varname>max_standby_streaming_delay</varname>. However it is difficult to guarantee any specific execution-time window with this - approach, since <varname>vacuum_defer_cleanup_age</> is measured in + approach, since <varname>vacuum_defer_cleanup_age</varname> is measured in transactions executed on the primary server. </para> <para> The number of query cancels and the reason for them can be viewed using - the <structname>pg_stat_database_conflicts</> system view on the standby - server. The <structname>pg_stat_database</> system view also contains + the <structname>pg_stat_database_conflicts</structname> system view on the standby + server. The <structname>pg_stat_database</structname> system view also contains summary information. 
</para> </sect2> @@ -2098,8 +2098,8 @@ if (!triggered) <title>Administrator's Overview</title> <para> - If <varname>hot_standby</> is <literal>on</> in <filename>postgresql.conf</> - (the default value) and there is a <filename>recovery.conf</> + If <varname>hot_standby</varname> is <literal>on</literal> in <filename>postgresql.conf</filename> + (the default value) and there is a <filename>recovery.conf</filename> file present, the server will run in Hot Standby mode. However, it may take some time for Hot Standby connections to be allowed, because the server will not accept connections until it has completed @@ -2120,8 +2120,8 @@ LOG: database system is ready to accept read only connections Consistency information is recorded once per checkpoint on the primary. It is not possible to enable hot standby when reading WAL - written during a period when <varname>wal_level</> was not set to - <literal>replica</> or <literal>logical</> on the primary. Reaching + written during a period when <varname>wal_level</varname> was not set to + <literal>replica</literal> or <literal>logical</literal> on the primary. Reaching a consistent state can also be delayed in the presence of both of these conditions: @@ -2140,7 +2140,7 @@ LOG: database system is ready to accept read only connections If you are running file-based log shipping ("warm standby"), you might need to wait until the next WAL file arrives, which could be as long as the - <varname>archive_timeout</> setting on the primary. + <varname>archive_timeout</varname> setting on the primary. 
</para> <para> @@ -2155,22 +2155,22 @@ LOG: database system is ready to accept read only connections <itemizedlist> <listitem> <para> - <varname>max_connections</> + <varname>max_connections</varname> </para> </listitem> <listitem> <para> - <varname>max_prepared_transactions</> + <varname>max_prepared_transactions</varname> </para> </listitem> <listitem> <para> - <varname>max_locks_per_transaction</> + <varname>max_locks_per_transaction</varname> </para> </listitem> <listitem> <para> - <varname>max_worker_processes</> + <varname>max_worker_processes</varname> </para> </listitem> </itemizedlist> @@ -2209,19 +2209,19 @@ LOG: database system is ready to accept read only connections <itemizedlist> <listitem> <para> - Data Definition Language (DDL) - e.g. <command>CREATE INDEX</> + Data Definition Language (DDL) - e.g. <command>CREATE INDEX</command> </para> </listitem> <listitem> <para> - Privilege and Ownership - <command>GRANT</>, <command>REVOKE</>, - <command>REASSIGN</> + Privilege and Ownership - <command>GRANT</command>, <command>REVOKE</command>, + <command>REASSIGN</command> </para> </listitem> <listitem> <para> - Maintenance commands - <command>ANALYZE</>, <command>VACUUM</>, - <command>CLUSTER</>, <command>REINDEX</> + Maintenance commands - <command>ANALYZE</command>, <command>VACUUM</command>, + <command>CLUSTER</command>, <command>REINDEX</command> </para> </listitem> </itemizedlist> @@ -2241,14 +2241,14 @@ LOG: database system is ready to accept read only connections </para> <para> - <function>pg_cancel_backend()</> - and <function>pg_terminate_backend()</> will work on user backends, + <function>pg_cancel_backend()</function> + and <function>pg_terminate_backend()</function> will work on user backends, but not the Startup process, which performs recovery. <structname>pg_stat_activity</structname> does not show recovering transactions as active. As a result, <structname>pg_prepared_xacts</structname> is always empty during recovery. 
If you wish to resolve in-doubt prepared transactions, view - <literal>pg_prepared_xacts</> on the primary and issue commands to + <literal>pg_prepared_xacts</literal> on the primary and issue commands to resolve transactions there or resolve them after the end of recovery. </para> @@ -2256,17 +2256,17 @@ LOG: database system is ready to accept read only connections <structname>pg_locks</structname> will show locks held by backends, as normal. <structname>pg_locks</structname> also shows a virtual transaction managed by the Startup process that owns all - <literal>AccessExclusiveLocks</> held by transactions being replayed by recovery. + <literal>AccessExclusiveLocks</literal> held by transactions being replayed by recovery. Note that the Startup process does not acquire locks to - make database changes, and thus locks other than <literal>AccessExclusiveLocks</> + make database changes, and thus locks other than <literal>AccessExclusiveLocks</literal> do not show in <structname>pg_locks</structname> for the Startup process; they are just presumed to exist. </para> <para> - The <productname>Nagios</> plugin <productname>check_pgsql</> will + The <productname>Nagios</productname> plugin <productname>check_pgsql</productname> will work, because the simple information it checks for exists. - The <productname>check_postgres</> monitoring script will also work, + The <productname>check_postgres</productname> monitoring script will also work, though some reported values could give different or confusing results. For example, last vacuum time will not be maintained, since no vacuum occurs on the standby. Vacuums running on the primary @@ -2275,11 +2275,11 @@ LOG: database system is ready to accept read only connections <para> WAL file control commands will not work during recovery, - e.g. <function>pg_start_backup</>, <function>pg_switch_wal</> etc. + e.g. <function>pg_start_backup</function>, <function>pg_switch_wal</function> etc. 
</para> <para> - Dynamically loadable modules work, including <structname>pg_stat_statements</>. + Dynamically loadable modules work, including <structname>pg_stat_statements</structname>. </para> <para> @@ -2292,8 +2292,8 @@ LOG: database system is ready to accept read only connections </para> <para> - Trigger-based replication systems such as <productname>Slony</>, - <productname>Londiste</> and <productname>Bucardo</> won't run on the + Trigger-based replication systems such as <productname>Slony</productname>, + <productname>Londiste</productname> and <productname>Bucardo</productname> won't run on the standby at all, though they will run happily on the primary server as long as the changes are not sent to standby servers to be applied. WAL replay is not trigger-based so you cannot relay from the @@ -2302,7 +2302,7 @@ LOG: database system is ready to accept read only connections </para> <para> - New OIDs cannot be assigned, though some <acronym>UUID</> generators may still + New OIDs cannot be assigned, though some <acronym>UUID</acronym> generators may still work as long as they do not rely on writing new status to the database. </para> @@ -2314,32 +2314,32 @@ LOG: database system is ready to accept read only connections </para> <para> - <command>DROP TABLESPACE</> can only succeed if the tablespace is empty. + <command>DROP TABLESPACE</command> can only succeed if the tablespace is empty. Some standby users may be actively using the tablespace via their - <varname>temp_tablespaces</> parameter. If there are temporary files in the + <varname>temp_tablespaces</varname> parameter. If there are temporary files in the tablespace, all active queries are canceled to ensure that temporary files are removed, so the tablespace can be removed and WAL replay can continue. </para> <para> - Running <command>DROP DATABASE</> or <command>ALTER DATABASE ... SET - TABLESPACE</> on the primary + Running <command>DROP DATABASE</command> or <command>ALTER DATABASE ... 
SET + TABLESPACE</command> on the primary will generate a WAL entry that will cause all users connected to that database on the standby to be forcibly disconnected. This action occurs immediately, whatever the setting of - <varname>max_standby_streaming_delay</>. Note that - <command>ALTER DATABASE ... RENAME</> does not disconnect users, which + <varname>max_standby_streaming_delay</varname>. Note that + <command>ALTER DATABASE ... RENAME</command> does not disconnect users, which in most cases will go unnoticed, though might in some cases cause a program confusion if it depends in some way upon database name. </para> <para> - In normal (non-recovery) mode, if you issue <command>DROP USER</> or <command>DROP ROLE</> + In normal (non-recovery) mode, if you issue <command>DROP USER</command> or <command>DROP ROLE</command> for a role with login capability while that user is still connected then nothing happens to the connected user - they remain connected. The user cannot reconnect however. This behavior applies in recovery also, so a - <command>DROP USER</> on the primary does not disconnect that user on the standby. + <command>DROP USER</command> on the primary does not disconnect that user on the standby. </para> <para> @@ -2361,7 +2361,7 @@ LOG: database system is ready to accept read only connections restartpoints (similar to checkpoints on the primary) and normal block cleaning activities. This can include updates of the hint bit information stored on the standby server. - The <command>CHECKPOINT</> command is accepted during recovery, + The <command>CHECKPOINT</command> command is accepted during recovery, though it performs a restartpoint rather than a new checkpoint. 
</para> </sect2> @@ -2427,15 +2427,15 @@ LOG: database system is ready to accept read only connections </listitem> <listitem> <para> - At the end of recovery, <literal>AccessExclusiveLocks</> held by prepared transactions + At the end of recovery, <literal>AccessExclusiveLocks</literal> held by prepared transactions will require twice the normal number of lock table entries. If you plan on running either a large number of concurrent prepared transactions - that normally take <literal>AccessExclusiveLocks</>, or you plan on having one - large transaction that takes many <literal>AccessExclusiveLocks</>, you are - advised to select a larger value of <varname>max_locks_per_transaction</>, + that normally take <literal>AccessExclusiveLocks</literal>, or you plan on having one + large transaction that takes many <literal>AccessExclusiveLocks</literal>, you are + advised to select a larger value of <varname>max_locks_per_transaction</varname>, perhaps as much as twice the value of the parameter on the primary server. You need not consider this at all if - your setting of <varname>max_prepared_transactions</> is 0. + your setting of <varname>max_prepared_transactions</varname> is 0. </para> </listitem> <listitem> diff --git a/doc/src/sgml/history.sgml b/doc/src/sgml/history.sgml index a7f4b701ead..d1535469f98 100644 --- a/doc/src/sgml/history.sgml +++ b/doc/src/sgml/history.sgml @@ -132,7 +132,7 @@ (<application>psql</application>) was provided for interactive SQL queries, which used <acronym>GNU</acronym> <application>Readline</application>. This largely superseded - the old <application>monitor</> program. + the old <application>monitor</application> program. </para> </listitem> @@ -215,7 +215,7 @@ </para> <para> - Details about what has happened in <productname>PostgreSQL</> since + Details about what has happened in <productname>PostgreSQL</productname> since then can be found in <xref linkend="release">. 
</para> </sect2> diff --git a/doc/src/sgml/hstore.sgml b/doc/src/sgml/hstore.sgml index db5d4409a6e..0264e4e532e 100644 --- a/doc/src/sgml/hstore.sgml +++ b/doc/src/sgml/hstore.sgml @@ -8,21 +8,21 @@ </indexterm> <para> - This module implements the <type>hstore</> data type for storing sets of - key/value pairs within a single <productname>PostgreSQL</> value. + This module implements the <type>hstore</type> data type for storing sets of + key/value pairs within a single <productname>PostgreSQL</productname> value. This can be useful in various scenarios, such as rows with many attributes that are rarely examined, or semi-structured data. Keys and values are simply text strings. </para> <sect2> - <title><type>hstore</> External Representation</title> + <title><type>hstore</type> External Representation</title> <para> - The text representation of an <type>hstore</>, used for input and output, - includes zero or more <replaceable>key</> <literal>=></> - <replaceable>value</> pairs separated by commas. Some examples: + The text representation of an <type>hstore</type>, used for input and output, + includes zero or more <replaceable>key</replaceable> <literal>=></literal> + <replaceable>value</replaceable> pairs separated by commas. Some examples: <synopsis> k => v @@ -31,15 +31,15 @@ foo => bar, baz => whatever </synopsis> The order of the pairs is not significant (and may not be reproduced on - output). Whitespace between pairs or around the <literal>=></> sign is + output). Whitespace between pairs or around the <literal>=></literal> sign is ignored. Double-quote keys and values that include whitespace, commas, - <literal>=</>s or <literal>></>s. To include a double quote or a + <literal>=</literal>s or <literal>></literal>s. To include a double quote or a backslash in a key or value, escape it with a backslash. </para> <para> - Each key in an <type>hstore</> is unique. 
If you declare an <type>hstore</> - with duplicate keys, only one will be stored in the <type>hstore</> and + Each key in an <type>hstore</type> is unique. If you declare an <type>hstore</type> + with duplicate keys, only one will be stored in the <type>hstore</type> and there is no guarantee as to which will be kept: <programlisting> @@ -51,24 +51,24 @@ SELECT 'a=>1,a=>2'::hstore; </para> <para> - A value (but not a key) can be an SQL <literal>NULL</>. For example: + A value (but not a key) can be an SQL <literal>NULL</literal>. For example: <programlisting> key => NULL </programlisting> - The <literal>NULL</> keyword is case-insensitive. Double-quote the - <literal>NULL</> to treat it as the ordinary string <quote>NULL</quote>. + The <literal>NULL</literal> keyword is case-insensitive. Double-quote the + <literal>NULL</literal> to treat it as the ordinary string <quote>NULL</quote>. </para> <note> <para> - Keep in mind that the <type>hstore</> text format, when used for input, - applies <emphasis>before</> any required quoting or escaping. If you are - passing an <type>hstore</> literal via a parameter, then no additional + Keep in mind that the <type>hstore</type> text format, when used for input, + applies <emphasis>before</emphasis> any required quoting or escaping. If you are + passing an <type>hstore</type> literal via a parameter, then no additional processing is needed. But if you're passing it as a quoted literal constant, then any single-quote characters and (depending on the setting of - the <varname>standard_conforming_strings</> configuration parameter) + the <varname>standard_conforming_strings</varname> configuration parameter) backslash characters need to be escaped correctly. See <xref linkend="sql-syntax-strings"> for more on the handling of string constants. 
@@ -83,7 +83,7 @@ key => NULL </sect2> <sect2> - <title><type>hstore</> Operators and Functions</title> + <title><type>hstore</type> Operators and Functions</title> <para> The operators provided by the <literal>hstore</literal> module are @@ -92,7 +92,7 @@ key => NULL </para> <table id="hstore-op-table"> - <title><type>hstore</> Operators</title> + <title><type>hstore</type> Operators</title> <tgroup cols="4"> <thead> @@ -106,99 +106,99 @@ key => NULL <tbody> <row> - <entry><type>hstore</> <literal>-></> <type>text</></entry> - <entry>get value for key (<literal>NULL</> if not present)</entry> + <entry><type>hstore</type> <literal>-></literal> <type>text</type></entry> + <entry>get value for key (<literal>NULL</literal> if not present)</entry> <entry><literal>'a=>x, b=>y'::hstore -> 'a'</literal></entry> <entry><literal>x</literal></entry> </row> <row> - <entry><type>hstore</> <literal>-></> <type>text[]</></entry> - <entry>get values for keys (<literal>NULL</> if not present)</entry> + <entry><type>hstore</type> <literal>-></literal> <type>text[]</type></entry> + <entry>get values for keys (<literal>NULL</literal> if not present)</entry> <entry><literal>'a=>x, b=>y, c=>z'::hstore -> ARRAY['c','a']</literal></entry> <entry><literal>{"z","x"}</literal></entry> </row> <row> - <entry><type>hstore</> <literal>||</> <type>hstore</></entry> - <entry>concatenate <type>hstore</>s</entry> + <entry><type>hstore</type> <literal>||</literal> <type>hstore</type></entry> + <entry>concatenate <type>hstore</type>s</entry> <entry><literal>'a=>b, c=>d'::hstore || 'c=>x, d=>q'::hstore</literal></entry> <entry><literal>"a"=>"b", "c"=>"x", "d"=>"q"</literal></entry> </row> <row> - <entry><type>hstore</> <literal>?</> <type>text</></entry> - <entry>does <type>hstore</> contain key?</entry> + <entry><type>hstore</type> <literal>?</literal> <type>text</type></entry> + <entry>does <type>hstore</type> contain key?</entry> <entry><literal>'a=>1'::hstore ? 
'a'</literal></entry> <entry><literal>t</literal></entry> </row> <row> - <entry><type>hstore</> <literal>?&</> <type>text[]</></entry> - <entry>does <type>hstore</> contain all specified keys?</entry> + <entry><type>hstore</type> <literal>?&</literal> <type>text[]</type></entry> + <entry>does <type>hstore</type> contain all specified keys?</entry> <entry><literal>'a=>1,b=>2'::hstore ?& ARRAY['a','b']</literal></entry> <entry><literal>t</literal></entry> </row> <row> - <entry><type>hstore</> <literal>?|</> <type>text[]</></entry> - <entry>does <type>hstore</> contain any of the specified keys?</entry> + <entry><type>hstore</type> <literal>?|</literal> <type>text[]</type></entry> + <entry>does <type>hstore</type> contain any of the specified keys?</entry> <entry><literal>'a=>1,b=>2'::hstore ?| ARRAY['b','c']</literal></entry> <entry><literal>t</literal></entry> </row> <row> - <entry><type>hstore</> <literal>@></> <type>hstore</></entry> + <entry><type>hstore</type> <literal>@></literal> <type>hstore</type></entry> <entry>does left operand contain right?</entry> <entry><literal>'a=>b, b=>1, c=>NULL'::hstore @> 'b=>1'</literal></entry> <entry><literal>t</literal></entry> </row> <row> - <entry><type>hstore</> <literal><@</> <type>hstore</></entry> + <entry><type>hstore</type> <literal><@</literal> <type>hstore</type></entry> <entry>is left operand contained in right?</entry> <entry><literal>'a=>c'::hstore <@ 'a=>b, b=>1, c=>NULL'</literal></entry> <entry><literal>f</literal></entry> </row> <row> - <entry><type>hstore</> <literal>-</> <type>text</></entry> + <entry><type>hstore</type> <literal>-</literal> <type>text</type></entry> <entry>delete key from left operand</entry> <entry><literal>'a=>1, b=>2, c=>3'::hstore - 'b'::text</literal></entry> <entry><literal>"a"=>"1", "c"=>"3"</literal></entry> </row> <row> - <entry><type>hstore</> <literal>-</> <type>text[]</></entry> + <entry><type>hstore</type> <literal>-</literal> <type>text[]</type></entry> <entry>delete keys 
from left operand</entry> <entry><literal>'a=>1, b=>2, c=>3'::hstore - ARRAY['a','b']</literal></entry> <entry><literal>"c"=>"3"</literal></entry> </row> <row> - <entry><type>hstore</> <literal>-</> <type>hstore</></entry> + <entry><type>hstore</type> <literal>-</literal> <type>hstore</type></entry> <entry>delete matching pairs from left operand</entry> <entry><literal>'a=>1, b=>2, c=>3'::hstore - 'a=>4, b=>2'::hstore</literal></entry> <entry><literal>"a"=>"1", "c"=>"3"</literal></entry> </row> <row> - <entry><type>record</> <literal>#=</> <type>hstore</></entry> - <entry>replace fields in <type>record</> with matching values from <type>hstore</></entry> + <entry><type>record</type> <literal>#=</literal> <type>hstore</type></entry> + <entry>replace fields in <type>record</type> with matching values from <type>hstore</type></entry> <entry>see Examples section</entry> <entry></entry> </row> <row> - <entry><literal>%%</> <type>hstore</></entry> - <entry>convert <type>hstore</> to array of alternating keys and values</entry> + <entry><literal>%%</literal> <type>hstore</type></entry> + <entry>convert <type>hstore</type> to array of alternating keys and values</entry> <entry><literal>%% 'a=>foo, b=>bar'::hstore</literal></entry> <entry><literal>{a,foo,b,bar}</literal></entry> </row> <row> - <entry><literal>%#</> <type>hstore</></entry> - <entry>convert <type>hstore</> to two-dimensional key/value array</entry> + <entry><literal>%#</literal> <type>hstore</type></entry> + <entry>convert <type>hstore</type> to two-dimensional key/value array</entry> <entry><literal>%# 'a=>foo, b=>bar'::hstore</literal></entry> <entry><literal>{{a,foo},{b,bar}}</literal></entry> </row> @@ -209,8 +209,8 @@ key => NULL <note> <para> - Prior to PostgreSQL 8.2, the containment operators <literal>@></> - and <literal><@</> were called <literal>@</> and <literal>~</>, + Prior to PostgreSQL 8.2, the containment operators <literal>@></literal> + and <literal><@</literal> were called 
<literal>@</literal> and <literal>~</literal>, respectively. These names are still available, but are deprecated and will eventually be removed. Notice that the old names are reversed from the convention formerly followed by the core geometric data types! @@ -218,7 +218,7 @@ key => NULL </note> <table id="hstore-func-table"> - <title><type>hstore</> Functions</title> + <title><type>hstore</type> Functions</title> <tgroup cols="5"> <thead> @@ -235,7 +235,7 @@ key => NULL <row> <entry><function>hstore(record)</function><indexterm><primary>hstore</primary></indexterm></entry> <entry><type>hstore</type></entry> - <entry>construct an <type>hstore</> from a record or row</entry> + <entry>construct an <type>hstore</type> from a record or row</entry> <entry><literal>hstore(ROW(1,2))</literal></entry> <entry><literal>f1=>1,f2=>2</literal></entry> </row> @@ -243,7 +243,7 @@ key => NULL <row> <entry><function>hstore(text[])</function></entry> <entry><type>hstore</type></entry> - <entry>construct an <type>hstore</> from an array, which may be either + <entry>construct an <type>hstore</type> from an array, which may be either a key/value array, or a two-dimensional array</entry> <entry><literal>hstore(ARRAY['a','1','b','2']) || hstore(ARRAY[['c','3'],['d','4']])</literal></entry> <entry><literal>a=>1, b=>2, c=>3, d=>4</literal></entry> @@ -252,7 +252,7 @@ key => NULL <row> <entry><function>hstore(text[], text[])</function></entry> <entry><type>hstore</type></entry> - <entry>construct an <type>hstore</> from separate key and value arrays</entry> + <entry>construct an <type>hstore</type> from separate key and value arrays</entry> <entry><literal>hstore(ARRAY['a','b'], ARRAY['1','2'])</literal></entry> <entry><literal>"a"=>"1","b"=>"2"</literal></entry> </row> @@ -260,7 +260,7 @@ key => NULL <row> <entry><function>hstore(text, text)</function></entry> <entry><type>hstore</type></entry> - <entry>make single-item <type>hstore</></entry> + <entry>make single-item 
<type>hstore</type></entry> <entry><literal>hstore('a', 'b')</literal></entry> <entry><literal>"a"=>"b"</literal></entry> </row> @@ -268,7 +268,7 @@ key => NULL <row> <entry><function>akeys(hstore)</function><indexterm><primary>akeys</primary></indexterm></entry> <entry><type>text[]</type></entry> - <entry>get <type>hstore</>'s keys as an array</entry> + <entry>get <type>hstore</type>'s keys as an array</entry> <entry><literal>akeys('a=>1,b=>2')</literal></entry> <entry><literal>{a,b}</literal></entry> </row> @@ -276,7 +276,7 @@ key => NULL <row> <entry><function>skeys(hstore)</function><indexterm><primary>skeys</primary></indexterm></entry> <entry><type>setof text</type></entry> - <entry>get <type>hstore</>'s keys as a set</entry> + <entry>get <type>hstore</type>'s keys as a set</entry> <entry><literal>skeys('a=>1,b=>2')</literal></entry> <entry> <programlisting> @@ -288,7 +288,7 @@ b <row> <entry><function>avals(hstore)</function><indexterm><primary>avals</primary></indexterm></entry> <entry><type>text[]</type></entry> - <entry>get <type>hstore</>'s values as an array</entry> + <entry>get <type>hstore</type>'s values as an array</entry> <entry><literal>avals('a=>1,b=>2')</literal></entry> <entry><literal>{1,2}</literal></entry> </row> @@ -296,7 +296,7 @@ b <row> <entry><function>svals(hstore)</function><indexterm><primary>svals</primary></indexterm></entry> <entry><type>setof text</type></entry> - <entry>get <type>hstore</>'s values as a set</entry> + <entry>get <type>hstore</type>'s values as a set</entry> <entry><literal>svals('a=>1,b=>2')</literal></entry> <entry> <programlisting> @@ -308,7 +308,7 @@ b <row> <entry><function>hstore_to_array(hstore)</function><indexterm><primary>hstore_to_array</primary></indexterm></entry> <entry><type>text[]</type></entry> - <entry>get <type>hstore</>'s keys and values as an array of alternating + <entry>get <type>hstore</type>'s keys and values as an array of alternating keys and values</entry> 
<entry><literal>hstore_to_array('a=>1,b=>2')</literal></entry> <entry><literal>{a,1,b,2}</literal></entry> @@ -317,7 +317,7 @@ b <row> <entry><function>hstore_to_matrix(hstore)</function><indexterm><primary>hstore_to_matrix</primary></indexterm></entry> <entry><type>text[]</type></entry> - <entry>get <type>hstore</>'s keys and values as a two-dimensional array</entry> + <entry>get <type>hstore</type>'s keys and values as a two-dimensional array</entry> <entry><literal>hstore_to_matrix('a=>1,b=>2')</literal></entry> <entry><literal>{{a,1},{b,2}}</literal></entry> </row> @@ -359,7 +359,7 @@ b <row> <entry><function>slice(hstore, text[])</function><indexterm><primary>slice</primary></indexterm></entry> <entry><type>hstore</type></entry> - <entry>extract a subset of an <type>hstore</></entry> + <entry>extract a subset of an <type>hstore</type></entry> <entry><literal>slice('a=>1,b=>2,c=>3'::hstore, ARRAY['b','c','x'])</literal></entry> <entry><literal>"b"=>"2", "c"=>"3"</literal></entry> </row> @@ -367,7 +367,7 @@ b <row> <entry><function>each(hstore)</function><indexterm><primary>each</primary></indexterm></entry> <entry><type>setof(key text, value text)</type></entry> - <entry>get <type>hstore</>'s keys and values as a set</entry> + <entry>get <type>hstore</type>'s keys and values as a set</entry> <entry><literal>select * from each('a=>1,b=>2')</literal></entry> <entry> <programlisting> @@ -381,7 +381,7 @@ b <row> <entry><function>exist(hstore,text)</function><indexterm><primary>exist</primary></indexterm></entry> <entry><type>boolean</type></entry> - <entry>does <type>hstore</> contain key?</entry> + <entry>does <type>hstore</type> contain key?</entry> <entry><literal>exist('a=>1','a')</literal></entry> <entry><literal>t</literal></entry> </row> @@ -389,7 +389,7 @@ b <row> <entry><function>defined(hstore,text)</function><indexterm><primary>defined</primary></indexterm></entry> <entry><type>boolean</type></entry> - <entry>does <type>hstore</> contain 
non-<literal>NULL</> value for key?</entry> + <entry>does <type>hstore</type> contain non-<literal>NULL</literal> value for key?</entry> <entry><literal>defined('a=>NULL','a')</literal></entry> <entry><literal>f</literal></entry> </row> @@ -421,7 +421,7 @@ b <row> <entry><function>populate_record(record,hstore)</function><indexterm><primary>populate_record</primary></indexterm></entry> <entry><type>record</type></entry> - <entry>replace fields in <type>record</> with matching values from <type>hstore</></entry> + <entry>replace fields in <type>record</type> with matching values from <type>hstore</type></entry> <entry>see Examples section</entry> <entry></entry> </row> @@ -442,7 +442,7 @@ b <note> <para> The function <function>populate_record</function> is actually declared - with <type>anyelement</>, not <type>record</>, as its first argument, + with <type>anyelement</type>, not <type>record</type>, as its first argument, but it will reject non-record types with a run-time error. </para> </note> @@ -452,8 +452,8 @@ b <title>Indexes</title> <para> - <type>hstore</> has GiST and GIN index support for the <literal>@></>, - <literal>?</>, <literal>?&</> and <literal>?|</> operators. For example: + <type>hstore</type> has GiST and GIN index support for the <literal>@></literal>, + <literal>?</literal>, <literal>?&</literal> and <literal>?|</literal> operators. For example: </para> <programlisting> CREATE INDEX hidx ON testhstore USING GIST (h); @@ -462,12 +462,12 @@ CREATE INDEX hidx ON testhstore USING GIN (h); </programlisting> <para> - <type>hstore</> also supports <type>btree</> or <type>hash</> indexes for - the <literal>=</> operator. This allows <type>hstore</> columns to be - declared <literal>UNIQUE</>, or to be used in <literal>GROUP BY</>, - <literal>ORDER BY</> or <literal>DISTINCT</> expressions. The sort ordering - for <type>hstore</> values is not particularly useful, but these indexes - may be useful for equivalence lookups. 
Create indexes for <literal>=</> + <type>hstore</type> also supports <type>btree</type> or <type>hash</type> indexes for + the <literal>=</literal> operator. This allows <type>hstore</type> columns to be + declared <literal>UNIQUE</literal>, or to be used in <literal>GROUP BY</literal>, + <literal>ORDER BY</literal> or <literal>DISTINCT</literal> expressions. The sort ordering + for <type>hstore</type> values is not particularly useful, but these indexes + may be useful for equivalence lookups. Create indexes for <literal>=</literal> comparisons as follows: </para> <programlisting> @@ -495,7 +495,7 @@ UPDATE tab SET h = delete(h, 'k1'); </para> <para> - Convert a <type>record</> to an <type>hstore</>: + Convert a <type>record</type> to an <type>hstore</type>: <programlisting> CREATE TABLE test (col1 integer, col2 text, col3 text); INSERT INTO test VALUES (123, 'foo', 'bar'); @@ -509,7 +509,7 @@ SELECT hstore(t) FROM test AS t; </para> <para> - Convert an <type>hstore</> to a predefined <type>record</> type: + Convert an <type>hstore</type> to a predefined <type>record</type> type: <programlisting> CREATE TABLE test (col1 integer, col2 text, col3 text); @@ -523,7 +523,7 @@ SELECT * FROM populate_record(null::test, </para> <para> - Modify an existing record using the values from an <type>hstore</>: + Modify an existing record using the values from an <type>hstore</type>: <programlisting> CREATE TABLE test (col1 integer, col2 text, col3 text); INSERT INTO test VALUES (123, 'foo', 'bar'); @@ -541,7 +541,7 @@ SELECT (r).* FROM (SELECT t #= '"col3"=>"baz"' AS r FROM test t) s; <title>Statistics</title> <para> - The <type>hstore</> type, because of its intrinsic liberality, could + The <type>hstore</type> type, because of its intrinsic liberality, could contain a lot of different keys. Checking for valid keys is the task of the application. The following examples demonstrate several techniques for checking keys and obtaining statistics. 
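(Editor's aside, not part of the patch: the hstore text representation shown throughout the table above, e.g. `a=>1, b=>2`, is simple enough to illustrate outside SQL. The sketch below is a standalone C toy that looks up a key in that textual form; it handles none of hstore's quoting or escaping rules, and the function name `hstore_get` is invented for illustration, not a PostgreSQL API.)

```c
#include <assert.h>
#include <string.h>

/* Illustrative only: find the value for a key in a simplified
 * hstore-style text form such as "a=>1, b=>2".  Quoting, escaping,
 * and NULL values are deliberately not handled.
 * Returns the number of value characters copied, or -1 if the key
 * is absent. */
static int hstore_get(const char *h, const char *key,
                      char *out, size_t outsz)
{
    size_t klen = strlen(key);
    const char *p = h;

    while (*p) {
        /* skip separators between key=>value pairs */
        while (*p == ' ' || *p == ',')
            p++;
        const char *arrow = strstr(p, "=>");
        if (arrow == NULL)
            break;
        if ((size_t)(arrow - p) == klen && strncmp(p, key, klen) == 0) {
            const char *v = arrow + 2;
            size_t n = strcspn(v, ",");   /* value ends at next comma */
            while (n > 0 && v[n - 1] == ' ')
                n--;                       /* trim trailing spaces */
            if (n >= outsz)
                n = outsz - 1;
            memcpy(out, v, n);
            out[n] = '\0';
            return (int) n;
        }
        /* advance past this pair's value */
        p = arrow + 2;
        p += strcspn(p, ",");
    }
    return -1;
}
```

For real parsing, the `hstore_to_array` / `each` functions documented above do this server-side and handle the full grammar.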
@@ -588,7 +588,7 @@ SELECT key, count(*) FROM <title>Compatibility</title> <para> - As of PostgreSQL 9.0, <type>hstore</> uses a different internal + As of PostgreSQL 9.0, <type>hstore</type> uses a different internal representation than previous versions. This presents no obstacle for dump/restore upgrades since the text representation (used in the dump) is unchanged. @@ -599,7 +599,7 @@ SELECT key, count(*) FROM having the new code recognize old-format data. This will entail a slight performance penalty when processing data that has not yet been modified by the new code. It is possible to force an upgrade of all values in a table - column by doing an <literal>UPDATE</> statement as follows: + column by doing an <literal>UPDATE</literal> statement as follows: <programlisting> UPDATE tablename SET hstorecol = hstorecol || ''; </programlisting> @@ -610,7 +610,7 @@ UPDATE tablename SET hstorecol = hstorecol || ''; <programlisting> ALTER TABLE tablename ALTER hstorecol TYPE hstore USING hstorecol || ''; </programlisting> - The <command>ALTER TABLE</> method requires an exclusive lock on the table, + The <command>ALTER TABLE</command> method requires an exclusive lock on the table, but does not result in bloating the table with old row versions. </para> diff --git a/doc/src/sgml/indexam.sgml b/doc/src/sgml/indexam.sgml index aa3d371d2e3..b06ffcdbffe 100644 --- a/doc/src/sgml/indexam.sgml +++ b/doc/src/sgml/indexam.sgml @@ -6,17 +6,17 @@ <para> This chapter defines the interface between the core <productname>PostgreSQL</productname> system and <firstterm>index access - methods</>, which manage individual index types. The core system + methods</firstterm>, which manage individual index types. The core system knows nothing about indexes beyond what is specified here, so it is possible to develop entirely new index types by writing add-on code. 
</para> <para> All indexes in <productname>PostgreSQL</productname> are what are known - technically as <firstterm>secondary indexes</>; that is, the index is + technically as <firstterm>secondary indexes</firstterm>; that is, the index is physically separate from the table file that it describes. Each index - is stored as its own physical <firstterm>relation</> and so is described - by an entry in the <structname>pg_class</> catalog. The contents of an + is stored as its own physical <firstterm>relation</firstterm> and so is described + by an entry in the <structname>pg_class</structname> catalog. The contents of an index are entirely under the control of its index access method. In practice, all index access methods divide indexes into standard-size pages so that they can use the regular storage manager and buffer manager @@ -28,7 +28,7 @@ <para> An index is effectively a mapping from some data key values to - <firstterm>tuple identifiers</>, or <acronym>TIDs</>, of row versions + <firstterm>tuple identifiers</firstterm>, or <acronym>TIDs</acronym>, of row versions (tuples) in the index's parent table. A TID consists of a block number and an item number within that block (see <xref linkend="storage-page-layout">). This is sufficient @@ -50,7 +50,7 @@ Each index access method is described by a row in the <link linkend="catalog-pg-am"><structname>pg_am</structname></link> system catalog. The <structname>pg_am</structname> entry - specifies a name and a <firstterm>handler function</> for the access + specifies a name and a <firstterm>handler function</firstterm> for the access method. These entries can be created and deleted using the <xref linkend="sql-create-access-method"> and <xref linkend="sql-drop-access-method"> SQL commands. @@ -58,14 +58,14 @@ <para> An index access method handler function must be declared to accept a - single argument of type <type>internal</> and to return the - pseudo-type <type>index_am_handler</>. 
The argument is a dummy value that + single argument of type <type>internal</type> and to return the + pseudo-type <type>index_am_handler</type>. The argument is a dummy value that simply serves to prevent handler functions from being called directly from SQL commands. The result of the function must be a palloc'd struct of type <structname>IndexAmRoutine</structname>, which contains everything that the core code needs to know to make use of the index access method. The <structname>IndexAmRoutine</structname> struct, also called the access - method's <firstterm>API struct</>, includes fields specifying assorted + method's <firstterm>API struct</firstterm>, includes fields specifying assorted fixed properties of the access method, such as whether it can support multicolumn indexes. More importantly, it contains pointers to support functions for the access method, which do all of the real work to access @@ -144,8 +144,8 @@ typedef struct IndexAmRoutine <para> To be useful, an index access method must also have one or more - <firstterm>operator families</> and - <firstterm>operator classes</> defined in + <firstterm>operator families</firstterm> and + <firstterm>operator classes</firstterm> defined in <link linkend="catalog-pg-opfamily"><structname>pg_opfamily</structname></link>, <link linkend="catalog-pg-opclass"><structname>pg_opclass</structname></link>, <link linkend="catalog-pg-amop"><structname>pg_amop</structname></link>, and @@ -170,12 +170,12 @@ typedef struct IndexAmRoutine key values come from (it is always handed precomputed key values) but it will be very interested in the operator class information in <structname>pg_index</structname>. Both of these catalog entries can be - accessed as part of the <structname>Relation</> data structure that is + accessed as part of the <structname>Relation</structname> data structure that is passed to all operations on the index. 
</para> <para> - Some of the flag fields of <structname>IndexAmRoutine</> have nonobvious + Some of the flag fields of <structname>IndexAmRoutine</structname> have nonobvious implications. The requirements of <structfield>amcanunique</structfield> are discussed in <xref linkend="index-unique-checks">. The <structfield>amcanmulticol</structfield> flag asserts that the @@ -185,7 +185,7 @@ typedef struct IndexAmRoutine When <structfield>amcanmulticol</structfield> is false, <structfield>amoptionalkey</structfield> essentially says whether the access method supports full-index scans without any restriction clause. - Access methods that support multiple index columns <emphasis>must</> + Access methods that support multiple index columns <emphasis>must</emphasis> support scans that omit restrictions on any or all of the columns after the first; however they are permitted to require some restriction to appear for the first index column, and this is signaled by setting @@ -201,17 +201,17 @@ typedef struct IndexAmRoutine indexes that have <structfield>amoptionalkey</structfield> true must index nulls, since the planner might decide to use such an index with no scan keys at all. A related restriction is that an index - access method that supports multiple index columns <emphasis>must</> + access method that supports multiple index columns <emphasis>must</emphasis> support indexing null values in columns after the first, because the planner will assume the index can be used for queries that do not restrict these columns. For example, consider an index on (a,b) and a query with <literal>WHERE a = 4</literal>. The system will assume the index can be used to scan for rows with <literal>a = 4</literal>, which is wrong if the - index omits rows where <literal>b</> is null. + index omits rows where <literal>b</literal> is null. It is, however, OK to omit rows where the first indexed column is null. 
An index access method that does index nulls may also set <structfield>amsearchnulls</structfield>, indicating that it supports - <literal>IS NULL</> and <literal>IS NOT NULL</> clauses as search + <literal>IS NULL</literal> and <literal>IS NOT NULL</literal> clauses as search conditions. </para> @@ -235,8 +235,8 @@ ambuild (Relation heapRelation, Build a new index. The index relation has been physically created, but is empty. It must be filled in with whatever fixed data the access method requires, plus entries for all tuples already existing - in the table. Ordinarily the <function>ambuild</> function will call - <function>IndexBuildHeapScan()</> to scan the table for existing tuples + in the table. Ordinarily the <function>ambuild</function> function will call + <function>IndexBuildHeapScan()</function> to scan the table for existing tuples and compute the keys that need to be inserted into the index. The function must return a palloc'd struct containing statistics about the new index. @@ -264,22 +264,22 @@ aminsert (Relation indexRelation, IndexUniqueCheck checkUnique, IndexInfo *indexInfo); </programlisting> - Insert a new tuple into an existing index. The <literal>values</> and - <literal>isnull</> arrays give the key values to be indexed, and - <literal>heap_tid</> is the TID to be indexed. + Insert a new tuple into an existing index. The <literal>values</literal> and + <literal>isnull</literal> arrays give the key values to be indexed, and + <literal>heap_tid</literal> is the TID to be indexed. If the access method supports unique indexes (its - <structfield>amcanunique</> flag is true) then - <literal>checkUnique</> indicates the type of uniqueness check to + <structfield>amcanunique</structfield> flag is true) then + <literal>checkUnique</literal> indicates the type of uniqueness check to perform. This varies depending on whether the unique constraint is deferrable; see <xref linkend="index-unique-checks"> for details. 
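(Editor's aside, not part of the patch: the indexam chapter above describes an index as a mapping from key values to TIDs, each TID being a block number plus an item number, with `aminsert` adding one entry at a time. The standalone C toy below mirrors that shape; `ToyTid`, `ToyIndex`, and the function names are invented for illustration and are not PostgreSQL's `ItemPointerData` or AM API.)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: a TID names a heap location as
 * (block number, item number within that block). */
typedef struct {
    uint32_t block;
    uint16_t offset;
} ToyTid;

/* A toy fixed-capacity index mapping integer keys to TIDs,
 * loosely in the spirit of aminsert()/amgettuple(). */
#define TOY_CAPACITY 64

typedef struct {
    int     nentries;
    int32_t keys[TOY_CAPACITY];
    ToyTid  tids[TOY_CAPACITY];
} ToyIndex;

/* Insert one (key, TID) pair; 0 on success, -1 when full. */
static int toy_insert(ToyIndex *idx, int32_t key, ToyTid tid)
{
    if (idx->nentries >= TOY_CAPACITY)
        return -1;
    idx->keys[idx->nentries] = key;
    idx->tids[idx->nentries] = tid;
    idx->nentries++;
    return 0;
}

/* Scan for a key; returns 1 and sets *tid if found, else 0.
 * As the text notes, a hit only means the index has a matching
 * entry -- a real caller must still check tuple visibility in
 * the heap. */
static int toy_lookup(const ToyIndex *idx, int32_t key, ToyTid *tid)
{
    for (int i = 0; i < idx->nentries; i++) {
        if (idx->keys[i] == key) {
            *tid = idx->tids[i];
            return 1;
        }
    }
    return 0;
}
```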
- Normally the access method only needs the <literal>heapRelation</> + Normally the access method only needs the <literal>heapRelation</literal> parameter when performing uniqueness checking (since then it will have to look into the heap to verify tuple liveness). </para> <para> The function's Boolean result value is significant only when - <literal>checkUnique</> is <literal>UNIQUE_CHECK_PARTIAL</>. + <literal>checkUnique</literal> is <literal>UNIQUE_CHECK_PARTIAL</literal>. In this case a TRUE result means the new entry is known unique, whereas FALSE means it might be non-unique (and a deferred uniqueness check must be scheduled). For other cases a constant FALSE result is recommended. @@ -287,7 +287,7 @@ aminsert (Relation indexRelation, <para> Some indexes might not index all tuples. If the tuple is not to be - indexed, <function>aminsert</> should just return without doing anything. + indexed, <function>aminsert</function> should just return without doing anything. </para> <para> @@ -306,26 +306,26 @@ ambulkdelete (IndexVacuumInfo *info, IndexBulkDeleteCallback callback, void *callback_state); </programlisting> - Delete tuple(s) from the index. This is a <quote>bulk delete</> operation + Delete tuple(s) from the index. This is a <quote>bulk delete</quote> operation that is intended to be implemented by scanning the whole index and checking each entry to see if it should be deleted. - The passed-in <literal>callback</> function must be called, in the style - <literal>callback(<replaceable>TID</>, callback_state) returns bool</literal>, + The passed-in <literal>callback</literal> function must be called, in the style + <literal>callback(<replaceable>TID</replaceable>, callback_state) returns bool</literal>, to determine whether any particular index entry, as identified by its referenced TID, is to be deleted. Must return either NULL or a palloc'd struct containing statistics about the effects of the deletion operation. 
It is OK to return NULL if no information needs to be passed on to - <function>amvacuumcleanup</>. + <function>amvacuumcleanup</function>. </para> <para> - Because of limited <varname>maintenance_work_mem</>, - <function>ambulkdelete</> might need to be called more than once when many - tuples are to be deleted. The <literal>stats</> argument is the result + Because of limited <varname>maintenance_work_mem</varname>, + <function>ambulkdelete</function> might need to be called more than once when many + tuples are to be deleted. The <literal>stats</literal> argument is the result of the previous call for this index (it is NULL for the first call within a - <command>VACUUM</> operation). This allows the AM to accumulate statistics - across the whole operation. Typically, <function>ambulkdelete</> will - modify and return the same struct if the passed <literal>stats</> is not + <command>VACUUM</command> operation). This allows the AM to accumulate statistics + across the whole operation. Typically, <function>ambulkdelete</function> will + modify and return the same struct if the passed <literal>stats</literal> is not null. </para> @@ -336,14 +336,14 @@ amvacuumcleanup (IndexVacuumInfo *info, IndexBulkDeleteResult *stats); </programlisting> Clean up after a <command>VACUUM</command> operation (zero or more - <function>ambulkdelete</> calls). This does not have to do anything + <function>ambulkdelete</function> calls). This does not have to do anything beyond returning index statistics, but it might perform bulk cleanup - such as reclaiming empty index pages. <literal>stats</> is whatever the - last <function>ambulkdelete</> call returned, or NULL if - <function>ambulkdelete</> was not called because no tuples needed to be + such as reclaiming empty index pages. <literal>stats</literal> is whatever the + last <function>ambulkdelete</function> call returned, or NULL if + <function>ambulkdelete</function> was not called because no tuples needed to be deleted. 
If the result is not NULL it must be a palloc'd struct. - The statistics it contains will be used to update <structname>pg_class</>, - and will be reported by <command>VACUUM</> if <literal>VERBOSE</> is given. + The statistics it contains will be used to update <structname>pg_class</structname>, + and will be reported by <command>VACUUM</command> if <literal>VERBOSE</literal> is given. It is OK to return NULL if the index was not changed at all during the <command>VACUUM</command> operation, but otherwise correct stats should be returned. @@ -351,8 +351,8 @@ amvacuumcleanup (IndexVacuumInfo *info, <para> As of <productname>PostgreSQL</productname> 8.4, - <function>amvacuumcleanup</> will also be called at completion of an - <command>ANALYZE</> operation. In this case <literal>stats</> is always + <function>amvacuumcleanup</function> will also be called at completion of an + <command>ANALYZE</command> operation. In this case <literal>stats</literal> is always NULL and any return value will be ignored. This case can be distinguished by checking <literal>info->analyze_only</literal>. It is recommended that the access method do nothing except post-insert cleanup in such a @@ -365,12 +365,12 @@ bool amcanreturn (Relation indexRelation, int attno); </programlisting> Check whether the index can support <link - linkend="indexes-index-only-scans"><firstterm>index-only scans</></link> on + linkend="indexes-index-only-scans"><firstterm>index-only scans</firstterm></link> on the given column, by returning the indexed column values for an index entry in the form of an <structname>IndexTuple</structname>. The attribute number is 1-based, i.e. the first column's attno is 1. Returns TRUE if supported, else FALSE. If the access method does not support index-only scans at all, - the <structfield>amcanreturn</> field in its <structname>IndexAmRoutine</> + the <structfield>amcanreturn</structfield> field in its <structname>IndexAmRoutine</structname> struct can be set to NULL. 
</para> @@ -397,18 +397,18 @@ amoptions (ArrayType *reloptions, </programlisting> Parse and validate the reloptions array for an index. This is called only when a non-null reloptions array exists for the index. - <parameter>reloptions</> is a <type>text</> array containing entries of the - form <replaceable>name</><literal>=</><replaceable>value</>. - The function should construct a <type>bytea</> value, which will be copied - into the <structfield>rd_options</> field of the index's relcache entry. - The data contents of the <type>bytea</> value are open for the access + <parameter>reloptions</parameter> is a <type>text</type> array containing entries of the + form <replaceable>name</replaceable><literal>=</literal><replaceable>value</replaceable>. + The function should construct a <type>bytea</type> value, which will be copied + into the <structfield>rd_options</structfield> field of the index's relcache entry. + The data contents of the <type>bytea</type> value are open for the access method to define; most of the standard access methods use struct - <structname>StdRdOptions</>. - When <parameter>validate</> is true, the function should report a suitable + <structname>StdRdOptions</structname>. + When <parameter>validate</parameter> is true, the function should report a suitable error message if any of the options are unrecognized or have invalid - values; when <parameter>validate</> is false, invalid entries should be - silently ignored. (<parameter>validate</> is false when loading options - already stored in <structname>pg_catalog</>; an invalid entry could only + values; when <parameter>validate</parameter> is false, invalid entries should be + silently ignored. (<parameter>validate</parameter> is false when loading options + already stored in <structname>pg_catalog</structname>; an invalid entry could only be found if the access method has changed its rules for options, and in that case ignoring obsolete entries is appropriate.) 
It is OK to return NULL if default behavior is wanted. @@ -421,44 +421,44 @@ amproperty (Oid index_oid, int attno, IndexAMProperty prop, const char *propname, bool *res, bool *isnull); </programlisting> - The <function>amproperty</> method allows index access methods to override + The <function>amproperty</function> method allows index access methods to override the default behavior of <function>pg_index_column_has_property</function> and related functions. If the access method does not have any special behavior for index property - inquiries, the <structfield>amproperty</> field in - its <structname>IndexAmRoutine</> struct can be set to NULL. - Otherwise, the <function>amproperty</> method will be called with - <parameter>index_oid</> and <parameter>attno</> both zero for + inquiries, the <structfield>amproperty</structfield> field in + its <structname>IndexAmRoutine</structname> struct can be set to NULL. + Otherwise, the <function>amproperty</function> method will be called with + <parameter>index_oid</parameter> and <parameter>attno</parameter> both zero for <function>pg_indexam_has_property</function> calls, - or with <parameter>index_oid</> valid and <parameter>attno</> zero for + or with <parameter>index_oid</parameter> valid and <parameter>attno</parameter> zero for <function>pg_index_has_property</function> calls, - or with <parameter>index_oid</> valid and <parameter>attno</> greater than + or with <parameter>index_oid</parameter> valid and <parameter>attno</parameter> greater than zero for <function>pg_index_column_has_property</function> calls. - <parameter>prop</> is an enum value identifying the property being tested, - while <parameter>propname</> is the original property name string. + <parameter>prop</parameter> is an enum value identifying the property being tested, + while <parameter>propname</parameter> is the original property name string. 
If the core code does not recognize the property name - then <parameter>prop</> is <literal>AMPROP_UNKNOWN</>. + then <parameter>prop</parameter> is <literal>AMPROP_UNKNOWN</literal>. Access methods can define custom property names by - checking <parameter>propname</> for a match (use <function>pg_strcasecmp</> + checking <parameter>propname</parameter> for a match (use <function>pg_strcasecmp</function> to match, for consistency with the core code); for names known to the core - code, it's better to inspect <parameter>prop</>. - If the <structfield>amproperty</> method returns <literal>true</> then - it has determined the property test result: it must set <literal>*res</> - to the boolean value to return, or set <literal>*isnull</> - to <literal>true</> to return a NULL. (Both of the referenced variables - are initialized to <literal>false</> before the call.) - If the <structfield>amproperty</> method returns <literal>false</> then + code, it's better to inspect <parameter>prop</parameter>. + If the <structfield>amproperty</structfield> method returns <literal>true</literal> then + it has determined the property test result: it must set <literal>*res</literal> + to the boolean value to return, or set <literal>*isnull</literal> + to <literal>true</literal> to return a NULL. (Both of the referenced variables + are initialized to <literal>false</literal> before the call.) + If the <structfield>amproperty</structfield> method returns <literal>false</literal> then the core code will proceed with its normal logic for determining the property test result. </para> <para> Access methods that support ordering operators should - implement <literal>AMPROP_DISTANCE_ORDERABLE</> property testing, as the + implement <literal>AMPROP_DISTANCE_ORDERABLE</literal> property testing, as the core code does not know how to do that and will return NULL. 
It may - also be advantageous to implement <literal>AMPROP_RETURNABLE</> testing, + also be advantageous to implement <literal>AMPROP_RETURNABLE</literal> testing, if that can be done more cheaply than by opening the index and calling - <structfield>amcanreturn</>, which is the core code's default behavior. + <structfield>amcanreturn</structfield>, which is the core code's default behavior. The default behavior should be satisfactory for all other standard properties. </para> @@ -471,18 +471,18 @@ amvalidate (Oid opclassoid); Validate the catalog entries for the specified operator class, so far as the access method can reasonably do that. For example, this might include testing that all required support functions are provided. - The <function>amvalidate</> function must return false if the opclass is - invalid. Problems should be reported with <function>ereport</> messages. + The <function>amvalidate</function> function must return false if the opclass is + invalid. Problems should be reported with <function>ereport</function> messages. </para> <para> The purpose of an index, of course, is to support scans for tuples matching - an indexable <literal>WHERE</> condition, often called a - <firstterm>qualifier</> or <firstterm>scan key</>. The semantics of + an indexable <literal>WHERE</literal> condition, often called a + <firstterm>qualifier</firstterm> or <firstterm>scan key</firstterm>. The semantics of index scanning are described more fully in <xref linkend="index-scanning">, - below. An index access method can support <quote>plain</> index scans, - <quote>bitmap</> index scans, or both. The scan-related functions that an + below. An index access method can support <quote>plain</quote> index scans, + <quote>bitmap</quote> index scans, or both. The scan-related functions that an index access method must or may provide are: </para> @@ -493,17 +493,17 @@ ambeginscan (Relation indexRelation, int nkeys, int norderbys); </programlisting> - Prepare for an index scan. 
The <literal>nkeys</> and <literal>norderbys</> + Prepare for an index scan. The <literal>nkeys</literal> and <literal>norderbys</literal> parameters indicate the number of quals and ordering operators that will be used in the scan; these may be useful for space allocation purposes. Note that the actual values of the scan keys aren't provided yet. The result must be a palloc'd struct. For implementation reasons the index access method - <emphasis>must</> create this struct by calling - <function>RelationGetIndexScan()</>. In most cases - <function>ambeginscan</> does little beyond making that call and perhaps + <emphasis>must</emphasis> create this struct by calling + <function>RelationGetIndexScan()</function>. In most cases + <function>ambeginscan</function> does little beyond making that call and perhaps acquiring locks; - the interesting parts of index-scan startup are in <function>amrescan</>. + the interesting parts of index-scan startup are in <function>amrescan</function>. </para> <para> @@ -516,10 +516,10 @@ amrescan (IndexScanDesc scan, int norderbys); </programlisting> Start or restart an index scan, possibly with new scan keys. (To restart - using previously-passed keys, NULL is passed for <literal>keys</> and/or - <literal>orderbys</>.) Note that it is not allowed for + using previously-passed keys, NULL is passed for <literal>keys</literal> and/or + <literal>orderbys</literal>.) Note that it is not allowed for the number of keys or order-by operators to be larger than - what was passed to <function>ambeginscan</>. In practice the restart + what was passed to <function>ambeginscan</function>. In practice the restart feature is used when a new outer tuple is selected by a nested-loop join and so a new key comparison value is needed, but the scan key structure remains the same. @@ -534,42 +534,42 @@ amgettuple (IndexScanDesc scan, Fetch the next tuple in the given scan, moving in the given direction (forward or backward in the index). 
Returns TRUE if a tuple was obtained, FALSE if no matching tuples remain. In the TRUE case the tuple - TID is stored into the <literal>scan</> structure. Note that - <quote>success</> means only that the index contains an entry that matches + TID is stored into the <literal>scan</literal> structure. Note that + <quote>success</quote> means only that the index contains an entry that matches the scan keys, not that the tuple necessarily still exists in the heap or - will pass the caller's snapshot test. On success, <function>amgettuple</> - must also set <literal>scan->xs_recheck</> to TRUE or FALSE. + will pass the caller's snapshot test. On success, <function>amgettuple</function> + must also set <literal>scan->xs_recheck</literal> to TRUE or FALSE. FALSE means it is certain that the index entry matches the scan keys. TRUE means this is not certain, and the conditions represented by the scan keys must be rechecked against the heap tuple after fetching it. - This provision supports <quote>lossy</> index operators. + This provision supports <quote>lossy</quote> index operators. Note that rechecking will extend only to the scan conditions; a partial - index predicate (if any) is never rechecked by <function>amgettuple</> + index predicate (if any) is never rechecked by <function>amgettuple</function> callers. </para> <para> If the index supports <link linkend="indexes-index-only-scans">index-only scans</link> (i.e., <function>amcanreturn</function> returns TRUE for it), - then on success the AM must also check <literal>scan->xs_want_itup</>, + then on success the AM must also check <literal>scan->xs_want_itup</literal>, and if that is true it must return the originally indexed data for the index entry. 
The data can be returned in the form of an - <structname>IndexTuple</> pointer stored at <literal>scan->xs_itup</>, - with tuple descriptor <literal>scan->xs_itupdesc</>; or in the form of - a <structname>HeapTuple</> pointer stored at <literal>scan->xs_hitup</>, - with tuple descriptor <literal>scan->xs_hitupdesc</>. (The latter + <structname>IndexTuple</structname> pointer stored at <literal>scan->xs_itup</literal>, + with tuple descriptor <literal>scan->xs_itupdesc</literal>; or in the form of + a <structname>HeapTuple</structname> pointer stored at <literal>scan->xs_hitup</literal>, + with tuple descriptor <literal>scan->xs_hitupdesc</literal>. (The latter format should be used when reconstructing data that might possibly not fit - into an <structname>IndexTuple</>.) In either case, + into an <structname>IndexTuple</structname>.) In either case, management of the data referenced by the pointer is the access method's responsibility. The data must remain good at least until the next - <function>amgettuple</>, <function>amrescan</>, or <function>amendscan</> + <function>amgettuple</function>, <function>amrescan</function>, or <function>amendscan</function> call for the scan. </para> <para> - The <function>amgettuple</> function need only be provided if the access - method supports <quote>plain</> index scans. If it doesn't, the - <structfield>amgettuple</> field in its <structname>IndexAmRoutine</> + The <function>amgettuple</function> function need only be provided if the access + method supports <quote>plain</quote> index scans. If it doesn't, the + <structfield>amgettuple</structfield> field in its <structname>IndexAmRoutine</structname> struct must be set to NULL. </para> @@ -583,24 +583,24 @@ amgetbitmap (IndexScanDesc scan, <type>TIDBitmap</type> (that is, OR the set of tuple IDs into whatever set is already in the bitmap). The number of tuples fetched is returned (this might be just an approximate count, for instance some AMs do not detect duplicates). 
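The "OR the TIDs into whatever set is already in the bitmap" behavior, and the possibly-approximate return count, reduce to something like the following sketch. The one-word-wide bitmap and the <literal>toy_</literal> names are invented simplifications of the real <literal>TIDBitmap</literal>:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Toy TIDBitmap: one bit per TID (hypothetical stand-in). */
typedef struct ToyBitmap { uint64_t bits; } ToyBitmap;

/* Sketch of amgetbitmap: OR matching TIDs into an already-populated
 * bitmap and return the number of tuples added.  The count may include
 * duplicates, matching the "approximate count" caveat. */
static int64_t toy_amgetbitmap(ToyBitmap *tbm, const int *tids, size_t ntids)
{
    int64_t n = 0;
    for (size_t i = 0; i < ntids; i++)
    {
        tbm->bits |= UINT64_C(1) << tids[i];  /* OR into the existing set */
        n++;                                  /* approximate: no dedup */
    }
    return n;
}

/* Two scans OR-ed into one bitmap; TID 3 appears in both. */
static uint64_t toy_demo(void)
{
    ToyBitmap tbm = {0};
    const int first[]  = {1, 3};
    const int second[] = {3, 5};

    toy_amgetbitmap(&tbm, first, 2);
    toy_amgetbitmap(&tbm, second, 2);
    return tbm.bits;            /* bits 1, 3 and 5 set exactly once */
}
```

The duplicate TID is absorbed by the OR, which is exactly why the bitmap form needs no ordering or deduplication from the access method.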
- While inserting tuple IDs into the bitmap, <function>amgetbitmap</> can + While inserting tuple IDs into the bitmap, <function>amgetbitmap</function> can indicate that rechecking of the scan conditions is required for specific - tuple IDs. This is analogous to the <literal>xs_recheck</> output parameter - of <function>amgettuple</>. Note: in the current implementation, support + tuple IDs. This is analogous to the <literal>xs_recheck</literal> output parameter + of <function>amgettuple</function>. Note: in the current implementation, support for this feature is conflated with support for lossy storage of the bitmap itself, and therefore callers recheck both the scan conditions and the partial index predicate (if any) for recheckable tuples. That might not always be true, however. - <function>amgetbitmap</> and - <function>amgettuple</> cannot be used in the same index scan; there - are other restrictions too when using <function>amgetbitmap</>, as explained + <function>amgetbitmap</function> and + <function>amgettuple</function> cannot be used in the same index scan; there + are other restrictions too when using <function>amgetbitmap</function>, as explained in <xref linkend="index-scanning">. </para> <para> - The <function>amgetbitmap</> function need only be provided if the access - method supports <quote>bitmap</> index scans. If it doesn't, the - <structfield>amgetbitmap</> field in its <structname>IndexAmRoutine</> + The <function>amgetbitmap</function> function need only be provided if the access + method supports <quote>bitmap</quote> index scans. If it doesn't, the + <structfield>amgetbitmap</structfield> field in its <structname>IndexAmRoutine</structname> struct must be set to NULL. </para> @@ -609,7 +609,7 @@ amgetbitmap (IndexScanDesc scan, void amendscan (IndexScanDesc scan); </programlisting> - End a scan and release resources. The <literal>scan</> struct itself + End a scan and release resources. 
The <literal>scan</literal> struct itself should not be freed, but any locks or pins taken internally by the access method must be released. </para> @@ -624,9 +624,9 @@ ammarkpos (IndexScanDesc scan); </para> <para> - The <function>ammarkpos</> function need only be provided if the access + The <function>ammarkpos</function> function need only be provided if the access method supports ordered scans. If it doesn't, - the <structfield>ammarkpos</> field in its <structname>IndexAmRoutine</> + the <structfield>ammarkpos</structfield> field in its <structname>IndexAmRoutine</structname> struct may be set to NULL. </para> @@ -639,15 +639,15 @@ amrestrpos (IndexScanDesc scan); </para> <para> - The <function>amrestrpos</> function need only be provided if the access + The <function>amrestrpos</function> function need only be provided if the access method supports ordered scans. If it doesn't, - the <structfield>amrestrpos</> field in its <structname>IndexAmRoutine</> + the <structfield>amrestrpos</structfield> field in its <structname>IndexAmRoutine</structname> struct may be set to NULL. </para> <para> In addition to supporting ordinary index scans, some types of index - may wish to support <firstterm>parallel index scans</>, which allow + may wish to support <firstterm>parallel index scans</firstterm>, which allow multiple backends to cooperate in performing an index scan. The index access method should arrange things so that each cooperating process returns a subset of the tuples that would be performed by @@ -668,7 +668,7 @@ amestimateparallelscan (void); Estimate and return the number of bytes of dynamic shared memory which the access method will be needed to perform a parallel scan. (This number is in addition to, not in lieu of, the amount of space needed for - AM-independent data in <structname>ParallelIndexScanDescData</>.) + AM-independent data in <structname>ParallelIndexScanDescData</structname>.) 
</para> <para> @@ -683,9 +683,9 @@ void aminitparallelscan (void *target); </programlisting> This function will be called to initialize dynamic shared memory at the - beginning of a parallel scan. <parameter>target</> will point to at least + beginning of a parallel scan. <parameter>target</parameter> will point to at least the number of bytes previously returned by - <function>amestimateparallelscan</>, and this function may use that + <function>amestimateparallelscan</function>, and this function may use that amount of space to store whatever data it wishes. </para> @@ -702,7 +702,7 @@ amparallelrescan (IndexScanDesc scan); </programlisting> This function, if implemented, will be called when a parallel index scan must be restarted. It should reset any shared state set up by - <function>aminitparallelscan</> such that the scan will be restarted from + <function>aminitparallelscan</function> such that the scan will be restarted from the beginning. </para> @@ -714,16 +714,16 @@ amparallelrescan (IndexScanDesc scan); <para> In an index scan, the index access method is responsible for regurgitating the TIDs of all the tuples it has been told about that match the - <firstterm>scan keys</>. The access method is <emphasis>not</> involved in + <firstterm>scan keys</firstterm>. The access method is <emphasis>not</emphasis> involved in actually fetching those tuples from the index's parent table, nor in determining whether they pass the scan's time qualification test or other conditions. 
</para> <para> - A scan key is the internal representation of a <literal>WHERE</> clause of - the form <replaceable>index_key</> <replaceable>operator</> - <replaceable>constant</>, where the index key is one of the columns of the + A scan key is the internal representation of a <literal>WHERE</literal> clause of + the form <replaceable>index_key</replaceable> <replaceable>operator</replaceable> + <replaceable>constant</replaceable>, where the index key is one of the columns of the index and the operator is one of the members of the operator family associated with that index column. An index scan has zero or more scan keys, which are implicitly ANDed — the returned tuples are expected @@ -731,7 +731,7 @@ amparallelrescan (IndexScanDesc scan); </para> <para> - The access method can report that the index is <firstterm>lossy</>, or + The access method can report that the index is <firstterm>lossy</firstterm>, or requires rechecks, for a particular query. This implies that the index scan will return all the entries that pass the scan key, plus possibly additional entries that do not. The core system's index-scan machinery @@ -743,16 +743,16 @@ amparallelrescan (IndexScanDesc scan); <para> Note that it is entirely up to the access method to ensure that it correctly finds all and only the entries passing all the given scan keys. - Also, the core system will simply hand off all the <literal>WHERE</> + Also, the core system will simply hand off all the <literal>WHERE</literal> clauses that match the index keys and operator families, without any semantic analysis to determine whether they are redundant or contradictory. 
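The implicit-AND semantics of scan keys, and the kind of redundancy-discarding preprocessing that is left to <literal>amrescan</literal> (the <literal>WHERE x &gt; 4 AND x &gt; 14</literal> case discussed here), can be sketched as follows. This is a toy model restricted to a single <literal>&gt;</literal> operator, not the real <literal>ScanKey</literal> representation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy scan key: index_key > constant (one operator only, for illustration). */
typedef struct ToyScanKey { int constant; } ToyScanKey;

/* Keys are implicitly ANDed: a value matches only if every key accepts it. */
static bool toy_keys_match(int value, const ToyScanKey *keys, size_t nkeys)
{
    for (size_t i = 0; i < nkeys; i++)
        if (!(value > keys[i].constant))
            return false;
    return true;
}

/* Sketch of amrescan-time preprocessing: among several ">" keys on the
 * same column only the tightest (largest) bound matters; the rest are
 * redundant and can be discarded by the access method. */
static int toy_normalize_gt_keys(const ToyScanKey *keys, size_t nkeys)
{
    int tightest = keys[0].constant;
    for (size_t i = 1; i < nkeys; i++)
        if (keys[i].constant > tightest)
            tightest = keys[i].constant;
    return tightest;   /* WHERE x > 4 AND x > 14 reduces to x > 14 */
}
```

Note that both functions return the same answers; normalization changes the cost of the scan, never its result set, which is why the core system can leave it entirely to the access method.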
As an example, given - <literal>WHERE x > 4 AND x > 14</> where <literal>x</> is a b-tree - indexed column, it is left to the b-tree <function>amrescan</> function + <literal>WHERE x > 4 AND x > 14</literal> where <literal>x</literal> is a b-tree + indexed column, it is left to the b-tree <function>amrescan</function> function to realize that the first scan key is redundant and can be discarded. - The extent of preprocessing needed during <function>amrescan</> will + The extent of preprocessing needed during <function>amrescan</function> will depend on the extent to which the index access method needs to reduce - the scan keys to a <quote>normalized</> form. + the scan keys to a <quote>normalized</quote> form. </para> <para> @@ -765,7 +765,7 @@ amparallelrescan (IndexScanDesc scan); <para> Access methods that always return entries in the natural ordering of their data (such as btree) should set - <structfield>amcanorder</> to true. + <structfield>amcanorder</structfield> to true. Currently, such access methods must use btree-compatible strategy numbers for their equality and ordering operators. </para> @@ -773,11 +773,11 @@ amparallelrescan (IndexScanDesc scan); <listitem> <para> Access methods that support ordering operators should set - <structfield>amcanorderbyop</> to true. + <structfield>amcanorderbyop</structfield> to true. This indicates that the index is capable of returning entries in - an order satisfying <literal>ORDER BY</> <replaceable>index_key</> - <replaceable>operator</> <replaceable>constant</>. Scan modifiers - of that form can be passed to <function>amrescan</> as described + an order satisfying <literal>ORDER BY</literal> <replaceable>index_key</replaceable> + <replaceable>operator</replaceable> <replaceable>constant</replaceable>. Scan modifiers + of that form can be passed to <function>amrescan</function> as described previously. 
</para> </listitem> @@ -785,29 +785,29 @@ amparallelrescan (IndexScanDesc scan); </para> <para> - The <function>amgettuple</> function has a <literal>direction</> argument, - which can be either <literal>ForwardScanDirection</> (the normal case) - or <literal>BackwardScanDirection</>. If the first call after - <function>amrescan</> specifies <literal>BackwardScanDirection</>, then the + The <function>amgettuple</function> function has a <literal>direction</literal> argument, + which can be either <literal>ForwardScanDirection</literal> (the normal case) + or <literal>BackwardScanDirection</literal>. If the first call after + <function>amrescan</function> specifies <literal>BackwardScanDirection</literal>, then the set of matching index entries is to be scanned back-to-front rather than in - the normal front-to-back direction, so <function>amgettuple</> must return + the normal front-to-back direction, so <function>amgettuple</function> must return the last matching tuple in the index, rather than the first one as it normally would. (This will only occur for access - methods that set <structfield>amcanorder</> to true.) After the - first call, <function>amgettuple</> must be prepared to advance the scan in + methods that set <structfield>amcanorder</structfield> to true.) After the + first call, <function>amgettuple</function> must be prepared to advance the scan in either direction from the most recently returned entry. (But if - <structfield>amcanbackward</> is false, all subsequent + <structfield>amcanbackward</structfield> is false, all subsequent calls will have the same direction as the first one.) </para> <para> - Access methods that support ordered scans must support <quote>marking</> a + Access methods that support ordered scans must support <quote>marking</quote> a position in a scan and later returning to the marked position. The same position might be restored multiple times. 
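The mark/restore behavior described here — a single remembered position, overridden by each new mark and restorable any number of times — is small enough to model directly. The names are invented; a real <literal>ammarkpos</literal> works on the access method's internal scan state, not an integer cursor:

```c
#include <assert.h>

/* Toy ordered scan with the one-mark semantics described above. */
typedef struct ToyScan {
    int pos;      /* current position in the ordered entry stream */
    int mark;     /* the single remembered position */
} ToyScan;

static int  toy_next(ToyScan *s)     { return s->pos++; }  /* amgettuple-ish */
static void toy_markpos(ToyScan *s)  { s->mark = s->pos; } /* overrides prior mark */
static void toy_restrpos(ToyScan *s) { s->pos = s->mark; } /* may be repeated */

/* A merge-join-style consumer: rescan the same run of entries twice. */
static int toy_demo(void)
{
    ToyScan s = {0, 0};
    int sum = 0;

    toy_next(&s);            /* consume entry 0 */
    toy_markpos(&s);         /* remember position 1 */
    sum += toy_next(&s);     /* entry 1 */
    sum += toy_next(&s);     /* entry 2 */
    toy_restrpos(&s);        /* back to the mark */
    sum += toy_next(&s);     /* entry 1 again */
    return sum;
}
```

Merge join is the classic consumer of this interface: it backs up over a run of equal keys each time the other input produces another match.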
However, only one position need - be remembered per scan; a new <function>ammarkpos</> call overrides the + be remembered per scan; a new <function>ammarkpos</function> call overrides the previously marked position. An access method that does not support ordered - scans need not provide <function>ammarkpos</> and <function>amrestrpos</> - functions in <structname>IndexAmRoutine</>; set those pointers to NULL + scans need not provide <function>ammarkpos</function> and <function>amrestrpos</function> + functions in <structname>IndexAmRoutine</structname>; set those pointers to NULL instead. </para> @@ -835,29 +835,29 @@ amparallelrescan (IndexScanDesc scan); </para> <para> - Instead of using <function>amgettuple</>, an index scan can be done with - <function>amgetbitmap</> to fetch all tuples in one call. This can be - noticeably more efficient than <function>amgettuple</> because it allows + Instead of using <function>amgettuple</function>, an index scan can be done with + <function>amgetbitmap</function> to fetch all tuples in one call. This can be + noticeably more efficient than <function>amgettuple</function> because it allows avoiding lock/unlock cycles within the access method. In principle - <function>amgetbitmap</> should have the same effects as repeated - <function>amgettuple</> calls, but we impose several restrictions to - simplify matters. First of all, <function>amgetbitmap</> returns all + <function>amgetbitmap</function> should have the same effects as repeated + <function>amgettuple</function> calls, but we impose several restrictions to + simplify matters. First of all, <function>amgetbitmap</function> returns all tuples at once and marking or restoring scan positions isn't supported. Secondly, the tuples are returned in a bitmap which doesn't - have any specific ordering, which is why <function>amgetbitmap</> doesn't - take a <literal>direction</> argument. 
(Ordering operators will never be + have any specific ordering, which is why <function>amgetbitmap</function> doesn't + take a <literal>direction</literal> argument. (Ordering operators will never be supplied for such a scan, either.) Also, there is no provision for index-only scans with - <function>amgetbitmap</>, since there is no way to return the contents of + <function>amgetbitmap</function>, since there is no way to return the contents of index tuples. - Finally, <function>amgetbitmap</> + Finally, <function>amgetbitmap</function> does not guarantee any locking of the returned tuples, with implications spelled out in <xref linkend="index-locking">. </para> <para> Note that it is permitted for an access method to implement only - <function>amgetbitmap</> and not <function>amgettuple</>, or vice versa, + <function>amgetbitmap</function> and not <function>amgettuple</function>, or vice versa, if its internal implementation is unsuited to one API or the other. </para> @@ -870,26 +870,26 @@ amparallelrescan (IndexScanDesc scan); Index access methods must handle concurrent updates of the index by multiple processes. The core <productname>PostgreSQL</productname> system obtains - <literal>AccessShareLock</> on the index during an index scan, and - <literal>RowExclusiveLock</> when updating the index (including plain - <command>VACUUM</>). Since these lock types do not conflict, the access + <literal>AccessShareLock</literal> on the index during an index scan, and + <literal>RowExclusiveLock</literal> when updating the index (including plain + <command>VACUUM</command>). Since these lock types do not conflict, the access method is responsible for handling any fine-grained locking it might need. An exclusive lock on the index as a whole will be taken only during index - creation, destruction, or <command>REINDEX</>. + creation, destruction, or <command>REINDEX</command>. 
</para> <para> Building an index type that supports concurrent updates usually requires extensive and subtle analysis of the required behavior. For the b-tree and hash index types, you can read about the design decisions involved in - <filename>src/backend/access/nbtree/README</> and - <filename>src/backend/access/hash/README</>. + <filename>src/backend/access/nbtree/README</filename> and + <filename>src/backend/access/hash/README</filename>. </para> <para> Aside from the index's own internal consistency requirements, concurrent updates create issues about consistency between the parent table (the - <firstterm>heap</>) and the index. Because + <firstterm>heap</firstterm>) and the index. Because <productname>PostgreSQL</productname> separates accesses and updates of the heap from those of the index, there are windows in which the index might be inconsistent with the heap. We handle this problem @@ -906,7 +906,7 @@ amparallelrescan (IndexScanDesc scan); </listitem> <listitem> <para> - When a heap entry is to be deleted (by <command>VACUUM</>), all its + When a heap entry is to be deleted (by <command>VACUUM</command>), all its index entries must be removed first. </para> </listitem> @@ -914,7 +914,7 @@ amparallelrescan (IndexScanDesc scan); <para> An index scan must maintain a pin on the index page holding the item last returned by - <function>amgettuple</>, and <function>ambulkdelete</> cannot delete + <function>amgettuple</function>, and <function>ambulkdelete</function> cannot delete entries from pages that are pinned by other backends. The need for this rule is explained below. 
</para> @@ -922,33 +922,33 @@ amparallelrescan (IndexScanDesc scan); </itemizedlist> Without the third rule, it is possible for an index reader to - see an index entry just before it is removed by <command>VACUUM</>, and + see an index entry just before it is removed by <command>VACUUM</command>, and then to arrive at the corresponding heap entry after that was removed by - <command>VACUUM</>. + <command>VACUUM</command>. This creates no serious problems if that item number is still unused when the reader reaches it, since an empty - item slot will be ignored by <function>heap_fetch()</>. But what if a + item slot will be ignored by <function>heap_fetch()</function>. But what if a third backend has already re-used the item slot for something else? When using an MVCC-compliant snapshot, there is no problem because the new occupant of the slot is certain to be too new to pass the snapshot test. However, with a non-MVCC-compliant snapshot (such as - <literal>SnapshotAny</>), it would be possible to accept and return + <literal>SnapshotAny</literal>), it would be possible to accept and return a row that does not in fact match the scan keys. We could defend against this scenario by requiring the scan keys to be rechecked against the heap row in all cases, but that is too expensive. Instead, we use a pin on an index page as a proxy to indicate that the reader - might still be <quote>in flight</> from the index entry to the matching - heap entry. Making <function>ambulkdelete</> block on such a pin ensures - that <command>VACUUM</> cannot delete the heap entry before the reader + might still be <quote>in flight</quote> from the index entry to the matching + heap entry. Making <function>ambulkdelete</function> block on such a pin ensures + that <command>VACUUM</command> cannot delete the heap entry before the reader is done with it. This solution costs little in run time, and adds blocking overhead only in the rare cases where there actually is a conflict. 
</para> <para> - This solution requires that index scans be <quote>synchronous</>: we have + This solution requires that index scans be <quote>synchronous</quote>: we have to fetch each heap tuple immediately after scanning the corresponding index entry. This is expensive for a number of reasons. An - <quote>asynchronous</> scan in which we collect many TIDs from the index, + <quote>asynchronous</quote> scan in which we collect many TIDs from the index, and only visit the heap tuples sometime later, requires much less index locking overhead and can allow a more efficient heap access pattern. Per the above analysis, we must use the synchronous approach for @@ -957,13 +957,13 @@ amparallelrescan (IndexScanDesc scan); </para> <para> - In an <function>amgetbitmap</> index scan, the access method does not + In an <function>amgetbitmap</function> index scan, the access method does not keep an index pin on any of the returned tuples. Therefore it is only safe to use such scans with MVCC-compliant snapshots. </para> <para> - When the <structfield>ampredlocks</> flag is not set, any scan using that + When the <structfield>ampredlocks</structfield> flag is not set, any scan using that index access method within a serializable transaction will acquire a nonblocking predicate lock on the full index. This will generate a read-write conflict with the insert of any tuple into that index by a @@ -982,9 +982,9 @@ amparallelrescan (IndexScanDesc scan); <para> <productname>PostgreSQL</productname> enforces SQL uniqueness constraints - using <firstterm>unique indexes</>, which are indexes that disallow + using <firstterm>unique indexes</firstterm>, which are indexes that disallow multiple entries with identical keys. An access method that supports this - feature sets <structfield>amcanunique</> true. + feature sets <structfield>amcanunique</structfield> true. (At present, only b-tree supports it.) 
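At its core the uniqueness check asks "is there already a <emphasis>live</emphasis> entry with this key?" — dead entries do not block an insertion. A deliberately naive model of that rule follows; the real b-tree logic must additionally cope with in-progress transactions and HOT chains, as the surrounding text explains:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a unique index: entries are (key, live) pairs.
 * Hypothetical names, no concurrency. */
typedef struct { int key; bool live; } ToyUniqueEntry;

/* Insert a key, disallowing a second *live* entry with the same key. */
static bool toy_unique_insert(ToyUniqueEntry *entries, size_t *n, int key)
{
    for (size_t i = 0; i < *n; i++)
        if (entries[i].live && entries[i].key == key)
            return false;                 /* uniqueness violation */
    entries[*n] = (ToyUniqueEntry) { key, true };
    (*n)++;
    return true;
}

/* A dead duplicate does not block re-insertion of the same key. */
static bool toy_demo_dead_ok(void)
{
    ToyUniqueEntry entries[4];
    size_t n = 0;

    toy_unique_insert(entries, &n, 1);
    entries[0].live = false;              /* e.g. deleted row */
    return toy_unique_insert(entries, &n, 1);
}

/* A live duplicate is rejected. */
static bool toy_demo_blocked(void)
{
    ToyUniqueEntry entries[4];
    size_t n = 0;

    toy_unique_insert(entries, &n, 1);
    return toy_unique_insert(entries, &n, 1);
}
```

The hard part in a concurrent system is that "live" is not a simple boolean, which is what the checkUnique machinery described next exists to handle.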
</para> @@ -1032,7 +1032,7 @@ amparallelrescan (IndexScanDesc scan); no violation should be reported. (This case cannot occur during the ordinary scenario of inserting a row that's just been created by the current transaction. It can happen during - <command>CREATE UNIQUE INDEX CONCURRENTLY</>, however.) + <command>CREATE UNIQUE INDEX CONCURRENTLY</command>, however.) </para> <para> @@ -1057,32 +1057,32 @@ amparallelrescan (IndexScanDesc scan); are done. Otherwise, we schedule a recheck to occur when it is time to enforce the constraint. If, at the time of the recheck, both the inserted tuple and some other tuple with the same key are live, then the error - must be reported. (Note that for this purpose, <quote>live</> actually - means <quote>any tuple in the index entry's HOT chain is live</>.) - To implement this, the <function>aminsert</> function is passed a - <literal>checkUnique</> parameter having one of the following values: + must be reported. (Note that for this purpose, <quote>live</quote> actually + means <quote>any tuple in the index entry's HOT chain is live</quote>.) + To implement this, the <function>aminsert</function> function is passed a + <literal>checkUnique</literal> parameter having one of the following values: <itemizedlist> <listitem> <para> - <literal>UNIQUE_CHECK_NO</> indicates that no uniqueness checking + <literal>UNIQUE_CHECK_NO</literal> indicates that no uniqueness checking should be done (this is not a unique index). </para> </listitem> <listitem> <para> - <literal>UNIQUE_CHECK_YES</> indicates that this is a non-deferrable + <literal>UNIQUE_CHECK_YES</literal> indicates that this is a non-deferrable unique index, and the uniqueness check must be done immediately, as described above. </para> </listitem> <listitem> <para> - <literal>UNIQUE_CHECK_PARTIAL</> indicates that the unique + <literal>UNIQUE_CHECK_PARTIAL</literal> indicates that the unique constraint is deferrable. 
<productname>PostgreSQL</productname> will use this mode to insert each row's index entry. The access method must allow duplicate entries into the index, and report any - potential duplicates by returning FALSE from <function>aminsert</>. + potential duplicates by returning FALSE from <function>aminsert</function>. For each row for which FALSE is returned, a deferred recheck will be scheduled. </para> @@ -1098,21 +1098,21 @@ amparallelrescan (IndexScanDesc scan); </listitem> <listitem> <para> - <literal>UNIQUE_CHECK_EXISTING</> indicates that this is a deferred + <literal>UNIQUE_CHECK_EXISTING</literal> indicates that this is a deferred recheck of a row that was reported as a potential uniqueness violation. - Although this is implemented by calling <function>aminsert</>, the - access method must <emphasis>not</> insert a new index entry in this + Although this is implemented by calling <function>aminsert</function>, the + access method must <emphasis>not</emphasis> insert a new index entry in this case. The index entry is already present. Rather, the access method must check to see if there is another live index entry. If so, and if the target row is also still live, report error. </para> <para> - It is recommended that in a <literal>UNIQUE_CHECK_EXISTING</> call, + It is recommended that in a <literal>UNIQUE_CHECK_EXISTING</literal> call, the access method further verify that the target row actually does have an existing entry in the index, and report error if not. This is a good idea because the index tuple values passed to - <function>aminsert</> will have been recomputed. If the index + <function>aminsert</function> will have been recomputed. If the index definition involves functions that are not really immutable, we might be checking the wrong area of the index. 
Checking that the target row is found in the recheck verifies that we are scanning @@ -1128,20 +1128,20 @@ amparallelrescan (IndexScanDesc scan); <title>Index Cost Estimation Functions</title> <para> - The <function>amcostestimate</> function is given information describing + The <function>amcostestimate</function> function is given information describing a possible index scan, including lists of WHERE and ORDER BY clauses that have been determined to be usable with the index. It must return estimates of the cost of accessing the index and the selectivity of the WHERE clauses (that is, the fraction of parent-table rows that will be retrieved during the index scan). For simple cases, nearly all the work of the cost estimator can be done by calling standard routines - in the optimizer; the point of having an <function>amcostestimate</> function is + in the optimizer; the point of having an <function>amcostestimate</function> function is to allow index access methods to provide index-type-specific knowledge, in case it is possible to improve on the standard estimates. </para> <para> - Each <function>amcostestimate</> function must have the signature: + Each <function>amcostestimate</function> function must have the signature: <programlisting> void @@ -1158,7 +1158,7 @@ amcostestimate (PlannerInfo *root, <variablelist> <varlistentry> - <term><parameter>root</></term> + <term><parameter>root</parameter></term> <listitem> <para> The planner's information about the query being processed. @@ -1167,7 +1167,7 @@ amcostestimate (PlannerInfo *root, </varlistentry> <varlistentry> - <term><parameter>path</></term> + <term><parameter>path</parameter></term> <listitem> <para> The index access path being considered. 
All fields except cost and @@ -1177,14 +1177,14 @@ amcostestimate (PlannerInfo *root, </varlistentry> <varlistentry> - <term><parameter>loop_count</></term> + <term><parameter>loop_count</parameter></term> <listitem> <para> The number of repetitions of the index scan that should be factored into the cost estimates. This will typically be greater than one when considering a parameterized scan for use in the inside of a nestloop join. Note that the cost estimates should still be for just one scan; - a larger <parameter>loop_count</> means that it may be appropriate + a larger <parameter>loop_count</parameter> means that it may be appropriate to allow for some caching effects across multiple scans. </para> </listitem> @@ -1197,7 +1197,7 @@ amcostestimate (PlannerInfo *root, <variablelist> <varlistentry> - <term><parameter>*indexStartupCost</></term> + <term><parameter>*indexStartupCost</parameter></term> <listitem> <para> Set to cost of index start-up processing @@ -1206,7 +1206,7 @@ amcostestimate (PlannerInfo *root, </varlistentry> <varlistentry> - <term><parameter>*indexTotalCost</></term> + <term><parameter>*indexTotalCost</parameter></term> <listitem> <para> Set to total cost of index processing @@ -1215,7 +1215,7 @@ amcostestimate (PlannerInfo *root, </varlistentry> <varlistentry> - <term><parameter>*indexSelectivity</></term> + <term><parameter>*indexSelectivity</parameter></term> <listitem> <para> Set to index selectivity @@ -1224,7 +1224,7 @@ amcostestimate (PlannerInfo *root, </varlistentry> <varlistentry> - <term><parameter>*indexCorrelation</></term> + <term><parameter>*indexCorrelation</parameter></term> <listitem> <para> Set to correlation coefficient between index scan order and @@ -1244,17 +1244,17 @@ amcostestimate (PlannerInfo *root, <para> The index access costs should be computed using the parameters used by <filename>src/backend/optimizer/path/costsize.c</filename>: a sequential - disk block fetch has cost <varname>seq_page_cost</>, a 
nonsequential fetch - has cost <varname>random_page_cost</>, and the cost of processing one index - row should usually be taken as <varname>cpu_index_tuple_cost</>. In - addition, an appropriate multiple of <varname>cpu_operator_cost</> should + disk block fetch has cost <varname>seq_page_cost</varname>, a nonsequential fetch + has cost <varname>random_page_cost</varname>, and the cost of processing one index + row should usually be taken as <varname>cpu_index_tuple_cost</varname>. In + addition, an appropriate multiple of <varname>cpu_operator_cost</varname> should be charged for any comparison operators invoked during index processing (especially evaluation of the indexquals themselves). </para> <para> The access costs should include all disk and CPU costs associated with - scanning the index itself, but <emphasis>not</> the costs of retrieving or + scanning the index itself, but <emphasis>not</emphasis> the costs of retrieving or processing the parent-table rows that are identified by the index. </para> @@ -1266,21 +1266,21 @@ amcostestimate (PlannerInfo *root, </para> <para> - The <parameter>indexSelectivity</> should be set to the estimated fraction of the parent + The <parameter>indexSelectivity</parameter> should be set to the estimated fraction of the parent table rows that will be retrieved during the index scan. In the case of a lossy query, this will typically be higher than the fraction of rows that actually pass the given qual conditions. </para> <para> - The <parameter>indexCorrelation</> should be set to the correlation (ranging between + The <parameter>indexCorrelation</parameter> should be set to the correlation (ranging between -1.0 and 1.0) between the index order and the table order. This is used to adjust the estimate for the cost of fetching rows from the parent table. 
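Using the default values of those parameters (4.0 for <varname>random_page_cost</varname>, 0.005 for <varname>cpu_index_tuple_cost</varname>, 0.0025 for <varname>cpu_operator_cost</varname>), a deliberately simplified cost formula in the spirit of the generic estimator might look like this. It is a sketch under those assumptions, not the actual <function>genericcostestimate()</function> code:

```c
#include <assert.h>

/* Default planner cost parameters (PostgreSQL's shipped defaults). */
static const double random_page_cost     = 4.0;
static const double cpu_index_tuple_cost = 0.005;
static const double cpu_operator_cost    = 0.0025;

/* Visit selectivity * reltuples index rows and selectivity * relpages
 * index pages, charging nquals operator evaluations per visited row.
 * Pages are charged at random_page_cost, pessimistically assuming
 * nonsequential fetches. */
static double toy_index_total_cost(double reltuples, double relpages,
                                   double selectivity, int nquals)
{
    double rows  = selectivity * reltuples;
    double pages = selectivity * relpages;

    return pages * random_page_cost
         + rows  * (cpu_index_tuple_cost + nquals * cpu_operator_cost);
}
```

For a 100-page, 1000-row index at selectivity 0.5 with one qual this yields 50 × 4.0 + 500 × 0.0075 = 203.75 — purely index-access cost, excluding heap fetches, as the text requires.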
</para> <para> - When <parameter>loop_count</> is greater than one, the returned numbers + When <parameter>loop_count</parameter> is greater than one, the returned numbers should be averages expected for any one scan of the index. </para> @@ -1307,17 +1307,17 @@ amcostestimate (PlannerInfo *root, <step> <para> Estimate the number of index rows that will be visited during the - scan. For many index types this is the same as <parameter>indexSelectivity</> times + scan. For many index types this is the same as <parameter>indexSelectivity</parameter> times the number of rows in the index, but it might be more. (Note that the index's size in pages and rows is available from the - <literal>path->indexinfo</> struct.) + <literal>path->indexinfo</literal> struct.) </para> </step> <step> <para> Estimate the number of index pages that will be retrieved during the scan. - This might be just <parameter>indexSelectivity</> times the index's size in pages. + This might be just <parameter>indexSelectivity</parameter> times the index's size in pages. </para> </step> diff --git a/doc/src/sgml/indices.sgml b/doc/src/sgml/indices.sgml index e40750e8ec2..4cdd387b7be 100644 --- a/doc/src/sgml/indices.sgml +++ b/doc/src/sgml/indices.sgml @@ -147,14 +147,14 @@ CREATE INDEX test1_id_index ON test1 (id); </simplelist> Constructs equivalent to combinations of these operators, such as - <literal>BETWEEN</> and <literal>IN</>, can also be implemented with - a B-tree index search. Also, an <literal>IS NULL</> or <literal>IS NOT - NULL</> condition on an index column can be used with a B-tree index. + <literal>BETWEEN</literal> and <literal>IN</literal>, can also be implemented with + a B-tree index search. Also, an <literal>IS NULL</literal> or <literal>IS NOT + NULL</literal> condition on an index column can be used with a B-tree index. 
</para> <para> The optimizer can also use a B-tree index for queries involving the - pattern matching operators <literal>LIKE</> and <literal>~</literal> + pattern matching operators <literal>LIKE</literal> and <literal>~</literal> <emphasis>if</emphasis> the pattern is a constant and is anchored to the beginning of the string — for example, <literal>col LIKE 'foo%'</literal> or <literal>col ~ '^foo'</literal>, but not @@ -206,7 +206,7 @@ CREATE INDEX <replaceable>name</replaceable> ON <replaceable>table</replaceable> within which many different indexing strategies can be implemented. Accordingly, the particular operators with which a GiST index can be used vary depending on the indexing strategy (the <firstterm>operator - class</>). As an example, the standard distribution of + class</firstterm>). As an example, the standard distribution of <productname>PostgreSQL</productname> includes GiST operator classes for several two-dimensional geometric data types, which support indexed queries using these operators: @@ -231,12 +231,12 @@ CREATE INDEX <replaceable>name</replaceable> ON <replaceable>table</replaceable> The GiST operator classes included in the standard distribution are documented in <xref linkend="gist-builtin-opclasses-table">. Many other GiST operator - classes are available in the <literal>contrib</> collection or as separate + classes are available in the <literal>contrib</literal> collection or as separate projects. For more information see <xref linkend="GiST">. </para> <para> - GiST indexes are also capable of optimizing <quote>nearest-neighbor</> + GiST indexes are also capable of optimizing <quote>nearest-neighbor</quote> searches, such as <programlisting><![CDATA[ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; @@ -245,7 +245,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; which finds the ten places closest to a given target point. 
The ability to do this is again dependent on the particular operator class being used. In <xref linkend="gist-builtin-opclasses-table">, operators that can be - used in this way are listed in the column <quote>Ordering Operators</>. + used in this way are listed in the column <quote>Ordering Operators</quote>. </para> <para> @@ -290,7 +290,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; <primary>GIN</primary> <see>index</see> </indexterm> - GIN indexes are <quote>inverted indexes</> which are appropriate for + GIN indexes are <quote>inverted indexes</quote> which are appropriate for data values that contain multiple component values, such as arrays. An inverted index contains a separate entry for each component value, and can efficiently handle queries that test for the presence of specific @@ -318,7 +318,7 @@ SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10; The GIN operator classes included in the standard distribution are documented in <xref linkend="gin-builtin-opclasses-table">. Many other GIN operator - classes are available in the <literal>contrib</> collection or as separate + classes are available in the <literal>contrib</literal> collection or as separate projects. For more information see <xref linkend="GIN">. </para> @@ -407,13 +407,13 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); are checked in the index, so they save visits to the table proper, but they do not reduce the portion of the index that has to be scanned. For example, given an index on <literal>(a, b, c)</literal> and a - query condition <literal>WHERE a = 5 AND b >= 42 AND c < 77</>, + query condition <literal>WHERE a = 5 AND b >= 42 AND c < 77</literal>, the index would have to be scanned from the first entry with - <literal>a</> = 5 and <literal>b</> = 42 up through the last entry with - <literal>a</> = 5. 
Index entries with <literal>c</> >= 77 would be + <literal>a</literal> = 5 and <literal>b</literal> = 42 up through the last entry with + <literal>a</literal> = 5. Index entries with <literal>c</literal> >= 77 would be skipped, but they'd still have to be scanned through. This index could in principle be used for queries that have constraints - on <literal>b</> and/or <literal>c</> with no constraint on <literal>a</> + on <literal>b</literal> and/or <literal>c</literal> with no constraint on <literal>a</literal> — but the entire index would have to be scanned, so in most cases the planner would prefer a sequential table scan over using the index. </para> @@ -462,17 +462,17 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); <sect1 id="indexes-ordering"> - <title>Indexes and <literal>ORDER BY</></title> + <title>Indexes and <literal>ORDER BY</literal></title> <indexterm zone="indexes-ordering"> <primary>index</primary> - <secondary>and <literal>ORDER BY</></secondary> + <secondary>and <literal>ORDER BY</literal></secondary> </indexterm> <para> In addition to simply finding the rows to be returned by a query, an index may be able to deliver them in a specific sorted order. - This allows a query's <literal>ORDER BY</> specification to be honored + This allows a query's <literal>ORDER BY</literal> specification to be honored without a separate sorting step. Of the index types currently supported by <productname>PostgreSQL</productname>, only B-tree can produce sorted output — the other index types return @@ -480,7 +480,7 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); </para> <para> - The planner will consider satisfying an <literal>ORDER BY</> specification + The planner will consider satisfying an <literal>ORDER BY</literal> specification either by scanning an available index that matches the specification, or by scanning the table in physical order and doing an explicit sort. 
For a query that requires scanning a large fraction of the @@ -488,50 +488,50 @@ CREATE INDEX test2_mm_idx ON test2 (major, minor); because it requires less disk I/O due to following a sequential access pattern. Indexes are more useful when only a few rows need be fetched. An important - special case is <literal>ORDER BY</> in combination with - <literal>LIMIT</> <replaceable>n</>: an explicit sort will have to process - all the data to identify the first <replaceable>n</> rows, but if there is - an index matching the <literal>ORDER BY</>, the first <replaceable>n</> + special case is <literal>ORDER BY</literal> in combination with + <literal>LIMIT</literal> <replaceable>n</replaceable>: an explicit sort will have to process + all the data to identify the first <replaceable>n</replaceable> rows, but if there is + an index matching the <literal>ORDER BY</literal>, the first <replaceable>n</replaceable> rows can be retrieved directly, without scanning the remainder at all. </para> <para> By default, B-tree indexes store their entries in ascending order with nulls last. This means that a forward scan of an index on - column <literal>x</> produces output satisfying <literal>ORDER BY x</> - (or more verbosely, <literal>ORDER BY x ASC NULLS LAST</>). The + column <literal>x</literal> produces output satisfying <literal>ORDER BY x</literal> + (or more verbosely, <literal>ORDER BY x ASC NULLS LAST</literal>). The index can also be scanned backward, producing output satisfying - <literal>ORDER BY x DESC</> - (or more verbosely, <literal>ORDER BY x DESC NULLS FIRST</>, since - <literal>NULLS FIRST</> is the default for <literal>ORDER BY DESC</>). + <literal>ORDER BY x DESC</literal> + (or more verbosely, <literal>ORDER BY x DESC NULLS FIRST</literal>, since + <literal>NULLS FIRST</literal> is the default for <literal>ORDER BY DESC</literal>). 
</para> <para> You can adjust the ordering of a B-tree index by including the - options <literal>ASC</>, <literal>DESC</>, <literal>NULLS FIRST</>, - and/or <literal>NULLS LAST</> when creating the index; for example: + options <literal>ASC</literal>, <literal>DESC</literal>, <literal>NULLS FIRST</literal>, + and/or <literal>NULLS LAST</literal> when creating the index; for example: <programlisting> CREATE INDEX test2_info_nulls_low ON test2 (info NULLS FIRST); CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); </programlisting> An index stored in ascending order with nulls first can satisfy - either <literal>ORDER BY x ASC NULLS FIRST</> or - <literal>ORDER BY x DESC NULLS LAST</> depending on which direction + either <literal>ORDER BY x ASC NULLS FIRST</literal> or + <literal>ORDER BY x DESC NULLS LAST</literal> depending on which direction it is scanned in. </para> <para> You might wonder why bother providing all four options, when two options together with the possibility of backward scan would cover - all the variants of <literal>ORDER BY</>. In single-column indexes + all the variants of <literal>ORDER BY</literal>. In single-column indexes the options are indeed redundant, but in multicolumn indexes they can be - useful. Consider a two-column index on <literal>(x, y)</>: this can - satisfy <literal>ORDER BY x, y</> if we scan forward, or - <literal>ORDER BY x DESC, y DESC</> if we scan backward. + useful. Consider a two-column index on <literal>(x, y)</literal>: this can + satisfy <literal>ORDER BY x, y</literal> if we scan forward, or + <literal>ORDER BY x DESC, y DESC</literal> if we scan backward. But it might be that the application frequently needs to use - <literal>ORDER BY x ASC, y DESC</>. There is no way to get that + <literal>ORDER BY x ASC, y DESC</literal>. There is no way to get that ordering from a plain index, but it is possible if the index is defined - as <literal>(x ASC, y DESC)</> or <literal>(x DESC, y ASC)</>. 
+ as <literal>(x ASC, y DESC)</literal> or <literal>(x DESC, y ASC)</literal>. </para> <para> @@ -559,38 +559,38 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); <para> A single index scan can only use query clauses that use the index's columns with operators of its operator class and are joined with - <literal>AND</>. For example, given an index on <literal>(a, b)</literal> - a query condition like <literal>WHERE a = 5 AND b = 6</> could - use the index, but a query like <literal>WHERE a = 5 OR b = 6</> could not + <literal>AND</literal>. For example, given an index on <literal>(a, b)</literal> + a query condition like <literal>WHERE a = 5 AND b = 6</literal> could + use the index, but a query like <literal>WHERE a = 5 OR b = 6</literal> could not directly use the index. </para> <para> Fortunately, - <productname>PostgreSQL</> has the ability to combine multiple indexes + <productname>PostgreSQL</productname> has the ability to combine multiple indexes (including multiple uses of the same index) to handle cases that cannot - be implemented by single index scans. The system can form <literal>AND</> - and <literal>OR</> conditions across several index scans. For example, - a query like <literal>WHERE x = 42 OR x = 47 OR x = 53 OR x = 99</> - could be broken down into four separate scans of an index on <literal>x</>, + be implemented by single index scans. The system can form <literal>AND</literal> + and <literal>OR</literal> conditions across several index scans. For example, + a query like <literal>WHERE x = 42 OR x = 47 OR x = 53 OR x = 99</literal> + could be broken down into four separate scans of an index on <literal>x</literal>, each scan using one of the query clauses. The results of these scans are then ORed together to produce the result. 
Another example is that if we - have separate indexes on <literal>x</> and <literal>y</>, one possible - implementation of a query like <literal>WHERE x = 5 AND y = 6</> is to + have separate indexes on <literal>x</literal> and <literal>y</literal>, one possible + implementation of a query like <literal>WHERE x = 5 AND y = 6</literal> is to use each index with the appropriate query clause and then AND together the index results to identify the result rows. </para> <para> To combine multiple indexes, the system scans each needed index and - prepares a <firstterm>bitmap</> in memory giving the locations of + prepares a <firstterm>bitmap</firstterm> in memory giving the locations of table rows that are reported as matching that index's conditions. The bitmaps are then ANDed and ORed together as needed by the query. Finally, the actual table rows are visited and returned. The table rows are visited in physical order, because that is how the bitmap is laid out; this means that any ordering of the original indexes is lost, and so a separate sort step will be needed if the query has an <literal>ORDER - BY</> clause. For this reason, and because each additional index scan + BY</literal> clause. For this reason, and because each additional index scan adds extra time, the planner will sometimes choose to use a simple index scan even though additional indexes are available that could have been used as well. @@ -603,19 +603,19 @@ CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST); indexes are best, but sometimes it's better to create separate indexes and rely on the index-combination feature. 
For example, if your workload includes a mix of queries that sometimes involve only column - <literal>x</>, sometimes only column <literal>y</>, and sometimes both + <literal>x</literal>, sometimes only column <literal>y</literal>, and sometimes both columns, you might choose to create two separate indexes on - <literal>x</> and <literal>y</>, relying on index combination to + <literal>x</literal> and <literal>y</literal>, relying on index combination to process the queries that use both columns. You could also create a - multicolumn index on <literal>(x, y)</>. This index would typically be + multicolumn index on <literal>(x, y)</literal>. This index would typically be more efficient than index combination for queries involving both columns, but as discussed in <xref linkend="indexes-multicolumn">, it - would be almost useless for queries involving only <literal>y</>, so it + would be almost useless for queries involving only <literal>y</literal>, so it should not be the only index. A combination of the multicolumn index - and a separate index on <literal>y</> would serve reasonably well. For - queries involving only <literal>x</>, the multicolumn index could be + and a separate index on <literal>y</literal> would serve reasonably well. For + queries involving only <literal>x</literal>, the multicolumn index could be used, though it would be larger and hence slower than an index on - <literal>x</> alone. The last alternative is to create all three + <literal>x</literal> alone. The last alternative is to create all three indexes, but this is probably only reasonable if the table is searched much more often than it is updated and all three types of query are common. 
If one of the types of query is much less common than the @@ -698,9 +698,9 @@ CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1)); </para> <para> - If we were to declare this index <literal>UNIQUE</>, it would prevent - creation of rows whose <literal>col1</> values differ only in case, - as well as rows whose <literal>col1</> values are actually identical. + If we were to declare this index <literal>UNIQUE</literal>, it would prevent + creation of rows whose <literal>col1</literal> values differ only in case, + as well as rows whose <literal>col1</literal> values are actually identical. Thus, indexes on expressions can be used to enforce constraints that are not definable as simple unique constraints. </para> @@ -717,7 +717,7 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name)); </para> <para> - The syntax of the <command>CREATE INDEX</> command normally requires + The syntax of the <command>CREATE INDEX</command> command normally requires writing parentheses around index expressions, as shown in the second example. The parentheses can be omitted when the expression is just a function call, as in the first example. @@ -727,9 +727,9 @@ CREATE INDEX people_names ON people ((first_name || ' ' || last_name)); Index expressions are relatively expensive to maintain, because the derived expression(s) must be computed for each row upon insertion and whenever it is updated. However, the index expressions are - <emphasis>not</> recomputed during an indexed search, since they are + <emphasis>not</emphasis> recomputed during an indexed search, since they are already stored in the index. In both examples above, the system - sees the query as just <literal>WHERE indexedcolumn = 'constant'</> + sees the query as just <literal>WHERE indexedcolumn = 'constant'</literal> and so the speed of the search is equivalent to any other simple index query. 
Thus, indexes on expressions are useful when retrieval speed is more important than insertion and update speed. @@ -856,12 +856,12 @@ CREATE INDEX orders_unbilled_index ON orders (order_nr) SELECT * FROM orders WHERE billed is not true AND order_nr < 10000; </programlisting> However, the index can also be used in queries that do not involve - <structfield>order_nr</> at all, e.g.: + <structfield>order_nr</structfield> at all, e.g.: <programlisting> SELECT * FROM orders WHERE billed is not true AND amount > 5000.00; </programlisting> This is not as efficient as a partial index on the - <structfield>amount</> column would be, since the system has to + <structfield>amount</structfield> column would be, since the system has to scan the entire index. Yet, if there are relatively few unbilled orders, using this partial index just to find the unbilled orders could be a win. @@ -886,7 +886,7 @@ SELECT * FROM orders WHERE order_nr = 3501; predicate must match the conditions used in the queries that are supposed to benefit from the index. To be precise, a partial index can be used in a query only if the system can recognize that - the <literal>WHERE</> condition of the query mathematically implies + the <literal>WHERE</literal> condition of the query mathematically implies the predicate of the index. <productname>PostgreSQL</productname> does not have a sophisticated theorem prover that can recognize mathematically equivalent @@ -896,7 +896,7 @@ SELECT * FROM orders WHERE order_nr = 3501; The system can recognize simple inequality implications, for example <quote>x < 1</quote> implies <quote>x < 2</quote>; otherwise the predicate condition must exactly match part of the query's - <literal>WHERE</> condition + <literal>WHERE</literal> condition or the index will not be recognized as usable. Matching takes place at query planning time, not at run time. As a result, parameterized query clauses do not work with a partial index. 
For @@ -919,9 +919,9 @@ SELECT * FROM orders WHERE order_nr = 3501; <para> Suppose that we have a table describing test outcomes. We wish - to ensure that there is only one <quote>successful</> entry for + to ensure that there is only one <quote>successful</quote> entry for a given subject and target combination, but there might be any number of - <quote>unsuccessful</> entries. Here is one way to do it: + <quote>unsuccessful</quote> entries. Here is one way to do it: <programlisting> CREATE TABLE tests ( subject text, @@ -944,7 +944,7 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) distributions might cause the system to use an index when it really should not. In that case the index can be set up so that it is not available for the offending query. Normally, - <productname>PostgreSQL</> makes reasonable choices about index + <productname>PostgreSQL</productname> makes reasonable choices about index usage (e.g., it avoids them when retrieving common values, so the earlier example really only saves index size, it is not required to avoid index usage), and grossly incorrect plan choices are cause @@ -956,7 +956,7 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) know at least as much as the query planner knows, in particular you know when an index might be profitable. Forming this knowledge requires experience and understanding of how indexes in - <productname>PostgreSQL</> work. In most cases, the advantage of a + <productname>PostgreSQL</productname> work. In most cases, the advantage of a partial index over a regular index will be minimal. </para> @@ -998,8 +998,8 @@ CREATE INDEX <replaceable>name</replaceable> ON <replaceable>table</replaceable> the proper class when making an index. The operator class determines the basic sort ordering (which can then be modified by adding sort options <literal>COLLATE</literal>, - <literal>ASC</>/<literal>DESC</> and/or - <literal>NULLS FIRST</>/<literal>NULLS LAST</>). 
+ <literal>ASC</literal>/<literal>DESC</literal> and/or + <literal>NULLS FIRST</literal>/<literal>NULLS LAST</literal>). </para> <para> @@ -1025,8 +1025,8 @@ CREATE INDEX <replaceable>name</replaceable> ON <replaceable>table</replaceable> CREATE INDEX test_index ON test_table (col varchar_pattern_ops); </programlisting> Note that you should also create an index with the default operator - class if you want queries involving ordinary <literal><</>, - <literal><=</>, <literal>></>, or <literal>>=</> comparisons + class if you want queries involving ordinary <literal><</literal>, + <literal><=</literal>, <literal>></literal>, or <literal>>=</literal> comparisons to use an index. Such queries cannot use the <literal><replaceable>xxx</replaceable>_pattern_ops</literal> operator classes. (Ordinary equality comparisons can use these @@ -1057,7 +1057,7 @@ SELECT am.amname AS index_method, <para> An operator class is actually just a subset of a larger structure called an - <firstterm>operator family</>. In cases where several data types have + <firstterm>operator family</firstterm>. In cases where several data types have similar behaviors, it is frequently useful to define cross-data-type operators and allow these to work with indexes. To do this, the operator classes for each of the types must be grouped into the same operator @@ -1147,13 +1147,13 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); </indexterm> <para> - All indexes in <productname>PostgreSQL</> are <firstterm>secondary</> + All indexes in <productname>PostgreSQL</productname> are <firstterm>secondary</firstterm> indexes, meaning that each index is stored separately from the table's - main data area (which is called the table's <firstterm>heap</> - in <productname>PostgreSQL</> terminology). This means that in an + main data area (which is called the table's <firstterm>heap</firstterm> + in <productname>PostgreSQL</productname> terminology). 
This means that in an ordinary index scan, each row retrieval requires fetching data from both the index and the heap. Furthermore, while the index entries that match a - given indexable <literal>WHERE</> condition are usually close together in + given indexable <literal>WHERE</literal> condition are usually close together in the index, the table rows they reference might be anywhere in the heap. The heap-access portion of an index scan thus involves a lot of random access into the heap, which can be slow, particularly on traditional @@ -1163,8 +1163,8 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); </para> <para> - To solve this performance problem, <productname>PostgreSQL</> - supports <firstterm>index-only scans</>, which can answer queries from an + To solve this performance problem, <productname>PostgreSQL</productname> + supports <firstterm>index-only scans</firstterm>, which can answer queries from an index alone without any heap access. The basic idea is to return values directly out of each index entry instead of consulting the associated heap entry. There are two fundamental restrictions on when this method can be @@ -1187,8 +1187,8 @@ CREATE INDEX test1c_content_y_index ON test1c (content COLLATE "y"); <listitem> <para> The query must reference only columns stored in the index. For - example, given an index on columns <literal>x</> and <literal>y</> of a - table that also has a column <literal>z</>, these queries could use + example, given an index on columns <literal>x</literal> and <literal>y</literal> of a + table that also has a column <literal>z</literal>, these queries could use index-only scans: <programlisting> SELECT x, y FROM tab WHERE x = 'key'; @@ -1210,17 +1210,17 @@ SELECT x FROM tab WHERE x = 'key' AND z < 42; If these two fundamental requirements are met, then all the data values required by the query are available from the index, so an index-only scan is physically possible. 
But there is an additional requirement for any - table scan in <productname>PostgreSQL</>: it must verify that each - retrieved row be <quote>visible</> to the query's MVCC snapshot, as + table scan in <productname>PostgreSQL</productname>: it must verify that each + retrieved row be <quote>visible</quote> to the query's MVCC snapshot, as discussed in <xref linkend="mvcc">. Visibility information is not stored in index entries, only in heap entries; so at first glance it would seem that every row retrieval would require a heap access anyway. And this is indeed the case, if the table row has been modified recently. However, for seldom-changing data there is a way around this - problem. <productname>PostgreSQL</> tracks, for each page in a table's + problem. <productname>PostgreSQL</productname> tracks, for each page in a table's heap, whether all rows stored in that page are old enough to be visible to all current and future transactions. This information is stored in a bit - in the table's <firstterm>visibility map</>. An index-only scan, after + in the table's <firstterm>visibility map</firstterm>. An index-only scan, after finding a candidate index entry, checks the visibility map bit for the corresponding heap page. If it's set, the row is known visible and so the data can be returned with no further work. If it's not set, the heap @@ -1243,48 +1243,48 @@ SELECT x FROM tab WHERE x = 'key' AND z < 42; <para> To make effective use of the index-only scan feature, you might choose to create indexes in which only the leading columns are meant to - match <literal>WHERE</> clauses, while the trailing columns - hold <quote>payload</> data to be returned by a query. For example, if + match <literal>WHERE</literal> clauses, while the trailing columns + hold <quote>payload</quote> data to be returned by a query. 
For example, if you commonly run queries like <programlisting> SELECT y FROM tab WHERE x = 'key'; </programlisting> the traditional approach to speeding up such queries would be to create an - index on <literal>x</> only. However, an index on <literal>(x, y)</> + index on <literal>x</literal> only. However, an index on <literal>(x, y)</literal> would offer the possibility of implementing this query as an index-only scan. As previously discussed, such an index would be larger and hence - more expensive than an index on <literal>x</> alone, so this is attractive + more expensive than an index on <literal>x</literal> alone, so this is attractive only if the table is known to be mostly static. Note it's important that - the index be declared on <literal>(x, y)</> not <literal>(y, x)</>, as for + the index be declared on <literal>(x, y)</literal> not <literal>(y, x)</literal>, as for most index types (particularly B-trees) searches that do not constrain the leading index columns are not very efficient. </para> <para> In principle, index-only scans can be used with expression indexes. - For example, given an index on <literal>f(x)</> where <literal>x</> is a + For example, given an index on <literal>f(x)</literal> where <literal>x</literal> is a table column, it should be possible to execute <programlisting> SELECT f(x) FROM tab WHERE f(x) < 1; </programlisting> - as an index-only scan; and this is very attractive if <literal>f()</> is - an expensive-to-compute function. However, <productname>PostgreSQL</>'s + as an index-only scan; and this is very attractive if <literal>f()</literal> is + an expensive-to-compute function. However, <productname>PostgreSQL</productname>'s planner is currently not very smart about such cases. It considers a query to be potentially executable by index-only scan only when - all <emphasis>columns</> needed by the query are available from the index. 
- In this example, <literal>x</> is not needed except in the - context <literal>f(x)</>, but the planner does not notice that and + all <emphasis>columns</emphasis> needed by the query are available from the index. + In this example, <literal>x</literal> is not needed except in the + context <literal>f(x)</literal>, but the planner does not notice that and concludes that an index-only scan is not possible. If an index-only scan seems sufficiently worthwhile, this can be worked around by declaring the - index to be on <literal>(f(x), x)</>, where the second column is not + index to be on <literal>(f(x), x)</literal>, where the second column is not expected to be used in practice but is just there to convince the planner that an index-only scan is possible. An additional caveat, if the goal is - to avoid recalculating <literal>f(x)</>, is that the planner won't - necessarily match uses of <literal>f(x)</> that aren't in - indexable <literal>WHERE</> clauses to the index column. It will usually + to avoid recalculating <literal>f(x)</literal>, is that the planner won't + necessarily match uses of <literal>f(x)</literal> that aren't in + indexable <literal>WHERE</literal> clauses to the index column. It will usually get this right in simple queries such as shown above, but not in queries that involve joins. These deficiencies may be remedied in future versions - of <productname>PostgreSQL</>. + of <productname>PostgreSQL</productname>. </para> <para> @@ -1299,13 +1299,13 @@ CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target) <programlisting> SELECT target FROM tests WHERE subject = 'some-subject' AND success; </programlisting> - But there's a problem: the <literal>WHERE</> clause refers - to <literal>success</> which is not available as a result column of the + But there's a problem: the <literal>WHERE</literal> clause refers + to <literal>success</literal> which is not available as a result column of the index. 
Nonetheless, an index-only scan is possible because the plan does - not need to recheck that part of the <literal>WHERE</> clause at run time: - all entries found in the index necessarily have <literal>success = true</> + not need to recheck that part of the <literal>WHERE</literal> clause at run time: + all entries found in the index necessarily have <literal>success = true</literal> so this need not be explicitly checked in the - plan. <productname>PostgreSQL</> versions 9.6 and later will recognize + plan. <productname>PostgreSQL</productname> versions 9.6 and later will recognize such cases and allow index-only scans to be generated, but older versions will not. </para> @@ -1321,7 +1321,7 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; </indexterm> <para> - Although indexes in <productname>PostgreSQL</> do not need + Although indexes in <productname>PostgreSQL</productname> do not need maintenance or tuning, it is still important to check which indexes are actually used by the real-life query workload. Examining index usage for an individual query is done with the @@ -1388,8 +1388,8 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; their use. There are run-time parameters that can turn off various plan types (see <xref linkend="runtime-config-query-enable">). For instance, turning off sequential scans - (<varname>enable_seqscan</>) and nested-loop joins - (<varname>enable_nestloop</>), which are the most basic plans, + (<varname>enable_seqscan</varname>) and nested-loop joins + (<varname>enable_nestloop</varname>), which are the most basic plans, will force the system to use a different plan. 
If the system still chooses a sequential scan or nested-loop join then there is probably a more fundamental reason why the index is not being @@ -1428,7 +1428,7 @@ SELECT target FROM tests WHERE subject = 'some-subject' AND success; If you do not succeed in adjusting the costs to be more appropriate, then you might have to resort to forcing index usage explicitly. You might also want to contact the - <productname>PostgreSQL</> developers to examine the issue. + <productname>PostgreSQL</productname> developers to examine the issue. </para> </listitem> </itemizedlist> diff --git a/doc/src/sgml/info.sgml b/doc/src/sgml/info.sgml index 233ba0e6687..6b9f1b5d814 100644 --- a/doc/src/sgml/info.sgml +++ b/doc/src/sgml/info.sgml @@ -15,9 +15,9 @@ <para> The <productname>PostgreSQL</productname> <ulink url="https://wiki.postgresql.org">wiki</ulink> contains the project's <ulink - url="https://wiki.postgresql.org/wiki/Frequently_Asked_Questions">FAQ</> + url="https://wiki.postgresql.org/wiki/Frequently_Asked_Questions">FAQ</ulink> (Frequently Asked Questions) list, <ulink - url="https://wiki.postgresql.org/wiki/Todo">TODO</> list, and + url="https://wiki.postgresql.org/wiki/Todo">TODO</ulink> list, and detailed information about many more topics. </para> </listitem> @@ -42,7 +42,7 @@ <para> The mailing lists are a good place to have your questions answered, to share experiences with other users, and to contact - the developers. Consult the <productname>PostgreSQL</> web site + the developers. Consult the <productname>PostgreSQL</productname> web site for details. 
</para> </listitem> diff --git a/doc/src/sgml/information_schema.sgml b/doc/src/sgml/information_schema.sgml index e07ff35bca0..58c54254d7b 100644 --- a/doc/src/sgml/information_schema.sgml +++ b/doc/src/sgml/information_schema.sgml @@ -35,12 +35,12 @@ <para> This problem can appear when querying information schema views such - as <literal>check_constraint_routine_usage</>, - <literal>check_constraints</>, <literal>domain_constraints</>, and - <literal>referential_constraints</>. Some other views have similar + as <literal>check_constraint_routine_usage</literal>, + <literal>check_constraints</literal>, <literal>domain_constraints</literal>, and + <literal>referential_constraints</literal>. Some other views have similar issues but contain the table name to help distinguish duplicate - rows, e.g., <literal>constraint_column_usage</>, - <literal>constraint_table_usage</>, <literal>table_constraints</>. + rows, e.g., <literal>constraint_column_usage</literal>, + <literal>constraint_table_usage</literal>, <literal>table_constraints</literal>. 
</para> </note> @@ -384,19 +384,19 @@ <row> <entry><literal>character_set_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -535,25 +535,25 @@ <row> <entry><literal>scope_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>maximum_cardinality</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, because arrays always have unlimited maximum cardinality in <productname>PostgreSQL</></entry> + <entry>Always null, 
because arrays always have unlimited maximum cardinality in <productname>PostgreSQL</productname></entry> </row> <row> @@ -572,7 +572,7 @@ <row> <entry><literal>is_derived_reference_attribute</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> </tbody> </tgroup> @@ -1256,7 +1256,7 @@ <para> The view <literal>columns</literal> contains information about all table columns (or view columns) in the database. System columns - (<literal>oid</>, etc.) are not included. Only those columns are + (<literal>oid</literal>, etc.) are not included. Only those columns are shown that the current user has access to (by way of being the owner or having some privilege). </para> @@ -1441,19 +1441,19 @@ <row> <entry><literal>character_set_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -1540,25 +1540,25 @@ <row> <entry><literal>scope_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in 
<productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>maximum_cardinality</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, because arrays always have unlimited maximum cardinality in <productname>PostgreSQL</></entry> + <entry>Always null, because arrays always have unlimited maximum cardinality in <productname>PostgreSQL</productname></entry> </row> <row> @@ -1577,7 +1577,7 @@ <row> <entry><literal>is_self_referencing</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -1648,13 +1648,13 @@ <row> <entry><literal>is_generated</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>generation_expression</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -2152,19 +2152,19 @@ <row> <entry><literal>character_set_catalog</literal></entry> 
<entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -2300,25 +2300,25 @@ <row> <entry><literal>scope_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>maximum_cardinality</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, because arrays always have unlimited maximum cardinality in <productname>PostgreSQL</></entry> + <entry>Always null, because arrays always have unlimited maximum cardinality in 
<productname>PostgreSQL</productname></entry> </row> <row> @@ -2442,31 +2442,31 @@ ORDER BY c.ordinal_position; <row> <entry><literal>character_maximum_length</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_octet_length</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -2501,37 +2501,37 @@ ORDER BY c.ordinal_position; <row> <entry><literal>numeric_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to array element data types in 
<productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_precision_radix</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_scale</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>datetime_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>interval_type</literal></entry> <entry><type>character_data</type></entry> - <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>interval_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to array element data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to array element data types in 
<productname>PostgreSQL</productname></entry> </row> <row> @@ -2569,25 +2569,25 @@ ORDER BY c.ordinal_position; <row> <entry><literal>scope_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>maximum_cardinality</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, because arrays always have unlimited maximum cardinality in <productname>PostgreSQL</></entry> + <entry>Always null, because arrays always have unlimited maximum cardinality in <productname>PostgreSQL</productname></entry> </row> <row> @@ -3160,13 +3160,13 @@ ORDER BY c.ordinal_position; <row> <entry><literal>is_result</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>as_locator</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -3191,85 +3191,85 @@ ORDER BY c.ordinal_position; <row> 
<entry><literal>character_maximum_length</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_octet_length</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>collation_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> 
<entry><literal>collation_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>collation_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_precision_radix</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_scale</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>datetime_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to parameter data types in 
<productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>interval_type</literal></entry> <entry><type>character_data</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>interval_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to parameter data types in <productname>PostgreSQL</productname></entry> </row> <row> @@ -3301,25 +3301,25 @@ ORDER BY c.ordinal_position; <row> <entry><literal>scope_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>maximum_cardinality</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, because arrays always have unlimited maximum cardinality in 
<productname>PostgreSQL</></entry> + <entry>Always null, because arrays always have unlimited maximum cardinality in <productname>PostgreSQL</productname></entry> </row> <row> @@ -4045,37 +4045,37 @@ ORDER BY c.ordinal_position; <row> <entry><literal>module_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>module_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>module_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>udt_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>udt_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>udt_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -4094,85 +4094,85 @@ ORDER BY c.ordinal_position; <row> 
<entry><literal>character_maximum_length</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_octet_length</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>collation_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>collation_schema</literal></entry> 
<entry><type>sql_identifier</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>collation_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_precision_radix</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_scale</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>datetime_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is 
not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>interval_type</literal></entry> <entry><type>character_data</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>interval_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</></entry> + <entry>Always null, since this information is not applied to return data types in <productname>PostgreSQL</productname></entry> </row> <row> @@ -4204,25 +4204,25 @@ ORDER BY c.ordinal_position; <row> <entry><literal>scope_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>scope_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>maximum_cardinality</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Always null, because arrays always have unlimited maximum cardinality in <productname>PostgreSQL</></entry> + <entry>Always null, because arrays always have unlimited maximum cardinality in 
<productname>PostgreSQL</productname></entry> </row> <row> @@ -4283,7 +4283,7 @@ ORDER BY c.ordinal_position; <entry><type>character_data</type></entry> <entry> Always <literal>GENERAL</literal> (The SQL standard defines - other parameter styles, which are not available in <productname>PostgreSQL</>.) + other parameter styles, which are not available in <productname>PostgreSQL</productname>.) </entry> </row> @@ -4294,7 +4294,7 @@ ORDER BY c.ordinal_position; If the function is declared immutable (called deterministic in the SQL standard), then <literal>YES</literal>, else <literal>NO</literal>. (You cannot query the other volatility - levels available in <productname>PostgreSQL</> through the information schema.) + levels available in <productname>PostgreSQL</productname> through the information schema.) </entry> </row> @@ -4304,7 +4304,7 @@ ORDER BY c.ordinal_position; <entry> Always <literal>MODIFIES</literal>, meaning that the function possibly modifies SQL data. This information is not useful for - <productname>PostgreSQL</>. + <productname>PostgreSQL</productname>. </entry> </row> @@ -4321,7 +4321,7 @@ ORDER BY c.ordinal_position; <row> <entry><literal>sql_path</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -4330,26 +4330,26 @@ ORDER BY c.ordinal_position; <entry> Always <literal>YES</literal> (The opposite would be a method of a user-defined type, which is a feature not available in - <productname>PostgreSQL</>.) + <productname>PostgreSQL</productname>.) 
</entry> </row> <row> <entry><literal>max_dynamic_result_sets</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>is_user_defined_cast</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>is_implicitly_invocable</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -4366,43 +4366,43 @@ ORDER BY c.ordinal_position; <row> <entry><literal>to_sql_specific_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>to_sql_specific_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>to_sql_specific_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>as_locator</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not 
available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>created</literal></entry> <entry><type>time_stamp</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>last_altered</literal></entry> <entry><type>time_stamp</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>new_savepoint_level</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -4411,152 +4411,152 @@ ORDER BY c.ordinal_position; <entry> Currently always <literal>NO</literal>. The alternative <literal>YES</literal> applies to a feature not available in - <productname>PostgreSQL</>. + <productname>PostgreSQL</productname>. 
</entry> </row> <row> <entry><literal>result_cast_from_data_type</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_as_locator</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_char_max_length</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_char_octet_length</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_char_set_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_char_set_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_char_set_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not 
available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_collation_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_collation_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_collation_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_numeric_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_numeric_precision_radix</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_numeric_scale</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_datetime_precision</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in 
<productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_interval_type</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_interval_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_type_udt_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_type_udt_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_type_udt_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_scope_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_scope_schema</literal></entry> <entry><type>sql_identifier</type></entry> 
- <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_scope_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_maximum_cardinality</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>result_cast_dtd_identifier</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> </tbody> </tgroup> @@ -4606,25 +4606,25 @@ ORDER BY c.ordinal_position; <row> <entry><literal>default_character_set_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>default_character_set_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>default_character_set_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in 
<productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>sql_path</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> </tbody> </tgroup> @@ -4808,7 +4808,7 @@ ORDER BY c.ordinal_position; <entry><type>yes_or_no</type></entry> <entry> <literal>YES</literal> if the feature is fully supported by the - current version of <productname>PostgreSQL</>, <literal>NO</literal> if not + current version of <productname>PostgreSQL</productname>, <literal>NO</literal> if not </entry> </row> @@ -4816,7 +4816,7 @@ ORDER BY c.ordinal_position; <entry><literal>is_verified_by</literal></entry> <entry><type>character_data</type></entry> <entry> - Always null, since the <productname>PostgreSQL</> development group does not + Always null, since the <productname>PostgreSQL</productname> development group does not perform formal testing of feature conformance </entry> </row> @@ -4982,7 +4982,7 @@ ORDER BY c.ordinal_position; <entry><type>character_data</type></entry> <entry> The programming language, if the binding style is - <literal>EMBEDDED</literal>, else null. <productname>PostgreSQL</> only + <literal>EMBEDDED</literal>, else null. <productname>PostgreSQL</productname> only supports the language C. 
</entry> </row> @@ -5031,7 +5031,7 @@ ORDER BY c.ordinal_position; <entry><type>yes_or_no</type></entry> <entry> <literal>YES</literal> if the package is fully supported by the - current version of <productname>PostgreSQL</>, <literal>NO</literal> if not + current version of <productname>PostgreSQL</productname>, <literal>NO</literal> if not </entry> </row> @@ -5039,7 +5039,7 @@ ORDER BY c.ordinal_position; <entry><literal>is_verified_by</literal></entry> <entry><type>character_data</type></entry> <entry> - Always null, since the <productname>PostgreSQL</> development group does not + Always null, since the <productname>PostgreSQL</productname> development group does not perform formal testing of feature conformance </entry> </row> @@ -5093,7 +5093,7 @@ ORDER BY c.ordinal_position; <entry><type>yes_or_no</type></entry> <entry> <literal>YES</literal> if the part is fully supported by the - current version of <productname>PostgreSQL</>, + current version of <productname>PostgreSQL</productname>, <literal>NO</literal> if not </entry> </row> @@ -5102,7 +5102,7 @@ ORDER BY c.ordinal_position; <entry><literal>is_verified_by</literal></entry> <entry><type>character_data</type></entry> <entry> - Always null, since the <productname>PostgreSQL</> development group does not + Always null, since the <productname>PostgreSQL</productname> development group does not perform formal testing of feature conformance </entry> </row> @@ -5182,7 +5182,7 @@ ORDER BY c.ordinal_position; <para> The table <literal>sql_sizing_profiles</literal> contains information about the <literal>sql_sizing</literal> values that are - required by various profiles of the SQL standard. <productname>PostgreSQL</> does + required by various profiles of the SQL standard. <productname>PostgreSQL</productname> does not track any SQL profiles, so this table is empty. 
</para> @@ -5465,13 +5465,13 @@ ORDER BY c.ordinal_position; <row> <entry><literal>self_referencing_column_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>reference_generation</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> @@ -5806,31 +5806,31 @@ ORDER BY c.ordinal_position; <row> <entry><literal>action_reference_old_table</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>action_reference_new_table</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>action_reference_old_row</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>action_reference_new_row</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>created</literal></entry> <entry><type>time_stamp</type></entry> - <entry>Applies to a feature not available in 
<productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> </tbody> </tgroup> @@ -5864,7 +5864,7 @@ ORDER BY c.ordinal_position; <note> <para> - Prior to <productname>PostgreSQL</> 9.1, this view's columns + Prior to <productname>PostgreSQL</productname> 9.1, this view's columns <structfield>action_timing</structfield>, <structfield>action_reference_old_table</structfield>, <structfield>action_reference_new_table</structfield>, @@ -6113,151 +6113,151 @@ ORDER BY c.ordinal_position; <row> <entry><literal>is_instantiable</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>is_final</literal></entry> <entry><type>yes_or_no</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>ordering_form</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>ordering_category</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>ordering_routine_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> 
<entry><literal>ordering_routine_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>ordering_routine_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>reference_type</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>data_type</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_maximum_length</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_octet_length</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> 
<entry><literal>character_set_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>character_set_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>collation_catalog</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>collation_schema</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>collation_name</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>numeric_precision_radix</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> 
<entry><literal>numeric_scale</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>datetime_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>interval_type</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>interval_precision</literal></entry> <entry><type>cardinal_number</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>source_dtd_identifier</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> <row> <entry><literal>ref_dtd_identifier</literal></entry> <entry><type>sql_identifier</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in <productname>PostgreSQL</productname></entry> </row> </tbody> </tgroup> @@ -6660,7 +6660,7 @@ ORDER BY c.ordinal_position; <row> <entry><literal>check_option</literal></entry> <entry><type>character_data</type></entry> - <entry>Applies to a feature not available in <productname>PostgreSQL</></entry> + <entry>Applies to a feature not available in 
<productname>PostgreSQL</productname></entry> </row> <row> @@ -6686,8 +6686,8 @@ ORDER BY c.ordinal_position; <entry><literal>is_trigger_updatable</literal></entry> <entry><type>yes_or_no</type></entry> <entry> - <literal>YES</> if the view has an <literal>INSTEAD OF</> - <command>UPDATE</> trigger defined on it, <literal>NO</> if not + <literal>YES</literal> if the view has an <literal>INSTEAD OF</literal> + <command>UPDATE</command> trigger defined on it, <literal>NO</literal> if not </entry> </row> @@ -6695,8 +6695,8 @@ ORDER BY c.ordinal_position; <entry><literal>is_trigger_deletable</literal></entry> <entry><type>yes_or_no</type></entry> <entry> - <literal>YES</> if the view has an <literal>INSTEAD OF</> - <command>DELETE</> trigger defined on it, <literal>NO</> if not + <literal>YES</literal> if the view has an <literal>INSTEAD OF</literal> + <command>DELETE</command> trigger defined on it, <literal>NO</literal> if not </entry> </row> @@ -6704,8 +6704,8 @@ ORDER BY c.ordinal_position; <entry><literal>is_trigger_insertable_into</literal></entry> <entry><type>yes_or_no</type></entry> <entry> - <literal>YES</> if the view has an <literal>INSTEAD OF</> - <command>INSERT</> trigger defined on it, <literal>NO</> if not + <literal>YES</literal> if the view has an <literal>INSTEAD OF</literal> + <command>INSERT</command> trigger defined on it, <literal>NO</literal> if not </entry> </row> </tbody> diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml index 696c620b181..029e1dbc285 100644 --- a/doc/src/sgml/install-windows.sgml +++ b/doc/src/sgml/install-windows.sgml @@ -84,13 +84,13 @@ <productname>Microsoft Windows SDK</productname> version 6.0a to 8.1 or <productname>Visual Studio 2008</productname> and above. 
Compilation is supported down to <productname>Windows XP</productname> and - <productname>Windows Server 2003</> when building with - <productname>Visual Studio 2005</> to + <productname>Windows Server 2003</productname> when building with + <productname>Visual Studio 2005</productname> to <productname>Visual Studio 2013</productname>. Building with <productname>Visual Studio 2015</productname> is supported down to - <productname>Windows Vista</> and <productname>Windows Server 2008</>. + <productname>Windows Vista</productname> and <productname>Windows Server 2008</productname>. Building with <productname>Visual Studio 2017</productname> is supported - down to <productname>Windows 7 SP1</> and <productname>Windows Server 2008 R2 SP1</>. + down to <productname>Windows 7 SP1</productname> and <productname>Windows Server 2008 R2 SP1</productname>. </para> <para> @@ -163,7 +163,7 @@ $ENV{MSBFLAGS}="/m"; <productname>Microsoft Windows SDK</productname> it is recommended that you upgrade to the latest version (currently version 7.1), available for download from - <ulink url="https://www.microsoft.com/download"></>. + <ulink url="https://www.microsoft.com/download"></ulink>. </para> <para> You must always include the @@ -182,7 +182,7 @@ $ENV{MSBFLAGS}="/m"; ActiveState Perl is required to run the build generation scripts. MinGW or Cygwin Perl will not work. It must also be present in the PATH. Binaries can be downloaded from - <ulink url="http://www.activestate.com"></> + <ulink url="http://www.activestate.com"></ulink> (Note: version 5.8.3 or later is required, the free Standard Distribution is sufficient). 
</para></listitem> @@ -219,7 +219,7 @@ $ENV{MSBFLAGS}="/m"; <para> Both <productname>Bison</productname> and <productname>Flex</productname> are included in the <productname>msys</productname> tool suite, available - from <ulink url="http://www.mingw.org/wiki/MSYS"></> as part of the + from <ulink url="http://www.mingw.org/wiki/MSYS"></ulink> as part of the <productname>MinGW</productname> compiler suite. </para> @@ -259,7 +259,7 @@ $ENV{MSBFLAGS}="/m"; <term><productname>Diff</productname></term> <listitem><para> Diff is required to run the regression tests, and can be downloaded - from <ulink url="http://gnuwin32.sourceforge.net"></>. + from <ulink url="http://gnuwin32.sourceforge.net"></ulink>. </para></listitem> </varlistentry> @@ -267,7 +267,7 @@ $ENV{MSBFLAGS}="/m"; <term><productname>Gettext</productname></term> <listitem><para> Gettext is required to build with NLS support, and can be downloaded - from <ulink url="http://gnuwin32.sourceforge.net"></>. Note that binaries, + from <ulink url="http://gnuwin32.sourceforge.net"></ulink>. Note that binaries, dependencies and developer files are all needed. </para></listitem> </varlistentry> @@ -277,7 +277,7 @@ $ENV{MSBFLAGS}="/m"; <listitem><para> Required for GSSAPI authentication support. MIT Kerberos can be downloaded from - <ulink url="http://web.mit.edu/Kerberos/dist/index.html"></>. + <ulink url="http://web.mit.edu/Kerberos/dist/index.html"></ulink>. </para></listitem> </varlistentry> @@ -286,8 +286,8 @@ $ENV{MSBFLAGS}="/m"; <productname>libxslt</productname></term> <listitem><para> Required for XML support. Binaries can be downloaded from - <ulink url="http://zlatkovic.com/pub/libxml"></> or source from - <ulink url="http://xmlsoft.org"></>. Note that libxml2 requires iconv, + <ulink url="http://zlatkovic.com/pub/libxml"></ulink> or source from + <ulink url="http://xmlsoft.org"></ulink>. Note that libxml2 requires iconv, which is available from the same download location. 
</para></listitem> </varlistentry> @@ -296,8 +296,8 @@ $ENV{MSBFLAGS}="/m"; <term><productname>openssl</productname></term> <listitem><para> Required for SSL support. Binaries can be downloaded from - <ulink url="http://www.slproweb.com/products/Win32OpenSSL.html"></> - or source from <ulink url="http://www.openssl.org"></>. + <ulink url="http://www.slproweb.com/products/Win32OpenSSL.html"></ulink> + or source from <ulink url="http://www.openssl.org"></ulink>. </para></listitem> </varlistentry> @@ -306,7 +306,7 @@ $ENV{MSBFLAGS}="/m"; <listitem><para> Required for UUID-OSSP support (contrib only). Source can be downloaded from - <ulink url="http://www.ossp.org/pkg/lib/uuid/"></>. + <ulink url="http://www.ossp.org/pkg/lib/uuid/"></ulink>. </para></listitem> </varlistentry> @@ -314,7 +314,7 @@ $ENV{MSBFLAGS}="/m"; <term><productname>Python</productname></term> <listitem><para> Required for building <application>PL/Python</application>. Binaries can - be downloaded from <ulink url="http://www.python.org"></>. + be downloaded from <ulink url="http://www.python.org"></ulink>. </para></listitem> </varlistentry> @@ -323,7 +323,7 @@ $ENV{MSBFLAGS}="/m"; <listitem><para> Required for compression support in <application>pg_dump</application> and <application>pg_restore</application>. Binaries can be downloaded - from <ulink url="http://www.zlib.net"></>. + from <ulink url="http://www.zlib.net"></ulink>. </para></listitem> </varlistentry> @@ -347,8 +347,8 @@ $ENV{MSBFLAGS}="/m"; </para> <para> - To use a server-side third party library such as <productname>python</> or - <productname>openssl</>, this library <emphasis>must</emphasis> also be + To use a server-side third party library such as <productname>python</productname> or + <productname>openssl</productname>, this library <emphasis>must</emphasis> also be 64-bit. There is no support for loading a 32-bit library in a 64-bit server. 
Several of the third party libraries that PostgreSQL supports may only be available in 32-bit versions, in which case they cannot be used with @@ -462,20 +462,20 @@ $ENV{CONFIG}="Debug"; <para> Running the regression tests on client programs, with - <command>vcregress bincheck</>, or on recovery tests, with - <command>vcregress recoverycheck</>, requires an additional Perl module + <command>vcregress bincheck</command>, or on recovery tests, with + <command>vcregress recoverycheck</command>, requires an additional Perl module to be installed: <variablelist> <varlistentry> <term><productname>IPC::Run</productname></term> <listitem><para> - As of this writing, <literal>IPC::Run</> is not included in the + As of this writing, <literal>IPC::Run</literal> is not included in the ActiveState Perl installation, nor in the ActiveState Perl Package Manager (PPM) library. To install, download the - <filename>IPC-Run-<version>.tar.gz</> source archive from CPAN, - at <ulink url="http://search.cpan.org/dist/IPC-Run/"></>, and - uncompress. Edit the <filename>buildenv.pl</> file, and add a PERL5LIB - variable to point to the <filename>lib</> subdirectory from the + <filename>IPC-Run-<version>.tar.gz</filename> source archive from CPAN, + at <ulink url="http://search.cpan.org/dist/IPC-Run/"></ulink>, and + uncompress. Edit the <filename>buildenv.pl</filename> file, and add a PERL5LIB + variable to point to the <filename>lib</filename> subdirectory from the extracted archive. For example: <programlisting> $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; @@ -498,7 +498,7 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . 
';c:\IPC-Run-0.94\lib'; <term>OpenJade 1.3.1-2</term> <listitem><para> Download from - <ulink url="http://sourceforge.net/projects/openjade/files/openjade/1.3.1/openjade-1_3_1-2-bin.zip/download"></> + <ulink url="http://sourceforge.net/projects/openjade/files/openjade/1.3.1/openjade-1_3_1-2-bin.zip/download"></ulink> and uncompress in the subdirectory <filename>openjade-1.3.1</filename>. </para></listitem> </varlistentry> @@ -507,7 +507,7 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; <term>DocBook DTD 4.2</term> <listitem><para> Download from - <ulink url="http://www.oasis-open.org/docbook/sgml/4.2/docbook-4.2.zip"></> + <ulink url="http://www.oasis-open.org/docbook/sgml/4.2/docbook-4.2.zip"></ulink> and uncompress in the subdirectory <filename>docbook</filename>. </para></listitem> </varlistentry> @@ -516,7 +516,7 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib'; <term>ISO character entities</term> <listitem><para> Download from - <ulink url="http://www.oasis-open.org/cover/ISOEnts.zip"></> and + <ulink url="http://www.oasis-open.org/cover/ISOEnts.zip"></ulink> and uncompress in the subdirectory <filename>docbook</filename>. </para></listitem> </varlistentry> diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml index f4e4fc7c5e2..f8e1d60356a 100644 --- a/doc/src/sgml/installation.sgml +++ b/doc/src/sgml/installation.sgml @@ -52,17 +52,17 @@ su - postgres <para> In general, a modern Unix-compatible platform should be able to run - <productname>PostgreSQL</>. + <productname>PostgreSQL</productname>. The platforms that had received specific testing at the time of release are listed in <xref linkend="supported-platforms"> - below. In the <filename>doc</> subdirectory of the distribution - there are several platform-specific <acronym>FAQ</> documents you + below. 
In the <filename>doc</filename> subdirectory of the distribution + there are several platform-specific <acronym>FAQ</acronym> documents you might wish to consult if you are having trouble. </para> <para> The following software packages are required for building - <productname>PostgreSQL</>: + <productname>PostgreSQL</productname>: <itemizedlist> <listitem> @@ -71,9 +71,9 @@ su - postgres <primary>make</primary> </indexterm> - <acronym>GNU</> <application>make</> version 3.80 or newer is required; other - <application>make</> programs or older <acronym>GNU</> <application>make</> versions will <emphasis>not</> work. - (<acronym>GNU</> <application>make</> is sometimes installed under + <acronym>GNU</acronym> <application>make</application> version 3.80 or newer is required; other + <application>make</application> programs or older <acronym>GNU</acronym> <application>make</application> versions will <emphasis>not</emphasis> work. + (<acronym>GNU</acronym> <application>make</application> is sometimes installed under the name <filename>gmake</filename>.) To test for <acronym>GNU</acronym> <application>make</application> enter: <screen> @@ -84,19 +84,19 @@ su - postgres <listitem> <para> - You need an <acronym>ISO</>/<acronym>ANSI</> C compiler (at least + You need an <acronym>ISO</acronym>/<acronym>ANSI</acronym> C compiler (at least C89-compliant). Recent - versions of <productname>GCC</> are recommended, but - <productname>PostgreSQL</> is known to build using a wide variety + versions of <productname>GCC</productname> are recommended, but + <productname>PostgreSQL</productname> is known to build using a wide variety of compilers from different vendors. </para> </listitem> <listitem> <para> - <application>tar</> is required to unpack the source + <application>tar</application> is required to unpack the source distribution, in addition to either - <application>gzip</> or <application>bzip2</>. + <application>gzip</application> or <application>bzip2</application>. 
</para> </listitem> @@ -109,23 +109,23 @@ su - postgres <primary>libedit</primary> </indexterm> - The <acronym>GNU</> <productname>Readline</> library is used by + The <acronym>GNU</acronym> <productname>Readline</productname> library is used by default. It allows <application>psql</application> (the PostgreSQL command line SQL interpreter) to remember each command you type, and allows you to use arrow keys to recall and edit previous commands. This is very helpful and is strongly recommended. If you don't want to use it then you must specify the <option>--without-readline</option> option to - <filename>configure</>. As an alternative, you can often use the + <filename>configure</filename>. As an alternative, you can often use the BSD-licensed <filename>libedit</filename> library, originally developed on <productname>NetBSD</productname>. The <filename>libedit</filename> library is GNU <productname>Readline</productname>-compatible and is used if <filename>libreadline</filename> is not found, or if <option>--with-libedit-preferred</option> is used as an - option to <filename>configure</>. If you are using a package-based + option to <filename>configure</filename>. If you are using a package-based Linux distribution, be aware that you need both the - <literal>readline</> and <literal>readline-devel</> packages, if + <literal>readline</literal> and <literal>readline-devel</literal> packages, if those are separate in your distribution. </para> </listitem> @@ -140,8 +140,8 @@ su - postgres used by default. If you don't want to use it then you must specify the <option>--without-zlib</option> option to <filename>configure</filename>. Using this option disables - support for compressed archives in <application>pg_dump</> and - <application>pg_restore</>. + support for compressed archives in <application>pg_dump</application> and + <application>pg_restore</application>. 
</para> </listitem> </itemizedlist> @@ -179,14 +179,14 @@ su - postgres If you intend to make more than incidental use of <application>PL/Perl</application>, you should ensure that the <productname>Perl</productname> installation was built with the - <literal>usemultiplicity</> option enabled (<literal>perl -V</> + <literal>usemultiplicity</literal> option enabled (<literal>perl -V</literal> will show whether this is the case). </para> </listitem> <listitem> <para> - To build the <application>PL/Python</> server programming + To build the <application>PL/Python</application> server programming language, you need a <productname>Python</productname> installation with the header files and the <application>distutils</application> module. The minimum @@ -209,15 +209,15 @@ su - postgres find a shared <filename>libpython</filename>. That might mean that you either have to install additional packages or rebuild (part of) your <productname>Python</productname> installation to provide this shared - library. When building from source, run <productname>Python</>'s - configure with the <literal>--enable-shared</> flag. + library. When building from source, run <productname>Python</productname>'s + configure with the <literal>--enable-shared</literal> flag. </para> </listitem> <listitem> <para> To build the <application>PL/Tcl</application> - procedural language, you of course need a <productname>Tcl</> + procedural language, you of course need a <productname>Tcl</productname> installation. The minimum required version is <productname>Tcl</productname> 8.4. </para> @@ -228,13 +228,13 @@ su - postgres To enable Native Language Support (<acronym>NLS</acronym>), that is, the ability to display a program's messages in a language other than English, you need an implementation of the - <application>Gettext</> <acronym>API</acronym>. Some operating + <application>Gettext</application> <acronym>API</acronym>. 
Some operating systems have this built-in (e.g., <systemitem - class="osname">Linux</>, <systemitem class="osname">NetBSD</>, - <systemitem class="osname">Solaris</>), for other systems you + class="osname">Linux</systemitem>, <systemitem class="osname">NetBSD</systemitem>, + <systemitem class="osname">Solaris</systemitem>), for other systems you can download an add-on package from <ulink url="http://www.gnu.org/software/gettext/"></ulink>. - If you are using the <application>Gettext</> implementation in + If you are using the <application>Gettext</application> implementation in the <acronym>GNU</acronym> C library then you will additionally need the <productname>GNU Gettext</productname> package for some utility programs. For any of the other implementations you will @@ -244,7 +244,7 @@ su - postgres <listitem> <para> - You need <productname>OpenSSL</>, if you want to support + You need <productname>OpenSSL</productname>, if you want to support encrypted client connections. The minimum required version is 0.9.8. </para> @@ -252,8 +252,8 @@ su - postgres <listitem> <para> - You need <application>Kerberos</>, <productname>OpenLDAP</>, - and/or <application>PAM</>, if you want to support authentication + You need <application>Kerberos</application>, <productname>OpenLDAP</productname>, + and/or <application>PAM</application>, if you want to support authentication using those services. </para> </listitem> @@ -289,12 +289,12 @@ su - postgres <primary>yacc</primary> </indexterm> - GNU <application>Flex</> and <application>Bison</> + GNU <application>Flex</application> and <application>Bison</application> are needed to build from a Git checkout, or if you changed the actual scanner and parser definition files. If you need them, be sure - to get <application>Flex</> 2.5.31 or later and - <application>Bison</> 1.875 or later. Other <application>lex</> - and <application>yacc</> programs cannot be used. 
+ to get <application>Flex</application> 2.5.31 or later and + <application>Bison</application> 1.875 or later. Other <application>lex</application> + and <application>yacc</application> programs cannot be used. </para> </listitem> <listitem> @@ -303,10 +303,10 @@ su - postgres <primary>perl</primary> </indexterm> - <application>Perl</> 5.8.3 or later is needed to build from a Git checkout, + <application>Perl</application> 5.8.3 or later is needed to build from a Git checkout, or if you changed the input files for any of the build steps that use Perl scripts. If building on Windows you will need - <application>Perl</> in any case. <application>Perl</application> is + <application>Perl</application> in any case. <application>Perl</application> is also required to run some test suites. </para> </listitem> @@ -316,7 +316,7 @@ su - postgres <para> If you need to get a <acronym>GNU</acronym> package, you can find it at your local <acronym>GNU</acronym> mirror site (see <ulink - url="http://www.gnu.org/order/ftp.html"></> + url="http://www.gnu.org/order/ftp.html"></ulink> for a list) or at <ulink url="ftp://ftp.gnu.org/gnu/"></ulink>. </para> @@ -337,7 +337,7 @@ su - postgres <title>Getting The Source</title> <para> - The <productname>PostgreSQL</> &version; sources can be obtained from the + The <productname>PostgreSQL</productname> &version; sources can be obtained from the download section of our website: <ulink url="https://www.postgresql.org/download/"></ulink>. You should get a file named <filename>postgresql-&version;.tar.gz</filename> @@ -351,7 +351,7 @@ su - postgres have the <filename>.bz2</filename> file.) This will create a directory <filename>postgresql-&version;</filename> under the current directory - with the <productname>PostgreSQL</> sources. + with the <productname>PostgreSQL</productname> sources. Change into that directory for the rest of the installation procedure. 
</para> @@ -377,7 +377,7 @@ su - postgres <para> The first step of the installation procedure is to configure the source tree for your system and choose the options you would like. - This is done by running the <filename>configure</> script. For a + This is done by running the <filename>configure</filename> script. For a default installation simply enter: <screen> <userinput>./configure</userinput> @@ -403,7 +403,7 @@ su - postgres The default configuration will build the server and utilities, as well as all client applications and interfaces that require only a C compiler. All files will be installed under - <filename>/usr/local/pgsql</> by default. + <filename>/usr/local/pgsql</filename> by default. </para> <para> @@ -413,14 +413,14 @@ su - postgres <variablelist> <varlistentry> - <term><option>--prefix=<replaceable>PREFIX</></option></term> + <term><option>--prefix=<replaceable>PREFIX</replaceable></option></term> <listitem> <para> - Install all files under the directory <replaceable>PREFIX</> + Install all files under the directory <replaceable>PREFIX</replaceable> instead of <filename>/usr/local/pgsql</filename>. The actual files will be installed into various subdirectories; no files will ever be installed directly into the - <replaceable>PREFIX</> directory. + <replaceable>PREFIX</replaceable> directory. </para> <para> @@ -428,13 +428,13 @@ su - postgres individual subdirectories with the following options. However, if you leave these with their defaults, the installation will be relocatable, meaning you can move the directory after - installation. (The <literal>man</> and <literal>doc</> + installation. (The <literal>man</literal> and <literal>doc</literal> locations are not affected by this.) </para> <para> For relocatable installs, you might want to use - <filename>configure</filename>'s <literal>--disable-rpath</> + <filename>configure</filename>'s <literal>--disable-rpath</literal> option. 
Also, you will need to tell the operating system how to find the shared libraries. </para> @@ -442,15 +442,15 @@ su - postgres </varlistentry> <varlistentry> - <term><option>--exec-prefix=<replaceable>EXEC-PREFIX</></option></term> + <term><option>--exec-prefix=<replaceable>EXEC-PREFIX</replaceable></option></term> <listitem> <para> You can install architecture-dependent files under a - different prefix, <replaceable>EXEC-PREFIX</>, than what - <replaceable>PREFIX</> was set to. This can be useful to + different prefix, <replaceable>EXEC-PREFIX</replaceable>, than what + <replaceable>PREFIX</replaceable> was set to. This can be useful to share architecture-independent files between hosts. If you - omit this, then <replaceable>EXEC-PREFIX</> is set equal to - <replaceable>PREFIX</> and both architecture-dependent and + omit this, then <replaceable>EXEC-PREFIX</replaceable> is set equal to + <replaceable>PREFIX</replaceable> and both architecture-dependent and independent files will be installed under the same tree, which is probably what you want. </para> @@ -458,114 +458,114 @@ su - postgres </varlistentry> <varlistentry> - <term><option>--bindir=<replaceable>DIRECTORY</></option></term> + <term><option>--bindir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> Specifies the directory for executable programs. The default - is <filename><replaceable>EXEC-PREFIX</>/bin</>, which - normally means <filename>/usr/local/pgsql/bin</>. + is <filename><replaceable>EXEC-PREFIX</replaceable>/bin</filename>, which + normally means <filename>/usr/local/pgsql/bin</filename>. </para> </listitem> </varlistentry> <varlistentry> - <term><option>--sysconfdir=<replaceable>DIRECTORY</></option></term> + <term><option>--sysconfdir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> Sets the directory for various configuration files, - <filename><replaceable>PREFIX</>/etc</> by default. 
+ <filename><replaceable>PREFIX</replaceable>/etc</filename> by default. </para> </listitem> </varlistentry> <varlistentry> - <term><option>--libdir=<replaceable>DIRECTORY</></option></term> + <term><option>--libdir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> Sets the location to install libraries and dynamically loadable modules. The default is - <filename><replaceable>EXEC-PREFIX</>/lib</>. + <filename><replaceable>EXEC-PREFIX</replaceable>/lib</filename>. </para> </listitem> </varlistentry> <varlistentry> - <term><option>--includedir=<replaceable>DIRECTORY</></option></term> + <term><option>--includedir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> Sets the directory for installing C and C++ header files. The - default is <filename><replaceable>PREFIX</>/include</>. + default is <filename><replaceable>PREFIX</replaceable>/include</filename>. </para> </listitem> </varlistentry> <varlistentry> - <term><option>--datarootdir=<replaceable>DIRECTORY</></option></term> + <term><option>--datarootdir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> Sets the root directory for various types of read-only data files. This only sets the default for some of the following options. The default is - <filename><replaceable>PREFIX</>/share</>. + <filename><replaceable>PREFIX</replaceable>/share</filename>. </para> </listitem> </varlistentry> <varlistentry> - <term><option>--datadir=<replaceable>DIRECTORY</></option></term> + <term><option>--datadir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> Sets the directory for read-only data files used by the installed programs. The default is - <filename><replaceable>DATAROOTDIR</></>. Note that this has + <filename><replaceable>DATAROOTDIR</replaceable></filename>. Note that this has nothing to do with where your database files will be placed. 
</para> </listitem> </varlistentry> <varlistentry> - <term><option>--localedir=<replaceable>DIRECTORY</></option></term> + <term><option>--localedir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> Sets the directory for installing locale data, in particular message translation catalog files. The default is - <filename><replaceable>DATAROOTDIR</>/locale</>. + <filename><replaceable>DATAROOTDIR</replaceable>/locale</filename>. </para> </listitem> </varlistentry> <varlistentry> - <term><option>--mandir=<replaceable>DIRECTORY</></option></term> + <term><option>--mandir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> - The man pages that come with <productname>PostgreSQL</> will be installed under + The man pages that come with <productname>PostgreSQL</productname> will be installed under this directory, in their respective - <filename>man<replaceable>x</></> subdirectories. - The default is <filename><replaceable>DATAROOTDIR</>/man</>. + <filename>man<replaceable>x</replaceable></filename> subdirectories. + The default is <filename><replaceable>DATAROOTDIR</replaceable>/man</filename>. </para> </listitem> </varlistentry> <varlistentry> - <term><option>--docdir=<replaceable>DIRECTORY</></option></term> + <term><option>--docdir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> Sets the root directory for installing documentation files, - except <quote>man</> pages. This only sets the default for + except <quote>man</quote> pages. This only sets the default for the following options. The default value for this option is - <filename><replaceable>DATAROOTDIR</>/doc/postgresql</>. + <filename><replaceable>DATAROOTDIR</replaceable>/doc/postgresql</filename>. 
</para> </listitem> </varlistentry> <varlistentry> - <term><option>--htmldir=<replaceable>DIRECTORY</></option></term> + <term><option>--htmldir=<replaceable>DIRECTORY</replaceable></option></term> <listitem> <para> The HTML-formatted documentation for <productname>PostgreSQL</productname> will be installed under this directory. The default is - <filename><replaceable>DATAROOTDIR</></>. + <filename><replaceable>DATAROOTDIR</replaceable></filename>. </para> </listitem> </varlistentry> @@ -574,15 +574,15 @@ su - postgres <note> <para> Care has been taken to make it possible to install - <productname>PostgreSQL</> into shared installation locations + <productname>PostgreSQL</productname> into shared installation locations (such as <filename>/usr/local/include</filename>) without interfering with the namespace of the rest of the system. First, the string <quote><literal>/postgresql</literal></quote> is automatically appended to <varname>datadir</varname>, <varname>sysconfdir</varname>, and <varname>docdir</varname>, unless the fully expanded directory name already contains the - string <quote><literal>postgres</></quote> or - <quote><literal>pgsql</></quote>. For example, if you choose + string <quote><literal>postgres</literal></quote> or + <quote><literal>pgsql</literal></quote>. For example, if you choose <filename>/usr/local</filename> as prefix, the documentation will be installed in <filename>/usr/local/doc/postgresql</filename>, but if the prefix is <filename>/opt/postgres</filename>, then it @@ -602,10 +602,10 @@ su - postgres <para> <variablelist> <varlistentry> - <term><option>--with-extra-version=<replaceable>STRING</></option></term> + <term><option>--with-extra-version=<replaceable>STRING</replaceable></option></term> <listitem> <para> - Append <replaceable>STRING</> to the PostgreSQL version number. You + Append <replaceable>STRING</replaceable> to the PostgreSQL version number. 
You can use this, for example, to mark binaries built from unreleased Git snapshots or containing custom patches with an extra version string such as a <command>git describe</command> identifier or a @@ -615,35 +615,35 @@ su - postgres </varlistentry> <varlistentry> - <term><option>--with-includes=<replaceable>DIRECTORIES</></option></term> + <term><option>--with-includes=<replaceable>DIRECTORIES</replaceable></option></term> <listitem> <para> - <replaceable>DIRECTORIES</> is a colon-separated list of + <replaceable>DIRECTORIES</replaceable> is a colon-separated list of directories that will be added to the list the compiler searches for header files. If you have optional packages - (such as GNU <application>Readline</>) installed in a non-standard + (such as GNU <application>Readline</application>) installed in a non-standard location, you have to use this option and probably also the corresponding - <option>--with-libraries</> option. + <option>--with-libraries</option> option. </para> <para> - Example: <literal>--with-includes=/opt/gnu/include:/usr/sup/include</>. + Example: <literal>--with-includes=/opt/gnu/include:/usr/sup/include</literal>. </para> </listitem> </varlistentry> <varlistentry> - <term><option>--with-libraries=<replaceable>DIRECTORIES</></option></term> + <term><option>--with-libraries=<replaceable>DIRECTORIES</replaceable></option></term> <listitem> <para> - <replaceable>DIRECTORIES</> is a colon-separated list of + <replaceable>DIRECTORIES</replaceable> is a colon-separated list of directories to search for libraries. You will probably have to use this option (and the corresponding - <option>--with-includes</> option) if you have packages + <option>--with-includes</option> option) if you have packages installed in non-standard locations. </para> <para> - Example: <literal>--with-libraries=/opt/gnu/lib:/usr/sup/lib</>. + Example: <literal>--with-libraries=/opt/gnu/lib:/usr/sup/lib</literal>. 
</para> </listitem> </varlistentry> @@ -657,7 +657,7 @@ su - postgres language other than English. <replaceable>LANGUAGES</replaceable> is an optional space-separated list of codes of the languages that you want supported, for - example <literal>--enable-nls='de fr'</>. (The intersection + example <literal>--enable-nls='de fr'</literal>. (The intersection between your list and the set of actually provided translations will be computed automatically.) If you do not specify a list, then all available translations are @@ -666,22 +666,22 @@ su - postgres <para> To use this option, you will need an implementation of the - <application>Gettext</> API; see above. + <application>Gettext</application> API; see above. </para> </listitem> </varlistentry> <varlistentry> - <term><option>--with-pgport=<replaceable>NUMBER</></option></term> + <term><option>--with-pgport=<replaceable>NUMBER</replaceable></option></term> <listitem> <para> - Set <replaceable>NUMBER</> as the default port number for + Set <replaceable>NUMBER</replaceable> as the default port number for server and clients. The default is 5432. The port can always be changed later on, but if you specify it here then both server and clients will have the same default compiled in, which can be very convenient. Usually the only good reason to select a non-default value is if you intend to run multiple - <productname>PostgreSQL</> servers on the same machine. + <productname>PostgreSQL</productname> servers on the same machine. </para> </listitem> </varlistentry> @@ -690,7 +690,7 @@ su - postgres <term><option>--with-perl</option></term> <listitem> <para> - Build the <application>PL/Perl</> server-side language. + Build the <application>PL/Perl</application> server-side language. </para> </listitem> </varlistentry> @@ -699,7 +699,7 @@ su - postgres <term><option>--with-python</option></term> <listitem> <para> - Build the <application>PL/Python</> server-side language. 
+ Build the <application>PL/Python</application> server-side language. </para> </listitem> </varlistentry> @@ -708,7 +708,7 @@ su - postgres <term><option>--with-tcl</option></term> <listitem> <para> - Build the <application>PL/Tcl</> server-side language. + Build the <application>PL/Tcl</application> server-side language. </para> </listitem> </varlistentry> @@ -734,10 +734,10 @@ su - postgres Build with support for GSSAPI authentication. On many systems, the GSSAPI (usually a part of the Kerberos installation) system is not installed in a location - that is searched by default (e.g., <filename>/usr/include</>, - <filename>/usr/lib</>), so you must use the options - <option>--with-includes</> and <option>--with-libraries</> in - addition to this option. <filename>configure</> will check + that is searched by default (e.g., <filename>/usr/include</filename>, + <filename>/usr/lib</filename>), so you must use the options + <option>--with-includes</option> and <option>--with-libraries</option> in + addition to this option. <filename>configure</filename> will check for the required header files and libraries to make sure that your GSSAPI installation is sufficient before proceeding. </para> @@ -745,7 +745,7 @@ su - postgres </varlistentry> <varlistentry> - <term><option>--with-krb-srvnam=<replaceable>NAME</></option></term> + <term><option>--with-krb-srvnam=<replaceable>NAME</replaceable></option></term> <listitem> <para> The default name of the Kerberos service principal used @@ -763,7 +763,7 @@ su - postgres <listitem> <para> Build with support for - the <productname>ICU</productname><indexterm><primary>ICU</></> + the <productname>ICU</productname><indexterm><primary>ICU</primary></indexterm> library. This requires the <productname>ICU4C</productname> package to be installed. The minimum required version of <productname>ICU4C</productname> is currently 4.2. 
@@ -771,7 +771,7 @@ su - postgres <para> By default, - <productname>pkg-config</productname><indexterm><primary>pkg-config</></> + <productname>pkg-config</productname><indexterm><primary>pkg-config</primary></indexterm> will be used to find the required compilation options. This is supported for <productname>ICU4C</productname> version 4.6 and later. For older versions, or if <productname>pkg-config</productname> is @@ -798,11 +798,11 @@ su - postgres </term> <listitem> <para> - Build with support for <acronym>SSL</> (encrypted) - connections. This requires the <productname>OpenSSL</> - package to be installed. <filename>configure</> will check + Build with support for <acronym>SSL</acronym> (encrypted) + connections. This requires the <productname>OpenSSL</productname> + package to be installed. <filename>configure</filename> will check for the required header files and libraries to make sure that - your <productname>OpenSSL</> installation is sufficient + your <productname>OpenSSL</productname> installation is sufficient before proceeding. </para> </listitem> @@ -812,7 +812,7 @@ su - postgres <term><option>--with-pam</option></term> <listitem> <para> - Build with <acronym>PAM</><indexterm><primary>PAM</></> + Build with <acronym>PAM</acronym><indexterm><primary>PAM</primary></indexterm> (Pluggable Authentication Modules) support. </para> </listitem> @@ -833,15 +833,15 @@ su - postgres <term><option>--with-ldap</option></term> <listitem> <para> - Build with <acronym>LDAP</><indexterm><primary>LDAP</></> + Build with <acronym>LDAP</acronym><indexterm><primary>LDAP</primary></indexterm> support for authentication and connection parameter lookup (see <phrase id="install-ldap-links"><xref linkend="libpq-ldap"> and <xref linkend="auth-ldap"></phrase> for more information). On Unix, - this requires the <productname>OpenLDAP</> package to be - installed. On Windows, the default <productname>WinLDAP</> - library is used. 
<filename>configure</> will check for the required + this requires the <productname>OpenLDAP</productname> package to be + installed. On Windows, the default <productname>WinLDAP</productname> + library is used. <filename>configure</filename> will check for the required header files and libraries to make sure that your - <productname>OpenLDAP</> installation is sufficient before + <productname>OpenLDAP</productname> installation is sufficient before proceeding. </para> </listitem> @@ -867,8 +867,8 @@ su - postgres <term><option>--without-readline</option></term> <listitem> <para> - Prevents use of the <application>Readline</> library - (and <application>libedit</> as well). This option disables + Prevents use of the <application>Readline</application> library + (and <application>libedit</application> as well). This option disables command-line editing and history in <application>psql</application>, so it is not recommended. </para> @@ -879,10 +879,10 @@ su - postgres <term><option>--with-libedit-preferred</option></term> <listitem> <para> - Favors the use of the BSD-licensed <application>libedit</> library - rather than GPL-licensed <application>Readline</>. This option + Favors the use of the BSD-licensed <application>libedit</application> library + rather than GPL-licensed <application>Readline</application>. This option is significant only if you have both libraries installed; the - default in that case is to use <application>Readline</>. + default in that case is to use <application>Readline</application>. 
</para> </listitem> </varlistentry> @@ -909,21 +909,21 @@ su - postgres <itemizedlist> <listitem> <para> - <option>bsd</> to use the UUID functions found in FreeBSD, NetBSD, + <option>bsd</option> to use the UUID functions found in FreeBSD, NetBSD, and some other BSD-derived systems </para> </listitem> <listitem> <para> - <option>e2fs</> to use the UUID library created by - the <literal>e2fsprogs</> project; this library is present in most + <option>e2fs</option> to use the UUID library created by + the <literal>e2fsprogs</literal> project; this library is present in most Linux systems and in macOS, and can be obtained for other platforms as well </para> </listitem> <listitem> <para> - <option>ossp</> to use the <ulink + <option>ossp</option> to use the <ulink url="http://www.ossp.org/pkg/lib/uuid/">OSSP UUID library</ulink> </para> </listitem> @@ -969,7 +969,7 @@ su - postgres <para> Use libxslt when building the <xref linkend="xml2"> - module. <application>xml2</> relies on this library + module. <application>xml2</application> relies on this library to perform XSL transformations of XML. </para> </listitem> @@ -979,13 +979,13 @@ su - postgres <term><option>--disable-float4-byval</option></term> <listitem> <para> - Disable passing float4 values <quote>by value</>, causing them - to be passed <quote>by reference</> instead. This option costs + Disable passing float4 values <quote>by value</quote>, causing them + to be passed <quote>by reference</quote> instead. This option costs performance, but may be needed for compatibility with old user-defined functions that are written in C and use the - <quote>version 0</> calling convention. A better long-term + <quote>version 0</quote> calling convention. A better long-term solution is to update any such functions to use the - <quote>version 1</> calling convention. + <quote>version 1</quote> calling convention. 
</para> </listitem> </varlistentry> @@ -994,17 +994,17 @@ su - postgres <term><option>--disable-float8-byval</option></term> <listitem> <para> - Disable passing float8 values <quote>by value</>, causing them - to be passed <quote>by reference</> instead. This option costs + Disable passing float8 values <quote>by value</quote>, causing them + to be passed <quote>by reference</quote> instead. This option costs performance, but may be needed for compatibility with old user-defined functions that are written in C and use the - <quote>version 0</> calling convention. A better long-term + <quote>version 0</quote> calling convention. A better long-term solution is to update any such functions to use the - <quote>version 1</> calling convention. + <quote>version 1</quote> calling convention. Note that this option affects not only float8, but also int8 and some related types such as timestamp. - On 32-bit platforms, <option>--disable-float8-byval</> is the default - and it is not allowed to select <option>--enable-float8-byval</>. + On 32-bit platforms, <option>--disable-float8-byval</option> is the default + and it is not allowed to select <option>--enable-float8-byval</option>. </para> </listitem> </varlistentry> @@ -1013,17 +1013,17 @@ su - postgres <term><option>--with-segsize=<replaceable>SEGSIZE</replaceable></option></term> <listitem> <para> - Set the <firstterm>segment size</>, in gigabytes. Large tables are + Set the <firstterm>segment size</firstterm>, in gigabytes. Large tables are divided into multiple operating-system files, each of size equal to the segment size. This avoids problems with file size limits that exist on many platforms. The default segment size, 1 gigabyte, is safe on all supported platforms. If your operating system has - <quote>largefile</> support (which most do, nowadays), you can use + <quote>largefile</quote> support (which most do, nowadays), you can use a larger segment size. 
This can be helpful to reduce the number of file descriptors consumed when working with very large tables. But be careful not to select a value larger than is supported by your platform and the file systems you intend to use. Other - tools you might wish to use, such as <application>tar</>, could + tools you might wish to use, such as <application>tar</application>, could also set limits on the usable file size. It is recommended, though not absolutely required, that this value be a power of 2. @@ -1036,7 +1036,7 @@ su - postgres <term><option>--with-blocksize=<replaceable>BLOCKSIZE</replaceable></option></term> <listitem> <para> - Set the <firstterm>block size</>, in kilobytes. This is the unit + Set the <firstterm>block size</firstterm>, in kilobytes. This is the unit of storage and I/O within tables. The default, 8 kilobytes, is suitable for most situations; but other values may be useful in special cases. @@ -1050,7 +1050,7 @@ su - postgres <term><option>--with-wal-blocksize=<replaceable>BLOCKSIZE</replaceable></option></term> <listitem> <para> - Set the <firstterm>WAL block size</>, in kilobytes. This is the unit + Set the <firstterm>WAL block size</firstterm>, in kilobytes. This is the unit of storage and I/O within the WAL log. The default, 8 kilobytes, is suitable for most situations; but other values may be useful in special cases. @@ -1064,14 +1064,14 @@ su - postgres <term><option>--disable-spinlocks</option></term> <listitem> <para> - Allow the build to succeed even if <productname>PostgreSQL</> + Allow the build to succeed even if <productname>PostgreSQL</productname> has no CPU spinlock support for the platform. The lack of spinlock support will result in poor performance; therefore, this option should only be used if the build aborts and informs you that the platform lacks spinlock support. 
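The segment-size discussion above can be made concrete with a little shell arithmetic: a table larger than `--with-segsize` is stored as several operating-system files. The 5 GB table size below is a made-up example value:

```shell
# How many segment files a table occupies under the default 1-gigabyte
# segment size (--with-segsize=1): ceiling division of table size by
# segment size.
table_bytes=$((5 * 1024*1024*1024 + 123))   # hypothetical table, just over 5 GB
seg_bytes=$((1 * 1024*1024*1024))           # default segment size: 1 GB
segments=$(( (table_bytes + seg_bytes - 1) / seg_bytes ))
echo "$segments segment files"              # five full segments plus one partial
```

A larger `--with-segsize` shrinks this count, which is why the documentation notes it reduces the number of file descriptors consumed by very large tables.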
If this - option is required to build <productname>PostgreSQL</> on + option is required to build <productname>PostgreSQL</productname> on your platform, please report the problem to the - <productname>PostgreSQL</> developers. + <productname>PostgreSQL</productname> developers. </para> </listitem> </varlistentry> @@ -1080,7 +1080,7 @@ su - postgres <term><option>--disable-strong-random</option></term> <listitem> <para> - Allow the build to succeed even if <productname>PostgreSQL</> + Allow the build to succeed even if <productname>PostgreSQL</productname> has no support for strong random numbers on the platform. A source of random numbers is needed for some authentication protocols, as well as some routines in the @@ -1114,7 +1114,7 @@ su - postgres </term> <listitem> <para> - <productname>PostgreSQL</> includes its own time zone database, + <productname>PostgreSQL</productname> includes its own time zone database, which it requires for date and time operations. This time zone database is in fact compatible with the IANA time zone database provided by many operating systems such as FreeBSD, @@ -1128,7 +1128,7 @@ su - postgres installation routine will not detect mismatching or erroneous time zone data. If you use this option, you are advised to run the regression tests to verify that the time zone data you have - pointed to works correctly with <productname>PostgreSQL</>. + pointed to works correctly with <productname>PostgreSQL</productname>. </para> <indexterm><primary>cross compilation</primary></indexterm> @@ -1153,7 +1153,7 @@ su - postgres <indexterm> <primary>zlib</primary> </indexterm> - Prevents use of the <application>Zlib</> library. This disables + Prevents use of the <application>Zlib</application> library. This disables support for compressed archives in <application>pg_dump</application> and <application>pg_restore</application>. 
This option is only intended for those rare systems where this @@ -1201,7 +1201,7 @@ su - postgres <para> If using GCC, all programs and libraries are compiled so they can be profiled. On backend exit, a subdirectory will be created - that contains the <filename>gmon.out</> file for use in profiling. + that contains the <filename>gmon.out</filename> file for use in profiling. This option is for use only with GCC and when doing development work. </para> </listitem> @@ -1211,8 +1211,8 @@ su - postgres <term><option>--enable-cassert</option></term> <listitem> <para> - Enables <firstterm>assertion</> checks in the server, which test for - many <quote>cannot happen</> conditions. This is invaluable for + Enables <firstterm>assertion</firstterm> checks in the server, which test for + many <quote>cannot happen</quote> conditions. This is invaluable for code development purposes, but the tests can slow down the server significantly. Also, having the tests turned on won't necessarily enhance the @@ -1266,7 +1266,7 @@ su - postgres can be specified in the environment variable <envar>DTRACEFLAGS</envar>. On Solaris, to include DTrace support in a 64-bit binary, you must specify - <literal>DTRACEFLAGS="-64"</> to configure. For example, + <literal>DTRACEFLAGS="-64"</literal> to configure. For example, using the GCC compiler: <screen> ./configure CC='gcc -m64' --enable-dtrace DTRACEFLAGS='-64' ... @@ -1295,10 +1295,10 @@ su - postgres <para> If you prefer a C compiler different from the one <filename>configure</filename> picks, you can set the - environment variable <envar>CC</> to the program of your choice. + environment variable <envar>CC</envar> to the program of your choice. By default, <filename>configure</filename> will pick <filename>gcc</filename> if available, else the platform's - default (usually <filename>cc</>). Similarly, you can override the + default (usually <filename>cc</filename>). 
Similarly, you can override the default compiler flags if needed with the <envar>CFLAGS</envar> variable. </para> @@ -1306,7 +1306,7 @@ su - postgres You can specify environment variables on the <filename>configure</filename> command line, for example: <screen> -<userinput>./configure CC=/opt/bin/gcc CFLAGS='-O2 -pipe'</> +<userinput>./configure CC=/opt/bin/gcc CFLAGS='-O2 -pipe'</userinput> </screen> </para> @@ -1473,51 +1473,51 @@ su - postgres <para> Sometimes it is useful to add compiler flags after-the-fact to the set - that were chosen by <filename>configure</>. An important example is - that <application>gcc</>'s <option>-Werror</> option cannot be included - in the <envar>CFLAGS</envar> passed to <filename>configure</>, because - it will break many of <filename>configure</>'s built-in tests. To add + that were chosen by <filename>configure</filename>. An important example is + that <application>gcc</application>'s <option>-Werror</option> option cannot be included + in the <envar>CFLAGS</envar> passed to <filename>configure</filename>, because + it will break many of <filename>configure</filename>'s built-in tests. To add such flags, include them in the <envar>COPT</envar> environment variable - while running <filename>make</>. The contents of <envar>COPT</envar> + while running <filename>make</filename>. The contents of <envar>COPT</envar> are added to both the <envar>CFLAGS</envar> and <envar>LDFLAGS</envar> - options set up by <filename>configure</>. For example, you could do + options set up by <filename>configure</filename>. 
For example, you could do <screen> -<userinput>make COPT='-Werror'</> +<userinput>make COPT='-Werror'</userinput> </screen> or <screen> -<userinput>export COPT='-Werror'</> -<userinput>make</> +<userinput>export COPT='-Werror'</userinput> +<userinput>make</userinput> </screen> </para> <note> <para> When developing code inside the server, it is recommended to - use the configure options <option>--enable-cassert</> (which - turns on many run-time error checks) and <option>--enable-debug</> + use the configure options <option>--enable-cassert</option> (which + turns on many run-time error checks) and <option>--enable-debug</option> (which improves the usefulness of debugging tools). </para> <para> If using GCC, it is best to build with an optimization level of - at least <option>-O1</>, because using no optimization - (<option>-O0</>) disables some important compiler warnings (such + at least <option>-O1</option>, because using no optimization + (<option>-O0</option>) disables some important compiler warnings (such as the use of uninitialized variables). However, non-zero optimization levels can complicate debugging because stepping through compiled code will usually not match up one-to-one with source code lines. If you get confused while trying to debug optimized code, recompile the specific files of interest with - <option>-O0</>. An easy way to do this is by passing an option - to <application>make</>: <command>make PROFILE=-O0 file.o</>. + <option>-O0</option>. An easy way to do this is by passing an option + to <application>make</application>: <command>make PROFILE=-O0 file.o</command>. </para> <para> - The <envar>COPT</> and <envar>PROFILE</> environment variables are - actually handled identically by the <productname>PostgreSQL</> + The <envar>COPT</envar> and <envar>PROFILE</envar> environment variables are + actually handled identically by the <productname>PostgreSQL</productname> makefiles. 
Which to use is a matter of preference, but a common habit - among developers is to use <envar>PROFILE</> for one-time flag - adjustments, while <envar>COPT</> might be kept set all the time. + among developers is to use <envar>PROFILE</envar> for one-time flag + adjustments, while <envar>COPT</envar> might be kept set all the time. </para> </note> </step> @@ -1530,7 +1530,7 @@ su - postgres <screen> <userinput>make</userinput> </screen> - (Remember to use <acronym>GNU</> <application>make</>.) The build + (Remember to use <acronym>GNU</acronym> <application>make</application>.) The build will take a few minutes depending on your hardware. The last line displayed should be: <screen> @@ -1562,7 +1562,7 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. <para> If you want to test the newly built server before you install it, you can run the regression tests at this point. The regression - tests are a test suite to verify that <productname>PostgreSQL</> + tests are a test suite to verify that <productname>PostgreSQL</productname> runs on your machine in the way the developers expected it to. Type: <screen> @@ -1588,7 +1588,7 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. </note> <para> - To install <productname>PostgreSQL</> enter: + To install <productname>PostgreSQL</productname> enter: <screen> <userinput>make install</userinput> </screen> @@ -1632,8 +1632,8 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. The standard installation provides all the header files needed for client application development as well as for server-side program development, such as custom functions or data types written in C. 
- (Prior to <productname>PostgreSQL</> 8.0, a separate <literal>make - install-all-headers</> command was needed for the latter, but this + (Prior to <productname>PostgreSQL</productname> 8.0, a separate <literal>make + install-all-headers</literal> command was needed for the latter, but this step has been folded into the standard install.) </para> @@ -1643,12 +1643,12 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. If you want to install only the client applications and interface libraries, then you can use these commands: <screen> -<userinput>make -C src/bin install</> -<userinput>make -C src/include install</> -<userinput>make -C src/interfaces install</> -<userinput>make -C doc install</> +<userinput>make -C src/bin install</userinput> +<userinput>make -C src/include install</userinput> +<userinput>make -C src/interfaces install</userinput> +<userinput>make -C doc install</userinput> </screen> - <filename>src/bin</> has a few binaries for server-only use, + <filename>src/bin</filename> has a few binaries for server-only use, but they are small. </para> </formalpara> @@ -1659,7 +1659,7 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. <title>Uninstallation:</title> <para> To undo the installation use the command <command>make - uninstall</>. However, this will not remove any created directories. + uninstall</command>. However, this will not remove any created directories. </para> </formalpara> @@ -1669,10 +1669,10 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. <para> After the installation you can free disk space by removing the built files from the source tree with the command <command>make - clean</>. This will preserve the files made by the <command>configure</command> - program, so that you can rebuild everything with <command>make</> + clean</command>. 
This will preserve the files made by the <command>configure</command> + program, so that you can rebuild everything with <command>make</command> later on. To reset the source tree to the state in which it was - distributed, use <command>make distclean</>. If you are going to + distributed, use <command>make distclean</command>. If you are going to build for several platforms within the same source tree you must do this and re-configure for each platform. (Alternatively, use a separate build tree for each platform, so that the source tree @@ -1681,10 +1681,10 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. </formalpara> <para> - If you perform a build and then discover that your <command>configure</> - options were wrong, or if you change anything that <command>configure</> + If you perform a build and then discover that your <command>configure</command> + options were wrong, or if you change anything that <command>configure</command> investigates (for example, software upgrades), then it's a good - idea to do <command>make distclean</> before reconfiguring and + idea to do <command>make distclean</command> before reconfiguring and rebuilding. Without this, your changes in configuration choices might not propagate everywhere they need to. </para> @@ -1705,31 +1705,31 @@ PostgreSQL, contrib, and documentation successfully made. Ready to install. you need to tell the system how to find the newly installed shared libraries. The systems on which this is <emphasis>not</emphasis> necessary include - <systemitem class="osname">FreeBSD</>, - <systemitem class="osname">HP-UX</>, - <systemitem class="osname">Linux</>, - <systemitem class="osname">NetBSD</>, <systemitem - class="osname">OpenBSD</>, and - <systemitem class="osname">Solaris</>. 
+ <systemitem class="osname">FreeBSD</systemitem>, + <systemitem class="osname">HP-UX</systemitem>, + <systemitem class="osname">Linux</systemitem>, + <systemitem class="osname">NetBSD</systemitem>, <systemitem + class="osname">OpenBSD</systemitem>, and + <systemitem class="osname">Solaris</systemitem>. </para> <para> The method to set the shared library search path varies between platforms, but the most widely-used method is to set the - environment variable <envar>LD_LIBRARY_PATH</> like so: In Bourne - shells (<command>sh</>, <command>ksh</>, <command>bash</>, <command>zsh</>): + environment variable <envar>LD_LIBRARY_PATH</envar> like so: In Bourne + shells (<command>sh</command>, <command>ksh</command>, <command>bash</command>, <command>zsh</command>): <programlisting> LD_LIBRARY_PATH=/usr/local/pgsql/lib export LD_LIBRARY_PATH </programlisting> - or in <command>csh</> or <command>tcsh</>: + or in <command>csh</command> or <command>tcsh</command>: <programlisting> setenv LD_LIBRARY_PATH /usr/local/pgsql/lib </programlisting> - Replace <literal>/usr/local/pgsql/lib</> with whatever you set - <option><literal>--libdir</></> to in <xref linkend="configure">. + Replace <literal>/usr/local/pgsql/lib</literal> with whatever you set + <option><literal>--libdir</literal></option> to in <xref linkend="configure">. You should put these commands into a shell start-up file such as - <filename>/etc/profile</> or <filename>~/.bash_profile</>. Some + <filename>/etc/profile</filename> or <filename>~/.bash_profile</filename>. Some good information about the caveats associated with this method can be found at <ulink url="http://xahlee.org/UnixResource_dir/_/ldpath.html"></ulink>. 
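The bare `LD_LIBRARY_PATH=/usr/local/pgsql/lib` assignment shown in the hunk above replaces any value the variable already had. A slightly more careful Bourne-shell sketch prepends instead, preserving an existing path (the `${VAR:+...}` expansion is standard POSIX; `/usr/local/pgsql/lib` is the default `--libdir`):

```shell
# Prepend the PostgreSQL library directory to the shared-library search
# path, keeping any previously configured entries after it.
PGLIB=/usr/local/pgsql/lib
LD_LIBRARY_PATH="$PGLIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```

As with the documented form, this belongs in a shell start-up file such as `/etc/profile` or `~/.bash_profile` if it should apply to every session.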
@@ -1763,17 +1763,17 @@ libpq.so.2.1: cannot open shared object file: No such file or directory <indexterm> <primary>ldconfig</primary> </indexterm> - If you are on <systemitem class="osname">Linux</> and you have root + If you are on <systemitem class="osname">Linux</systemitem> and you have root access, you can run: <programlisting> /sbin/ldconfig /usr/local/pgsql/lib </programlisting> (or equivalent directory) after installation to enable the run-time linker to find the shared libraries faster. Refer to the - manual page of <command>ldconfig</> for more information. On - <systemitem class="osname">FreeBSD</>, <systemitem - class="osname">NetBSD</>, and <systemitem - class="osname">OpenBSD</> the command is: + manual page of <command>ldconfig</command> for more information. On + <systemitem class="osname">FreeBSD</systemitem>, <systemitem + class="osname">NetBSD</systemitem>, and <systemitem + class="osname">OpenBSD</systemitem> the command is: <programlisting> /sbin/ldconfig -m /usr/local/pgsql/lib </programlisting> @@ -1790,24 +1790,24 @@ libpq.so.2.1: cannot open shared object file: No such file or directory </indexterm> <para> - If you installed into <filename>/usr/local/pgsql</> or some other + If you installed into <filename>/usr/local/pgsql</filename> or some other location that is not searched for programs by default, you should - add <filename>/usr/local/pgsql/bin</> (or whatever you set - <option><literal>--bindir</></> to in <xref linkend="configure">) - into your <envar>PATH</>. Strictly speaking, this is not - necessary, but it will make the use of <productname>PostgreSQL</> + add <filename>/usr/local/pgsql/bin</filename> (or whatever you set + <option><literal>--bindir</literal></option> to in <xref linkend="configure">) + into your <envar>PATH</envar>. Strictly speaking, this is not + necessary, but it will make the use of <productname>PostgreSQL</productname> much more convenient. </para> |
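The `PATH` advice above can be applied idempotently, so re-sourcing a profile file does not accumulate duplicate entries. A minimal sketch, assuming the default `--bindir` of `/usr/local/pgsql/bin`:

```shell
# Add the PostgreSQL binaries to PATH only if they are not already present.
# Wrapping both PATH and the candidate in colons makes the substring match
# exact per component.
PGBIN=/usr/local/pgsql/bin
case ":$PATH:" in
  *":$PGBIN:"*) ;;                  # already on PATH, leave it alone
  *) PATH="$PGBIN:$PATH"; export PATH ;;
esac
echo "$PATH"
```

Running the snippet twice leaves `PATH` unchanged the second time, which is the property the `case` guard buys over a plain prepend.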