PostgreSQL® Notes for Professionals
60+ pages of professional hints and tricks
Disclaimer
GoalKicker.com
Free Programming Books
This is an unofficial free book created for educational purposes and is not affiliated with official PostgreSQL® group(s) or company(s).
All trademarks and registered trademarks are the property of their respective owners.
Contents
About ................................................................................................................................................................................... 1
Chapter 1: Getting started with PostgreSQL .................................................................................................. 2
Section 1.1: Installing PostgreSQL on Windows ........................................................................................................... 2
Section 1.2: Install PostgreSQL from Source on Linux ............................................................................................... 3
Section 1.3: Installation on GNU+Linux ......................................................................................................................... 4
Section 1.4: How to install PostgreSQL via MacPorts on OSX ................................................................................... 5
Section 1.5: Install postgresql with brew on Mac ........................................................................................................ 7
Section 1.6: Postgres.app for Mac OSX ........................................................................................................................ 7
Chapter 2: Data Types ............................................................................................................................................... 8
Section 2.1: Numeric Types ........................................................................................................................................... 8
Section 2.2: Date/ Time Types .................................................................................................................................... 8
Section 2.3: Geometric Types ....................................................................................................................................... 8
Section 2.4: Network Address Types ............................................................................................................. 9
Section 2.5: Character Types ....................................................................................................................................... 9
Section 2.6: Arrays ......................................................................................................................................................... 9
Chapter 3: Comments in PostgreSQL ............................................................................................................... 11
Section 3.1: COMMENT on Table ................................................................................................................................ 11
Section 3.2: Remove Comment .................................................................................................................................. 11
Chapter 4: Dates, Timestamps, and Intervals ............................................................................................ 12
Section 4.1: SELECT the last day of month ............................................................................................................... 12
Section 4.2: Cast a timestamp or interval to a string .............................................................................................. 12
Section 4.3: Count the number of records per week ............................................................................................... 12
Chapter 5: Table Creation ...................................................................................................................................... 13
Section 5.1: Show table definition ............................................................................................................................... 13
Section 5.2: Create table from select ........................................................................................................................ 13
Section 5.3: Create unlogged table ........................................................................................................................... 13
Section 5.4: Table creation with Primary Key .......................................................................................................... 13
Section 5.5: Create a table that references other table .......................................................................................... 14
Chapter 6: SELECT ...................................................................................................................................................... 15
Section 6.1: SELECT using WHERE ............................................................................................................................. 15
Chapter 7: Find String Length / Character Length ................................................................................... 16
Section 7.1: Example to get length of a character varying field ............................................................................. 16
Chapter 8: COALESCE ............................................................................................................................................... 17
Section 8.1: Single non null argument ....................................................................................................................... 17
Section 8.2: Multiple non null arguments .................................................................................................................. 17
Section 8.3: All null arguments ................................................................................................................................... 17
Chapter 9: INSERT ...................................................................................................................................................... 18
Section 9.1: Insert data using COPY ........................................................................................................................... 18
Section 9.2: Inserting multiple rows ........................................................................................................................... 19
Section 9.3: INSERT data and RETURNING values ..................................................................................... 19
Section 9.4: Basic INSERT ........................................................................................................................................... 19
Section 9.5: Insert from select .................................................................................................................................... 19
Section 9.6: UPSERT - INSERT ... ON CONFLICT DO UPDATE.. ................................................................................ 20
Section 9.7: SELECT data into file .............................................................................................................................. 20
Chapter 10: UPDATE ................................................................................................................................................... 22
Section 10.1: Updating a table based on joining another table .............................................................................. 22
Section 10.2: Update all rows in a table .................................................................................................................... 22
Section 10.3: Update all rows meeting a condition .................................................................................................. 22
Section 10.4: Updating multiple columns in table .................................................................................................... 22
Chapter 11: JSON Support ...................................................................................................................................... 23
Section 11.1: Using JSONb operators ......................................................................................................................... 23
Section 11.2: Querying complex JSON documents .................................................................................................. 27
Section 11.3: Creating a pure JSON table .................................................................................................................. 28
Chapter 12: Aggregate Functions ....................................................................................................................... 29
Section 12.1: Simple statistics: min(), max(), avg() .................................................................................................... 29
Section 12.2: regr_slope(Y, X) : slope of the least-squares-fit linear equation determined by the (X, Y) pairs
................................................................................................................................................................................ 29
Section 12.3: string_agg(expression, delimiter) ....................................................................................................... 30
Chapter 13: Common Table Expressions (WITH) ......................................................................................... 32
Section 13.1: Common Table Expressions in SELECT Queries ................................................................................. 32
Section 13.2: Traversing tree using WITH RECURSIVE ............................................................................................ 32
Chapter 14: Window Functions ............................................................................................................................ 33
Section 14.1: generic example ..................................................................................................................................... 33
Section 14.2: column values vs dense_rank vs rank vs row_number ................................................................... 34
Chapter 15: Recursive queries .............................................................................................................................. 35
Section 15.1: Sum of Integers ...................................................................................................................................... 35
Chapter 16: Programming with PL/pgSQL ..................................................................................................... 36
Section 16.1: Basic PL/pgSQL Function ...................................................................................................................... 36
Section 16.2: custom exceptions ................................................................................................................................ 36
Section 16.3: PL/pgSQL Syntax .................................................................................................................................. 37
Section 16.4: RETURNS Block ..................................................................................................................................... 37
Chapter 17: Inheritance ............................................................................................................................................ 38
Section 17.1: Creating children tables ........................................................................................................................ 38
Chapter 18: Export PostgreSQL database table header and data to CSV file ........................... 39
Section 18.1: copy from query .................................................................................................................................... 39
Section 18.2: Export PostgreSQL table to csv with header for some column(s) ................................................... 39
Section 18.3: Full table backup to csv with header .................................................................................................. 39
Chapter 19: EXTENSION dblink and postgres_fdw .................................................................................... 40
Section 19.1: Extension FDW ........................................................................................................................ 40
Section 19.2: Foreign Data Wrapper ......................................................................................................................... 40
Section 19.3: Extension dblink ..................................................................................................................... 41
Chapter 20: Triggers and Trigger Functions ................................................................................................ 42
Section 20.1: Type of triggers ..................................................................................................................................... 42
Section 20.2: Basic PL/pgSQL Trigger Function ...................................................................................................... 43
Chapter 21: Event Triggers .................................................................................................................................... 45
Section 21.1: Logging DDL Command Start Events .................................................................................................. 45
Chapter 22: Role Management ............................................................................................................................ 46
Section 22.1: Create a user with a password ............................................................................................................ 46
Section 22.2: Grant and Revoke Privileges ............................................................................................................... 46
Section 22.3: Create Role and matching database ................................................................................................. 47
Section 22.4: Alter default search_path of user ...................................................................................................... 47
Section 22.5: Create Read Only User ........................................................................................................................ 48
Section 22.6: Grant access privileges on objects created in the future ................................................................ 48
Chapter 23: Postgres cryptographic functions ........................................................................................... 49
Section 23.1: digest ...................................................................................................................................................... 49
Chapter 24: PostgreSQL High Availability ..................................................................................................... 50
Section 24.1: Replication in PostgreSQL .................................................................................................................... 50
Chapter 25: Backup and Restore ....................................................................................................................... 53
Section 25.1: Backing up one database .................................................................................................................... 53
Section 25.2: Restoring backups ............................................................................................................................... 53
Section 25.3: Backing up the whole cluster .............................................................................................................. 53
Section 25.4: Using psql to export data .................................................................................................................... 54
Section 25.5: Using Copy to import ........................................................................................................................... 54
Section 25.6: Using Copy to export ........................................................................................................................... 55
Chapter 26: Backup script for a production DB .......................................................................................... 56
Section 26.1: saveProdDb.sh ....................................................................................................................................... 56
Chapter 27: Accessing Data Programmatically .......................................................................................... 57
Section 27.1: Accessing PostgreSQL with the C-API ................................................................................................. 57
Section 27.2: Accessing PostgreSQL from python using psycopg2 ...................................................................... 60
Section 27.3: Accessing PostgreSQL from .NET using the Npgsql provider ......................................................... 60
Section 27.4: Accessing PostgreSQL from PHP using Pomm2 ............................................................................... 61
Chapter 28: Connect to PostgreSQL from Java ......................................................................................... 63
Section 28.1: Connecting with java.sql.DriverManager ............................................................................................ 63
Section 28.2: Connecting with java.sql.DriverManager and Properties ................................................................. 63
Section 28.3: Connecting with javax.sql.DataSource using a connection pool ..................................................... 64
Chapter 29: Postgres Tip and Tricks ................................................................................................................. 66
Section 29.1: DATEADD alternative in Postgres ....................................................................................................... 66
Section 29.2: Comma separated values of a column ............................................................................................. 66
Section 29.3: Delete duplicate records from postgres table .................................................................................. 66
Section 29.4: Update query with join between two tables alternative since PostgreSQL does not support join
in update query ................................................................................................................................................... 66
Section 29.5: Difference between two date timestamps month wise and year wise .......................................... 66
Section 29.6: Query to Copy/Move/Transfer table data from one database to other database table with
same schema ...................................................................................................................................................... 67
Credits .............................................................................................................................................................................. 68
You may also like ........................................................................................................................................................ 69
About
Please feel free to share this PDF with anyone for free.
The latest version of this book can be downloaded from:
http://GoalKicker.com/PostgreSQLBook
This PostgreSQL® Notes for Professionals book is compiled from Stack Overflow
Documentation; the content is written by the beautiful people at Stack Overflow.
Text content is released under Creative Commons BY-SA; see the credits at the end
of this book for the people who contributed to the various chapters. Images may be
copyright of their respective owners unless otherwise specified
This is an unofficial free book created for educational purposes and is not
affiliated with official PostgreSQL® group(s) or company(s) nor Stack Overflow.
All trademarks and registered trademarks are the property of their respective
company owners
Select the latest stable (non-Beta) version (9.5.3 at the time of writing). You will most likely want the Win x86-64
package, but if you are running a 32 bit version of Windows, which is common on older computers, select Win
x86-32 instead.
Note: Switching between Beta and Stable versions will involve complex tasks like dump and restore. Upgrading
within beta or stable version only needs a service restart.
You can check if your version of Windows is 32 or 64 bit by going to Control Panel -> System and Security -> System
-> System type, which will say "##-bit Operating System". This is the path for Windows 7; it may be slightly different
on other versions of Windows.
In the installer select the packages you would like to use. For example:
pgAdmin (https://www.pgadmin.org) is a free GUI for managing your database and I highly recommend it. In
9.6 this will be installed by default.
PostGIS (http://postgis.net) provides geospatial analysis features for GPS coordinates, distances, etc., and is very
popular among GIS developers.
The Language Package provides the required libraries for the officially supported procedural languages PL/Python,
PL/Perl and PL/Tcl.
Other packages like pgAgent, pgBouncer and Slony are useful for larger production servers; check them only as
needed.
All those optional packages can be later installed through "Application Stack Builder".
Note: There are also other, not officially supported, languages available, such as PL/V8, PL/Lua and PL/Java.
Open pgAdmin and connect to your server by double clicking on its name, e.g. "PostgreSQL 9.5 (localhost:5432)".
From this point you can follow guides such as the excellent book PostgreSQL: Up and Running, 2nd Edition (
http://shop.oreilly.com/product/0636920032144.do ).
Why would you want to manually control the PostgreSQL service? If you're using your PC as a development server
some of the time but also use it to play video games, for example, PostgreSQL could slow down your system a
bit while it's running.
Why wouldn't you want manual control? Starting and stopping the service can be a hassle if you do it often.
If you don't notice any difference in speed and prefer avoiding the hassle then leave its Startup Type as Automatic
and ignore the rest of this guide. Otherwise...
Select "Services" from the list, right click on its icon, and select Send To -> Desktop to create a desktop icon for more
convenient access.
Close the Administrative Tools window then launch Services from the desktop icon you just created.
Scroll down until you see a service with a name like postgresql-x##-9.# (ex. "postgresql-x64-9.5").
Right click on the postgres service, select Properties -> Startup type -> Manual -> Apply -> OK. You can change it
back to automatic just as easily.
If you see other PostgreSQL related services in the list, such as "pgbouncer" or "PostgreSQL Scheduling Agent -
pgAgent", you can also change their Startup Type to Manual, because they're not much use if PostgreSQL isn't
running. This will mean more hassle each time you start and stop, so it's up to you. They don't use as many
resources as PostgreSQL itself and may not have any noticeable impact on your system's performance.
If the service is running, its Status will say Started; otherwise it isn't running.
To start it right click and select Start. A loading prompt will be displayed and should disappear on its own soon
after. If it gives you an error try a second time. If that doesn't work then there was some problem with the
installation, possibly because you changed some setting in Windows most people don't change, so finding the
problem might require some sleuthing.
If you ever get an error while attempting to connect to your database, check Services to make sure it's running.
For other very specific details about the EDB PostgreSQL installation, e.g. the Python runtime version in the official
language pack of a specific PostgreSQL version, always refer to the official EDB installation guide, changing the
version in the link to your installer's major version.
There are a large number of different options for the configuration of PostgreSQL:
Go into the newly created folder and run the configure script with the desired options:
./configure --exec=/usr/local/pgsql
For the extensions, switch to the contrib directory (cd contrib), then run make and make install.
These are installed with the following command: yum -y install postgresqlXX postgresqlXX-server postgresqlXX-libs
postgresqlXX-contrib
Once installed you will need to start the database service as the service owner (Default is postgres). This is done
with the pg_ctl command.
Debian family
This will install the PostgreSQL server package, at the default version offered by the operating system's package
repositories.
If the version that's installed by default is not the one that you want, you can use the package manager to search
for specific versions which may simultaneously be offered.
You can also use the Yum repository provided by the PostgreSQL project (known as PGDG) to get a different
version. This may allow versions not yet offered by operating system package repositories.
You should get a list that looks something like the following:
In this example, the most recent version of PostgreSQL that is supported is 9.6, so we will install that.
The log provides instructions on the rest of the steps for installation, so we do that next.
su postgres -c psql
psql (9.6.1)
Type "help" for help.
Here you can type a query to see that the server is running.
setting
------------------------------------------
/opt/local/var/db/postgresql96/defaultdb
(1 row)
postgres=#
Type \q to quit:
postgres=#\q
brew update
brew install postgresql
Homebrew generally installs the latest stable version. If you need a different one then brew search postgresql will
list the versions available. If you need PostgreSQL built with particular options then brew info postgresql will list
which options are supported. If you require an unsupported build option, you may have to do the build yourself,
but can still use Homebrew to install the common dependencies.
psql
If psql complains that there's no corresponding database for your user, run createdb.
https://www.postgresql.org/docs/9.6/static/datatype.html
Declaring an Array
-- All of the following column declarations are equivalent ways to declare an integer array
-- (PostgreSQL ignores the declared size and dimensionality):
CREATE TABLE array_declarations (
    a INTEGER[],
    b INTEGER[3],
    c INTEGER[][],
    d INTEGER[3][3],
    e INTEGER ARRAY,
    f INTEGER ARRAY[3]
);
Creating an Array
SELECT '{0,1,2}'::INTEGER[];
SELECT '{{0,1},{1,2}}'::INTEGER[][];
SELECT ARRAY[0,1,2];
SELECT ARRAY[ARRAY[0,1],ARRAY[1,2]];
Accessing an Array
By default PostgreSQL uses a one-based numbering convention for arrays, that is, an array of n elements starts
with ARRAY[1] and ends with ARRAY[n].
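For example, selecting the first element (a sketch mirroring the slicing example further below):
WITH arr AS (SELECT ARRAY[0,1,2] int_arr) SELECT int_arr[1] FROM arr;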
int_arr
---------
0
(1 ROW)
-- slicing an array
WITH arr AS (SELECT ARRAY[0,1,2] int_arr) SELECT int_arr[1:2] FROM arr;
int_arr
---------
{0,1}
array_dims
------------
[1:3]
(1 ROW)
array_length
--------------
3
(1 ROW)
cardinality
-------------
3
(1 ROW)
Array functions
will be added
Only a single comment (a string) can be attached to any given database object. A COMMENT helps document what
the particular database object was defined for and what its actual purpose is.
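For example, a comment can be attached to a table and removed again like this (the table name is just an illustration):
COMMENT ON TABLE my_table IS 'This table holds data about my stuff';
COMMENT ON TABLE my_table IS NULL;  -- setting the comment to NULL removes it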
The rule for COMMENT ON ROLE is that you must be superuser to comment on a superuser role, or have the
CREATEROLE privilege to comment on non-superuser roles. Of course, a superuser can comment on anything
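To cast a timestamp to a string, a statement along these lines can be used (a sketch; the exact format string is an assumption consistent with the output described next):
SELECT to_char(now(), 'DD Mon YYYY HH12:MI:SSPM');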
This statement will produce the string "12 Aug 2016 04:40:32PM". The formatting string can be modified in many
different ways; the full list of template patterns can be found here.
Note that you can also insert plain text into the formatting string and you can use the template patterns in any
order:
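A sketch of such a statement (the format string is an assumption matching the output described below):
SELECT to_char(now(), '"Today is "FMDay", the "DDth" day of the month of "FMMonth" of "YYYY');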
This will produce the string "Today is Saturday, the 12th day of the month of August of 2016". You should keep in
mind, though, that any template patterns - even the single letter ones like "I", "D", "W" - are converted, unless the
plain text is in double quotes. As a safety measure, you should put all plain text in double quotes, as done above.
You can localize the string to your language of choice (day and month names) by using the TM (translation mode)
modifier. This option uses the localization setting of the server running PostgreSQL or the client connecting to it.
With a Spanish locale setting this produces "Sábado, 12 de Agosto del año 2016".
\d tablename
\d+ tablename
If you have forgotten the name of the table, just type \d into psql to obtain a list of tables and views in the current
database.
CREATE TABLE people_over_30 AS SELECT * FROM person WHERE age > 30;
Alternatively, you can place the PRIMARY KEY constraint directly in the column definition:
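A minimal sketch (the table and column names are illustrative):
CREATE TABLE person (
    person_id BIGINT PRIMARY KEY,
    name TEXT NOT NULL,
    age INTEGER
);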
It is recommended that you use lower case names for the table as well as all the columns. If you use upper
case names such as Person you would have to wrap that name in double quotes ("Person") in each and every
query, because PostgreSQL folds unquoted identifiers to lower case.
+----+------------+-----------+----------+------+
| id | first_name | last_name | username | pass |
+----+------------+-----------+----------+------+
| 1 | hello | world | hello | word |
+----+------------+-----------+----------+------+
| 2 | root | me | root | toor |
+----+------------+-----------+----------+------+
Syntax
Examples
Result:
Result:
COALESCE
--------
'HELLO WORLD'
coalesce
--------
'first non null'
COALESCE
--------
1,Yogesh
2,Raunak
3,Varun
4,Kamal
5,Hari
6,Amit
And we need a two-column table into which this data can be imported.
Now for the actual copy operation, which will create six records in the table.
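A sketch of both steps (the table name and file path are illustrative):
CREATE TABLE names (id INTEGER, name TEXT);
COPY names FROM '/tmp/names.csv' WITH (FORMAT csv);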
If you want to insert data into my_table and get the id of that row:
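A sketch of such a statement (it assumes id is generated automatically, e.g. a serial column):
INSERT INTO my_table (name, contact_number) VALUES ('some name', 123456) RETURNING id;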
The above query will return the id of the row where the new record was inserted.
The most basic insert involves inserting all values in the table:
INSERT INTO person VALUES (1, 'john doe', 25, 'new york');
If you want to insert only specific columns, you need to explicitly indicate which columns:
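For example (a sketch; it assumes the omitted columns are nullable or have defaults):
INSERT INTO person (name, age) VALUES ('jane doe', 20);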
Note that if any constraints exist on the table, such as NOT NULL, you will be required to include those columns in
either case.
Note that the projection of the select must match the columns required for the insert. In this case, the tmp_person
table has the same columns as person.
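Such an insert might look like this sketch:
INSERT INTO person SELECT * FROM tmp_person;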
Say you have a table called my_table, created in several previous examples. We insert a row, returning PK value of
inserted row:
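A sketch of the statement (in psql, the returned id would be printed just before the INSERT tag shown below):
INSERT INTO my_table VALUES (2, 'one', 333) RETURNING id;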
INSERT 0 1
Now if we try to insert row with existing unique key it will raise an exception:
b=# INSERT INTO my_table values (2,'one',333) ON CONFLICT (id) DO UPDATE SET name =
my_table.name||' changed to: "two" at '||now() returning *;
 id |                          name                           | contact_number
----+---------------------------------------------------------+----------------
  2 | one changed to: "two" at 2016-11-23 08:32:17.105179+00 | 333
(1 row)
INSERT 0 1
UPDATE person
SET state_code = cities.state_code
FROM cities
WHERE cities.city = person.city;
Here we are joining the person city column to the cities city column in order to get the city's state code. This is
then used to update the state_code column in the person table.
UPDATE person
SET country = 'USA',
state = 'NY'
WHERE city = 'New York';
Populating the DB
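A books table along these lines is assumed (a sketch; the exact definition may differ):
CREATE TABLE books (
    id SERIAL PRIMARY KEY,
    client TEXT NOT NULL,
    data JSONB NOT NULL
);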
INSERT INTO books (client, data) VALUES (
'Joe',
'{ "title": "Siddhartha", "author": { "first_name": "Herman", "last_name": "Hesse" } }'
),(
'Jenny',
'{ "title": "Dharma Bums", "author": { "first_name": "Jack", "last_name": "Kerouac" } }'
),(
'Jenny',
'{ "title": "100 años de soledad", "author": { "first_name": "Gabo", "last_name": "Marquéz" }
}'
);
Output:
Selecting 1 column:
SELECT client,
data->'title' AS title
FROM books;
Output:
SELECT client,
data->'title' AS title, data->'author' AS author
FROM books;
Output:
-> vs ->>
The -> operator returns the original JSON type (which might be an object), whereas ->> returns text.
You can use the -> to return a nested object and thus chain the operators:
SELECT client,
data->'author'->'last_name' AS author
FROM books;
Output:
Filtering
SELECT
client,
data->'title' AS title
FROM books
WHERE data->'title' = '"Dharma Bums"';
Output:
SELECT
client,
data->'title' AS title
FROM books
WHERE data->'author'->>'last_name' = 'Kerouac';
Output:
We’re going to store events in this table, like pageviews. Each event has properties, which could be anything (e.g.
current page) and also sends information about the browser (like OS, screen resolution, etc). Both of these are
completely free form and could change over time (as we think of extra stuff to track).
Output:
Using the JSON operators, combined with traditional PostgreSQL aggregate functions, we can pull out whatever we
want. You have the full might of an RDBMS at your disposal.
Output:
Output:
Output:
The first statement will use the index created above, whereas the latter two will not, requiring a complete table scan.
It is still allowable to use the -> operator when obtaining resultant data, so the following queries will also use the
index:
At this point you can insert data in to the table and query it efficiently.
Name    Age
Allie   17
Amanda  14
Alana   20
You could write this statement to get the minimum, maximum and average value:
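A sketch of such a statement (the table name is an assumption):
SELECT min(age), max(age), avg(age) FROM people;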
Result:
All memory leak candidates will have a trend of consuming more memory as more time passes. If you plot this
trend, you would imagine a line going up and to the right:
^
|
s | Legend:
i | * - DATA point
z | -- - trend
e |
( |
b | *
y | --
t | --
e | * -- *
s | --
) | *-- *
| -- *
| -- *
--------------------------------------->
TIME
Suppose you have a table containing heap dump histogram data (a mapping of classes to how much memory they
consume):
To compute the slope for each class, we GROUP BY the class. The HAVING clause ensures that we get only
candidates with a positive slope (a line going up and to the right). We sort by the slope descending so that we get the
classes with the largest rate of memory increase at the top.
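A sketch of such a query (the table and column names, heap_histogram with histwhen and bytes, are assumptions):
SELECT class, regr_slope(bytes, extract(epoch FROM histwhen)) AS slope
FROM heap_histogram
GROUP BY class
HAVING regr_slope(bytes, extract(epoch FROM histwhen)) > 0
ORDER BY slope DESC;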
Output:
class | slope
---------------------------+----------------------
java.util.ArrayList | 71.7993806279174
java.util.HashMap | 49.0324576155785
java.lang.String | 31.7770770326123
joe.schmoe.BusinessObject | 23.2036817108056
java.lang.ThreadLocal | 20.9013528767851
From the output we see that java.util.ArrayList's memory consumption is increasing the fastest at 71.799 bytes per
second and is potentially part of the memory leak.
You could write a SELECT ... GROUP BY statement to get the names from each country:
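A sketch of such a query (the table name is an assumption):
SELECT string_agg(name, ', ') AS names, country
FROM people
GROUP BY country;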
Note that you need to use a GROUP BY clause because STRING_AGG() is an aggregate function.
Result:
names           country
Allie, Amanda   USA
Alana           Russia
WITH sales AS (
SELECT
orders.ordered_at,
orders.user_id,
SUM(orders.amount) AS total
FROM orders
GROUP BY orders.ordered_at, orders.user_id
)
SELECT
sales.ordered_at,
sales.total,
users.NAME
FROM sales
JOIN users USING (user_id)
Running:
SELECT *
, dense_rank() OVER (ORDER BY i) dist_by_i
, lag(t) OVER () prev_t
, nth_value(i, 6) OVER () nth
, COUNT(TRUE) OVER (partition BY i) num_by_i
, COUNT(TRUE) OVER () num_all
, ntile(3) OVER() ntile
FROM wf_example
;
Result:
Explanation:
dist_by_i: DENSE_RANK() OVER (ORDER BY i) is like a row_number per distinct value of i. It can be used to get the
number of distinct values of i (COUNT(DISTINCT i) would not work as a window function); just take the maximum value.
prev_t: LAG(t) OVER () is the previous value of t over the whole window. Note that it is NULL for the first row.
nth: NTH_VALUE(i, 6) OVER () is the value of column i in the sixth row over the whole window.
num_by_i: COUNT(TRUE) OVER (PARTITION BY i) is the number of rows for each value of i.
num_all: COUNT(TRUE) OVER () is the total number of rows in the window.
ntile: NTILE(3) OVER () splits the whole window into 3 parts that are as equal in size as possible.
SELECT i
, dense_rank() OVER (ORDER BY i)
, ROW_NUMBER() OVER ()
, rank() OVER (ORDER BY i)
FROM wf_example
dense_rank orders the values of i by their appearance in the window. i=1 appears first, so the first row has
dense_rank 1; the second and third rows have the same value of i, so dense_rank still shows 1, the first value
unchanged. The fourth row has i=2, the second distinct value of i encountered, so dense_rank shows 2, and so on
for the next row. Then i=3 is met at the 6th row, so it shows 3, and the same for the remaining rows with that
value. So the last value of dense_rank is the number of distinct values of i.
row_number simply numbers the rows in the order they appear in the window.
rank Not to be confused with dense_rank: this function ranks by the row number at which each value of i first
appears. So it starts with the same three ones, but the next value is 4, which means i=2 (a new value) was first met
at row 4. Likewise i=3 was first met at row 6, and so on.
Link to Documentation
This could have been achieved with a plain SQL statement, but the example demonstrates the basic structure of a function.
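A sketch of such a function (it assumes a users table with a boolean subscribed column):
CREATE OR REPLACE FUNCTION active_subscribers() RETURNS bigint AS $$
DECLARE
    -- variable to hold the number of active subscribers
    subscribers bigint;
BEGIN
    SELECT count(user_id) INTO subscribers
    FROM users
    WHERE subscribed;  -- assumed boolean column
    RETURN subscribers;
END;
$$ LANGUAGE plpgsql;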
SELECT active_subscribers();
calling:
t=# DO
$$
DECLARE
_t TEXT;
BEGIN
END;
$$
;
INFO: state P0001 caught: NOTHING specified
ERROR: S 164
DETAIL: D 164
HINT: H 164
CONTEXT: SQL STATEMENT "SELECT s164()"
PL/pgSQL FUNCTION inline_code_block line 7 AT PERFORM
Here the custom exception P0001 is handled, while P2222 is not, so it aborts the execution.
It also makes sense to keep a table of exceptions, as shown here: http://stackoverflow.com/a/2700312/5315974
users
Column     Type
username   text
email      text

simple_users
Column     Type
username   text
email      text

users_with_password
Column     Type
username   text
email      text
password   text
1. Create the extension:
2. Create the SERVER:
CREATE SERVER name_srv FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'hostname',
dbname 'bd_name', port '5432');
CREATE USER MAPPING FOR postgres SERVER name_srv OPTIONS(USER 'postgres', PASSWORD 'password');
1. Create EXTENSION :
2. Create SERVER :
CREATE SERVER server_name FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'host_ip', dbname 'db_name',
port 'port_number');
CREATE USER MAPPING FOR CURRENT_USER SERVER server_name OPTIONS (user 'user_name', password
'password');
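The remaining step is to create a foreign table that maps a table from the remote database; a sketch (the table, columns and option values are illustrative):
CREATE FOREIGN TABLE my_foreign_table (
    id INTEGER,
    code CHARACTER VARYING
)
SERVER server_name
OPTIONS (schema_name 'public', table_name 'remote_table');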
For example, select some attributes from a table in another database:
SELECT * FROM
dblink ('dbname = bd_distance port = 5432 host = 10.6.6.6 user = username
password = passw@rd', 'SELECT id, code FROM schema.table')
AS newTable(id INTEGER, code CHARACTER VARYING);
FOR EACH ROW is called once for every row that the operation modifies;
FOR EACH STATEMENT is called once for any given operation.
Step 3: test it
INSERT INTO company (NAME) VALUES ('My company');
SELECT * FROM company;
RETURN vReturn;
END $BODY$
LANGUAGE plpgsql;
Step 3: test it
INSERT INTO company (NAME) VALUES ('Company 1');
INSERT INTO company (NAME) VALUES ('Company 2');
INSERT INTO company (NAME) VALUES ('Company 3');
UPDATE company SET NAME='Company new 2' WHERE NAME='Company 2';
DELETE FROM company WHERE NAME='Company 1';
SELECT * FROM log;
BEGIN
-- TG_TABLE_NAME :name of the table that caused the trigger invocation
IF (TG_TABLE_NAME = 'users') THEN
END IF;
RETURN NULL;
END IF;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
DDL_COMMAND_START
DDL_COMMAND_END
SQL_DROP
This is example for creating an Event Trigger and logging DDL_COMMAND_START events.
The problem with that is that queries typed into the psql console get saved in a history file .psql_history in the
user's home directory and may as well be logged to the PostgreSQL database server log, thus exposing the
password.
To avoid this, use the \password meta-command to set the user password. If the user issuing the command is a
superuser, the current password will not be asked. (You must be a superuser to alter passwords of superusers.)
--ACCESS DB
REVOKE CONNECT ON DATABASE nova FROM PUBLIC;
GRANT CONNECT ON DATABASE nova TO USER;
With the above queries, untrusted users can no longer connect to the database.
--ACCESS SCHEMA
REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT USAGE ON SCHEMA public TO USER;
The next set of queries revokes all privileges from unauthenticated users and provides a limited set of privileges for
the read_write user.
--ACCESS TABLES
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM PUBLIC ;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO read_only ;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO read_write ;
GRANT ALL ON ALL TABLES IN SCHEMA public TO ADMIN ;
--ACCESS SEQUENCES
REVOKE ALL ON ALL SEQUENCES IN SCHEMA public FROM PUBLIC;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO read_only; -- allows the use of CURRVAL
GRANT UPDATE ON ALL SEQUENCES IN SCHEMA public TO read_write; -- allows the use of NEXTVAL and
SETVAL
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO read_write; -- allows the use of CURRVAL and
NEXTVAL
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO ADMIN;
$ createuser -P blogger
Enter password for new role: ********
Enter it again: ********
This assumes that pg_hba.conf has been properly configured, which probably looks like this:
2. Set search_path with ALTER USER command to append a new schema my_schema
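A sketch of such a command (the role name is an assumption):
ALTER USER my_user SET search_path = "$user", public, my_schema;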
Alternative:
With the queries below, you can set access privileges on objects created in the future in a specified schema.
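For example (a sketch using the read_only role from the previous section):
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO read_only;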
Or, you can set access privileges on objects created in the future by a specified user.
ALTER DEFAULT PRIVILEGES FOR ROLE ADMIN GRANT SELECT ON TABLES TO read_only;
Examples:
Requirements:
mkdir $PGDATA/archive
This is the host-based authentication file; it contains the settings for client authentication. Add the entry below:
wal_level = hot_standby
hot_standby logs what is required to accept read-only queries on the slave server.
archive_mode=ON
This parameter allows WAL segments to be sent to an archive location using the archive_command parameter.
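A sketch of such a setting (the path is illustrative; use the archive directory created earlier, e.g. $PGDATA/archive):
archive_command = 'test ! -f /path/to/archive/%f && cp %p /path/to/archive/%f'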
Basically, what the above archive_command does is copy the WAL segments to the archive directory.
Important: Don't start the service again until all configuration and backup steps are complete. You must
bring up the standby server in a state where it is ready to be a backup server. This means that all
configuration settings must be in place and the databases must be already synchronized. Otherwise,
streaming replication will fail to start.
pg_basebackup utility copies the data from primary server data directory to slave data directory.
-h: specifies the system where to look for the primary server.
--xlog-method=stream: this forces pg_basebackup to open another connection and stream enough xlog while the
backup is running. It also ensures that a fresh backup can be started without falling back to using an archive.
To configure the standby server, you'll edit postgresql.conf and create a new configuration file named
recovery.conf.
hot_standby = ON
This specifies whether you are allowed to run queries while recovering
standby_mode = ON
Set the connection string to the primary server. Replace the host with the external IP address of the primary
server and the password with the password for the user named replication.
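A sketch of the setting in recovery.conf (the host and password values are placeholders to replace):
primary_conninfo = 'host=<primary-external-ip> port=5432 user=replication password=<replication-password>'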
trigger_file = '/tmp/postgresql.trigger.5432'
The trigger_file path that you specify is the location where you can add a file when you want the
system to fail over to the standby server. The presence of the file "triggers" the failover. Alternatively,
you can use the pg_ctl promote command to trigger failover.
You now have everything in place and are ready to bring up the standby server
Attribution
This article is substantially derived from and attributed to How to Set Up PostgreSQL for High Availability and
Replication with Hot Standby, with minor changes in formatting and examples and some text deleted. The source
was published under the Creative Commons Public License 3.0, which is maintained here.
The -Fc selects the "custom backup format" which gives you more power than raw SQL; see pg_restore for more
details. If you want a vanilla SQL file, you can do this instead:
or even
A safer alternative uses -1 to wrap the restore in a transaction. The -f specifies the filename rather than using shell
redirection.
Custom format files must be restored using pg_restore with the -d option to specify the database:
Usage of the custom format is recommended because you can choose which things to restore and optionally
enable parallel processing.
You may need to do a pg_dump followed by a pg_restore if you upgrade from one postgresql release to a newer
one.
This works behind the scenes by making multiple connections to the server, once for each database, and executing
pg_dump on each.
Sometimes, you might be tempted to set this up as a cron job, so you want to see the date the backup was taken as
part of the filename:
$ pg_dumpall > postgres-backup-$(date +%Y-%m-%d).sql
However, please note that this could produce large files on a daily basis. PostgreSQL has a much better mechanism
for regular backups: WAL archives.
To take a filesystem backup, you must use these functions to help ensure that Postgres is in a consistent state while
the backup is prepared.
psql -p 5432 -U postgres -d test_database -A -F, -c "select * from user" > /home/USER/user_data.CSV
-F specifies the field separator (a comma in this case)
-A or --no-align
Switches to unaligned output mode. (The default output mode is otherwise aligned.)
To insert into table USER from a file named user_data.CSV placed inside /home/USER/:
Note: In the absence of the option WITH DELIMITER, the default delimiter is a comma (,).
Note: If data is quoted, by default data quoting characters are double quote. If the data is quoted using any other
character use the QUOTE option; however, this option is allowed only when using CSV format.
#!/bin/sh
cd /save_db
#rm -R /save_db/*
DATE=$(date +%d-%m-%Y-%Hh%M)
echo -e "Sauvegarde de la base du ${DATE}"
mkdir prodDir${DATE}
cd prodDir${DATE}
#dump file
/opt/postgres/9.0/bin/pg_dump -i -h localhost -p 5432 -U postgres -F c -b -w -v -f "dbprod${DATE}.backup" dbprod
#SQL file
/opt/postgres/9.0/bin/pg_dump -i -h localhost -p 5432 -U postgres --format plain --verbose -f "dbprod${DATE}.sql" dbprod
During compilation, you have to add the PostgreSQL include directory, which can be found with pg_config
--includedir, to the include path.
You must link with the PostgreSQL client shared library (libpq.so on UNIX, libpq.dll on Windows). This library is
in the PostgreSQL library directory, which can be found with pg_config --libdir.
Note: For historical reasons, the library is called libpq.so and not libpg.so, which is a popular trap for beginners.
Given that the below code sample is in file coltype.c, compilation and linking would be done with
with the GNU C compiler (consider adding -Wl,-rpath,"$(pg_config --libdir)" to add the library search path) or
with
Sample program
/* necessary for all PostgreSQL client programs, should be first */
#include <libpq-fe.h>
#include <stdio.h>
#include <string.h>
#ifdef TRACE
#define TRACEFILE "trace.out"
#endif
int main(void)
{
    PGconn   *conn;
    PGresult *res;
    int       i, j;
#ifdef TRACE
    FILE     *trc;
#endif
    /*
     * Using an empty connectstring will use default values for everything.
     * If set, the environment variables PGHOST, PGDATABASE, PGPORT and
     * PGUSER will be used.
     */
    conn = PQconnectdb("");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "Connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
#ifdef TRACE
    if (NULL == (trc = fopen(TRACEFILE, "w")))
    {
        fprintf(stderr, "Error opening trace file \"%s\"!\n", TRACEFILE);
        PQfinish(conn);
        return 1;
    }
    PQtrace(conn, trc);   /* log client/server traffic to the trace file */
#endif
    /* reconstructed sketch: run a query and print its columns and rows; the query text is an assumption */
    res = PQexec(conn, "SELECT attname, atttypid FROM pg_attribute LIMIT 5");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        for (j = 0; j < PQnfields(res); ++j)
            printf("%s ", PQfname(res, j));          /* column names */
        printf("\n\n");
        for (i = 0; i < PQntuples(res); ++i)
        {
            for (j = 0; j < PQnfields(res); ++j)
                printf("%s ", PQgetvalue(res, i, j)); /* each value */
            printf("\n");
        }
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}
import psycopg2

db_host = 'postgres.server.com'
db_port = '5432'
db_un = 'user'
db_pw = 'password'
db_name = 'testdb'

# connect and run a query (the connection call and query below are a reconstructed sketch)
conn = psycopg2.connect(host=db_host, port=db_port, user=db_un, password=db_pw, dbname=db_name)
cur = conn.cursor()
cur.execute('SELECT version()')  # example query; the original query is not shown
print(cur.fetchall())
This will result in:
A typical query is performed by creating a command, binding parameters, and then executing the command. In C#:
conn.Open();
// Create a new command with CommandText and Connection constructor
using (var cmd = new NpgsqlCommand(querystring, conn))
{
// Add a parameter and set its type with the NpgsqlDbType enum
var contentString = "Hello World!";
cmd.Parameters.Add("@content", NpgsqlDbType.Text).Value = contentString;
/* It is possible to reuse a command object and open connection instead of creating new ones
*/
// Execute the command and read through the rows one by one
using (NpgsqlDataReader reader = cmd.ExecuteReader())
{
while (reader.Read()) // Returns false for 0 rows, or after reading the last row of the results
{
// read an integer value
int primaryKey = reader.GetInt32(0);
// or
primaryKey = Convert.ToInt32(reader["primary_key"]);
Assuming Pomm has been installed using Composer, here is a complete example:
<?php
use PommProject\Foundation\Pomm;
$loader = require __DIR__ . '/vendor/autoload.php';
$pomm = new Pomm(['my_db' => ['dsn' => 'pgsql://user:pass@host:5432/db_name']]);
// TABLE comment (
// comment_id uuid PK, created_at timestamptz NN,
// is_moderated bool NN default false,
// content text NN CHECK (content !~ '^\s+$'), author_email text NN)
$sql = <<<SQL
SELECT
comment_id,
created_at,
is_moderated,
content,
author_email
FROM comment
INNER JOIN author USING (author_email)
WHERE
age(now(), created_at) < $*::interval
ORDER BY created_at ASC
SQL;
if ($comments->isEmpty()) {
printf("There are no new comments since yesterday.");
} else {
Pomm’s query manager module escapes query arguments to prevent SQL injection. When the arguments are cast,
it also converts them from a PHP representation to valid Postgres values. The result is an iterator that uses a cursor
internally. Every row is converted on the fly: booleans to booleans, timestamps to \DateTime, etc.
To use it, put the JAR file with the driver on the Java class path.
This documentation shows samples of how to use the JDBC driver to connect to a database.
First, the driver has to be registered with java.sql.DriverManager so that it knows which class to use.
This is done by loading the driver class, typically with java.lang.Class.forName(<driver class name>).
/**
* Connect to a PostgreSQL database.
* @param url the JDBC URL to connect to; must start with "jdbc:postgresql:"
* @param user the username for the connection
* @param password the password for the connection
* @return a connection object for the established connection
* @throws ClassNotFoundException if the driver class cannot be found on the Java class path
* @throws java.sql.SQLException if the connection to the database fails
*/
private static java.sql.Connection connect(String url, String user, String password)
throws ClassNotFoundException, java.sql.SQLException
{
/*
* Register the PostgreSQL JDBC driver.
* This may throw a ClassNotFoundException.
*/
Class.forName("org.postgresql.Driver");
/*
* Tell the driver manager to connect to the database specified with the URL.
* This may throw an SQLException.
*/
return java.sql.DriverManager.getConnection(url, user, password);
}
Note that the user and password can also be included in the JDBC URL, in which case you don't have to specify them
in the getConnection method call.
/**
* Connect to a PostgreSQL database.
* @param url the JDBC URL to connect to. Must start with "jdbc:postgresql:"
* @param user the username for the connection
* @param password the password for the connection
/**
* Create a data source with connection pool for PostgreSQL connections
* @param url the JDBC URL to connect to. Must start with "jdbc:postgresql:"
* @param user the username for the connection
* @param password the password for the connection
* @return a data source with the correct properties set
*/
private static javax.sql.DataSource createDataSource(String url, String user, String password)
{
/* use a data source with connection pooling */
org.postgresql.ds.PGPoolingDataSource ds = new org.postgresql.ds.PGPoolingDataSource();
ds.setUrl(url);
ds.setUser(user);
ds.setPassword(password);
/* the connection pool will have 10 to 20 connections */
ds.setInitialConnections(10);
ds.setMaxConnections(20);
/* use SSL connections without checking server certificate */
ds.setSslMode("require");
ds.setSslfactory("org.postgresql.ssl.NonValidatingFactory");
return ds;
}
Once you have created a data source by calling this function, you would use it like this:
SELECT
(
(DATE_PART('year', AgeonDate) - DATE_PART('year', tmpdate)) * 12
+
(DATE_PART('month', AgeonDate) - DATE_PART('month', tmpdate))
)
FROM dbo."Table1"
Then
INSERT INTO
<SCHEMA_NAME>.<TABLE_NAME_1>
SELECT *
FROM
DBLINK(
'HOST=<IP-ADDRESS> USER=<USERNAME> PASSWORD=<PASSWORD> DBNAME=<DATABASE>',
'SELECT * FROM <SCHEMA_NAME>.<TABLE_NAME_2>')
AS <TABLE_NAME>
(
<COLUMN_1> <DATATYPE_1>,
<COLUMN_2> <DATATYPE_2>,
<COLUMN_3> <DATATYPE_3>
);