
Daniel Westermann

Upgrading PostgreSQL to a new major release

The last post looked into how to upgrade PostgreSQL to a new minor version. In this post I’ll look into how to upgrade PostgreSQL to a new major version. This is not as simple as just installing the binaries and starting the instance from there. For major upgrades there are two possibilities: a classic dump and restore with pg_dump/pg_restore, or pg_upgrade.

I’ll only look into pg_upgrade for this post. For simplicity I’ll upgrade the PostgreSQL 9.4.5 instance from the last post to 9.5 beta2. The binaries for 9.5 beta2 are already there:

[email protected]:/u01/app/postgres/software/ [PG1] which pg_upgrade
/u01/app/postgres/product/95/db_b2/bin/pg_upgrade

Obviously we need to stop the current version before performing the upgrade:

[email protected]:/u01/app/postgres/software/ [PG1] pg_ctl stop -D /u02/pgdata/PG1
waiting for server to shut down.... done
server stopped

Then we need to create a new database cluster with the version we want to upgrade to (9.5 beta2 in this case):

[email protected]:/u01/app/postgres/software/ [PG1] mkdir /u02/pgdata/PG7
[email protected]:/u01/app/postgres/software/ [PG1] mkdir /u03/pgdata/PG7
[email protected]:/u01/app/postgres/software/ [PG1] mkdir /u90/arch/PG7
[email protected]:/u01/app/postgres/software/ [PG1] initdb -D /u02/pgdata/PG7 -X /u03/pgdata/PG7/ 
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u02/pgdata/PG7 ... ok
fixing permissions on existing directory /u03/pgdata/PG7 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /u02/pgdata/PG7/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/PG7 -l logfile start

To verify the new version, we do a quick startup:

[email protected]:/u01/app/postgres/software/ [PG7] pg_ctl -D /u02/pgdata/PG7 start
server starting
[email protected]:/u01/app/postgres/software/ [PG7] LOG:  database system was shut down at 2015-12-01 12:10:02 CET
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

[email protected]:/u01/app/postgres/software/ [PG7] sqh
Null display is "NULL".
Timing is on.
psql (9.5beta2)
Type "help" for help.

([email protected][local]:5449) [postgres] > 

Then shut it down again:

[email protected]:/u01/app/postgres/software/ [PG7] pg_ctl -D /u02/pgdata/PG7 stop -m fast
waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped

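Before running the real upgrade it is worth letting pg_upgrade verify both clusters without changing anything. A minimal sketch using the same directories as this setup (adjust the paths for your environment; the call is guarded so the snippet is harmless on a machine without the binaries in the PATH):

```shell
# The four variables pg_upgrade reads to locate the old/new data and binaries
export PGDATAOLD=/u02/pgdata/PG1
export PGDATANEW=/u02/pgdata/PG7
export PGBINOLD=/u01/app/postgres/product/94/db_5/bin
export PGBINNEW=/u01/app/postgres/product/95/db_b2/bin

# --check runs the consistency checks only and leaves both clusters untouched
if command -v pg_upgrade >/dev/null; then
  pg_upgrade --check
fi
```

If the checks report "Clusters are compatible" you can proceed with the real run exactly as shown below.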
Now we can begin with the upgrade. pg_upgrade locates the old and new clusters and their binaries through four environment variables:

[email protected]:/u01/app/postgres/software/ [PG7] export PGDATAOLD=/u02/pgdata/PG1
[email protected]:/u01/app/postgres/software/ [PG7] export PGDATANEW=/u02/pgdata/PG7
[email protected]:/u01/app/postgres/software/ [PG7] export PGBINOLD=/u01/app/postgres/product/94/db_5/bin
[email protected]:/u01/app/postgres/software/ [PG7] export PGBINNEW=/u01/app/postgres/product/95/db_b2/bin/
[email protected]:/u01/app/postgres/software/ [PG7] pg_upgrade
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Creating newly-required TOAST tables                        ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
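A note on copy vs. link mode: by default pg_upgrade copies the user relation files (the "Copying user relation files" step above), which takes time and requires space for a second copy of the data. With the --link option it creates hard links instead, which is much faster for large clusters, but the old cluster must be considered unusable as soon as the new one is started. A sketch, assuming the same four environment variables as before:

```shell
# Hard-link mode: near-instant even for big clusters, but there is
# no way back to the old cluster once the new one has been started
UPGRADE_OPTS="--link"
if command -v pg_upgrade >/dev/null; then
  pg_upgrade $UPGRADE_OPTS
fi
```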

Quite easy, isn’t it? As pg_upgrade points out, optimizer statistics are not transferred to the new cluster, so they need to be gathered manually:

[email protected]:/u01/app/postgres/software/ [PG7] pg_ctl -D /u02/pgdata/PG7 start
server starting
[email protected]:/u01/app/postgres/software/ [PG7] LOG:  database system was shut down at 2015-12-01 12:18:34 CET
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

[email protected]:/u01/app/postgres/software/ [PG7] ./analyze_new_cluster.sh 
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
    "/u01/app/postgres/product/95/db_b2/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "bi": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "db1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "patch": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "bi": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "db1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "patch": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "bi": Generating default (full) optimizer statistics
vacuumdb: processing database "db1": Generating default (full) optimizer statistics
vacuumdb: processing database "patch": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics

Done

The output above already shows that the “patch” database made it into the new cluster, but we can confirm that ourselves:

[email protected]:/u01/app/postgres/software/ [PG7] sqh
Null display is "NULL".
Timing is on.
psql (9.5beta2)
Type "help" for help.

([email protected][local]:5449) [postgres] > \c patch
You are now connected to database "patch" as user "postgres".
([email protected][local]:5449)  > select count(*) from test;
 count 
-------
   100
(1 row)

Time: 0.705 ms
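If you want to double-check the version from within the database, the server can be asked directly. A sketch, assuming psql from the new binaries is in the PATH (the call is guarded so the snippet does nothing on a machine without psql):

```shell
# Ask the running instance which version it is
SQL="select version();"
if command -v psql >/dev/null; then
  psql -d postgres -c "$SQL"
fi
```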

And finally we can delete the old data files:

[email protected]:/u01/app/postgres/software/ [PG7] ./delete_old_cluster.sh 
[email protected]:/u01/app/postgres/software/ [PG7] ls -al /u02/pgdata/PG1
ls: cannot access /u02/pgdata/PG1: No such file or directory

That’s it.


Daniel Westermann

Principal Consultant & Technology Leader Open Infrastructure