In the past I have written some posts on how you can make your PostgreSQL deployment highly available by using PostgreSQL’s streaming replication feature (1, 2). The main issue you’ll have to resolve with such a setup is how the application can be made aware of the new master when a failover happens. You could use EDB Failover Manager (1, 2, 3, 4) for that, because it provides the functionality to move a VIP from one host to another, so the application can always connect to the very same IP address no matter where the current master is running (EDB EFM requires a subscription). You could also use Pacemaker and Corosync for that. But, and this is the scope of this post, you can also use pgpool, which is widely known in the PostgreSQL community. When you configure it the right way you can even spread your read operations over all hot standby servers in your configuration while only write operations go to the master. This allows you to horizontally scale your PostgreSQL deployment by adding more standby nodes when you need more resources. Let’s go …
Starting with a picture is always a good idea. This is what we want to set up:
We will have two nodes dedicated to pgpool (centos7_pgpool_m1/m2). pgpool will run in a watchdog configuration so that one node can take over in case the other goes down, and it will provide a virtual IP address for the clients to connect to (which fails over to the surviving node in case a node goes down for any reason). In the background two more nodes host the PostgreSQL 9.6.2 primary and hot standby instances (centos7_pgpool_1/2). At the very beginning the master runs on centos7_pgpool_1, although that does not really matter once the whole setup is complete.
I won’t describe the setup of the PostgreSQL master->standby replication itself. If you need assistance there, take a look here or here, or search the web; there are many great howtos out there.
Let’s start by installing pgpool on the hosts dedicated to it (centos7_pgpool_m1/m2):
You can download pgpool here. As pgpool requires libpq we’ll install the PostgreSQL binaries on the pgpool hosts as well before proceeding with the pgpool installation. Of course these steps need to be done on both hosts (centos7_pgpool_m1/m2):
[root@centos7_pgpool_m1 ~]$ groupadd postgres
[root@centos7_pgpool_m1 ~]$ useradd -g postgres postgres
[root@centos7_pgpool_m1 ~]$ passwd postgres
[root@centos7_pgpool_m1 ~]$ mkdir -p /u01/app/postgres/software
[root@centos7_pgpool_m1 ~]$ chown -R postgres:postgres /u01/app/postgres
[root@centos7_pgpool_m1 ~]$ su - postgres
[postgres@centos7_pgpool_m1 ~]$ cd /u01/app/postgres/software/
[postgres@centos7_pgpool_m1 software]$ wget https://ftp.postgresql.org/pub/source/v9.6.2/postgresql-9.6.2.tar.bz2
[postgres@centos7_pgpool_m1 software]$ tar -axf postgresql-9.6.2.tar.bz2
[postgres@centos7_pgpool_m1 software]$ cd postgresql-9.6.2
[postgres@centos7_pgpool_m1 postgresql-9.6.2]$ PGHOME=/u01/app/postgres/product/96/db_2
[postgres@centos7_pgpool_m1 postgresql-9.6.2]$ SEGSIZE=2
[postgres@centos7_pgpool_m1 postgresql-9.6.2]$ BLOCKSIZE=8
[postgres@centos7_pgpool_m1 postgresql-9.6.2]$ WALSEGSIZE=16
[postgres@centos7_pgpool_m1 postgresql-9.6.2]$ ./configure --prefix=${PGHOME} \
>   --exec-prefix=${PGHOME} \
>   --bindir=${PGHOME}/bin \
>   --libdir=${PGHOME}/lib \
>   --sysconfdir=${PGHOME}/etc \
>   --includedir=${PGHOME}/include \
>   --datarootdir=${PGHOME}/share \
>   --datadir=${PGHOME}/share \
>   --with-pgport=5432 \
>   --with-perl \
>   --with-python \
>   --with-tcl \
>   --with-openssl \
>   --with-pam \
>   --with-ldap \
>   --with-libxml \
>   --with-libxslt \
>   --with-segsize=${SEGSIZE} \
>   --with-blocksize=${BLOCKSIZE} \
>   --with-wal-segsize=${WALSEGSIZE} \
>   --with-extra-version=" dbi services build"
[postgres@centos7_pgpool_m1 postgresql-9.6.2]$ make world
[postgres@centos7_pgpool_m1 postgresql-9.6.2]$ make install-world
[postgres@centos7_pgpool_m1 postgresql-9.6.2]$ cd ..
[postgres@centos7_pgpool_m1 software]$ rm -rf postgresql-9.6.2*
### download pgpool
[postgres@centos7_pgpool_m1 software]$ ls
pgpool-II-3.6.1.tar.gz
[postgres@centos7_pgpool_m1 software]$ tar -axf pgpool-II-3.6.1.tar.gz
[postgres@centos7_pgpool_m1 software]$ cd pgpool-II-3.6.1
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ export PATH=/u01/app/postgres/product/96/db_2/bin/:$PATH
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ ./configure --prefix=/u01/app/postgres/product/pgpool-II
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ make
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ make install
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ cd src/sql/pgpool-recovery/
[postgres@centos7_pgpool_m1 pgpool-recovery]$ make
[postgres@centos7_pgpool_m1 pgpool-recovery]$ make install
[postgres@centos7_pgpool_m1 pgpool-recovery]$ cd ../pgpool-regclass/
[postgres@centos7_pgpool_m1 pgpool-regclass]$ make
[postgres@centos7_pgpool_m1 pgpool-regclass]$ make install
Copy the generated extensions to the PostgreSQL master and standby servers:
[postgres@centos7_pgpool_m1 ~]$ cd /u01/app/postgres/software/pgpool-II-3.6.1
# master node
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery.control 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery.control
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery--1.1.sql 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery--1.1.sql
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool-recovery.sql 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool-recovery.sql
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/lib/pgpool-recovery.so 192.168.22.34:/u01/app/postgres/product/96/db_2/lib/pgpool-recovery.so
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass--1.0.sql 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass--1.0.sql
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass.control 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass.control
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/lib/pgpool-regclass.so 192.168.22.34:/u01/app/postgres/product/96/db_2/lib/pgpool-regclass.so
# standby node
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery.control 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery.control
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery--1.1.sql 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery--1.1.sql
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool-recovery.sql 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool-recovery.sql
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/lib/pgpool-recovery.so 192.168.22.35:/u01/app/postgres/product/96/db_2/lib/pgpool-recovery.so
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass--1.0.sql 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass--1.0.sql
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass.control 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass.control
[postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/lib/pgpool-regclass.so 192.168.22.35:/u01/app/postgres/product/96/db_2/lib/pgpool-regclass.so
Install the extensions on the master node only (this will be replicated to the standby node automatically as the PostgreSQL instances already operate in hot_standby mode):
postgres@pgpool1:/u01/app/postgres/product/96/db_2/ [PG1] psql template1
psql (9.6.2 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > create extension pgpool_recovery;
CREATE EXTENSION
(postgres@[local]:5432) [template1] > create extension pgpool_regclass;
CREATE EXTENSION
(postgres@[local]:5432) [template1] > \dx
                                 List of installed extensions
      Name       | Version |   Schema   |                    Description
-----------------+---------+------------+----------------------------------------------------
 pgpool_recovery | 1.1     | public     | recovery functions for pgpool-II for V3.4 or later
 pgpool_regclass | 1.0     | public     | replacement for regclass
 plpgsql         | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)
Create the pgpool.conf configuration file on both nodes. For node 1 (centos7_pgpool_m1):
echo "listen_addresses = '*'
port = 5432
socket_dir = '/tmp'
pcp_port = 9898
pcp_socket_dir = '/tmp'
backend_hostname0 = '192.168.22.34'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/u02/pgdata/PG1'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = '192.168.22.35'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/u02/pgdata/PG1'
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = off
pool_passwd = 'pool_passwd'
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = 'stderr'
print_timestamp = on
log_connections = off
log_hostname = off
log_statement = off
log_per_node_statement = off
log_standby_delay = 'none'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
debug_level = 0
pid_file_name = '/tmp/pgpool.pid'
logdir = '/tmp/pgpool'
connection_cache = on
reset_query_list = 'ABORT; DISCARD ALL'
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = off
ignore_leading_white_space = on
white_function_list = ''
black_function_list = 'nextval,setval'
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_period = 0
sr_check_user = 'postgres'
sr_check_password = ''
delay_threshold = 0
follow_master_command = ''
parallel_mode = off
pgpool2_hostname = 'centos7_pgpool_m2'
system_db_hostname = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
health_check_period = 20
health_check_timeout = 20
health_check_user = 'postgres'
health_check_password = ''
health_check_max_retries = 0
health_check_retry_delay = 1
failover_command = '/home/postgres/failover.sh %d \"%h\" %p %D %m %M \"%H\" %P'
failback_command = ''
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = 'postgres'
recovery_password = ''
recovery_1st_stage_command = 'resync_master.sh'
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = on
trusted_servers = ''
ping_path = '/usr/bin'
wd_hostname = 'centos7_pgpool_m1'
wd_port = 9000
wd_authkey = ''
other_pgpool_hostname0 = 'centos7_pgpool_m2'
other_pgpool_port0 = 5432
other_wd_port0 = 9000
delegate_IP = '192.168.22.38'
ifconfig_path = '/usr/sbin'
if_up_cmd = 'ifconfig enp0s8:0 inet \$_IP_\$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig enp0s8:0 down'
arping_path = '/usr/sbin'
arping_cmd = 'arping -U \$_IP_\$ -w 1'
clear_memqcache_on_escalation = on
wd_escalation_command = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = 'host0_ip1'
heartbeat_destination_port0 = 9694
heartbeat_device0 = ''
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = '/var/log/pgpool/oiddir'
white_memqcache_table_list = ''
black_memqcache_table_list = ''
" > /u01/app/postgres/product/pgpool-II/etc/pgpool.conf
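The failover_command above references /home/postgres/failover.sh, a script pgpool does not ship — you have to provide it yourself. Here is a minimal sketch of the decision logic such a script could implement, based on the placeholders configured above (%d is the id of the failed node, %H the hostname of the new master, %P the id of the old primary). The function name, argument subset and the commented promotion command are assumptions for illustration, not the script used in this setup:

```shell
# decide_action: core logic of a hypothetical failover.sh.
# Only when the failed node was the primary does a standby need promotion.
decide_action () {
  local failed_node_id=$1 new_master_host=$2 old_primary_id=$3
  if [ "$failed_node_id" != "$old_primary_id" ]; then
    # a standby went down: the primary is still alive, nothing to promote
    echo "standby down, nothing to do"
  else
    # the primary went down: promote the standby on new_master_host
    echo "promote $new_master_host"
    # a real script would then run something like (assumed paths):
    #   ssh postgres@"$new_master_host" \
    #     "/u01/app/postgres/product/96/db_2/bin/pg_ctl -D /u02/pgdata/PG1 promote"
  fi
}

decide_action 1 192.168.22.34 0   # node 1 (a standby) failed
decide_action 0 192.168.22.35 0   # node 0 (the primary) failed
```

We will look at a complete, working failover script in the next post.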
For node 2 (centos7_pgpool_m2):
echo "listen_addresses = '*'
port = 5432
socket_dir = '/tmp'
pcp_port = 9898
pcp_socket_dir = '/tmp'
backend_hostname0 = '192.168.22.34'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/u02/pgdata/PG1'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = '192.168.22.35'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/u02/pgdata/PG1'
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = off
pool_passwd = 'pool_passwd'
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = 'stderr'
print_timestamp = on
log_connections = off
log_hostname = off
log_statement = off
log_per_node_statement = off
log_standby_delay = 'none'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
debug_level = 0
pid_file_name = '/tmp/pgpool.pid'
logdir = '/tmp/pgpool'
connection_cache = on
reset_query_list = 'ABORT; DISCARD ALL'
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = off
ignore_leading_white_space = on
white_function_list = ''
black_function_list = 'nextval,setval'
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_period = 0
sr_check_user = 'postgres'
sr_check_password = ''
delay_threshold = 0
follow_master_command = ''
parallel_mode = off
pgpool2_hostname = 'centos7_pgpool_m2'
system_db_hostname = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
health_check_period = 20
health_check_timeout = 20
health_check_user = 'postgres'
health_check_password = ''
health_check_max_retries = 0
health_check_retry_delay = 1
failover_command = '/home/postgres/failover.sh %d \"%h\" %p %D %m %M \"%H\" %P'
failback_command = ''
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = 'postgres'
recovery_password = ''
recovery_1st_stage_command = 'resync_master.sh'
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = on
trusted_servers = ''
ping_path = '/usr/bin'
wd_hostname = 'centos7_pgpool_m2'
wd_port = 9000
wd_authkey = ''
other_pgpool_hostname0 = 'centos7_pgpool_m1'
other_pgpool_port0 = 5432
other_wd_port0 = 9000
delegate_IP = '192.168.22.38'
ifconfig_path = '/usr/sbin'
if_up_cmd = 'ifconfig enp0s8:0 inet \$_IP_\$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig enp0s8:0 down'
arping_path = '/usr/sbin'
arping_cmd = 'arping -U \$_IP_\$ -w 1'
clear_memqcache_on_escalation = on
wd_escalation_command = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = 'host0_ip1'
heartbeat_destination_port0 = 9694
heartbeat_device0 = ''
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = '/var/log/pgpool/oiddir'
white_memqcache_table_list = ''
black_memqcache_table_list = ''
" > /u01/app/postgres/product/pgpool-II/etc/pgpool.conf
For switching the VIP from one host to another, pgpool must be able to bring the virtual interface up and down. You could use sudo for that, or set the setuid bit on the ifconfig and arping binaries:
[postgres@centos7_pgpool_m1 pgpool-II]$ sudo chmod u+s /usr/sbin/arping
[postgres@centos7_pgpool_m1 pgpool-II]$ sudo chmod u+s /sbin/ifconfig
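If you prefer the sudo route over setuid binaries, a sudoers fragment along these lines would do (a hedged sketch — the file name and binary paths are assumptions, adjust them to your distribution, and you would then prefix if_up_cmd, if_down_cmd and arping_cmd with sudo in pgpool.conf):

```
# /etc/sudoers.d/pgpool -- hypothetical sudo-based alternative
postgres ALL=(root) NOPASSWD: /usr/sbin/ifconfig, /usr/sbin/arping
```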
The other important configuration file for pgpool is pcp.conf. This file holds the credentials for pgpool’s own administration interface and requires a user name and an MD5-hashed password. To generate the password you can use the pg_md5 utility which comes with the pgpool installation:
[postgres@centos7_pgpool_m1 ~]$ /u01/app/postgres/product/pgpool-II/bin/pg_md5 --prompt --username postgres
password:
e8a48653851e28c69d0506508fb27fc5
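For pcp.conf the hash is, to my understanding, a plain MD5 digest of the password (an assumption about pg_md5’s behaviour here — the --username option matters when populating pool_passwd instead). That means you can also produce it with standard coreutils, which is handy when scripting the setup:

```shell
# Compute the kind of hash pcp.conf expects (assumption: plain md5
# of the password, no salt), using md5sum instead of pg_md5.
pcp_hash () { printf '%s' "$1" | md5sum | cut -d' ' -f1; }

pcp_hash "admin"   # -> 21232f297a57a5a743894a0e4a801fc3
```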
Once you have the hashed password you can create the pcp.conf file (on both pgpool nodes, of course):
[postgres@centos7_pgpool_m1 ~]$ echo "postgres:e8a48653851e28c69d0506508fb27fc5" > /u01/app/postgres/product/pgpool-II/etc/pcp.conf
Before doing anything else we need to allow connections from the pgpool nodes to the database nodes by adjusting the pg_hba.conf file for both PostgreSQL instances. On both nodes:
postgres@pgpool1:/home/postgres/ [PG1] echo "host all postgres 192.168.22.36/32 trust" >> /u02/pgdata/PG1/pg_hba.conf
postgres@pgpool1:/home/postgres/ [PG1] echo "host all postgres 192.168.22.37/32 trust" >> /u02/pgdata/PG1/pg_hba.conf
postgres@pgpool1:/home/postgres/ [PG1] pg_ctl -D /u02/pgdata/PG1/ reload
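Note that trust allows password-less connections from the pgpool nodes, which keeps this demo simple. For a real setup you would rather use md5 authentication, roughly like this (a sketch — this additionally requires setting a password for the postgres user and configuring pool_passwd on the pgpool side):

```
host    all    postgres    192.168.22.36/32    md5
host    all    postgres    192.168.22.37/32    md5
```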
Before we start pgpool on both pgpool nodes, let’s take a look at the important watchdog parameters on node 1:
ping_path = '/usr/bin'
wd_hostname = 'centos7_pgpool_m1'
wd_port = 9000
wd_authkey = ''
other_pgpool_hostname0 = 'centos7_pgpool_m2'
other_pgpool_port0 = 5432
other_wd_port0 = 9000
delegate_IP = '192.168.22.38'
ifconfig_path = '/usr/sbin'
if_up_cmd = 'ifconfig enp0s8:0 inet $_IP_$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig enp0s8:0 down'
arping_path = '/usr/sbin'
arping_cmd = 'arping -U $_IP_$ -w 1'
The various *_path variables are obvious: they tell pgpool where to find the binaries for ping, arping and ifconfig (you can also use the ip command instead). The other_*0 variables specify which other host runs a pgpool instance and on which pgpool and watchdog ports it listens. This is essential for the communication between the two pgpool hosts. Then there are the commands to bring the virtual interface up and down (if_up_cmd, if_down_cmd). In addition we need an address for the virtual interface, which is specified by the delegate_IP variable. Let’s see if it works and start pgpool on both nodes:
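On distributions where ifconfig is deprecated you could switch to the ip utility instead. The equivalent settings would look roughly like this (an assumption, not tested in this setup — verify the binary path and the /24 prefix on your system, and remember that the ip binary then needs the same setuid or sudo treatment as ifconfig above):

```
ifconfig_path = '/usr/sbin'
if_up_cmd = 'ip addr add $_IP_$/24 dev enp0s8 label enp0s8:0'
if_down_cmd = 'ip addr del $_IP_$/24 dev enp0s8'
```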
# node 1
[postgres@centos7_pgpool_m1 ~]$ /u01/app/postgres/product/pgpool-II/bin/pgpool
[postgres@centos7_pgpool_m1 ~]$
# node 2
[postgres@centos7_pgpool_m2 ~]$ /u01/app/postgres/product/pgpool-II/bin/pgpool
[postgres@centos7_pgpool_m2 ~]$
That does not look bad, as no issues are printed to the screen. When everything went fine we should see a new virtual IP (192.168.22.38) on one of the nodes (node 2 in my case):
[postgres@centos7_pgpool_m2 ~]$ ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:d6:95:ab brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 85216sec preferred_lft 85216sec
    inet6 fe80::a00:27ff:fed6:95ab/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:5c:b0:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.22.37/24 brd 192.168.22.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet 192.168.22.38/24 brd 192.168.22.255 scope global secondary enp0s8:0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe5c:b0e5/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
When we shut down pgpool on the node where the VIP currently runs, it should be moved to the other node automatically. So, shut down pgpool on the node where it is currently running:
[postgres@centos7_pgpool_m2 ~]$ /u01/app/postgres/product/pgpool-II/bin/pgpool -m fast stop
2017-03-16 17:54:02: pid 2371: LOG:  stop request sent to pgpool. waiting for termination...
.done.
Check the other host for the VIP:
[postgres@centos7_pgpool_m1 ~]$ ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:d6:95:ab brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 85067sec preferred_lft 85067sec
    inet6 fe80::a00:27ff:fed6:95ab/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:5c:b0:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.22.36/24 brd 192.168.22.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet 192.168.22.38/24 brd 192.168.22.255 scope global secondary enp0s8:0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe5c:b0e5/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
Cool, now we have a VIP the application can connect to, and it switches between the pgpool hosts automatically in case the host it currently runs on experiences an issue or is shut down intentionally. There is a pcp command which shows more details regarding the watchdog:
[postgres@centos7_pgpool_m1 ~]$ /u01/app/postgres/product/pgpool-II/bin/pcp_watchdog_info
Password:
2 YES centos7_pgpool_m1:5432 Linux centos7_pgpool_m1 centos7_pgpool_m1

centos7_pgpool_m1:5432 Linux centos7_pgpool_m1 centos7_pgpool_m1 5432 9000 4 MASTER
centos7_pgpool_m2:5432 Linux centos7_pgpool_m2 centos7_pgpool_m2 5432 9000 7 STANDBY
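Each per-node line ends in the node’s watchdog state. If you want to extract the current MASTER in a monitoring script, a simple awk filter over that output does the trick (shown here against the sample lines from above; the exact output format may vary between pgpool versions, so treat this as a sketch):

```shell
# Print the first field (host:port) of every line whose last field
# is the watchdog state MASTER.
watchdog_master () { awk '$NF == "MASTER" {print $1}'; }

printf '%s\n' \
  'centos7_pgpool_m1:5432 Linux centos7_pgpool_m1 centos7_pgpool_m1 5432 9000 4 MASTER' \
  'centos7_pgpool_m2:5432 Linux centos7_pgpool_m2 centos7_pgpool_m2 5432 9000 7 STANDBY' \
  | watchdog_master   # -> centos7_pgpool_m1:5432
```

In a live setup you would pipe the real pcp_watchdog_info output through the same filter.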
As we now have a VIP we should be able to connect to the PostgreSQL backends by connecting to this VIP:
[postgres@centos7_pgpool_m1 ~]$ psql -h 192.168.22.38
psql (9.6.2 dbi services build)
Type "help" for help.

postgres=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)
Ok, that works as well. What do we see on the PostgreSQL instances? On the master:
(postgres@[local]:5432) [postgres] > select datname,client_addr,client_hostname from pg_stat_activity where client_addr is not null;
 datname  |  client_addr  | client_hostname
----------+---------------+-----------------
 postgres | 192.168.22.36 | NULL
(1 row)
We see one connection from the first pgpool node. What do we see on the standby?
(postgres@[local]:5432) [postgres] > select datname,client_addr,client_hostname from pg_stat_activity where client_addr is not null;
 datname  |  client_addr  | client_hostname
----------+---------------+-----------------
 postgres | 192.168.22.36 | NULL
(1 row)
One connection as well. Looks good.
When you connect to the PostgreSQL instances through pgpool there is a SQL-like syntax for displaying pgpool-specific information as well:
postgres=# show pool_nodes;
 node_id |   hostname    | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay
---------+---------------+------+--------+-----------+---------+------------+-------------------+-------------------
 0       | 192.168.22.34 | 5432 | up     | 0.500000  | primary | 1          | true              | 0
 1       | 192.168.22.35 | 5432 | up     | 0.500000  | standby | 0          | false             | 0
(2 rows)
To summarize: We now have a pgpool instance running on two nodes. Only one of these nodes hosts the VIP, and the VIP switches to the other host in case there is an issue. From now on client connections can go to the VIP, and pgpool will redirect each connection to one of the PostgreSQL nodes (depending on whether it is a write or a pure read operation).
In the next post we’ll dig deeper into the pgpool configuration, look at how you can tell on which instance you actually landed, and see how we can instruct pgpool to automatically promote a new master, disconnect the old master and then rebuild it as a new standby that follows the new master.