Infrastructure at your Service

Julia Gugel

Running two Patroni clusters on one host using an existing etcd

Have you ever asked yourself how to create a second Patroni-managed PostgreSQL cluster on an existing server, reusing the existing etcd? My first idea was to study the Patroni documentation, but unfortunately without much success. This post should help you identify the changes you need to make on the hosts to run two Patroni clusters in parallel with one etcd.

Starting Point

First, let's take a short look at the existing infrastructure to get a better overview of where we are starting from.

There is already a Patroni installation and an etcd on the servers, as well as one PostgreSQL cluster streaming from primary to replica. We use only one replica in this example, but it works the same way for any number of replicas.


As etcd is already running on the hosts, let's start with it. And here we already have good news: you don't have to change anything on the etcd side. Just leave your configuration as it is.

postgres@postgres_primary:/home/postgres/.local/bin/ [PGTEST] cat /u01/app/postgres/local/dmk/dmk_postgres/etc/etcd.conf
name: postgres-primary
data-dir: /u02/postgres/pgdata/etcd
initial-cluster: postgres-primary=, postgres-stby=, postgres-stby2=


Let's go on with the patroni.yml. Since there is already a Patroni instance running on this server, you need to create a second configuration file, say patroni_pgtest.yml. To keep it simple and not reinvent the wheel, just copy your existing yml file:

postgres@postgres_primary:/home/postgres/ [PGTEST] cd /u01/app/postgres/local/dmk/dmk_postgres/etc
postgres@postgres_primary:/u01/app/postgres/local/dmk/dmk_postgres/etc/ [PGTEST] cp patroni.yml patroni_pgtest.yml

Once we have the new patroni_pgtest.yml, we need to adjust some entries in this file. The most important ones are "scope" and "namespace"; without changing these, your new Patroni service won't create a new PostgreSQL cluster:

scope: PGTEST
namespace: /pgtest/
name: pgtest1
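These two settings matter because Patroni derives its etcd key prefix from them, so distinct values keep the two clusters' state apart in the shared etcd. A small sketch of the key path that results from the values above:

```shell
# Patroni stores its DCS data under /<namespace>/<scope>/ in etcd, so the
# settings above keep the new cluster's keys separate from the first one's.
NAMESPACE=pgtest
SCOPE=PGTEST
echo "/${NAMESPACE}/${SCOPE}/leader"
```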

The next parameters to change are the restapi ones. You can keep the IP address, but you have to adjust the port; otherwise the service will fail with an "Address already in use" error.
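As a sketch, the restapi section of patroni_pgtest.yml could look like this; the port 8009 and the IP address are assumptions (the first Patroni usually occupies the default 8008):

```yaml
restapi:
  listen: 192.168.22.11:8009           # keep the IP, change the port (8008 is taken)
  connect_address: 192.168.22.11:8009
```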


Once this is done, the PostgreSQL parameters need to be adjusted as well, so that the new cluster does not use the same port and cluster name as the existing one. The PGDATA directory also needs to be different.

    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
      use_pg_rewind: true
      use_slots: true
        port: 5410
  data_dir: /u02/postgres/pgdata/13/PGTEST/
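The excerpt above mixes keys from different sections of the file; in a typical patroni.yml they sit roughly as follows (a sketch assembled from the values shown; the IP address is an assumption and everything else stays as copied):

```yaml
bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true

postgresql:
  listen: 192.168.22.11:5410           # assumed IP; the new, unused port
  connect_address: 192.168.22.11:5410
  data_dir: /u02/postgres/pgdata/13/PGTEST/
```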

Patroni service

Now that we have changed all the parameters, we can create a second Patroni service named patroni_pgtest.service. Make sure its ExecStart points to the correct patroni_pgtest.yml:

postgres@postgres_primary:/home/postgres/ [PGTEST] sudo vi /etc/systemd/system/patroni_pgtest.service
# systemd integration for patroni

[Unit]
Description=dbi services patroni service
After=network.target

[Service]
Type=simple
User=postgres
ExecStart=/u01/app/postgres/local/dmk/bin/patroni /u01/app/postgres/local/dmk/etc/patroni_pgtest.yml
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target


After creating the unit file, run "sudo systemctl daemon-reload" so systemd picks up the new service. Then we can start and enable it:

postgres@postgres_primary:/home/postgres/ [PGTEST] sudo systemctl start patroni_pgtest.service
postgres@postgres_primary:/home/postgres/ [PGTEST] sudo systemctl status patroni_pgtest.service
● patroni_pgtest.service - dbi services patroni service
   Loaded: loaded (/etc/systemd/system/patroni_pgtest.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-12-22 20:07:46 CET; 9h ago
 Main PID: 4418 (patroni)
   CGroup: /system.slice/patroni.service
           β”œβ”€4418 /usr/bin/python2 /u01/app/postgres/local/dmk/dmk_postgres/bin/patroni /u01/app/postgres/local/dmk/dmk_postgres/etc/patroni_pgtest.yml
           β”œβ”€5258 /u01/app/postgres/product/PG13/db_1/bin/postgres -D /u02/pgdata/13/PG1/ --config-file=/u02/postgres/pgdata/13/PG1/postgresql.conf --listen_addresses= --max_worker_processes=8 --max_locks_per_tra...
           β”œβ”€5282 postgres: PG1: logger process
           β”œβ”€5292 postgres: PG1: checkpointer process
           β”œβ”€5294 postgres: PG1: writer process
           β”œβ”€5296 postgres: PG1: stats collector process
           β”œβ”€6171 postgres: PG1: postgres postgres idle
           β”œβ”€6473 postgres: PG1: wal writer process
           β”œβ”€6474 postgres: PG1: autovacuum launcher process

Dec 23 05:36:21 postgres_primary patroni[4418]: 2020-12-23 05:36:21,032 INFO: Lock owner: postgres_primary; I am postgres_primary
Dec 23 05:36:21 postgres_primary patroni[4418]: 2020-12-23 05:36:21,047 INFO: no action.  i am the leader with the lock

postgres@postgres_primary:/home/postgres/ [PGTEST] sudo systemctl enable patroni_pgtest.service

Now that the cluster is running on the primary server, you can perform exactly the same steps on your replica server(s). Be sure to set all ports and IPs correctly.
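On the replica you can start from the same file and change only the node-specific entries (the name, and any host-specific IPs), while scope and namespace must stay identical. A minimal demonstration with a stand-in file; file location and the member name pgtest2 are assumptions:

```shell
# Stand-in for the copied patroni_pgtest.yml (the real file lives under DMK's etc/)
printf 'name: pgtest1\nscope: PGTEST\n' > /tmp/patroni_pgtest.yml

# Only node-specific entries change; scope and namespace stay the same
sed 's/^name: pgtest1$/name: pgtest2/' /tmp/patroni_pgtest.yml \
    > /tmp/patroni_pgtest_replica.yml

grep '^name:' /tmp/patroni_pgtest_replica.yml   # -> name: pgtest2
```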


Even if setting up a second Patroni on a server looks easy, it took some time to find out what exactly needs to be changed. But once you know all that, it's really simple. Just keep in mind that the new PostgreSQL cluster needs a port that is not already in use.
Furthermore, if you are using our DMK on your host, be sure to call 'patronictl list' with the correct configuration file and the complete path to patronictl. DMK provides an alias for patronictl, but it only works for the first Patroni cluster created on the server.

postgres@postgres_primary:/home/postgres/ [PGTEST] cd .local/bin
postgres@postgres_primary:/home/postgres/.local/bin [PGTEST] patronictl -c /u01/app/postgres/local/dmk/etc/patroni_pgtest.yml list
| Cluster    |     Member       |      Host     |  Role  |  State  |  TL | Lag in MB |
|   PGTEST   | postgres_primary | | Leader | running | 528 |       0.0 |
|   PGTEST   | postgres_replica | |        | running | 528 |       0.0 |

If you are not using DMK, you have to pass the configuration file explicitly in any case, and you also have to set the correct PATH variable or call patronictl with its complete path.
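One way to make this less error-prone is a small wrapper function in your shell profile. The paths follow the ones used above, but the function name is made up for this sketch:

```shell
# Hypothetical convenience wrapper: always calls patronictl with the full
# path and the second cluster's configuration file.
patronictl_pgtest() {
    /u01/app/postgres/local/dmk/bin/patronictl \
        -c /u01/app/postgres/local/dmk/etc/patroni_pgtest.yml "$@"
}
```

After sourcing it, `patronictl_pgtest list` behaves like the DMK alias does for the first cluster.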

