Managing High Availability in PostgreSQL: Part 3 — Patroni
In our previous posts, we discussed the capabilities and functioning of PostgreSQL Automatic Failover (PAF) by Cluster Labs and Replication Manager (repmgr) by 2ndQuadrant. In the final post of this series, we will review the last solution, Patroni by Zalando, and compare all three at the end so you can determine which high availability framework is best for your PostgreSQL hosting deployment.
- Managing High Availability in PostgreSQL: Part 1 — PostgreSQL Automatic Failover
- Managing High Availability in PostgreSQL: Part 2 — Replication Manager
Patroni for PostgreSQL
Patroni originated as a fork of Governor, a project from Compose. It is an open-source tool suite, written in Python, for managing high availability of PostgreSQL clusters. Instead of building its own consistency protocol, Patroni smartly leverages the consistency model provided by a Distributed Configuration Store (DCS). It supports multiple DCS solutions, including ZooKeeper, etcd, Consul, and Kubernetes.
Patroni ensures the end-to-end setup of PostgreSQL HA clusters, including streaming replication. It supports various ways of creating a standby node and works like a template that can be customized to your needs.
This feature-rich tool exposes its functionality via REST APIs and also via a command-line utility called patronictl. It supports integration with HAProxy by using its health check APIs to handle load balancing.
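For illustration, load balancing via HAProxy typically uses Patroni's REST API as a health check: the API returns HTTP 200 on the node currently running as primary. A minimal sketch, closely following the HAProxy template shipped with Patroni (hostnames, IPs, and the default REST API port 8008 are assumptions for this example):

```
listen primary
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 10.0.0.1:5432 maxconn 100 check port 8008
    server node2 10.0.0.2:5432 maxconn 100 check port 8008
    server node3 10.0.0.3:5432 maxconn 100 check port 8008
```

Clients connect to HAProxy on port 5000 and are always routed to whichever node currently passes the primary health check.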
Patroni also supports event notification with the help of callbacks, which are scripts triggered by certain actions. It enables users to perform any maintenance actions by providing pause/resume functionality. The Watchdog support feature makes the framework even more robust.
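Callbacks are configured per node in the `postgresql` section of the Patroni YAML file. A hedged sketch (the script path is a hypothetical example; Patroni invokes the script with the action, the node's role, and the cluster name):

```yaml
postgresql:
  callbacks:
    on_start: /usr/local/bin/cluster_notify.sh        # hypothetical script
    on_stop: /usr/local/bin/cluster_notify.sh
    on_role_change: /usr/local/bin/cluster_notify.sh  # e.g. repoint DNS/VIP on failover
```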
How It Works
First, the PostgreSQL and Patroni binaries need to be installed. Next, you will need to set up your DCS. All the configuration necessary to bootstrap the cluster needs to be specified in a YAML configuration file, which Patroni uses for initialization. On the first node, Patroni initializes the database, obtains the leader lock from the DCS, and ensures the node runs as the master.
The next step is adding standby nodes for which Patroni provides multiple options. By default, Patroni uses pg_basebackup to create the standby node and also supports custom methods like WAL-E, pgBackRest, Barman, and others for the standby node creation. Patroni makes it very simple to add a standby node and handles all the bootstrapping tasks and the setting up of your streaming replication.
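Replica creation methods are declared in the node configuration. A sketch assuming a pgBackRest repository is available (the stanza name and paths are assumptions), with `pg_basebackup` as the fallback:

```yaml
postgresql:
  create_replica_methods:
    - pgbackrest   # try restoring from the pgBackRest repository first
    - basebackup   # fall back to pg_basebackup from the primary
  pgbackrest:
    command: /usr/bin/pgbackrest --stanza=main --delta restore  # assumed stanza name
    keep_data: True
    no_params: True
```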
Once your cluster setup is complete, Patroni actively monitors the cluster and ensures it stays in a healthy state. The master node must renew the leader lock within the `ttl` interval (default: 30 seconds). When the master fails to renew the leader lock in time, Patroni triggers an election, and the node that obtains the leader lock is elected as the new master.
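The lease timings live in the `bootstrap.dcs` section and can be tuned. A sketch showing the documented defaults:

```yaml
bootstrap:
  dcs:
    ttl: 30            # leader lease lifetime in seconds
    loop_wait: 10      # seconds between runs of the HA monitoring loop
    retry_timeout: 10  # retry window for DCS and PostgreSQL operations
    maximum_lag_on_failover: 1048576  # bytes; replicas lagging more than this are not promoted
```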
How Does It Handle the Split-Brain Scenario?
In a distributed system, consensus plays an important role in determining consistency, and Patroni uses DCS to attain consensus. Only the node that holds the leader lock can be the master, and the leader lock is obtained via DCS. If the master node doesn’t hold the leader lock, then it will be demoted immediately by Patroni to run as a standby. This way, at any point in time, there can only be one master running in the system.
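The leader-lock mechanics can be illustrated with a toy model. This is not Patroni's implementation; `MockDCS` is a hypothetical stand-in for the atomic create-if-absent and compare-and-swap operations a real DCS such as etcd provides:

```python
import time

class MockDCS:
    """Toy in-memory stand-in for a DCS leader key with TTL semantics."""
    def __init__(self):
        self.holder = None   # name of the current leader, or None
        self.expires = 0.0   # time at which the lease expires

    def acquire(self, node, ttl):
        # Atomic create-if-absent: succeeds only if the lock is free or expired.
        now = time.monotonic()
        if self.holder is None or now >= self.expires:
            self.holder, self.expires = node, now + ttl
            return True
        return False

    def renew(self, node, ttl):
        # Compare-and-swap: only the current holder may extend its lease.
        now = time.monotonic()
        if self.holder == node and now < self.expires:
            self.expires = now + ttl
            return True
        return False

dcs = MockDCS()
assert dcs.acquire("node1", ttl=0.2)      # node1 becomes leader
assert not dcs.acquire("node2", ttl=0.2)  # node2 cannot steal a live lock
assert dcs.renew("node1", ttl=0.2)        # the holder renews its lease
time.sleep(0.3)                           # node1 "fails" to renew in time
assert dcs.acquire("node2", ttl=0.2)      # lease expired: node2 wins the election
```

Because acquisition and renewal are atomic operations on a single key, at most one node can believe it is the leader at any moment, which is exactly the property Patroni relies on to avoid split-brain.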
Are There Any Setup Requirements?
- Patroni needs Python 2.7 or above.
- The DCS and its specific Python module must be installed. For testing purposes, the DCS can be installed on the same nodes running PostgreSQL. In production, however, the DCS must be installed on separate nodes.
- The YAML configuration file must be present, with these high-level configuration sections:

**Global/Universal**: Configuration such as the name of the host (`name`), which must be unique within the cluster; the name of the cluster (`scope`); and the path for storing the configuration in the DCS (`namespace`).

**Log**: Patroni-specific log settings, including level, format, file_num, file_size, etc.

**Bootstrap configuration**: The global configuration for a cluster, which is written to the DCS. These parameters can be changed via the Patroni APIs or directly in the DCS. The bootstrap configuration includes standby creation methods, initdb parameters, the post-initialization script, etc. It also contains timeout settings and parameters governing the use of PostgreSQL features like replication slots and synchronous mode. This section is written to `/<namespace>/<scope>/config` of the configuration store after the new cluster is initialized.

**PostgreSQL**: PostgreSQL-specific parameters such as authentication, directory paths for data, binaries, and configuration, the listen IP address, etc.

**REST API**: Patroni-specific configuration for the REST API, such as the listen address, authentication, SSL, etc.

**Consul**: Settings specific to the Consul DCS.

**Etcd**: Settings specific to the etcd DCS.

**Exhibitor**: Settings specific to the Exhibitor DCS.

**Kubernetes**: Settings specific to the Kubernetes DCS.

**ZooKeeper**: Settings specific to the ZooKeeper DCS.

**Watchdog**: Settings specific to the watchdog.
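Putting these sections together, a minimal single-node configuration might look like the sketch below. All hostnames, IP addresses, paths, and passwords are placeholders, and an etcd cluster is assumed as the DCS:

```yaml
scope: demo-cluster      # cluster name, shared by all members
namespace: /service/     # DCS path prefix
name: node1              # must be unique per node

log:
  level: INFO

restapi:
  listen: 0.0.0.0:8008
  connect_address: 10.0.0.1:8008

etcd:
  hosts: 10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
  initdb:
    - encoding: UTF8
    - data-checksums

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 10.0.0.1:5432
  data_dir: /var/lib/postgresql/data
  authentication:
    superuser:
      username: postgres
      password: changeme      # placeholder
    replication:
      username: replicator
      password: changeme      # placeholder

watchdog:
  mode: automatic
  device: /dev/watchdog
```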
Patroni Pros
- Patroni enables end-to-end setup of the cluster
- Supports REST APIs and HAProxy integration
- Supports event notifications via callback scripts triggered by certain actions
- Leverages DCS for consensus
Patroni Cons
- Patroni will not detect a standby misconfigured with an unknown or non-existent node in the recovery configuration. The node will still be shown as a slave, even though the standby is running without connecting to the master/cascading standby node.
- The user needs to handle setup, management, and upgrades of the DCS software.
- Requires multiple ports to be open for component communication:
- REST API port for Patroni
- Minimum 2 ports for DCS
High Availability Test Scenarios
We conducted a few tests on PostgreSQL HA management using Patroni. All of these tests were performed while the application was running and inserting data into the PostgreSQL database. The application was written using the PostgreSQL JDBC driver, leveraging its connection failover capability.
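The connection failover referenced above comes from the JDBC driver's multi-host URL support. A sketch (hosts and database name are placeholders):

```
jdbc:postgresql://10.0.0.1:5432,10.0.0.2:5432,10.0.0.3:5432/appdb?targetServerType=primary
```

The driver tries the listed hosts in order and, with `targetServerType=primary`, connects only to the node that accepts writes; older driver versions spell this value `master`.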
Standby Server Tests
| Sl. No | Test Scenario | Observation |
|---|---|---|
| 1 | Kill the PostgreSQL process | Patroni brought the PostgreSQL process back to the running state. |
| 2 | Stop the PostgreSQL process | Patroni brought the PostgreSQL process back to the running state. |
| 3 | Reboot the server | Patroni needs to be started after a reboot unless it is configured to start on boot. Once Patroni was started, it started the PostgreSQL process and set up the standby configuration. |
| 4 | Stop the Patroni process | |

So, essentially, you need to monitor the health of the Patroni process; otherwise, it will lead to issues down the line.
Master/Primary Server Tests
| Sl. No | Test Scenario | Observation |
|---|---|---|
| 1 | Kill the PostgreSQL process | Patroni brought the PostgreSQL process back to the running state. Patroni on that node still held the primary lock, so no election was triggered. |
| 2 | Stop the PostgreSQL process and bring it back immediately after health check expiry | Patroni brought the PostgreSQL process back to the running state. Patroni on that node still held the primary lock, so no election was triggered. |
| 3 | Reboot the server | Failover happened, and one of the standby servers was elected as the new master after obtaining the lock. When Patroni was started on the old master, it brought the old master back up, performed pg_rewind, and began following the new master. |
| 4 | Stop/Kill the Patroni process | |

As you can see above, it is very important to monitor the health of the Patroni process on the master. Failure to do so can lead to a multi-master scenario and potential data loss.
Network Isolation Tests
Sl. No | Test Scenario | Observation |
1 | Network-isolate the master server from other servers | DCS communication was blocked for the master node.
|
2 | Network-isolate the standby server from other servers | DCS communication was blocked for the standby node.
|
What’s the Best PostgreSQL HA Framework?
Patroni is a valuable tool for PostgreSQL database administrators (DBAs), as it performs end-to-end setup and monitoring of a PostgreSQL cluster. The flexibility of choosing DCS and standby creation is an advantage to the end-user, as they can choose the method they are comfortable with.
REST APIs, HAProxy integration, Watchdog support, callbacks, and its feature-rich management make Patroni the best solution for PostgreSQL HA management.
PostgreSQL HA Framework Testing: PAF vs. repmgr vs. Patroni
Included below is a comprehensive table detailing the results of all the tests we have performed on all three frameworks — PostgreSQL Automatic Failover (PAF), Replication Manager (repmgr), and Patroni.
Standby Server Tests
| Test Scenario | PostgreSQL Automatic Failover (PAF) | Replication Manager (repmgr) | Patroni |
|---|---|---|---|
| Kill the PostgreSQL process | Pacemaker brought the PostgreSQL process back to the running state. | The standby server was marked as failed. Manual intervention was required to start the PostgreSQL process again. | Patroni brought the PostgreSQL process back to the running state. |
| Stop the PostgreSQL process | Pacemaker brought the PostgreSQL process back to the running state. | The standby server was marked as failed. Manual intervention was required to start the PostgreSQL process again. | Patroni brought the PostgreSQL process back to the running state. |
| Reboot the server | The standby server was marked offline initially. Once the server came up after the reboot, PostgreSQL was started by Pacemaker and the server was marked as online. If fencing was enabled, the node wouldn't have been added back to the cluster automatically. | The standby server was marked as failed. Once the server came up after the reboot, PostgreSQL was started manually and the server was marked as running. | Patroni needs to be started after a reboot unless it is configured to start on boot. Once Patroni was started, it started the PostgreSQL process and set up the standby configuration. |
| Stop the framework agent process | Agent: pacemaker | Agent: repmgrd | Agent: patroni |
Master/Primary Server Tests
| Test Scenario | PostgreSQL Automatic Failover (PAF) | Replication Manager (repmgr) | Patroni |
|---|---|---|---|
| Kill the PostgreSQL process | Pacemaker brought the PostgreSQL process back to the running state. The primary recovered within the threshold time, so no election was triggered. | repmgrd started health checks for the primary server connection on all standby servers for a fixed interval. When all retries failed, an election was triggered on all the standby servers. As a result of the election, the standby with the latest received LSN was promoted. The standbys that lost the election waited for the notification from the new master node and followed it once they received it. Manual intervention was required to start the PostgreSQL process again. | Patroni brought the PostgreSQL process back to the running state. Patroni on that node still held the primary lock, so no election was triggered. |
| Stop the PostgreSQL process and bring it back immediately after health check expiry | Pacemaker brought the PostgreSQL process back to the running state. The primary recovered within the threshold time, so no election was triggered. | repmgrd started health checks for the primary server connection on all standby servers for a fixed interval. When all retries failed, an election was triggered on all the standby nodes. However, the newly elected master didn't notify the existing standby servers since the old master was back. The cluster was left in an indeterminate state, and manual intervention was required. | Patroni brought the PostgreSQL process back to the running state. Patroni on that node still held the primary lock, so no election was triggered. |
| Reboot the server | The election was triggered by Pacemaker after the threshold time for which the master was unavailable. The most eligible standby server was promoted as the new master. Once the old master came up after the reboot, it was added back to the cluster as a standby. If fencing was enabled, the node wouldn't have been added back to the cluster automatically. | repmgrd started the election when the master connection health check failed on all standby servers. The eligible standby was promoted. When the old master came back, it didn't rejoin the cluster and was marked as failed. The `repmgr node rejoin` command was run to add the server back to the cluster. | Failover happened, and one of the standby servers was elected as the new master after obtaining the lock. When Patroni was started on the old master, it brought the old master back up, performed pg_rewind, and began following the new master. |
| Stop the framework agent process | Agent: pacemaker | Agent: repmgrd | Agent: patroni |
Network Isolation Tests
| Test Scenario | PostgreSQL Automatic Failover (PAF) | Replication Manager (repmgr) | Patroni |
|---|---|---|---|
| Network-isolate the master server from the other servers (split-brain scenario) | Corosync traffic was blocked on the master server. | Two cases were tested: all servers having the same value for `location` in the repmgr configuration, and the standby servers sharing one value for `location` while the primary had a different one. | DCS communication was blocked for the master node. |
| Network-isolate the standby server from the other servers | Corosync traffic was blocked on the standby server. | | DCS communication was blocked for the standby node. |
Further Reading
What Developers Can Do to Help Achieve High Availability