Bobcares

Postgres Docker Replication | All About

Jul 12, 2022

Postgres Docker replication requires creating a PostgreSQL master instance first, then adding a slave that connects to it using the replication protocol, all running in Docker containers.

Bobcares responds to all questions, no matter how small, as part of our Docker hosting support.

Let’s take a closer look at the steps for replicating PostgreSQL in Docker.

Postgres Docker Replication

These days, replication is an essential component of every modern infrastructure, and with scalability in mind, it contributes significantly to the delivery of high-quality software to billions of users worldwide.


PostgreSQL, with its built-in replication support, is a fantastic option for a dependable and scalable database engine. The configuration we’re about to discuss has a single master instance that handles both read and write operations, and numerous slave instances that handle read requests only. This split works well because writing data places very different demands on a database than reading it.

Create the Master instance

We’ll start by creating the Master instance. This is what the Dockerfile contains.


FROM postgres:9.6-alpine
COPY ./setup-master.sh /docker-entrypoint-initdb.d/setup-master.sh
RUN chmod 0755 /docker-entrypoint-initdb.d/setup-master.sh

The setup-master.sh file, which prepares Postgres to serve as the master in the replication process, needs to be copied into the image. Here is its content:

#!/bin/bash
echo "host replication all 0.0.0.0/0 md5" >> "$PGDATA/pg_hba.conf"
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER $PG_REP_USER REPLICATION LOGIN CONNECTION LIMIT 100 ENCRYPTED PASSWORD '$PG_REP_PASSWORD';
EOSQL
cat >> ${PGDATA}/postgresql.conf <<EOF
wal_level = hot_standby
archive_mode = on
archive_command = 'cd .'
max_wal_senders = 8
wal_keep_segments = 8
hot_standby = on
EOF
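Note that the heredoc must terminate with EOF on its own line. A quick way to sanity-check the append step locally, using a scratch directory in place of $PGDATA, is the following minimal sketch (it only exercises the heredoc mechanics, not Postgres itself):

```shell
# Sketch: reproduce the postgresql.conf append against a scratch directory
# standing in for $PGDATA.
PGDATA=$(mktemp -d)
touch "$PGDATA/postgresql.conf"
cat >> "$PGDATA/postgresql.conf" <<EOF
wal_level = hot_standby
archive_mode = on
archive_command = 'cd .'
max_wal_senders = 8
wal_keep_segments = 8
hot_standby = on
EOF
# The scratch file should now contain exactly the six appended settings.
wc -l < "$PGDATA/postgresql.conf"
```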

We now also require a Dockerfile for slave instances.

FROM postgres:9.6-alpine
ENV GOSU_VERSION 1.10
ADD ./gosu /usr/bin/
RUN chmod +x /usr/bin/gosu
RUN apk add --update iputils
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["gosu", "postgres", "postgres"]
  • We require the gosu binary so that the entrypoint, which starts as root, can drop privileges and run Postgres as the postgres user.
  • We also require the iputils package so the slave can ping the master.

Prepare the slave images

The slave images must then be turned into actual slaves in the following step. For this we use a file called docker-entrypoint.sh, which is the first thing Docker runs when starting a container.

#!/bin/bash
if [ ! -s "$PGDATA/PG_VERSION" ]; then
    echo "*:*:*:$PG_REP_USER:$PG_REP_PASSWORD" > ~/.pgpass
    chmod 0600 ~/.pgpass

    until ping -c 1 -W 1 pg_master
    do
        echo "Waiting for master to ping..."
        sleep 1s
    done

    until pg_basebackup -h pg_master -D ${PGDATA} -U ${PG_REP_USER} -vP -W
    do
        echo "Waiting for master to connect..."
        sleep 1s
    done

    echo "host replication all 0.0.0.0/0 md5" >> "$PGDATA/pg_hba.conf"
    set -e

    cat > ${PGDATA}/recovery.conf <<EOF
standby_mode = on
primary_conninfo = 'host=pg_master port=5432 user=$PG_REP_USER password=$PG_REP_PASSWORD'
trigger_file = '/tmp/touch_me_to_promote_to_me_master'
EOF

    chown postgres. ${PGDATA} -R
    chmod 700 ${PGDATA} -R
fi
sed -i 's/wal_level = hot_standby/wal_level = replica/g' ${PGDATA}/postgresql.conf
exec "$@"
  1. On the first line, we check the PG_VERSION file in the $PGDATA path to see whether this instance has already been set up, so that the setup does not run again each time the container starts.
  2. We put the replication user and password in the .pgpass file so that Postgres can access them without prompting.
  3. To make sure the master is already operational, we begin pinging it.
  4. Finally, we write the slave server’s recovery configuration as needed.
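The .pgpass format used in step 2 is hostname:port:database:username:password, and Postgres refuses to read the file unless its permissions are 0600. A minimal sketch of that step, using hypothetical credentials (matching the compose file later in this post) and a temporary file in place of ~/.pgpass:

```shell
# Hypothetical values for illustration only.
PG_REP_USER=support
PG_REP_PASSWORD=support_password

# The three wildcards match any host, port, and database for this user.
PGPASSFILE=$(mktemp)
echo "*:*:*:$PG_REP_USER:$PG_REP_PASSWORD" > "$PGPASSFILE"

# Postgres ignores the file unless only the owner can read and write it.
chmod 0600 "$PGPASSFILE"
cat "$PGPASSFILE"
```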

Create a docker-compose file

To launch our database containers, we simply need to create a docker-compose.yml file.

version: "3"
services:
  pg_master:
    build: ./master
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=bobcares
      - POSTGRES_PASSWORD=bobcares_password
      - POSTGRES_DB=bobcares
      - PG_REP_USER=support
      - PG_REP_PASSWORD=support_password
    networks:
      default:
        aliases:
          - pg_cluster
  pg_slave:
    build: ./slave
    environment:
      - POSTGRES_USER=bobcares
      - POSTGRES_PASSWORD=bobcares_password
      - POSTGRES_DB=bobcares
      - PG_REP_USER=support
      - PG_REP_PASSWORD=support_password
    networks:
      default:
        aliases:
          - pg_cluster
volumes:
  pg_data:

Keep in mind that we placed the setup-master.sh file and the master’s Dockerfile in the master directory, while the gosu binary, the docker-entrypoint.sh script, and the slave’s Dockerfile went into the slave directory.
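Based on the Dockerfiles above (the master’s Dockerfile copies setup-master.sh, the slave’s copies gosu and docker-entrypoint.sh), the project layout looks like this:

```
.
├── docker-compose.yml
├── master
│   ├── Dockerfile
│   └── setup-master.sh
└── slave
    ├── Dockerfile
    ├── docker-entrypoint.sh
    └── gosu
```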

Running the setup

Finally, we have two different ways to operate this setup.

  1. Using Docker compose is the first one.

    docker-compose up

  2. The second approach is to use Docker Swarm.

We must build these images and push them to a Docker registry in order to be able to run the containers in a swarm. After replacing build: ./master and build: ./slave with the corresponding image: references, run the following command.

    docker stack deploy -c docker-compose.yml my_pg_replication

    By doing this, we can easily scale the slaves up to however many we require. However, we must remember to keep the master’s max_wal_senders = 8 setting in line with the number of slaves.

    docker service scale my_pg_replication_slave=8
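For the swarm case, the build: keys in docker-compose.yml are replaced by image: references pointing at the pushed images. The repository name below is hypothetical; substitute your own registry path:

```yaml
# Hypothetical registry/repository names -- adjust to your own.
services:
  pg_master:
    image: myrepo/pg_master:latest
  pg_slave:
    image: myrepo/pg_slave:latest
```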

If you’ve noticed, we have set two network aliases: pg_master and pg_cluster. In the application, write operations should go to pg_master, while read operations can use pg_cluster, and Docker will automatically route those requests to an appropriate container instance.
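Assuming the credentials from the compose file above, the application’s connection strings would look something like this:

```
# Writes always target the master directly:
postgres://bobcares:bobcares_password@pg_master:5432/bobcares

# Reads can target the cluster alias, which resolves to any instance:
postgres://bobcares:bobcares_password@pg_cluster:5432/bobcares
```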


Conclusion

In conclusion, replication is a crucial part of any modern infrastructure and, with scalability in mind, it contributes significantly to delivering high-quality software to users worldwide. Our support team explained the procedure above.

