How to Perform PostgreSQL Backup and Restore Operations with pg_basebackup?

I have tried to document, step by step, how to perform PostgreSQL backup and restore operations with pg_basebackup. I hope it will be useful for those who need it!

High Level Installation and Configuration Steps

1. General Steps
1.1. Environment Overview
1.2. Set Hostnames on Both Servers
1.3. Update /etc/hosts Files on Both Servers
1.4. Disable Firewall on Both Servers
1.5. Install PostgreSQL 16 on Both Servers
1.6. Enable WAL Archiving (Required for PITR & Consistent Backups) on Primary Server
2. Full Restore Test Steps
2.1. Create Sample Database and Data on Primary Server
2.2. Take a Base Backup (with WAL, Timestamp, and Compression) on Primary Server
2.3. Transfer Backup and WALs to Restore Server from Primary Server
2.4. Prepare the Environment on Restore Server
2.5. Extract and Restore the Backup on Restore Server
2.6. Verify Data After Restore on Restore Server
3. Point-in-Time Recovery (PITR) Test (Simulated Data Deletion) Steps
3.1. Create Sample Database and Data on Primary Server
3.2. Take a Base Backup (with WAL, Timestamp, and Compression) on Primary Server
3.3. Add Another Database and Data After Backup on Primary Server
3.4. Record Current Time (Before Data Loss) on Primary Server
3.5. Simulate Accidental Data Deletion on Primary Server
3.6. Ensure All WALs Are Archived on Primary Server
3.7. Transfer Backup and WALs to Restore Server from Primary Server
3.8. Prepare the Restore Server Environment
3.9. Extract and Restore the Backup on Restore Server
3.10. Restore Archived WALs on Restore Server
3.11. Update Config on Restore Server
3.12. Create recovery.signal on Restore Server
3.13. Start PostgreSQL and Monitor Recovery on Restore Server
3.14. Verify PITR Result (Data Restored) on Restore Server
3.15. Cleanup on Restore Server (Optional)

1. General Steps

1.1. Environment Overview

1.2. Set Hostnames on Both Servers

192.168.1.151 -> sudo hostnamectl set-hostname postgres16-primary
192.168.1.152 -> sudo hostnamectl set-hostname postgres16-restoretest

1.3. Update /etc/hosts Files on Both Servers

vi /etc/hosts

192.168.1.151 postgres16-primary
192.168.1.152 postgres16-restoretest

1.4. Disable Firewall on Both Servers

systemctl status ufw
systemctl stop ufw
sudo systemctl disable ufw
sudo systemctl mask ufw
systemctl status ufw

1.5. Install PostgreSQL 16 on Both Servers

Install PostgreSQL Package Tools

# This package provides tools required to manage PostgreSQL versions and to install the official PostgreSQL APT repository.

sudo apt update
sudo apt install -y postgresql-common

Add the Official PostgreSQL APT Repository

# This script automatically adds the PostgreSQL Global Development Group (PGDG) APT repository to your 
# system, enabling access to the latest PostgreSQL versions — including PostgreSQL 16.

sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh

Update Package List and Install PostgreSQL 16 with Additional Modules

# postgresql-16: The core PostgreSQL 16 server package.
# postgresql-contrib-16: Adds useful extensions like pg_stat_statements, uuid-ossp, and more.

sudo apt update
sudo apt install -y postgresql-16 postgresql-contrib-16
psql --version

1.6. Enable WAL Archiving (Required for PITR & Consistent Backups) on Primary Server

# Backup PostgreSQL config before editing:
sudo cp -p /etc/postgresql/16/main/postgresql.conf /etc/postgresql/16/main/postgresql.conf_original

# Edit PostgreSQL config:
sudo vi /etc/postgresql/16/main/postgresql.conf

# Add or modify:
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f'
wal_level = replica

# Create WAL archive directory:
sudo mkdir -p /var/lib/postgresql/wal_archive
sudo chown postgres:postgres /var/lib/postgresql/wal_archive
sudo chmod 700 /var/lib/postgresql/wal_archive

# Restart PostgreSQL:
sudo systemctl restart postgresql@16-main

# Check PostgreSQL:
sudo systemctl status postgresql@16-main

# Verify settings:
sudo -u postgres psql -c "SHOW archive_mode;"
sudo -u postgres psql -c "SHOW archive_command;"
sudo -u postgres psql -c "SHOW wal_level;"

# Explanation:
Each time a new WAL file is completed, PostgreSQL executes:
cp %p /var/lib/postgresql/wal_archive/%f

%p → the full path of the source WAL file
%f → the file name (e.g., 0000000100000000000000A1)
cp ... → copies the file to the specified directory

In short:
Every time a WAL segment file is completed, PostgreSQL copies it to the directory /var/lib/postgresql/wal_archive/.
This is required for point-in-time recovery (PITR) and for consistent backups.
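
A quick way to verify that archiving actually works is to force a WAL switch and check the archive directory (this uses the standard pg_switch_wal() function and the pg_stat_archiver view, with the same archive directory as above):

# Force a WAL switch so the current segment gets archived:
sudo -u postgres psql -c "SELECT pg_switch_wal();"

# Check archiver statistics and the archive directory:
sudo -u postgres psql -c "SELECT archived_count, last_archived_wal, failed_count FROM pg_stat_archiver;"
ls -lt /var/lib/postgresql/wal_archive | head -5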

2. Full Restore Test Steps

2.1. Create Sample Database and Data on Primary Server

##### Steps Used When Repeating Tests #####
sudo -u postgres psql
\l
drop database restore_test;
drop database pitr_test;
drop database pitr_test_2;
\l
##### Steps Used When Repeating Tests #####

sudo -u postgres psql

CREATE DATABASE restore_test;
\c restore_test
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name TEXT,
    department TEXT,
    salary NUMERIC
);
INSERT INTO employees (name, department, salary) VALUES
('Alice', 'Finance', 6500),
('Bob', 'IT', 7200),
('Charlie', 'HR', 5500);
SELECT * FROM employees;
\q

Expected:
 id |  name    | department | salary
----+----------+-------------+--------
  1 | Alice    | Finance     | 6500
  2 | Bob      | IT          | 7200
  3 | Charlie  | HR          | 5500

2.2. Take a Base Backup (with WAL, Timestamp, and Compression) on Primary Server

##### Steps Used When Repeating Tests #####
# Remove Older Backups If They Exist
ls -lrt /backup/
rm -f /backup/*
ls -lrt /backup/
##### Steps Used When Repeating Tests #####

# Define backup location and timestamp:
BACKUP_DIR="/backup"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
FULL_BACKUP_PATH="$BACKUP_DIR/basebackup_$TIMESTAMP"

# Create backup directory:
sudo mkdir -p "$FULL_BACKUP_PATH"
sudo chown postgres:postgres "$FULL_BACKUP_PATH"

# Take the base backup:
# Includes all required WAL files for consistency.
sudo -u postgres pg_basebackup \
  -D "$FULL_BACKUP_PATH" \
  -Fp \
  -Xs \
  -P \
  -v \
  -h localhost \
  -p 5432 \
  -U postgres
# Compress backup:
cd "$BACKUP_DIR"
sudo tar -czvf "basebackup_$TIMESTAMP.tar.gz" "basebackup_$TIMESTAMP"
sudo rm -rf "basebackup_$TIMESTAMP"

# Check Resulting file:
pwd
ls -lrt
basebackup_20251026_125856.tar.gz
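
If you take this backup regularly, the same steps can be wrapped into a small script. This is only a minimal sketch assuming the same /backup location and local connection as above; adjust it before relying on it, and run it as root (or via sudo):

#!/bin/bash
# take_basebackup.sh - minimal sketch of the manual steps above
set -euo pipefail

BACKUP_DIR="/backup"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
FULL_BACKUP_PATH="$BACKUP_DIR/basebackup_$TIMESTAMP"

# Create the target directory and hand it to the postgres user
mkdir -p "$FULL_BACKUP_PATH"
chown postgres:postgres "$FULL_BACKUP_PATH"

# Plain-format base backup with streamed WAL (-Xs), as in the manual steps
sudo -u postgres pg_basebackup -D "$FULL_BACKUP_PATH" -Fp -Xs -P -v -h localhost -p 5432 -U postgres

# Compress the backup and remove the uncompressed copy
tar -czf "$BACKUP_DIR/basebackup_$TIMESTAMP.tar.gz" -C "$BACKUP_DIR" "basebackup_$TIMESTAMP"
rm -rf "$FULL_BACKUP_PATH"

echo "Backup written to $BACKUP_DIR/basebackup_$TIMESTAMP.tar.gz"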

2.3. Transfer Backup and WALs to Restore Server from Primary Server

##### Steps Used When Repeating Tests #####
# basebackup_20251026_125856.tar.gz
sudo -u postgres scp /backup/basebackup_20251026_125856.tar.gz postgres16-restoretest:/backup/

# Remove Older wal_archive directory (if it exists) from Restore Server
ls -lrt /var/lib/postgresql/wal_archive
rm -rf /var/lib/postgresql/wal_archive
ls -lrt /var/lib/postgresql/wal_archive
sudo -u postgres scp -r /var/lib/postgresql/wal_archive postgres16-restoretest:/var/lib/postgresql/
##### Steps Used When Repeating Tests #####

# Create backup directory on Restore Server:
sudo mkdir -p /backup
sudo chown postgres:postgres /backup/

# Remove Older wal_archive directory (if it exists) from Restore Server
ls -lrt /var/lib/postgresql/wal_archive
rm -rf /var/lib/postgresql/wal_archive
ls -lrt /var/lib/postgresql/wal_archive

# Remove Older backup files (if they exist) from Restore Server
ls -lrt /backup
rm -rf /backup/*
ls -lrt /backup

# Using scp (replace basebackup_20251026_125856.tar.gz with your own backup file name):
sudo -u postgres scp /backup/basebackup_20251026_125856.tar.gz postgres16-restoretest:/backup/
sudo -u postgres scp -r /var/lib/postgresql/wal_archive postgres16-restoretest:/var/lib/postgresql/

2.4. Prepare the Environment on Restore Server

##### Steps Used When Repeating Tests #####
ls -lrt /var/lib/postgresql/16/
rm -rf /var/lib/postgresql/16/main_old
ls -lrt /var/lib/postgresql/16/
##### Steps Used When Repeating Tests #####

# Check PostgreSQL:
sudo systemctl status postgresql@16-main

# Stop PostgreSQL:
sudo systemctl stop postgresql@16-main

# Clean or rename old data directory:
sudo mv /var/lib/postgresql/16/main /var/lib/postgresql/16/main_old

# Check data directory:
ls -lrt /var/lib/postgresql/16/

2.5. Extract and Restore the Backup on Restore Server

##### Steps Used When Repeating Tests #####
# basebackup_20251026_125856.tar.gz
cd /backup
ls -lrt
sudo tar -xzvf basebackup_20251026_125856.tar.gz -C /backup
sudo cp -R /backup/basebackup_20251026_125856 /var/lib/postgresql/16/main
sudo chown -R postgres:postgres /var/lib/postgresql/16/main
sudo chmod 700 /var/lib/postgresql/16/main
ls -lrt /var/lib/postgresql/16/main
##### Steps Used When Repeating Tests #####

# Uncompress the backup (replace basebackup_20251026_125856.tar.gz with your own backup file name)
cd /backup
ls -lrt
sudo tar -xzvf basebackup_20251026_125856.tar.gz -C /backup

# Copy data into the PostgreSQL data directory (replace basebackup_20251026_125856 with your own backup name)
sudo cp -R /backup/basebackup_20251026_125856 /var/lib/postgresql/16/main
sudo chown -R postgres:postgres /var/lib/postgresql/16/main
sudo chmod 700 /var/lib/postgresql/16/main
ls -lrt /var/lib/postgresql/16/main

2.6. Verify Data After Restore on Restore Server

# Start PostgreSQL to ensure the base backup itself is consistent:
sudo systemctl start postgresql@16-main

# Check status:
sudo systemctl status postgresql@16-main

# Then connect and verify:
sudo -u postgres psql
\l
\c restore_test
SELECT * FROM employees;
\q

# Expected output:

 id |  name    | department | salary
----+----------+-------------+--------
  1 | Alice    | Finance     | 6500
  2 | Bob      | IT          | 7200
  3 | Charlie  | HR          | 5500
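
The same check can also be run non-interactively (same database and table as above):

sudo -u postgres psql -d restore_test -c "SELECT count(*) FROM employees;"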

# If everything looks correct, stop PostgreSQL again before proceeding with PITR:
sudo systemctl stop postgresql@16-main

3. Point-in-Time Recovery (PITR) Test (Simulated Data Deletion) Steps

3.1. Create Sample Database and Data on Primary Server

##### Steps Used When Repeating Tests #####
sudo -u postgres psql
\l
drop database restore_test;
drop database pitr_test;
drop database pitr_test_2;
\l
##### Steps Used When Repeating Tests #####

sudo -u postgres psql

CREATE DATABASE pitr_test;
\c pitr_test
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name TEXT,
    department TEXT,
    salary NUMERIC
);
INSERT INTO employees (name, department, salary) VALUES
('Alice', 'Finance', 6500),
('Bob', 'IT', 7200),
('Charlie', 'HR', 5500);
SELECT * FROM employees;
\q

Expected:
 id |  name    | department | salary
----+----------+-------------+--------
  1 | Alice    | Finance     | 6500
  2 | Bob      | IT          | 7200
  3 | Charlie  | HR          | 5500

3.2. Take a Base Backup (with WAL, Timestamp, and Compression) on Primary Server

##### Steps Used When Repeating Tests #####
# Remove Older Backups If They Exist
ls -lrt /backup/
rm -f /backup/*
ls -lrt /backup/
##### Steps Used When Repeating Tests #####

# Define backup location and timestamp:
BACKUP_DIR="/backup"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
FULL_BACKUP_PATH="$BACKUP_DIR/basebackup_$TIMESTAMP"

# Create backup directory:
sudo mkdir -p "$FULL_BACKUP_PATH"
sudo chown postgres:postgres "$FULL_BACKUP_PATH"

# Take the base backup:
# Includes all required WAL files for consistency.
sudo -u postgres pg_basebackup \
  -D "$FULL_BACKUP_PATH" \
  -Fp \
  -Xs \
  -P \
  -v \
  -h localhost \
  -p 5432 \
  -U postgres
# Compress backup:
cd "$BACKUP_DIR"
sudo tar -czvf "basebackup_$TIMESTAMP.tar.gz" "basebackup_$TIMESTAMP"
sudo rm -rf "basebackup_$TIMESTAMP"

# Check Resulting file:
pwd
ls -lrt
basebackup_20251026_142852.tar.gz

3.3. Add Another Database and Data After Backup on Primary Server

sudo -u postgres psql

CREATE DATABASE pitr_test_2;
\c pitr_test_2
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name TEXT,
    department TEXT,
    salary NUMERIC
);
INSERT INTO employees (name, department, salary) VALUES
('Alice', 'Finance', 6500),
('Bob', 'IT', 7200),
('Charlie', 'HR', 5500);
SELECT * FROM employees;
SELECT pg_switch_wal();
\q

Expected:
 id |  name    | department | salary
----+----------+-------------+--------
  1 | Alice    | Finance     | 6500
  2 | Bob      | IT          | 7200
  3 | Charlie  | HR          | 5500

3.4. Record Current Time (Before Data Loss) on Primary Server

# On Primary Server:
sudo -u postgres psql -d pitr_test_2 -c "SELECT now();"

Example:
root@postgres16-primary:~# sudo -u postgres psql -d pitr_test_2 -c "SELECT now();"
             now              
------------------------------
 2025-10-26 14:33:47.85731+03
(1 row)

# This is our recovery target time:
recovery_target_time = '2025-10-26 14:33:47'

3.5. Simulate Accidental Data Deletion on Primary Server

# Check:
sudo -u postgres psql -d pitr_test_2 -c "SELECT * FROM employees;"

# Delete
sudo -u postgres psql -d pitr_test_2 -c "DELETE FROM employees WHERE name = 'Charlie';"

# Check:
sudo -u postgres psql -d pitr_test_2 -c "SELECT * FROM employees;"

# Output now shows data loss:
 id |  name    | department | salary
----+----------+-------------+--------
  1 | Alice    | Finance     | 6500
  2 | Bob      | IT          | 7200
-- Charlie row missing --

3.6. Ensure All WALs Are Archived on Primary Server

# Ensure all WALs exist in the wal_archive directory on the Primary Server.
# This is important; otherwise, changes made after the backup cannot be replayed during recovery.
ls -lrt /var/lib/postgresql/16/main/pg_wal
ls -lrt /var/lib/postgresql/wal_archive | tail -10
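
To be extra safe, you can force one more WAL switch so that the segment containing the latest changes (including the delete above) is archived before the transfer (standard functions, same archive directory as before):

sudo -u postgres psql -c "SELECT pg_switch_wal();"
sudo -u postgres psql -c "SELECT last_archived_wal, last_failed_wal FROM pg_stat_archiver;"
ls -lrt /var/lib/postgresql/wal_archive | tail -3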

3.7. Transfer Backup and WALs to Restore Server from Primary Server

##### Steps Used When Repeating Tests #####
# basebackup_20251026_142852.tar.gz
sudo -u postgres scp /backup/basebackup_20251026_142852.tar.gz postgres16-restoretest:/backup/

# Remove Older wal_archive directory (if it exists) from Restore Server
ls -lrt /var/lib/postgresql/wal_archive
rm -rf /var/lib/postgresql/wal_archive
ls -lrt /var/lib/postgresql/wal_archive
sudo -u postgres scp -r /var/lib/postgresql/wal_archive postgres16-restoretest:/var/lib/postgresql/
##### Steps Used When Repeating Tests #####

# Create backup directory on Restore Server:
sudo mkdir -p /backup
sudo chown postgres:postgres /backup/

# Remove Older wal_archive directory (if it exists) from Restore Server
ls -lrt /var/lib/postgresql/wal_archive
rm -rf /var/lib/postgresql/wal_archive
ls -lrt /var/lib/postgresql/wal_archive

# Remove Older backup files (if they exist) from Restore Server
ls -lrt /backup
rm -rf /backup/*
ls -lrt /backup

# Using scp (replace basebackup_20251026_142852.tar.gz with your own backup file name):
sudo -u postgres scp /backup/basebackup_20251026_142852.tar.gz postgres16-restoretest:/backup/
sudo -u postgres scp -r /var/lib/postgresql/wal_archive postgres16-restoretest:/var/lib/postgresql/

3.8. Prepare the Restore Server Environment

##### Steps Used When Repeating Tests #####
ls -lrt /var/lib/postgresql/16/
rm -rf /var/lib/postgresql/16/main_old
ls -lrt /var/lib/postgresql/16/
##### Steps Used When Repeating Tests #####

# Check PostgreSQL:
sudo systemctl status postgresql@16-main

# Stop PostgreSQL:
sudo systemctl stop postgresql@16-main

# Clean or rename old data directory:
ls -lrt /var/lib/postgresql/16/
rm -rf /var/lib/postgresql/16/main_old
sudo mv /var/lib/postgresql/16/main /var/lib/postgresql/16/main_old

# Check data directory:
ls -lrt /var/lib/postgresql/16/

3.9. Extract and Restore the Backup on Restore Server

##### Steps Used When Repeating Tests #####
# basebackup_20251026_142852.tar.gz
cd /backup
ls -lrt
sudo tar -xzvf basebackup_20251026_142852.tar.gz -C /backup
sudo cp -R /backup/basebackup_20251026_142852 /var/lib/postgresql/16/main
sudo chown -R postgres:postgres /var/lib/postgresql/16/main
sudo chmod 700 /var/lib/postgresql/16/main
ls -lrt /var/lib/postgresql/16/main
##### Steps Used When Repeating Tests #####

# Uncompress the backup (replace basebackup_20251026_142852.tar.gz with your own backup file name)
cd /backup
ls -lrt
sudo tar -xzvf basebackup_20251026_142852.tar.gz -C /backup

# Copy data into PostgreSQL data directory:
sudo cp -R /backup/basebackup_20251026_142852 /var/lib/postgresql/16/main
sudo chown -R postgres:postgres /var/lib/postgresql/16/main
sudo chmod 700 /var/lib/postgresql/16/main
ls -lrt /var/lib/postgresql/16/main

3.10. Restore Archived WALs on Restore Server

# Delete Older WALs on Restore Server
ls -lrt /var/lib/postgresql/16/main/pg_wal/
rm -f /var/lib/postgresql/16/main/pg_wal/0*
ls -lrt /var/lib/postgresql/16/main/pg_wal/
# Copy Archived WALs to pg_wal Directory 
ls -lrt /var/lib/postgresql/wal_archive
ls -lrt /var/lib/postgresql/16/main/pg_wal/
sudo -u postgres cp /var/lib/postgresql/wal_archive/* /var/lib/postgresql/16/main/pg_wal/
ls -lrt /var/lib/postgresql/16/main/pg_wal/

3.11. Update Config on Restore Server

# Check recovery_target_time from "Record Current Time (Before Data Loss) on Primary Server" Section
# recovery_target_time = '2025-10-26 14:33:47'

sudo vi /etc/postgresql/16/main/postgresql.conf

# Add or modify:
restore_command = 'cp /var/lib/postgresql/wal_archive/%f %p'
recovery_target_time = '2025-10-26 14:33:47'

# Verify settings:
more /etc/postgresql/16/main/postgresql.conf | grep restore_command
more /etc/postgresql/16/main/postgresql.conf | grep recovery_target_time | grep -v recovery_target_timeline
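
By default recovery pauses once the target is reached, which is why pg_wal_replay_resume() is executed in a later step. If you prefer the server to promote automatically instead, the standard recovery_target_action parameter can optionally be set as well:

# Optional: promote automatically when the target time is reached
recovery_target_action = 'promote'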

3.12. Create recovery.signal on Restore Server

# recovery.signal must be owned by postgres user.

ls -lrt /var/lib/postgresql/16/main/recovery.signal
sudo -u postgres touch /var/lib/postgresql/16/main/recovery.signal
ls -lrt /var/lib/postgresql/16/main/recovery.signal

3.13. Start PostgreSQL and Monitor Recovery on Restore Server

# Start PostgreSQL to begin WAL replay and point-in-time recovery:
sudo systemctl start postgresql@16-main

# Check status:
sudo systemctl status postgresql@16-main

# Exit Read-Only Mode (recovery pauses at the target time; resuming promotes the server)
sudo -u postgres psql
SELECT pg_wal_replay_resume();
# Monitor the logs: 
sudo tail -20f /var/log/postgresql/postgresql-16-main.log

# Watch for:
database system is ready to accept connections
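
Recovery progress can also be checked from psql with standard functions; pg_is_in_recovery() returns true until the server is promoted:

sudo -u postgres psql -c "SELECT pg_is_in_recovery();"
sudo -u postgres psql -c "SELECT pg_last_wal_replayed_lsn();"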

3.14. Verify PITR Result (Data Restored) on Restore Server

# Then connect and verify:
sudo -u postgres psql
\l

\c pitr_test
SELECT * FROM employees;

# Expected output:

 id |  name    | department | salary
----+----------+-------------+--------
  1 | Alice    | Finance     | 6500
  2 | Bob      | IT          | 7200
  3 | Charlie  | HR          | 5500

\c pitr_test_2
SELECT * FROM employees;

✅ Expected (Charlie restored!):

 id |  name    | department | salary
----+----------+-------------+--------
  1 | Alice    | Finance     | 6500
  2 | Bob      | IT          | 7200
  3 | Charlie  | HR          | 5500

3.15. Cleanup on Restore Server (Optional)

ls -lrt /var/lib/postgresql/16/
sudo rm -rf /var/lib/postgresql/16/main_old
ls -lrt /var/lib/postgresql/16/

How to Perform PostgreSQL Backup and Restore Operations with pgBackRest?

I have tried to document, step by step, how to perform PostgreSQL backup and restore operations with pgBackRest. I hope it will be useful for those who need it!

High Level Installation and Configuration Steps

1-Setup Environment Details
2-Set Hostnames on Each Server
3-Update /etc/hosts Files on All Servers
4-Disable Firewall on All Servers
5-Install PostgreSQL on Primary DB Server & Restore Test Server
6-Install pgBackRest on All Servers
7-Setup Initial Configuration for Backup Server
8-Setup Initial Configuration on Primary DB Server & Restore Test Server
9-Set Passwordless SSH between Primary DB Server & Backup Server
10-Set Passwordless SSH between Restore Test Server & Backup Server
11-Configure Archive Parameters on Primary DB Server
12-Configure the Stanza on Backup Server
13-Try to Take First Full Backup on Backup Server
14-Try to Schedule First Backups on Backup Server
15-Check Backup & Expire Log Files on Backup Server
16-Check Detail Backup Information on Backup Server
17-Full Backup Restore Steps on Restore Test Server
18-Delta Changes Restore Steps on Restore Test Server
19-Specific Database Restore Steps on Restore Test Server
20-Point-In-Time Recovery (PITR) Steps using Time on Restore Test Server
21-Point-In-Time Recovery (PITR) Steps using LSN on Restore Test Server
22-Multi-Repo Configuration for Backup Redundancy (If Needed)
23-Configure Backup Retention Parameters
24-Expire Backups Manually (If Needed)

Low Level Installation and Configuration Steps

1-Setup Environment Details

2-Set Hostnames on Each Server

192.168.1.151 -> sudo hostnamectl set-hostname postgres-primary
192.168.1.152 -> sudo hostnamectl set-hostname postgres-pgbackrest
192.168.1.153 -> sudo hostnamectl set-hostname postgres-restoretest

3-Update /etc/hosts Files on All Servers

This step should be repeated on the servers below:

  • postgres-primary
  • postgres-pgbackrest
  • postgres-restoretest

vi /etc/hosts

192.168.1.151 postgres-primary
192.168.1.152 postgres-pgbackrest
192.168.1.153 postgres-restoretest

4-Disable Firewall on All Servers

This step should be repeated on the servers below:

  • postgres-primary
  • postgres-pgbackrest
  • postgres-restoretest

systemctl status ufw
systemctl stop ufw
sudo systemctl disable ufw
sudo systemctl mask ufw
systemctl status ufw

5-Install PostgreSQL on Primary DB Server & Restore Test Server

This step should be repeated on the servers below:

  • postgres-primary
  • postgres-restoretest

Install PostgreSQL Package Tools

This package provides tools required to manage PostgreSQL versions and to install the official PostgreSQL APT repository.

sudo apt update
sudo apt install -y postgresql-common

Add the Official PostgreSQL APT Repository

This script automatically adds the PostgreSQL Global Development Group (PGDG) APT repository to your system, enabling access to the latest PostgreSQL versions — including PostgreSQL 16.

sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh

Update Package List and Install PostgreSQL 16 with Additional Modules

  • postgresql-16: The core PostgreSQL 16 server package.
  • postgresql-contrib-16: Adds useful extensions like pg_stat_statements, uuid-ossp, and more.
sudo apt update
sudo apt install -y postgresql-16 postgresql-contrib-16
psql --version

6-Install pgBackRest on All Servers

IMPORTANT NOTE: The pgBackRest version must be the same on all servers, otherwise you will encounter an error.

This step should be repeated on the servers below:

  • postgres-primary
  • postgres-pgbackrest
  • postgres-restoretest

sudo apt update
sudo apt-get install pgbackrest
which pgbackrest
pgbackrest version

# To Install Specific pgBackRest Version Follow Below Steps
# If another version is installed, remove it first:
sudo apt remove pgbackrest
apt-cache policy pgbackrest
sudo apt install pgbackrest=2.50-1build2
pgbackrest version

7-Setup Initial Configuration for Backup Server

# Create a directory in which the backup files will reside and make sure that it is owned by postgres user
sudo mkdir -p /data/backups
sudo chown postgres:postgres /data/backups
 
#Create the log directory for pgBackRest and keep postgres as the owner of this log directory:
sudo mkdir -p -m 770 /var/log/pgbackrest
sudo chown postgres:postgres /var/log/pgbackrest

#Create a directory where the configuration files of the pgBackRest will reside and set proper permission and owner for the config as well:
sudo mkdir -p /etc/pgbackrest
sudo touch /etc/pgbackrest/pgbackrest.conf 
sudo chmod 640 /etc/pgbackrest/pgbackrest.conf
sudo chown postgres:postgres /etc/pgbackrest/pgbackrest.conf
ls -lrt /etc/pgbackrest/pgbackrest.conf
 
#Add below contents in /etc/pgbackrest/pgbackrest.conf
vi /etc/pgbackrest/pgbackrest.conf

[global]
repo1-path=/data/backups
log-level-file=detail
start-fast=y

[demo]
pg1-host=192.168.1.151
pg1-path=/var/lib/postgresql/16/main
pg1-host-user=postgres

# Check the contents of /etc/pgbackrest/pgbackrest.conf
more /etc/pgbackrest/pgbackrest.conf

Explanation of the options:

[global]
These are global settings that apply to all database instances (called stanzas in pgBackRest).

- repo1-path=/data/backups
This defines the path where backups will be stored. It can be a local directory or a mounted remote location.

- log-level-file=detail
Sets the logging level for log files to detail, which provides more in-depth logging (useful for troubleshooting and auditing).

- start-fast=y
Enables a fast start by forcing PostgreSQL to run a checkpoint more aggressively at the beginning of the backup. This speeds up the backup process but may use more resources.


[demo]
This section defines a stanza, named demo, which represents a specific PostgreSQL instance.

-pg1-host=192.168.1.151
The IP address of the primary PostgreSQL server.

-pg1-path=/var/lib/postgresql/16/main
The data directory path of the PostgreSQL instance (typically the default location for PostgreSQL 16).

-pg1-host-user=postgres
The SSH user that pgBackRest will use to connect to the remote server (e.g., ssh postgres@192.168.1.151).

8-Setup Initial Configuration on Primary DB Server & Restore Test Server

This step should be repeated on the servers below:

  • postgres-primary
  • postgres-restoretest

#Check PostgreSQL cluster
sudo su - postgres
pg_lsclusters

#Create the log directory for pgBackRest and keep postgres as the owner of this log directory:
exit
sudo mkdir -p -m 770 /var/log/pgbackrest
sudo chown postgres:postgres /var/log/pgbackrest

#Create a directory where the configuration files of the pgBackRest will reside and set proper permission and owner for the config as well:
sudo mkdir -p /etc/pgbackrest
sudo touch /etc/pgbackrest/pgbackrest.conf
sudo chmod 640 /etc/pgbackrest/pgbackrest.conf
sudo chown postgres:postgres /etc/pgbackrest/pgbackrest.conf
ls -lrt /etc/pgbackrest/pgbackrest.conf

#Add below contents in /etc/pgbackrest/pgbackrest.conf
vi /etc/pgbackrest/pgbackrest.conf

[global]
repo1-host=192.168.1.152
repo1-path=/data/backups
repo1-host-user=postgres
log-level-file=detail
start-fast=y

[demo]
pg1-path=/var/lib/postgresql/16/main
pg1-host-user=postgres

# Check the contents of /etc/pgbackrest/pgbackrest.conf
more /etc/pgbackrest/pgbackrest.conf

Explanation of the options:

[global] Section
This section contains settings that apply globally across all stanzas (like [demo] below).

-repo1-host=192.168.1.152	
IP address of the remote backup server where the backups will be stored.

-repo1-path=/data/backups	
Directory path on the backup server where backups will be saved.

-repo1-host-user=postgres	
Linux user used to connect to the backup server via SSH (usually the same user running PostgreSQL).

-log-level-file=detail	
Sets the logging level for log files. detail means detailed logs will be written.

-start-fast=y	
Forces an immediate checkpoint at the start of the backup so the backup can begin right away, instead of waiting for the next regular checkpoint.


[demo] Section
This is the stanza definition (a named PostgreSQL instance). You can have multiple stanzas, each representing a different database cluster.

-pg1-path=/var/lib/postgresql/16/main	
Path to the PostgreSQL data directory on the primary server.

-pg1-host-user=postgres	
The Linux user that connects to the PostgreSQL host (typically the postgres user).

9-Set Passwordless SSH between Primary DB Server & Backup Server

Generate and Note SSH Key on Backup Server

#Switch to postgres user on Backup Server
sudo su - postgres
 
#Generate the SSH key on Backup Server
ssh-keygen -t rsa

#Switch to hidden directory under user’s home directory on Backup Server
cd ~/.ssh

# Check the created files. public key with .pub extension and private key without any extension on Backup Server
ls -lrt

# Check and note public key on Backup Server
more id_rsa.pub

postgres@postgres-pgbackrest:~/.ssh$ more id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+0LaXhqf9IdlKtUOfMSt52gDBzm/lVHRCUe+bQUr4R0skWFnQ2aBHXM4Z9z5UE2HaxPVrCtl5yMHMakzz39XSioaO728XBKyuP8nEW9XziYOsLuIy7zmJcA0TAubM9+QaXrIbNI2wY6jesNuz22Bbk2tPRL1eJK7gotGZ4pmB9KFk0gCzrDz2IuvnzFO+f85/KRmlm2HwkN1d9leSqRHdQSg/T6TVoJR87hn2YdVNWSQgTalDDC/h3D5obM0/SVBYiKbxiZXqCPPhICVVk/cXJHyMX5p5G+maHUJ2Yk5D/DfJrRwbfWYmvz3iux7ihExJqRTMRohoWk8/S6RgKMWk+FPNtfKukxf72LcNQVpeNdbwxqU4azrCqjeSAdyZGk6hYCs/X50iC9dMOQQw03XN7YmjTZ3lQfT8RlnjjAcbd6K0tKAQBhAdBW0xV5m53N3iYI/CsDgtSsc9+uqMaaW328raF728xMOF8j9t3vJZRBEgyJdzr6m7irfzLSLqXU= postgres@postgres-pgbackrest

Generate and Note SSH Key on Primary DB Server

#Switch to postgres user on Primary DB Server
sudo su - postgres
 
#Generate the SSH key on Primary DB Server
ssh-keygen -t rsa

#Switch to hidden directory under user’s home directory on Primary DB Server
cd ~/.ssh

# Check the created files. public key with .pub extension and private key without any extension on Primary DB Server
ls -lrt

# Check and note public key on Primary DB Server
more id_rsa.pub

postgres@postgres-primary:~/.ssh$ more id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeBnbcCMJHeSCbM8J3QzP4xzCBDciHV2CyrbcoCTLCQLCmGVgcwt8WwV+x/s+LaL3UljxHGE/73D1ZqmF0Yg+KKwN6n1Lj+QXvEesqu3bpIRBt+z8+keh1PA0eOO2g/r9glcC7G1NxBpG01uOU2EoZnR+R67rNvsrMSzBuIwuVqCIhkawIX7PldqGVGJW1LuJF9v7RIQ/HrVR5vXLyY1ctGrPgeErAja6165Vgr0LYy6GMxIoPVPnqtvCVNFQtLkpCOZ/gGm/uLp5fgGq6VDaKWyaTjJjqHVGmvZTCdy25LShZAxMHap0q02WkyyMOw7D0RFSt1My3ObUi6RSuh/CGvngQU9mxzpOz2SIIQYivUMMdh8Dp8tMwnombjECug2ZBOZ3LVzSMFzUeBTBI/2q+3bhnU5X5X7XmjKQLqxPVbMjG6Qcwgu2HG9LKJjc86AKEbSDET+eqqlnuq0meb1fwy0rOPLSjpuk2D1p7IWxySrFlL19AFY1qoThDujR8tbM= postgres@postgres-primary

Create authorized_keys File on Backup Server

If you do not paste the public key as a single line into the authorized_keys file, you will get an error.

#Switch to postgres user on Backup Server
sudo su - postgres

#Switch to hidden directory under user’s home directory on Backup Server
cd ~/.ssh

# Create a file that contains the .pub key of the primary db server
# Paste the contents of the primary db server's .pub key
vi authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeBnbcCMJHeSCbM8J3QzP4xzCBDciHV2CyrbcoCTLCQLCmGVgcwt8WwV+x/s+LaL3UljxHGE/73D1ZqmF0Yg+KKwN6n1Lj+QXvEesqu3bpIRBt+z8+keh1PA0eOO2g/r9glcC7G1NxBpG01uOU2EoZnR+R67rNvsrMSzBuIwuVqCIhkawIX7PldqGVGJW1LuJF9v7RIQ/HrVR5vXLyY1ctGrPgeErAja6165Vgr0LYy6GMxIoPVPnqtvCVNFQtLkpCOZ/gGm/uLp5fgGq6VDaKWyaTjJjqHVGmvZTCdy25LShZAxMHap0q02WkyyMOw7D0RFSt1My3ObUi6RSuh/CGvngQU9mxzpOz2SIIQYivUMMdh8Dp8tMwnombjECug2ZBOZ3LVzSMFzUeBTBI/2q+3bhnU5X5X7XmjKQLqxPVbMjG6Qcwgu2HG9LKJjc86AKEbSDET+eqqlnuq0meb1fwy0rOPLSjpuk2D1p7IWxySrFlL19AFY1qoThDujR8tbM= postgres@postgres-primary

# Check authorized_keys on Backup Server
more authorized_keys

Create authorized_keys File on Primary DB Server

If you do not paste the public key as a single line into the authorized_keys file, you will get an error.

#Switch to postgres user on Primary DB Server
sudo su - postgres

#Switch to hidden directory under user’s home directory on Primary DB Server
cd ~/.ssh

# Create a file that contains the .pub key of the backup server
# Paste the contents of the backup server's .pub key
vi authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+0LaXhqf9IdlKtUOfMSt52gDBzm/lVHRCUe+bQUr4R0skWFnQ2aBHXM4Z9z5UE2HaxPVrCtl5yMHMakzz39XSioaO728XBKyuP8nEW9XziYOsLuIy7zmJcA0TAubM9+QaXrIbNI2wY6jesNuz22Bbk2tPRL1eJK7gotGZ4pmB9KFk0gCzrDz2IuvnzFO+f85/KRmlm2HwkN1d9leSqRHdQSg/T6TVoJR87hn2YdVNWSQgTalDDC/h3D5obM0/SVBYiKbxiZXqCPPhICVVk/cXJHyMX5p5G+maHUJ2Yk5D/DfJrRwbfWYmvz3iux7ihExJqRTMRohoWk8/S6RgKMWk+FPNtfKukxf72LcNQVpeNdbwxqU4azrCqjeSAdyZGk6hYCs/X50iC9dMOQQw03XN7YmjTZ3lQfT8RlnjjAcbd6K0tKAQBhAdBW0xV5m53N3iYI/CsDgtSsc9+uqMaaW328raF728xMOF8j9t3vJZRBEgyJdzr6m7irfzLSLqXU= postgres@postgres-pgbackrest

# Check authorized_keys on Primary DB Server
more authorized_keys

Test SSH without Password between Primary DB Server and Backup Server using postgres User

#Switch to postgres user on Primary DB Server and Backup Server
sudo su - postgres

#Test ssh from postgres-primary to postgres-pgbackrest
postgres@postgres-primary:~$ ssh postgres-pgbackrest

#Test ssh from postgres-pgbackrest to postgres-primary
postgres@postgres-pgbackrest:~$ ssh postgres-primary
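
As an alternative to pasting the keys manually, ssh-copy-id can append the public key for you. This assumes password authentication is temporarily allowed for the postgres user; the host names are the ones defined in /etc/hosts:

# Run as postgres on the Primary DB Server:
ssh-copy-id postgres@postgres-pgbackrest

# Run as postgres on the Backup Server:
ssh-copy-id postgres@postgres-primary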

10-Set Passwordless SSH between Restore Test Server & Backup Server

Note SSH Key on Backup Server

#Switch to postgres user on Backup Server
sudo su - postgres
 
#Switch to hidden directory under user’s home directory on Backup Server
cd ~/.ssh

# Check the created files. public key with .pub extension and private key without any extension on Backup Server
ls -lrt

# Check and note public key on Backup Server
more id_rsa.pub

postgres@postgres-pgbackrest:~/.ssh$ more id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+0LaXhqf9IdlKtUOfMSt52gDBzm/lVHRCUe+bQUr4R0skWFnQ2aBHXM4Z9z5UE2HaxPVrCtl5yMHMakzz39XSioaO728XBKyuP8nEW9XziYOsLuIy7zmJcA0TAubM9+QaXrIbNI2wY6jesNuz22Bbk2tPRL1eJK7gotGZ4pmB9KFk0gCzrDz2IuvnzFO+f85/KRmlm2HwkN1d9leSqRHdQSg/T6TVoJR87hn2YdVNWSQgTalDDC/h3D5obM0/SVBYiKbxiZXqCPPhICVVk/cXJHyMX5p5G+maHUJ2Yk5D/DfJrRwbfWYmvz3iux7ihExJqRTMRohoWk8/S6RgKMWk+FPNtfKukxf72LcNQVpeNdbwxqU4azrCqjeSAdyZGk6hYCs/X50iC9dMOQQw03XN7YmjTZ3lQfT8RlnjjAcbd6K0tKAQBhAdBW0xV5m53N3iYI/CsDgtSsc9+uqMaaW328raF728xMOF8j9t3vJZRBEgyJdzr6m7irfzLSLqXU= postgres@postgres-pgbackrest

Generate and Note SSH Key on Restore Test Server

#Switch to postgres user on Restore Test Server
sudo su - postgres
 
#Generate the SSH key on Restore Test Server
ssh-keygen -t rsa

#Switch to hidden directory under user’s home directory on Restore Test Server
cd ~/.ssh

# Check the created files. public key with .pub extension and private key without any extension on Restore Test Server
ls -lrt

# Check and note public key on Restore Test Server
more id_rsa.pub

postgres@postgres-restoretest:~/.ssh$ more id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoqsrVUcN/mOr7WrFcZEPz5QBv7cBRnyZyNnkl2xaHk4yWJIWmU39MQFKYZ6qX7wpum+AdrjknzrcDrt016X6lv+1D65+AZ8vJ/C+R5Wriq1owZE8lLzG+174WaUWR8XaCAwPAC4J37OmV9UruRgPARsz7sn8iwReV4gT8bG9XgdM9WSEobdBQMnz1OSZM4NFEqSiSd9+cVOH7cTCp3Q9piX0G0ZpQhK3a99HvQgws1ckWc8s884zIeULMsOlWIEdAJ7W+F6kK2exblgWCDihKdxEtEkdeNgW7u91GJnbTsdbRzyH34WHR3pyPnkNFP67MuYsGI42HYMgcIdYUii1e+Clb6j2+okHP8hGc32TooF/DkP85XenSm+g/usxGfV8+XBBr7uWhTFjI6Q6rqo1vVqzGwWTgArcxTBYuN/L+ibhmVdd/9jslM/QRm9f+3HZG+POORFohSKBWdiBsx4ybabBZTlSFrxlIuIb8RmJJI6kvqcs8eGD9IiNzO/duskc= postgres@postgres-restoretest

Add Restore Test Server’s Public Key to Backup Server’s authorized_keys File on Backup Server

If you do not paste the public key as a single line into the authorized_keys file, you will get an error.

#Switch to postgres user on Backup Server
sudo su - postgres

#Switch to hidden directory under user’s home directory on Backup Server
cd ~/.ssh

# Append the contents of the restore test server's .pub key
vi authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoqsrVUcN/mOr7WrFcZEPz5QBv7cBRnyZyNnkl2xaHk4yWJIWmU39MQFKYZ6qX7wpum+AdrjknzrcDrt016X6lv+1D65+AZ8vJ/C+R5Wriq1owZE8lLzG+174WaUWR8XaCAwPAC4J37OmV9UruRgPARsz7sn8iwReV4gT8bG9XgdM9WSEobdBQMnz1OSZM4NFEqSiSd9+cVOH7cTCp3Q9piX0G0ZpQhK3a99HvQgws1ckWc8s884zIeULMsOlWIEdAJ7W+F6kK2exblgWCDihKdxEtEkdeNgW7u91GJnbTsdbRzyH34WHR3pyPnkNFP67MuYsGI42HYMgcIdYUii1e+Clb6j2+okHP8hGc32TooF/DkP85XenSm+g/usxGfV8+XBBr7uWhTFjI6Q6rqo1vVqzGwWTgArcxTBYuN/L+ibhmVdd/9jslM/QRm9f+3HZG+POORFohSKBWdiBsx4ybabBZTlSFrxlIuIb8RmJJI6kvqcs8eGD9IiNzO/duskc= postgres@postgres-restoretest

# Check authorized_keys on Backup Server
more authorized_keys

Create authorized_keys File on Restore Test Server

If you do not paste the public key as a single line into the authorized_keys file, you will get an error.

#Switch to postgres user on Restore Test Server
sudo su - postgres

#Switch to hidden directory under user’s home directory on Restore Test Server
cd ~/.ssh

# Create a file that contains the .pub key of the backup server
# Paste the contents of the backup server's .pub key
vi authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+0LaXhqf9IdlKtUOfMSt52gDBzm/lVHRCUe+bQUr4R0skWFnQ2aBHXM4Z9z5UE2HaxPVrCtl5yMHMakzz39XSioaO728XBKyuP8nEW9XziYOsLuIy7zmJcA0TAubM9+QaXrIbNI2wY6jesNuz22Bbk2tPRL1eJK7gotGZ4pmB9KFk0gCzrDz2IuvnzFO+f85/KRmlm2HwkN1d9leSqRHdQSg/T6TVoJR87hn2YdVNWSQgTalDDC/h3D5obM0/SVBYiKbxiZXqCPPhICVVk/cXJHyMX5p5G+maHUJ2Yk5D/DfJrRwbfWYmvz3iux7ihExJqRTMRohoWk8/S6RgKMWk+FPNtfKukxf72LcNQVpeNdbwxqU4azrCqjeSAdyZGk6hYCs/X50iC9dMOQQw03XN7YmjTZ3lQfT8RlnjjAcbd6K0tKAQBhAdBW0xV5m53N3iYI/CsDgtSsc9+uqMaaW328raF728xMOF8j9t3vJZRBEgyJdzr6m7irfzLSLqXU= postgres@postgres-pgbackrest

# Check authorized_keys on Restore Test Server
more authorized_keys

Test SSH without Password between Restore Test Server and Backup Server using postgres User

#Switch to postgres user on Restore Test Server and Backup Server
sudo su - postgres

#Test ssh from postgres-restoretest to postgres-pgbackrest
postgres@postgres-restoretest:~$ ssh postgres-pgbackrest

#Test ssh from postgres-pgbackrest to postgres-restoretest
postgres@postgres-pgbackrest:~$ ssh postgres-restoretest

11-Configure Archive Parameters on Primary DB Server

# Switch to postgres user on the primary db server
sudo su - postgres

# Set archive_mode and archive_command on the primary db server postgresql.conf file
vi /etc/postgresql/16/main/postgresql.conf

archive_mode = on
archive_command = 'pgbackrest --config=/etc/pgbackrest/pgbackrest.conf  --stanza=demo archive-push %p'

# Check parameters on the primary db server postgresql.conf file
more /etc/postgresql/16/main/postgresql.conf | grep archive_mode
more /etc/postgresql/16/main/postgresql.conf | grep archive_command

# Switch to root user
exit

# It requires the restart of the primary db server service
systemctl restart postgresql@16-main.service
systemctl status postgresql@16-main.service

# Check the parameters on the primary db server
su - postgres
psql
show archive_mode;
show archive_command;

12-Configure the Stanza on Backup Server

Create Stanza on Backup Server

# Switch to postgres user on the backup server
sudo su - postgres

#Using stanza-create command on the backup server
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail stanza-create

Validate Stanza Setup on Backup Server

# Switch to postgres user on the backup server
sudo su - postgres

#Validate the setup before taking backups using the check command on the backup server
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail check

13-Try to Take First Full Backup on Backup Server

# Switch to Postgres user
sudo su - postgres

# Check config
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail check

# Try to take first full backup
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail backup --type=full

14-Try to Schedule First Backups on Backup Server

# Switch to Postgres
sudo su - postgres

# Edit the crontab
crontab -e

# We will take full backups every Wednesday at 08:15 a.m. server time. On the remaining days we will take incremental backups at the same time.
#m h   dom mon dow           command
15 08  *   *   3             pgbackrest --type=full --stanza=demo backup
15 08  *   *   0,1,2,4,5,6   pgbackrest --type=incr --stanza=demo backup
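
If you also want the cron output captured in a file, the entries can redirect stdout and stderr; the log file path below is only an example:

#m h   dom mon dow           command
15 08  *   *   3             pgbackrest --type=full --stanza=demo backup >> /var/log/pgbackrest/cron-backup.log 2>&1
15 08  *   *   0,1,2,4,5,6   pgbackrest --type=incr --stanza=demo backup >> /var/log/pgbackrest/cron-backup.log 2>&1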

15-Check Backup & Expire Log Files on Backup Server

# Switch to Postgres
sudo su - postgres

# Check log files
cd /var/log/pgbackrest/
ls -lrt

more demo-backup.log | grep '2025-07-16 08:15' | head -7
more demo-backup.log | grep '2025-07-16 08:15' | tail -7

more demo-expire.log | grep '2025-07-16 08:15' | tail -7

16-Check Detail Backup Information on Backup Server

# Switch to Postgres User
sudo su - postgres

# Check Backup Details 
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info
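
For scripting or monitoring, the same information is also available as JSON via the info command's output option:

pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo info --output=json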

17-Full Backup Restore Steps on Restore Test Server

# Check Postgres Setup on the Restore Test Server
pg_lsclusters

# Stop the DB Service on the Restore Test Server
systemctl stop postgresql@16-main.service

# Check the DB Service on the Restore Test Server
systemctl status postgresql@16-main.service

# Create Empty Data Directory or Rename Existing Data Directory
sudo su - postgres
mv /var/lib/postgresql/16/main /var/lib/postgresql/16/main_20250716
# Restore Backup
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=info restore
# Check the Restored Environment
pg_lsclusters
cd /var/lib/postgresql/16/
ls -lrt
exit
systemctl start postgresql@16-main.service
systemctl status postgresql@16-main.service
sudo su - postgres
pg_lsclusters
psql
\l

18-Delta Changes Restore Steps on Restore Test Server

Perform Full Backup on Backup Server

# Switch to Postgres user
sudo su - postgres

# Check config
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail check

# Perform full backup
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail backup --type=full

Check Full Backup Information on Backup Server

# Switch to Postgres User
sudo su - postgres

# Check Backup Details 
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info

Create Test Database, Table and Insert Test Data on Primary DB Server

sudo su - postgres

psql -U postgres -d postgres <<EOF
CREATE DATABASE restore_test_01;
\c restore_test_01
CREATE TABLE restore_test_01 (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO restore_test_01 (name) VALUES
('Alice'), ('Bob'), ('Charlie');
SELECT * FROM restore_test_01;
EOF

Perform Incremental Backup on Backup Server

# Switch to Postgres user
sudo su - postgres

# Check config
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail check

# Perform incremental backup
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail backup --type=incr

Check Incremental Backup Information on Backup Server

# Switch to Postgres User
sudo su - postgres

# Check Backup Details 
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info

Restore Delta Changes on Restore Test Server

# Check Postgres on Restore Test Server
pg_lsclusters

# Stop the DB Cluster on Restore Test Server
sudo systemctl stop postgresql@16-main.service

# Switch to Postgres User on Restore Test Server
sudo su - postgres

# Restore Delta Changes on Restore Test Server
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=info restore --delta

Check the Database, Table, and Data Created Before the Incremental Backup on Restore Test Server

# Switch to Root User on Restore Test Server
su - root

# Start the DB Cluster on Restore Test Server
sudo systemctl start postgresql@16-main.service

# Check the Status of DB Cluster on Restore Test Server
sudo systemctl status postgresql@16-main.service

# Check the Database, Table, and Data Created Before the Incremental Backup on Restore Test Server
sudo su - postgres

psql -U postgres -d postgres <<EOF
\c restore_test_01
SELECT * FROM restore_test_01;
EOF

19-Specific Database Restore Steps on Restore Test Server

Create Test Database (restore_test_02), Table and Insert Test Data on Primary DB Server

sudo su - postgres

psql -U postgres -d postgres <<EOF
CREATE DATABASE restore_test_02;
\c restore_test_02
CREATE TABLE restore_test_02 (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO restore_test_02 (name) VALUES
('Alice'), ('Bob'), ('Charlie');
SELECT * FROM restore_test_02;
EOF

Perform Incremental Backup on Backup Server

# Switch to Postgres user
sudo su - postgres

# Check config
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail check

# Perform incremental backup
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail backup --type=incr

Check Incremental Backup Information on Backup Server

# Switch to Postgres User
sudo su - postgres

# Check Backup Details 
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info

Restore Specific Database (restore_test_02) on Restore Test Server

# Check Postgres on Restore Test Server
pg_lsclusters

# Stop the DB Cluster on Restore Test Server
sudo systemctl stop postgresql@16-main.service

# Create Empty Data Directory or Rename Existing Data Directory
sudo su - postgres
mv /var/lib/postgresql/16/main /var/lib/postgresql/16/main_20250717

# Restore Specific Database on Restore Test Server
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=info restore --db-include=restore_test_02

Check the restore_test_02 Database, Table, and Data on Restore Test Server

# Switch to Root User on Restore Test Server
su - root

# Start the DB Cluster on Restore Test Server
sudo systemctl start postgresql@16-main.service

# Check the Status of DB Cluster on Restore Test Server
sudo systemctl status postgresql@16-main.service

# Check the restore_test_02 Database, Table, and Data on Restore Test Server
sudo su - postgres

psql -U postgres -d postgres <<EOF
\l
\c restore_test_02
SELECT * FROM restore_test_02;
EOF

Why does restore_test_01 Appear Even Though the Database has not been Restored?

- This is expected behavior.
- The restore_test_01 database is still listed, but you cannot connect to it.
- The restore_test_01 database can be dropped.

sudo su - postgres
psql
\l
\c restore_test_01
drop database restore_test_01;
\l

20-Point-In-Time Recovery (PITR) Steps using Time on Restore Test Server

Truncate Test Table Data and Note Time Before Truncate on Primary DB Server

Target Time: 2025-07-20 10:24:54

# Truncate Test Table Data and Note Time at which Data was Deleted
sudo su - postgres

psql -U postgres -d postgres <<EOF
\l
\c restore_test_02
SELECT * FROM restore_test_02;
select now();
TRUNCATE TABLE restore_test_02;
SELECT * FROM restore_test_02;
EOF

Perform Incremental Backup on Backup Server

# Switch to Postgres user
sudo su - postgres

# Check config
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail check

# Perform incremental backup
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail backup --type=incr

Check Incremental Backup Information on Backup Server

# Switch to Postgres User
sudo su - postgres

# Check Backup Details 
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info

Perform PITR using Time on Restore Test Server

Target Time: 2025-07-20 10:24:54

# Check Postgres on Restore Test Server
pg_lsclusters

# Stop the DB Cluster on Restore Test Server
sudo systemctl stop postgresql@16-main.service

# Create Empty Data Directory or Rename Existing Data Directory
sudo su - postgres
mv /var/lib/postgresql/16/main /var/lib/postgresql/16/main_20250720

# Perform PITR on Restore Test Server
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=info --delta --type=time "--target=2025-07-20 10:24:54" --target-action=promote restore

Check Deleted Test Table Data Recovered or not on Restore Test Server

# Switch to Root User on Restore Test Server
su - root

# Start the DB Cluster on Restore Test Server
sudo systemctl start postgresql@16-main.service

# Check the Status of DB Cluster on Restore Test Server
sudo systemctl status postgresql@16-main.service

# Check the restore_test_02 Database, Table, and Data on Restore Test Server
sudo su - postgres

psql -U postgres -d postgres <<EOF
\l
\c restore_test_02
SELECT * FROM restore_test_02;
EOF

21-Point-In-Time Recovery (PITR) Steps using LSN on Restore Test Server

Truncate Test Table Data and Note LSN Before Truncate on Primary DB Server

Target LSN: 0/21000500

# Truncate Test Table Data and Note LSN Before Truncate on Primary DB Server
sudo su - postgres

psql -U postgres -d postgres <<EOF
\l
\c restore_test_02
SELECT * FROM restore_test_02;
select pg_current_wal_lsn();
TRUNCATE TABLE restore_test_02;
SELECT * FROM restore_test_02;
EOF

Perform Incremental Backup on Backup Server

# Switch to Postgres user
sudo su - postgres

# Check config
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail check

# Perform incremental backup
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail backup --type=incr

Check Incremental Backup Information on Backup Server

# Switch to Postgres User
sudo su - postgres

# Check Backup Details 
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info

Perform PITR using LSN on Restore Test Server

Target LSN: 0/21000500

# Check Postgres on Restore Test Server
pg_lsclusters

# Stop the DB Cluster on Restore Test Server
sudo systemctl stop postgresql@16-main.service

# Create Empty Data Directory or Rename Existing Data Directory
sudo su - postgres
mv /var/lib/postgresql/16/main /var/lib/postgresql/16/main_20250722

# Perform PITR on Restore Test Server
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=info --delta --type=lsn "--target=0/21000500" --target-action=promote restore

Check Deleted Test Table Data Recovered or not on Restore Test Server

# Switch to Root User on Restore Test Server
su - root

# Start the DB Cluster on Restore Test Server
sudo systemctl start postgresql@16-main.service

# Check the Status of DB Cluster on Restore Test Server
sudo systemctl status postgresql@16-main.service

# Check the restore_test_02 Database, Table, and Data on Restore Test Server
sudo su - postgres

psql -U postgres -d postgres <<EOF
\l
\c restore_test_02
SELECT * FROM restore_test_02;
EOF

22-Multi-Repo Configuration for Backup Redundancy (If Needed)

Create New Backup Directory for New Repo on Backup Server

I will use the same backup server for testing purposes. You can use a different server located in a DR or cloud environment.

sudo mkdir -p /data/backups2
sudo chmod 750 /data/backups2
sudo chown postgres:postgres /data/backups2
ls -lrt /data

Edit pgbackrest.conf file and Add One More Repo on Backup Server

sudo su - postgres

# Edit pgbackrest.conf file and Add One More Repo on Backup Server
vi /etc/pgbackrest/pgbackrest.conf

[global]
repo1-path=/data/backups
repo2-path=/data/backups2

log-level-file=detail
start-fast=y

repo1-retention-full=2
repo2-retention-full=2

archive-async=y
spool-path=/var/spool/pgbackrest

[demo]
pg1-host=192.168.1.151
pg1-path=/var/lib/postgresql/16/main
pg1-host-user=postgres

# Check New pgbackrest.conf file on Backup Server
more /etc/pgbackrest/pgbackrest.conf

Edit pgbackrest.conf file and Add One More Repo on Primary DB Server

sudo su - postgres

# Edit pgbackrest.conf file and Add One More Repo on Primary DB Server
vi /etc/pgbackrest/pgbackrest.conf

[global]
repo1-host=192.168.1.152
repo1-path=/data/backups
repo1-host-user=postgres

repo2-host=192.168.1.152
repo2-path=/data/backups2
repo2-host-user=postgres

log-level-file=detail
start-fast=y

archive-async=y
spool-path=/var/spool/pgbackrest

[demo]
pg1-path=/var/lib/postgresql/16/main
pg1-host-user=postgres

# Check New pgbackrest.conf file on Primary DB Server
more /etc/pgbackrest/pgbackrest.conf

Edit pgbackrest.conf file and Add One More Repo on Restore Test Server

sudo su - postgres

# Edit pgbackrest.conf file and Add One More Repo on Restore Test Server
vi /etc/pgbackrest/pgbackrest.conf

[global]
repo1-host=192.168.1.152
repo1-path=/data/backups
repo1-host-user=postgres

repo2-host=192.168.1.152
repo2-path=/data/backups2
repo2-host-user=postgres

log-level-file=detail
start-fast=y

archive-async=y
spool-path=/var/spool/pgbackrest

[demo]
pg1-path=/var/lib/postgresql/16/main
pg1-host-user=postgres

# Check New pgbackrest.conf file on Restore Test Server
more /etc/pgbackrest/pgbackrest.conf

Create pgBackRest archive Directory and Adjust Permissions on Primary DB Server

sudo mkdir -p /var/spool/pgbackrest/archive
sudo chown -R postgres:postgres /var/spool/pgbackrest
sudo chmod -R 750 /var/spool/pgbackrest
sudo systemctl restart postgresql@16-main
sudo systemctl status postgresql@16-main

Create the Stanza Again on Backup Server

# Switch to postgres user on the backup server
sudo su - postgres

# Using stanza-create command on the backup server
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail stanza-create

Validate Stanza Setup on Backup Server

# Switch to postgres user on the backup server
sudo su - postgres

#Validate the setup before taking backups using the check command on the backup server
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail check

Try to Take First Full Backup to repo2 on Backup Server

# Switch to postgres user on the backup server
sudo su - postgres

# Check config
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail check

# Try to take first full backup
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail backup --type=full --repo=2

Check Backup Information for repo2 on Backup Server

# Switch to Postgres User
sudo su - postgres

# Check Backup Details For Repo2
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info --repo=2

23-Configure Backup Retention Parameters

General Information about Backup Retention Parameters

Backup Retention Parameters in pgBackRest:

  • The main retention parameters are:
    • repo1-retention-full-type
    • repo1-retention-full
    • repo1-retention-diff
  • repo1-retention-full-type can have the values time or count. The default is count.
  • time denotes how many days backups are retained, and count denotes how many copies of backups are retained.
  • Using repo1-retention-full and repo1-retention-diff, we can define not only how many full backups are retained but also how many differential backups are retained.
  • There is no separate retention parameter for incremental backups in pgBackRest.
    • The lifespan of incremental backups is managed indirectly through the retention settings of the full and differential backups they depend on.
    • The parameter repo1-retention-incr is not valid in pgBackRest.

Set Backup Retention Parameters on Backup Server

# Switch to Postgres User
sudo su - postgres

# Edit pgbackrest.conf file on Backup Server
vi /etc/pgbackrest/pgbackrest.conf

[global]
repo1-path=/data/backups
repo2-path=/data/backups2

log-level-file=detail
start-fast=y

repo1-retention-full-type=count
repo1-retention-full=2

repo2-retention-full-type=time
repo2-retention-full=1

archive-async=y
spool-path=/var/spool/pgbackrest

[demo]
pg1-host=192.168.1.151
pg1-path=/var/lib/postgresql/16/main
pg1-host-user=postgres

# Check New pgbackrest.conf file on Backup Server
more /etc/pgbackrest/pgbackrest.conf

Set Backup Retention Parameters on Primary DB Server

# Switch to Postgres User
sudo su - postgres

# Edit pgbackrest.conf file on Primary DB Server
vi /etc/pgbackrest/pgbackrest.conf

[global]
repo1-host=192.168.1.152
repo1-path=/data/backups
repo1-host-user=postgres
repo1-retention-full-type=count
repo1-retention-full=2

repo2-host=192.168.1.152
repo2-path=/data/backups2
repo2-host-user=postgres
repo2-retention-full-type=time
repo2-retention-full=1

log-level-file=detail
start-fast=y

archive-async=y
spool-path=/var/spool/pgbackrest

[demo]
pg1-path=/var/lib/postgresql/16/main
pg1-host-user=postgres

# Check New pgbackrest.conf file on Primary DB Server
more /etc/pgbackrest/pgbackrest.conf

Set Backup Retention Parameters on Restore Test Server

# Switch to Postgres User
sudo su - postgres

# Edit pgbackrest.conf file on Restore Test Server
vi /etc/pgbackrest/pgbackrest.conf

[global]
repo1-host=192.168.1.152
repo1-path=/data/backups
repo1-host-user=postgres
repo1-retention-full-type=count
repo1-retention-full=2

repo2-host=192.168.1.152
repo2-path=/data/backups2
repo2-host-user=postgres
repo2-retention-full-type=time
repo2-retention-full=1

log-level-file=detail
start-fast=y

archive-async=y
spool-path=/var/spool/pgbackrest

[demo]
pg1-path=/var/lib/postgresql/16/main
pg1-host-user=postgres

# Check New pgbackrest.conf file on Restore Test Server
more /etc/pgbackrest/pgbackrest.conf

Ensure Retention Policy Works for repo1 on Backup Server

  • You should see a maximum of two full backups.
  • This is because we have the parameters below:
    • repo1-retention-full-type=count
    • repo1-retention-full=2
  • There is no separate retention parameter for incremental backups in pgBackRest.
    • The lifespan of incremental backups is managed indirectly through the retention settings of the full and differential backups they depend on.
    • The parameter repo1-retention-incr is not valid in pgBackRest.

# Switch to Postgres User
sudo su - postgres

# Check How Many Backups Exist
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info --repo=1

# Perform Full Backup for repo1
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=info backup --type=full --repo=1

# Perform Incremental Backup for repo1
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=info backup --type=incr --repo=1

Ensure Retention Policy Works for repo2 on Backup Server

  • Since the repo2-retention-full-type parameter is set to time and repo2-retention-full is set to 1 (day), it is normal for more than one full and incremental backup to appear for the same day.
    • repo2-retention-full-type=time
    • repo2-retention-full=1

# Switch to Postgres User
sudo su - postgres

# Check How Many Backups Exist
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info --repo=2

# Perform Full Backup for repo2
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=info backup --type=full --repo=2

# Perform Incremental Backup for repo2
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=info backup --type=incr --repo=2

Expire Backups Manually (If Needed)

# Switch to Postgres User
sudo su - postgres

# Check How Many Backups Exist
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info --repo=1

# Expire Backup Using Set
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail expire --set=20250720-173624F --repo=1

# Check How Many Backups Exist
pgbackrest --config=/etc/pgbackrest/pgbackrest.conf --stanza=demo --log-level-console=detail info --repo=1

How to Setup MySQL HA InnoDB Cluster with HA Disaster Recovery Environment using MySQL Router & KeepAliveD on Redhat-Based Linux (Rocky Linux)?

The MySQL HA InnoDB Cluster environment itself was set up in the article linked below. In this article, we'll create a MySQL InnoDB ClusterSet to build a Disaster Recovery environment and then examine how to switch roles between the Primary and Replica Clusters.

How to Setup MySQL High Availability InnoDB Cluster with MySQL Router & KeepAliveD?

High Level Installation and Configuration Steps

1-Check Environment Details
2-Set Hostnames on DRC Servers
3-Update /etc/hosts Files on All Servers
4-Disable Firewall & Selinux on DRC Servers
5-Install MySQL Server on MySQL-04 & MySQL-05 & MySQL-06
6-Install MySQL Shell (mysqlsh) on MySQL-04 & MySQL-05 & MySQL-06
7-Install MySQL Router on MySQL-Router03 & MySQL-Router04
8-Set MySQL Shell Password on MySQL-04 & MySQL-05 & MySQL-06
9-Check Current Cluster Status on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)
10-Check Current MySQL Router Status on Any of Primary MySQL Router Servers (MySQL-Router01 & MySQL-Router02)
11-Create ClusterSet on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)
12-Check Again Current MySQL Router Status on Any of Primary MySQL Router Servers (MySQL-Router01 & MySQL-Router02)
13-Check Currently Used MySQL Routers on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)
14-Re-Bootstrap MySQL Routers on MySQL-Router01 & MySQL-Router02
15-Check New MySQL Router Status on Any of Primary MySQL Router Servers (MySQL-Router01 & MySQL-Router02)
16-Check Again Currently Used MySQL Routers on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)
17-Prepare MySQL-04 Instance for Replica Cluster Creation
18-Prepare MySQL-05 Instance for Replica Cluster Creation
19-Prepare MySQL-06 Instance for Replica Cluster Creation
20-(If Needed) Regenerate the Server_UUIDs for DRC MySQL Servers (MySQL-04 & MySQL-05 & MySQL-06)
21-Create Replica Cluster (For DR) & Add First Replica MySQL-04 Server on Any of Servers
22-Check Replica Cluster Status on Any of Servers
23-Add MySQL-05 Instance to the Replica Cluster on Any of Servers
24-Add MySQL-06 Instance to the Replica Cluster on Any of Servers
25-Check Latest Replica Cluster Status on Any of Servers
26-Configure MySQL Router on MySQL-Router03 & MySQL-Router04
27-Check Newly Added MySQL Router Status on Any of DRC MySQL Router Servers (MySQL-Router03 & MySQL-Router04)
28-Check Newly Added MySQL Routers (MySQL-Router03 & MySQL-Router04) on Any of Servers
29-Install KeepAliveD on MySQL-Router03 & MySQL-Router04
30-Configure KeepAliveD on MySQL-Router03 & MySQL-Router04
31-Enable/Start KeepAliveD Service on MySQL-Router03 & MySQL-Router04
32-Test Primary Cluster KeepAliveD Virtual IP and Ports (RW & RO)
33-Create Table and Insert Some Data for Test Purpose on Primary MySQL-01 Instance
34-Set Replica Cluster (DR_INNODB_CLS) as Primary Cluster & Check ClusterSet Status
35-Check New Primary Cluster Status on Any of DRC MySQL Servers (MySQL-04 & MySQL-05 & MySQL-06)
36-Check Newly Created Table on DRC MySQL Servers (MySQL-04 & MySQL-05 & MySQL-06)
37-Test New Primary Cluster KeepAliveD Virtual IP and Ports (RW & RO)
38-Create Table and Insert Some Data for Test Purpose on New Primary DRC MySQL-04 Server
39-Set Original Primary Cluster (INNODB_CLS) as Primary Cluster & Check ClusterSet Status
40-Check Original Primary Cluster Status on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)
41-Check Table and Data Created While DRC Servers Are Primary on Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)

Low Level Installation and Configuration Steps

1-Check Environment Details

2-Set Hostnames on DRC Servers

192.168.1.156 -> sudo hostnamectl set-hostname rocky95-mysql04
192.168.1.157 -> sudo hostnamectl set-hostname rocky95-mysql05
192.168.1.158 -> sudo hostnamectl set-hostname rocky95-mysql06
192.168.1.159 -> sudo hostnamectl set-hostname rocky95-mysqlrouter03
192.168.1.160 -> sudo hostnamectl set-hostname rocky95-mysqlrouter04

3-Update /etc/hosts Files on All Servers

vi /etc/hosts

192.168.1.151 rocky95-mysql01
192.168.1.152 rocky95-mysql02
192.168.1.153 rocky95-mysql03
192.168.1.154 rocky95-mysqlrouter01
192.168.1.155 rocky95-mysqlrouter02
192.168.1.156 rocky95-mysql04
192.168.1.157 rocky95-mysql05
192.168.1.158 rocky95-mysql06
192.168.1.159 rocky95-mysqlrouter03
192.168.1.160 rocky95-mysqlrouter04

4-Disable Firewall & Selinux on DRC Servers

# Disable & Check Firewall
sudo systemctl status firewalld
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl status firewalld

# Disable & Check Selinux
vi /etc/selinux/config
SELINUX=disabled
reboot
sestatus

5-Install MySQL Server on MySQL-04 & MySQL-05 & MySQL-06

How to Install MySQL for Redhat-Based Linux (Rocky Linux)?

6-Install MySQL Shell (mysqlsh) on MySQL-04 & MySQL-05 & MySQL-06

MySQL Shell is an advanced command-line client and code editor for MySQL. In addition to SQL, MySQL Shell also offers scripting capabilities for JavaScript and Python.

# Installation
wget https://repo.mysql.com/mysql80-community-release-el9-5.noarch.rpm
rpm -ivh mysql80-community-release-el9-5.noarch.rpm
yum install mysql-shell -y

# Check
# mysqlsh --version
mysqlsh   Ver 8.0.43 for Linux on x86_64 - for MySQL 8.0.43 (MySQL Community Server (GPL))

7-Install MySQL Router on MySQL-Router03 & MySQL-Router04

# Installation
wget https://repo.mysql.com/mysql80-community-release-el9-5.noarch.rpm
rpm -ivh mysql80-community-release-el9-5.noarch.rpm
yum install mysql-router -y

# Check
# mysqlrouter --version
MySQL Router  Ver 8.0.43 for Linux on x86_64 (MySQL Community - GPL)

8-Set MySQL Shell Password on MySQL-04 & MySQL-05 & MySQL-06

# mysqlsh --uri root@localhost
Please provide the password for 'root@localhost': *********
Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): Y
MySQL Shell 8.0.43

Copyright (c) 2016, 2025, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 8 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS >

9-Check Current Cluster Status on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
dba.getCluster('INNODB_CLS').status()

10-Check Current MySQL Router Status on Any of Primary MySQL Router Servers (MySQL-Router01 & MySQL-Router02)

tail -10f /data/mysqlrouter/log/mysqlrouter.log

11-Create ClusterSet on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_cluster= dba.getCluster('INNODB_CLS')
v_cluster.createClusterSet('InnoDBClusterSet')

12-Check Again Current MySQL Router Status on Any of Primary MySQL Router Servers (MySQL-Router01 & MySQL-Router02)

tail -10f /data/mysqlrouter/log/mysqlrouter.log

Pay attention to the following warning in the log output!

WARNING [7fb188655640] The target Cluster is part of a ClusterSet, but this Router was not bootstrapped to use the ClusterSet. Treating the Cluster as a standalone Cluster. Please bootstrap the Router again if you want to use ClusterSet capabilities.

13-Check Currently Used MySQL Routers on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_clusterset=dba.getClusterSet()
v_clusterset.listRouters()

Sample Output
Pay attention to the following warning!
"WARNING: Router must be bootstrapped again for the ClusterSet to be recognized."

[root@rocky95-mysql01 ~]# mysqlsh --uri root@localhost
MySQL Shell 8.0.40

Copyright (c) 2016, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 70898 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS > 
 MySQL  localhost:33060+ ssl  JS > \c innodbcluster@rocky95-mysql01:3306
Creating a session to 'innodbcluster@rocky95-mysql01:3306'
Fetching schema names for auto-completion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 70918
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  rocky95-mysql01:3306 ssl  JS > 
 MySQL  rocky95-mysql01:3306 ssl  JS > var v_clusterset=dba.getClusterSet()
 MySQL  rocky95-mysql01:3306 ssl  JS > 
 MySQL  rocky95-mysql01:3306 ssl  JS > v_clusterset.listRouters()
WARNING: The following Routers were bootstrapped before the ClusterSet was created: [rocky95-mysqlrouter01::system, rocky95-mysqlrouter02::system, rocky95-mysqlrouter01::, rocky95-mysqlrouter02::]. Please re-bootstrap the Routers to ensure the ClusterSet is recognized and the configurations are updated. Otherwise, Routers will operate as if the Clusters were standalone.
{
    "domainName": "InnoDBClusterSet", 
    "routers": {
        "rocky95-mysqlrouter01::": {
            "hostname": "rocky95-mysqlrouter01", 
            "lastCheckIn": "2025-08-02 20:36:03", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "routerErrors": [
                "WARNING: Router must be bootstrapped again for the ClusterSet to be recognized."
            ], 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.40"
        }, 
        "rocky95-mysqlrouter01::system": {
            "hostname": "rocky95-mysqlrouter01", 
            "lastCheckIn": "2025-01-18 15:33:05", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "routerErrors": [
                "WARNING: Router must be bootstrapped again for the ClusterSet to be recognized."
            ], 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.40"
        }, 
        "rocky95-mysqlrouter02::": {
            "hostname": "rocky95-mysqlrouter02", 
            "lastCheckIn": "2025-08-02 20:36:03", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "routerErrors": [
                "WARNING: Router must be bootstrapped again for the ClusterSet to be recognized."
            ], 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.40"
        }, 
        "rocky95-mysqlrouter02::system": {
            "hostname": "rocky95-mysqlrouter02", 
            "lastCheckIn": "2025-01-18 15:26:49", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "routerErrors": [
                "WARNING: Router must be bootstrapped again for the ClusterSet to be recognized."
            ], 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.40"
        }
    }
}
 MySQL  rocky95-mysql01:3306 ssl  JS >

14-Re-Bootstrap MySQL Routers on MySQL-Router01 & MySQL-Router02

systemctl status mysqlrouter.service
systemctl stop mysqlrouter.service
mysqlrouter --bootstrap innodbcluster@rocky95-mysql01 -d /data/mysqlrouter --user=mysqlrouter --force
InnoDBcluster_2025! ---> Password
systemctl start mysqlrouter
systemctl status mysqlrouter.service

Conf File Path=/data/mysqlrouter/mysqlrouter.conf
Log File Path=/data/mysqlrouter/log/mysqlrouter.log

15-Check New MySQL Router Status on Any of Primary MySQL Router Servers (MySQL-Router01 & MySQL-Router02)

tail -10f /data/mysqlrouter/log/mysqlrouter.log

Depending on your environment, you should see something like this:

Enabling GR notices for cluster 'INNODB_CLS' changes on node rocky95-mysql01:33060
Enabling GR notices for cluster 'INNODB_CLS' changes on node rocky95-mysql03:33060
Enabling GR notices for cluster 'INNODB_CLS' changes on node rocky95-mysql02:33060

16-Check Again Currently Used MySQL Routers on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_clusterset=dba.getClusterSet()
v_clusterset.listRouters()

Ignore the Warning
Since current routers (rocky95-mysqlrouter01::, rocky95-mysqlrouter02::) are working perfectly and ClusterSet-aware, it’s safe to ignore (rocky95-mysqlrouter01::system, rocky95-mysqlrouter02::system). The warning is informational only and doesn’t impact functionality.

17-Prepare MySQL-04 Instance for Replica Cluster Creation

Run the commands below on the MySQL-04 instance.

mysqlsh --uri root@localhost
dba.configureInstance()
Select 2
innodbcluster
InnoDBcluster_2025!
y
y
dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql04')
InnoDBcluster_2025!
y

Sample Output

[root@rocky95-mysql04 ~]# mysqlsh --uri root@localhost
MySQL Shell 8.0.43

Copyright (c) 2016, 2025, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 18 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS > dba.configureInstance()
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql04:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: innodbcluster
Password for new account: *******************
Confirm password: *******************

applierWorkerThreads will be set to the default value of 4.

NOTE: Some configuration options need to be fixed:
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                               | Current Value | Required Value | Note                                             |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
| enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
| server_id                              | 1             | <unique ID>    | Update read-only variable and restart the server |
+----------------------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y

Creating user innodbcluster@%.
Account innodbcluster@% was successfully created.

Configuring instance...

WARNING: '@@binlog_transaction_dependency_tracking' is deprecated and will be removed in a future release. (Code 1287).
The instance 'rocky95-mysql04:3306' was configured to be used in an InnoDB cluster.
Restarting MySQL...
NOTE: MySQL server at rocky95-mysql04:3306 was restarted.
 MySQL  localhost:33060+ ssl  JS > dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql04')
Please provide the password for 'innodbcluster@rocky95-mysql04': *******************
Save password for 'innodbcluster@rocky95-mysql04'? [Y]es/[N]o/Ne[v]er (default No): y
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql04:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'rocky95-mysql04:3306' is valid to be used in an InnoDB cluster.

{
    "status": "ok"
}
 MySQL  localhost:33060+ ssl  JS >

18-Prepare MySQL-05 Instance for Replica Cluster Creation

Run the commands below on the MySQL-05 instance.

mysqlsh --uri root@localhost
dba.configureInstance()
Select 2
innodbcluster
InnoDBcluster_2025!
y
y
dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql05')
InnoDBcluster_2025!
y

Sample Output

[root@rocky95-mysql05 ~]# mysqlsh --uri root@localhost
MySQL Shell 8.0.43

Copyright (c) 2016, 2025, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 11 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS > dba.configureInstance()
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql05:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: innodbcluster
Password for new account: *******************
Confirm password: *******************

applierWorkerThreads will be set to the default value of 4.

NOTE: Some configuration options need to be fixed:
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                               | Current Value | Required Value | Note                                             |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
| enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
| server_id                              | 1             | <unique ID>    | Update read-only variable and restart the server |
+----------------------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y

Creating user innodbcluster@%.
Account innodbcluster@% was successfully created.

Configuring instance...

WARNING: '@@binlog_transaction_dependency_tracking' is deprecated and will be removed in a future release. (Code 1287).
The instance 'rocky95-mysql05:3306' was configured to be used in an InnoDB cluster.
Restarting MySQL...
NOTE: MySQL server at rocky95-mysql05:3306 was restarted.
MySQL  localhost:33060+ ssl  JS > dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql05')
Please provide the password for 'innodbcluster@rocky95-mysql05': *******************
Save password for 'innodbcluster@rocky95-mysql05'? [Y]es/[N]o/Ne[v]er (default No): y
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql05:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'rocky95-mysql05:3306' is valid to be used in an InnoDB cluster.

{
    "status": "ok"
}
 MySQL  localhost:33060+ ssl  JS >

19-Prepare MySQL-06 Instance for Replica Cluster Creation

Run the commands below on the MySQL-06 instance.

mysqlsh --uri root@localhost
dba.configureInstance()
Select 2
innodbcluster
InnoDBcluster_2025!
y
y
dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql06')
InnoDBcluster_2025!
y

Sample Output

[root@rocky95-mysql06 ~]# mysqlsh --uri root@localhost
MySQL Shell 8.0.43

Copyright (c) 2016, 2025, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 11 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS > dba.configureInstance()
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql06:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: innodbcluster
Password for new account: *******************
Confirm password: *******************

applierWorkerThreads will be set to the default value of 4.

NOTE: Some configuration options need to be fixed:
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                               | Current Value | Required Value | Note                                             |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
| enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
| server_id                              | 1             | <unique ID>    | Update read-only variable and restart the server |
+----------------------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y

Creating user innodbcluster@%.
Account innodbcluster@% was successfully created.

Configuring instance...

WARNING: '@@binlog_transaction_dependency_tracking' is deprecated and will be removed in a future release. (Code 1287).
The instance 'rocky95-mysql06:3306' was configured to be used in an InnoDB cluster.
Restarting MySQL...
NOTE: MySQL server at rocky95-mysql06:3306 was restarted.
 MySQL  localhost:33060+ ssl  JS > dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql06')
Please provide the password for 'innodbcluster@rocky95-mysql06': *******************
Save password for 'innodbcluster@rocky95-mysql06'? [Y]es/[N]o/Ne[v]er (default No): y
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql06:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'rocky95-mysql06:3306' is valid to be used in an InnoDB cluster.

{
    "status": "ok"
}
 MySQL  localhost:33060+ ssl  JS >

20-(If Needed) Regenerate the Server_UUIDs for DRC MySQL Servers (MySQL-04 & MySQL-05 & MySQL-06)

Because I created the DRC MySQL database servers from a template VM, they all have the same server_uuid. We need to make sure they are all different before creating a replica cluster.

# Check Current server_uuid
mysql -u root -pDeuce4040
SELECT @@server_uuid;

# Stop MySQL, remove auto.cnf (where the server_uuid is stored), and start MySQL so a new UUID is generated
sudo systemctl stop mysqld
rm -f /var/lib/mysql/auto.cnf
sudo systemctl start mysqld

# Check the New server_uuid
mysql -u root -pDeuce4040
SELECT @@server_uuid;

21-Create Replica Cluster (For DR) & Add First Replica MySQL-04 Server on Any of Servers

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_clusterset=dba.getClusterSet()


# Syntax
v_clusterset.createReplicaCluster('<replica_hostname>:<port>', '<replica_cluster_name>')
# Command
v_clusterset.createReplicaCluster('rocky95-mysql04:3306', 'DR_INNODB_CLS')

22-Check Replica Cluster Status on Any of Servers

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
dba.getCluster('DR_INNODB_CLS').status()

23-Add MySQL-05 Instance to the Replica Cluster on Any of Servers

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_replica_cluster=dba.getCluster('DR_INNODB_CLS')
v_replica_cluster.addInstance('rocky95-mysql05:3306')

24-Add MySQL-06 Instance to the Replica Cluster on Any of Servers

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_replica_cluster=dba.getCluster('DR_INNODB_CLS')
v_replica_cluster.addInstance('rocky95-mysql06:3306')

25-Check Latest Replica Cluster Status on Any of Servers

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
dba.getCluster('DR_INNODB_CLS').status()

26-Configure MySQL Router on MySQL-Router03 & MySQL-Router04

# Use Below Command for Default Data and Log Paths
mysqlrouter --bootstrap innodbcluster@rocky95-mysql01 --user mysqlrouter
InnoDBcluster_2025! ---> Password

Conf File Path=/etc/mysqlrouter/mysqlrouter.conf
Log File Path=/var/log/mysqlrouter/mysqlrouter.log

# Use Below Command for Custom Data and Log Paths
mkdir -p /data/mysqlrouter
mysqlrouter --bootstrap innodbcluster@rocky95-mysql01 -d /data/mysqlrouter --user mysqlrouter --force
InnoDBcluster_2025! ---> Password

Conf File Path=/data/mysqlrouter/mysqlrouter.conf
Log File Path=/data/mysqlrouter/log/mysqlrouter.log

# Enable and Start Service
systemctl enable mysqlrouter
systemctl start mysqlrouter
systemctl status mysqlrouter

Output of MySQL-Router03

Output of MySQL-Router04

27-Check Newly Added MySQL Router Status on Any of DRC MySQL Router Servers (MySQL-Router03 & MySQL-Router04)

tail -10f /data/mysqlrouter/log/mysqlrouter.log

28-Check Newly Added MySQL Routers (MySQL-Router03 & MySQL-Router04) on Any of Servers

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_clusterset=dba.getClusterSet()
v_clusterset.listRouters()
Sample Output

[root@rocky95-mysql01 ~]# mysqlsh --uri root@localhost
MySQL Shell 8.0.40

Copyright (c) 2016, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 16877 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS > \c innodbcluster@rocky95-mysql01:3306
Creating a session to 'innodbcluster@rocky95-mysql01:3306'
Fetching schema names for auto-completion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 16888
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  rocky95-mysql01:3306 ssl  JS > var v_clusterset=dba.getClusterSet()
 MySQL  rocky95-mysql01:3306 ssl  JS > v_clusterset.listRouters()
WARNING: The following Routers were bootstrapped before the ClusterSet was created: [rocky95-mysqlrouter01::system, rocky95-mysqlrouter02::system]. Please re-bootstrap the Routers to ensure the ClusterSet is recognized and the configurations are updated. Otherwise, Routers will operate as if the Clusters were standalone.
{
    "domainName": "InnoDBClusterSet", 
    "routers": {
        "rocky95-mysqlrouter01::": {
            "hostname": "rocky95-mysqlrouter01", 
            "lastCheckIn": "2025-08-03 12:12:12", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.40"
        }, 
        "rocky95-mysqlrouter01::system": {
            "hostname": "rocky95-mysqlrouter01", 
            "lastCheckIn": "2025-01-18 15:33:05", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "routerErrors": [
                "WARNING: Router must be bootstrapped again for the ClusterSet to be recognized."
            ], 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.40"
        }, 
        "rocky95-mysqlrouter02::": {
            "hostname": "rocky95-mysqlrouter02", 
            "lastCheckIn": "2025-08-03 12:12:12", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.40"
        }, 
        "rocky95-mysqlrouter02::system": {
            "hostname": "rocky95-mysqlrouter02", 
            "lastCheckIn": "2025-01-18 15:26:49", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "routerErrors": [
                "WARNING: Router must be bootstrapped again for the ClusterSet to be recognized."
            ], 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.40"
        }, 
        "rocky95-mysqlrouter03::": {
            "hostname": "rocky95-mysqlrouter03", 
            "lastCheckIn": "2025-08-03 13:42:31", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.43"
        }, 
        "rocky95-mysqlrouter04::": {
            "hostname": "rocky95-mysqlrouter04", 
            "lastCheckIn": "2025-08-03 13:43:48", 
            "roPort": "6447", 
            "roXPort": "6449", 
            "rwPort": "6446", 
            "rwXPort": "6448", 
            "targetCluster": null, 
            "version": "8.0.43"
        }
    }
}
 MySQL  rocky95-mysql01:3306 ssl  JS >

29-Install KeepAliveD on MySQL-Router03 & MySQL-Router04

sudo yum -y install keepalived
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.backup

30-Configure KeepAliveD on MySQL-Router03 & MySQL-Router04

  • Important Points
    • state must be MASTER on one router and BACKUP on the other.
    • Use your own public interface name (check it with the ifconfig or ip a command).
    • virtual_router_id must be the same on both routers.
    • priority must be higher on the MASTER router than on the BACKUP router.
    • Use your own virtual_ipaddress.

MySQL-Router03

vi /etc/keepalived/keepalived.conf

vrrp_script chk_mysqlrouter {
    script "/bin/killall -0 /usr/bin/mysqlrouter"  # Check if mysqlrouter is running
    interval 2          # Check every 2 seconds
    weight 2            # Increase priority by 2 if script succeeds
    fall 2              # Consider the service down after 2 consecutive failures
    rise 2              # Consider the service back up after 2 consecutive successes
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s3
    virtual_router_id 51
    priority 102
    advert_int 1
    virtual_ipaddress {
        192.168.1.101
    }

    track_script {
        chk_mysqlrouter
    }
}

MySQL-Router04

vi /etc/keepalived/keepalived.conf

vrrp_script chk_mysqlrouter {
    script "/bin/killall -0 /usr/bin/mysqlrouter"  # Check if mysqlrouter is running
    interval 2          # Check every 2 seconds
    weight 2            # Increase priority by 2 if script succeeds
    fall 2              # Consider the service down after 2 consecutive failures
    rise 2              # Consider the service back up after 2 consecutive successes
}

vrrp_instance VI_2 {
    state BACKUP
    interface enp0s3
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.168.1.101
    }

    track_script {
        chk_mysqlrouter
    }
}

31-Enable/Start KeepAliveD Service on MySQL-Router03 & MySQL-Router04

systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived
# Check KeepaliveD IP
ip a | grep 192.168.1.101
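
# (Optional) Failover test for the virtual IP, assuming MySQL-Router03 currently holds it:
# stopping MySQL Router on the MASTER makes the keepalived track_script fail, so the VIP
# should move to MySQL-Router04; starting MySQL Router again moves it back.

# On MySQL-Router03 (current MASTER)
systemctl stop mysqlrouter
ip a | grep 192.168.1.101    # VIP should no longer be listed here

# On MySQL-Router04 (BACKUP)
ip a | grep 192.168.1.101    # VIP should now appear here

# Restore the original state on MySQL-Router03
systemctl start mysqlrouter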

32-Test Primary Cluster KeepAliveD Virtual IP and Ports (RW & RO)

# RW Test
while [ 1 ]; do sleep 1; mysql -h 192.168.1.101 -P 6446 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"; done
# RO Test
while [ 1 ]; do sleep 1; mysql -h 192.168.1.101 -P 6447 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"; done

33-Create Table and Insert Some Data for Test Purpose on Primary MySQL-01 Instance

mysqlsh --uri root@localhost

\sql

create database primary_testdb;

use primary_testdb;

CREATE TABLE primary_testdb_table (id INT AUTO_INCREMENT PRIMARY KEY, replication_test varchar(50));

INSERT INTO primary_testdb_table (replication_test) VALUES
('Test01 was inserted by mysql01'),('Test02 was inserted by mysql01'),
('Test03 was inserted by mysql01'),('Test04 was inserted by mysql01'),
('Test05 was inserted by mysql01'),('Test06 was inserted by mysql01');

SELECT * FROM primary_testdb_table;

select @@hostname;

34-Set Replica Cluster (DR_INNODB_CLS) as Primary Cluster & Check ClusterSet Status

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_clusterset=dba.getClusterSet()
v_clusterset.setPrimaryCluster('DR_INNODB_CLS')
v_clusterset.status()

35-Check New Primary Cluster Status on Any of DRC MySQL Servers (MySQL-04 & MySQL-05 & MySQL-06)

mysqlsh --uri root@localhost

\c innodbcluster@rocky95-mysql04:3306
InnoDBcluster_2025!

dba.getCluster('DR_INNODB_CLS').status()

36-Check Newly Created Table on DRC MySQL Servers (MySQL-04 & MySQL-05 & MySQL-06)

mysqlsh --uri root@localhost

\sql

use primary_testdb;

SELECT * FROM primary_testdb_table;

select @@hostname;

37-Test New Primary Cluster KeepAliveD Virtual IP and Ports (RW & RO)

# RW Test
while [ 1 ]; do sleep 1; mysql -h 192.168.1.101 -P 6446 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"; done
# RO Test
while [ 1 ]; do sleep 1; mysql -h 192.168.1.101 -P 6447 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"; done

38-Create Table and Insert Some Data for Test Purpose on New Primary DRC MySQL-04 Server

mysqlsh --uri root@localhost

\sql

create database new_primary_testdb;

use new_primary_testdb;

CREATE TABLE new_primary_testdb_table (id INT AUTO_INCREMENT PRIMARY KEY, replication_test varchar(50));

INSERT INTO new_primary_testdb_table (replication_test) VALUES
('Test01 was inserted by mysql04'),('Test02 was inserted by mysql04'),
('Test03 was inserted by mysql04'),('Test04 was inserted by mysql04'),
('Test05 was inserted by mysql04'),('Test06 was inserted by mysql04');

SELECT * FROM new_primary_testdb_table;

select @@hostname;

39-Set Original Primary Cluster (INNODB_CLS) as Primary Cluster & Check ClusterSet Status

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_clusterset=dba.getClusterSet()
v_clusterset.setPrimaryCluster('INNODB_CLS')
v_clusterset.status()

40-Check Original Primary Cluster Status on Any of Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)

mysqlsh --uri root@localhost

\c innodbcluster@rocky95-mysql01:3306
InnoDBcluster_2025!

dba.getCluster('INNODB_CLS').status()

41-Check Table and Data Created While DRC Servers Are Primary on Primary MySQL Servers (MySQL-01 & MySQL-02 & MySQL-03)

mysqlsh --uri root@localhost

\sql

use new_primary_testdb;

SELECT * FROM new_primary_testdb_table;

select @@hostname;

How to Reset MySQL Root Password for MySQL 8.0?

I have tried to document step by step How to Reset MySQL Root Password for MySQL 8.0. I hope it will be useful for those who need it!

High Level Steps

1-Overview
2-Stop the MySQL Service
3-Start MySQL in Password-less Mode as the mysql User
4-Connect to MySQL without a Password
5-Reset the root User’s Password and Authentication Plugin
6-Stop the MySQL Server and Restart it Normally
7-Log in as root and Set a New Password
8-(Optional) Verify Plugin and Host Settings
9-Connect as root with Your New Password

Low Level Steps

1-Overview

  • These steps are for when you cannot access MySQL as root (e.g., ERROR 1045).
  • You will start MySQL with --skip-grant-tables to reset the password.

2-Stop the MySQL Service

systemctl stop mysqld

systemctl status mysqld.service

3-Start MySQL in Password-less Mode as the mysql User

If this locks your terminal session, open a new terminal and continue the next steps there.

su -s /bin/bash mysql -c "/usr/sbin/mysqld --skip-grant-tables --skip-networking"

4-Connect to MySQL without a Password

mysql

5-Reset the root User’s Password and Authentication Plugin

After this, the root user will be allowed to connect without a password.

-- Set the authentication plugin to mysql_native_password
UPDATE mysql.user 
SET plugin = 'mysql_native_password' 
WHERE user = 'root' AND host = 'localhost';

-- Clear the password field
UPDATE mysql.user 
SET authentication_string = '' 
WHERE user = 'root' AND host = 'localhost';

-- Reload privilege tables
FLUSH PRIVILEGES;

6-Stop the MySQL Server and Restart it Normally

pkill -f mysqld
systemctl start mysqld
systemctl status mysqld

7-Log in as root and Set a New Password

mysql -u root
ALTER USER 'root'@'localhost' IDENTIFIED BY 'StrongPassword2025!';
FLUSH PRIVILEGES;

8-(Optional) Verify Plugin and Host Settings

SELECT user, host, plugin FROM mysql.user WHERE user = 'root';

9-Connect as root with Your New Password

mysql -u root -p
StrongPassword2025!

How to Install MySQL Enterprise Server Commercial Edition for Redhat-Based Linux (Rocky Linux)?

I have tried to document step by step How to Install MySQL Enterprise Server Commercial Edition for Redhat-Based Linux (Rocky Linux). I hope it will be useful for those who need it!

High Level Installation and Configuration Steps

1- Install Required Packages
2- Download RPMs from Oracle Software Delivery Cloud
3- Upload RPMs to Servers
4- Install MySQL Enterprise Server
5- Check & Start & Enable MySQL Service
6- Get Temporary Root Password
7- Perform Secure MySQL Installation
8- Verify Installation

Low Level Installation and Configuration Steps

1- Install Required Packages

sudo dnf update -y
sudo dnf install wget curl gnupg2 -y

2- Download RPMs from Oracle Software Delivery Cloud

3- Upload RPMs to Servers

sftp> put C:\Users\serhat.celik\Downloads\V1051243-01.zip /tmp
Uploading V1051243-01.zip to /tmp/V1051243-01.zip
put: /tmp: The system cannot find the file specified. 
  100% 1060998KB  48227KB/s 00:00:22     
C:\Users\serhat.celik\Downloads\V1051243-01.zip: 1086462570 bytes transferred in 22 seconds (48227 KB/s)
sftp>

4- Install MySQL Enterprise Server

rpm -ivh mysql-commercial-common-8.4.6-1.1.el9.x86_64.rpm
rpm -ivh mysql-commercial-client-plugins-8.4.6-1.1.el9.x86_64.rpm
rpm -ivh mysql-commercial-libs-8.4.6-1.1.el9.x86_64.rpm
rpm -ivh mysql-commercial-client-8.4.6-1.1.el9.x86_64.rpm
rpm -ivh mysql-commercial-icu-data-files-8.4.6-1.1.el9.x86_64.rpm
rpm -ivh mysql-commercial-server-8.4.6-1.1.el9.x86_64.rpm

[root@mysqlEE-primary V1051243-01]# rpm -ivh mysql-commercial-common-8.4.6-1.1.el9.x86_64.rpm 
warning: mysql-commercial-common-8.4.6-1.1.el9.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID a8d3785c: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-commercial-common-8.4.6-1.1################################# [100%]
[root@mysqlEE-primary V1051243-01]#

[root@mysqlEE-primary V1051243-01]# rpm -ivh mysql-commercial-client-plugins-8.4.6-1.1.el9.x86_64.rpm 
warning: mysql-commercial-client-plugins-8.4.6-1.1.el9.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID a8d3785c: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-commercial-client-plugins-8################################# [100%]
[root@mysqlEE-primary V1051243-01]#

[root@mysqlEE-primary V1051243-01]# rpm -ivh mysql-commercial-libs-8.4.6-1.1.el9.x86_64.rpm 
warning: mysql-commercial-libs-8.4.6-1.1.el9.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID a8d3785c: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-commercial-libs-8.4.6-1.1.e################################# [100%]
[root@mysqlEE-primary V1051243-01]#

[root@mysqlEE-primary V1051243-01]# rpm -ivh mysql-commercial-client-8.4.6-1.1.el9.x86_64.rpm 
warning: mysql-commercial-client-8.4.6-1.1.el9.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID a8d3785c: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-commercial-client-8.4.6-1.1################################# [100%]
[root@mysqlEE-primary V1051243-01]#

[root@mysqlEE-primary V1051243-01]# rpm -ivh mysql-commercial-icu-data-files-8.4.6-1.1.el9.x86_64.rpm 
warning: mysql-commercial-icu-data-files-8.4.6-1.1.el9.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID a8d3785c: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-commercial-icu-data-files-8################################# [100%]
[root@mysqlEE-primary V1051243-01]#

[root@mysqlEE-primary V1051243-01]# rpm -ivh mysql-commercial-server-8.4.6-1.1.el9.x86_64.rpm 
warning: mysql-commercial-server-8.4.6-1.1.el9.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID a8d3785c: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-commercial-server-8.4.6-1.1################################# [100%]
[root@mysqlEE-primary V1051243-01]#
# Verify Installed Packages
pwd
ls -lrt
rpm -qa | grep mysql-commercial

5- Check & Start & Enable MySQL Service

sudo systemctl status mysqld
sudo systemctl enable mysqld
sudo systemctl start mysqld
sudo systemctl status mysqld

6- Get Temporary Root Password

sudo grep 'temporary password' /var/log/mysqld.log

VjrEvd9>j3s)

7- Perform Secure MySQL Installation

sudo mysql_secure_installation
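
# mysql_secure_installation prompts for the temporary password from the previous step and
# forces you to set a new one. If you prefer to change the root password manually first,
# a minimal sketch (the new password is the example value used later in this article):
mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY 'StrongPassword2025!';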

8- Verify Installation

mysql --version
mysql -uroot -pStrongPassword2025!
SELECT VERSION();

How to Prevent Creation of Non-PK Tables in MySQL InnoDB Cluster Environment?

I have tried to document step by step How to Prevent Creation of Non-PK Tables in MySQL InnoDB Cluster Environment. I hope it will be useful for those who need it!

High Level Steps

1. General Information
1.1. Which Parameter Prevents the Creation of Tables Without a Primary Key in a MySQL InnoDB Cluster Environment?
1.2. Can the sql_require_primary_key Parameter Be Changed Dynamically Without Restarting the DB Service?
2. Test Operations
2.1. sql_require_primary_key=OFF & Non-PK Table Can Be Created
2.1.1. Check sql_require_primary_key Parameter
2.1.2. Try to Create Table Without PK
2.2. sql_require_primary_key=ON & Non-PK Table Can Not Be Created
2.2.1. Add sql_require_primary_key Parameter to MySQL Server Conf File
2.2.2. Restart & Check MySQL Service
2.2.3. Check Primary Node in the Cluster
2.2.4. Change Primary Node (If Needed)
2.2.5. Try to Create Table Without PK

1. General Information

1.1. Which Parameter Prevents the Creation of Tables Without a Primary Key in a MySQL InnoDB Cluster Environment?

In a MySQL InnoDB Cluster environment, the creation of tables without a Primary Key (PK) can be prevented using the sql_require_primary_key parameter.

1.1.1. Explanation
When sql_require_primary_key=ON is set, any CREATE TABLE or ALTER TABLE statement that attempts to define an InnoDB table without a primary key will result in an error.

1.1.2. Usage
This setting is recommended especially in replication environments and high availability architectures such as InnoDB Cluster, to maintain data consistency. A missing primary key in an InnoDB table can lead to issues related to replication, clustering, and performance.

[mysqld]
sql_require_primary_key = ON

1.1.3. Notes

  • This parameter applies only to InnoDB tables.
  • Its default value is OFF.
  • It is supported from MySQL 8.0.13 onwards.

1.1.4. Checking the Value

SHOW VARIABLES LIKE 'sql_require_primary_key';

1.2. Can the sql_require_primary_key Parameter Be Changed Dynamically Without Restarting the DB Service?

No. sql_require_primary_key is not a dynamic system variable; it cannot be changed without restarting the database service.

1.2.1. Details
You can check the current value using:

SHOW VARIABLES LIKE 'sql_require_primary_key';

However, if you try to run:

SET GLOBAL sql_require_primary_key = ON;

You will receive an error, because this variable:

  • Can only be configured via the configuration file (my.cnf)
  • Cannot be modified using SET GLOBAL/SESSION commands (an alternative using SET PERSIST_ONLY is shown below).
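
As an alternative to editing my.cnf by hand, MySQL 8.0 also offers SET PERSIST_ONLY, which only records the value in mysqld-auto.cnf and does not change the running server, so it still requires a restart to take effect. A minimal sketch, assuming the account has the SYSTEM_VARIABLES_ADMIN and PERSIST_RO_VARIABLES_ADMIN privileges:

SET PERSIST_ONLY sql_require_primary_key = ON;
-- The new value is applied the next time the MySQL service is restarted.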

2. Test Operations

2.1. sql_require_primary_key=OFF & Non-PK Table Can Be Created

2.1.1. Check sql_require_primary_key Parameter

SHOW VARIABLES LIKE 'sql_require_primary_key';

2.1.2. Try to Create Table Without PK

Since the sql_require_primary_key parameter is OFF, a table without a PK is allowed to be created as expected.

mysqlsh --uri root@localhost
\sql
show databases;
use testdb;

CREATE TABLE customers (
    customer_id INT,
    name VARCHAR(100),
    email VARCHAR(100),
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
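
A quick way to confirm (and later audit) which tables in a schema were created without a primary key is to query information_schema; a sketch for the testdb schema used above:

SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON  c.table_schema = t.table_schema
       AND c.table_name = t.table_name
       AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema = 'testdb'
  AND c.constraint_name IS NULL;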

2.2. sql_require_primary_key=ON & Non-PK Table Can Not Be Created

2.2.1. Add sql_require_primary_key Parameter to MySQL Server Conf File
Add sql_require_primary_key parameter on all servers in the cluster.

vi /etc/my.cnf.d/mysql-server.cnf

# Prevent creation of InnoDB tables without a primary key (recommended for replication and HA).
sql_require_primary_key = ON

# Check sql_require_primary_key Parameter
more /etc/my.cnf.d/mysql-server.cnf | grep sql_require_primary_key

2.2.2. Restart & Check MySQL Service
Restart all servers in the cluster to take effect.

systemctl restart mysqld.service

systemctl status mysqld.service

2.2.3. Check Primary Node in the Cluster

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_cluster= dba.getCluster('INNODB_CLS')
v_cluster.status()

2.2.4. Change Primary Node (If Needed)

v_cluster.setPrimaryInstance('innodbcluster@rocky95-mysql01')
v_cluster.status()

2.2.5. Try to Create Table Without PK
Since the sql_require_primary_key parameter is ON, a table without a PK is not allowed to be created as expected.

mysqlsh --uri root@localhost
\sql
show databases;
use testdb;

CREATE TABLE customers (
    customer_id INT,
    name VARCHAR(100),
    email VARCHAR(100),
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
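
If the table is still needed, the simplest fix is to define it with a primary key, which satisfies sql_require_primary_key; for example:

CREATE TABLE customers (
    customer_id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);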

How to Set Up PostgreSQL Streaming Replication on Ubuntu and Perform Failover Operations?

I have tried to document step by step How to Set Up PostgreSQL Streaming Replication on Ubuntu and Perform Failover Operations. I hope it will be useful for those who need it!

High Level Installation and Configuration Steps

1-Setup Environment Details
2-Set Hostnames on Each Server
3-Update /etc/hosts Files on All Servers
4-Disable Firewall on All Servers
5-Install PostgreSQL on All Servers
6-Check PostgreSQL Service on Primary Server (Master)
7-Stop PostgreSQL Service & Delete Existing Default Data on Standby Server (Replica)
8-Edit the PostgreSQL Configuration File on the Primary Server (Master) to Allow Replication
9-Create Replication User on Primary Server (Master)
10-Add Rule in the Host-Based Authentication File on Primary Server (Master)
11-Create Some Databases and Tables with Data on Primary Server (Master) for Test Purpose Before Replication
12-Run the pg_basebackup Command on Standby Server (Replica)
13-Create Some Databases and Tables with Data on Primary Server (Master) for Test Purpose After Replication
14-Check Test Data on Standby Server (Replica)
15-Check Replication Status on Primary Server (Master)
16-Check Wal Receiver Status on Standby Server (Replica)
17-Failover to Standby Server (Replica) Steps (If Needed)
18-Failover to Original Primary Server (Master) Steps (If Needed)
19-Re-Create Standby Server Steps (If Needed)

1-Setup Environment Details

Role                      Hostname           IP              OS Version     DB Version
Primary Server (Master)   postgres-primary   192.168.1.151   Ubuntu 24.04   PostgreSQL 16.9
Standby Server (Replica)  postgres-standby   192.168.1.152   Ubuntu 24.04   PostgreSQL 16.9

2-Set Hostnames on Each Server

192.168.1.151 -> sudo hostnamectl set-hostname postgres-primary
192.168.1.152 -> sudo hostnamectl set-hostname postgres-standby

3-Update /etc/hosts Files on All Servers

This step should be repeated on the servers below.

  • postgres-primary
  • postgres-standby

vi /etc/hosts

192.168.1.151 postgres-primary
192.168.1.152 postgres-standby

4-Disable Firewall on All Servers

This step should be repeated on the servers below.

  • postgres-primary
  • postgres-standby

systemctl status ufw
systemctl stop ufw
sudo systemctl disable ufw
sudo systemctl mask ufw
systemctl status ufw

5-Install PostgreSQL on All Servers

These steps should be repeated on each of the PostgreSQL Servers.

  • postgres-primary
  • postgres-standby

Install PostgreSQL Package Tools

This package provides tools required to manage PostgreSQL versions and to install the official PostgreSQL APT repository.

sudo apt update
sudo apt install -y postgresql-common

Add the Official PostgreSQL APT Repository

This script automatically adds the PostgreSQL Global Development Group (PGDG) APT repository to your system, enabling access to the latest PostgreSQL versions — including PostgreSQL 16.

sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh

Update Package List and Install PostgreSQL 16 with Additional Modules

  • postgresql-16: The core PostgreSQL 16 server package.
  • postgresql-contrib-16: Adds useful extensions like pg_stat_statements, uuid-ossp, and more.

sudo apt update
sudo apt install -y postgresql-16 postgresql-contrib-16
psql --version

6-Check PostgreSQL Service on Primary Server (Master)

Unlike Red Hat-based operating systems, Debian-based operating systems run initdb automatically during installation, so there is no need to run initdb manually on Ubuntu.

sudo systemctl status postgresql@16-main.service

7-Stop PostgreSQL Service & Delete Existing Default Data on Standby Server (Replica)

Since the initdb process is done automatically in Ubuntu, we stop the postgres service and delete the data directory contents on the standby server.

sudo systemctl status postgresql@16-main
sudo systemctl stop postgresql@16-main
ls -lrt /var/lib/postgresql/16/main
sudo rm -rf /var/lib/postgresql/16/main/*
ls -lrt /var/lib/postgresql/16/main

8-Edit the PostgreSQL Configuration File on the Primary Server (Master) to Allow Replication

su - postgres

cp /etc/postgresql/16/main/postgresql.conf /etc/postgresql/16/main/postgresql.conf_20250702

vi /etc/postgresql/16/main/postgresql.conf

listen_addresses='*'
wal_level=replica
max_wal_senders=10
wal_keep_size=256
hot_standby=on

Save and exit the configuration file.

::::: Explanation of the options :::::
📌 listen_addresses = '*'
- Tells PostgreSQL to listen on all network interfaces.
- Required so the standby server can connect to the primary server for replication.
📌 wal_level = replica
- Sets the level of information written to the WAL (Write-Ahead Log).
- The replica level is the minimum required for physical replication.
- Enables WAL archiving and streaming replication.
📌 max_wal_senders = 10
- Defines how many WAL sender processes the primary server can run simultaneously.
- Each standby server that connects consumes one WAL sender.
- Set this according to the number of standbys you plan to support.
📌 wal_keep_size = 256
- Specifies the minimum size (in MB) of WAL files to keep.
- Helps prevent replication failure due to missing WAL files if the standby is delayed.
📌 hot_standby = on
- Allows the standby server to accept read-only queries while in recovery mode.
- This setting must be enabled on the standby node.
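
Once PostgreSQL has been restarted (this is done in step 10 below), you can confirm that the new values are active; for example, as the postgres user:

psql -c "SELECT name, setting FROM pg_settings WHERE name IN ('listen_addresses','wal_level','max_wal_senders','wal_keep_size','hot_standby');"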

9-Create Replication User on Primary Server (Master)

su - postgres
psql
CREATE ROLE replicator WITH REPLICATION LOGIN ENCRYPTED PASSWORD 'replicator_password_2025';

::::: Explanation of the options :::::
📌 CREATE ROLE replicator: Creates a new role named replicator.
📌 WITH REPLICATION: Grants the role permission to use streaming replication.
📌 LOGIN: Allows the role to log in (without it, it's just a role, not a usable user).
📌 ENCRYPTED PASSWORD '...': Stores the password securely in the catalog.
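To optionally verify that the role was created with the expected attributes, query pg_roles from the same psql session:

SELECT rolname, rolcanlogin, rolreplication FROM pg_roles WHERE rolname = 'replicator';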

10-Add Rule in the Host-Based Authentication File on Primary Server (Master)

Add the standby server's IP address to the primary server's pg_hba.conf to allow the standby to connect to the primary for replication.

su - postgres

cp /etc/postgresql/16/main/pg_hba.conf /etc/postgresql/16/main/pg_hba.conf_20250702

vi /etc/postgresql/16/main/pg_hba.conf
# Standby server IP address; added for streaming replication
host    replication     replicator      192.168.1.152/32       md5

more /etc/postgresql/16/main/pg_hba.conf |grep 192.168.1.152

exit

systemctl restart postgresql@16-main.service

systemctl status postgresql@16-main.service

::::: Explanation of the options :::::
📌host: This connection is allowed over TCP/IP.
📌replication: This connection is for replication purposes (i.e., streaming replication).
📌replicator: The PostgreSQL user allowed to connect for replication.
📌192.168.1.152/32: The client IP address allowed to connect — here only exactly 192.168.1.152.
📌md5: Authentication method — md5 password authentication.
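Optionally, you can confirm that the new rule is visible to the server by querying the pg_hba_file_rules view (PostgreSQL 10 and later); run this as the postgres user on the primary:

psql -x -c "SELECT line_number, type, database, user_name, address, auth_method FROM pg_hba_file_rules WHERE address = '192.168.1.152';"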

11-Create Some Databases and Tables with Data on Primary Server (Master) for Test Purpose Before Replication

1. Login to PostgreSQL
sudo -u postgres psql

2. Create Test Databases
CREATE DATABASE testdb1;
CREATE DATABASE testdb2;

3. Connect to First Database
\c testdb1

4. Create Tables
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    department VARCHAR(50),
    salary NUMERIC
);

CREATE TABLE departments (
    id SERIAL PRIMARY KEY,
    name VARCHAR(50),
    location VARCHAR(100)
);

5. Insert Sample Data
INSERT INTO employees (name, department, salary) VALUES
('Alice', 'IT', 6000),
('Bob', 'HR', 5000),
('Carol', 'Finance', 5500);

INSERT INTO departments (name, location) VALUES
('IT', 'New York'),
('HR', 'Chicago'),
('Finance', 'San Francisco');

6. Check the Data
SELECT * FROM employees;
SELECT * FROM departments;

7. Switch to Second Database
\c testdb2

8. Create a Simple Table and Insert Data
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name TEXT,
    price NUMERIC
);

INSERT INTO products (name, price) VALUES
('Laptop', 1500),
('Keyboard', 100),
('Mouse', 50);

9. Check the Data
SELECT * FROM products;

12-Run the pg_basebackup Command on Standby Server (Replica)

  • Primary Server IP: 192.168.1.151
  • Standby Server Data Path: /var/lib/postgresql/16/main
sudo su - postgres

pg_basebackup -h 192.168.1.151 -U replicator -Fp -Xs -P -R -D /var/lib/postgresql/16/main

####################################################### Nohup Version ##############################################
sudo su - postgres

# Create a .pgpass file (on the standby server, as the postgres user)
vi ~/.pgpass

# Add the following line (make sure to press Enter at the end of the line):
192.168.1.151:5432:*:replicator:replicator_password_2025

# Set correct file permissions:
chmod 600 ~/.pgpass

nohup pg_basebackup -h 192.168.1.151 -U replicator -Fp -Xs -P -R -D /var/lib/postgresql/16/main > pg_basebackup.log 2>&1 &

Explanation:
nohup: Keeps the process running even if the terminal is closed.
> pg_basebackup.log 2>&1: Redirects both standard output and error to pg_basebackup.log.
&: Runs the command in the background.

You can monitor progress with:
tail -444f pg_basebackup.log
####################################################### Nohup Version ##############################################

ls -lrt /var/lib/postgresql/16/main
# Notice that standby.signal has been created; this file marks the server as a standby.

exit

systemctl enable --now postgresql@16-main.service

systemctl status postgresql@16-main.service

::::: Explanation of the options :::::
📌-h 192.168.1.151 — connect to the primary server at this IP
📌-U replicator — use the user named replicator for authentication
📌-Fp — plain output format (files are written directly in the layout of a data directory)
📌-Xs — include WAL files (stream them) to keep the backup consistent
📌-P — show progress during backup
📌-R — write a standby.signal file and append the connection settings (primary_conninfo) to postgresql.auto.conf for automatic replication setup
📌-D /var/lib/postgresql/16/main — destination directory for the base backup (typically the data directory on standby)
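Because -R was used, the connection settings are also written to postgresql.auto.conf in the data directory. As a quick sanity check (the exact contents depend on your environment), you can inspect these files as the postgres user on the standby:

ls -l /var/lib/postgresql/16/main/standby.signal
cat /var/lib/postgresql/16/main/postgresql.auto.conf
# Expect a primary_conninfo line pointing at host=192.168.1.151 and user=replicator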

13-Create Some Databases and Tables with Data on Primary Server (Master) for Test Purpose After Replication

1. Login to PostgreSQL
sudo -u postgres psql

2. Create Test Databases
CREATE DATABASE testdb3;

3. Connect to the Third Database
\c testdb3

4. Create Tables
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    department VARCHAR(50),
    salary NUMERIC
);

CREATE TABLE departments (
    id SERIAL PRIMARY KEY,
    name VARCHAR(50),
    location VARCHAR(100)
);

5. Insert Sample Data
INSERT INTO employees (name, department, salary) VALUES
('Alice', 'IT', 6000),
('Bob', 'HR', 5000),
('Carol', 'Finance', 5500);

INSERT INTO departments (name, location) VALUES
('IT', 'New York'),
('HR', 'Chicago'),
('Finance', 'San Francisco');

6. Check the Data
SELECT * FROM employees;
SELECT * FROM departments;

14-Check Test Data on Standby Server (Replica)

1. Login to PostgreSQL
sudo -u postgres psql

2. Connect to the Third Database
\c testdb3

3. Check the Data
SELECT * FROM employees;
SELECT * FROM departments;

15-Check Replication Status on Primary Server (Master)

su - postgres

psql -x -c "SELECT * FROM pg_stat_replication;"

This output confirms that streaming replication between the primary and standby servers is working correctly.

::::: Explanation of the options :::::
📌usename: replicator – replication user.
📌client_addr: 192.168.1.152 – IP of the standby server.
📌state: streaming – replication is actively streaming WAL data.
📌sent_lsn / write_lsn / flush_lsn / replay_lsn: All are equal (0/3439980), meaning the standby has caught up with the primary WAL position.
📌sync_state: async – replication is in asynchronous mode.
📌reply_time: Timestamp of the last acknowledgement from the standby.

Since there are no lags reported, your setup is healthy and synchronized.
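If you also want a numeric view of replication lag, the following query is a small sketch using pg_wal_lsn_diff (run on the primary; the lag columns are available in PostgreSQL 10 and later):

psql -x -c "SELECT application_name, client_addr, state, pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes, write_lag, flush_lag, replay_lag FROM pg_stat_replication;"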

16-Check Wal Receiver Status on Standby Server (Replica)

su - postgres

psql -x -c "select * from pg_stat_wal_receiver;"
::::: Explanation of the options :::::
📌status: streaming → The standby is actively streaming WALs from the primary.
📌written_lsn / flushed_lsn: 0/3439980 → WAL records are being received and flushed.
📌sender_host: 192.168.1.151 → This confirms it's connected to the correct primary server.
📌last_msg_send_time and last_msg_receipt_time → WAL messages are being exchanged recently (a few seconds ago).
📌latest_end_time: 2025-07-02 22:13:08 → Latest WAL entry received timestamp, although it seems a bit old (check if primary is idle or paused).
📌slot_name is empty → You’re not using a replication slot. This is OK for basic replication, but:
-Without a slot, WAL segments could be recycled before the standby has fetched them if it falls behind.
-To make replication more resilient, consider configuring a replication slot.
📌latest_end_time is lagging (22:13:08) while last_msg_receipt_time is current (22:15:36) → This might indicate that the primary hasn't generated new WALs recently, or there is a lag in applying them on standby. You can cross-check this with:
SELECT pg_current_wal_lsn();  -- on primary
SELECT pg_last_wal_replay_lsn();  -- on standby
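If you decide to add a replication slot as suggested above, a minimal sketch could look like the following (the slot name standby1_slot is only an example):

# On the primary server:
psql -c "SELECT pg_create_physical_replication_slot('standby1_slot');"

# On the standby server, add the slot name to postgresql.conf and restart the service:
# primary_slot_name = 'standby1_slot'
sudo systemctl restart postgresql@16-main

# Back on the primary, verify that the slot is being used:
psql -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"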

17-Failover to Standby Server (Replica) Steps (If Needed)

17-1-Enable wal_log_hints for Failover on Primary Server (Master)

su - postgres

vi /etc/postgresql/16/main/postgresql.conf

wal_log_hints=on

Save and exit the configuration file.

Note: wal_log_hints = on is not strictly required for promotion itself, but it is needed later if you want to use pg_rewind to resynchronize a former primary without taking a new full base backup.

exit

systemctl restart postgresql@16-main.service

systemctl status postgresql@16-main.service

17-2-Stop PostgreSQL Service on Primary Server (Master)

sudo su -

systemctl stop postgresql@16-main.service

systemctl status postgresql@16-main.service

17-3-Check Recovery Mode on Standby Server (Replica)

su - postgres

psql -c "SELECT pg_is_in_recovery();"

17-4-Promote Standby Server as New Primary Server (Master)

su - postgres
/usr/lib/postgresql/16/bin/pg_ctl promote -D /var/lib/postgresql/16/main
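As an alternative on PostgreSQL 12 and later, the same promotion can be triggered from SQL with pg_promote():

psql -c "SELECT pg_promote();"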

17-5-Verify Recovery Mode on New Primary Server (Master)

It should return false.

su - postgres
psql -c "SELECT pg_is_in_recovery();"

18-Failover to Original Primary Server (Master) Steps (If Needed)

18-1-Stop PostgreSQL Service & Delete Existing Data on Old Primary Server

sudo systemctl status postgresql@16-main
sudo systemctl stop postgresql@16-main
ls -lrt /var/lib/postgresql/16/main
sudo rm -rf /var/lib/postgresql/16/main/*
ls -lrt /var/lib/postgresql/16/main

18-2-Edit the PostgreSQL Configuration File on the New Primary Server to Allow Replication

su - postgres

cp /etc/postgresql/16/main/postgresql.conf /etc/postgresql/16/main/postgresql.conf_20250702

vi /etc/postgresql/16/main/postgresql.conf

listen_addresses='*'
wal_level=replica
max_wal_senders=10
wal_keep_size=256
hot_standby=on

Save and exit the configuration file.

18-3-Add Rule in the Host-Based Authentication File on New Primary Server

Add the old primary server's IP address (which will now act as the standby) to the new primary server's pg_hba.conf to allow it to connect for replication.

su - postgres

cp /etc/postgresql/16/main/pg_hba.conf /etc/postgresql/16/main/pg_hba.conf_20250702

vi /etc/postgresql/16/main/pg_hba.conf
# Old primary server IP address (now acting as the standby); added for streaming replication
host    replication     replicator      192.168.1.151/32       md5

more /etc/postgresql/16/main/pg_hba.conf |grep 192.168.1.151

exit

systemctl restart postgresql@16-main.service

systemctl status postgresql@16-main.service

18-4-Run the pg_basebackup Command on Old Primary Server

  • Primary Server IP: 192.168.1.152
  • Standby Server Data Path: /var/lib/postgresql/16/main
sudo su - postgres

pg_basebackup -h 192.168.1.152 -U replicator -Fp -Xs -P -R -D /var/lib/postgresql/16/main

####################################################### Nohup Version ##############################################
sudo su - postgres

# Create a .pgpass file (on the standby server, as the postgres user)
vi ~/.pgpass

# Add the following line (make sure to press Enter at the end of the line):
192.168.1.152:5432:*:replicator:replicator_password_2025

# Set correct file permissions:
chmod 600 ~/.pgpass

nohup pg_basebackup -h 192.168.1.152 -U replicator -Fp -Xs -P -R -D /var/lib/postgresql/16/main > pg_basebackup.log 2>&1 &

Explanation:
nohup: Keeps the process running even if the terminal is closed.
> pg_basebackup.log 2>&1: Redirects both standard output and error to pg_basebackup.log.
&: Runs the command in the background.

You can monitor progress with:
tail -444f pg_basebackup.log
####################################################### Nohup Version ##############################################

ls -lrt /var/lib/postgresql/16/main
# Notice that standby.signal has been created; this file marks the server as a standby.

exit

systemctl status postgresql@16-main.service

systemctl start postgresql@16-main.service

18-5-Create Some Databases and Tables with Data on New Primary Server for Test Purpose After Replication

1. Login to PostgreSQL
sudo -u postgres psql

2. Create Test Databases
CREATE DATABASE testdb4;

3. Connect to the Fourth Database
\c testdb4

4. Create Tables
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    department VARCHAR(50),
    salary NUMERIC
);

CREATE TABLE departments (
    id SERIAL PRIMARY KEY,
    name VARCHAR(50),
    location VARCHAR(100)
);

5. Insert Sample Data
INSERT INTO employees (name, department, salary) VALUES
('Alice', 'IT', 6000),
('Bob', 'HR', 5000),
('Carol', 'Finance', 5500);

INSERT INTO departments (name, location) VALUES
('IT', 'New York'),
('HR', 'Chicago'),
('Finance', 'San Francisco');

6. Check the Data
SELECT * FROM employees;
SELECT * FROM departments;

18-6-Check Test Data on Old Primary Server

1. Login to PostgreSQL
sudo -u postgres psql

2. Connect to the Fourth Database
\c testdb4

3. Check the Data
SELECT * FROM employees;
SELECT * FROM departments;

18-7-Check Replication Status on New Primary Server

su - postgres

psql -x -c "SELECT * FROM pg_stat_replication;"

18-8-Check Wal Receiver Status on Old Primary Server

su - postgres

psql -x -c "select * from pg_stat_wal_receiver;"

18-9-Enable wal_log_hints for Failover on New Primary Server

su - postgres

vi /etc/postgresql/16/main/postgresql.conf

wal_log_hints=on

Save and exit the configuration file.

exit

systemctl restart postgresql@16-main.service

systemctl status postgresql@16-main.service

18-10-Stop PostgreSQL Service on New Primary Server

sudo su -

systemctl stop postgresql@16-main.service

systemctl status postgresql@16-main.service

18-11-Check Recovery Mode on Old Primary Server

su - postgres

psql -c "SELECT pg_is_in_recovery();"

18-12-Promote Old Primary Server Back to Its Original Primary (Master) Role

su - postgres
/usr/lib/postgresql/16/bin/pg_ctl promote -D /var/lib/postgresql/16/main

18-13-Verify Recovery Mode on Original Primary Server (Master)

It should return false.

su - postgres
psql -c "SELECT pg_is_in_recovery();"

19-Re-Create Standby Server Steps (If Needed)

19-1-Stop PostgreSQL Service & Delete Existing Data on Standby Server (Replica)

sudo systemctl status postgresql@16-main
sudo systemctl stop postgresql@16-main
ls -lrt /var/lib/postgresql/16/main
sudo rm -rf /var/lib/postgresql/16/main/*
ls -lrt /var/lib/postgresql/16/main

19-2-Run the pg_basebackup Command on Standby Server (Replica)

  • Primary Server IP: 192.168.1.151
  • Standby Server Data Path: /var/lib/postgresql/16/main
sudo su - postgres

pg_basebackup -h 192.168.1.151 -U replicator -Fp -Xs -P -R -D /var/lib/postgresql/16/main

####################################################### Nohup Version ##############################################
sudo su - postgres

nohup pg_basebackup -h 192.168.1.151 -U replicator -Fp -Xs -P -R -D /var/lib/postgresql/16/main > pg_basebackup.log 2>&1 &

You can monitor progress with:
tail -444f pg_basebackup.log
####################################################### Nohup Version ##############################################

ls -lrt /var/lib/postgresql/16/main
# Notice that standby.signal has been created; this file marks the server as a standby.

exit

systemctl start postgresql@16-main.service

systemctl status postgresql@16-main.service

19-3-Create Some Databases and Tables with Data on Primary Server (Master) for Test Purpose After Replication

1. Login to PostgreSQL
sudo -u postgres psql

2. Create Test Databases
CREATE DATABASE testdb5;

3. Connect to the Fifth Database
\c testdb5

4. Create Tables
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    department VARCHAR(50),
    salary NUMERIC
);

CREATE TABLE departments (
    id SERIAL PRIMARY KEY,
    name VARCHAR(50),
    location VARCHAR(100)
);

5. Insert Sample Data
INSERT INTO employees (name, department, salary) VALUES
('Alice', 'IT', 6000),
('Bob', 'HR', 5000),
('Carol', 'Finance', 5500);

INSERT INTO departments (name, location) VALUES
('IT', 'New York'),
('HR', 'Chicago'),
('Finance', 'San Francisco');

6. Check the Data
SELECT * FROM employees;
SELECT * FROM departments;

19-4-Check Test Data on Standby Server (Replica)

1. Login to PostgreSQL
sudo -u postgres psql

2. Connect to the Fifth Database
\c testdb5

3. Check the Data
SELECT * FROM employees;
SELECT * FROM departments;

19-5-Check Replication Status on Primary Server (Master)

su - postgres

psql -x -c "SELECT * FROM pg_stat_replication;"

19-6-Check Wal Receiver Status on Standby Server (Replica)

su - postgres

psql -x -c "select * from pg_stat_wal_receiver;"

MySQL InnoDB Cluster vs Galera Cluster

Here’s a concise comparison of MySQL InnoDB Cluster and Galera Cluster, focusing on architecture, replication, availability, and use cases.

🔁 Replication Technology

Feature | MySQL InnoDB Cluster | Galera Cluster
Replication Type | Asynchronous / Semi-synchronous (Group Replication) | Synchronous multi-master (via WSREP)
Conflict Handling | Built-in conflict detection with Group Replication | Virtually no conflicts due to synchronous replication
Write Scalability | Writes go to a single primary node | All nodes are writable (true multi-master)

🏗️ Architecture and Setup

Feature | MySQL InnoDB Cluster | Galera Cluster
Setup Complexity | Easier with MySQL Shell and AdminAPI | More complex, manual configuration
Native Tools | MySQL Shell, Router, AdminAPI | No official GUI/management; requires manual scripts
Load Balancer | MySQL Router, ProxySQL | Requires HAProxy or ProxySQL for load balancing

🔐 Data Consistency & Availability

Feature | MySQL InnoDB Cluster | Galera Cluster
Consistency Model | Eventual / Tunable consistency | Strong consistency (writes are committed on all nodes)
Failover | Automatic failover supported natively | Automatic failover supported
Node Roles | Single-Primary or Multi-Primary mode | All nodes act as primary

📦 Backup, Restore & Tooling

Feature | MySQL InnoDB Cluster | Galera Cluster
MySQL Enterprise Backup | Fully supported | Not officially supported
SST/IST Sync Mechanisms | Not required (replicates binlogs) | Uses SST/IST for syncing new or recovering nodes

🎯 Use Case Fit

Feature | MySQL InnoDB Cluster | Galera Cluster
Best For | Enterprises using MySQL Enterprise; OLTP systems | Synchronous HA environments, write-anywhere workloads
Recommended Topology | Read scale-out, write to one node | Write/read scale-out with even distribution

Pros and Cons Summary

Cluster Type | ✅ Pros | ❌ Cons
InnoDB Cluster | Official MySQL support, easier setup, integrated tools | Not truly multi-master (write-splitting discouraged)
Galera Cluster | True multi-master, synchronous consistency, high availability | Sensitive to network latency, more complex operations

pg_basebackup vs pgBackRest

pg_basebackup and pgBackRest are PostgreSQL database backup tools, but they have significant differences in their intended use, features, and flexibility. The following table summarizes the comparison of these two tools.

Comparison: pg_basebackup vs pgBackRest

Feature | pg_basebackup | pgBackRest
Official Support | Core PostgreSQL utility | Community-supported, widely adopted
Backup Type | Full physical backup only | Full, incremental, and differential physical backups
Restore Process | Manual | Automated and flexible
WAL Archiving | Requires external scripting | Built-in WAL archiving and restore support
Encryption | Not supported natively | Supported
Compression | Limited (gzip with tar-format backups) | Supported
Backup Validation | Not available | Supported
Multi-Server Support | No | Yes (parallel backup & restore, remote backup servers)
Backup Targets | Local or remote server | Local, remote, and cloud (e.g., S3)
Scheduling | Needs manual cron setup | Easily scriptable and cron-friendly
Monitoring/Logging | Basic stdout logging | Advanced logging and reporting
Ease of Use | Simple (good for basic needs) | More complex but highly powerful for production environments

When to use pg_basebackup?

  • For quick and simple full backups
  • When setting up streaming replication (base backup required)
  • If WAL archiving or incremental backups are not needed

When to use pgBackRest?

  • For professional backup strategies in production
  • When you need incremental backups and fast restores
  • When you require features like WAL archiving, encryption, compression, and scheduling
  • For large databases and multi-server environments

Conclusion:

  • For simple use cases or replication setup → Use pg_basebackup
  • For robust, secure, and automated backup/restore workflows → Use pgBackRest
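For reference, a simple compressed full backup with pg_basebackup might look like the sketch below (host, user, and target path are illustrative and should be adapted to your environment):

# Tar format (-Ft) with gzip compression (-z), streaming the required WAL (-Xs):
pg_basebackup -h 192.168.1.151 -U replicator -D /backup/base_$(date +%Y%m%d_%H%M%S) -Ft -z -Xs -P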

Load Balancer Options and Comparisons for MySQL InnoDB Cluster || MySQL Router vs ProxySQL vs HAProxy

Here’s a detailed explanation of the load balancer options you can use in a MySQL InnoDB Cluster environment, including their advantages and disadvantages.

1. MySQL Router

An official product provided and maintained by Oracle, designed specifically for MySQL InnoDB Cluster.

Advantages:

  • Tight integration with InnoDB Cluster (auto-discovery via MySQL Shell).
  • Supports automatic failover and role-based routing (e.g., read/write split).
  • Easy to install and configure.
  • Native tool, officially supported.

Disadvantages:

  • Limited to basic routing algorithms (e.g., round-robin).
  • Lacks advanced query routing or performance analytics.

2. ProxySQL

A powerful, SQL-aware proxy with advanced routing and connection management capabilities.

Advantages:

  • Provides connection pooling – ideal for environments with a high number of short-lived connections.
  • Advanced query routing, caching, rewriting, and user-based routing.
  • Highly customizable health checks and load balancing.
  • Can be integrated with InnoDB Cluster (via scripts, scheduler, or monitoring tools).

Disadvantages:

  • More complex to configure compared to MySQL Router.
  • No native auto-discovery or cluster metadata awareness (requires custom setup).
  • Requires scripting or external monitoring tools for full failover management.

3. HAProxy

A well-known TCP/HTTP load balancer that works at layer 4 or layer 7; a minimal TCP-mode configuration sketch is shown after the lists below.

Advantages:

  • Very stable and fast, widely used in production.
  • Supports health checks and failover via custom logic.
  • Lightweight and efficient for TCP-based MySQL load balancing.

Disadvantages:

  • Not MySQL-protocol aware – cannot inspect or route queries based on SQL logic.
  • No connection pooling or advanced query logic.
  • No native integration with InnoDB Cluster – all configuration is manual.
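As a rough illustration of TCP-level balancing (a minimal sketch only; server names, IP addresses, and ports are assumptions, not taken from this document), an HAProxy section for MySQL could look like this:

listen mysql-cluster
    bind *:3306
    mode tcp
    balance roundrobin
    option tcp-check
    server mysql1 192.168.1.161:3306 check
    server mysql2 192.168.1.162:3306 check
    server mysql3 192.168.1.163:3306 check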

4. Keepalived (for High Availability)

Used to provide high availability by assigning a virtual IP to two or more load balancer nodes (e.g., ProxySQL or MySQL Router). It is not a load balancer itself, but works alongside one.
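A minimal VRRP instance for such a virtual IP might look like the sketch below (the interface name, priority, and the virtual IP 192.168.1.160 are assumptions):

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.160/24
    }
}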

Comparison Table

Feature | MySQL Router | ProxySQL | HAProxy
InnoDB Cluster Integration | ✅ Native | 🔶 Partial/Manual | ❌ None
Connection Pooling | ❌ No | ✅ Yes | ❌ No
Query Routing Capabilities | 🔶 Basic | ✅ Advanced | ❌ None
Health Check Support | ✅ Basic | ✅ Advanced | ✅ Moderate
Failover Management | ✅ Automatic | 🔶 Script Required | 🔶 Script Required
Ease of Configuration | ✅ Easy | ❌ Medium/Hard | 🔶 Medium
Performance | 🔶 Moderate | ✅ High | ✅ High

Recommendation Based on Use Case

  • For small to medium environments or quick deployment → use MySQL Router.
  • For large-scale environments, complex routing, or high concurrency → use ProxySQL with Keepalived for HA.
  • For simple, high-speed TCP-level balancing → use HAProxy (less flexible).