How to Set Up PostgreSQL High Availability Using Patroni & Etcd & HAProxy & Keepalived on Debian (Ubuntu)?

I have tried to document, step by step, how to set up PostgreSQL High Availability using Patroni, Etcd, HAProxy, and Keepalived on Debian (Ubuntu). I hope it will be useful for those who need it!

High Level Installation and Configuration Steps

1-Setup Environment Details
2-Set Hostnames on Each Server
3-Update /etc/hosts Files on All Servers
4-Disable Firewall on All Servers
5-Install PostgreSQL
6-Install Etcd
7-Configure Etcd
8-Install Patroni
9-Configure Patroni
10-Enable & Start Patroni
11-Reconfigure Etcd Cluster
12-Install & Configure & Start HAProxy
13-Install & Configure & Start Keepalived
14-Create Table and Insert Some Data for Test Purpose on Leader Node
15-Check Newly Created Table & Data on Replica Nodes
16-Shutdown HAProxy-01 and Ensure DB Connections Work as Expected Using KeepAliveD Virtual IP Address and Ports (RW & RO)
17-Shutdown HAProxy-02 and Ensure DB Connections Work as Expected Using KeepAliveD Virtual IP Address and Ports (RW & RO)
18-KeepAliveD Test Script Using Virtual IP Address and Ports (RW & RO)

1-Setup Environment Details

Ubuntu 24.04 – PostgreSQL 16.9 – Patroni 4.0.5 – Etcd 3.6.0 – HAProxy 2.8.5 – Keepalived 2.2.8

🔹 PostgreSQL 🔹

  • What it does: PostgreSQL is a powerful, open-source relational database management system.
  • Purpose: It stores and manages your data reliably, supporting complex queries, transactions, and data integrity.

🔹 Patroni 🔹

  • What it does: Patroni is a high-availability (HA) solution for PostgreSQL using distributed consensus.
  • Purpose: It manages PostgreSQL replication and failover automatically, ensuring that one node is always the primary and the others are replicas.

🔹 Etcd 🔹

  • What it does: Etcd is a distributed key-value store used for service discovery and coordination.
  • Purpose: In this setup, it helps Patroni coordinate cluster state and perform leader election across nodes.

🔹 HAProxy 🔹

  • What it does: HAProxy is a high-performance TCP/HTTP load balancer.
  • Purpose: It routes traffic to the correct PostgreSQL node (usually the primary), ensuring continuous database access for applications.

🔹 Keepalived 🔹

  • What it does: Keepalived provides IP failover via the Virtual Router Redundancy Protocol (VRRP).
  • Purpose: It ensures that the HAProxy IP address is always available, even if one HAProxy node fails, by switching the virtual IP to the healthy node.

2-Set Hostnames on Each Server

192.168.1.151 -> sudo hostnamectl set-hostname postgres-patroni-etcd-01
192.168.1.152 -> sudo hostnamectl set-hostname postgres-patroni-etcd-02
192.168.1.153 -> sudo hostnamectl set-hostname postgres-patroni-etcd-03
192.168.1.154 -> sudo hostnamectl set-hostname haproxy-keepalived-01
192.168.1.155 -> sudo hostnamectl set-hostname haproxy-keepalived-02

3-Update /etc/hosts Files on All Servers

This step should be repeated on the servers below.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03
  • haproxy-keepalived-01
  • haproxy-keepalived-02

vi /etc/hosts

192.168.1.151 postgres-patroni-etcd-01
192.168.1.152 postgres-patroni-etcd-02
192.168.1.153 postgres-patroni-etcd-03
192.168.1.154 haproxy-keepalived-01
192.168.1.155 haproxy-keepalived-02

4-Disable Firewall on All Servers

This step should be repeated on the servers below.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03
  • haproxy-keepalived-01
  • haproxy-keepalived-02
sudo systemctl status ufw
sudo systemctl stop ufw
sudo systemctl disable ufw
sudo systemctl mask ufw
sudo systemctl status ufw

5-Install PostgreSQL

These steps should be repeated on each of the 3 PostgreSQL Servers.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03

Install PostgreSQL Package Tools

This package provides tools required to manage PostgreSQL versions and to install the official PostgreSQL APT repository.

sudo apt update
sudo apt install -y postgresql-common

Add the Official PostgreSQL APT Repository

This script automatically adds the PostgreSQL Global Development Group (PGDG) APT repository to your system, enabling access to the latest PostgreSQL versions — including PostgreSQL 16.

sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh

Update Package List and Install PostgreSQL 16 with Additional Modules

  • postgresql-16: The core PostgreSQL 16 server package.
  • postgresql-contrib-16: Adds useful extensions like pg_stat_statements, uuid-ossp, and more.
sudo apt update
sudo apt install -y postgresql-16 postgresql-contrib-16

Stop and Disable PostgreSQL Service

Patroni will take control of the PostgreSQL process — including startup, shutdown, and restart. Therefore, the default systemd-managed PostgreSQL service must be stopped and disabled.

sudo systemctl stop postgresql
sudo systemctl disable postgresql
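
Optionally, verify that nothing is listening on 5432 anymore and that the PostgreSQL 16 binaries are in the location the Patroni configuration will point to later (a quick sanity check; /usr/lib/postgresql/16/bin is the standard PGDG path used in this guide):

ss -ltn | grep 5432 || echo "nothing listening on 5432"
/usr/lib/postgresql/16/bin/postgres --version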

6-Install Etcd

These steps should be repeated on each of the 3 Etcd Servers.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03

Install Prerequisites

sudo apt update
sudo apt-get install -y wget curl

Download Latest Etcd Release

Go to https://github.com/etcd-io/etcd/releases and copy the latest Linux tarball link (adjust the version if needed):

cd /tmp

wget https://github.com/etcd-io/etcd/releases/download/v3.6.0/etcd-v3.6.0-linux-amd64.tar.gz

Extract and Rename Directory

tar xvf etcd-v3.6.0-linux-amd64.tar.gz
mv etcd-v3.6.0-linux-amd64 etcd

Move Binaries to /usr/local/bin

sudo mv etcd/etcd* /usr/local/bin/

Verify Installation

etcd --version
etcdctl version

Create System User for Etcd

sudo useradd --system --home /var/lib/etcd --shell /bin/false etcd

Prepare Etcd Configuration Directory

sudo mkdir -p /etc/etcd
sudo chown -R etcd:etcd /etc/etcd

7-Configure Etcd

Create Etcd Config File

Etcd Node1 (postgres-patroni-etcd-01) (192.168.1.151)

sudo vi /etc/etcd/etcd.env

ETCD_NAME="postgres-patroni-etcd-01"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER="postgres-patroni-etcd-01=http://192.168.1.151:2380,postgres-patroni-etcd-02=http://192.168.1.152:2380,postgres-patroni-etcd-03=http://192.168.1.153:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.151:2380"
ETCD_LISTEN_PEER_URLS="http://192.168.1.151:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.151:2379,http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.151:2379"

Etcd Node2 (postgres-patroni-etcd-02) (192.168.1.152)

sudo vi /etc/etcd/etcd.env

ETCD_NAME="postgres-patroni-etcd-02"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER="postgres-patroni-etcd-01=http://192.168.1.151:2380,postgres-patroni-etcd-02=http://192.168.1.152:2380,postgres-patroni-etcd-03=http://192.168.1.153:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.152:2380"
ETCD_LISTEN_PEER_URLS="http://192.168.1.152:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.152:2379,http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.152:2379"

Etcd Node3 (postgres-patroni-etcd-03) (192.168.1.153)

sudo vi /etc/etcd/etcd.env

ETCD_NAME="postgres-patroni-etcd-03"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER="postgres-patroni-etcd-01=http://192.168.1.151:2380,postgres-patroni-etcd-02=http://192.168.1.152:2380,postgres-patroni-etcd-03=http://192.168.1.153:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.153:2380"
ETCD_LISTEN_PEER_URLS="http://192.168.1.153:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.153:2379,http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.153:2379"

Create Etcd Service

These steps should be repeated on each of the 3 Etcd Servers.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03
sudo mkdir -p /var/lib/etcd 
sudo chown -R etcd:etcd /var/lib/etcd

sudo vi /etc/systemd/system/etcd.service

[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
EnvironmentFile=/etc/etcd/etcd.env
ExecStart=/usr/local/bin/etcd
Restart=always
RestartSec=10s
LimitNOFILE=40000
User=etcd
Group=etcd

[Install]
WantedBy=multi-user.target

Start Etcd Service

These steps should be repeated on each of the 3 Etcd Servers.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03
# 1. Reload Systemd Manager Configuration
# This reloads unit files to recognize any changes made to the etcd.service definition.
sudo systemctl daemon-reload

# 2. Enable etcd to Start at Boot
sudo systemctl enable etcd

# 3. Start the Etcd Service
# On the first node, this may appear to hang. That’s expected: etcd waits for quorum (a majority of configured members in the cluster) before fully initializing. The behavior changes once another node joins and the cluster forms quorum.
sudo systemctl start etcd

# 4. Check Etcd Service Status
sudo systemctl status etcd

# 5. Check Etcd Logs for Troubleshooting or Confirmation
journalctl -xeu etcd.service

Check Etcd Cluster

These steps can be performed on any one of the 3 Etcd Servers.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03

Check Cluster Members

etcdctl member list

Expected Output

Check Endpoint Health

etcdctl endpoint health --endpoints=http://192.168.1.151:2379,http://192.168.1.152:2379,http://192.168.1.153:2379

Expected Output

Check Cluster Leader

etcdctl --endpoints=http://192.168.1.151:2379,http://192.168.1.152:2379,http://192.168.1.153:2379 endpoint status --write-out=table

OR

etcdctl endpoint status --cluster --write-out=table

Expected Output

8-Install Patroni

These steps should be repeated on each of the 3 Patroni Servers.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03
# Create Needed Directories
mkdir -p /data
mkdir -p /data/pg/data/16
mkdir -p /data/pg/archive
mkdir -p /data/pg/logs
mkdir -p /data/pg/logs/postgres
mkdir -p /data/pg/logs/patroni
mkdir -p /data/pg/wal
mkdir -p /data/pg/patroni
chown -R postgres:postgres /data
chmod -R 700 /data/pg/data/16/

# Install Patroni
sudo apt install -y patroni

9-Configure Patroni

Create Patroni Config File

Patroni Node1 (postgres-patroni-etcd-01) (192.168.1.151)

sudo vi /data/pg/patroni/patroni.yml

scope: postgresql-cluster
namespace: /service/
name: postgres-patroni-etcd-01  # node1

etcd3:
  hosts: 192.168.1.151:2379,192.168.1.152:2379,192.168.1.153:2379  # etcd cluster nodes
  
log:
    dir: /data/pg/logs/patroni
    level: INFO

restapi:
  listen: 0.0.0.0:8008
  connect_address: 192.168.1.151:8008  # IP for node1's REST API

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576  # Failover parameters
    postgresql:
        pg_hba:  # Access rules
        - host replication replicator 127.0.0.1/32 md5
        - host replication replicator 192.168.1.151/32 md5
        - host replication replicator 192.168.1.152/32 md5
        - host replication replicator 192.168.1.153/32 md5
        - host all all 127.0.0.1/32 md5
        - host all all 0.0.0.0/0 md5
        - local all all md5
  initdb:
    - encoding: UTF8
    - data-checksums

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.1.151:5432  # IP for node1's PostgreSQL
  data_dir: /data/pg/data/16
  bin_dir: /usr/lib/postgresql/16/bin  # Binary directory for PostgreSQL 16
  authentication:
    superuser:
      username: postgres
      password: "postgres2025!"  # Superuser password - be sure to change
    replication:
      username: replicator
      password: "replicator2025!"  # Replication password - be sure to change
  parameters:
    max_connections: 100
    shared_buffers: 256MB

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false

Patroni Node2(postgres-patroni-etcd-02) (192.168.1.152)

sudo vi /data/pg/patroni/patroni.yml

scope: postgresql-cluster
namespace: /service/
name: postgres-patroni-etcd-02  # node2

etcd3:
  hosts: 192.168.1.151:2379,192.168.1.152:2379,192.168.1.153:2379  # etcd cluster nodes
  
log:
    dir: /data/pg/logs/patroni
    level: INFO

restapi:
  listen: 0.0.0.0:8008
  connect_address: 192.168.1.152:8008  # IP for node2's REST API

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576  # Failover parameters
    postgresql:
        pg_hba:  # Access rules
        - host replication replicator 127.0.0.1/32 md5
        - host replication replicator 192.168.1.151/32 md5
        - host replication replicator 192.168.1.152/32 md5
        - host replication replicator 192.168.1.153/32 md5
        - host all all 127.0.0.1/32 md5
        - host all all 0.0.0.0/0 md5
        - local all all md5
  initdb:
    - encoding: UTF8
    - data-checksums

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.1.152:5432  # IP for node2's PostgreSQL
  data_dir: /data/pg/data/16
  bin_dir: /usr/lib/postgresql/16/bin  # Binary directory for PostgreSQL 16
  authentication:
    superuser:
      username: postgres
      password: "postgres2025!"  # Superuser password - be sure to change
    replication:
      username: replicator
      password: "replicator2025!"  # Replication password - be sure to change
  parameters:
    max_connections: 100
    shared_buffers: 256MB

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false

Patroni Node3(postgres-patroni-etcd-03) (192.168.1.153)

sudo vi /data/pg/patroni/patroni.yml

scope: postgresql-cluster
namespace: /service/
name: postgres-patroni-etcd-03  # node3

etcd3:
  hosts: 192.168.1.151:2379,192.168.1.152:2379,192.168.1.153:2379  # etcd cluster nodes
  
log:
    dir: /data/pg/logs/patroni
    level: INFO

restapi:
  listen: 0.0.0.0:8008
  connect_address: 192.168.1.153:8008  # IP for node3's REST API

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576  # Failover parameters
    postgresql:
        pg_hba:  # Access rules
        - host replication replicator 127.0.0.1/32 md5
        - host replication replicator 192.168.1.151/32 md5
        - host replication replicator 192.168.1.152/32 md5
        - host replication replicator 192.168.1.153/32 md5
        - host all all 127.0.0.1/32 md5
        - host all all 0.0.0.0/0 md5
        - local all all md5
  initdb:
    - encoding: UTF8
    - data-checksums

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.1.153:5432  # IP for node3's PostgreSQL
  data_dir: /data/pg/data/16
  bin_dir: /usr/lib/postgresql/16/bin  # Binary directory for PostgreSQL 16
  authentication:
    superuser:
      username: postgres
      password: "postgres2025!"  # Superuser password - be sure to change
    replication:
      username: replicator
      password: "replicator2025!"  # Replication password - be sure to change
  parameters:
    max_connections: 100
    shared_buffers: 256MB

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false

Edit Patroni Service File

These steps should be repeated on each of the 3 Patroni Servers.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03
mv /usr/lib/systemd/system/patroni.service /usr/lib/systemd/system/patroni.service_original

vi /usr/lib/systemd/system/patroni.service

[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL
After=syslog.target network.target

[Service]
Type=simple

User=postgres
Group=postgres

# Read in configuration file if it exists, otherwise proceed
EnvironmentFile=-/etc/patroni_env.conf

# The default is the user's home directory, and if you want to change it, you must provide an absolute path.
# WorkingDirectory=/home/sameuser

# Where to send early-startup messages from the server
# This is normally controlled by the global default set by systemd
#StandardOutput=syslog

# Pre-commands to start watchdog device
# Uncomment if watchdog is part of your patroni setup
#ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
#ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog

# Start the patroni process
ExecStart=/usr/bin/patroni /data/pg/patroni/patroni.yml

# Send HUP to reload from patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID

# Only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process

# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=30

# Restart the service if it crashed
Restart=on-failure

[Install]
WantedBy=multi-user.target

10-Enable & Start Patroni

These steps should be repeated on each of the 3 Patroni Servers.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03
chown -R postgres:postgres /data
chmod -R 700 /data/pg/data/16/
systemctl daemon-reload
systemctl enable patroni
systemctl start patroni
systemctl status patroni

Check Patroni Leader & Replica

sudo patronictl -c /data/pg/patroni/patroni.yml list
tail -444f /data/pg/logs/patroni/patroni.log
curl -k http://192.168.1.151:8008/primary
curl -k http://192.168.1.152:8008/primary
curl -k http://192.168.1.153:8008/primary
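
Besides /primary, the Patroni REST API also exposes /replica and /health endpoints; /replica returns HTTP 200 only on replica nodes, which is exactly what the HAProxy health checks in step 12 rely on:

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.151:8008/replica
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.152:8008/replica
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.153:8008/replica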

11-Reconfigure Etcd Cluster

These steps should be repeated on each of the 3 Etcd Servers.

  • postgres-patroni-etcd-01
  • postgres-patroni-etcd-02
  • postgres-patroni-etcd-03
sudo vi /etc/etcd/etcd.env

Change ETCD_INITIAL_CLUSTER_STATE parameter;

FROM
ETCD_INITIAL_CLUSTER_STATE="new"

TO
ETCD_INITIAL_CLUSTER_STATE="existing"
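
The ETCD_INITIAL_* settings are only read when a member starts with an empty data directory, so this edit does not disturb the running cluster; it only controls what would happen on a future bootstrap. If you restart etcd anyway to validate the file, do it one node at a time and check cluster health in between (a minimal sketch):

sudo systemctl restart etcd
etcdctl endpoint health --endpoints=http://192.168.1.151:2379,http://192.168.1.152:2379,http://192.168.1.153:2379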

12-Install & Configure & Start HAProxy

These steps should be repeated on each of the 2 HAProxy Servers.

  • haproxy-keepalived-01
  • haproxy-keepalived-02

Install HAProxy

sudo apt -y update
sudo apt -y install haproxy

Configure HAProxy

mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_original

sudo vi /etc/haproxy/haproxy.cfg

frontend postgres_rw_frontend
    bind *:5432
    mode tcp
    default_backend postgres_rw_backend

frontend postgres_ro_frontend
    bind *:5433
    mode tcp
    default_backend postgres_ro_backend

backend postgres_rw_backend
    mode tcp
    option httpchk GET /primary
    http-check expect status 200
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    server postgresql-01 192.168.1.151:5432 check port 8008
    server postgresql-02 192.168.1.152:5432 check port 8008
    server postgresql-03 192.168.1.153:5432 check port 8008

backend postgres_ro_backend
    mode tcp
    option httpchk GET /replica
    http-check expect status 200
    balance roundrobin
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    server postgresql-01 192.168.1.151:5432 check port 8008
    server postgresql-02 192.168.1.152:5432 check port 8008
    server postgresql-03 192.168.1.153:5432 check port 8008

🔎 Descriptions

  1. /primary → This is the HTTP endpoint provided by Patroni that returns HTTP 200 OK only if the node is the leader (read-write).
  2. /replica → This is the HTTP endpoint that returns HTTP 200 OK only if the node is a replica (read-only).
  3. 5432 → Used for read-write connections; applications should connect here to perform write operations.
  4. 5433 → Used for read-only connections; applications should connect here for read queries.
  5. balance roundrobin → Distributes read-only connections evenly across available replica nodes to achieve load balancing.

Start HAProxy

sudo systemctl start haproxy

sudo systemctl status haproxy

sudo tail -f /var/log/syslog | grep haproxy
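
With HAProxy running, you can test the routing rules described above directly against each HAProxy node, before Keepalived and the virtual IP are in place (a sketch, assuming a psql client is available where you run it; 192.168.1.154 is haproxy-keepalived-01 in this guide, and pg_is_in_recovery() returns f on the primary and t on replicas):

PGPASSWORD='postgres2025!' psql -h 192.168.1.154 -p 5432 -U postgres -d postgres -c "SELECT pg_is_in_recovery();"
PGPASSWORD='postgres2025!' psql -h 192.168.1.154 -p 5433 -U postgres -d postgres -c "SELECT pg_is_in_recovery();"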

13-Install & Configure & Start Keepalived

These steps should be repeated on each of the 2 Keepalived Servers.

  • haproxy-keepalived-01
  • haproxy-keepalived-02

Install Keepalived

sudo apt update
sudo apt install keepalived -y

Configure Keepalived

Create Check Script on Each Node

sudo vi /etc/keepalived/check_haproxy.sh

#!/bin/bash

# Define the port to check (e.g., HAProxy frontend port)
PORT=5432

# Check if HAProxy is running
if ! pidof haproxy > /dev/null; then
    echo "HAProxy is not running"
    exit 1
fi

# Check if HAProxy is listening on the expected port
if ! ss -ltn | grep -q ":${PORT}"; then
    echo "HAProxy is not listening on port ${PORT}"
    exit 2
fi

# All checks passed
exit 0

Create User to Execute Script on Each Node

sudo useradd -r -s /bin/false keepalived_script

sudo chmod +x /etc/keepalived/check_haproxy.sh
sudo chown keepalived_script:keepalived_script /etc/keepalived/check_haproxy.sh
sudo chmod 700 /etc/keepalived/check_haproxy.sh
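
Before wiring the script into Keepalived, you can run it once as the keepalived_script user to confirm ownership and permissions are correct (exit code 0 means HAProxy is running and listening on 5432):

sudo -u keepalived_script /etc/keepalived/check_haproxy.sh; echo "exit code: $?"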

Keepalived Node1 (haproxy-keepalived-01) (192.168.1.154)

sudo vi /etc/keepalived/keepalived.conf

global_defs {
    enable_script_security
    script_user keepalived_script
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s3 # update with your nic
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        check_haproxy
    }
}

Keepalived Node2 (haproxy-keepalived-02) (192.168.1.155)

sudo vi /etc/keepalived/keepalived.conf

global_defs {
    enable_script_security
    script_user keepalived_script
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface enp0s3 # update with your nic
    virtual_router_id 51
    priority 90
    advert_int 1
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        check_haproxy
    }
}

Start Keepalived

sudo systemctl restart keepalived

sudo systemctl status keepalived

Check Keepalived Master & Backup Node

sudo journalctl -u keepalived -f
ip a
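
On the current MASTER node the virtual IP should be attached to the tracked interface, and it should be absent on the BACKUP node (a quick check; enp0s3 and 192.168.1.100 are the interface and VIP used in this guide, adjust if yours differ):

ip addr show enp0s3 | grep 192.168.1.100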

14-Create Table and Insert Some Data for Test Purpose on Leader Node

patronictl -c /data/pg/patroni/patroni.yml list

psql -U postgres

CREATE DATABASE testdb;

\c testdb

CREATE TABLE replication_test_table (
    id SERIAL PRIMARY KEY,
    replication_test VARCHAR(50)
);

INSERT INTO replication_test_table (replication_test) VALUES
('Test01 was inserted by Master-192.168.1.151'),
('Test02 was inserted by Master-192.168.1.151'),
('Test03 was inserted by Master-192.168.1.151'),
('Test04 was inserted by Master-192.168.1.151'),
('Test05 was inserted by Master-192.168.1.151'),
('Test06 was inserted by Master-192.168.1.151');

SELECT * FROM replication_test_table;

15-Check Newly Created Table & Data on Replica Nodes

patronictl -c /data/pg/patroni/patroni.yml list

psql -U postgres

\c testdb

SELECT * FROM replication_test_table;
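
Back on the leader, you can also confirm that both replicas are streaming; pg_stat_replication on the primary lists the connected standbys (a quick check):

psql -U postgres -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"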

16-Shutdown HAProxy-01 and Ensure DB Connections Work as Expected Using KeepAliveD Virtual IP Address and Ports (RW & RO)

See the "Install & Configure & Start HAProxy" section for details.
- Read/Write Connections: 5432
- Read-Only Connections: 5433
-- Read/Write Connection Test
PGPASSWORD='postgres2025!' psql -h 192.168.1.100 -p 5432 -U postgres -d postgres -c "SELECT inet_server_addr();"
-- Read/Only Connection Test
PGPASSWORD='postgres2025!' psql -h 192.168.1.100 -p 5433 -U postgres -d postgres -c "SELECT inet_server_addr();"
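
The shutdown itself is not shown above; a minimal sketch of the full test sequence (assuming HAProxy is managed by systemd as configured in step 12):

# On haproxy-keepalived-01: stop HAProxy so the Keepalived check script starts failing
sudo systemctl stop haproxy

# On haproxy-keepalived-02: confirm the virtual IP has moved here
ip addr show enp0s3 | grep 192.168.1.100

# Re-run the psql tests above against 192.168.1.100; they should still succeed.
# When the test is finished, start HAProxy again on haproxy-keepalived-01:
sudo systemctl start haproxy

The same sequence, with the two nodes swapped, applies to step 17.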

17-Shutdown HAProxy-02 and Ensure DB Connections Work as Expected Using KeepAliveD Virtual IP Address and Ports (RW & RO)

See the "Install & Configure & Start HAProxy" section for details.
- Read/Write Connections: 5432
- Read-Only Connections: 5433
-- Read/Write Connection Test
PGPASSWORD='postgres2025!' psql -h 192.168.1.100 -p 5432 -U postgres -d postgres -c "SELECT inet_server_addr();"
-- Read/Only Connection Test
PGPASSWORD='postgres2025!' psql -h 192.168.1.100 -p 5433 -U postgres -d postgres -c "SELECT inet_server_addr();"

18-KeepAliveD Test Script Using Virtual IP Address and Ports (RW & RO)

# RW Test
while true; do
    sleep 1
    PGPASSWORD='postgres2025!' psql -h 192.168.1.100 -p 5432 -U postgres -d postgres -c "SELECT inet_server_addr();"
done

# RO Test
while true; do
    sleep 1
    PGPASSWORD='postgres2025!' psql -h 192.168.1.100 -p 5433 -U postgres -d postgres -c "SELECT inet_server_addr();"
done

Some Useful Commands

ETCD Commands

Check Cluster Members

etcdctl member list

Check Endpoint Health

etcdctl endpoint health --endpoints=http://192.168.1.151:2379,http://192.168.1.152:2379,http://192.168.1.153:2379

Check Cluster Leader

etcdctl --endpoints=http://192.168.1.151:2379,http://192.168.1.152:2379,http://192.168.1.153:2379 endpoint status --write-out=table

OR

etcdctl endpoint status --cluster --write-out=table

PATRONI Commands

Check Patroni Leader & Replica Status

sudo patronictl -c /data/pg/patroni/patroni.yml list
tail -444f /data/pg/logs/patroni/patroni.log
curl -k http://192.168.1.151:8008/primary
curl -k http://192.168.1.152:8008/primary
curl -k http://192.168.1.153:8008/primary

Check & Edit & Reload Patroni Config

# Check Config
sudo patronictl -c /data/pg/patroni/patroni.yml show-config

# Edit Config
# After saving, the changes will be replicated to all nodes
sudo patronictl -c /data/pg/patroni/patroni.yml edit-config

# Reload Config
# postgresql-cluster is the name of your cluster
patronictl -c /data/pg/patroni/patroni.yml reload postgresql-cluster

Patroni Switchover

# 1. Check the Cluster Status
patronictl -c /data/pg/patroni/patroni.yml list

# 2. Perform the Switchover
patronictl -c /data/pg/patroni/patroni.yml switchover

Primary [postgres-patroni-etcd-02]: ENTER

Candidate ['postgres-patroni-etcd-01', 'postgres-patroni-etcd-03'] []: postgres-patroni-etcd-01

When should the switchover take place (e.g. 2025-05-29T17:08 )  [now]: ENTER

Are you sure you want to switchover cluster postgresql-cluster, demoting current leader postgres-patroni-etcd-03? [y/N]: y

2025-05-29 16:08:44.22239 Successfully switched over to "postgres-patroni-etcd-01"

Reinitialize Replica

# Syntax
patronictl --config-file=<patroni_yml_file_path> reinit <cluster_name> <member_name>

# Example
patronictl --config-file=/data/pg/patroni/patroni.yml reinit postgresql-cluster postgres-patroni-etcd-01

Restart PostgreSQL Nodes

# Syntax
patronictl restart <cluster_name> [<member_name>] [--role <role>] [--force]
# Restart All Nodes
patronictl --config-file=/data/pg/patroni/patroni.yml restart postgresql-cluster
# Restart Specific Node
patronictl --config-file=/data/pg/patroni/patroni.yml restart postgresql-cluster postgres-patroni-etcd-01

KEEPALIVED Commands

Check Keepalived Master & Backup Node

sudo journalctl -u keepalived -f
ip a

Step by Step || Oracle RAC (Primary&Standby) || Automated (opatchauto apply) || 19.27 Release Update

I have tried to document, step by step, how to apply the 19.27 Oracle Release Update patches for RAC (Primary & Standby) using the Automated (opatchauto apply) method. I hope it will be useful for those who need it!

::::: Manual (opatch apply) Method :::::
Step by Step || Oracle RAC (Primary&Standby) || Manual (opatch apply) || 19.27 Release Update

::::: Oracle Patching || Manual (opatch apply) vs Automated (opatchauto apply) Method :::::
Oracle Patching || Manual (opatch apply) vs Automated (opatchauto apply) Method

High Level Steps

1- Check Environment Details
2- Prerequisites
3- Download Patches
4- Check New Patch Numbers & Descriptions
5- Check Invalid Objects and DBA_REGISTRY (Before Patch)
6- Check Current OPatch Version & Installed Patches on All Servers
7- Copy Patches to All Servers
8- Update OPATCH Utility Both Oracle Homes and Grid Homes on All Servers
9- Unzip the Patch on All Servers
10- Run Analyze Both Grid and Oracle Homes on All Servers
11- Apply Patches
11.1- racstandby01
11.2- racstandby02
11.3- racprimary01
11.4- racprimary02
12- Check Applied Patches Both Grid and Oracle Homes
13- Check & Apply Datapatch (If Needed)
13.1- Should I run Datapatch?
13.2- Check Datapatch Status
13.3- Apply Datapatch Manually (If Needed)
14- Check Invalid Objects
15- Patch Deinstallation Steps (If Needed)
15.1- Rollback Applied Patches
15.2- Check New Patch Levels Both Grid and Oracle Homes
15.3- Rollback SQL Changes from the Database
15.4- Check Invalid Objects and DBA_REGISTRY (After Rollback)

1- Check Environment Details

2- Prerequisites

3- Download Patches

4- Check New Patch Numbers & Descriptions

5- Check Invalid Objects and DBA_REGISTRY (Before Patch)

su - oracle
. db.env
sqlplus / as sysdba

set lines 200 pages 500
col owner for a15
col OBJECT_NAME for a35
select owner, object_name, object_type, status from dba_objects where status='INVALID';

-- Run UTLRP if You Have Invalid Objects
@?/rdbms/admin/utlrp.sql

col COMP_NAME for a40
select COMP_ID, COMP_NAME, VERSION, STATUS from dba_registry;

col ACTION_TIME for a30
col action for a30
col version for a10
col BUNDLE_SERIES for a30
col COMMENTS for a47
select ACTION_TIME, ACTION, VERSION, BUNDLE_SERIES, COMMENTS from dba_registry_history;

6- Check Current OPatch Version & Installed Patches

su - oracle

# DB HOME
. db.env
which opatch
opatch version
opatch lspatches

# GRID HOME
. grid.env
which opatch
opatch version
opatch lspatches

7- Copy Patches to All Servers

I have copied the patches to the /u01/software/ directory on all servers.

mkdir -p /u01/software
cp /media/sf_VIRTUALBOX_SHARED/* /u01/software/
chown -R oracle:oinstall /u01/software/
ls -lrt /u01/software/

8- Update OPATCH Utility Both Oracle Homes and Grid Homes

You must use the OPatch utility version 12.2.0.1.45 or later to apply this patch.

# GRID HOME
cp /u01/software/p6880880_190000_Linux-x86-64.zip /u01/app/19c/grid/
cd /u01/app/19c/grid/
mv OPatch OPatch_20250421
unzip p6880880_190000_Linux-x86-64.zip
. grid.env
which opatch
opatch version

# DB HOME
cp /u01/software/p6880880_190000_Linux-x86-64.zip /u01/app/oracle/product/19c/dbhome_1/
cd /u01/app/oracle/product/19c/dbhome_1/
mv OPatch OPatch_20250421
unzip p6880880_190000_Linux-x86-64.zip
. db.env
which opatch
opatch version

9- Unzip the Patch on All Servers

su - oracle
unzip /u01/software/p37641958_190000_Linux-x86-64.zip -d /backup/RU_patches/
chown -R oracle:oinstall /backup/RU_patches/

10- Run Analyze Both Grid and Oracle Homes on All Servers

# Create TEMP Patch Directory
su - root
mkdir -p /u01/temp_1927releaseupdate
cd /u01/temp_1927releaseupdate

# Set Grid Home Path
export ORACLE_HOME=/u01/app/19c/grid/
export PATH=$ORACLE_HOME/OPatch:$PATH
opatchauto apply /backup/RU_patches/37641958/ -analyze

Sample Analyze Output

[root@racstandby01 ~]# mkdir -p /u01/temp_1927releaseupdate
[root@racstandby01 ~]# cd /u01/temp_1927releaseupdate
[root@racstandby01 temp_1927releaseupdate]# export ORACLE_HOME=/u01/app/19c/grid/
[root@racstandby01 temp_1927releaseupdate]# export PATH=$ORACLE_HOME/OPatch:$PATH
[root@racstandby01 temp_1927releaseupdate]# opatchauto apply /backup/RU_patches/37641958/ -analyze

OPatchauto session is initiated at Sun Apr 27 16:26:07 2025

System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2025-04-27_04-26-21PM.log.

Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2025-04-27_04-27-08PM.log
The id for this session is T2EF

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome_1
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome_1

Executing patch validation checks on home /u01/app/19c/grid
Patch validation checks successfully completed on home /u01/app/19c/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome_1
Skipping SQL patch step execution on standby database : DRTEST
SQL patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racstandby01
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0

==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /backup/RU_patches/37641958/37654975
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-28-07PM_1.log

Patch: /backup/RU_patches/37641958/37643161
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-28-07PM_1.log

Patch: /backup/RU_patches/37641958/36758186
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-28-07PM_1.log

Patch: /backup/RU_patches/37641958/37762426
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-28-07PM_1.log

Patch: /backup/RU_patches/37641958/37642901
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-28-07PM_1.log

Host:racstandby01
RAC Home:/u01/app/oracle/product/19c/dbhome_1
Version:19.0.0.0.0

==Following patches were SKIPPED:

Patch: /backup/RU_patches/37641958/37643161
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/RU_patches/37641958/36758186
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/RU_patches/37641958/37762426
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/RU_patches/37641958/37654975
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-31-31PM_1.log
Reason: /backup/RU_patches/37641958/37654975 is not required to be applied to oracle home /u01/app/oracle/product/19c/dbhome_1

Patch: /backup/RU_patches/37641958/37642901
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-31-31PM_1.log
Reason: /backup/RU_patches/37641958/37642901 is not required to be applied to oracle home /u01/app/oracle/product/19c/dbhome_1

OPatchauto session completed at Sun Apr 27 16:35:26 2025
Time taken to complete the session 9 minutes, 7 seconds
[root@racstandby01 temp_1927releaseupdate]#

11- Apply Patches

I will apply the patches to the servers in the following order without any service interruption.

11.1- racstandby01

Prepare First Standby Node (racstandby01) For Patching

# Stop MRP on racstandby01
su - oracle 
. db.env
env | grep SID
sqlplus / as sysdba
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';
RECOVER MANAGED STANDBY DATABASE CANCEL;
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

# Start MRP on racstandby02
su - oracle
. db.env
env | grep SID
sqlplus / as sysdba
startup mount;
RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

# On Primary
select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;
# On Standby:
select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
# On Primary Instance 1 Perform log switches 3 to 4 times.
alter system switch logfile;
# On Primary Instance 2 Perform log switches 3 to 4 times.
alter system switch logfile;
# On Primary
select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;
# On Standby:
select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
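
As an additional check, v$dataguard_stats on the standby instance that is now running MRP reports the current apply and transport lag; both should be close to zero before you start patching (a complementary query, assuming a 19c standby):

# On the standby currently running MRP (racstandby02)
select name, value, unit from v$dataguard_stats where name in ('apply lag','transport lag');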

Apply Grid Home & Oracle Home Patches For First Standby Node (racstandby01)

su - root

# Switch to TEMP Patch Directory
cd /u01/temp_1927releaseupdate

# Set Grid Home Path
export ORACLE_HOME=/u01/app/19c/grid/
export PATH=$ORACLE_HOME/OPatch:$PATH

# Apply Both Grid and Oracle Home Patches
/u01/app/19c/grid/OPatch/opatchauto apply /backup/RU_patches/37641958/

Grid Home & Oracle Home Patches Output For First Standby Node (racstandby01)

[root@racstandby01 ~]# cd /u01/temp_1927releaseupdate
[root@racstandby01 temp_1927releaseupdate]# export ORACLE_HOME=/u01/app/19c/grid/
[root@racstandby01 temp_1927releaseupdate]# export PATH=$ORACLE_HOME/OPatch:$PATH
[root@racstandby01 temp_1927releaseupdate]# /u01/app/19c/grid/OPatch/opatchauto apply /backup/RU_patches/37641958/

OPatchauto session is initiated at Sun Apr 27 16:39:38 2025

System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2025-04-27_04-39-49PM.log.

Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2025-04-27_04-40-20PM.log
The id for this session is 1WZN

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome_1
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome_1

Executing patch validation checks on home /u01/app/19c/grid
Patch validation checks successfully completed on home /u01/app/19c/grid

Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome_1
Skipping SQL patch step execution on standby database : DRTEST
SQL patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome_1

Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome_1
Successfully prepared home /u01/app/oracle/product/19c/dbhome_1 to bring down database service

Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19c/grid
Prepatch operation log file location: /u01/app/oracle/crsdata/racstandby01/crsconfig/crs_prepatch_apply_inplace_racstandby01_2025-04-27_04-47-08PM.log
CRS service brought down successfully on home /u01/app/19c/grid

Start applying binary patch on home /u01/app/19c/grid
Binary patch applied successfully on home /u01/app/19c/grid

Running rootadd_rdbms.sh on home /u01/app/19c/grid
Successfully executed rootadd_rdbms.sh on home /u01/app/19c/grid

Performing postpatch operations on CRS - starting CRS service on home /u01/app/19c/grid
Postpatch operation log file location: /u01/app/oracle/crsdata/racstandby01/crsconfig/crs_postpatch_apply_inplace_racstandby01_2025-04-27_05-01-00PM.log
CRS service started successfully on home /u01/app/19c/grid

Preparing home /u01/app/oracle/product/19c/dbhome_1 after database service restarted
No step execution required.........
 
OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racstandby01
RAC Home:/u01/app/oracle/product/19c/dbhome_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /backup/RU_patches/37641958/37643161
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/RU_patches/37641958/36758186
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/RU_patches/37641958/37762426
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/RU_patches/37641958/37654975
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-43-34PM_1.log
Reason: /backup/RU_patches/37641958/37654975 is not required to be applied to oracle home /u01/app/oracle/product/19c/dbhome_1

Patch: /backup/RU_patches/37641958/37642901
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-43-34PM_1.log
Reason: /backup/RU_patches/37641958/37642901 is not required to be applied to oracle home /u01/app/oracle/product/19c/dbhome_1

Host:racstandby01
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /backup/RU_patches/37641958/36758186
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-48-45PM_1.log

Patch: /backup/RU_patches/37641958/37642901
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-48-45PM_1.log

Patch: /backup/RU_patches/37641958/37643161
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-48-45PM_1.log

Patch: /backup/RU_patches/37641958/37654975
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-48-45PM_1.log

Patch: /backup/RU_patches/37641958/37762426
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_16-48-45PM_1.log

OPatchauto session completed at Sun Apr 27 17:04:21 2025
Time taken to complete the session 24 minutes, 33 seconds
[root@racstandby01 temp_1927releaseupdate]#

11.2- racstandby02

Prepare Second Standby Node (racstandby02) For Patching

# Stop MRP on racstandby02
su - oracle 
. db.env
env | grep SID
sqlplus / as sysdba
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';
RECOVER MANAGED STANDBY DATABASE CANCEL;
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

# Start MRP on racstandby01
su - oracle
. db.env
env | grep SID
sqlplus / as sysdba
startup mount;
RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';


# On Primary
select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;
# On Standby:
select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
# On Primary Instance 1 Perform log switches 3 to 4 times.
alter system switch logfile;
# On Primary Instance 2 Perform log switches 3 to 4 times.
alter system switch logfile;
# On Primary
select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;
# On Standby:
select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

Apply Grid Home & Oracle Home Patches For Second Standby Node (racstandby02)

su - root

# Switch to TEMP Patch Directory
cd /u01/temp_1927releaseupdate

# Set Grid Home Path
export ORACLE_HOME=/u01/app/19c/grid/
export PATH=$ORACLE_HOME/OPatch:$PATH

# Apply Both Grid and Oracle Home Patches
/u01/app/19c/grid/OPatch/opatchauto apply /backup/RU_patches/37641958/

Grid Home & Oracle Home Patches Output For Second Standby Node (racstandby02)

[root@racstandby02 ~]# cd /u01/temp_1927releaseupdate
[root@racstandby02 temp_1927releaseupdate]# export ORACLE_HOME=/u01/app/19c/grid/
[root@racstandby02 temp_1927releaseupdate]# export PATH=$ORACLE_HOME/OPatch:$PATH
[root@racstandby02 temp_1927releaseupdate]# /u01/app/19c/grid/OPatch/opatchauto apply /backup/RU_patches/37641958/

OPatchauto session is initiated at Sun Apr 27 17:52:17 2025

System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2025-04-27_05-52-29PM.log.

Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2025-04-27_05-53-55PM.log
The id for this session is C8VP

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome_1
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome_1

Executing patch validation checks on home /u01/app/19c/grid
Patch validation checks successfully completed on home /u01/app/19c/grid

Executing patch validation checks on home /u01/app/oracle/product/19c/dbhome_1
Patch validation checks successfully completed on home /u01/app/oracle/product/19c/dbhome_1

Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome_1
Skipping SQL patch step execution on standby database : DRTEST
SQL patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome_1

Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome_1
Successfully prepared home /u01/app/oracle/product/19c/dbhome_1 to bring down database service

Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19c/grid
Prepatch operation log file location: /u01/app/oracle/crsdata/racstandby02/crsconfig/crs_prepatch_apply_inplace_racstandby02_2025-04-27_05-59-59PM.log
CRS service brought down successfully on home /u01/app/19c/grid

Performing prepatch operation on home /u01/app/oracle/product/19c/dbhome_1
Prepatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome_1

Start applying binary patch on home /u01/app/oracle/product/19c/dbhome_1
Binary patch applied successfully on home /u01/app/oracle/product/19c/dbhome_1

Running rootadd_rdbms.sh on home /u01/app/oracle/product/19c/dbhome_1
Successfully executed rootadd_rdbms.sh on home /u01/app/oracle/product/19c/dbhome_1

Performing postpatch operation on home /u01/app/oracle/product/19c/dbhome_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome_1

Start applying binary patch on home /u01/app/19c/grid
Binary patch applied successfully on home /u01/app/19c/grid

Running rootadd_rdbms.sh on home /u01/app/19c/grid
Successfully executed rootadd_rdbms.sh on home /u01/app/19c/grid

Performing postpatch operations on CRS - starting CRS service on home /u01/app/19c/grid
Postpatch operation log file location: /u01/app/oracle/crsdata/racstandby02/crsconfig/crs_postpatch_apply_inplace_racstandby02_2025-04-27_06-28-32PM.log
CRS service started successfully on home /u01/app/19c/grid

Preparing home /u01/app/oracle/product/19c/dbhome_1 after database service restarted
No step execution required......... 

Trying to apply SQL patch on home /u01/app/oracle/product/19c/dbhome_1
Skipping SQL patch step execution on standby database : DRTEST
No SQL patch operations are required on local node for this home

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racstandby02
RAC Home:/u01/app/oracle/product/19c/dbhome_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /backup/RU_patches/37641958/37643161
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/RU_patches/37641958/36758186
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /backup/RU_patches/37641958/37762426
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY applied:

Patch: /backup/RU_patches/37641958/37642901
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_18-01-48PM_1.log

Patch: /backup/RU_patches/37641958/37654975
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_18-01-48PM_1.log

Host:racstandby02
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /backup/RU_patches/37641958/36758186
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_18-15-39PM_1.log

Patch: /backup/RU_patches/37641958/37642901
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_18-15-39PM_1.log

Patch: /backup/RU_patches/37641958/37643161
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_18-15-39PM_1.log

Patch: /backup/RU_patches/37641958/37654975
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_18-15-39PM_1.log

Patch: /backup/RU_patches/37641958/37762426
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-27_18-15-39PM_1.log

OPatchauto session completed at Sun Apr 27 18:32:36 2025
Time taken to complete the session 40 minutes, 8 seconds
[root@racstandby02 temp_1927releaseupdate]#

11.3- racprimary01

Apply Grid Home & Oracle Home Patches For First Primary Node (racprimary01)

su - root

# Switch to TEMP Patch Directory
cd /u01/temp_1927releaseupdate

# Set Grid Home Path
export ORACLE_HOME=/u01/app/19c/grid/
export PATH=$ORACLE_HOME/OPatch:$PATH

# Apply Both Grid and Oracle Home Patches
/u01/app/19c/grid/OPatch/opatchauto apply /backup/RU_patches/37641958/

11.4- racprimary02

Apply Grid Home & Oracle Home Patches For Second Primary Node (racprimary02)

su - root

# Switch to TEMP Patch Directory
cd /u01/temp_1927releaseupdate

# Set Grid Home Path
export ORACLE_HOME=/u01/app/19c/grid/
export PATH=$ORACLE_HOME/OPatch:$PATH

# Apply Both Grid and Oracle Home Patches
/u01/app/19c/grid/OPatch/opatchauto apply /backup/RU_patches/37641958/

12- Check Applied Patches Both Grid and Oracle Homes

su - oracle

# GRID HOME
. grid.env
opatch lspatches

# DB HOME
. db.env
opatch lspatches

13- Check & Apply Datapatch (If Needed)

13.1- Should I run Datapatch?

When you run opatchauto apply to patch an Oracle environment, datapatch is automatically executed — so you normally do not need to run it manually.

Here’s a bit more detail:

  • opatchauto handles the full patching flow:
    • It puts the database into upgrade mode.
    • It applies the binary patches to the Oracle Home.
    • It runs datapatch automatically to apply SQL changes inside the database.
    • It brings the database back to normal open mode after patching.

So under normal conditions, you don’t have to run datapatch manually after using opatchauto.

However, in some special cases:

  • If datapatch fails during the opatchauto process,
  • Or if you notice that the database was not properly patched (for example, you don’t see the patch information in the registry),

then you may need to manually run datapatch after troubleshooting.

13.2- Check Datapatch Status

After patching, run the following queries:

su - oracle
. db.env
sqlplus / as sysdba

col COMP_NAME for a40
select COMP_ID, COMP_NAME, VERSION, STATUS from dba_registry;

col ACTION_TIME for a30
col action for a30
col version for a10
col BUNDLE_SERIES for a30
col COMMENTS for a47
select ACTION_TIME, ACTION, VERSION, BUNDLE_SERIES, COMMENTS from dba_registry_history;
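
dba_registry_sqlpatch records exactly what datapatch has applied, so it is usually the quickest way to confirm that the SQL portion of the Release Update made it into the database (a complementary query):

col DESCRIPTION for a60
col STATUS for a15
select patch_id, action, status, description from dba_registry_sqlpatch order by action_time;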

13.3- Apply Datapatch Manually (If Needed)

  • Now it’s time to run datapatch -verbose, which applies the patch’s SQL changes to the database and records the patch information in the data dictionary.
  • Important Note: For each separate database running on the same shared Oracle home being patched, run the datapatch utility.
su - oracle 
. db.env
cd $ORACLE_HOME/OPatch
./datapatch -verbose

14- Check Invalid Objects

su - oracle
. db.env
sqlplus / as sysdba

set lines 200 pages 500
col owner for a15
col OBJECT_NAME for a35
select owner, object_name, object_type, status from dba_objects where status='INVALID';

-- Run UTLRP if You Have Invalid Objects
@?/rdbms/admin/utlrp.sql

15- Patch Deinstallation Steps (If Needed)

15.1- Rollback Applied Patches

Patch rollback operations can be performed using the same commands in the following order, starting from the standby servers.

  • racstandby02
  • racstandby01
  • racprimary02
  • racprimary01
# Check Installed Patches For Grid Home
su - oracle
. grid.env
which opatch
opatch lspatches

################################################################
[oracle]$ opatch lspatches
36758186;DBWLM RELEASE UPDATE 19.0.0.0.0 (36758186)
37762426;TOMCAT RELEASE UPDATE 19.0.0.0.0 (37762426)
37643161;ACFS RELEASE UPDATE 19.27.0.0.0 (37643161)
37654975;OCW RELEASE UPDATE 19.27.0.0.0 (37654975)
37642901;Database Release Update : 19.27.0.0.250415 (37642901)
################################################################

# Check Installed Patches For Oracle Home
su - oracle
. db.env
which opatch
opatch lspatches

################################################################
[oracle]$ opatch lspatches
37654975;OCW RELEASE UPDATE 19.27.0.0.0 (37654975)
37642901;Database Release Update : 19.27.0.0.250415 (37642901)
################################################################

# Rollback 19.27 Patches From Grid Home
# Check CRS
su - root
/u01/app/19c/grid/bin/crsctl check crs
# Stop CRS
/u01/app/19c/grid/bin/crsctl stop crs
# Ensure Grid Home & Oracle Home Shutdown
ps -ef | grep pmon
# Create TEMP Patch Directory
su - root
mkdir -p /u01/temp_opatch
cd /u01/temp_opatch
# To roll back the patch from the Grid home
/u01/app/19c/grid/OPatch/opatchauto rollback /backup/RU_patches/37641958 -oh /u01/app/19c/grid

################################################################
==Following patches were SUCCESSFULLY rolled back:
Patch: /backup/RU_patches/37641958/37654975
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
Patch: /backup/RU_patches/37641958/37643161
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
Patch: /backup/RU_patches/37641958/36758186
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
Patch: /backup/RU_patches/37641958/37762426
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
Patch: /backup/RU_patches/37641958/37642901
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
################################################################

# Rollback 19.27 Patches From Oracle Home
su - root
cd /u01/temp_opatch
# To roll back the patch from the Oracle RAC database home
/u01/app/oracle/product/19c/dbhome_1/OPatch/opatchauto rollback /backup/RU_patches/37641958 -oh /u01/app/oracle/product/19c/dbhome_1

################################################################
==Following patches were SUCCESSFULLY rolled back:
Patch: /backup/RU_patches/37641958/37654975
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_01-07-20AM_1.log
Patch: /backup/RU_patches/37641958/37642901
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_01-07-20AM_1.log
################################################################

# Ensure All 19.27 Patches Uninstalled
# Check Current Patches For Grid Home
su - oracle
. grid.env
which opatch
opatch lspatches

################################################################
[oracle]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)
################################################################

# Check Current Patches For Oracle Home
su - oracle
. db.env
which opatch
opatch lspatches

################################################################
[oracle]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
################################################################

# Start CRS
su - root
/u01/app/19c/grid/bin/crsctl start crs

# Ensure Grid Home & Oracle Home Running
ps -ef | grep pmon

15.2- Check New Patch Levels Both Grid and Oracle Homes

su - oracle

# GRID HOME
. grid.env
opatch lspatches

# DB HOME
. db.env
opatch lspatches

15.3- Rollback SQL Changes from the Database

  • Datapatch is run to complete the post-deinstall SQL deployment for the patch being deinstalled.
  • Important Note: For each separate database running on the same shared Oracle home being patched, run the datapatch utility.

It is enough to run it on one of the primary nodes (racprimary01 or racprimary02).

su - oracle 
. db.env
cd $ORACLE_HOME/OPatch
./datapatch -verbose

15.4- Check Invalid Objects and DBA_REGISTRY (After Rollback)

su - oracle
. db.env
sqlplus / as sysdba

set lines 200 pages 500
col owner for a15
col OBJECT_NAME for a35
select owner, object_name, object_type, status from dba_objects where status='INVALID';

-- Run UTLRP if You Have Invalid Objects
@?/rdbms/admin/utlrp.sql

col COMP_NAME for a40
select COMP_ID, COMP_NAME, VERSION, STATUS from dba_registry;

col ACTION_TIME for a30
col action for a30
col version for a10
col BUNDLE_SERIES for a30
col COMMENTS for a47
select ACTION_TIME, ACTION, VERSION, BUNDLE_SERIES, COMMENTS from dba_registry_history;

Oracle Patching || Manual (opatch apply) vs Automated (opatchauto apply) Method

  • Use opatch apply when you want full manual control.
  • Use opatchauto apply when you want Oracle to automatically manage Grid Infrastructure patching and optional database home patching, without needing to manually stop/start resources.

Feature             | opatch apply                                                                         | opatchauto apply
--------------------|--------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------
Scope               | Manual patching for single Oracle homes (like RDBMS or GI).                           | Automated patching for Grid Infrastructure (GI) and Database homes.
User Responsibility | DBA must handle pre-checks, stopping services, applying patch, restarting manually.   | Automates the whole process: pre-checks, stop/start services, patch apply, validation.
Typical Use         | Apply interim patches, one-off patches, small fixes.                                  | Apply major patches (like GI PSU, RU, or RUR patches).
Patches Supported   | Database or Grid patches manually, usually small patches.                             | Primarily for GI + RDBMS patches at once. Supports PSU, RU, RUR patches.
Commands Example    | opatch apply (inside the patch directory).                                            | opatchauto apply /path/to/patch
Needs Root?         | No (unless it is a GI home; then run the root scripts afterwards).                    | Yes; must run as root user or use sudo because it patches GI services.
Introduced In       | Available from early versions (10g onwards).                                          | Introduced from Oracle 12c Release 2 (12.2) for GI patches.
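
For quick reference, here is a minimal sketch of the two invocation styles. The patch and sub-patch paths are placeholders, not taken from a specific environment; the Grid home path follows the layout used elsewhere in this guide.

# Manual method: run opatch as the home owner, from inside the unzipped patch directory, per home and per node
su - oracle
. db.env
cd /u01/software/<patch_id>/<sub_patch_id>
$ORACLE_HOME/OPatch/opatch apply

# Automated method: run opatchauto as root; it handles stopping/starting services and patches the given home
su - root
/u01/app/19c/grid/OPatch/opatchauto apply /u01/software/<patch_id> -oh /u01/app/19c/grid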

Step by Step || Oracle RAC (Primary&Standby) || Manual (opatch apply) || 19.27 Release Update

I have tried to document step by step How to Apply 19.27 Oracle Release Update Patches for RAC (Primary&Standby) using the Manual (opatch apply) Method. I hope it will be useful for those who need it!

::::: Automated (opatchauto apply) Method :::::
Step by Step || Oracle RAC (Primary&Standby) || Automated (opatchauto apply) || 19.27 Release Update

::::: Oracle Patching || Manual (opatch apply) vs Automated (opatchauto apply) Method :::::
Oracle Patching || Manual (opatch apply) vs Automated (opatchauto apply) Method

High Level Steps

1- Check Environment Details
2- Prerequisites
3- Download Patches
4- Check New Patch Numbers & Descriptions
5- Check Invalid Objects and DBA_REGISTRY (Before Patch)
6- Check Current OPatch Version & Installed Patches
7- Copy Patches to All Servers
8- Update OPATCH Utility Both Oracle Homes and Grid Homes
9- Unzip the Patch on All Servers
10- Run OPatch Conflict Checks Both Grid and Oracle Homes
10.1- Run OPatch Conflict Check For Grid Home
10.2- Sample OPatch Conflict Check Output For Grid Home
10.3- Run OPatch Conflict Check For Oracle Home
10.4- Sample OPatch Conflict Check Output For Oracle Home
11- Run OPatch System Space Checks Both Grid and Oracle Homes
11.1- Run OPatch System Check For Grid Homes
11.2- Sample OPatch System Check Output For Grid Home
11.3- Run OPatch System Check For Oracle Homes
11.4- Sample OPatch System Check Output For Oracle Homes
12- Apply Patches
12.1- racstandby01
12.2- racstandby02
12.3- racprimary01
12.4- racprimary02
13- Check Applied Patches Both Grid and Oracle Homes
14- Apply Datapatch
15- Check Invalid Objects and DBA_REGISTRY (After Patch)
16- Patch Deinstallation Steps (If Needed)
16.1- Rollback Applied Patches
16.2- Check New Patch Levels Both Grid and Oracle Homes
16.3- Rollback SQL Changes from the Database
16.4- Check Invalid Objects and DBA_REGISTRY (After Rollback)

1- Check Environment Details

2- Prerequisites

3- Download Patches

4- Check New Patch Numbers & Descriptions

5- Check Invalid Objects and DBA_REGISTRY (Before Patch)

su - oracle
. db.env
sqlplus / as sysdba

set lines 200 pages 500
col owner for a15
col OBJECT_NAME for a35
select owner, object_name, object_type, status from dba_objects where status='INVALID';

-- Run UTLRP if You Have Invalid Objects
@?/rdbms/admin/utlrp.sql

col COMP_NAME for a40
select COMP_ID, COMP_NAME, VERSION, STATUS from dba_registry;

col ACTION_TIME for a30
col action for a30
col version for a10
col BUNDLE_SERIES for a30
col COMMENTS for a47
select ACTION_TIME, ACTION, VERSION, BUNDLE_SERIES, COMMENTS from dba_registry_history;

6- Check Current OPatch Version & Installed Patches

su - oracle

# DB HOME
. db.env
which opatch
opatch version
opatch lspatches

# GRID HOME
. grid.env
which opatch
opatch version
opatch lspatches

7- Copy Patches to All Servers

I have copied the patches to the /u01/software/ directory on all servers.

mkdir -p /u01/software
cp /media/sf_VIRTUALBOX_SHARED/* /u01/software/
chown -R oracle:oinstall /u01/software/
ls -lrt /u01/software/

8- Update OPATCH Utility Both Oracle Homes and Grid Homes

You must use the OPatch utility version 12.2.0.1.45 or later to apply this patch.

# GRID HOME
cp /u01/software/p6880880_190000_Linux-x86-64.zip /u01/app/19c/grid/
cd /u01/app/19c/grid/
mv OPatch OPatch_20250421
unzip p6880880_190000_Linux-x86-64.zip
. grid.env
which opatch
opatch version

# DB HOME
cp /u01/software/p6880880_190000_Linux-x86-64.zip /u01/app/oracle/product/19c/dbhome_1/
cd /u01/app/oracle/product/19c/dbhome_1/
mv OPatch OPatch_20250421
unzip p6880880_190000_Linux-x86-64.zip
. db.env
which opatch
opatch version

9- Unzip the Patch on All Servers

su - oracle
unzip /u01/software/p37641958_190000_Linux-x86-64.zip -d /u01/software/

10- Run OPatch Conflict Checks Both Grid and Oracle Homes

10.1- Run OPatch Conflict Check For Grid Home

su - oracle
. grid.env
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37642901
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37654975
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37643161
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37762426
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/36758186

10.2- Sample OPatch Conflict Check Output For Grid Home

[root@racstandby01 ~]# su - oracle
Last login: Mon Apr 21 23:38:48 +03 2025
[oracle@racstandby01 ~]$ 
[oracle@racstandby01 ~]$ . grid.env 
[oracle@racstandby01 ~]$ clear 
[oracle@racstandby01 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37642901
Oracle Interim Patch Installer version 12.2.0.1.46
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/19c/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19c/grid/oraInst.loc
OPatch version    : 12.2.0.1.46
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19c/grid/cfgtoollogs/opatch/opatch2025-04-21_23-41-40PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[oracle@racstandby01 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37654975
Oracle Interim Patch Installer version 12.2.0.1.46
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/19c/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19c/grid/oraInst.loc
OPatch version    : 12.2.0.1.46
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19c/grid/cfgtoollogs/opatch/opatch2025-04-21_23-42-16PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[oracle@racstandby01 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37643161
Oracle Interim Patch Installer version 12.2.0.1.46
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/19c/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19c/grid/oraInst.loc
OPatch version    : 12.2.0.1.46
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19c/grid/cfgtoollogs/opatch/opatch2025-04-21_23-42-22PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[oracle@racstandby01 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37762426
Oracle Interim Patch Installer version 12.2.0.1.46
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/19c/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19c/grid/oraInst.loc
OPatch version    : 12.2.0.1.46
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19c/grid/cfgtoollogs/opatch/opatch2025-04-21_23-42-26PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[oracle@racstandby01 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/36758186
Oracle Interim Patch Installer version 12.2.0.1.46
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/19c/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19c/grid/oraInst.loc
OPatch version    : 12.2.0.1.46
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19c/grid/cfgtoollogs/opatch/opatch2025-04-21_23-42-43PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[oracle@racstandby01 ~]$

10.3- Run OPatch Conflict Check For Oracle Home

su - oracle
. db.env
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37642901
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37654975

10.4- Sample OPatch Conflict Check Output For Oracle Home

[root@racstandby01 ~]# su - oracle
Last login: Mon Apr 21 23:41:08 +03 2025 on pts/1
[oracle@racstandby01 ~]$ . db.env 
[oracle@racstandby01 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37642901
Oracle Interim Patch Installer version 12.2.0.1.46
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/19c/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19c/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.46
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatch/opatch2025-04-21_23-49-49PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[oracle@racstandby01 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/37641958/37654975
Oracle Interim Patch Installer version 12.2.0.1.46
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/19c/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19c/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.46
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatch/opatch2025-04-21_23-51-39PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[oracle@racstandby01 ~]$

11- Run OPatch System Space Checks Both Grid and Oracle Homes

11.1- Run OPatch System Check For Grid Homes

su - oracle
. grid.env

vi /tmp/patch_list_gihome.txt
# Insert the lines below
/u01/software/37641958/37642901
/u01/software/37641958/37654975
/u01/software/37641958/37643161
/u01/software/37641958/37762426
/u01/software/37641958/36758186

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt

11.2- Sample OPatch System Check Output For Grid Home

[root@racstandby01 ~]# su - oracle
Last login: Tue Apr 22 00:24:36 +03 2025
[oracle@racstandby01 ~]$ . grid.env
[oracle@racstandby01 ~]$ vi /tmp/patch_list_gihome.txt
[oracle@racstandby01 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
Oracle Interim Patch Installer version 12.2.0.1.46
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/19c/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19c/grid/oraInst.loc
OPatch version    : 12.2.0.1.46
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19c/grid/cfgtoollogs/opatch/opatch2025-04-22_00-27-39AM_1.log

Invoking prereq "checksystemspace"

Prereq "checkSystemSpace" passed.

OPatch succeeded.
[oracle@racstandby01 ~]$

11.3- Run OPatch System Check For Oracle Homes

su - oracle
. db.env

vi /tmp/patch_list_dbhome.txt
# Insert the lines below
/u01/software/37641958/37642901
/u01/software/37641958/37654975

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_dbhome.txt

11.4- Sample OPatch System Check Output For Oracle Homes

[root@racstandby01 ~]# su - oracle
Last login: Tue Apr 22 00:34:55 +03 2025 on pts/0
[oracle@racstandby01 ~]$ . db.env 
[oracle@racstandby01 ~]$ vi /tmp/patch_list_dbhome.txt
[oracle@racstandby01 ~]$ $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_dbhome.txt
Oracle Interim Patch Installer version 12.2.0.1.46
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/19c/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19c/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.46
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatch/opatch2025-04-22_00-37-00AM_1.log

Invoking prereq "checksystemspace"

Prereq "checkSystemSpace" passed.

OPatch succeeded.
[oracle@racstandby01 ~]$

12- Apply Patches

I will apply the patches to the servers in the following order without any service interruption.

12.1- racstandby01

Prepare First Standby Node (racstandby01) For Patching

# Stop MRP on racstandby01
su - oracle 
. db.env
env | grep SID
sqlplus / as sysdba
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';
RECOVER MANAGED STANDBY DATABASE CANCEL;
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

# Start MRP on racstandby02
su - oracle
. db.env
env | grep SID
sqlplus / as sysdba
startup mount;
RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

# On Primary
select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;
# On Standby:
select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
# On Primary Instance 1 Perform log switches 3 to 4 times.
alter system switch logfile;
# On Primary Instance 2 Perform log switches 3 to 4 times.
alter system switch logfile;
# On Primary
select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;
# On Standby:
select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

# Stop Oracle Home
su - oracle
. db.env
srvctl stop home -o /u01/app/oracle/product/19c/dbhome_1 -s STBY1_APR25_RU_20250422.txt -n racstandby01
ps -ef | grep pmon

# Stop&Prepare Grid Home with Root User
exit
/u01/app/19c/grid/crs/install/rootcrs.sh -prepatch
ps -ef | grep pmon

Apply Grid Home & Oracle Home Patches For First Standby Node (racstandby01)

# Apply Grid Home Patches
su - oracle
. grid.env
which opatch

#Database Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37642901
#OCW Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37654975
# ACFS Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37643161
# Tomcat Release Update 19.0.0.0.0
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37762426
# DBWLM Release Update 19.0.0.0.0
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/36758186

opatch lspatches
$ORACLE_HOME/OPatch/opatch lsinv | grep -i applied

# Apply Oracle Home Patches
su - oracle 
. db.env
which opatch

# Run prepatch Before Oracle Home Patch Operation
/u01/software/37641958/37654975/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/19c/dbhome_1

# Database Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent /u01/software/37641958/37642901
# OCW Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent /u01/software/37641958/37654975

# Apply OJVM from Oracle User 
# If patch available then apply using apply command
su - oracle 
. db.env
which opatch
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent <patch_location>

opatch lspatches
$ORACLE_HOME/OPatch/opatch lsinv | grep -i applied

# Run postpatch After Oracle Home Patch Operation
/u01/software/37641958/37654975/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/19c/dbhome_1

# Start Cluster via Root User
su - root
/u01/app/19c/grid/rdbms/install/rootadd_rdbms.sh
/u01/app/19c/grid/crs/install/rootcrs.sh -postpatch
ps -ef | grep pmon

# Start Oracle Home from Oracle User
su - oracle
. db.env
srvctl start home -o /u01/app/oracle/product/19c/dbhome_1 -s STBY1_APR25_RU_20250422.txt -n racstandby01
OR
srvctl start instance -d DRTEST -i DRTEST1
ps -ef | grep pmon

12.2- racstandby02

Prepare Second Standby Node (racstandby02) For Patching

# Stop MRP on racstandby02
su - oracle 
. db.env
env | grep SID
sqlplus / as sysdba
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';
RECOVER MANAGED STANDBY DATABASE CANCEL;
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

# Start MRP on racstandby01
su - oracle
. db.env
env | grep SID
sqlplus / as sysdba
startup mount;
RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';


# On Primary
select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;
# On Standby:
select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
# On Primary Instance 1 Perform log switches 3 to 4 times.
alter system switch logfile;
# On Primary Instance 2 Perform log switches 3 to 4 times.
alter system switch logfile;
# On Primary
select thread#,max(sequence#) from v$archived_log where archived='YES' group by thread#;
# On Standby:
select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

# Stop Oracle Home
su - oracle
. db.env
srvctl stop home -o /u01/app/oracle/product/19c/dbhome_1 -s STBY2_APR25_RU_20250422.txt -n racstandby02
ps -ef | grep pmon

# Stop&Prepare Grid Home with Root User
exit
/u01/app/19c/grid/crs/install/rootcrs.sh -prepatch
ps -ef | grep pmon

Apply Grid Home & Oracle Home Patches For Second Standby Node (racstandby02)

# Apply Grid Home Patches
su - oracle
. grid.env
which opatch

#Database Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37642901
#OCW Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37654975
# ACFS Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37643161
# Tomcat Release Update 19.0.0.0.0
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37762426
# DBWLM Release Update 19.0.0.0.0
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/36758186

opatch lspatches
$ORACLE_HOME/OPatch/opatch lsinv | grep -i applied

# Apply Oracle Home Patches
su - oracle 
. db.env
which opatch

# Run prepatch Before Oracle Home Patch Operation
/u01/software/37641958/37654975/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/19c/dbhome_1

# Database Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent /u01/software/37641958/37642901
# OCW Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent /u01/software/37641958/37654975

# Apply OJVM from Oracle User 
# If patch available then apply using apply command
su - oracle 
. db.env
which opatch
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent <patch_location>

opatch lspatches
$ORACLE_HOME/OPatch/opatch lsinv | grep -i applied

# Run postpatch After Oracle Home Patch Operation
/u01/software/37641958/37654975/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/19c/dbhome_1

# Start Cluster via Root User
su - root
/u01/app/19c/grid/rdbms/install/rootadd_rdbms.sh
/u01/app/19c/grid/crs/install/rootcrs.sh -postpatch
ps -ef | grep pmon

# Start Oracle Home from Oracle User
su - oracle
. db.env
srvctl start home -o /u01/app/oracle/product/19c/dbhome_1 -s STBY2_APR25_RU_20250422.txt -n racstandby02
OR
srvctl start instance -d DRTEST -i DRTEST2
ps -ef | grep pmon

12.3- racprimary01

Prepare First Primary Node (racprimary01) For Patching

# Stop Oracle Home
su - oracle
. db.env
srvctl stop home -o /u01/app/oracle/product/19c/dbhome_1 -s PROD1_APR25_RU_20250422.txt -n racprimary01
ps -ef | grep pmon

# Stop&Prepare Grid Home with Root User
exit
/u01/app/19c/grid/crs/install/rootcrs.sh -prepatch
ps -ef | grep pmon

Apply Grid Home & Oracle Home Patches For First Primary Node (racprimary01)

# Apply Grid Home Patches
su - oracle
. grid.env
which opatch

#Database Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37642901
#OCW Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37654975
# ACFS Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37643161
# Tomcat Release Update 19.0.0.0.0
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37762426
# DBWLM Release Update 19.0.0.0.0
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/36758186

opatch lspatches
$ORACLE_HOME/OPatch/opatch lsinv | grep -i applied

# Apply Oracle Home Patches
su - oracle 
. db.env
which opatch

# Run prepatch Before Oracle Home Patch Operation
/u01/software/37641958/37654975/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/19c/dbhome_1

# Database Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent /u01/software/37641958/37642901
# OCW Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent /u01/software/37641958/37654975

# Apply OJVM from Oracle User 
# If patch available then apply using apply command
su - oracle 
. db.env
which opatch
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent <patch_location>

opatch lspatches
$ORACLE_HOME/OPatch/opatch lsinv | grep -i applied

# Run postpatch After Oracle Home Patch Operation
/u01/software/37641958/37654975/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/19c/dbhome_1

# Start Cluster via Root User
su - root
/u01/app/19c/grid/rdbms/install/rootadd_rdbms.sh
/u01/app/19c/grid/crs/install/rootcrs.sh -postpatch
ps -ef | grep pmon

# Start Oracle Home from Oracle User
su - oracle
. db.env
srvctl start home -o /u01/app/oracle/product/19c/dbhome_1 -s PROD1_APR25_RU_20250422.txt -n racprimary01
OR
srvctl start instance -d TEST -i TEST1
ps -ef | grep pmon

12.4- racprimary02

Prepare Second Primary Node (racprimary02) For Patching

# Stop Oracle Home
su - oracle
. db.env
srvctl stop home -o /u01/app/oracle/product/19c/dbhome_1 -s PROD2_APR25_RU_20250422.txt -n racprimary02
ps -ef | grep pmon

# Stop&Prepare Grid Home with Root User
exit
/u01/app/19c/grid/crs/install/rootcrs.sh -prepatch
ps -ef | grep pmon

Apply Grid Home & Oracle Home Patches For Second Primary Node (racprimary02)

# Apply Grid Home Patches
su - oracle
. grid.env
which opatch

#Database Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37642901
#OCW Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37654975
# ACFS Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37643161
# Tomcat Release Update 19.0.0.0.0
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/37762426
# DBWLM Release Update 19.0.0.0.0
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/19c/grid -local -silent /u01/software/37641958/36758186

opatch lspatches
$ORACLE_HOME/OPatch/opatch lsinv | grep -i applied

# Apply Oracle Home Patches
su - oracle 
. db.env
which opatch

# Run prepatch Before Oracle Home Patch Operation
/u01/software/37641958/37654975/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/19c/dbhome_1

# Database Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent /u01/software/37641958/37642901
# OCW Release Update 19.27.0.0.250415
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent /u01/software/37641958/37654975

# Apply OJVM from Oracle User 
# If patch available then apply using apply command
su - oracle 
. db.env
which opatch
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/19c/dbhome_1 -local -silent <patch_location>

opatch lspatches
$ORACLE_HOME/OPatch/opatch lsinv | grep -i applied

# Run postpatch After Oracle Home Patch Operation
/u01/software/37641958/37654975/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/19c/dbhome_1

# Start Cluster via Root User
su - root
/u01/app/19c/grid/rdbms/install/rootadd_rdbms.sh
/u01/app/19c/grid/crs/install/rootcrs.sh -postpatch
ps -ef | grep pmon

# Start Oracle Home from Oracle User
su - oracle
. db.env
srvctl start home -o /u01/app/oracle/product/19c/dbhome_1 -s PROD2_APR25_RU_20250422.txt -n racprimary02
OR
srvctl start instance -d TEST -i TEST2
ps -ef | grep pmon

13- Check Applied Patches Both Grid and Oracle Homes

su - oracle

# GRID HOME
. grid.env
opatch lspatches

# DB HOME
. db.env
opatch lspatches

14- Apply Datapatch

  • Now it’s time to run datapatch -verbose, which applies the post-patch SQL changes and updates the patch information in the database dictionary.
  • Important Note: For each separate database running on the same shared Oracle home being patched, run the datapatch utility (a minimal loop sketch is shown after the commands below).
su - oracle 
. db.env
cd $ORACLE_HOME/OPatch
./datapatch -verbose
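
If more than one database runs from this shared Oracle home, the sketch below simply repeats datapatch per database by switching ORACLE_SID first. The SID names are placeholders; use the instances actually running from this home on this node.

su - oracle
. db.env
cd $ORACLE_HOME/OPatch
# Placeholder SIDs; replace with the instances served by this Oracle home
for SID in TEST1 OTHERDB1; do
  export ORACLE_SID=$SID
  ./datapatch -verbose
done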

15- Check Invalid Objects and DBA_REGISTRY (After Patch)

su - oracle
. db.env
sqlplus / as sysdba

set lines 200 pages 500
col owner for a15
col OBJECT_NAME for a35
select owner, object_name, object_type, status from dba_objects where status='INVALID';

-- Run UTLRP if You Have Invalid Objects
@?/rdbms/admin/utlrp.sql

col COMP_NAME for a40
select COMP_ID, COMP_NAME, VERSION, STATUS from dba_registry;

col ACTION_TIME for a30
col action for a30
col version for a10
col BUNDLE_SERIES for a30
col COMMENTS for a47
select ACTION_TIME, ACTION, VERSION, BUNDLE_SERIES, COMMENTS from dba_registry_history;

16- Patch Deinstallation Steps (If Needed)

16.1- Rollback Applied Patches

Patch rollback operations can be performed using the same commands in the following order, starting from the standby servers.

  • racstandby02
  • racstandby01
  • racprimary02
  • racprimary01
# Check Installed Patches For Grid Home
su - oracle
. grid.env
which opatch
opatch lspatches

################################################################
[oracle]$ opatch lspatches
36758186;DBWLM RELEASE UPDATE 19.0.0.0.0 (36758186)
37762426;TOMCAT RELEASE UPDATE 19.0.0.0.0 (37762426)
37643161;ACFS RELEASE UPDATE 19.27.0.0.0 (37643161)
37654975;OCW RELEASE UPDATE 19.27.0.0.0 (37654975)
37642901;Database Release Update : 19.27.0.0.250415 (37642901)
################################################################

# Check Installed Patches For Oracle Home
su - oracle
. db.env
which opatch
opatch lspatches

################################################################
[oracle]$ opatch lspatches
37654975;OCW RELEASE UPDATE 19.27.0.0.0 (37654975)
37642901;Database Release Update : 19.27.0.0.250415 (37642901)
################################################################

# Rollback 19.27 Patches From Grid Home
# Check CRS
su - root
/u01/app/19c/grid/bin/crsctl check crs
# Stop CRS
/u01/app/19c/grid/bin/crsctl stop crs
# Ensure Grid Home & Oracle Home Shutdown
ps -ef | grep pmon
# Create TEMP Patch Directory
su - root
mkdir -p /u01/temp_opatch
cd /u01/temp_opatch
# To roll back the patch from the Grid home
/u01/app/19c/grid/OPatch/opatchauto rollback /u01/software/37641958 -oh /u01/app/19c/grid

################################################################
==Following patches were SUCCESSFULLY rolled back:
Patch: /u01/software/37641958/37654975
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
Patch: /u01/software/37641958/37643161
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
Patch: /u01/software/37641958/36758186
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
Patch: /u01/software/37641958/37762426
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
Patch: /u01/software/37641958/37642901
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_00-44-49AM_1.log
################################################################

# Rollback 19.27 Patches From Oracle Home
su - root
cd /u01/temp_opatch
# To roll back the patch from the Oracle RAC database home
/u01/app/oracle/product/19c/dbhome_1/OPatch/opatchauto rollback /u01/software/37641958 -oh /u01/app/oracle/product/19c/dbhome_1

################################################################
==Following patches were SUCCESSFULLY rolled back:
Patch: /u01/software/37641958/37654975
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_01-07-20AM_1.log
Patch: /u01/software/37641958/37642901
Log: /u01/app/oracle/product/19c/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2025-04-24_01-07-20AM_1.log
################################################################

# Ensure All 19.27 Patches Uninstalled
# Check Current Patches For Grid Home
su - oracle
. grid.env
which opatch
opatch lspatches

################################################################
[oracle]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)
################################################################

# Check Current Patches For Oracle Home
su - oracle
. db.env
which opatch
opatch lspatches

################################################################
[oracle]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
################################################################

# Start CRS
su - root
/u01/app/19c/grid/bin/crsctl start crs

# Ensure Grid Home & Oracle Home Running
ps -ef | grep pmon

16.2- Check New Patch Levels Both Grid and Oracle Homes

su - oracle

# GRID HOME
. grid.env
opatch lspatches

# DB HOME
. db.env
opatch lspatches

16.3- Rollback SQL Changes from the Database

  • Datapatch is run to complete the post-deinstall SQL deployment for the patch being deinstalled.
  • Important Note: For each separate database running on the same shared Oracle home being patched, run the datapatch utility.

It is enough to run it on one of the primary nodes (racprimary01 or racprimary02).

su - oracle 
. db.env
cd $ORACLE_HOME/OPatch
./datapatch -verbose

16.4- Check Invalid Objects and DBA_REGISTRY (After Rollback)

su - oracle
. db.env
sqlplus / as sysdba

set lines 200 pages 500
col owner for a15
col OBJECT_NAME for a35
select owner, object_name, object_type, status from dba_objects where status='INVALID';

-- Run UTLRP if You Have Invalid Objects
@?/rdbms/admin/utlrp.sql

col COMP_NAME for a40
select COMP_ID, COMP_NAME, VERSION, STATUS from dba_registry;

col ACTION_TIME for a30
col action for a30
col version for a10
col BUNDLE_SERIES for a30
col COMMENTS for a47
select ACTION_TIME, ACTION, VERSION, BUNDLE_SERIES, COMMENTS from dba_registry_history;

How to Change Default Keepalived Log Path and Enable Log Rotation in Linux?

I have tried to document step by step How to Change Default Keepalived Log Path and Enable Log Rotation in Linux. I hope it will be useful for those who need it!

Summary

  • Keepalived Default Configuration Path: /etc/keepalived/keepalived.conf
  • Keepalived Default Log File Path: /var/log/messages
  • Sometimes it is necessary to change default log paths. Let’s change it 🙂

High Level Steps

1- Change Log Path
1-1. Check /etc/sysconfig/keepalived File
1-2. Edit /etc/sysconfig/keepalived File
1-3. Create New Directory to Keep Keepalived Logs
1-4. Edit /etc/rsyslog.conf File
1-5. Restart rsyslog and keepalived Services
1-6. Check Newly Created KeepaliveD Log File
2- Enable Log Rotation
2-1. Create Logrotate Configuration File
2-2. Add Logrotate Configuration
2-3. Test the Configuration
2-4. Manually Trigger Log Rotation
2-5. Verify the Rotation

1- Change Log Path

1-1. Check /etc/sysconfig/keepalived File

more /etc/sysconfig/keepalived

# Options for keepalived. See `keepalived --help' output and keepalived(8) and
# keepalived.conf(5) man pages for a list of all options. Here are the most
# common ones :
#
# --vrrp               -P    Only run with VRRP subsystem.
# --check              -C    Only run with Health-checker subsystem.
# --dont-release-vrrp  -V    Dont remove VRRP VIPs & VROUTEs on daemon stop.
# --dont-release-ipvs  -I    Dont remove IPVS topology on daemon stop.
# --dump-conf          -d    Dump the configuration data.
# --log-detail         -D    Detailed log messages.
# --log-facility       -S    0-7 Set local syslog facility (default=LOG_DAEMON)
#

KEEPALIVED_OPTIONS="-D"

1-2. Edit /etc/sysconfig/keepalived File

Add -S 6 to KEEPALIVED_OPTIONS so that Keepalived logs to the local6 syslog facility, which we will route to a dedicated log file in the next steps.

cp /etc/sysconfig/keepalived /etc/sysconfig/keepalived_backup
vi /etc/sysconfig/keepalived

# Options for keepalived. See `keepalived --help' output and keepalived(8) and
# keepalived.conf(5) man pages for a list of all options. Here are the most
# common ones :
#
# --vrrp               -P    Only run with VRRP subsystem.
# --check              -C    Only run with Health-checker subsystem.
# --dont-release-vrrp  -V    Dont remove VRRP VIPs & VROUTEs on daemon stop.
# --dont-release-ipvs  -I    Dont remove IPVS topology on daemon stop.
# --dump-conf          -d    Dump the configuration data.
# --log-detail         -D    Detailed log messages.
# --log-facility       -S    0-7 Set local syslog facility (default=LOG_DAEMON)
#

KEEPALIVED_OPTIONS="-D -S 6"

1-3. Create New Directory to Keep Keepalived Logs

mkdir -p /data/keepalived/log

1-4. Edit /etc/rsyslog.conf File

Add the lines below to the end of the file so that messages logged to the local6 facility are written to the new path.

cp /etc/rsyslog.conf /etc/rsyslog.conf_backup
vi /etc/rsyslog.conf

# Save keepalived logs
local6.*                                                /data/keepalived/log/keepalived.log

1-5. Restart rsyslog and keepalived Services

sudo systemctl restart rsyslog
sudo systemctl restart keepalived

1-6. Check Newly Created KeepaliveD Log File

tail -444f /data/keepalived/log/keepalived.log

2- Enable Log Rotation

2-1. Create Logrotate Configuration File

Create a new configuration file for Keepalived in /etc/logrotate.d/

sudo vi /etc/logrotate.d/keepalived

2-2. Add Logrotate Configuration

  • Add the following configuration to manage the rotation. Adjust the file path, rotation frequency, and retention based on your setup.
  • Replace /data/keepalived/log/keepalived.log with the path to your Keepalived log file.
/data/keepalived/log/keepalived.log {
    daily
    rotate 7
    create 644
    missingok
    notifempty
    compress
    dateext
    postrotate
        systemctl reload keepalived > /dev/null 2>&1 || true
        systemctl reload rsyslog
    endscript
}
***** Option Explanations *****

** daily: Rotate the logs daily. You can change this to weekly or monthly if preferred.
** missingok: Don’t complain if the log file is missing.
** rotate 7: Keep 7 rotated logs.
** compress: Compress the rotated logs.
** delaycompress: (optional, not used in the example above) Delay compression for one cycle to allow for easier inspection.
** notifempty: Do not rotate empty log files.
** create 644: Create a new log file with the specified permissions.
** dateext: Adds date to old log files. 
** postrotate: Reload the keepalived service after rotation to ensure it uses the new log file.

2-3. Test the Configuration

  • Run the following command to test your new logrotate configuration.
  • This runs logrotate in debug mode, showing what it would do without making changes.
sudo logrotate -d /etc/logrotate.d/keepalived

2-4. Manually Trigger Log Rotation

If the test looks good, you can manually trigger the rotation.

sudo logrotate -f /etc/logrotate.d/keepalived

2-5. Verify the Rotation

Check the rotated log files to ensure they’re correctly managed.

# pwd
/data/keepalived/log
# ls -lrt
-rw-r--r-- 1 root root 1162 Apr 21 10:12 keepalived.log-20250421.gz
-rw-r--r-- 1 root root 6370 Apr 21 10:14 keepalived.log

How to Install and Use SAR(System Activity Reporting) in Linux?

I have tried to document step by step How to Install and Use SAR(System Activity Reporting) in Linux. I hope it will be useful for those who need it!

SAR Installation

Install sysstat Package

##### For Debian Based Linux #####
sudo apt-get install sysstat

##### For Redhat Based Linux #####
sudo yum install sysstat

Set Retention as Needed

##### For Debian Based Linux #####
vi /etc/sysstat/sysstat
HISTORY=30
Save & Quit

# HISTORY: Number of days to retain SAR data. Default 7 Days.
more /etc/sysstat/sysstat | grep HISTORY
HISTORY=30

##### For Redhat Based Linux #####
vi /etc/sysconfig/sysstat
HISTORY=30
Save & Quit

# HISTORY: Number of days to retain SAR data. Default 28 Days.
more /etc/sysconfig/sysstat | grep HISTORY
HISTORY=30

Enable Stat Collection

##### For Debian Based Linux #####
vi /etc/default/sysstat
change ENABLED="false" to ENABLED="true"
Save & Quit

##### For Redhat Based Linux #####
vi /etc/sysconfig/sysstat
add ENABLED="true" to the end of the file
Save & Quit

Enable and Restart sysstat Service

##### Both Debian and Redhat Based Linux #####

sudo systemctl enable sysstat

sudo systemctl restart sysstat

sudo systemctl status sysstat

NOTE: The SAR collection interval is 10 minutes, so the first time you enable statistics collection you need to wait about 10 minutes before running the sar command to get meaningful data.
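
If you need figures immediately, sar can also sample in real time by passing an interval and a count instead of reading the collected files:

# Print CPU usage every 2 seconds, 5 times
sar 2 5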

Check Generated Report Files

SAR file names start with sa followed by the current day of the month. For example: sa04

##### For Debian Based Linux #####
SAR reports are stored under /var/log/sysstat
# ls -lrt /var/log/sysstat
-rw-r--r-- 1 root root  17808 Apr  4 14:50 sa04

##### For Redhat Based Linux #####
SAR reports are stored under /var/log/sa
# ls -lrt /var/log/sa
-rw-r--r--. 1 root root 2420 Apr  4 14:50 sa04

Check Frequency of SAR Statistics Collection

##### For Debian Based Linux #####
more /etc/cron.d/sysstat

# Activity reports every 10 minutes everyday
5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1

##### For Redhat Based Linux #####
more /usr/lib/systemd/system/sysstat-collect.timer | grep OnCalendar

# Activity reports every 10 minutes everyday
OnCalendar=*:00/10
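
If the default 10-minute interval is too coarse for your troubleshooting, the collection schedule can be tightened. A minimal sketch for a 2-minute interval (adjust to your needs):

##### For Debian Based Linux #####
# Edit /etc/cron.d/sysstat and change the schedule, e.g. run debian-sa1 every 2 minutes
*/2 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1

##### For Redhat Based Linux #####
# Override the collection timer instead of editing the unit file directly
sudo systemctl edit sysstat-collect.timer
# Add the following override, then save and exit:
# [Timer]
# OnCalendar=
# OnCalendar=*:00/2
sudo systemctl daemon-reload
sudo systemctl restart sysstat-collect.timer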

Useful SAR Commands

##### CHECK CPU STATS #####
sar

sar -s 18:00:00 -e 19:00:00                                 Between 6PM and 7PM 

sar -f /var/log/sysstat/sa04                                (DEBIAN) 4th day of month
sar -f /var/log/sa/sa04                                     (REDHAT) 4th day of month

sar -f /var/log/sysstat/sa04 -s 18:00:00 -e 19:00:00        (DEBIAN) 4th day of month & 6pm-7pm
sar -f /var/log/sa/sa04 -s 18:00:00 -e 19:00:00             (REDHAT) 4th day of month & 6pm-7pm

##### CHECK RAM STATS #####
sar -r

sar -r -s 18:00:00 -e 19:00:00                               Between 6PM and 7PM

sar -r -f /var/log/sysstat/sa04                              (DEBIAN) 4th day of month
sar -r -f /var/log/sa/sa04                                   (REDHAT) 4th day of month

sar -r -f /var/log/sysstat/sa04 -s 18:00:00 -e 19:00:00      (DEBIAN) 4th day of month & 6pm-7pm
sar -r -f /var/log/sa/sa04 -s 18:00:00 -e 19:00:00           (REDHAT) 4th day of month & 6pm-7pm

##### CHECK SWAP STATS #####
sar -S

sar -S -s 18:00:00 -e 19:00:00                                Between 6PM and 7PM

sar -S -f /var/log/sysstat/sa04                               (DEBIAN) 4th day of month
sar -S -f /var/log/sa/sa04                                    (REDHAT) 4th day of month

sar -S -f /var/log/sysstat/sa04 -s 18:00:00 -e 19:00:00       (DEBIAN) 4th day of month & 6pm-7pm
sar -S -f /var/log/sa/sa04 -s 18:00:00 -e 19:00:00            (REDHAT) 4th day of month & 6pm-7pm

##### CHECK DISK STATS #####
sar -b

sar -b -s 18:00:00 -e 19:00:00                                 Between 6PM and 7PM

sar -b -f /var/log/sysstat/sa04                                (DEBIAN) 4th day of month
sar -b -f /var/log/sa/sa04                                     (REDHAT) 4th day of month

sar -b -f /var/log/sysstat/sa04 -s 18:00:00 -e 19:00:00        (DEBIAN) 4th day of month & 6pm-7pm
sar -b -f /var/log/sa/sa04 -s 18:00:00 -e 19:00:00             (REDHAT) 4th day of month & 6pm-7pm

##### CHECK NETWORK STATS #####
sar -n DEV

sar -n DEV -s 18:00:00 -e 19:00:00                              Between 6PM and 7PM

sar -n DEV -f /var/log/sysstat/sa04                             (DEBIAN) 4th day of month
sar -n DEV -f /var/log/sa/sa04                                  (REDHAT) 4th day of month

sar -n DEV -f /var/log/sysstat/sa04 -s 18:00:00 -e 19:00:00     (DEBIAN) 4th day of month & 6pm-7pm
sar -n DEV -f /var/log/sa/sa04 -s 18:00:00 -e 19:00:00          (REDHAT) 4th day of month & 6pm-7pm

How to Setup MySQL High Availability InnoDB Cluster with MySQL Router & KeepAliveD?

I have tried to document step by step How to Setup MySQL High Availability InnoDB Cluster with MySQL Router & KeepAliveD. There are 27 low-level steps in total. I hope it will be useful for those who need it!

High Level Installation and Configuration Steps

1-Check Environment Details
2-Update /etc/hosts Files on All Servers
3-Disable Firewall & Selinux on All Servers
4-Install MySQL Server on MySQL-01 & MySQL-02 & MySQL-03
5-Install MySQL Shell (mysqlsh) on MySQL-01 & MySQL-02 & MySQL-03
6-Install MySQL Router on MySQL-Router01 & MySQL-Router02
7-Set MySQL Shell Password on MySQL-01 & MySQL-02 & MySQL-03
8-Configure MySQL-01 Instance
9-Configure MySQL-02 Instance
10-Configure MySQL-03 Instance
11-Create InnoDB Cluster on MySQL-01 Instance
12-Check InnoDB Cluster Status on MySQL-01 Instance
13-Add MySQL-02 Instances to the Cluster on MySQL-01 Instance
14-Add MySQL-03 Instances to the Cluster on MySQL-01 Instance
15-Check InnoDB Cluster Status on MySQL-01 Instance
16-Setup Router Account on MySQL-01 Instance
17-Configure MySQL Router on MySQL-Router01 & MySQL-Router02
18-Create a Table and Insert Some Data for Test Purpose on MySQL-01 Instance
19-Check Newly Created Table on MySQL-02 & MySQL-03 Instance
20-Ensure MySQL Router Works as Expected
21-Stop MySQL Service on MySQL-01 and Ensure Primary Node is Changed as Expected
22-Install KeepAliveD on MySQL-Router01 & MySQL-Router02
23-Configure KeepAliveD on MySQL-Router01 & MySQL-Router02
24-Enable/Start KeepAliveD Service on MySQL-Router01 & MySQL-Router02
25-Shutdown MySQL Router01 and Ensure DB Connections Work as Expected Using KeepAliveD Virtual IP Address and Ports (RW & RO)
26-Shutdown MySQL Router02 and Ensure DB Connections Work as Expected Using KeepAliveD Virtual IP Address and Ports (RW & RO)
27-KeepAliveD Test Script Using Virtual IP Address and Ports (RW & RO)

Low Level Installation and Configuration Steps

1-Check Environment Details

2-Update /etc/hosts Files on All Servers

vi /etc/hosts

192.168.1.151 rocky95-mysql01
192.168.1.152 rocky95-mysql02
192.168.1.153 rocky95-mysql03
192.168.1.154 rocky95-mysqlrouter01
192.168.1.155 rocky95-mysqlrouter02

3-Disable Firewall & Selinux on All Servers

# Disable & Check Firewall
sudo systemctl status firewalld
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl status firewalld

# Disable & Check Selinux
vi /etc/selinux/config
SELINUX=disabled
reboot
sestatus

4-Install MySQL Server on MySQL-01 & MySQL-02 & MySQL-03

How to Install MySQL for Redhat-Based Linux (Rocky Linux)?

5-Install MySQL Shell (mysqlsh) on MySQL-01 & MySQL-02 & MySQL-03

MySQL Shell is an advanced command-line client and code editor for MySQL. In addition to SQL, MySQL Shell also offers scripting capabilities for JavaScript and Python.

# Installation
wget https://repo.mysql.com/mysql80-community-release-el9-5.noarch.rpm
rpm -ivh mysql80-community-release-el9-5.noarch.rpm
yum install mysql-shell -y

# Check
# mysqlsh --version
mysqlsh   Ver 8.0.40 for Linux on x86_64 - for MySQL 8.0.40 (MySQL Community Server (GPL))

6-Install MySQL Router on MySQL-Router01 & MySQL-Router02

# Installation
wget https://repo.mysql.com/mysql80-community-release-el9-5.noarch.rpm
rpm -ivh mysql80-community-release-el9-5.noarch.rpm
yum install mysql-router -y

# Check
# mysqlrouter --version
MySQL Router  Ver 8.0.40 for Linux on x86_64 (MySQL Community - GPL)

7-Set MySQL Shell Password on MySQL-01 & MySQL-02 & MySQL-03

# mysqlsh --uri root@localhost
Please provide the password for 'root@localhost': *********
Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): Y
MySQL Shell 8.0.40

Copyright (c) 2016, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 8 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS >

8-Configure MySQL-01 Instance

Run the commands below on the MySQL-01 instance.

mysqlsh --uri root@localhost
dba.configureInstance()
Select 2
innodbcluster
InnoDBcluster_2025!
y
y
dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql01')
InnoDBcluster_2025!
y

Sample Output

[root@rocky95-mysql01 ~]# mysqlsh --uri root@localhost
MySQL Shell 8.0.40

Copyright (c) 2016, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 11 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS > dba.configureInstance()
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql01:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: innodbcluster
Password for new account: *******************
Confirm password: *******************

applierWorkerThreads will be set to the default value of 4.

NOTE: Some configuration options need to be fixed:
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                               | Current Value | Required Value | Note                                             |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
| enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
| server_id                              | 1             | <unique ID>    | Update read-only variable and restart the server |
+----------------------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y

Creating user innodbcluster@%.
Account innodbcluster@% was successfully created.

Configuring instance...

WARNING: '@@binlog_transaction_dependency_tracking' is deprecated and will be removed in a future release. (Code 1287).
The instance 'rocky95-mysql01:3306' was configured to be used in an InnoDB cluster.
Restarting MySQL...
NOTE: MySQL server at rocky95-mysql01:3306 was restarted.
 MySQL  localhost:33060+ ssl  JS > dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql01')
Please provide the password for 'innodbcluster@rocky95-mysql01': *******************
Save password for 'innodbcluster@rocky95-mysql01'? [Y]es/[N]o/Ne[v]er (default No): y
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql01:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'rocky95-mysql01:3306' is valid to be used in an InnoDB cluster.

{
    "status": "ok"
}
 MySQL  localhost:33060+ ssl  JS >

9-Configure MySQL-02 Instance

Run the commands below on the MySQL-02 instance.

mysqlsh --uri root@localhost
dba.configureInstance()
Select 2
innodbcluster
InnoDBcluster_2025!
y
y
dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql02')
InnoDBcluster_2025!
y

Sample Output

[root@rocky95-mysql02 ~]# mysqlsh --uri root@localhost
MySQL Shell 8.0.40

Copyright (c) 2016, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 9 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS > dba.configureInstance()
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql02:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: innodbcluster
Password for new account: *******************
Confirm password: *******************

applierWorkerThreads will be set to the default value of 4.

NOTE: Some configuration options need to be fixed:
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                               | Current Value | Required Value | Note                                             |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
| enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
| server_id                              | 1             | <unique ID>    | Update read-only variable and restart the server |
+----------------------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y

Creating user innodbcluster@%.
Account innodbcluster@% was successfully created.

Configuring instance...

WARNING: '@@binlog_transaction_dependency_tracking' is deprecated and will be removed in a future release. (Code 1287).
The instance 'rocky95-mysql02:3306' was configured to be used in an InnoDB cluster.
Restarting MySQL...
NOTE: MySQL server at rocky95-mysql02:3306 was restarted.
 MySQL  localhost:33060+ ssl  JS > dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql02')
Please provide the password for 'innodbcluster@rocky95-mysql02': *******************
Save password for 'innodbcluster@rocky95-mysql02'? [Y]es/[N]o/Ne[v]er (default No): Y
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql02:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'rocky95-mysql02:3306' is valid to be used in an InnoDB cluster.

{
    "status": "ok"
}
 MySQL  localhost:33060+ ssl  JS >

10-Configure MySQL-03 Instance

Run the commands below on the MySQL-03 instance.

mysqlsh --uri root@localhost
dba.configureInstance()
Select 2
innodbcluster
InnoDBcluster_2025!
y
y
dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql03')
InnoDBcluster_2025!
y

Sample Output

[root@rocky95-mysql03 ~]# mysqlsh --uri root@localhost
MySQL Shell 8.0.40

Copyright (c) 2016, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 9 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS > dba.configureInstance()
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql03:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

ERROR: User 'root' can only connect from 'localhost'. New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB cluster with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary
privileges or leave empty and press Enter to cancel.
Account Name: innodbcluster
Password for new account: *******************
Confirm password: *******************

applierWorkerThreads will be set to the default value of 4.

NOTE: Some configuration options need to be fixed:
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                               | Current Value | Required Value | Note                                             |
+----------------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable                       |
| enforce_gtid_consistency               | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                              | OFF           | ON             | Update read-only variable and restart the server |
| server_id                              | 1             | <unique ID>    | Update read-only variable and restart the server |
+----------------------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y

Creating user innodbcluster@%.
Account innodbcluster@% was successfully created.

Configuring instance...

WARNING: '@@binlog_transaction_dependency_tracking' is deprecated and will be removed in a future release. (Code 1287).
The instance 'rocky95-mysql03:3306' was configured to be used in an InnoDB cluster.
Restarting MySQL...
NOTE: MySQL server at rocky95-mysql03:3306 was restarted.
 MySQL  localhost:33060+ ssl  JS > dba.checkInstanceConfiguration('innodbcluster@rocky95-mysql03')
Please provide the password for 'innodbcluster@rocky95-mysql03': *******************
Save password for 'innodbcluster@rocky95-mysql03'? [Y]es/[N]o/Ne[v]er (default No): Y
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as rocky95-mysql03:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'rocky95-mysql03:3306' is valid to be used in an InnoDB cluster.

{
    "status": "ok"
}
 MySQL  localhost:33060+ ssl  JS >

11-Create InnoDB Cluster on MySQL-01 Instance

Run the commands below on the MySQL-01 instance.

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
InnoDBcluster_2025!
Y
var v_cluster= dba.createCluster('INNODB_CLS')

Sample Output

[root@rocky95-mysql01 ~]# mysqlsh --uri root@localhost
MySQL Shell 8.0.40

Copyright (c) 2016, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 10 (X protocol)
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:33060+ ssl  JS > \c innodbcluster@rocky95-mysql01:3306
Creating a session to 'innodbcluster@rocky95-mysql01:3306'
Please provide the password for 'innodbcluster@rocky95-mysql01:3306': *******************
Save password for 'innodbcluster@rocky95-mysql01:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Fetching schema names for auto-completion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 12
Server version: 8.0.36 Source distribution
No default schema selected; type \use <schema> to set one.
 MySQL  rocky95-mysql01:3306 ssl  JS > var v_cluster= dba.createCluster('INNODB_CLS')
A new InnoDB Cluster will be created on instance 'rocky95-mysql01:3306'.

Validating instance configuration at rocky95-mysql01:3306...

This instance reports its own address as rocky95-mysql01:3306

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'rocky95-mysql01:3306'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...

Creating InnoDB Cluster 'INNODB_CLS' on 'rocky95-mysql01:3306'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

12-Check InnoDB Cluster Status on MySQL-01 Instance

 MySQL  rocky95-mysql01:3306 ssl  JS > v_cluster.status()
{
    "clusterName": "INNODB_CLS", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "rocky95-mysql01:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "rocky95-mysql01:3306": {
                "address": "rocky95-mysql01:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.36"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "rocky95-mysql01:3306"
}
 MySQL  rocky95-mysql01:3306 ssl  JS > v_cluster.describe()
{
    "clusterName": "INNODB_CLS", 
    "defaultReplicaSet": {
        "name": "default", 
        "topology": [
            {
                "address": "rocky95-mysql01:3306", 
                "label": "rocky95-mysql01:3306", 
                "role": "HA"
            }
        ], 
        "topologyMode": "Single-Primary"
    }
}
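
At this stage the cluster has only a single member. The same membership information can be cross-checked from SQL through performance_schema; a quick optional check, run from the same MySQL Shell session by switching to SQL mode and back:

\sql
SELECT member_host, member_state, member_role
  FROM performance_schema.replication_group_members;
\js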

13-Add MySQL-02 Instance to the Cluster on MySQL-01 Instance

v_cluster.addInstance('rocky95-mysql02:3306')

Error

MySQL  rocky95-mysql01:3306 ssl  JS > v_cluster.addInstance('rocky95-mysql02:3306')

NOTE: The target instance 'rocky95-mysql02:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'rocky95-mysql02:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): 
Validating instance configuration at rocky95-mysql02:3306...

This instance reports its own address as rocky95-mysql02:3306

Instance configuration is suitable.
ERROR: RuntimeError: Cannot add an instance with the same server UUID (e5622470-ccd9-11ef-872a-0800278f8491) of an active member of the cluster 'rocky95-mysql01:3306'. Please change the server UUID of the instance to add, all members must have a unique server UUID.
Cluster.addInstance: Cannot add an instance with the same server UUID (e5622470-ccd9-11ef-872a-0800278f8491) of an active member of the cluster 'rocky95-mysql01:3306'. Please change the server UUID of the instance to add, all members must have a unique server UUID. (RuntimeError)
 MySQL  rocky95-mysql01:3306 ssl  JS >

Solution

# Run the commands below on the MySQL-02 & MySQL-03 instances.
# This error occurs because the MySQL-02 and MySQL-03 servers were created by cloning the MySQL-01 server :)

$ sudo systemctl stop mysqld
$ sudo mv /var/lib/mysql/auto.cnf /var/lib/mysql/auto.cnf_backup
$ sudo systemctl start mysqld
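
# Optional check before retrying addInstance(): every node should now report a different server UUID,
# since a fresh auto.cnf (with a new UUID) is generated at startup. Run on each MySQL node:

$ mysqlsh --uri root@localhost --sql -e "SELECT @@hostname, @@server_uuid;"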

Successfully Completed

 MySQL  rocky95-mysql01:3306 ssl  JS > v_cluster.addInstance('rocky95-mysql02:3306')

NOTE: The target instance 'rocky95-mysql02:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'rocky95-mysql02:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): 
Validating instance configuration at rocky95-mysql02:3306...

This instance reports its own address as rocky95-mysql02:3306

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'rocky95-mysql02:3306'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...
A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: rocky95-mysql02:3306 is being cloned from rocky95-mysql01:3306
** Stage DROP DATA: Completed
** Clone Transfer  
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: rocky95-mysql02:3306 is shutting down...

* Waiting for server restart... ready 
* rocky95-mysql02:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.72 MB transferred in about 1 second (~74.72 MB/s)

State recovery already finished for 'rocky95-mysql02:3306'

The instance 'rocky95-mysql02:3306' was successfully added to the cluster.

 MySQL  rocky95-mysql01:3306 ssl  JS >

14-Add MySQL-03 Instance to the Cluster on MySQL-01 Instance

v_cluster.addInstance('rocky95-mysql03:3306')

Sample Output

 MySQL  rocky95-mysql01:3306 ssl  JS > v_cluster.addInstance('rocky95-mysql03:3306')

NOTE: The target instance 'rocky95-mysql03:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'rocky95-mysql03:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): 
Validating instance configuration at rocky95-mysql03:3306...

This instance reports its own address as rocky95-mysql03:3306

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'rocky95-mysql03:3306'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...
A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: rocky95-mysql03:3306 is being cloned from rocky95-mysql01:3306
** Stage DROP DATA: Completed
** Clone Transfer  
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: rocky95-mysql03:3306 is shutting down...

* Waiting for server restart... ready 
* rocky95-mysql03:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.70 MB transferred in about 1 second (~74.70 MB/s)

State recovery already finished for 'rocky95-mysql03:3306'

The instance 'rocky95-mysql03:3306' was successfully added to the cluster.

 MySQL  rocky95-mysql01:3306 ssl  JS >

15-Check InnoDB Cluster Status on MySQL-01 Instance

 MySQL  rocky95-mysql01:3306 ssl  JS > v_cluster.status()
{
    "clusterName": "INNODB_CLS", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "rocky95-mysql01:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "rocky95-mysql01:3306": {
                "address": "rocky95-mysql01:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.36"
            }, 
            "rocky95-mysql02:3306": {
                "address": "rocky95-mysql02:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.36"
            }, 
            "rocky95-mysql03:3306": {
                "address": "rocky95-mysql03:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.36"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "rocky95-mysql01:3306"
}
 MySQL  rocky95-mysql01:3306 ssl  JS >

16-Setup Router Account on MySQL-01 Instance

v_cluster.setupRouterAccount('mysqlrouter')
InnoDBcluster_2025! ---> Password

Sample Output

 MySQL  rocky95-mysql01:3306 ssl  JS > v_cluster.setupRouterAccount('mysqlrouter')

Missing the password for new account mysqlrouter@%. Please provide one.
Password for new account: *******************
Confirm password: *******************


Creating user mysqlrouter@%.
Account mysqlrouter@% was successfully created.

 MySQL  rocky95-mysql01:3306 ssl  JS >
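
If you want to verify the account before bootstrapping the routers, it can be listed from the mysql.user table (a quick optional check from the same session):

\sql
SELECT user, host FROM mysql.user WHERE user = 'mysqlrouter';
\js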

17-Configure MySQL Router on MySQL-Router01 & MySQL-Router02

# Use Below Command for Default Data and Log Paths
mysqlrouter --bootstrap innodbcluster@rocky95-mysql01 --user mysqlrouter
InnoDBcluster_2025! ---> Password

Conf File Path=/etc/mysqlrouter/mysqlrouter.conf
Log File Path=/var/log/mysqlrouter/mysqlrouter.log

# Use Below Command for Custom Data and Log Paths
mkdir -p /data/mysqlrouter
mysqlrouter --bootstrap innodbcluster@rocky95-mysql01 -d /data/mysqlrouter --user mysqlrouter
InnoDBcluster_2025! ---> Password

Conf File Path=/data/mysqlrouter/mysqlrouter.conf
Log File Path=/data/mysqlrouter/log/mysqlrouter.log

# Enable and Start Service
systemctl enable mysqlrouter
systemctl start mysqlrouter
systemctl status mysqlrouter

Sample Output

[root@rocky95-mysqlrouter01 ~]# mysqlrouter --bootstrap innodbcluster@rocky95-mysql01 --user mysqlrouter
Please enter MySQL password for innodbcluster: 
# Bootstrapping system MySQL Router 8.0.40 (MySQL Community - GPL) instance...

- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /etc/mysqlrouter/mysqlrouter.conf

Existing configuration backed up to '/etc/mysqlrouter/mysqlrouter.conf.bak'

# MySQL Router configured for the InnoDB Cluster 'INNODB_CLS'

After this MySQL Router has been started with the generated configuration

    $ /etc/init.d/mysqlrouter restart
or
    $ systemctl start mysqlrouter
or
    $ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf

InnoDB Cluster 'INNODB_CLS' can be reached by connecting to:

## MySQL Classic protocol

- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447

## MySQL X protocol

- Read/Write Connections: localhost:6448
- Read/Only Connections:  localhost:6449

[root@rocky95-mysqlrouter01 ~]# systemctl enable mysqlrouter
Created symlink /etc/systemd/system/multi-user.target.wants/mysqlrouter.service → /usr/lib/systemd/system/mysqlrouter.service.

[root@rocky95-mysqlrouter01 ~]# systemctl start mysqlrouter
[root@rocky95-mysqlrouter01 ~]# 
[root@rocky95-mysqlrouter01 ~]# 
[root@rocky95-mysqlrouter01 ~]# systemctl status mysqlrouter
● mysqlrouter.service - MySQL Router
     Loaded: loaded (/usr/lib/systemd/system/mysqlrouter.service; disabled; preset: disabled)
     Active: active (running) since Sat 2025-01-11 17:45:45 +03; 2s ago
   Main PID: 2940 (mysqlrouter)
     Status: "running"
      Tasks: 25 (limit: 10961)
     Memory: 16.8M
        CPU: 109ms
     CGroup: /system.slice/mysqlrouter.service
             └─2940 /usr/bin/mysqlrouter
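
Once the service is active, it is worth confirming that the four ports reported by the bootstrap are actually listening on the router host (ports below are the bootstrap defaults; adjust if you customised them):

ss -tlnp | grep mysqlrouter
# Expect LISTEN sockets on 6446, 6447, 6448 and 6449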

18-Create a Table and Insert Some Data for Test Purpose on MySQL-01 Instance

mysqlsh --uri root@localhost

\sql

create database testdb;

use testdb;

CREATE TABLE replication_test_table (id INT AUTO_INCREMENT PRIMARY KEY, replication_test varchar(50));

INSERT INTO replication_test_table (replication_test) VALUES
('Test01 was inserted by Master-192.168.1.151'),('Test02 was inserted by Master-192.168.1.151'),
('Test03 was inserted by Master-192.168.1.151'),('Test04 was inserted by Master-192.168.1.151'),
('Test05 was inserted by Master-192.168.1.151'),('Test06 was inserted by Master-192.168.1.151');

SELECT * FROM replication_test_table;

select @@hostname;

19-Check Newly Created Table on MySQL-02 & MySQL-03 Instance

mysqlsh --uri root@localhost

\sql

use testdb;

SELECT * FROM replication_test_table;

select @@hostname;

MySQL-02

MySQL-03

20-Ensure MySQL Router Works as Expected

## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447

Run the command below and ensure that all requests go to the MySQL-01 instance as expected, since it is the only Read/Write node. The port number is 6446.

mysql -h rocky95-mysqlrouter01 -P 6446 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"

Run the command below and ensure that requests are distributed across the MySQL-02 and MySQL-03 instances as expected, since both nodes serve read-only traffic. The port number is 6447.

mysql -h rocky95-mysqlrouter01 -P 6447 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"
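
Running the read-only test several times in a row should show the hostname alternating between MySQL-02 and MySQL-03, since port 6447 balances new connections across the secondaries. A small helper loop (same credentials as above):

for i in 1 2 3 4; do
  mysql -h rocky95-mysqlrouter01 -P 6447 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"
done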

21-Stop MySQL Service on MySQL-01 and Ensure Primary Node is Changed as Expected

Check Current Primary Node on MySQL-01

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_cluster= dba.getCluster('INNODB_CLS')
v_cluster.status()

Stop MySQL Service on MySQL-01

[root@rocky95-mysql01 ~]# sudo systemctl stop mysqld
[root@rocky95-mysql01 ~]# 
[root@rocky95-mysql01 ~]# sudo systemctl status mysqld

Check Current Primary Node Again on MySQL-02

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql02:3306
InnoDBcluster_2025! --> Password
var v_cluster= dba.getCluster('INNODB_CLS')
v_cluster.status()

Start MySQL Service on MySQL-01

[root@rocky95-mysql01 ~]# sudo systemctl start mysqld
[root@rocky95-mysql01 ~]# 
[root@rocky95-mysql01 ~]# sudo systemctl status mysqld

Check Cluster Status Again on MySQL-01

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_cluster= dba.getCluster('INNODB_CLS')
v_cluster.status()

Promote MySQL-01 to PRIMARY

v_cluster.setPrimaryInstance('rocky95-mysql01:3306')
v_cluster.status()
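
To watch the role change in near real time while MySQL-01 is stopped and started again, the RW port of one of the routers can be polled in a loop; the hostname it returns is always the current primary. A small helper, not part of the original test steps:

while true; do
  mysql -h rocky95-mysqlrouter01 -P 6446 -uinnodbcluster -pInnoDBcluster_2025! -N -e "select now(), @@hostname;"
  sleep 1
done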

22-Install KeepAliveD on MySQL-Router01 & MySQL-Router02

sudo yum -y install keepalived
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.backup

23-Configure KeepAliveD on MySQL-Router01 & MySQL-Router02

  • Important Points
    • state must be MASTER on one node and BACKUP on the other.
    • Use your own network interface name. Check it with the ip addr (or ifconfig) command.
    • virtual_router_id must be the same on both nodes.
    • priority must be higher on the MASTER node (102 vs 101 in the examples below).
    • Use your own virtual_ipaddress.

MySQL-Router01

vi /etc/keepalived/keepalived.conf

vrrp_script chk_mysqlrouter {
    script "/bin/killall -0 /usr/bin/mysqlrouter"  # Check if mysqlrouter is running
    interval 2          # Check every 2 seconds
    weight 2            # Increase priority by 2 if script succeeds
    fall 2              # Consider the service down after 2 consecutive failures
    rise 2              # Consider the service back up after 2 consecutive successes
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s3
    virtual_router_id 50
    priority 102
    advert_int 1
    virtual_ipaddress {
        192.168.1.100
    }

    track_script {
        chk_mysqlrouter
    }
}

MySQL-Router02

vi /etc/keepalived/keepalived.conf

vrrp_script chk_mysqlrouter {
    script "/bin/killall -0 /usr/bin/mysqlrouter"  # Check if mysqlrouter is running
    interval 2          # Check every 2 seconds
    weight 2            # Increase priority by 2 if script succeeds
    fall 2              # Consider the service down after 2 consecutive failures
    rise 2              # Consider the service back up after 2 consecutive successes
}

vrrp_instance VI_2 {
    state BACKUP
    interface enp0s3
    virtual_router_id 50
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.168.1.100
    }

    track_script {
        chk_mysqlrouter
    }
}
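
Before enabling the service, the configuration can be syntax-checked on both nodes; keepalived 2.x provides a config-test mode for this (assuming the packaged version is a 2.x build):

keepalived -t -f /etc/keepalived/keepalived.conf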

24-Enable/Start KeepAliveD Service on MySQL-Router01 & MySQL-Router02

systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived
ip addr show

25-Shutdown MySQL Router01 and Ensure DB Connections Work as Expected Using KeepAliveD Virtual IP Address and Ports (RW & RO)

## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447
-- Read/Write Connection Test
mysql -h 192.168.1.100 -P 6446 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"
-- Read/Only Connection Test
mysql -h 192.168.1.100 -P 6447 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"

26-Shutdown MySQL Router02 and Ensure DB Connections Work as Expected Using KeepAliveD Virtual IP Address and Ports (RW & RO)

## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447
-- Read/Write Connection Test
mysql -h 192.168.1.100 -P 6446 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"
-- Read/Only Connection Test
mysql -h 192.168.1.100 -P 6447 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"

27-KeepAliveD Test Script Using Virtual IP Address and Ports (RW & RO)

# RW Test
while true
do
  sleep 1
  mysql -h 192.168.1.100 -P 6446 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"
done

# RO Test
while true
do
  sleep 1
  mysql -h 192.168.1.100 -P 6447 -uinnodbcluster -pInnoDBcluster_2025! -e "select @@hostname;"
done

Some Useful Commands

Stopping an InnoDB Cluster

##### 1. Check the Cluster Status #####
-- Before stopping the cluster, check its current status to ensure it is healthy and operational

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 -- cluster status
InnoDBcluster_2025! --> Password

##### 2. Disconnect Applications #####
Disconnect applications or services using the cluster to avoid writing data during the shutdown process

##### 3. Stop MySQL Instances (One at a Time) #####
-- Shut down the MySQL instances participating in the cluster in reverse order of their roles
-- First, stop secondary members
-- Finally, stop the primary member
-- Always shut down secondary members first to ensure the primary member remains active until the end.
-- Use the following command on each node

sudo systemctl stop mysqld
sudo systemctl status mysqld

##### 4. Confirm Shutdown #####

ps aux | grep mysqld

Starting an InnoDB Cluster

##### Start MySQL Instances (One at a Time) #####
-- Start the MySQL instances in the cluster one by one. The recommended order is:
-- Start the primary member first.
-- Then start the secondary members.
-- Use the following command on each node

sudo systemctl status mysqld
sudo systemctl start mysqld

##### Rejoin Instances to the Cluster #####
-- If the instances do not automatically rejoin the cluster, you can manually add them back
-- This ensures that the cluster reconfigures itself and resumes normal operation after a complete shutdown

mysqlsh --uri innodbcluster@rocky95-mysql01:3306
dba.rebootClusterFromCompleteOutage()
\sql
SELECT * FROM performance_schema.replication_group_members;

##### Verify Cluster Status #####
-- Check the cluster status to confirm all nodes are healthy and synchronized:

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 -- cluster status
InnoDBcluster_2025! --> Password

##### Reconnect Applications ##### 
-- Once the cluster is fully operational, reconnect the applications and resume normal operations.

Find Cluster Name

mysqlsh --uri root@localhost
var v_cluster= dba.getCluster()
v_cluster.getName()

Directly Define Variable and Use

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password
cluster.status()

Check Cluster Status

mysqlsh --uri root@localhost
\c innodbcluster@rocky95-mysql01:3306
var v_cluster= dba.getCluster('INNODB_CLS')
v_cluster.status()

Check Cluster Status with Extended Info

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password
cluster.status({'extended':0})
cluster.status({'extended':1})
cluster.status({'extended':2})
cluster.status({'extended':3})

Check Cluster Status Directly

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 -- cluster status
InnoDBcluster_2025! --> Password

Check Cluster Topology

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password
cluster.describe()

Rescan Cluster

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password
cluster.rescan()

Check Specific Object Value

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
cluster.status().defaultReplicaSet.topology["rocky95-mysql01:3306"].role
cluster.status().defaultReplicaSet.topology["rocky95-mysql01:3306"].status
cluster.status().defaultReplicaSet.topologyMode

Connect Directly to Primary or Secondary

mysqlsh --uri innodbcluster@rocky95-mysql02:3306 --cluster --redirect-primary
InnoDBcluster_2025! --> Password
mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster --redirect-secondary
InnoDBcluster_2025! --> Password

Switch to Multi-Primary Mode

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password
cluster.switchToMultiPrimaryMode()
cluster.status()

Switch to Single-Primary Mode

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password
cluster.switchToSinglePrimaryMode()
cluster.status()

Change Primary Node

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password
cluster.setPrimaryInstance('innodbcluster@rocky95-mysql01')
cluster.status()

Check Cluster Options

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password
cluster.options()

Remove/Add Instance from Routing

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password

-- Remove Instance from Routing
cluster.setInstanceOption("innodbcluster@rocky95-mysql02:3306", "tag:_hidden", true);

-- Add Instance to Routing
cluster.setInstanceOption("innodbcluster@rocky95-mysql02:3306", "tag:_hidden", false);

Remove/Add Instance from Cluster

mysqlsh --uri innodbcluster@rocky95-mysql01:3306 --cluster
InnoDBcluster_2025! --> Password

-- Remove Instance from Cluster
cluster.removeInstance("innodbcluster@rocky95-mysql02:3306", {force: true})

-- Add Instance to Cluster
cluster.addInstance("innodbcluster@rocky95-mysql02:3306")

-- Check Cluster Status
cluster.status()

How to Install Percona Server for MySQL on Redhat Based Linux?

I have tried to document step by step how to install Percona Server for MySQL on Redhat Based Linux. I hope it will be useful for those who need it!

Check ID & Hostname & OS Release

id
hostname
more /etc/os-release

Update Linux Packages

dnf -y update

Add Percona Repository

dnf install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm

Enable Percona Release

sudo percona-release enable ps80 release

sudo percona-release setup -y ps80

Install Percona Server Package

sudo dnf install percona-server-server -y

If you encounter "Error: Unable to find a match: percona-server-server", run the command below:
dnf module disable mysql -y
Reason: Rocky Linux 8 uses module streams, and the default MySQL module may conflict with Percona. Disable the default MySQL module before installing Percona.

Then retry the install:
sudo dnf install percona-server-server -y

Start & Enable & Check MySQL Service

sudo systemctl start mysqld.service

sudo systemctl enable mysqld.service

sudo systemctl status mysqld.service

Find & Note Temp Password

sudo grep 'temporary password' /var/log/mysqld.log

# Sample Output
2025-03-25T21:36:35.308940Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: 7GZ,nSSy*vf!

Perform Secure Installation

sudo mysql_secure_installation

Securing the MySQL server deployment.

Enter password for user root: 

The existing password for the user account root has expired. Please set a new password.

New password: StrongPassword2025!

Re-enter new password: StrongPassword2025!
The 'validate_password' component is installed on the server.
The subsequent steps will run with the existing configuration
of the component.
Using existing password for root.

Estimated strength of the password: 100 
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n

 ... skipping.
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.

Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.

By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
 - Dropping test database...
Success.

 - Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.

All done!

Verify Installation

mysql -u root -pStrongPassword2025!
select @@version;
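
To confirm that the running server is the Percona build rather than stock MySQL, the version comment can also be checked (the exact wording varies by release, but it should mention Percona Server):

mysql -u root -pStrongPassword2025! -e "SELECT @@version, @@version_comment;"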

How to Implement Oracle Materialized View Fast Refresh Method using ROWID?

I have tried to document step by step how to use Oracle materialized view fast refresh method with ROWID. I hope it will be useful for those who need it!

High Level Steps

1-Prepare the SQL Statement to be Used for the Materialized View
2-Create Materialized View Logs for the Tables in the Query
3-Create Materialized View
4-Insert Some Test Data
5-Check the Diff Records Waiting in the MLOG$ Table
6-Refresh Materialized View
7-Check Newly Created Records on Materialized View
8-Check Materialized View Refresh Completion Details
9-Check Limitations for Fast Refresh

1-Prepare the SQL Statement to be Used for the Materialized View

-- ROWIDs must be present in the SQL statement
SELECT E.ROWID EMPLOYEES_ROWID,
       J.ROWID JOB_HISTORY_ROWID,
       E.FIRST_NAME,
       E.LAST_NAME,
       E.EMAIL,
       J.START_DATE,
       J.END_DATE,
       J.JOB_ID
  FROM HR.EMPLOYEES E, HR.JOB_HISTORY J
 WHERE E.EMPLOYEE_ID = J.EMPLOYEE_ID;

2-Create Materialized View Logs for the Tables in the Query

/* USING ROWID */
CREATE MATERIALIZED VIEW LOG ON HR.EMPLOYEES
    WITH ROWID
    INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON HR.JOB_HISTORY
    WITH ROWID
    INCLUDING NEW VALUES;

3-Create Materialized View

CREATE MATERIALIZED VIEW HR.TEST_MV_USING_ROWID
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
AS
    SELECT E.ROWID     EMPLOYEES_ROWID,
           J.ROWID     JOB_HISTORY_ROWID,
           E.FIRST_NAME,
           E.LAST_NAME,
           E.EMAIL,
           J.START_DATE,
           J.END_DATE,
           J.JOB_ID
      FROM HR.EMPLOYEES E, HR.JOB_HISTORY J
     WHERE E.EMPLOYEE_ID = J.EMPLOYEE_ID;     
     
/* If you get below error */     
ORA-01031: insufficient privileges
Help: https://docs.oracle.com/error-help/db/ora-01031/

/* SOLUTION */ 
GRANT CREATE TABLE TO HR;

4-Insert Some Test Data

INSERT INTO HR.JOB_HISTORY (EMPLOYEE_ID,START_DATE,END_DATE,JOB_ID,DEPARTMENT_ID) VALUES (201,TO_DATE ('1/1/2005','MM/DD/YYYY'),TO_DATE ('12/30/2005','MM/DD/YYYY'),'MK_REP',20);
INSERT INTO HR.JOB_HISTORY (EMPLOYEE_ID,START_DATE,END_DATE,JOB_ID,DEPARTMENT_ID) VALUES (201,TO_DATE ('1/1/2006','MM/DD/YYYY'),TO_DATE ('12/30/2006','MM/DD/YYYY'),'MK_REP',20);
INSERT INTO HR.JOB_HISTORY (EMPLOYEE_ID,START_DATE,END_DATE,JOB_ID,DEPARTMENT_ID) VALUES (201,TO_DATE ('1/1/2007','MM/DD/YYYY'),TO_DATE ('12/30/2007','MM/DD/YYYY'),'MK_REP',20);
INSERT INTO HR.JOB_HISTORY (EMPLOYEE_ID,START_DATE,END_DATE,JOB_ID,DEPARTMENT_ID) VALUES (201,TO_DATE ('1/1/2008','MM/DD/YYYY'),TO_DATE ('12/30/2008','MM/DD/YYYY'),'MK_REP',20);
INSERT INTO HR.JOB_HISTORY (EMPLOYEE_ID,START_DATE,END_DATE,JOB_ID,DEPARTMENT_ID) VALUES (201,TO_DATE ('1/1/2009','MM/DD/YYYY'),TO_DATE ('12/30/2009','MM/DD/YYYY'),'MK_REP',20);
COMMIT;

5-Check the Diff Records Waiting in the MLOG$ Table

SELECT * FROM HR.MLOG$_JOB_HISTORY;

6-Refresh Materialized View

BEGIN
    DBMS_MVIEW.REFRESH ('TEST_MV_USING_ROWID', 'F');
END;
/

7-Check Newly Created Records on Materialized View

SELECT *
    FROM HR.TEST_MV_USING_ROWID MV
   WHERE     MV.JOB_ID = 'MK_REP'
         AND MV.START_DATE >= TO_DATE ('1/1/2005', 'MM/DD/YYYY')
ORDER BY MV.START_DATE;

8-Check Materialized View Refresh Completion Details

SELECT *
    FROM (SELECT OWNER,
                 MVIEW_NAME,
                 CONTAINER_NAME,
                 REFRESH_MODE,
                 REFRESH_METHOD,
                 LAST_REFRESH_TYPE,
                 STALENESS,
                 ROUND ((LAST_REFRESH_END_TIME - LAST_REFRESH_DATE) * 24 * 60,
                        2)
                     AS REFRESH_TIME_MINS
            FROM DBA_MVIEWS
           WHERE LAST_REFRESH_TYPE IN ('FAST', 'COMPLETE'))
ORDER BY REFRESH_TIME_MINS DESC;

9-Check Limitations for Fast Refresh

1. General Limitations

  • Primary Key or ROWID Required: The base table must have a primary key or ROWID for fast refresh.
  • Materialized View Log Required: A materialized view log must be created on the base table with all required columns.
  • Only Certain DML Supported: FAST REFRESH works efficiently with INSERT, UPDATE, and DELETE, but certain complex operations may require a complete refresh.
  • No Direct DML on MV: DML operations are not allowed on the materialized view itself.

2. SQL Query Restrictions

  • Joins & Aggregations:
    • Joins are allowed but must meet specific join conditions (e.g., primary key constraints).
    • Aggregations (e.g., SUM, AVG) require additional logs (INCLUDING NEW VALUES in MV log).
  • Subqueries & UNION:
    • Subqueries in the SELECT clause are not allowed.
    • UNION, MINUS, and INTERSECT are not supported.
  • Functions & Expressions:
    • Non-repeatable functions and expressions (e.g., SYSDATE, USER) are not supported.
    • Complex expressions in the SELECT list may prevent fast refresh.
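
When a fast refresh is rejected and the reason is not obvious from the list above, Oracle can report exactly which capability is blocked. A short sketch using DBMS_MVIEW.EXPLAIN_MVIEW (it writes into MV_CAPABILITIES_TABLE, created by the utlxmv.sql script shipped under $ORACLE_HOME/rdbms/admin):

sqlplus / as sysdba <<'EOF'
-- Create MV_CAPABILITIES_TABLE once (script ships with the database)
@?/rdbms/admin/utlxmv.sql

-- Analyze the materialized view created in step 3
EXEC DBMS_MVIEW.EXPLAIN_MVIEW('HR.TEST_MV_USING_ROWID');

-- List the fast refresh capabilities and the reason when one is not possible
SELECT capability_name, possible, msgtxt
  FROM mv_capabilities_table
 WHERE capability_name LIKE 'REFRESH_FAST%';
EOF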

How to Restore Oracle Database to Another Server using RMAN?

I have tried to document step by step how to restore Oracle Database to another server using RMAN. I hope it will be useful for those who need it!

High Level Steps

1-Check Environment
2-Create Table and Insert Test Data on Source Database
3-Check Current Backups on Source Database
4-Check and Set Backup Location on Source Database
5-Enable Archive Log Mode on Source Database
6-Create Backup on Source Database
7-Create PFILE from SPFILE on Source Database
8-Transfer PFILE & Password Files from Source to Target Server
9-Check PFILE & Password Files on Target Database
10-Transfer Backup Files from Source to Target Server
11-Edit Oratab on Target Server
12-Create Necessary Directories on Target Server
13-Startup Target Database in NOMOUNT Mode
14-Restore Control File on Target Database
15-Take the Target Database to MOUNT Mode
16-Restore Target Database
17-Recover Target Database
18-Open Target Database & Check Test Data
19-Change Target Database Name (If Needed)
20-Edit PFILE on Target Database
21-Rename Password File on Target Database
22-Open Database & Check New Name & Test Data

Low Level Steps

1-Check Environment

ROLE                     HOSTNAME     IP
Oracle Source Database   19c_source   192.168.1.111
Oracle Target Database   19c_target   192.168.1.112

2-Create Table and Insert Test Data on Source Database

su - oracle
sqlplus / as sysdba
create table rman_backup_restore_test (id int, name varchar2(10));
insert into rman_backup_restore_test values (1, 'test');
commit;
select * from rman_backup_restore_test;

3-Check Current Backups on Source Database

. oraenv
rman target /
list backup;

4-Check and Set Backup Location on Source Database

rman target /
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u01/backup/ORCL_%U';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/backup/ORCL_%F';
show all;

5-Enable Archive Log Mode on Source Database

select name,log_mode from v$database;

archive log list;

shutdown immediate;

startup mount

alter database archivelog;

alter database open;

select name,log_mode from v$database;

archive log list;

6-Create Backup on Source Database

rman target /

BACKUP DATABASE PLUS ARCHIVELOG;
OR
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;

list backup;

cd /u01/backup/
ls -lrt

7-Create PFILE from SPFILE on Source Database

. oraenv
cd $ORACLE_HOME/dbs
sqlplus / as sysdba
create pfile from spfile;
!ls -lrt

8-Transfer PFILE & Password Files from Source to Target Server

cd $ORACLE_HOME/dbs
ls -lrt initorcl*
ls -lrt orapw*
scp initorcl.ora orapworcl 192.168.1.112:/u01/app/oracle/product/19.3/db_home/dbs

9-Check PFILE & Password Files on Target Database

cd $ORACLE_HOME/dbs
ls -lrt initorcl*
ls -lrt orapw*

10-Transfer Backup Files from Source to Target Server

cd /u01/backup/
ls -lrt
scp * 192.168.1.112:/u01/backup/

11-Edit Oratab on Target Server

vi /etc/oratab

# Add Below Entry to Oratab
orcl:/u01/app/oracle/product/19.3/db_home:N

12-Create Necessary Directories on Target Server

mkdir -p /u01/app/oracle/oradata/ORCL
mkdir -p /u01/app/oracle/admin/orcl/adump

13-Startup Target Database in NOMOUNT Mode

cd $ORACLE_HOME/dbs
pwd
ls -lrt
sqlplus / as sysdba
startup nomount pfile=/u01/app/oracle/product/19.3/db_home/dbs/initorcl.ora;

14-Restore Control File on Target Database

more initorcl.ora | grep -i contr
cd /u01/app/oracle/oradata/ORCL/
ls -lrt
cd /u01/backup/
ls -lrt
rman target /
restore controlfile from '/u01/backup/ORCL_c-1715685767-20250125-01';
more initorcl.ora | grep -i contr
cd /u01/app/oracle/oradata/ORCL/
ls -lrt

15-Take the Target Database to MOUNT Mode

rman target /
alter database mount;
crosscheck backup;
delete noprompt expired backup;
catalog start with '/u01/backup/' noprompt;

16-Restore Target Database

rman target /
restore database;

Note: To Restore Database to Different Location

run {
SET NEWNAME FOR DATABASE TO '/newlocation/%b';
SET NEWNAME FOR TEMPFILE TO '/newlocation/%b';
RESTORE DATABASE;
SWITCH DATAFILE ALL;
SWITCH TEMPFILE ALL;
}

17-Recover Target Database

recover database;
-- Check Existing Archivelogs
column name format a90
SELECT thread#, sequence#, name FROM v$archived_log;

If the recover command fails at the end complaining about an archived log sequence that cannot be found, this is normal when performing an incomplete recovery without specifying where recovery should stop. If sequence 14 is the last log sequence that was backed up, you can avoid the error by using the command below.

recover database until sequence 14;
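
The last sequence actually contained in the restored backups can also be confirmed from RMAN itself, which is often easier than reading v$archived_log (pick the highest sequence shown):

rman target /
list backup of archivelog all;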

18-Open Target Database & Check Test Data

sqlplus / as sysdba
select instance_name ||'  '||version||'  '||startup_time||'  '||database_status from gv$instance;
select name, open_mode, database_role, log_mode from gv$database;
alter database open resetlogs;
select name, open_mode, database_role, log_mode from gv$database;
select * from rman_backup_restore_test;

19-Change Target Database Name (If Needed)

shut immediate;
startup mount;
exit
. oraenv
nid target=/ DBNAME=ORCL_NEW SETNAME=Y

20-Edit PFILE on Target Database

cd $ORACLE_HOME/dbs
ls -lrt
cp initorcl.ora initorcl_new.ora
# Change db_name from orcl to orcl_new
vi initorcl_new.ora

21-Rename Password File on Target Database

cd $ORACLE_HOME/dbs
ls -lrt
cp orapworcl orapworcl_new
rm orapworcl
ls -lrt

22-Open Database & Check New Name & Test Data

export ORACLE_SID=orcl_new
sqlplus / as sysdba
startup;
select name, open_mode, database_role, log_mode from gv$database;
select * from rman_backup_restore_test;