How to Find MySQL Configuration File Path on Windows?

First Option: Run a Command to Locate the Configuration File

Open a Command Prompt and run the following command; it prints the paths where MySQL looks for its configuration file, in the order they are read.

mysql --help | findstr "my.ini"

Second Option: Use MySQL Workbench

  1. Open Workbench and connect to your server.
  2. Go to Server > Options File to view or edit configuration settings.

How to Set Up Percona Monitoring and Management for MySQL on Rocky Linux?

I have tried to document step by step how to set up Percona Monitoring and Management for MySQL on Rocky Linux. I hope it will be useful for those who need it!

High Level Steps

1-Check Environment
2-Install PMM Server
2.2-Disable Firewall & SELinux
2.2-Pull Latest PMM Docker Image
2.3-Create Docker Container to Keep Data
2.4-Check PMM Server IP
2.5-Go to PMM Login Page and Change Default Password
3-Install PMM Client on Target DB Server
3.1-Disable Firewall & SELinux
3.2-Download and Install Percona Repo Package
3.3-Install Percona Monitoring and Management Client
3.4-Verify the Installation
4-Register PMM Client with PMM Server
5-Create PMM User for Monitoring on Target Database
6-Register the Database Service on the Target Server to Enable Monitoring
6.1-Check Available Options
6.2-Add MySQL to Monitoring
6.3-Check pmm-admin list
7-Check PMM Portal for Newly Added Database
8-PMM Server Shutdown/Startup Commands

1-Check Environment

ROLE           HOSTNAME         IP              OS
PMM Server     rocky95-pmm      192.168.1.160   Rocky Linux 9.5
MySQL Server   rocky95-mysql8   192.168.1.161   Rocky Linux 9.5
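If DNS is not available in the lab, an /etc/hosts entry on each machine keeps the hostnames resolvable (a fragment; adjust the addresses to your own environment):

```text
192.168.1.160   rocky95-pmm
192.168.1.161   rocky95-mysql8
```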

2-Install PMM Server

2.1-Disable Firewall & SELinux

# Disable & Check Firewall
sudo systemctl status firewalld
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl status firewalld

# Disable & Check SELinux
vi /etc/selinux/config
SELINUX=disabled
reboot
sestatus
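As an alternative to editing the file in vi, the SELINUX line can be switched with sed. The sketch below runs against a scratch copy so the result can be verified first; point it at /etc/selinux/config (as root) when ready. For an immediate, until-reboot effect you can also run `sudo setenforce 0` (permissive mode).

```shell
# Demo on a scratch copy; the real file is /etc/selinux/config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux-config
grep '^SELINUX=' /tmp/selinux-config   # -> SELINUX=disabled
```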

2.2-Pull Latest PMM Docker Image

yum install podman-docker
docker pull percona/pmm-server:2
docker images

2.3-Create Docker Container to Keep Data

I will keep the PMM data in the /data/pmm directory.

mkdir -p /data/pmm

docker create --volume /data/pmm --name pmm-data percona/pmm-server:2 /bin/true

docker volume ls

docker run --detach --restart always --publish 443:443 --volumes-from pmm-data --name pmm-server percona/pmm-server:2

docker ps

docker inspect pmm-data | egrep "Source|Destination"

2.4-Check PMM Server IP

ip addr

2.5-Go to PMM Login Page and Change Default Password

https://192.168.1.160/graph/login

3-Install PMM Client on Target DB Server

3.1-Disable Firewall & SELinux

# Disable & Check Firewall
sudo systemctl status firewalld
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl status firewalld

# Disable & Check SELinux
vi /etc/selinux/config
SELINUX=disabled
reboot
sestatus

3.2-Download and Install Percona Repo Package

sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

3.3-Install Percona Monitoring and Management Client

sudo yum install pmm2-client

3.4-Verify the Installation

sudo pmm-admin -v

4-Register PMM Client with PMM Server

## Syntax
sudo pmm-admin config --server-insecure-tls --server-url=https://admin:<password>@pmm.example.com

## Run the command below
## My PMM Server IP is 192.168.1.160
## Use your own admin password
sudo pmm-admin config --server-insecure-tls --server-url=https://admin:admin2025@192.168.1.160:443
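If the admin password contains characters that are special inside URLs (@, :, /, and so on), it must be percent-encoded before being placed in --server-url. One way to encode it, assuming python3 is available on the client host (the sample password is hypothetical):

```shell
# Percent-encode a password for use inside --server-url
python3 -c 'import urllib.parse; print(urllib.parse.quote("p@ss:word", safe=""))'
# -> p%40ss%3Aword
```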

5-Create PMM User for Monitoring on Target Database

##### For MySQL 8.0 #####
# 192.168.1.161 is my target MySQL database to be monitored
mysql -u root -p
CREATE USER 'pmm'@'192.168.1.161' IDENTIFIED BY 'Welc0me1!' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO 'pmm'@'192.168.1.161';
FLUSH PRIVILEGES;

6-Register the Database Service on the Target Server to Enable Monitoring

6.1-Check Available Options

pmm-admin add --help

6.2-Add MySQL to Monitoring

# Use Your Own Host IP Address
sudo pmm-admin add mysql --username=pmm --password=Welc0me1! --host=192.168.1.161 --port=3306 --query-source=perfschema

6.3-Check pmm-admin list

pmm-admin list

7-Check PMM Portal for Newly Added Database

https://192.168.1.160/graph/login

8-PMM Server Shutdown/Startup Commands

# Shutdown PMM Server
docker ps -a
docker stop pmm-server
docker ps -a

# Startup PMM Server
docker ps -a
docker start pmm-server
docker ps -a

# Check PMM Home Page
https://192.168.1.160/graph/login

How to Install MySQL on Red Hat-Based Linux (Rocky Linux)?

I have tried to document step by step how to install MySQL on Red Hat-based Linux (Rocky Linux). I hope it will be useful for those who need it!

Update OS Packages

sudo yum update -y

Install MySQL Server

sudo yum install mysql-server

Check Installed MySQL Packages

rpm -qa | grep mysql

Check MySQL Service

sudo systemctl status mysqld.service
sudo systemctl enable mysqld.service
sudo systemctl start mysqld.service
sudo systemctl status mysqld.service

Check MySQL Connection

sudo mysql

Run & Configure Secure Installation

sudo mysql_secure_installation

Check Database Connection via root Password

sudo mysql -u root -p

How to Promote a Slave Server to Master for Test Purposes in MySQL?

In order to perform some tests in your MySQL cluster environment, you can create a clone by shutting down the services on the slave database, promote the clone to the master role, and then run your tests by following the steps below.

High Level Steps

1. Stop Replica & MySQL Service on Slave Server
2. Shutdown VM (Optional) & Clone Slave Server
3. Start Slave Server & Check the Replica Status
4. Start Newly Cloned Slave Server & Check the Replica Status
5. Stop Replication on the Newly Cloned Slave Server
6. Reset the Replica Configuration on Newly Cloned Slave Server
7. Check Write Operations on Newly Cloned Slave Server
8. Configure IP Address on Newly Cloned Slave Server
9. Check IP Address and Port on Newly Cloned Slave Server
10. Update Application Connection Strings & Test It

Low Level Steps

1. Stop Replica & MySQL Service on Slave Server

# sudo mysql -u root -p

mysql> STOP REPLICA;

mysql> exit

# systemctl stop mysql.service

# systemctl status mysql.service

2. Shutdown VM (Optional) & Clone Slave Server

Depending on the virtualization technology, the MySQL slave server VM can be cloned either while it is running or after it has been shut down.

3. Start Slave Server & Check the Replica Status

Run the following command on the replica to ensure it is up and running.

# sudo mysql -u root -p

mysql> SHOW REPLICA STATUS\G

4. Start Newly Cloned Slave Server & Check the Replica Status

Run the following command on the replica to ensure it is up and running.

# sudo mysql -u root -p

mysql> SHOW REPLICA STATUS\G

5. Stop Replication on the Newly Cloned Slave Server

Stop the replica threads to ensure no further replication happens.

mysql> STOP REPLICA;

6. Reset the Replica Configuration on Newly Cloned Slave Server

Remove the replication configuration from the replica. This clears the replication settings, ensuring the replica no longer attempts to connect to the old master. On MySQL 8.0.22 and later the replica-terminology form is preferred; RESET SLAVE ALL is the deprecated equivalent on older versions.

mysql> RESET REPLICA ALL;

7. Check Write Operations on Newly Cloned Slave Server

mysql> show databases;

mysql> create database testdb;

mysql> show databases;

mysql> use testdb;

mysql> CREATE TABLE test_table (test varchar(50));

mysql> show tables;

mysql> INSERT INTO test_table VALUES
('Test01 was inserted by Master-192.168.1.162'),('Test02 was inserted by Master-192.168.1.162'),
('Test03 was inserted by Master-192.168.1.162'),('Test04 was inserted by Master-192.168.1.162'),
('Test05 was inserted by Master-192.168.1.162'),('Test06 was inserted by Master-192.168.1.162');

mysql> SELECT * FROM test_table;

8. Configure IP Address on Newly Cloned Slave Server

# Backup current configuration file
cp /etc/mysql/mysql.conf.d/mysqld.cnf /etc/mysql/mysql.conf.d/mysqld.cnf.backup

# Edit configuration file
vi /etc/mysql/mysql.conf.d/mysqld.cnf

# Set the IP address this server (the new master) will listen on
bind-address            = 192.168.1.162

# Save and Exit

# Restart the MySQL server service to apply the changes
sudo systemctl restart mysql

9. Check IP Address and Port on Newly Cloned Slave Server

# mysql -u root -p

mysql> SHOW VARIABLES LIKE 'bind_address';

mysql> SHOW VARIABLES LIKE 'port';

10. Update Application Connection Strings & Test It

Redirect your applications to the new master.

How to Install MySQL on Debian-Based Linux (Ubuntu)?

I have tried to document step by step how to install MySQL on Debian-based Linux (Ubuntu). I hope it will be useful for those who need it!

Update/Upgrade OS Packages

sudo apt update
sudo apt upgrade

Install MySQL Server

sudo apt install mysql-server

Check Installed MySQL Packages

# dpkg --list | grep mysql

# apt list --installed | grep mysql

Check MySQL Service

sudo systemctl status mysql.service

Check MySQL Connection

sudo mysql

Run & Configure Secure Installation

sudo mysql_secure_installation

Change root Password

sudo mysql
alter user 'root'@'localhost' identified with mysql_native_password by 'Deuce4040';
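If a throwaway strong password is needed for this step, one can be generated on the shell first (openssl is assumed to be installed; 12 random bytes encode to a 16-character string):

```shell
# Generate a random 16-character base64 password
openssl rand -base64 12
```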

Check Database Connection via root Password

sudo mysql -u root -p

How to Delete Oracle Log Files Automatically with ADRCI?

I have tried to document step by step how to delete Oracle log files automatically with ADRCI. I hope it will be useful for those who need it!

High Level Steps

1-Run ADRCI
2-Check All ADR Homes
3-Set ADR Home
4-Check the Oldest Files for the Directory
5-Change Purging Policy as Needed
6-Check LAST_AUTOPRG_TIME to Ensure Automatic Deletion Has Run
7-Check the Directory to Ensure Log Files are Deleted as Requested
8-Is it Possible to Change ADR Automatic Purge Scheduled Time?

Low Level Steps

1-Run ADRCI

[oracle@racprimary01 ~]$ adrci
ADRCI: Release 19.0.0.0.0 - Production on Tue Dec 31 18:18:10 2024
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
ADR base = "/u01/app/oracle"
adrci>

2-Check All ADR Homes

adrci> show homes
ADR Homes: 
diag/rdbms/test/TEST1
diag/asm/+asm/+ASM1
diag/asm/user_oracle/host_1107208194_110
diag/crs/racprimary01/crs
diag/clients/user_oracle/host_1107208194_110
diag/clients/user_root/host_1107208194_110
diag/tnslsnr/racprimary01/asmnet1lsnr_asm
diag/tnslsnr/racprimary01/listener_scan1
diag/tnslsnr/racprimary01/listener_scan2
diag/tnslsnr/racprimary01/listener_scan3
diag/tnslsnr/racprimary01/listener
diag/asmcmd/user_oracle/racprimary01.localdomain
diag/kfod/racprimary01/kfod
adrci>

3-Set ADR Home

I will run my tests on diag/crs/racprimary01/crs home directory.

adrci> set home diag/crs/racprimary01/crs
adrci> 
adrci> show home
ADR Homes: 
diag/crs/racprimary01/crs
adrci>

4-Check the Oldest Files for the Directory

[oracle@racprimary01 trace]$ cd /u01/app/oracle/diag/crs/racprimary01/crs/trace
[oracle@racprimary01 trace]$
[oracle@racprimary01 trace]$ pwd
/u01/app/oracle/diag/crs/racprimary01/crs/trace
[oracle@racprimary01 trace]$
[oracle@racprimary01 trace]$ ls -lrt | head -10

5-Change Purging Policy as Needed

For testing purposes, I will configure it to delete files older than 8 days (192 hours). My test date is January 1st 🙂

adrci> set control (SHORTP_POLICY=192)        -- Default is 720 hours (30 days)

adrci> set control (LONGP_POLICY=192)         -- Default is 8760 hours (365 days)

adrci> select SHORTP_POLICY,LONGP_POLICY from ADR_CONTROL;

6-Check LAST_AUTOPRG_TIME to Ensure Automatic Deletion Has Run

An empty LAST_AUTOPRG_TIME field in the output indicates that no automatic deletion has run yet.

adrci> select SHORTP_POLICY,LONGP_POLICY,LAST_AUTOPRG_TIME from ADR_CONTROL;

ADR Home = /u01/app/oracle/diag/crs/racprimary01/crs:
*************************************************************************
SHORTP_POLICY        LONGP_POLICY         LAST_AUTOPRG_TIME                        
-------------------- -------------------- ---------------------------------------- 
192                  192                  2025-01-01 00:56:14.124793 +03:00 

7-Check the Directory to Ensure Log Files are Deleted as Requested

[oracle@racprimary01 trace]$ cd /u01/app/oracle/diag/crs/racprimary01/crs/trace
[oracle@racprimary01 trace]$
[oracle@racprimary01 trace]$ pwd
/u01/app/oracle/diag/crs/racprimary01/crs/trace
[oracle@racprimary01 trace]$
[oracle@racprimary01 trace]$ ls -lrt | head -10

8-Is it Possible to Change ADR Automatic Purge Scheduled Time?

There is no control over the purge frequency; it is fixed at 48 hours after startup and then every 7 days. A manual purge is the only way to control purge actions in between.

Reference: Is there a way to control ADRCI Auto_Purge Frequency done by the MMON ? (Doc ID 1446242.1)

Other Notes

Traditional Way to Delete Log Files via Crontab

# Create & Go to the Script Directory
cd /home/oracle/scripts

# Create SH File & Add Files to be Deleted
vi remove_old_logs.sh

find /u01/app/oracle/diag/rdbms/test/test/trace -type f -name "*.trc" -mtime +7 -exec rm {} \;
find /u01/app/oracle/diag/rdbms/test/test/trace -type f -name "*.trm" -mtime +7 -exec rm {} \;
find /u01/app/oracle/diag/rdbms/test/test/alert -type f -name "*.xml" -mtime +7 -exec rm {} \;
find /u01/app/oracle/diag/tnslsnr/racprimary01/listener/alert -type f -name "*.xml" -mtime +7 -exec rm {} \;
find /u01/app/oracle/diag/tnslsnr/racprimary01/listener1/alert -type f -name "*.xml" -mtime +7 -exec rm {} \;
find /u01/app/oracle/diag/tnslsnr/racprimary01/listener2/alert -type f -name "*.xml" -mtime +7 -exec rm {} \;
find /u01/app/oracle/diag/tnslsnr/racprimary01/listener3/alert -type f -name "*.xml" -mtime +7 -exec rm {} \;
find /u01/app/oracle/admin/test/adump -type f -name "*.aud" -mtime +7 -exec rm {} \;

# Make it Executable & Try it Manually Before Schedule
chmod +x remove_old_logs.sh
./remove_old_logs.sh

# Add to Crontab
# Delete Old Log Files
00 7 * * * /home/oracle/scripts/remove_old_logs.sh
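Before wiring the script into cron, it helps to preview what the find expressions will match: -mtime +7 selects files whose modification time is more than 7 days in the past, and dropping the -exec action turns any of the lines above into a dry run. A self-contained demo in a scratch directory (the paths are illustrative, not the real diag directories; -delete is a GNU find alternative to -exec rm):

```shell
# Demonstrate the -mtime +7 test used in remove_old_logs.sh
mkdir -p /tmp/adr-demo && cd /tmp/adr-demo
touch -d "10 days ago" old.trc   # simulated old trace file
touch new.trc                    # recent file, must survive
find . -type f -name "*.trc" -mtime +7           # dry run: lists only ./old.trc
find . -type f -name "*.trc" -mtime +7 -delete   # same match, now deleted
ls                               # -> new.trc
```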

Useful ADRCI Commands

Run ADRCI

[oracle@racprimary01 ~]$ adrci
ADRCI: Release 19.0.0.0.0 - Production on Tue Dec 31 18:18:10 2024
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
ADR base = "/u01/app/oracle"
adrci>

Check Base Location

adrci> show base
ADR base is "/u01/app/oracle"
adrci>

Check ADR Homes

adrci> show homes
ADR Homes: 
diag/rdbms/test/TEST1
diag/asm/+asm/+ASM1
diag/asm/user_oracle/host_1107208194_110
diag/crs/racprimary01/crs
diag/clients/user_oracle/host_1107208194_110
diag/clients/user_root/host_1107208194_110
diag/tnslsnr/racprimary01/asmnet1lsnr_asm
diag/tnslsnr/racprimary01/listener_scan1
diag/tnslsnr/racprimary01/listener_scan2
diag/tnslsnr/racprimary01/listener_scan3
diag/tnslsnr/racprimary01/listener
diag/asmcmd/user_oracle/racprimary01.localdomain
diag/kfod/racprimary01/kfod
adrci>

Set ADR Home

adrci> set home diag/rdbms/test/TEST1
adrci> 
adrci> show home
ADR Homes: 
diag/rdbms/test/TEST1
adrci>

Check AlertLog

adrci> show alert

Check Errors

adrci> show problem
adrci> show incident

Check Trace Files

adrci> show tracefile
adrci> show tracefile -rt

Check/Change Purging Policy

# CHECK COMMAND
adrci> show control

# CHANGE COMMAND
adrci> set control (SHORTP_POLICY=120)
adrci> set control (LONGP_POLICY=720)

SHORTP_POLICY

  • Number of hours after which to purge ADR contents that have a short life. Default is 720 (30 days).
  • A setting of 0 (zero) means that all contents that have a short life can be purged. The maximum setting is 35791394. If a value greater than 35791394 is specified, then this attribute is set to 0 (zero).
  • The ADR contents that have a short life include the following:
    • Trace files, including those files stored in the cdmp_timestamp subdirectories
    • Core dump files
    • Packaging information

LONGP_POLICY

  • Number of hours after which to purge ADR contents that have a long life. Default is 8760 (365 days).
  • A setting of 0 (zero) means that all contents that have a long life can be purged. The maximum setting is 35791394. If a value greater than 35791394 is specified, then this attribute is set to 0 (zero).
  • The ADR contents that have a long life include the following:
    • Incident information
    • Incident dumps
    • Alert logs

ADRCI Set Policy Script for All ADR Homes

#!/bin/sh
for ADRHOME in `adrci exec="show home"`
do
    if [ $ADRHOME = "ADR" -o $ADRHOME = "Homes:" ]
    then
        continue;
    fi
    echo $ADRHOME
    adrci <<EOF
         set home $ADRHOME
         set control (SHORTP_POLICY = 192)
         set control (LONGP_POLICY = 192)
         purge
    exit
EOF
done

Manual Purge Commands

# Purge Specific File Types
adrci> purge -age 2880 -type trace   (deletes trace files older than 2 days; 2880 minutes = 2 days)
adrci> purge -age 2880 -type alert   (deletes alert files older than 2 days)

# The following content types can be targeted with the same command:
ALERT    - files in the ./alert directory
INCIDENT - files in the ./incident/incdir_<incid> directories
TRACE    - files in the ./trace directory
CDUMP    - files in the ./cdump directory
UTSCDMP  - the ./trace/cdmp_<timestamp> directories
STAGE    - files in the ./stage directory
SWEEP    - files in the ./sweep directory
HM       - files in the ./hm directory and metadata information in the HM schema
IPS      - files in the ./incpkg directory and metadata information in the IPS schema

# Purge All Files
adrci> purge -age 2880   (deletes all purgeable files older than 2 days; 2880 minutes = 2 days)
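One easy-to-miss detail: purge -age takes minutes, while the SHORTP_POLICY and LONGP_POLICY attributes are set in hours. The arithmetic behind the numbers used in this post:

```shell
# purge -age is in MINUTES; the control policies are in HOURS
echo $((2 * 24 * 60))   # 2 days for purge -age      -> 2880
echo $((8 * 24))        # 8 days for SHORTP_POLICY   -> 192
```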

Check Help

adrci> help
 HELP [topic]
   Available Topics:
        CREATE REPORT
        ECHO
        ESTIMATE
        EXIT
        HELP
        HOST
        IPS
        PURGE
        RUN
        SELECT
        SET BASE
        SET BROWSER
        SET CONTROL
        SET ECHO
        SET EDITOR
        SET HOMES | HOME | HOMEPATH
        SET TERMOUT
        SHOW ALERT
        SHOW BASE
        SHOW CONTROL
        SHOW HM_RUN
        SHOW HOMES | HOME | HOMEPATH
        SHOW INCDIR
        SHOW INCIDENT
        SHOW LOG
        SHOW PROBLEM
        SHOW REPORT
        SHOW TRACEFILE
        SPOOL
 There are other commands intended to be used directly by Oracle, type
 "HELP EXTENDED" to see the list
adrci>