MariaDB
Open source relational database forked from MySQL. Maintains MySQL compatibility while adding unique features. Provides enterprise-level features for free.
Database Server
MariaDB
Overview
MariaDB is an open-source relational database management system created in 2009 by Michael "Monty" Widenius, the original author of MySQL, as a community-driven fork of MySQL. While maintaining a high degree of MySQL compatibility, it has earned the trust of enterprises worldwide by delivering enterprise-grade features, strong security, and performance optimization. It ships as the default MySQL-compatible database in major Linux distributions and is widely available on cloud platforms and in container environments, which makes migration from existing MySQL applications straightforward. The current stable 11.x series (11.6 as of late 2024) provides the functionality expected of modern data-driven applications: additional storage engines, richer JSON support, improved replication, and preview work on vector search for AI workloads.
Details
MariaDB 11.6 (GA in November 2024) is a comprehensive database platform that continues to evolve beyond its MySQL roots. Recent 11.x releases change the default character set from latin1 to utf8mb4, extend the TIMESTAMP range (up to 2106 on 64-bit platforms), and add S3 storage-engine integration, faster ALTER TABLE processing, and new security features. Through the dual structure of open-source development led by the MariaDB Foundation and commercial support from MariaDB Corporation, it addresses needs ranging from community deployments to commercial editions. Companion technologies such as ColumnStore (columnar analytics), Galera Cluster (synchronous multi-master clustering), and MaxScale (database proxy) let it scale from a single database to large distributed systems. Managed offerings such as Amazon RDS for MariaDB simplify operations in cloud-native environments.
Key Features
- High MySQL Compatibility: acts as a drop-in replacement for most MySQL applications, which can usually migrate with little or no downtime (see the connection sketch after this list)
- Rich Storage Engines: Purpose-optimized engines including InnoDB, MyISAM, ColumnStore, Spider, and S3
- High Availability: Multi-master configuration and automatic failover through Galera Cluster
- Enterprise Security: Data masking, audit logging, and role-based access control
- Cloud Optimization: Managed service support on major cloud platforms
- Open Source: Free usage and modification under GPL v2 license
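Because MariaDB speaks the MySQL wire protocol, existing MySQL client libraries work unchanged. A minimal sketch with the PyMySQL driver (host, credentials, and database are placeholders):
import pymysql

# Any MySQL-protocol driver can talk to MariaDB; connection parameters are placeholders.
conn = pymysql.connect(host="localhost", user="app_user",
                       password="secure_password", database="company_db",
                       charset="utf8mb4")
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone()[0])   # e.g. '11.6.x-MariaDB'
conn.close()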
Pros and Cons
Pros
- Often faster query execution than comparable MySQL versions in common workloads, plus optimizer improvements and a built-in thread pool
- Transparency and flexibility through open source, avoiding vendor lock-in
- Purpose-optimized solutions through diverse storage engines (analytics, distributed, archive, etc.)
- True multi-master replication through Galera Cluster with near-linear read scalability
- Independence from Oracle-led MySQL and community-driven development
- Standard package provision in major Linux distributions
Cons
- Feature sets have diverged from MySQL 8.x, so some newer MySQL features (for example, the binary JSON data type and group replication) have no direct equivalent
- Some enterprise features require commercial license (MariaDB Enterprise)
- Compatibility limitations with Oracle MySQL Cluster
- Specialized knowledge and additional components required for large-scale distributed environments
- No direct compatibility with MySQL InnoDB Cluster
- Incomplete compatibility with some MySQL tools and cloud services
Code Examples
Installation and Basic Setup
# MariaDB installation on Ubuntu/Debian systems
# Add MariaDB official repository
curl -LsS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash -s -- --mariadb-server-version=11.6
# Install MariaDB server and client
sudo apt update
sudo apt install -y mariadb-server mariadb-client
# Start MariaDB service and enable auto-start
sudo systemctl enable mariadb
sudo systemctl start mariadb
sudo systemctl status mariadb
# Run security configuration script
sudo mariadb-secure-installation
# Environment setup using Docker Compose
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  mariadb:
    image: mariadb:11.6
    container_name: mariadb
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: testdb
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpassword
      MARIADB_AUTO_UPGRADE: 1
    ports:
      - "3306:3306"
    volumes:
      - mariadb_data:/var/lib/mysql
      - ./conf.d:/etc/mysql/conf.d
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin
    restart: unless-stopped
    environment:
      PMA_HOST: mariadb
      PMA_PORT: 3306
      PMA_USER: root
      PMA_PASSWORD: rootpassword
    ports:
      - "8080:80"
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
EOF
# Start services
docker-compose up -d
# Configuration file optimization
sudo tee /etc/mysql/conf.d/optimization.cnf << 'EOF'
[mysql]
default-character-set = utf8mb4
[mysqld]
# Basic settings
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
init-connect = 'SET NAMES utf8mb4'
# Performance settings
innodb_buffer_pool_size = 2G
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
innodb_purge_batch_size = 127
# Security settings
bind-address = 127.0.0.1
skip-networking = 0
local-infile = 0
# Connection settings
max_connections = 200
max_user_connections = 50
EOF
# Restart MariaDB
sudo systemctl restart mariadb
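To confirm the server is reachable after installation (or after docker-compose up), a short PyMySQL check can print the version and active character set; credentials below are the placeholders used earlier and the PyMySQL package is assumed to be installed:
import pymysql

# Connects as root and prints version and default character set (placeholder credentials).
conn = pymysql.connect(host="127.0.0.1", port=3306, user="root", password="rootpassword")
with conn.cursor() as cur:
    cur.execute("SELECT VERSION(), @@character_set_server")
    version, charset = cur.fetchone()
    print(f"MariaDB {version}, character_set_server={charset}")
conn.close()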
Basic Database Operations
-- Database connection
mysql -u root -p
-- Database creation and selection
CREATE DATABASE company_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
USE company_db;
-- Table creation (JSON column, automatic timestamps, secondary indexes)
CREATE TABLE employees (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(150) UNIQUE NOT NULL,
    department VARCHAR(50),
    salary DECIMAL(10,2),
    hire_date DATE,
    metadata JSON,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_department (department),
    INDEX idx_hire_date (hire_date)
) ENGINE=InnoDB;
-- Sample data insertion
INSERT INTO employees (name, email, department, salary, hire_date, metadata) VALUES
('John Smith', 'john.smith@company.com', 'Engineering', 75000.00, '2023-01-15',
 JSON_OBJECT('skills', JSON_ARRAY('Python', 'SQL', 'Docker'), 'level', 'senior')),
('Jane Doe', 'jane.doe@company.com', 'Marketing', 65000.00, '2023-03-20',
 JSON_OBJECT('skills', JSON_ARRAY('Analytics', 'SEO', 'Content'), 'level', 'mid')),
('Bob Johnson', 'bob.johnson@company.com', 'Engineering', 80000.00, '2022-06-10',
 JSON_OBJECT('skills', JSON_ARRAY('Java', 'Kubernetes', 'AWS'), 'level', 'senior'));
-- Basic queries
SELECT * FROM employees WHERE department = 'Engineering';
-- JSON function queries
SELECT name, JSON_EXTRACT(metadata, '$.skills') as skills
FROM employees
WHERE JSON_EXTRACT(metadata, '$.level') = 'senior';
-- Aggregation queries
SELECT
department,
COUNT(*) as employee_count,
AVG(salary) as avg_salary,
MAX(salary) as max_salary
FROM employees
GROUP BY department;
-- User management
CREATE USER 'app_user'@'localhost' IDENTIFIED BY 'secure_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON company_db.* TO 'app_user'@'localhost';
FLUSH PRIVILEGES;
-- Backup and restore (run these from the shell, not the MariaDB prompt)
mysqldump -u root -p company_db > company_db_backup.sql
mysql -u root -p company_db < company_db_backup.sql
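Backups can also be driven from application code. A hedged sketch that shells out to mysqldump and compresses the result (credentials and file names are illustrative):
import gzip
import subprocess
from datetime import datetime

# Illustrative only: dumps company_db with mysqldump and gzips the output.
outfile = f"company_db_{datetime.now():%Y%m%d_%H%M%S}.sql.gz"
dump = subprocess.run(
    ["mysqldump", "-u", "root", "--password=secret", "--single-transaction", "company_db"],
    check=True, capture_output=True)
with gzip.open(outfile, "wb") as f:
    f.write(dump.stdout)
print(f"Backup written to {outfile}")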
Python Integration Programming
import pymysql
import json
from datetime import datetime, date
from contextlib import contextmanager
import logging
class MariaDBManager:
    def __init__(self, host='localhost', port=3306, user='root',
                 password='password', database='company_db', charset='utf8mb4'):
        """MariaDB connection management class"""
        self.connection_params = {
            'host': host,
            'port': port,
            'user': user,
            'password': password,
            'database': database,
            'charset': charset,
            'autocommit': True,
            'cursorclass': pymysql.cursors.DictCursor
        }
        self.connection = None

    @contextmanager
    def get_connection(self):
        """Connection context manager"""
        try:
            self.connection = pymysql.connect(**self.connection_params)
            yield self.connection
        except Exception as e:
            logging.error(f"Database connection error: {e}")
            if self.connection:
                self.connection.rollback()
            raise
        finally:
            if self.connection:
                self.connection.close()

    def execute_query(self, query, params=None):
        """Execute query (for SELECT)"""
        with self.get_connection() as conn:
            with conn.cursor() as cursor:
                cursor.execute(query, params)
                return cursor.fetchall()

    def execute_update(self, query, params=None):
        """Execute update query (for INSERT/UPDATE/DELETE)"""
        with self.get_connection() as conn:
            with conn.cursor() as cursor:
                cursor.execute(query, params)
                conn.commit()
                return cursor.rowcount

    def insert_employee(self, name, email, department, salary, hire_date, skills=None, level='junior'):
        """Insert employee data"""
        metadata = {
            'skills': skills or [],
            'level': level,
            'created_by': 'system'
        }
        query = """
            INSERT INTO employees (name, email, department, salary, hire_date, metadata)
            VALUES (%s, %s, %s, %s, %s, %s)
        """
        params = (name, email, department, salary, hire_date, json.dumps(metadata))
        return self.execute_update(query, params)

    def get_employees_by_department(self, department):
        """Get employees by department"""
        query = """
            SELECT id, name, email, department, salary, hire_date,
                   JSON_EXTRACT(metadata, '$.skills') as skills,
                   JSON_EXTRACT(metadata, '$.level') as level
            FROM employees
            WHERE department = %s
            ORDER BY hire_date
        """
        return self.execute_query(query, (department,))

    def get_salary_statistics(self):
        """Get salary statistics"""
        query = """
            SELECT
                department,
                COUNT(*) as employee_count,
                ROUND(AVG(salary), 2) as avg_salary,
                MIN(salary) as min_salary,
                MAX(salary) as max_salary,
                ROUND(STDDEV(salary), 2) as salary_stddev
            FROM employees
            GROUP BY department
            ORDER BY avg_salary DESC
        """
        return self.execute_query(query)

    def search_employees_by_skill(self, skill):
        """Search employees by skill"""
        query = """
            SELECT name, email, department,
                   JSON_EXTRACT(metadata, '$.skills') as skills
            FROM employees
            WHERE JSON_SEARCH(metadata, 'one', %s, NULL, '$.skills[*]') IS NOT NULL
        """
        return self.execute_query(query, (skill,))

# Usage example and performance test
def performance_test():
    """Performance test"""
    db = MariaDBManager(
        host='localhost',
        user='app_user',
        password='secure_password',
        database='company_db'
    )
    # Large data insertion test
    import time
    start_time = time.time()
    batch_data = []
    for i in range(1000):
        batch_data.append((
            f'Employee_{i}',
            f'emp_{i}@company.com',
            'Engineering' if i % 2 == 0 else 'Marketing',
            50000 + (i * 100),
            date(2023, 1, 1),
            json.dumps({
                'skills': ['Python', 'SQL'] if i % 2 == 0 else ['Marketing', 'Analytics'],
                'level': 'senior' if i % 10 == 0 else 'junior'
            })
        ))
    # Batch insertion
    with db.get_connection() as conn:
        with conn.cursor() as cursor:
            query = """
                INSERT INTO employees (name, email, department, salary, hire_date, metadata)
                VALUES (%s, %s, %s, %s, %s, %s)
            """
            cursor.executemany(query, batch_data)
            conn.commit()
    insertion_time = time.time() - start_time
    print(f"1000 records insertion time: {insertion_time:.2f} seconds")
    # Search performance test
    start_time = time.time()
    results = db.get_employees_by_department('Engineering')
    search_time = time.time() - start_time
    print(f"Search time: {search_time:.4f} seconds, Results: {len(results)}")
    # JSON search test
    start_time = time.time()
    skill_results = db.search_employees_by_skill('Python')
    json_search_time = time.time() - start_time
    print(f"JSON search time: {json_search_time:.4f} seconds, Results: {len(skill_results)}")
if __name__ == "__main__":
    # Create database management object
    db_manager = MariaDBManager()
    try:
        # Insert employee data
        db_manager.insert_employee(
            name="Alice Johnson",
            email="alice.johnson@company.com",
            department="Engineering",
            salary=85000.00,
            hire_date=date(2023, 8, 15),
            skills=["Go", "Docker", "Kubernetes"],
            level="senior"
        )
        print("Employee data inserted")
        # Get employees by department
        engineering_employees = db_manager.get_employees_by_department("Engineering")
        print(f"Engineering department employees: {len(engineering_employees)}")
        # Salary statistics
        salary_stats = db_manager.get_salary_statistics()
        print("Salary statistics:")
        for stat in salary_stats:
            print(f"  {stat['department']}: Average ${stat['avg_salary']} ({stat['employee_count']} employees)")
        # Run performance test
        performance_test()
    except Exception as e:
        print(f"Error occurred: {e}")
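Note that MariaDBManager opens a fresh connection for every call, which becomes a bottleneck under load; connection pooling is the usual remedy. A sketch using SQLAlchemy's pooling on top of the same PyMySQL driver (the URL, pool sizes, and the SQLAlchemy dependency are assumptions):
from sqlalchemy import create_engine, text

# Pooled engine reusing PyMySQL underneath; credentials and pool sizes are illustrative.
engine = create_engine(
    "mysql+pymysql://app_user:secure_password@localhost/company_db?charset=utf8mb4",
    pool_size=10, max_overflow=5, pool_recycle=3600, pool_pre_ping=True)

with engine.connect() as conn:
    rows = conn.execute(
        text("SELECT department, COUNT(*) AS n FROM employees GROUP BY department"))
    for department, n in rows:
        print(department, n)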
Replication and Cluster Configuration
# Galera Cluster multi-master configuration setup
# Node 1 configuration (/etc/mysql/conf.d/galera.cnf)
sudo tee /etc/mysql/conf.d/galera.cnf << 'EOF'
[galera]
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="production_cluster"
wsrep_cluster_address="gcomm://10.0.1.1,10.0.1.2,10.0.1.3"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="10.0.1.1"
wsrep_node_name="node1"
# InnoDB Configuration
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
innodb_flush_log_at_trx_commit=2
# Binary Logging
binlog_format=row
log-bin=mysql-bin
EOF
# Bootstrap the cluster on the first node (galera_new_cluster also starts the service there)
sudo galera_new_cluster
# On nodes 2 and 3, start MariaDB normally so they join the running cluster
sudo systemctl start mariadb
# Check cluster status
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep%'"
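The same wsrep status variables can be polled programmatically to watch cluster health; a minimal sketch (node address and credentials are placeholders):
import pymysql

# Reads the wsrep status variables commonly used to judge Galera health.
conn = pymysql.connect(host="10.0.1.1", user="root", password="password",
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'")
    size = cur.fetchone()["Value"]
    cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'")
    state = cur.fetchone()["Value"]
print(f"cluster size={size}, node state={state}")   # expect 'Synced' on a healthy node
conn.close()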
# Replication setup (Master-Slave configuration)
# Master configuration
sudo tee -a /etc/mysql/conf.d/master.cnf << 'EOF'
[mysqld]
log-bin=mysql-bin
server-id=1
binlog-do-db=production_db
binlog-ignore-db=mysql
EOF
# Create replication user
mysql -u root -p << 'EOF'
CREATE USER 'replica'@'%' IDENTIFIED BY 'replica_password';
GRANT REPLICATION SLAVE ON *.* TO 'replica'@'%';
FLUSH PRIVILEGES;
SHOW MASTER STATUS;
EOF
# Slave configuration
sudo tee -a /etc/mysql/conf.d/slave.cnf << 'EOF'
[mysqld]
server-id=2
relay-log=mysql-relay-bin
read-only=1
EOF
# Start replication on slave
mysql -u root -p << 'EOF'
CHANGE MASTER TO
  MASTER_HOST='10.0.1.1',
  MASTER_USER='replica',
  MASTER_PASSWORD='replica_password',
  -- use the File and Position values reported by SHOW MASTER STATUS on the master
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=1234;
START SLAVE;
SHOW SLAVE STATUS\G
EOF
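On the replica, the fields of SHOW SLAVE STATUS indicate whether replication is healthy; a small sketch that reports thread state and lag (connection details are placeholders):
import pymysql

conn = pymysql.connect(host="10.0.1.2", user="root", password="password",
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone()          # None when replication is not configured
if status:
    print("IO thread :", status["Slave_IO_Running"])
    print("SQL thread:", status["Slave_SQL_Running"])
    print("Lag (s)   :", status["Seconds_Behind_Master"])
else:
    print("This server is not acting as a replica")
conn.close()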
Performance Monitoring and Optimization
# MariaDB performance monitoring configuration
# Enable Performance Schema
sudo tee -a /etc/mysql/conf.d/performance.cnf << 'EOF'
[mysqld]
# Performance Schema
performance_schema=ON
performance_schema_max_table_instances=40000
performance_schema_max_table_handles=40000
# Slow Query Log
slow_query_log=1
slow_query_log_file=/var/log/mysql/slow.log
long_query_time=2
log_queries_not_using_indexes=1
# General Log
general_log=0
general_log_file=/var/log/mysql/general.log
# Binary Log Settings
expire_logs_days=7
max_binlog_size=100M
sync_binlog=1
# InnoDB Monitoring
innodb_monitor_enable=all
EOF
# Performance monitoring queries
mysql -u root -p << 'EOF'
-- Check current processes
SHOW PROCESSLIST;
-- Check slow queries (mysql.slow_log is only populated when log_output=TABLE;
-- with the file-based settings above, inspect /var/log/mysql/slow.log instead)
SELECT * FROM mysql.slow_log ORDER BY start_time DESC LIMIT 10;
-- Check database sizes
SELECT
table_schema as 'Database',
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) as 'Size (MB)'
FROM information_schema.tables
GROUP BY table_schema;
-- Check table usage
SELECT
table_name,
ROUND(((data_length + index_length) / 1024 / 1024), 2) as 'Size (MB)',
table_rows,
ROUND(((data_length + index_length) / NULLIF(table_rows, 0)), 2) as 'Avg Row Size'
FROM information_schema.tables
WHERE table_schema = 'company_db'
ORDER BY (data_length + index_length) DESC;
-- Index usage statistics
SELECT
object_schema,
object_name,
index_name,
count_read,
count_write,
count_fetch,
count_insert,
count_update,
count_delete
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE object_schema = 'company_db'
ORDER BY count_read DESC;
EOF
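With the Performance Schema enabled as above, the most expensive statement patterns can be pulled from events_statements_summary_by_digest; a sketch that lists the top five by average latency (credentials are placeholders):
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="password",
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    # Timer columns are in picoseconds; divide by 1e12 to get seconds.
    cur.execute("""
        SELECT DIGEST_TEXT, COUNT_STAR,
               ROUND(AVG_TIMER_WAIT / 1e12, 3) AS avg_seconds
        FROM performance_schema.events_statements_summary_by_digest
        WHERE DIGEST_TEXT IS NOT NULL
        ORDER BY AVG_TIMER_WAIT DESC
        LIMIT 5""")
    for row in cur.fetchall():
        print(f"{row['avg_seconds']}s x{row['COUNT_STAR']}: {row['DIGEST_TEXT'][:80]}")
conn.close()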
# Automated backup script
cat > /etc/cron.daily/mariadb-backup << 'EOF'
#!/bin/bash
# Configuration
DB_USER="backup_user"
DB_PASS="backup_password"
BACKUP_DIR="/var/backups/mariadb"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7
# Create backup directory
mkdir -p $BACKUP_DIR
# Get database list (excluding system DBs)
DBS=$(mysql -u$DB_USER -p$DB_PASS -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")
# Backup each database
for db in $DBS; do
echo "Backing up: $db"
mysqldump -u$DB_USER -p$DB_PASS \
--single-transaction \
--routines \
--triggers \
--events \
--hex-blob \
$db | gzip > $BACKUP_DIR/${db}_$DATE.sql.gz
done
# Remove old backup files
find $BACKUP_DIR -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
echo "Backup completed: $(date)"
EOF
chmod +x /etc/cron.daily/mariadb-backup
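A small companion check can verify that the daily backup job actually produced a recent file; the directory matches the script above and the age threshold is an assumption:
#!/usr/bin/env python3
import glob
import os
import sys
import time

BACKUP_DIR = "/var/backups/mariadb"   # same directory as the cron script above
MAX_AGE_HOURS = 26                    # assumed threshold for a daily job

backups = glob.glob(os.path.join(BACKUP_DIR, "*.sql.gz"))
if not backups:
    print("CRITICAL: no backups found")
    sys.exit(2)
newest = max(backups, key=os.path.getmtime)
age_hours = (time.time() - os.path.getmtime(newest)) / 3600
if age_hours > MAX_AGE_HOURS:
    print(f"WARNING: newest backup {newest} is {age_hours:.1f}h old")
    sys.exit(1)
print(f"OK: {newest} ({age_hours:.1f}h old)")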
# Real-time monitoring tool
cat > mariadb_monitor.py << 'EOF'
#!/usr/bin/env python3
import pymysql
import time
import os
from datetime import datetime
def monitor_mariadb():
    """MariaDB real-time monitoring"""
    connection = pymysql.connect(
        host='localhost',
        user='root',
        password='password',
        charset='utf8mb4',
        cursorclass=pymysql.cursors.DictCursor
    )
    try:
        while True:
            with connection.cursor() as cursor:
                # Basic statistics
                cursor.execute("SHOW GLOBAL STATUS LIKE 'Threads_connected'")
                connections = cursor.fetchone()
                cursor.execute("SHOW GLOBAL STATUS LIKE 'Queries'")
                queries = cursor.fetchone()
                cursor.execute("SHOW GLOBAL STATUS LIKE 'Uptime'")
                uptime = cursor.fetchone()
                # InnoDB statistics
                cursor.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data'")
                buffer_data = cursor.fetchone()
                cursor.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total'")
                buffer_total = cursor.fetchone()
                # Replication status (if slave)
                try:
                    cursor.execute("SHOW SLAVE STATUS")
                    slave_status = cursor.fetchone()
                except Exception:
                    slave_status = None
                # Display
                os.system('clear')
                print(f"=== MariaDB Monitor - {datetime.now().strftime('%Y-%m-%d %H:%M:%S')} ===")
                print(f"Connections: {connections['Value']}")
                print(f"Total Queries: {queries['Value']}")
                print(f"Uptime: {int(uptime['Value']) // 3600} hours")
                if buffer_total and buffer_data:
                    buffer_usage = int(buffer_data['Value']) / int(buffer_total['Value']) * 100
                    print(f"InnoDB Buffer Pool Usage: {buffer_usage:.1f}%")
                if slave_status:
                    print(f"Replication Status: {slave_status.get('Slave_SQL_Running', 'Unknown')}")
                print("\n--- Recent Processes ---")
                cursor.execute("SHOW PROCESSLIST")
                processes = cursor.fetchmany(5)
                for proc in processes:
                    if proc['Command'] != 'Sleep':
                        print(f"ID:{proc['Id']} User:{proc['User']} DB:{proc['db']} Time:{proc['Time']}s")
            time.sleep(5)
    except KeyboardInterrupt:
        print("\nStopping monitoring")
    finally:
        connection.close()

if __name__ == "__main__":
    monitor_mariadb()
EOF
chmod +x mariadb_monitor.py
High Availability and Security Configuration
# SSL/TLS configuration
# Note: mysql_ssl_rsa_setup is a MySQL utility and is not shipped with MariaDB.
# Generate a CA plus server certificate/key with openssl (or your own PKI) and
# place ca.pem, server-cert.pem and server-key.pem under /var/lib/mysql/.
# Add SSL configuration
sudo tee -a /etc/mysql/conf.d/ssl.cnf << 'EOF'
[mysqld]
# SSL Configuration
ssl-ca=/var/lib/mysql/ca.pem
ssl-cert=/var/lib/mysql/server-cert.pem
ssl-key=/var/lib/mysql/server-key.pem
# Require SSL for all connections
require_secure_transport=ON
# TLS versions
tls_version=TLSv1.2,TLSv1.3
EOF
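Once require_secure_transport is enabled, clients must negotiate TLS; a hedged PyMySQL sketch that validates the server certificate against the CA referenced in ssl.cnf (host and credentials are placeholders, and the client host needs its own copy of ca.pem):
import pymysql

# TLS client connection; the CA path mirrors the ssl.cnf example above.
conn = pymysql.connect(host="db.example.internal", user="app_user",
                       password="secure_password", database="company_db",
                       ssl={"ca": "/var/lib/mysql/ca.pem"})
with conn.cursor() as cur:
    cur.execute("SHOW STATUS LIKE 'Ssl_cipher'")
    print(cur.fetchone())   # a non-empty cipher confirms the session is encrypted
conn.close()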
# Advanced security configuration
sudo tee -a /etc/mysql/conf.d/security.cnf << 'EOF'
[mysqld]
# Security Settings
local-infile=0
skip-symbolic-links
secure_file_priv="/var/lib/mysql-files/"
# Audit Plugin
plugin_load_add=server_audit
server_audit_logging=ON
server_audit_events=CONNECT,QUERY,TABLE
# Password validation (MariaDB uses the simple_password_check plugin,
# not MySQL's validate_password)
plugin_load_add=simple_password_check
simple_password_check_minimal_length=12
simple_password_check_digits=1
simple_password_check_letters_same_case=1
simple_password_check_other_characters=1
# Connection Security
max_user_connections=100
# Per-hour limits are set per account, e.g.:
#   ALTER USER 'app_user'@'localhost' WITH MAX_QUERIES_PER_HOUR 10000 MAX_CONNECTIONS_PER_HOUR 1000;
EOF
# Audit log configuration and rotation
sudo tee /etc/logrotate.d/mariadb-audit << 'EOF'
/var/lib/mysql/server_audit.log {
daily
missingok
rotate 52
compress
notifempty
create 640 mysql mysql
postrotate
/bin/kill -HUP `cat /var/run/mysqld/mysqld.pid 2> /dev/null` 2> /dev/null || true
endscript
}
EOF
# Firewall configuration
sudo ufw allow from 10.0.1.0/24 to any port 3306
sudo ufw allow from 192.168.1.0/24 to any port 3306
sudo ufw deny 3306
# Database encryption configuration
mysql -u root -p << 'EOF'
-- Table-level data-at-rest encryption (requires an encryption plugin such as
-- file_key_management to be configured; MariaDB encrypts whole tables, not individual columns)
CREATE TABLE sensitive_data (
    id INT AUTO_INCREMENT PRIMARY KEY,
    customer_name VARCHAR(100),
    ssn VARCHAR(20),
    credit_card VARCHAR(20),
    data JSON,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB ENCRYPTED=YES ENCRYPTION_KEY_ID=1;
-- For encrypted backups, encrypt the dump files externally
-- (for example, pipe mysqldump output through openssl enc or gpg)
EOF
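Data-at-rest encryption only takes effect when an encryption plugin (for example file_key_management) is configured; whether a tablespace is actually encrypted can be checked from information_schema. A sketch, assuming the plugin is loaded (credentials are placeholders):
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="password",
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    # Lists tablespaces tracked by the InnoDB encryption subsystem.
    cur.execute("""
        SELECT NAME, ENCRYPTION_SCHEME
        FROM information_schema.INNODB_TABLESPACES_ENCRYPTION""")
    for row in cur.fetchall():
        print(row["NAME"], "encrypted" if row["ENCRYPTION_SCHEME"] else "plaintext")
conn.close()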
# Heartbeat configuration for automatic failover
sudo apt install -y heartbeat
sudo tee /etc/ha.d/ha.cf << 'EOF'
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth0 10.0.1.2
auto_failback on
node node1
node node2
EOF
sudo tee /etc/ha.d/haresources << 'EOF'
node1 10.0.1.100/24/eth0 mariadb
EOF
# VIP management script
cat > /usr/local/bin/mariadb_vip_manager.sh << 'EOF'
#!/bin/bash
VIP="10.0.1.100"
INTERFACE="eth0"
MYSQL_USER="root"
MYSQL_PASS="password"
check_mysql() {
mysql -u$MYSQL_USER -p$MYSQL_PASS -e "SELECT 1" >/dev/null 2>&1
return $?
}
add_vip() {
ip addr add $VIP/24 dev $INTERFACE
echo "VIP $VIP added"
}
remove_vip() {
ip addr del $VIP/24 dev $INTERFACE
echo "VIP $VIP removed"
}
case "$1" in
start)
if check_mysql; then
add_vip
else
echo "MySQL is not running"
exit 1
fi
;;
stop)
remove_vip
;;
status)
if ip addr show dev $INTERFACE | grep -q $VIP; then
echo "VIP is active"
exit 0
else
echo "VIP is not active"
exit 1
fi
;;
*)
echo "Usage: $0 {start|stop|status}"
exit 1
;;
esac
EOF
chmod +x /usr/local/bin/mariadb_vip_manager.sh
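Besides a floating VIP, applications can fail over at the client side by trying candidate hosts in order; a minimal sketch (host list and credentials are assumptions):
import pymysql

# Tries the virtual IP first, then the individual nodes.
CANDIDATES = ["10.0.1.100", "10.0.1.1", "10.0.1.2"]

def connect_with_failover():
    last_error = None
    for host in CANDIDATES:
        try:
            conn = pymysql.connect(host=host, user="app_user",
                                   password="secure_password",
                                   database="company_db", connect_timeout=3)
            print("connected to", host)
            return conn
        except pymysql.MySQLError as err:
            last_error = err
    raise RuntimeError(f"all database hosts unreachable: {last_error}")

conn = connect_with_failover()
conn.close()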
Operations and Maintenance
# Comprehensive maintenance script
cat > /usr/local/bin/mariadb_maintenance.sh << 'EOF'
#!/bin/bash
LOG_FILE="/var/log/mariadb_maintenance.log"
DB_USER="root"
DB_PASS="password"
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOG_FILE
}
# Database optimization
optimize_databases() {
log_message "Starting database optimization"
DBS=$(mysql -u$DB_USER -p$DB_PASS -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")
for db in $DBS; do
    log_message "Optimizing: $db"
    mysqlcheck -u$DB_USER -p$DB_PASS --optimize --databases "$db" >> $LOG_FILE 2>&1
done
log_message "Database optimization completed"
}
# Update statistics
update_statistics() {
log_message "Starting statistics update"
mysql -u$DB_USER -p$DB_PASS -e "ANALYZE TABLE mysql.user, mysql.db, mysql.tables_priv;"
log_message "Statistics update completed"
}
# Log rotation
rotate_logs() {
log_message "Starting log rotation"
mysql -u$DB_USER -p$DB_PASS -e "FLUSH LOGS;"
log_message "Log rotation completed"
}
# Table repair check
check_tables() {
log_message "Starting table check"
mysql -u$DB_USER -p$DB_PASS << 'SQL'
SELECT
table_schema,
table_name,
ROUND(((data_length + index_length) / 1024 / 1024), 2) AS 'Size (MB)'
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
ORDER BY (data_length + index_length) DESC;
SQL
log_message "Table check completed"
}
# Execute
case "$1" in
optimize)
optimize_databases
;;
statistics)
update_statistics
;;
logs)
rotate_logs
;;
check)
check_tables
;;
all)
optimize_databases
update_statistics
rotate_logs
check_tables
;;
*)
echo "Usage: $0 {optimize|statistics|logs|check|all}"
exit 1
;;
esac
log_message "Maintenance process completed: $1"
EOF
chmod +x /usr/local/bin/mariadb_maintenance.sh
# Weekly maintenance cron setup (append to root's crontab instead of overwriting it)
( sudo crontab -l 2>/dev/null; echo "0 2 * * 0 /usr/local/bin/mariadb_maintenance.sh all" ) | sudo crontab -
# MariaDB health check script
cat > /usr/local/bin/mariadb_health_check.py << 'EOF'
#!/usr/bin/env python3
import pymysql
import json
import sys
from datetime import datetime
def health_check():
    """MariaDB health diagnosis"""
    try:
        connection = pymysql.connect(
            host='localhost',
            user='root',
            password='password',
            charset='utf8mb4',
            cursorclass=pymysql.cursors.DictCursor
        )
        health_status = {
            'timestamp': datetime.now().isoformat(),
            'overall_status': 'HEALTHY',
            'issues': [],
            'recommendations': []
        }
        with connection.cursor() as cursor:
            # Connection count check
            cursor.execute("SHOW GLOBAL STATUS LIKE 'Threads_connected'")
            connections = int(cursor.fetchone()['Value'])
            cursor.execute("SHOW GLOBAL VARIABLES LIKE 'max_connections'")
            max_connections = int(cursor.fetchone()['Value'])
            connection_usage = (connections / max_connections) * 100
            if connection_usage > 80:
                health_status['issues'].append(f"High connection usage: {connection_usage:.1f}%")
                health_status['overall_status'] = 'WARNING'
            # Slow query check
            cursor.execute("SHOW GLOBAL STATUS LIKE 'Slow_queries'")
            slow_queries = int(cursor.fetchone()['Value'])
            cursor.execute("SHOW GLOBAL STATUS LIKE 'Queries'")
            total_queries = int(cursor.fetchone()['Value'])
            if total_queries > 0:
                slow_query_ratio = (slow_queries / total_queries) * 100
                if slow_query_ratio > 5:
                    health_status['issues'].append(f"High slow query ratio: {slow_query_ratio:.2f}%")
                    health_status['overall_status'] = 'WARNING'
            # InnoDB Buffer Pool usage
            cursor.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data'")
            buffer_data = int(cursor.fetchone()['Value'])
            cursor.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total'")
            buffer_total = int(cursor.fetchone()['Value'])
            buffer_usage = (buffer_data / buffer_total) * 100
            if buffer_usage > 90:
                health_status['issues'].append(f"High InnoDB Buffer Pool usage: {buffer_usage:.1f}%")
                health_status['recommendations'].append("Consider increasing Buffer Pool size")
            # Replication status
            try:
                cursor.execute("SHOW SLAVE STATUS")
                slave_status = cursor.fetchone()
                if slave_status:
                    if slave_status['Slave_SQL_Running'] != 'Yes':
                        health_status['issues'].append("Slave SQL thread stopped")
                        health_status['overall_status'] = 'CRITICAL'
                    if slave_status['Slave_IO_Running'] != 'Yes':
                        health_status['issues'].append("Slave IO thread stopped")
                        health_status['overall_status'] = 'CRITICAL'
            except Exception:
                pass  # Master or replication not configured
            # Table size and fragmentation
            cursor.execute("""
                SELECT
                    table_schema,
                    table_name,
                    ROUND(data_length/1024/1024, 2) as data_mb,
                    ROUND(data_free/1024/1024, 2) as free_mb,
                    ROUND((data_free/(data_length+data_free))*100, 2) as fragmentation_pct
                FROM information_schema.tables
                WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
                  AND data_free > 0
                  AND (data_free/(data_length+data_free))*100 > 20
                ORDER BY fragmentation_pct DESC
                LIMIT 5
            """)
            fragmented_tables = cursor.fetchall()
            if fragmented_tables:
                health_status['recommendations'].append(f"{len(fragmented_tables)} tables with 20%+ fragmentation")
        # Final determination
        if health_status['issues']:
            if health_status['overall_status'] != 'CRITICAL':
                health_status['overall_status'] = 'WARNING'
        # Output results
        print(json.dumps(health_status, indent=2))
        # Exit code
        if health_status['overall_status'] == 'CRITICAL':
            sys.exit(2)
        elif health_status['overall_status'] == 'WARNING':
            sys.exit(1)
        else:
            sys.exit(0)
    except Exception as e:
        error_status = {
            'timestamp': datetime.now().isoformat(),
            'overall_status': 'ERROR',
            'error': str(e)
        }
        print(json.dumps(error_status, indent=2))
        sys.exit(3)
    finally:
        if 'connection' in locals():
            connection.close()

if __name__ == "__main__":
    health_check()
EOF
chmod +x /usr/local/bin/mariadb_health_check.py
# Hourly health check (append to root's crontab)
( sudo crontab -l 2>/dev/null; echo "0 * * * * /usr/local/bin/mariadb_health_check.py > /var/log/mariadb_health.log 2>&1" ) | sudo crontab -
echo "MariaDB maintenance environment setup completed"