Check the status of a Galera cluster: https://nbari.com/post/galera/

Find processes that are not sleeping:

SELECT user, time, state, info FROM information_schema.processlist WHERE command != 'Sleep' AND time >= 2 ORDER BY time DESC, id;

Find a long-running DDL (ALTER) in progress:

SELECT * FROM information_schema.processlist WHERE command != 'Sleep' AND info LIKE 'alter%' AND time >= 2 ORDER BY time DESC, id LIMIT 1;

DDL stands for Data Definition Language: the set of SQL statements that define or change schema objects (databases, tables, indexes) rather than the rows they contain. CREATE, ALTER, and DROP are DDL statements.
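As a quick illustration of the three core DDL statements (using SQLite via Python here only because it needs no server; the statements themselves are standard SQL):

```python
import sqlite3

# In-memory database; the DDL below is standard SQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")  # CREATE: define an object
cur.execute("ALTER TABLE users ADD COLUMN email TEXT")                 # ALTER: change its definition
cols = [row[1] for row in cur.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name', 'email']
cur.execute("DROP TABLE users")                                        # DROP: remove it
conn.close()
```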

Get locked tables:

SHOW OPEN TABLES WHERE In_use > 0;

Finding locks:

SHOW ENGINE INNODB STATUS;

Near the end of the output, long-running transactions look like:

---TRANSACTION 1135701157, ACTIVE 6768 sec
MySQL thread id 5208136, OS thread handle 0x7f2982e91700, query id 882213399 xxxIPxxx 82.235.36.49 my_user cleaning up

Kill the transaction's connection using the "MySQL thread id" shown above:

KILL 5208136;

Information about all locks transactions are waiting for:

SELECT * FROM information_schema.INNODB_LOCK_WAITS;

A list of blocking transactions:

SELECT * FROM information_schema.INNODB_LOCKS WHERE LOCK_TRX_ID IN (SELECT BLOCKING_TRX_ID FROM information_schema.INNODB_LOCK_WAITS);

Or:

SELECT INNODB_LOCKS.* FROM information_schema.INNODB_LOCKS JOIN information_schema.INNODB_LOCK_WAITS ON (INNODB_LOCKS.LOCK_TRX_ID = INNODB_LOCK_WAITS.BLOCKING_TRX_ID);

A list of transactions waiting for locks:

SELECT TRX_ID, TRX_REQUESTED_LOCK_ID, TRX_MYSQL_THREAD_ID, TRX_QUERY FROM information_schema.INNODB_TRX WHERE TRX_STATE = 'LOCK WAIT';

To analyze queries with pt-query-digest, first decode the binary logs for the time window of interest:

mysqlbinlog --start-datetime="2019-04-10 10:40:00" --stop-datetime="2019-04-10 11:00:00" mysql-bin.0013* > out.sql

Then:

pt-query-digest --type binlog --group-by fingerprint --limit "100%" out.sql > report.txt

On macOS, if you get a segmentation fault, use the full path /usr/local/Cellar/percona-toolkit/3.0.13/libexec/bin/pt-query-digest or download the latest version from https://www.percona.com/get/pt-query-digest

Get the size of each database in MiB (top 10):

SELECT
    table_schema 'Database Name',
    SUM(data_length + index_length) 'Size in Bytes',
    ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) 'Size in MiB'
FROM information_schema.tables
GROUP BY table_schema
ORDER BY 2 DESC LIMIT 10;
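The MiB column is just the byte total divided by 1024 twice and rounded to two decimals; a quick sanity check of that conversion:

```python
def bytes_to_mib(n: int) -> float:
    """Mirror ROUND(n / 1024 / 1024, 2) from the query above."""
    return round(n / 1024 / 1024, 2)

print(bytes_to_mib(268435456))  # 268435456 bytes = 256.0 MiB
```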

For a specific database:

SELECT
    table_schema 'Database Name',
    SUM(data_length + index_length) 'Size in Bytes',
    ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) 'Size in MiB'
FROM information_schema.tables
WHERE table_schema = 'your_db_name';

Get the size of each table in a database in MB:

SELECT table_name AS "Table",
ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)"
FROM information_schema.TABLES
WHERE table_schema = 'database_name'
ORDER BY (data_length + index_length) DESC;

How to calculate a good InnoDB log file size: (http://code.openark.org/blog/mysql/how-to-calculate-a-good-innodb-log-file-size-recap)

SELECT
  innodb_os_log_written_per_minute*60
    AS estimated_innodb_os_log_written_per_hour,
  CONCAT(ROUND(innodb_os_log_written_per_minute*60/1024/1024, 1), 'MB')
    AS estimated_innodb_os_log_written_per_hour_mb
FROM
  (SELECT SUM(value) AS innodb_os_log_written_per_minute FROM (
    SELECT -VARIABLE_VALUE AS value
      FROM INFORMATION_SCHEMA.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'innodb_os_log_written'
    UNION ALL
    SELECT SLEEP(60)
      FROM DUAL
    UNION ALL
    SELECT VARIABLE_VALUE
      FROM INFORMATION_SCHEMA.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'innodb_os_log_written'
  ) s1
) s2;
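The rule of thumb from the linked post is to size the redo log group so it holds roughly an hour of writes, split across innodb_log_files_in_group (default 2). A worked example; the 300 MB/hour figure is made up:

```python
def log_file_size_mb(written_per_hour_mb: float, files_in_group: int = 2) -> float:
    """Split roughly an hour's worth of redo-log writes across the log group
    (innodb_log_files_in_group defaults to 2)."""
    return written_per_hour_mb / files_in_group

# If the query above reports ~300 MB written per hour:
print(log_file_size_mb(300))  # 150.0 -> innodb_log_file_size = 150M
```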

Check whether a mysqldump is running (it issues SELECT /*!40001 SQL_NO_CACHE */ queries):

SELECT * FROM information_schema.processlist WHERE info LIKE 'SELECT /*!40001 SQL_NO_CACHE */%';

Free disk space when --log-bin is enabled (this purges all binary logs before the current one; make sure no replica still needs them):

PURGE BINARY LOGS BEFORE NOW();

Or everything before a specific date:

PURGE BINARY LOGS BEFORE '20XX-01-01 00:00:00';
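If you script the purge, the date literal can be built with strftime; a sketch that keeps the last 7 days (the retention window and the helper name are assumptions, not part of any tool):

```python
from datetime import datetime, timedelta
from typing import Optional

def purge_statement(days_to_keep: int = 7, now: Optional[datetime] = None) -> str:
    """Build a PURGE BINARY LOGS statement keeping the last N days.
    Hypothetical helper for illustration only."""
    cutoff = (now or datetime.now()) - timedelta(days=days_to_keep)
    return f"PURGE BINARY LOGS BEFORE '{cutoff.strftime('%Y-%m-%d %H:%M:%S')}'"

print(purge_statement(7, datetime(2019, 4, 10, 11, 0, 0)))
# PURGE BINARY LOGS BEFORE '2019-04-03 11:00:00'
```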