This is a tale of a newbie sysadmin (me!) and a server (a Zabbix server). It is a tale of disk space woes, long queries, huge databases, and Oracle tomfoolery.
Thankfully I was successful in fixing the problem (I hope!) with some changes to the database. Nevertheless, it has been a stressful week. Here’s the story.
Oh man. What a start to this week I had. Zabbix started complaining about free disk space, memory utilization, and long query run time. Fun on a bun. These alerts started rolling in on Sunday. On Monday I tried basic remediation (clearing /tmp, removing old Linux headers, clearing the apt cache, things like that) and thought I had the problem solved when I cleared about 6 GB of data. The next day the storage alerts came back. I didn’t have time to address them. By the end of the day, all the errors were returning. Zabbix was NOT happy.
After finally installing ncdu (an ncurses-based command line disk usage tool) I discovered the culprit: /var/lib/mysql/ibdata1 was huge. Like, 90GB+ huge. Gigantic, by any standard (especially on a VM with a 120GB hard drive). Upon doing some research I discovered the reason why: ibdata1 is InnoDB's shared system tablespace, and with MySQL's defaults every InnoDB table Zabbix created gets stored inside it. Without enabling innodb_file_per_table or some form of partitioning, the database simply builds and builds in size in one single file (ibdata1). Crap.
More importantly: ibdata1 will NEVER decrease in size. It just won't. You can't optimize it away. Even if you delete rows, you still won't get the storage back. The only way to reclaim the space is to dump your data, remove ibdata1 entirely, and rebuild the database from the dump. Double crap.
Enter the hero of our story: innodb_file_per_table.
Enter the new villain of our story: converting a 90GB+ shared ibdata1 file into individual per-table InnoDB files.
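Before tearing anything down, it's worth confirming you're in the same boat. I didn't run these at the time, but they're the quick, read-only checks I'd do now from the mysql client (swap 'zabbix' for your schema name); they show whether file-per-table is enabled and which tables are eating the space:

    SHOW VARIABLES LIKE 'innodb_file_per_table';

    -- Rough per-table sizes; with file-per-table off, all of this lives inside ibdata1
    SELECT table_name, engine,
           ROUND((data_length + index_length) / 1024 / 1024) AS approx_mb
    FROM information_schema.tables
    WHERE table_schema = 'zabbix'
    ORDER BY approx_mb DESC
    LIMIT 20;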
The process was as follows (it's specific to Zabbix, but you can generalize from it):
1. Get some extra storage (you're going to need it). I mapped a network share temporarily; there's an example mount command after the list.
2. Dump the database into an SQL file:

        mysqldump -u [your db user] -p zabbix --allow-keywords --events > [path to backup location]/zabbix.sql

   Note: this file was MUCH smaller than the ibdata1 file (35GB vs 95GB).
3. Stop MySQL:

        sudo service mysql stop
4. Copy the existing InnoDB files to a backup location (just in case):

        sudo chown [youruser]:[yourgroup] /var/lib/mysql -R

   This is so you can easily read/write to that folder as you. We'll change it back to mysql:mysql when we're done. Don't leave it like this!

        cp /var/lib/mysql [path to backup location] -R
        sudo chown mysql:mysql /var/lib/mysql -R
5. Start MySQL:

        sudo service mysql start
6. Drop the existing database:

        mysql -u [your db user] -p
        drop database zabbix;

   Note: as expected, ibdata1 is still 90GB+ for me! 🙁
7. Stop MySQL as in Step 3.
8. Make changes to your MySQL my.cnf:

        sudo nano /etc/mysql/my.cnf

   Add the following lines to the [mysqld] section:

        innodb_file_per_table
        innodb_flush_method=O_DIRECT
        innodb_log_file_size=1G
        innodb_buffer_pool_size=4G

   Please note: the buffer pool should be roughly 4x the log file size. Adjust as necessary for your server's specs.
9. Delete ibdata1, ib_logfile0 and ib_logfile1:

        sudo rm /var/lib/mysql/ibdata1
        sudo rm /var/lib/mysql/ib_logfile0
        sudo rm /var/lib/mysql/ib_logfile1
10. Start MySQL as in Step 5.
11. Reload your database data:

        mysql -u [your db user] -p zabbix < [path to backup location]/zabbix.sql

    Note: it's smaller! A lot smaller. Because the tables are the correct size now and not filled with bloat.
12. (Optional steps for Zabbix.) It's going to hurt, but you're going to need to drop some data from some tables or be willing to sit there for hours and wait:

        mysql -u [your db user] -p
        use zabbix;
        CREATE TABLE `history_uint_new` LIKE `history_uint`;
        CREATE TABLE `trigger_uint_new` LIKE `trigger_uint`;
        DROP TABLE `history_uint`;
        DROP TABLE `trigger_uint`;
        ALTER TABLE `history_uint_new` RENAME `history_uint`;
        ALTER TABLE `trigger_uint_new` RENAME `trigger_uint`;

    Please note: we do this instead of just deleting the data with a MySQL DELETE, because DELETE would have to iterate through each record. These tables are HUGE (of the 35GB import, roughly 32GB was in these two tables).
13. Alter the table structure to ensure every table is now InnoDB:

        mysql -u [your db user] -p
        use zabbix;

    For each table in your database run the command:

        ALTER TABLE [tablename] ENGINE=InnoDB;

    For Zabbix 2.4: here is a text file with all the alter table commands. Just copy paste. [Link] (There's also a snippet after this list that generates the statements for you.)
14. Time to check and optimize:

        mysqlcheck -c -u [a root db user] -p --all-databases

    If it finds problems, run:

        mysqlcheck -r -u [a root db user] -p --all-databases

    Then optimize:

        mysqlcheck -o -u [a root db user] -p --all-databases
15. Schedule weekly optimizes. Otherwise it's all moot: the InnoDB data files will continue to grow and grow and won't give back space unless you schedule optimizes. I used crontab to do it (there's an example entry after the list).
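A few extra snippets that might help with the steps above. For Step 1 I only said I mapped a network share; the exact command depends on what kind of share you're working with. Here's a rough sketch assuming an NFS export (the server name and paths are made up, and you'll need the nfs-common package on Debian/Ubuntu; a Samba share would use -t cifs with credentials instead):

    sudo mkdir -p /mnt/dbbackup
    sudo mount -t nfs fileserver:/export/backups /mnt/dbbackup
    # use /mnt/dbbackup as [path to backup location] in the steps above
    sudo umount /mnt/dbbackup    # when everything is verified and you're done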
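For Step 13, if you'd rather not rely on the text file, you can generate the ALTER statements straight from information_schema. This is a sketch rather than exactly what I ran; review the generated file before feeding it back in:

    # Build a list of ALTER TABLE statements for every non-InnoDB table in the zabbix schema
    mysql -u [your db user] -p -N -B -e \
      "SELECT CONCAT('ALTER TABLE \`', table_name, '\` ENGINE=InnoDB;')
       FROM information_schema.tables
       WHERE table_schema = 'zabbix' AND engine <> 'InnoDB';" > convert_to_innodb.sql

    # Look it over, then run it against the database
    mysql -u [your db user] -p zabbix < convert_to_innodb.sql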
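And for Step 15, a crontab entry along these lines does the job. This sketch assumes the MySQL credentials live in that user's ~/.my.cnf so nothing sensitive sits in the crontab itself; the day and time are arbitrary:

    # added via `crontab -e`: optimize the zabbix database every Sunday at 03:00
    0 3 * * 0 mysqlcheck -o zabbix > /dev/null 2>&1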
Well, that’s it.
On 2/22 at 10:30AM my server was using 96GB of its 120GB.
Today at 8:00AM my server was using 37GB of its 120GB. It's collecting data constantly.
Let’s see how long it’ll stay up this time (record is 40 days right now).
Moral of this adventure: unless you have a really good reason to keep everything in InnoDB's shared tablespace, turn on innodb_file_per_table. MySQL has been aware of the ibdata1 file permanently increasing in size for almost 10 years and continues to claim "it's a feature" or "it's expected behavior."
Good grief.
-M, out
Aside and not relevant: here’s a video of me nearly getting rammed off the road yesterday.