Cleaning Up A Database: Or Why MyISAM Sucks

This is a tale of a newbie sysadmin (me!) and a server (a Zabbix server). It is a tale of disk space woes, long queries, huge databases, and Oracle tomfoolery.

Thankfully I was successful in fixing the problem (I hope!) with some changes to the database. Nevertheless, it has been a stressful week. Here’s the story.

Oh man.  What a start to this week I had.  Zabbix started complaining about free disk space, memory utilization, and long query run time.  Fun on a bun.  These alerts started rolling in on Sunday.  On Monday I tried basic remediation (clearing /tmp, removing old Linux headers, clearing the apt cache, things like that) and thought I had the problem solved when I cleared about 6 GB of data.  The next day the storage alerts came back.  I didn’t have time to address them.  By the end of the day, all the errors were returning.  Zabbix was NOT happy.

After finally installing ncdu (a command-line graphical disk usage tool) I discovered the culprit: /var/lib/mysql/ibdata1 was huge.  Like, 90GB+ huge.  Gigantic, by any standard (especially on a VM with a 120GB hard drive).  Upon doing some research I discovered the reason why: ibdata1 is InnoDB’s shared system tablespace, and with MySQL’s defaults (which Zabbix inherited) file-per-table is disabled, so the database simply builds and builds in size in one single file (ibdata1).  (A few of the tables were also still on MyISAM, the old MySQL default engine; we’ll convert those along the way.)  Crap.
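
For the curious, the hunt looked roughly like this (ncdu comes straight from the package manager; these commands assume a Debian/Ubuntu box like mine):

  sudo apt-get install ncdu        # command-line disk usage browser
  sudo ncdu -x /var/lib/mysql      # drill down interactively to find the big files
  ls -lh /var/lib/mysql/ibdata1    # confirm the culprit's size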

More importantly: ibdata1 will NEVER decrease in size.  It just won’t.  You can’t optimize it.  If you delete rows you still won’t get the storage back.  The only way to reclaim the space is to dump your data, delete ibdata1 entirely, and rebuild every table into its own file.  Double crap.
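
If you want to see exactly where the space is going before committing to a rebuild, information_schema can break usage down per table.  This is a generic MySQL query, not part of the procedure below, and it assumes your database is named zabbix:

  SELECT table_name, engine,
         ROUND((data_length + index_length) / 1024 / 1024 / 1024, 1) AS size_gb
  FROM information_schema.tables
  WHERE table_schema = 'zabbix'
  ORDER BY (data_length + index_length) DESC
  LIMIT 10;

(The shared ibdata1 file won’t shrink either way, but this tells you which tables are eating the space.)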

Enter the hero of our story: InnoDB’s file-per-table mode.

Enter the new villain of our story: converting a 90GB shared ibdata1 file into individual per-table InnoDB files.

The process was (specific to Zabbix, but easy enough to generalize):

  1. Get some extra storage (You’re going to need it).  I mapped a network share temporarily.
  2. Dump the database into an SQL file:
    1. mysqldump -u [your db user] -p zabbix --allow-keywords --events > [path to backup location]/zabbix.sql

    2. Note: this file was MUCH smaller than the ibdata1 file (35GB vs 95GB).
  3. Stop MySQL
    1. sudo service mysql stop

  4. Copy the existing MySQL data files to a backup location (just in case).
    1. sudo chown [youruser]:[yourgroup] /var/lib/mysql -R

    2. This is so you can easily read/write to that folder as yourself.  We’ll change it back to mysql:mysql when we’re done.  Don’t leave it like this!
    3. cp -R /var/lib/mysql [path to backup location]
      sudo chown mysql:mysql /var/lib/mysql -R

  5. Start MySQL
    1. sudo service mysql start

  6. Drop the existing database
    1. mysql -u [your db user] -p
      drop database zabbix;

    2. Note: As expected: ibdata1 is still 90GB+ for me! 🙁
  7. Stop MySQL as in Step 3.
  8. Make changes to your MySQL my.cnf
    1. sudo nano /etc/mysql/my.cnf

    2. Add the following lines to the [mysqld] section:

      innodb_file_per_table
      innodb_flush_method=O_DIRECT
      innodb_log_file_size=1G
      innodb_buffer_pool_size=4G

    3. Please note: buffer pool should be ~4x log file size.  Adjust as necessary for your server parameters.
  9. Delete ibdata1, ib_logfile0, and ib_logfile1
      1. sudo rm /var/lib/mysql/ibdata1
        sudo rm /var/lib/mysql/ib_logfile0
        sudo rm /var/lib/mysql/ib_logfile1

  10. Start MySQL as in Step 5.
  11. Reload your database data.
    1. Recreate the empty database first (a single-database dump like ours doesn’t include a CREATE DATABASE statement):
      mysql -u [your db user] -p -e "CREATE DATABASE zabbix;"

    2. mysql -u [your db user] -p zabbix < [path to backup location]/zabbix.sql

    3. Note: it’s smaller! A lot smaller.  Because the tables are the correct size now and not filled with bloat.
  12. (Optional steps for Zabbix)
    1. It’s going to hurt, but you’re going to need to drop some data from some tables or be willing to sit there for hours and wait.
    2. mysql -u [your db user] -p
      use zabbix;
      CREATE TABLE `history_uint_new` LIKE `history_uint`;
      CREATE TABLE `trigger_uint_new` LIKE `trigger_uint`;
      DROP TABLE `history_uint`;
      DROP TABLE `trigger_uint`;
      ALTER TABLE `history_uint_new` RENAME `history_uint`;
      ALTER TABLE `trigger_uint_new` RENAME `trigger_uint`;

    3. Please note: we do this instead of just deleting the data with a MySQL DELETE command because a DELETE has to iterate through every record, and these tables are HUGE (of the 35GB import, roughly 32GB was in these two tables).
  13. Alter the table structures to ensure everything is now InnoDB
    1. mysql -u [your db user] -p
      use zabbix;

    2. For each table in your database run the command:
      1. ALTER TABLE [tablename] ENGINE=InnoDB;

    3. For Zabbix 2.4: here is a text file with all the ALTER TABLE commands.  Just copy/paste. [Link]  (Or generate them yourself; see the first sketch after this list.)
  14. Time to check and optimize
    1. mysqlcheck -c -u [a root db user] -p --all-databases

    2. If it finds problems, run:
      1. mysqlcheck -r -u [a root db user] -p --all-databases

    3. mysqlcheck -o -u [a root db user] -p --all-databases

  15. Schedule weekly optimizes
    1. Otherwise it’s all moot: even with file-per-table, InnoDB’s files keep growing and only give space back when the tables get rebuilt, so schedule regular optimizes.  I used crontab to do it (see the second sketch below).
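
As mentioned in step 13, if you don’t have a ready-made list of ALTER TABLE commands, you can have MySQL generate one for you.  This is a common information_schema trick rather than something from my original notes; it assumes the database is named zabbix, and you should eyeball the output before running it:

  SELECT CONCAT('ALTER TABLE `', table_name, '` ENGINE=InnoDB;')
  FROM information_schema.tables
  WHERE table_schema = 'zabbix' AND engine <> 'InnoDB';

Paste the statements it prints back into the mysql client.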
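
And for step 15, my weekly optimize boils down to a single crontab entry.  Treat this as a sketch: the schedule and log path are arbitrary choices, and it assumes the crontab owner’s ~/.my.cnf holds the MySQL credentials so no password lands on the command line:

  # Every Sunday at 3:00 AM, optimize all databases and log the output.
  # Credentials are read from ~/.my.cnf ([client] section).
  0 3 * * 0 mysqlcheck -o --all-databases >> /var/log/mysql-optimize.log 2>&1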

Well, that’s it.

On 2/22 at 10:30AM my server was using 96GB of its 120GB.

Today at 8:00AM my server was using 37GB of its 120GB.  It’s collecting data constantly.

Let’s see how long it’ll stay up this time (record is 40 days right now).

Moral of this adventure: unless you have a really good reason to stick with MySQL’s defaults (MyISAM tables and one giant shared InnoDB tablespace), just don’t.  MySQL has been aware of the ibdata1 file permanently increasing in size for almost 10 years and continues to claim “it’s a feature” or “it’s expected behavior.”

Good grief.

-M, out

Aside and not relevant: here’s a video of me nearly getting rammed off the road yesterday.
