MySQL File System Fragmentation Benchmarks

MySQL Performance Blog - Sat, 22/03/2008 - 3:47am

A few days ago I wrote about writing to many files at once and how this affects sequential read performance. I was very interested to see how it shows up with real tables, so I got a script together and ran tests for MyISAM and Innodb tables on the ext3 filesystem. Here is what I found:

The fragmentation we speak of in this article is filesystem fragmentation, or internal table fragmentation, which affects the performance of full table scans. Not all queries are affected the same way: a point select reading a single page, for example, should not be significantly affected, i.e. you may not be hurt as badly as the numbers below show.
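
To make the distinction concrete, here is a pair of hypothetical queries (the table t1, its primary key id and the val column are made-up names for illustration, not from the benchmark):

  <?php
  // Hypothetical illustration - table and column names are made up
  $db = new mysqli('localhost', 'root', '', 'test1');

  // Affected by fragmentation: a full table scan reads all pages of the
  // table in file order, so a fragmented file turns what should be
  // sequential IO into random IO
  $db->query('SELECT SUM(LENGTH(val)) FROM t1');

  // Largely unaffected: a point select descends the index and reads a
  // single page, which costs about one seek whether or not the file is
  // fragmented
  $db->query('SELECT val FROM t1 WHERE id = 12345');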

Benchmarks were done using this script, driven by the following simple shell loop:

  [root@DB10 ~]# for i in 1 10 100 1000 10000; do ./benchmark.php $i 10000000; mysql -e'drop database test1'; mysql -e'create database test1'; done;
  TABLES: 1; total records: 10000000; WRITE rows per sec: 9498.478176443 , reads rows per sec: 45142.234447809sec.
  TABLES: 10; total records: 10000000; WRITE rows per sec: 8401.0627704619 , reads rows per sec: 18970.855770078sec.
  TABLES: 100; total records: 10000000; WRITE rows per sec: 6689.2612428044 , reads rows per sec: 2212.7170957877sec.
  TABLES: 1000; total records: 10000000; WRITE rows per sec: 5180.4069362984 , reads rows per sec: 1346.7226581156sec.
  TABLES: 10000; total records: 10000000; WRITE rows per sec: 262.6496819245 , reads rows per sec: 1169.4009695919sec.

The script creates the specified number of tables and performs the specified number of inserts into randomly chosen tables, so all tables grow concurrently. I used default MySQL settings for MyISAM (table_cache=64) and set innodb_buffer_pool_size=8G, innodb_flush_log_at_trx_commit=2, innodb_log_file_size=256M and innodb_flush_method=O_DIRECT for Innodb.
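
The original benchmark.php is not shown in the post, so here is a minimal sketch of what such a script might look like; the table layout, the roughly 4K row size and the MyISAM engine are my assumptions (for the Innodb runs the ENGINE clause would change accordingly):

  <?php
  // Minimal sketch, not the author's actual benchmark.php - table
  // layout, row size and engine are assumptions
  $tables = (int)$argv[1]; // number of tables to spread inserts across
  $rows   = (int)$argv[2]; // total number of rows to insert

  $db = new mysqli('localhost', 'root', '', 'test1');

  // Create the requested number of identical tables
  for ($i = 0; $i < $tables; $i++) {
      $db->query("CREATE TABLE t$i (id INT AUTO_INCREMENT PRIMARY KEY,
                  val VARCHAR(4096)) ENGINE=MyISAM");
  }

  // Inserts go to randomly chosen tables so that all tables grow
  // concurrently, interleaving their pages on disk
  $pad   = str_repeat('x', 4000); // roughly 4K per row
  $start = microtime(true);
  for ($r = 0; $r < $rows; $r++) {
      $t = mt_rand(0, $tables - 1);
      $db->query("INSERT INTO t$t (val) VALUES ('$pad')");
  }
  printf("WRITE rows per sec: %s\n", $rows / (microtime(true) - $start));

  // Full scan of every table measures how badly the interleaved growth
  // fragmented the data on disk
  $start = microtime(true);
  for ($i = 0; $i < $tables; $i++) {
      $db->query("SELECT SUM(LENGTH(val)) FROM t$i");
  }
  printf("reads rows per sec: %s\n", $rows / (microtime(true) - $start));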

The tables were sized so that in total they are considerably larger than the amount of memory in the box (10M rows of roughly 4K each is about 40GB, well above the 8GB buffer pool), so the full table scans are IO bound.

As you can see from the MyISAM results (above), insert speed does not degrade too badly until going from 1000 to 10000 tables, even though table_cache was just 64. I expect this is because updating the index header (the most expensive part of opening and closing a MyISAM table) can be done by the OS in the background, and flushing 1000 dirty pages every 30 seconds is not much overhead for this server configuration.

Going to 10000 tables, however, insert speed dropped 20 times. This could be because ext3 does not like so many files in one directory, or because random updates to 10000 distinct index header pages, not to mention modification time updates, add a lot of overhead. During this last test the box felt really sluggish, taking 10+ seconds to respond to a command as simple as "ls", even though loadavg was about 1. In the process list I could see single-value insert statements taking over 5 seconds... So it does not work very well.

Read performance, which is the main measurement of this benchmark, suffered as expected - with 10000 tables it was almost 40 times worse than with a single table! Looking at iostat output I could see an average read size of just 4K, which means ext3 does a horrible job of extent allocation in this case. Note, however, that even 100 tables are enough to drop performance 20 times.

Innodb in single tablespace mode showed the following results:

  [root@DB10 ~]# for i in 1 10 100 1000 10000; do ./benchmark.php $i 10000000; mysql -e'drop database test1'; mysql -e'create database test1'; done;
  TABLES: 1; total records: 10000000; WRITE rows per sec: 4919.7214223134 , reads rows per sec: 25408.766711241sec.
  TABLES: 10; total records: 10000000; WRITE rows per sec: 4887.5507251885 , reads rows per sec: 11848.973747839sec.
  TABLES: 100; total records: 10000000; WRITE rows per sec: 4007.1215976416 , reads rows per sec: 11826.941546043sec.
  TABLES: 1000; total records: 10000000; WRITE rows per sec: 2838.9678814081 , reads rows per sec: 13758.602641499sec.
  TABLES: 10000; total records: 10000000; WRITE rows per sec: 803.46939763369 , reads rows per sec: 3629.3610005806sec.

As you can see, insert speed starts lower but degrades less, even though the drop from 1000 to 10000 tables is dramatic here as well. Read speed is also lower (expected, as the table was larger for the same number of rows), though it drops at a different rate. Interestingly enough, it dropped just 2 times and was about the same for 10, 100 and 1000 tables, which could be thanks to extent allocation for these rather large tables. For 10000 tables each table held just 1000 of the 4K rows, which caused too much space to be allocated as single pages. I expect that if we used a larger number of rows, read performance for 10000 tables would be close to the others as well.

Innodb with innodb_file_per_table=1 had the following results:

  [root@DB10 ~]# for i in 1 10 100 1000 10000; do ./benchmark.php $i 10000000; mysql -e'drop database test1'; mysql -e'create database test1'; done;
  TABLES: 1; total records: 10000000; WRITE rows per sec: 4479.3690015631 , reads rows per sec: 25554.477094788sec.
  TABLES: 10; total records: 10000000; WRITE rows per sec: 4279.1557765714 , reads rows per sec: 16787.265656296sec.
  TABLES: 100; total records: 10000000; WRITE rows per sec: 3609.974742019 , reads rows per sec: 16525.06580466sec.
  TABLES: 1000; total records: 10000000; WRITE rows per sec: 2130.8515988384 , reads rows per sec: 11602.826401997sec.
  TABLES: 10000; total records: 10000000; WRITE rows per sec: 434.330528194 , reads rows per sec: 5157.1389149296sec.

Insert performance is close; the difference is perhaps explained by the fact that the files constantly needed to be extended (metadata updates) and, with more than 100 tables, reopened. Read performance starts out close but degrades less for 10 and 100 tables, and is better again for 10000 tables. I can't explain why it is a bit worse for 1000 tables, though as I did only one run (it took more than 24 hours) it could also be some activity spike.

The somewhat better performance in this case can perhaps be explained by the larger increments in which separate tablespace files are grown, compared to internal allocation from the single shared tablespace.

Summary: There are a few basic things we can learn from these results:
- Concurrent growth of many tables causes data fragmentation and affects table scan performance badly
- MyISAM suffers worse than Innodb
- Innodb extent allocation works (and would perhaps be a good option for MyISAM as well)
- Innodb suffers less from fragmentation if it stores different tables in different files

Entry posted by peter
