Yann Neuhaus

Simulating database-like I/O activity with Flexible I/O

You do not want to install or configure Swingbench, LoadRunner, etc. just to test the I/O performance of your filesystems? Then Flexible I/O is the right tool for you. This post provides an overview.

ORION cannot be used to simulate Oracle database I/O workload on filesystems, since it only works on raw or block devices (bypassing, and potentially destroying, the filesystem). To simulate Oracle database I/O behaviour on filesystems, we can use the Flexible I/O (fio) utility instead:
http://forums.oracle.com/forums/thread.jspa?threadID=1115772

Flexible I/O is presented here:
http://linux.die.net/man/1/fio

In our case, Flexible I/O has been configured to simulate several load types similar to database behavior, based on the following recommendations:
http://www.slideshare.net/markwkm/filesystem-performance-from-a-database-perspective

The tests were run on an ext3 filesystem, with both asynchronous I/O and direct I/O enabled, using release 1.50 of the utility. ext4 was not considered, since it is only fully supported from Red Hat EL 6 on, and we were using a Red Hat 5 release.

Flexible I/O has been installed from the following resources:
http://rpm.pbone.net/index.php3?stat=26&dist=52&size=176279&name=fio-1.50-1.el5.rf.x86_64.rpm
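A quick way to verify the installation is to query the version. The snippet below is only a sketch: it assumes the RPM referenced above has been downloaded locally, and the rpm command it suggests requires root privileges.

```shell
# Check whether fio is already available; if not, print a hint pointing
# at the RPMforge EL5 build referenced above (install requires root).
VER=$(fio --version 2>/dev/null || echo "fio not installed - try: rpm -ivh fio-1.50-1.el5.rf.x86_64.rpm")
echo "$VER"
```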

The following Flexible I/O features really help to perform tests close to real database I/O:

– Flexible I/O simulates I/O on a pre-allocated file on the filesystem (just as Oracle uses its own datafiles)
– It can simulate sequential or random reads and writes
– It can also simulate a mixed workload (reads and writes, with a configurable read/write ratio – like a database)
– It lets you set the block size used for the I/O (ideal for database workload simulation)
– It can simulate both direct and asynchronous I/O (based on the libaio engine)
– Finally, it can parallelize the load across several processes (in this case the file size should be adapted, since each process uses its own file for the I/O simulation)
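The last point is worth a quick illustration: each job (process) allocates its own file, so the on-disk footprint is size multiplied by numjobs, and the per-job size must be divided accordingly to keep the same total. The numbers below match the 8 GB total used in the configuration that follows.

```shell
# Each fio job allocates its own file, so total footprint = size * numjobs.
# To keep the same 8 GB total as the single-process runs, 8 parallel jobs
# each get a 1 GB file.
TOTAL_GB=8
NUMJOBS=8
PER_JOB_GB=$((TOTAL_GB / NUMJOBS))
echo "size=${PER_JOB_GB}g numjobs=${NUMJOBS}"
# prints: size=1g numjobs=8
```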

The following configuration file has been created, with several sections corresponding to different I/O types (sequential, random, etc.):

$ cat fio_config.cfg
 [seq-read]
 # Sequential reads
 rw=read
 # Size of the I/O (test file size)
 size=8g
 # Directory where the test file will be created
 directory=/utest
 # Disables posix_fadvise (predeclaring an access pattern for file data)
 fadvise_hint=0
 # Block size
 blocksize=8k
 # Use of direct I/O
 direct=1
 # Number of I/O processes:
 numjobs=1
 # Number of files:
 nrfiles=1
 # Duration of the test in seconds
 runtime=300
 # Usage of ASYNC I/O
 ioengine=libaio
 # Runtime based: overwrites or re-reads the specified file several times
 time_based
 # To free pagecache, dentries and inodes (only possible as root, therefore commented out):
 # exec_prerun=echo 3 > /proc/sys/vm/drop_caches
[seq-write]
 rw=write
 size=8g
 directory=/utest
 fadvise_hint=0
 blocksize=8k
 direct=1
 numjobs=1
 nrfiles=1
 runtime=300
 ioengine=libaio
 time_based
 # exec_prerun=echo 3 > /proc/sys/vm/drop_caches
[random-read]
 # Each process allocates one file, therefore to have 8G the size should be set to 1G with 8 processes
 rw=randread
 size=1g
 directory=/utest
 fadvise_hint=0
 blocksize=8k
 direct=1
 numjobs=8
 nrfiles=1
 runtime=300
 ioengine=libaio
 time_based
 # exec_prerun=echo 3 > /proc/sys/vm/drop_caches
[random-write]
 # Each process allocates one file, therefore to have 8G the size should be set to 1G with 8 processes
 rw=randwrite
 size=1g
 directory=/utest
 fadvise_hint=0
 blocksize=8k
 direct=1
 numjobs=8
 nrfiles=1
 runtime=300
 ioengine=libaio
 time_based
 # exec_prerun=echo 3 > /proc/sys/vm/drop_caches
[read-write]
 # Each process allocates one file, therefore to have 8G the size should be set to 1G with 8 processes
 rw=rw
 rwmixread=80
 size=1g
 directory=/utest
 fadvise_hint=0
 blocksize=8k
 direct=1
 numjobs=8
 nrfiles=1
 runtime=300
 ioengine=libaio
 time_based
 # exec_prerun=echo 3 > /proc/sys/vm/drop_caches

To start the tool, pass the configuration file as a parameter, along with the section you want to run:
fio --section=seq-read fio_config.cfg
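To run all five sections one after the other, with one log file per section, a small wrapper loop can be used. This is only a sketch: the section names match the configuration file above, and DRY_RUN defaults to 1 here so the commands are just printed (set DRY_RUN=0 to actually launch fio).

```shell
# Run every section of fio_config.cfg in turn, one log file per section.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# to really execute them.
DRY_RUN=${DRY_RUN:-1}
for section in seq-read seq-write random-read random-write read-write; do
  cmd="fio --section=$section --output=${section}.log fio_config.cfg"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```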

Make sure the filesystem /utest has enough space to allocate the test files (their size is defined by "size", once per job).
Explaining and discussing the results is of course out of the scope of this post, but just drop a comment and I will come back to you!
