I/O Benchmarking tools

September 28th, 2011

This blog post is a place to park ideas and experiences with I/O benchmarking tools, and it will be updated on an ongoing basis.

Please feel free to share your own experiences with these tools or others in the comments!


There are a number of tools out there for I/O benchmark testing, such as:

  • fio
  • IOZone
  • bonnie++
  • FileBench
  • Tiobench
  • orion

My choice for best of breed is fio (thanks to Eric Grancher for suggesting fio).

Orion

  • Orion
  •  Orion_Users_Guide.pdf

For Oracle I/O testing, Orion from Oracle would be the normal choice, but I've run into some install errors (which were solved) and, more importantly, some runtime bugs.
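
For reference, here is a sketch of how an Orion run is typically kicked off; the test name below is hypothetical. Orion expects a matching mytest.lun file listing the devices or files to test, one per line:

# mytest.lun lists the LUNs/files to test, e.g. /dev/sdc
$ ./orion -run simple -testname mytest -num_disks 1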

IOZone

IOZone, available at http://linux.die.net/man/1/iozone, is the tool I see the most references to on the net and in Google searches. The biggest drawback of IOZone is that there seems to be no way to limit the test to 8K random reads.
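
For example, the closest invocation I'm aware of is the random read/write test, which bundles random reads and random writes together (the file path below is hypothetical; -i 0 creates the test file, -i 2 runs the random read/write phase, -I requests direct I/O):

$ iozone -i 0 -i 2 -r 8k -s 200m -I -f /tmpnfs/iozone.tmp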

Bonnie++

http://www.googlux.com/bonnie.html

Bonnie++ is close to IOZone, but not quite as flexible, and even less flexible than fio.
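
As a minimal sketch (the directory and user are placeholders; -s is the test file size in MB and should be at least twice RAM to defeat caching):

$ bonnie++ -d /tmpnfs -s 4096 -u nobody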

FileBench

http://sourceforge.net/projects/filebench/
FileBench Wiki
I haven't investigated FileBench yet, though it looks interesting.

Tiobench

http://sourceforge.net/projects/tiobench/

Not much information available.

Fio – flexible I/O tester

Fio Man Page
Fio How to
http://freshmeat.net/projects/fio/
Here is a description from the above URL:
“fio is an I/O tool meant to be used both for benchmark and stress/hardware verification. It has support for 13 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files. fio accepts job descriptions in a simple-to-understand text format. Several example job files are included. fio displays all sorts of I/O performance information. Fio is in wide use in many places, for both benchmarking, QA, and verification purposes. It supports Linux, FreeBSD, NetBSD, OS X, OpenSolaris, AIX, HP-UX, and Windows.”
With fio, you set up the benchmark options in a configuration file. For example:


# job name goes between brackets (except "global", which is reserved
# for a section that sets defaults for all jobs)
[read_8k_200MB]

# overwrite: if true, fio will create the file if it doesn't exist;
# if the file exists and is large enough, nothing happens.
# Here it is set to false because the file should already exist.
overwrite=0

#rw=
#   read        Sequential reads
#   write       Sequential writes
#   randwrite   Random writes
#   randread    Random reads
#   rw          Sequential mixed reads and writes
#   randrw      Random mixed reads and writes
rw=read

# ioengine=
#    sync       Basic read(2) or write(2) io. lseek(2) is
#               used to position the io location.
#    psync      Basic pread(2) or pwrite(2) io.
#    vsync      Basic readv(2) or writev(2) IO.
#    libaio     Linux native asynchronous io.
#    posixaio   glibc posix asynchronous io.
#    solarisaio Solaris native asynchronous io.
#    windowsaio Windows native asynchronous io.
ioengine=libaio

# direct If value is true, use non-buffered io. This is usually
#        O_DIRECT. Note that ZFS on Solaris doesn't support direct io.
direct=1

# bs The block size used for the io units. Defaults to 4k.
bs=8k

directory=/tmpnfs

# fadvise_hint: if set to true, fio will use fadvise() to advise the
#               kernel on what IO patterns it is likely to issue.
fadvise_hint=0

# nrfiles= Number of files to use for this job. Defaults to 1.
nrfiles=1
filename=toto.dbf
size=200m

Then run

$ fio config_file

read_8k_200MB: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=1
fio 1.50
Starting 1 process
Jobs: 1 (f=1): [R] [100.0% done] [8094K/0K /s] [988 /0  iops] [eta 00m:00s]
read_8k_200MB: (groupid=0, jobs=1): err= 0: pid=27041
  read : io=204800KB, bw=12397KB/s, iops=1549 , runt= 16520msec
    slat (usec): min=14 , max=2324 , avg=20.09, stdev=15.57
    clat (usec): min=62 , max=10202 , avg=620.90, stdev=246.24
     lat (usec): min=203 , max=10221 , avg=641.43, stdev=246.75
    bw (KB/s) : min= 7680, max=14000, per=100.08%, avg=12407.27, stdev=1770.39
  cpu          : usr=0.69%, sys=2.62%, ctx=26443, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=25600/0/0, short=0/0/0
     lat (usec): 100=0.01%, 250=2.11%, 500=20.13%, 750=67.00%, 1000=3.29%
     lat (msec): 2=7.21%, 4=0.23%, 10=0.02%, 20=0.01%

Run status group 0 (all jobs):
   READ: io=204800KB, aggrb=12397KB/s, minb=12694KB/s, maxb=12694KB/s, mint=16520msec, maxt=16520msec
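
One nice thing about fio, and exactly where IOZone fell short above, is how easily the same job can be switched to 8K random reads. A sketch of the lines to change in the job file above (the job name is arbitrary; iodepth is optional, and with libaio an iodepth greater than 1 only takes effect when direct=1, since buffered IO is not async on Linux):

[randread_8k_200MB]
# change the access pattern from sequential to random reads
rw=randread
# optionally keep several I/Os in flight to probe concurrency
iodepth=8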


Comments

  1. Karl Arao
    September 28th, 2011 at 19:52 | #1

    You may also want to check out Robin Miller's "dt" tool. Mark Seger pointed me to this one, as he uses it intensively when doing test cases on collectl – http://www.scsifaq.org/RMiller_Tools/dt.html

  2. September 28th, 2011 at 20:07 | #2

    Thanks Karl – I’ll have to check that out!

  3. Przemyslaw Bak (przemol)
    October 3rd, 2011 at 09:39 | #3

    Hi,

    I recommend vdbench and SWAT by Henk Vandenbergh. Look at: http://przemol.blogspot.com/2008/06/vdbench-disk-io-workload-generator.html
    Vdbench is very flexible in customizing the I/O workload, and paired with SWAT you can also replay a real workload saved during normal production (recorded with SWAT, replayed with vdbench).

  4. Mark Seger
    October 4th, 2011 at 11:57 | #4

    As Karl said, I'm a big fan of dt, but I'm an even bigger fan of data! None of these tools tell you what's going on during the test, and in my opinion that's one of the main purposes of doing the benchmarking in the first place. Just because a test takes n seconds to run, it doesn't necessarily mean anything! What if you had an abnormal burst of CPU during the test that affected the run? Is the test taking caching into account?

    A great example of the negative effect of caching is trying to write files a lot smaller than your system's memory. Your benchmark might tell you your I/O rate was 100MB/sec when in fact it might be measuring the time it spent filling the cache. Without a tool like collectl, watching disk/memory usage every second (or even less), you'll never even know.

    -mark

  5. Przemyslaw Bak (przemol)
    December 15th, 2011 at 12:04 | #5

    @Mark Seger
    IMHO, doing benchmarks on cached filesystems is cheating yourself. You should prepare the whole environment to be similar to your production. Otherwise it doesn't make any sense …

  6. November 9th, 2012 at 19:29 | #6

    @Przemyslaw Bak: fio offers the option of using direct I/O, so filesystem caching can be avoided. Some filesystems, like ZFS, don't support direct I/O, but in the case of ZFS one can turn off data caching via the primarycache and secondarycache properties, which is the equivalent of turning off filesystem caching. Then of course, when possible, I test the raw devices and just avoid the whole issue.
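
    For reference, a sketch of the ZFS settings I mean (pool/fs is a placeholder for your dataset):

    # cache only metadata, not file data, in the ARC
    $ zfs set primarycache=metadata pool/fs
    # disable the L2ARC for this dataset entirely
    $ zfs set secondarycache=none pool/fs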

    I haven’t tried dt or vdbench but am interested in looking into them.

    I find the fio community quite active and the tool flexible so I’m dubious of something like vdbench that’s primarily for the Solaris community having the same level of flexibility but I’m interested in finding out.
