Direct I/O for Solaris benchmarking

September 23rd, 2011

ZFS doesn’t have direct I/O.
Solaris dd doesn’t have an iflag=direct option.

Thus I/O benchmarking requires unmounting and remounting the file system between tests on UFS, and exporting and re-importing the pools on ZFS.
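For reference, that cache-flushing dance looks roughly like this (the mount point, device path, and pool name below are placeholders, not from the original post):

```shell
# UFS: unmount and remount between runs so cached blocks are dropped
umount /mnt/ufs
mount /dev/dsk/c0t0d0s0 /mnt/ufs

# ZFS: export and re-import the pool to invalidate its cached data
zpool export tank
zpool import tank
```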

But there is a trick: reading off of /dev/rdsk will bypass the file system cache.

Here is a simple piece of code that will benchmark the disks. The code was put together by George Wilson and Jeff Bonwick (I believe).

#!/bin/ksh
# List the disks that format(1M) can see, e.g. c0t0d0, c1t0d0, ...
disks=`format < /dev/null | grep c.t.d | nawk '{print $2}'`

# Time a raw 64 MB read off one disk and report MB/sec.
getspeed1()
{
       # 1024 blocks x 64 KB = 67,108,864 bytes; divide by elapsed real time
       ptime dd if=/dev/rdsk/${1}s0 of=/dev/null bs=64k count=1024 2>&1 |
           nawk '$1 == "real" { printf("%.0f\n", 67.108864 / $2) }'
}

# Run three passes and keep the middle result to discard outliers.
getspeed()
{
       for iter in 1 2 3
       do
               getspeed1 $1
       done | sort -n | tail -2 | head -1
}

for disk in $disks
do
       echo $disk `getspeed $disk` MB/sec
done
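The magic constant in the nawk line is just the transfer size in megabytes: 1024 blocks x 64 KB = 67,108,864 bytes, i.e. 67.108864 MB, so dividing by dd's elapsed real time gives MB/sec. A quick sanity check of that arithmetic (shown with awk, since nawk is Solaris-specific; the "real 2.5" input is a made-up example of ptime output):

```shell
# A 67.108864 MB read that took 2.5 seconds works out to ~27 MB/sec
echo "real 2.5" | awk '$1 == "real" { printf("%.0f\n", 67.108864 / $2) }'
# prints 27
```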

 


Comments

  2. September 24th, 2011 at 19:07 | #1

    Hi Kyle,

    This script cannot work cut-and-paste-wise… the pipe to nawk following a new line would not work (just end the line with a “\”).

    Is getspeed1 (executed inside the getspeed() ksh function) an executable in the PATH of the process executing this script? I’m not a sol expert so maybe there is a /bin/getspeed1? Seems unlikely.

    Also, how does reading the raw disks under a file system help compare UFS to ZFS in any way? Maybe I’m missing the point?

    P.S., I’m so glad you joined us for beers the other night…yet another OakTable moment :-)

  3. September 24th, 2011 at 20:31 | #2

    The disks= line is also missing the closing command-substitution backquote and is followed by a stray closing parenthesis. Must be a bad paste of a korn function.

  4. September 25th, 2011 at 17:07 | #3

    Hey Kevin,
    Thanks for stopping by and sorry for the mucked up script. Yes – something went wrong with the posting. Looks like WordPress gets in and mucks with stuff even between <pre></pre>. Ugh!

    RE ZFS and UFS: I do most of my work on ZFS, which doesn’t have direct I/O, so it’s nice to have a way to run I/O benchmarks from disk without having to mess with the pools.
    I also do some tests with UFS, and since dd doesn’t have a direct I/O flag like on Linux, it’s nice to have a way to run dd with direct I/O without the flag.
    – Kyle

  5. September 25th, 2011 at 23:09 | #4

    ok…getspeed1() still won’t work without a backslash after the pipe…I’d just join the nawk line up

    …anyway, I still push the point that you are not testing the file system – be it UFS, ZFS, or XYZ – when pounding the underlying disks… that is a good way to see what the drives can do but tells you nothing about what the file system can do…

  6. September 26th, 2011 at 14:59 | #5

    filesystem vs disks – yes, this test is primarily for testing the disks.
    My main goal in the above tests is testing the underlying disks to see whether they are the bottleneck. I’m dealing with Oracle databases over NFS, so the easiest thing is to go through the stack – disks, filesystem (zfs), nfs, tcp, network, host machine (cpu, memory) – to the database (I/O read or write, CPU, other waits).
    If the underlying areas can be isolated and tested, it makes understanding the performance of the stack much easier.

  7. September 26th, 2011 at 16:27 | #6

    OK, Kyle, I sort of thought that may be the case. There’s one issue that comes to mind. For production purposes customers should be running dNFS against a Filer and logging in to a filer to fiddle with dd(1) is a problem. I think you are mostly discussing the Sun S7000 (Amber Roads) in this particular case. I can’t think of any other production-quality Filer that lets one spelunk about with a full-featured shell. Am I missing the point?

    I have a dd(1) from the GNU source around here somewhere that uses Solaris directio(3C) which would allow you to read the UFS files in the direct I/O path at least. I’ll see if I can rummage it up. I hacked that out whilst blogging about cp(1) on Solaris (http://kevinclosson.wordpress.com/2007/02/23/standard-file-utilities-with-direct-io/)

  8. November 14th, 2011 at 17:58 | #7

    Kyle,

    Perhaps your readers might consider pushing directio onto an existing file with the code offered at the following link? Probably won’t work with ZFS…don’t know…

    http://kevinclosson.wordpress.com/2007/01/11/analysis-and-workaround-for-the-solaris-10203-patchset-problem-on-vxfs-files/
