NFS throughput testing trick (Solaris)

July 19th, 2011

When testing read and write speeds to an NFS-mounted file system, it's often unclear whether the bottleneck is the network connection or the underlying storage. It would be nice to take the underlying storage out of the equation entirely and concentrate on the network speed. Here is a cool trick that does exactly that: back the NFS export with a ramdisk. On the NFS server, create a ramdisk:

ramdiskadm -a ramdisk1 1000m            # create a 1000 MB ramdisk
newfs /dev/rramdisk/ramdisk1            # build a file system on the raw device
mkdir /ramdisk
mount /dev/ramdisk/ramdisk1 /ramdisk    # mount the block device
share -F nfs -o rw /ramdisk             # export it read-write over NFS
chmod 777 /ramdisk                      # open up permissions for the test
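To double-check the setup before moving on, the server's share list and the export list as seen from the client should both show /ramdisk (showmount is the standard Linux client-side tool; 192.168.1.1 is the same example server address used in the mount command below):

share                        # on the Solaris server: list active shares
showmount -e 192.168.1.1     # on the Linux client: list the server's exports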

Then on the NFS client, for example Linux, mount the ramdisk:

mkdir /ramdisk
mount -t nfs -o 'rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,nfsvers=3,timeo=600' 192.168.1.1:/ramdisk /ramdisk

Now, on the client I can read and write to the ramdisk with disk speed taken out of the picture.
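A write test first; this sketch uses /dev/zero as the source so the client side costs next to nothing, and it also creates the toto file that the read test below expects (the file name and the 800 MB size are just examples that fit inside the 1000 MB ramdisk):

time dd if=/dev/zero of=/ramdisk/toto bs=1024k count=800

And then the read test: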

time  dd if=/ramdisk/toto of=/dev/null bs=1024k count=800
838860800 bytes (839 MB) copied, 7.74495 seconds, 108 MB/s

For read tests, be aware that after the first run the reads will often be served from the client-side cache rather than going over the network.
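As Kevin Closson points out in the comments below, direct I/O sidesteps the client cache entirely. A minimal sketch with GNU dd on Linux (the oflag=direct/iflag=direct flags are not supported on every platform):

dd if=/dev/zero of=/ramdisk/toto bs=8k count=100 oflag=direct   # write, bypassing the client cache
dd of=/dev/null if=/ramdisk/toto bs=8k count=100 iflag=direct   # read, bypassing the client cache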

And to drop the ramdisk (unmount on the client first, then clean up on the server):

umount /ramdisk             # on the client
unshare -F nfs /ramdisk     # on the server: stop the export
umount /ramdisk             # on the server
ramdiskadm -d ramdisk1      # on the server: destroy the ramdisk


UPDATE: thanks to Adam Leventhal for this tidbit. Instead of creating a ramdisk, just share /tmp, which on Solaris is memory-backed tmpfs:

On Server

 share -F nfs -o rw /tmp
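To confirm that the server's /tmp really is memory-backed, the file system type can be checked; on Solaris, df -n prints the FSType for a mount point and should report tmpfs here:

df -n /tmp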

On client:

mkdir /tmpnfs
mount -o vers=4,rsize=32768,wsize=32768 172.16.1.1:/tmp  /tmpnfs
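The same dd write and read tests from above can then be pointed at the /tmp-backed mount, for example (toto is again just an example file name):

time dd if=/dev/zero of=/tmpnfs/toto bs=1024k count=800
time dd if=/tmpnfs/toto of=/dev/null bs=1024k count=800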



Comments

  1. Kevin Closson
    July 21st, 2011 at 18:41

    Hi Kyle,

    Wouldn’t it be best on the client to use dd with direct I/O (iflag=direct) or was the client a Unix derivation that has no O_DIRECT dd(1)?

  2. July 21st, 2011 at 19:53

    Hey Kevin,

    Good point. AFAICT it doesn't make much difference for writes, but for read tests it would simplify things. I lost the habit of trying the flag since some platforms don't support it, but Linux does:

    write
    dd if=/dev/zero of=toto bs=8k count=100 oflag=direct

    read
    dd of=/dev/null if=toto bs=8k count=100 iflag=direct

    – Kyle

  3. Kevin Closson
    July 21st, 2011 at 23:05

    …yeah, the point I was trying to get at is that if you write through with O_DIRECT then there is no chance of the client cache soiling your analysis… no worries…

  4. July 21st, 2011 at 23:58

    Hey Kevin – thanks for dropping by and pointing this out. Yes, when it works it makes life much easier!

  5. Alex Gorbachev
    July 22nd, 2011 at 04:02

    I actually used RAM disks mounted via NFS about 4 years ago to stress test one interesting platform – could get 0.3 ms random 8K reads with super nice throughput. That's twice as good as Exadata flash reads today. :)


  6. Sumesh
    December 1st, 2011 at 12:42

    The advantage of using a ramdisk is that disk read overhead stays out of the picture when testing network speed over the NFS mount.

  7. madhavi
    August 14th, 2012 at 14:49

    I might be dumb here, but is this the update…?

    ramdiskadm -a ramdisk1 1000m
    newfs /dev/rramdisk/ramdisk1
    mkdir /ramdisk

    mount /dev/ramdisk/ramdisk1 /tmp ?
    share -F nfs -o rw /tmp?
    chmod 777 /tmp?

  8. August 14th, 2012 at 15:19

    Hi Madhavi,

    No need to use a ramdisk at all. One can just use /tmp (since on Solaris /tmp is memory-backed tmpfs):

    On Server

    share -F nfs -o rw /tmp

    On client:

    mkdir /tmpnfs
    mount -o vers=4,rsize=32768,wsize=32768 172.16.1.1:/tmp /tmpnfs
