NFS throughput testing trick (Solaris)
When testing read and write speed to an NFS-mounted file system, it's often unclear whether the bottleneck is the speed of the network connection or the speed of the underlying storage. It would be nice to take the underlying storage out of the equation and concentrate on the network speed alone. Here is a cool trick for doing just that when testing an NFS mount: use a ramdisk to take the storage subsystem out of the picture. On the NFS server, create a ramdisk:
ramdiskadm -a ramdisk1 1000m
newfs /dev/rramdisk/ramdisk1
mkdir /ramdisk
mount /dev/ramdisk/ramdisk1 /ramdisk
share -F nfs -o rw /ramdisk
chmod 777 /ramdisk
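To confirm the export took effect, run share with no arguments on the server; on Solaris it lists the current NFS shares:
share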
Then on the NFS client, for example Linux, mount the ramdisk:
mkdir /ramdisk
mount -t nfs -o 'rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,nfsvers=3,timeo=600' 192.168.1.1:/ramdisk /ramdisk
Now, on the client, I can read and write to the ramdisk and avoid the disk speed issues:
time dd if=/ramdisk/toto of=/dev/null bs=1024k count=800
838860800 bytes (839 MB) copied, 7.74495 seconds, 108 MB/s
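The toto file itself can be created with the matching write test; run this from the client before the read test (the 800 MB size is just an example, and /dev/zero as the source keeps local disk reads on the client out of the measurement):
time dd if=/dev/zero of=/ramdisk/toto bs=1024k count=800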
For read tests, be aware that in many cases, reads will come from caching on the client side after the first run.
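One hedge against that on a Linux client is to flush the page cache between runs (requires root, and is Linux-specific):
sync
echo 3 > /proc/sys/vm/drop_caches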
And to drop the ramdisk:
umount /ramdisk
ramdiskadm -d ramdisk1
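If the filesystem is still shared at that point, unshare it before unmounting (Solaris unshare(1M)):
unshare /ramdisk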
UPDATE: thanks to Adam Leventhal for this tidbit. Instead of creating a ramdisk, just use /tmp:
On the server:
share -F nfs -o rw /tmp
On the client:
mkdir /tmpnfs
mount -o vers=4,rsize=32768,wsize=32768 172.16.1.1:/tmp /tmpnfs
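The same dd tests then run against the /tmp-backed mount (the file name testfile and the 800 MB size here are just examples):
time dd if=/dev/zero of=/tmpnfs/testfile bs=1024k count=800
time dd if=/tmpnfs/testfile of=/dev/null bs=1024k count=800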
Comments
Hi Kyle,
Wouldn't it be best on the client to use dd with direct I/O (iflag=direct), or was the client a Unix derivative that has no O_DIRECT dd(1)?
Hey Kevin,
Good point. AFAICT it doesn't make much difference for writes, but for read tests it would simplify things. I lost the habit of trying the flag, as some platforms don't support it, but Linux does:
write
dd if=/dev/zero of=toto bs=8k count=100 oflag=direct
read
dd of=/dev/null if=toto bs=8k count=100 iflag=direct
– Kyle
Yeah… the point I was trying to get at is that if you write through with O_DIRECT, there is no chance of the client cache soiling your analysis… no worries…
Hey Kevin – thanks for dropping by and pointing this out. Yes, when it works it makes life much easier!
I actually used RAM disks mounted via NFS about 4 years ago to stress test one interesting platform – I could get 0.3 ms random 8K reads with super nice throughput. That's twice as good as Exadata flash reads today. :)
The advantage of using a ramdisk is that disk read overhead stays out of the network speed testing for the NFS mount.
I might be dumb here, but is this the update…?
ramdiskadm -a ramdisk1 1000m
newfs /dev/rramdisk/ramdisk1
mkdir /ramdisk
mount /dev/ramdisk/ramdisk1 /tmp ?
share -F nfs -o rw /tmp?
chmod 777 /tmp?
Hi Madhavi,
No need to use a ramdisk at all. One can just use /tmp, since on Solaris /tmp is tmpfs and lives in memory.
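A quick way to confirm that on the server is df -n, which prints the filesystem type on Solaris:
df -n /tmp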
On the server:
share -F nfs -o rw /tmp
On the client:
mkdir /tmpnfs
mount -o vers=4,rsize=32768,wsize=32768 172.16.1.1:/tmp /tmpnfs