Over the past year I have primarily used NFS to host storage for my virtual machines. NFS, and even software iSCSI, is generally regarded as the simplest and easiest way to get up and running with VMware.
Lately, however, the performance has just not been what one would expect from a RAID setup. I have an aging but still useful storage array based on HighPoint’s 3122 card. Locally on the storage server I get ~110MB/s for both reads and writes; over NFS and iSCSI, even with good network infrastructure (an HP 1810-24G switch), performance averaged only around 45MB/s.
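For reference, the local numbers came from simple sequential `dd` runs. A rough sketch of that kind of test (file path and sizes here are illustrative, not my exact commands):

```shell
# Sequential write: 256 MiB of zeros. conv=fdatasync forces the data to
# disk before dd reports throughput, so the page cache doesn't inflate it.
dd if=/dev/zero of=./testfile bs=1M count=256 conv=fdatasync

# Drop the page cache (needs root; skipped otherwise) so the read
# actually hits the disks instead of RAM.
if [ -w /proc/sys/vm/drop_caches ]; then echo 3 > /proc/sys/vm/drop_caches; fi

# Sequential read of the same file.
dd if=./testfile of=/dev/null bs=1M
```

Larger file sizes give more trustworthy numbers on arrays with big caches, but the shape of the test is the same.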
After much reading, I found that NFS performance on VMware is limited: there is only one control channel, and VMware forces a synchronous commit on every read/write transaction, which kills NFS performance. The same goes for software iSCSI. VMware does this to protect data integrity, so it uses O_DIRECT for everything, and that makes performance less than optimal.
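The penalty is easy to reproduce outside VMware. The sketch below (illustrative sizes, current-directory files) contrasts a normal buffered write with one using `oflag=dsync`, which, like VMware's sync-on-every-transaction behavior, makes each write wait for stable storage; on rotating disks the second run is dramatically slower:

```shell
# Buffered write: the page cache absorbs 16 MiB almost instantly.
dd if=/dev/zero of=./buffered.img bs=64k count=256

# Synchronous write: oflag=dsync forces every 64 KiB write to stable
# storage before the next one starts -- roughly the per-transaction
# penalty VMware's NFS/iSCSI clients impose.
dd if=/dev/zero of=./sync.img bs=64k count=256 oflag=dsync
```

Compare the MB/s figures dd prints for the two runs to see the cost of forcing every write to disk.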
I bit the bullet and hit eBay. After acquiring a SAN switch, Fiber Channel HBAs, and fiber cables, I now have a SAN. The bottleneck is now my old storage array, as I can easily get 110MB/s per VM. I ended up picking up:
- EMC Silkworm 3852 2Gb/s 16 port SAN switch ($250)
- QLogic 2462 4Gb/s 2 Port HBA ($150)
- Emulex LPe1050EX (2Gb/s) /LP1150-E (4Gb/s) x 3 ($300)
Using SCST 2.0, I was able to create Fiber Channel targets on the QLogic card, and everything works well. My next storage adventure will be upgrading the array itself to beefier, more IOPS-scalable hardware. That is a 2011 project!
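For the curious, SCST 2.0 describes the target layout in `/etc/scst.conf` (loaded with `scstadmin`). A minimal sketch of the kind of config involved, exporting a local block device as LUN 0 on the QLogic target driver (the device name, backing store, and WWN below are placeholders, not my actual values):

```
# /etc/scst.conf -- minimal sketch; disk01, /dev/md0, and the WWN are placeholders
HANDLER vdisk_fileio {
        DEVICE disk01 {
                filename /dev/md0
        }
}

TARGET_DRIVER qla2x00t {
        TARGET 21:00:00:xx:xx:xx:xx:xx {
                enabled 1
                LUN 0 disk01
        }
}
```

This assumes the SCST core and the `qla2x00t` target-mode driver are already loaded for the QLogic HBA.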
If anyone asks: no more NFS or software iSCSI for me. Though VMware will say otherwise, those technologies just do not scale unless you have expensive EMC or NetApp gear with custom code to work around the VMware O_DIRECT problem. Those of us on open source such as BSD and Linux will be stuck with less-than-stellar NFS and iSCSI performance until those implementations are updated. For now I'm sticking with Fiber Channel, and I'm not really looking at going back!