Category Archives: VMware ESX

Information on the enterprise-class VMware ESX and ESXi hypervisor products.

Hacking the HP 5406zl…

The HP 5406zl

The venerable HP 5406zl. This switch has been around for many years; in fact, it was introduced back in 2004. Over time it has seen a number of upgrades, and the current base-model 5406zl provides 379Gbps of non-blocking goodness. A while back I acquired one of these to replace the Cisco SG200-26 I had. The 5406zl is a fully modular switch, and great deals can be had on eBay if you know where and what to look for.

One of the modules for the 5406zl is the Advanced Services Module. I have two of these. They are x86 servers on a blade, letting you run VMware, Hyper-V or Xen. Normally you would buy them from HP pre-configured, but the reality is you do not have to; you just need to be a bit creative.

These blades (J4797A) come with a dual-core 2.53GHz CPU, 4GB of RAM, and a 256GB 2.5″ hard drive. You can easily upgrade the RAM from 4GB to 8GB (it is DDR3 SODIMM), and you can also swap the hard drive for an SSD.

I have a J4797A that is supposed to run Citrix XenServer. I simply upgraded the RAM, installed VMware ESXi 5.1 to the SSD, and voilà: a $2000 VMware blade for around $250 CAD all in. While not super speedy, these blades work great for DNS, firewall, and TACACS duties. They even come with HP's lifetime warranty.

Oh, and if you did not hear, the latest K15.16.0006 (now .0008) firmware enables the “Premium” features for free. Even more reason to find one of these switches on eBay.

Why I chose DL585’s for my storage server needs…

Ahh, the HP DL585, the mid-size workhorse for many enterprises. These venerable machines are now up to their 7th generation. In the last six months I have worked with two early-generation DL585s: a G1 and a G2. The machines offer gobs of internal bandwidth; the G1 only has PCI-X, while the G2 features a mix of PCI-X and PCIe. These machines bring back fond memories of my old days. Working for HP does have some benefits, like understanding the details of these machines and what they can and cannot do.

In December, I went through the process of replacing my old NFS-based storage for VMware with Fiber Channel. That went well, but I knew I could get even better performance. Storage is easily the most intense service that I run; it has killed several consumer and “enthusiast” motherboards, since pushing a lot of I/O 24/7 just burns those machines out. In April I picked up a DL585G1 to replace my dual-core E7600 setup. Even though the DL585G1 has slower CPUs (8 AMD cores at 2.2GHz vs. 2 Intel cores at 3.0GHz), throughput increased by about 30%, and that was using older PCI-X QLogic Fiber Channel cards.

One drawback of the DL585G1 is that it is loud and puts out a lot of heat, which is not the best setup even in a cold room. So I embarked on replacing that trusty machine with a slightly newer DL585G2. Still supported by HP, the DL585G2 is a great low-cost mid-range server. It is about 30% faster than the G1, and its CPUs can be upgraded to any AMD Opteron 8000 series part (including quad-core Barcelona and six-core Istanbul chips). It also runs very quietly and uses about 30% less power than the G1, and the gobs of throughput are still there as well.

Compared to the Dell and IBM systems, the DL585 is a steal. Pricing for one of these systems was considerably less than the equivalent Intel server, so for someone on a smaller hardware budget they are a great fit. Are they going to outperform Sandy Bridge or Nehalem Xeons? No, but if I had the money, that would be a different conversation.

For those searching the web about AMD CPUs and virtualization: the AMD Opteron 800 series CPUs DO support 64-bit virtualization on VMware, and the DL585G1 runs ESXi 4.1U1 just fine.
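
If you want to verify this on your own hardware before buying, the CPU flags in /proc/cpuinfo tell the story: the "lm" flag indicates AMD64 long mode (what you need for 64-bit guests on these Opterons), and "svm" indicates AMD-V hardware assist (present on later Opterons, but not on the Socket 940 parts). Here is a minimal sketch, assuming a Linux box with a standard /proc layout:

```python
#!/usr/bin/env python3
# Quick check of virtualization-relevant CPU flags on a Linux machine.
# "lm"  = AMD64 long mode (needed for 64-bit guests)
# "svm" = AMD-V hardware assist (absent on the older Socket 940 Opterons)

def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("64-bit long mode (lm):", "yes" if "lm" in flags else "no")
print("AMD-V / SVM (svm):    ", "yes" if "svm" in flags else "no")
```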

Windows 2008R2 SP1 Upgrades

Over the last week I have spent time upgrading the existing Windows 2008 infrastructure to Windows 2008R2 SP1. I had been running 2008R2 on a couple of systems, but decided it was time to refresh the environment. The upgrade process worked like a charm, and all of the systems updated correctly. The database server was in sad shape (it was a VM that was over 3 years old), so it was time to reinstall the OS. Fortunately, MySQL 5.5 works just fine on 2008R2.

As for patching, I managed to bring the systems up to 2008R2 SP1 without any issues. Exchange 2010 and the AD systems took the updates just fine. 2008R2 has proven to run a little better and faster under vSphere 4.1U1, and even the WDDM video driver works perfectly.

In combination with Windows 7, file transfers are very fast. All in all, it was worth the time to upgrade.

DIY SAN: Building your DIY SAN is as easy as …

Over the past year I have primarily used NFS to host storage for my virtual machines. NFS and software iSCSI are regarded as the simplest and easiest ways of getting up and running with shared storage on VMware.

Lately, however, the performance has just not been what one would expect from a RAID setup. I have an aging but still useful storage array based on HighPoint’s 3122 card. Locally on the storage server I get ~110MB/s for reads and writes, but over NFS and iSCSI, even with good network infrastructure (HP 1810-24G), performance averaged only around 45MB/s.

After much reading, I found out that NFS performance on VMware is limited: there is only one control channel, and VMware forces synchronous opens and closes on each read/write transaction, which kills NFS performance. The same goes for software iSCSI. VMware does this to protect data integrity; it effectively uses O_DIRECT for everything, and that makes performance less than optimal.
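
To get a rough feel for what a cache-bypassing write path costs, here is a minimal sketch of my own (not VMware’s code path) that compares ordinary buffered writes against O_DIRECT writes on Linux. The test file path and sizes are arbitrary assumptions; point it at real, disk-backed storage you care about (O_DIRECT will fail on tmpfs):

```python
#!/usr/bin/env python3
# Compare buffered vs. O_DIRECT sequential write throughput (Linux only).
# Illustrates why forcing cache-bypassing, synchronous I/O hurts performance.
import mmap, os, time

TEST_FILE = "/mnt/storage/odirect_test.bin"  # hypothetical path; use your own
BLOCK = 1 << 20                              # 1 MiB per write (multiple of 4 KiB)
COUNT = 256                                  # 256 MiB total

def bench(flags):
    # O_DIRECT requires block-aligned buffers; an anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, BLOCK)
    buf.write(b"x" * BLOCK)
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | flags, 0o644)
    start = time.time()
    for _ in range(COUNT):
        os.write(fd, buf)
    os.fsync(fd)                              # flush so the buffered run is honest
    os.close(fd)
    elapsed = time.time() - start
    buf.close()
    os.unlink(TEST_FILE)
    return (BLOCK * COUNT / (1 << 20)) / elapsed  # MiB/s

print("buffered: %6.1f MiB/s" % bench(0))
print("O_DIRECT: %6.1f MiB/s" % bench(os.O_DIRECT))
```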

I bit the bullet and hit eBay. After acquiring the SAN switch, the Fiber Channel HBAs, and fiber cables, I now have a SAN. The bottleneck is now my old storage array, as I can easily get 110MB/s per VM. I ended up picking up:

  • EMC Silkworm 3852 2Gb/s 16 port SAN switch ($250)
  • QLogic 2462 4Gb/s 2 Port HBA ($150)
  • Emulex LPe1050EX (2Gb/s) /LP1150-E (4Gb/s) x 3 ($300)

Using SCST 2.0, I was able to create Fiber Channel targets with the QLogic HBA, and it all works well. My next storage adventure will be upgrading the array itself to beefier hardware with more IOPS headroom. That is a 2011 project!
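
For anyone following along on the Linux target side, the kernel exposes each Fibre Channel HBA port under /sys/class/fc_host, which makes it easy to sanity-check that your links are up and negotiated at the expected speed before blaming the target software. A small sketch, assuming a kernel with the standard FC transport class loaded:

```python
#!/usr/bin/env python3
# List Fibre Channel HBA ports and their link state via sysfs.
# Uses the standard fc_host transport attributes (port_name, port_state, speed),
# which appear once an FC HBA driver such as qla2xxx or lpfc is loaded.
import glob, os

def read_attr(host_dir, name):
    try:
        with open(os.path.join(host_dir, name)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

hosts = sorted(glob.glob("/sys/class/fc_host/host*"))
if not hosts:
    print("No Fibre Channel hosts found.")
for host in hosts:
    print(os.path.basename(host),
          "WWPN:", read_attr(host, "port_name"),
          "state:", read_attr(host, "port_state"),
          "speed:", read_attr(host, "speed"))
```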

If anyone asks, no more NFS or software iSCSI for me. Though VMware will say otherwise, those technologies just do not scale well unless you have expensive EMC or NetApp gear with custom code to work around the VMware O_DIRECT problem. Those using open source such as BSD and Linux will be stuck with less-than-stellar NFS and iSCSI performance until those implementations are updated. For now I’m sticking with Fiber Channel, and I’m not really looking to go back!

Finally iSCSI on Linux that just works!

Back in November, I wrote about the transition to iSCSI and NFS, along with the need for a new storage array. It turns out I ended up purchasing a new storage array and RAID card as a “Merry Christmas to me” gift.

I ended up purchasing the Sans Digital TRM8-B, a nice 8-bay SATA/eSATA enclosure. Aside from the horrible blue LED on the rear, it performs quite nicely, runs quietly, and supports hot-swappable drives. To drive the enclosure I picked up a HighPoint 3122 eSATA RAID card with 128MB of cache. Add to the mix a pair of 1TB WD Black drives, and now I have 1.8TB of storage.

The performance has been good. I easily get 125MB/s in Linux VMs and around 70-80MB/s in Windows (my connection to the filer is only 1Gb/s). What was not good was the performance of the iSCSI target software I used.

I tried almost every Linux iSCSI target you could find: IET, tgt, Linux-iscsi. They all had poor performance and were buggy, and vSphere is pretty picky about its iSCSI targets. I ended up going with ISCSI-SCST, and it works great: it plays nicely with ESX 4.0 and with my Windows backups. If you are looking for a good, solid Linux iSCSI solution, this one appears to be it. These guys have done it right.

I migrated my iSCSI server into a VM, and the performance has remained consistent. The filer is now configured purely to serve NFS to the ESX hosts, and all iSCSI goes through a CentOS/RHEL 5.4 VM. It is a lot easier this way. And guess what? It just works.

NFS, iSCSI oh boy!

The joys of storage! Recently, I decided to dabble again in iSCSI. With the VMware farm for “itinthedatacenter.com” getting larger, my small-time NFS setup was starting to show its limits. Even with 4 drives and the I/O spread out, the Linux NFS server I use (OpenFiler, BTW) was thrashing more and more as activity increased.

OpenFiler includes iSCSI, but it ships a rather old iSCSI target and I wanted to be current, so I decided to update the drivers and the iSCSI software myself. Off to compiling and hacking at kernel modules I went. After about two days of effort, the filer was converted from NFS to iSCSI for my VMFS volumes. The performance so far has been pretty good, and the thrashing has been reduced, most likely because the iSCSI target does not go through the page cache.

My next task is to upgrade the storage controller and array. Internal disks and standard SATA quite frankly blow chunks on consumer and even enthusiast boards. So I have decided to pick up a new eSATA RAID card, along with a new 8-bay storage array. It is a couple of months away from implementation, but the plan is done.

I expect I’ll be able to get 70-90MB/s from the new array with my existing drives, versus the 45-60MB/s I get now. Reads should improve considerably with a controller carrying 128MB to 256MB of cache.

I’m thankful I moved from NFS to iSCSI. NFS works, but the lack of features and the problems around NFS server reboots forced me to make the change. ESX 4.0 handles iSCSI much better than 3.5 ever did. I have even created a boot floppy for W2K8/W2K3/W2K8R2 with the VMware PVSCSI driver so that VMs can boot from those disks. Oh, and yes, Virginia, you *CAN* boot from VMware PVSCSI disks in Windows; it works nicely once you figure out how VMware has implemented it.

VMware ESX/ESXi Community Drivers

One of the challenges of the whitebox ESX community is the need for current drivers for hardware that is not on VMware’s HCL. It appears the open source community has found a way to build and modify Linux drivers for ESX/ESXi. The benefit is that common third-party hardware is now usable on ESX; while it is not supported by VMware, nothing precludes users from loading these drivers. For example, Realtek RTL8139B NICs now work, and the same goes for ICH9/10.

VMware has released ESX 3.5U4, and the support they added (ICH9/10, older Intel NICs) is eerily similar to what the community had already added. I’m excited about this, since community drivers will certainly expand the use of ESX beyond vendor equipment. On the VMware forums, VMware themselves have not ruled out incorporating the drivers into future 3.x releases. Here’s hoping 4.x will support the same driver model.

More details can be found here.

VMware Patching

So, VMware just released their first ESXi patch set of 2009. Being the intrepid “live-on-the-edge” guy I am, I downloaded and applied the patches to the systems here today. VMware makes these types of upgrades relatively painless: I simply used DRS to move my VMs from one system to the other, applied the patches, rebooted, and repeated. No downtime for the VMs; life is good.
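
For what it’s worth, the same rolling pattern is easy to script. The sketch below uses the much newer pyVmomi bindings purely to illustrate the loop I did by hand: put a host into maintenance mode (with DRS in fully automated mode, the VMs migrate off), patch it out-of-band, reboot, and bring it back. Host names and credentials here are placeholders, not anything from my environment.

```python
#!/usr/bin/env python3
# Illustration only: the rolling "evacuate, patch, reboot, repeat" loop,
# scripted with pyVmomi. Patching itself is done out-of-band; this just
# handles maintenance mode and the reboot around it.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask

VCENTER, USER, PASSWORD = "vcenter.example.local", "administrator", "secret"  # placeholders
HOSTS = ["esx01.example.local", "esx02.example.local"]                        # placeholders

ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    for name in HOSTS:
        host = si.content.searchIndex.FindByDnsName(dnsName=name, vmSearch=False)
        print("Evacuating", name)
        # With DRS fully automated, entering maintenance mode migrates the VMs off.
        WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
        input("Apply patches to %s now, then press Enter to reboot... " % name)
        WaitForTask(host.RebootHost_Task(force=False))
        input("Wait for %s to reconnect, then press Enter to continue... " % name)
        WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
finally:
    Disconnect(si)
```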

Now, in the real world, this would certainly require change management and other IT processes (think ITIL). How do you do your patching? Do you still take regular downtime windows, or have you gone transparent? Discuss…

Want to create your own ESX Whitebox?

What exactly is an ESX whitebox, you ask? It’s a VMware ESX server running on hardware that is not on VMware’s Hardware Compatibility List (HCL). Most home IT pros will probably be familiar with VMware Workstation, VMware Server, or other products like Xen or Sun’s VirtualBox. While those products are relatively easy to install, none really match the power (in my opinion) of what you get with VMware ESX.

Take a look at my ESX Whitebox article for details on how you can build your own, along with a comparison of when and where to use VMware ESX versus some of the other products on the market.