Why I chose DL585s for my storage server needs…

Ahh, the HP DL585. The mid-size workhorse for many enterprises. The venerable machine is now up to its 7th generation. In the last 6 months, I have worked with two early-generation DL585s: a G1 and a G2. Both machines offer gobs of internal bandwidth. The G1 only has PCI-X, while the G2 features a mix of both PCI-X and PCIe. These machines bring back fond memories of my old days. Working for HP does have some benefits, like understanding the details of the machines and what they can and cannot do.

In December, I went through the process of replacing my old NFS-based storage for VMware with Fiber Channel. That went well, but I knew I could get even better performance. Storage is also easily the most intense service that I run. It has killed several consumer and “enthusiast” motherboards; the need to push a lot of I/O, 24/7, just burns those machines out. In April I picked up a DL585G1 to replace my dual-core E7600 setup. Even though the DL585G1 has slower CPUs (8 AMD cores at 2.2 GHz vs. 2 Intel cores at 3.0 GHz), throughput increased by about 30%. That was using older, PCI-X QLogic Fiber Channel cards.

One drawback of the DL585G1 is that it is loud and puts out a lot of heat. Not the best setup, even in a cold room. So, I set out to replace that trusty machine with a slightly newer DL585G2. Still supported by HP, the DL585G2 is a great low-cost mid-range server. It is about 30% faster than the G1, and its CPUs can be upgraded to any AMD Opteron 8000-series CPU (including the quad-core Barcelona and six-core Istanbul parts). It also runs very quietly and uses about 30% less power than the G1. The gobs of throughput are still there as well.

Compared to the Dell and IBM systems, the DL585 is a steal. Pricing for one of these systems was considerably less than an equivalent Intel server. For someone on a smaller hardware budget, these systems are a great fit. Are they going to outperform Sandy Bridge or Nehalem Xeons? No, but if I had the money, that would be a different conversation.

For those searching the web about AMD CPUs and virtualization, the 800-series CPUs from AMD DO support 64-bit virtualization on VMware. The DL585G1 supports ESXi 4.1U1 just fine.

Windows 2008R2 SP1 Upgrades

Over the last week I have spent time upgrading the existing Windows 2008 infrastructure to Windows 2008R2 SP1. I had been running 2008R2 on a couple of systems, but decided it was time to refresh the environment. The upgrade process worked like a charm, and all of the systems updated correctly. The database server was in sad shape (it was a VM that was over 3 years old), so it was time to reinstall the OS. Fortunately, MySQL 5.5 works just fine on 2008R2.

As for patching, I managed to also patch the systems to 2008R2 SP1 without any issues.  Exchange 2010 and the AD systems took the updates just fine.  2008R2 has proven to run a little better and faster under vSphere 4.1U1.  Even the WDDM video driver works perfectly fine!

In combination with Windows 7, file transfers are very fast. All in all, it was worth the time to upgrade.

Linux Software RAID 6

Software RAID or hardware RAID? That is a long-running question every architect faces when designing storage infrastructure for customers. In the enterprise, it is usually a pretty straightforward choice: hardware RAID. The next question is the RAID level. RAID 5, RAID S, RAID 10 and some other more exotic vendor-based levels are typically available. The choice of level is typically set by application, risk and budget concerns.

Now, what is the home user to do? Intel offers semi-hardware RAID on its current motherboards, which for most users will be adequate. (Just make sure you have good backups if using consumer hard drives!) If you are using Linux, other choices are available. Software RAID is an interesting option. Recently, I implemented a 14-disk Fiber Channel RAID 6 solution. Despite the write penalty that RAID 6 introduces, the performance is still very good. Writes exceeding 150MB/s on a 2Gb/s Fiber Channel SAN make my old SAN solution look ancient!
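For anyone curious how the Linux side of this looks, here is a minimal sketch of creating a 14-disk RAID 6 array with mdadm. The device names and chunk size are assumptions for illustration, not my exact commands:

```bash
# Hypothetical device names -- substitute your own 14-disk list.
mdadm --create /dev/md0 --level=6 --raid-devices=14 \
      --chunk=256 /dev/sd[b-o]

# Watch the initial resync, then record the array so it
# reassembles automatically on boot.
cat /proc/mdstat
mdadm --detail --scan >> /etc/mdadm.conf

# Put a filesystem on the new array.
mkfs.xfs /dev/md0
```

With 14 spindles, two disks' worth of capacity go to parity, and every write pays the double-parity calculation, which is where the RAID 6 write penalty comes from.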

If you are thinking of using Linux and RAID 6, give it a try! With Fiber Channel and SAS disks the performance should be quite good without breaking the bank.

DIY SAN: Building your own SAN is as easy as …

Over the past year I have been primarily using NFS as the main means of hosting storage for my virtual machines. NFS and even software iSCSI are regarded as the simplest and easiest ways of getting up and running with VMware.

Lately, however, the performance has just not been what one would expect from a RAID setup. I have an aging, but still useful, storage array based on HighPoint’s 3122 card. Locally on my storage server, I would get ~110MB/s for writes and reads. Over NFS and iSCSI with a good network infrastructure (HP 1810-24G), the performance was only around 45MB/s on average.

After much reading, I found out that NFS performance on VMware is limited: there is only one control channel, and VMware forces opens and closes on each read/write transaction, killing NFS performance. The same goes for software iSCSI. VMware does this to protect data integrity, so it uses O_DIRECT (uncached, synchronous I/O) for everything, and this makes the performance less than optimal.
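To get a feel for what uncached I/O costs, you can compare a buffered write with a direct write against the storage backend. This is just an illustration on a hypothetical mount point, not a benchmark of my exact setup:

```bash
# Buffered write: the Linux page cache absorbs and coalesces the I/O.
dd if=/dev/zero of=/srv/nfs/test.img bs=1M count=1024

# Direct write: bypasses the page cache, much closer to the
# O_DIRECT-style access pattern ESX imposes on the filer.
dd if=/dev/zero of=/srv/nfs/test.img bs=1M count=1024 oflag=direct

# Clean up the test file.
rm /srv/nfs/test.img
```

The gap between the two numbers is roughly the caching benefit the hypervisor gives up in exchange for data integrity.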

I bit the bullet and hit eBay. After acquiring a SAN switch, Fiber Channel HBAs and fiber cables, I now have a SAN. The bottleneck is now my old storage array, as I can easily get 110MB/s per VM. I ended up picking up:

  • EMC Silkworm 3852 2Gb/s 16 port SAN switch ($250)
  • QLogic 2462 4Gb/s 2 Port HBA ($150)
  • Emulex LPe1050EX (2Gb/s) /LP1150-E (4Gb/s) x 3 ($300)

Using SCST 2.0, I was able to create Fiber Channel targets with the QLogic HBA. All works well. My next storage adventure will be upgrading the array itself to beefier, more IOPS-scalable hardware. That is a 2011 project!
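For those wondering what an SCST Fiber Channel target looks like, here is a minimal sketch in /etc/scst.conf form. The target WWN, device name and backing volume are placeholders, not my actual configuration, and it assumes the qla2x00t target driver is loaded:

```bash
# Hypothetical example -- WWN and device paths are placeholders.
cat > /etc/scst.conf <<'EOF'
HANDLER vdisk_blockio {
    DEVICE vmfs_lun0 {
        # Back the LUN with a local block device (e.g. a software RAID volume).
        filename /dev/md0
    }
}

TARGET_DRIVER qla2x00t {
    TARGET 50:01:43:80:12:34:56:78 {
        enabled 1
        LUN 0 vmfs_lun0
    }
}
EOF

# Apply the configuration to the running SCST core.
scstadmin -config /etc/scst.conf
```

Once the target is enabled, the ESX hosts see the LUN like any other Fiber Channel storage, and it can be formatted as VMFS.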

If anyone asks, no more NFS or software iSCSI for me. Though VMware will say the contrary, those technologies just do not scale well enough unless you have expensive EMC or NetApp gear with custom code to eliminate the VMware O_DIRECT problem. Those using open-source platforms such as BSD and Linux will be stuck with less-than-stellar NFS and iSCSI performance until those implementations are updated. For now, I’m sticking with Fiber Channel, and I’m not really looking to go back!

Old is New Again!

As many IT pros know, a stable, long-lived platform is something every shop wants. Over time, however, it becomes necessary to upgrade the platform, whether it be the hardware, operating system, application or other bits of glue that make your services work. I work with customers to help determine migration strategies for services, applications and servers. Many times the application is available on the new platform, so it is just a matter of testing, and off you go. But what happens when the application is still in use and the platform is just too old to maintain?

Enter my latest “real-life” migration. I run SmoothWall as my main firewall product. I have used a variety of others, including pfSense and some commercial products, but none has been as simple and easy to use as my SmoothWall system. Over many years I have been hacking VMware support into the platform. It is based on its own “customized” Linux kernel and libraries, which makes it very hard to maintain. Well, vSphere 4.1 finally broke the old system, and I was not able to use any drivers or kernel optimizations anymore.

Enter my solution: SmoothWall on CentOS! Using some old UNIX tricks, I managed to fool SmoothWall into thinking it is running on its own Linux system. The base is CentOS 5.5. No recompiles or big changes needed. Many of us old UNIX hands know that you can make older UNIX binaries work in a chroot environment. This is wayyy before virtualization as we know it today. Fortunately, the Linux kernel ABI is very consistent, so this trick works great! At my old employer I used this trick all the time to migrate old, broken build platforms onto modern, supported operating systems.
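The basic recipe is sketched below, with made-up paths for illustration: copy the SmoothWall root filesystem onto the CentOS host, bind-mount the kernel-provided filesystems, and chroot into the old userland:

```bash
# Hypothetical location for the old SmoothWall root filesystem.
SW_ROOT=/srv/smoothwall

# Copy the old root filesystem into place, preserving permissions,
# ownership, device nodes and hard links.
rsync -aH /mnt/old-smoothwall-disk/ "$SW_ROOT/"

# Give the chroot the kernel interfaces the old userland expects,
# now provided by the modern CentOS kernel.
mount --bind /proc "$SW_ROOT/proc"
mount --bind /sys  "$SW_ROOT/sys"
mount --bind /dev  "$SW_ROOT/dev"

# Drop into the old SmoothWall userland on the new kernel.
chroot "$SW_ROOT" /bin/sh
```

Because the kernel ABI stays stable, the old binaries never notice they are running on a different distribution.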

So, what is old is new again! I even gain SMP, and the ability to use more memory! Welcome additions to my firewall!

A little about power …

Power is probably the most important element in ensuring a stable computing experience. Yet it is commonly taken for granted. In today’s modern data center, it is not uncommon to run into power limitations. Vendor equipment is becoming more efficient all the time. HP even offers blade servers that allow for power capping. These features allow cramped data centers to achieve even higher density.

Recently, I began to run into some basic power issues. Here, I have a collection of UPSes, all from APC, ranging from models about 8 years old to newer, more top-of-the-line units. Even though my computing density is not that high, reliable power delivery is a must. Living out in the country adds to this. Over the last several weekends, and even some weeknights, there have been localized power outages in my area. Some lasted only a few seconds; others, 15 minutes or more. One of these outages actually caused corruption on my storage array, and I lost a few VMs on XFS filesystems. Even though XFS itself reported no errors, the files were damaged. Moral of the story: make sure you have good UPS units.

I ended up purchasing two of APC’s SN1000 UPSes from Tiger Direct. They are currently $84CDN and are a great deal. Originally, my older BackUPS 350 and BackUPS 750 were running the systems here. However, as I added more networking and storage equipment, the runtime came down considerably. The new units give me a runtime of about 30-40 minutes, considerably longer than the 3 to 10 minutes I had before. The SN1000 is the older model of the SC1000; these originally sold for over $300CDN.

I plan to eBay the old UPSes. No sense keeping them. I figure the $200CDN in higher-end UPSes will save me hours of rebuild time the next time the power goes down hard. If it can happen here, it can happen to anyone.

VoIP Galore!

After years of careful planning, it is finally done. I have managed to go all-digital voice here at home. Freeing myself from the “traditional” telephone provider, aka Bell, proved to be a most interesting and fun challenge.

The setup did not come easy. It required a lot of work and fiddling around with arcane software.

First up is the IP-PBX. I went with Asterisk, as it is the best-known package to use. Specifically, I went with Trixbox, since it comes as a distribution and is easy to install. Within about 20 minutes the phone system itself was working. Next, I wanted to integrate true “unified communications”, so I set up Office Communications Server 2007 R2. I can now make calls from the OCS client, and take them too. Score!

For the phones in the house I went with the Cisco SPA3102. It is finicky, and MANY people who claim to have set it up properly in Asterisk have not. Unless all 4 LED lights are on, it is not set up. After a couple of days of playing, all 4 lights finally came on, and it is now a proper extension in the house. The SPA3102 features 911 and power-off pass-through, so in emergencies it will still work with Ma Bell.

Finally, I decided to use FlowRoute for my IP voice traffic. Their rates are cheap, and with the SPA3102 I can still keep my Bell line, routing local calls locally and long distance out to FlowRoute. All of my calls are digital, since even local calls out to Bell are digitized by Asterisk.
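As a rough sketch of the routing (context, trunk names and dial patterns are placeholders, not my actual dialplan), the outbound logic looks something like this: local 7- and 10-digit calls go out the SPA3102’s PSTN port to Bell, while 1+10-digit long distance goes to FlowRoute.

```bash
# Hypothetical dialplan fragment -- trunk and context names are made up.
cat >> /etc/asterisk/extensions_custom.conf <<'EOF'
[outbound-routing]
; Local calls: send out the SPA3102's FXO (PSTN) port to Bell.
exten => _NXXXXXX,1,Dial(SIP/spa3102-pstn/${EXTEN})
exten => _NXXNXXXXXX,1,Dial(SIP/spa3102-pstn/${EXTEN})

; Long distance: route 1+10-digit calls over the FlowRoute trunk.
exten => _1NXXNXXXXXX,1,Dial(SIP/flowroute/${EXTEN})
EOF

# Reload the dialplan without restarting Asterisk.
asterisk -rx "dialplan reload"
```

In Trixbox the same thing is normally done through the outbound-routes GUI, but the idea underneath is the same.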

I’m quite pleased with the performance. Even in a VM, Asterisk runs OK. In my next major hardware swap (later this year), I’ll move Asterisk to physical hardware to improve voicemail recording, but right now it is not an issue. Saving $50 a month is quite impressive. I may even offer the service to family and friends, who knows.

All I can say is once you go digital,  you never want to go back!

Finally iSCSI on Linux that just works!

Back in November, I wrote about the transition to iSCSI and NFS, along with the need for a new storage array. It turns out I ended up purchasing a new storage array and RAID card as a “Merry Christmas to me” gift.

I ended up purchasing the Sans Digital TRM8-B. It’s a nice 8-bay SATA/eSATA enclosure. Aside from the horrible blue LED in the rear, it performs quite nicely, runs quietly and has hot-swappable drives. To drive the enclosure I picked up a HighPoint 3122 eSATA RAID card with 128MB of cache. Add to the mix a pair of 1TB WD Black drives, and now I have 1.8TB of storage.

The performance has been good. I easily get 125MB/s in Linux VMs and around 70-80MB/s for Windows (my connection to the filer is only 1Gb/s). What was not good was the performance of the iSCSI target software I used.

I tried almost every Linux iSCSI target you could find: IET, tgt, Linux-iSCSI. They all had poor performance and were buggy. vSphere is pretty picky about its iSCSI software. I ended up going with iSCSI-SCST, and it works great! It works nicely with ESX 4.0 and plays well with my Windows backups. If you are looking for a good, solid Linux iSCSI solution, this one appears to be it. These guys have done it right!
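For reference, a minimal iSCSI-SCST target definition looks roughly like the sketch below. The IQN, backing device and names are placeholders rather than my real configuration, and the scstadmin config format shown is the one used by more recent SCST releases:

```bash
# Hypothetical example -- adjust the IQN and backing device to taste.
cat > /etc/scst.conf <<'EOF'
HANDLER vdisk_fileio {
    DEVICE vmfs_store {
        # Back the LUN with a file or block device on the filer.
        filename /dev/vg0/vmfs_store
    }
}

TARGET_DRIVER iscsi {
    enabled 1
    TARGET iqn.2009-12.com.example:filer.vmfs {
        enabled 1
        LUN 0 vmfs_store
    }
}
EOF

# Load the configuration into the running SCST core.
scstadmin -config /etc/scst.conf
```

Point the ESX software iSCSI initiator at the filer’s IP, rescan, and the LUN shows up ready to be formatted as VMFS.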

I migrated my iSCSI server into a VM, and the performance has remained consistent. The filer is now configured purely to serve NFS to the ESX hosts, while any iSCSI goes through a CentOS/RHEL 5.4 VM. It is a lot easier this way. And guess what? It just works.

Merry Christmas … 2010

Merry Christmas!

Another year has passed us by. This year has seen some interesting developments in the IT community. The rise of social networking sites such as Facebook and Twitter would top my list. Next would be the release of updated versions of VMware and Hyper-V. Canada finally has more than one national GSM mobile phone carrier. The Great Recession wreaked havoc with IT budgets early in 2009, and recovery appears to be underway heading into 2010.

What is ahead for 2010? I have no crystal ball, but expect more social networking growth. Expect more mobile application growth; over 2 billion downloads from iTunes show that market has some legs. Oh, and one thing we can look forward to is the new Mac Tablet.

If we think back 10 years to the end of 2000: the RIAA was trying to kill “pirate” media companies, the Internet bubble was just bursting, Nortel was still a force as a Canadian company, I found out I really enjoy good Indian food, and who could have predicted 9/11 and the changes to travel and communication after that? Lest we forget Enron, which brought SOX and financial IT requirements upon us, and SCO vs. IBM and the Linux world. Eventful.

The next 10 years will be even more eventful.   Here is to a happy 2010.

NFS, iSCSI oh boy!

The joys of storage! Recently, I decided to dabble again in iSCSI. With the VMware farm for “itinthedatacenter.com” getting larger, my small-time NFS setup was starting to show its limits. Even with 4 drives and the I/O spread out, the Linux NFS server I use (OpenFiler, BTW) was thrashing more and more with increased activity.

OpenFiler includes iSCSI, but I decided to update the drivers and the iSCSI software myself. OpenFiler uses a rather old iSCSI version, and I wanted to be current. So, off to compiling and hacking at kernel modules I went. After about 2 days of effort, the filer was converted from NFS to iSCSI for my VMFS volumes. The performance so far has been pretty good, and the thrashing has been reduced. This is most likely because the iSCSI target does not go through the page cache the way the NFS server did.

My next task is to upgrade the storage controller and array. Internal disks and standard SATA quite frankly blow chunks on consumer and even enthusiast boards. So, I have decided to pick up a new eSATA RAID card, along with a new 8-bay storage array. It is a couple of months away from implementation, but the plan is done.

I expect that I’ll be able to get 70-90MB/s from the new array with my existing drives, versus the 45-60MB/s I get now. Reads should be much improved with a controller that has 128MB to 256MB of onboard cache.

I’m thankful I moved away from NFS to iSCSI. NFS works, but the lack of features and the problems with NFS server reboots forced me to make the change. ESX 4.0 handles iSCSI much better than 3.5 ever did. I have even created a boot floppy for W2K8/W2K3/W2K8R2 with the VMware PVSCSI drivers so that Windows can boot from PVSCSI disks. Oh, and yes, Virginia, you *CAN* boot from VMware PVSCSI disks in Windows. It works nicely once you figure out how VMware has implemented it.
