Building your own Data Center … at home! Part 1

For many enterprises, building a data center is a major undertaking. It takes planning, an understanding of IT and facilities requirements, and, most importantly, the proper budget to execute. When this shifts from the corporate or enterprise environment, however, most home-office setups are cobbled together. Not as much thought typically goes into what IT Pros use at home. Maybe it is because, having a large data center at work, one feels less need to spend time and effort on servers that are just for “playing around” or “learning” purposes. Well, the time has come to change all of that! Read on for a bit of history on how I came to build my own Tier 1 data center at home…

A little bit of history…

Like many IT Pros, over the years I have had many incarnations of the “data center”. My first was nearly 18 years ago: a 486DX33 connected to a 386DX40, both running Windows 95 (and before that, OS/2!). The whole thing worked over a 75 ft Ethernet cable and a 10 Mbit/s half-duplex hub, the kind with activity lights on the front. While it was not fast, it did let me share files from one computer to another. At the time, I was working at a small computer shop that was using those old coax ARCNET adapters. It worked well enough for them, and I wanted something similar at home.

Since then, I have had a collection of machines: SPARCstation LXs, SPARCstation 5s, and a collection of Pentium 166/200 boxes. A big upgrade was the Abit BP6, which let two (!!) Celeron 400s run in SMP under Linux. Eventually those machines became separate boxes that I used to develop Linux applications on. They all sat on the floor of one bedroom of the place where I lived in Ottawa. It was cute, but I learned the hard way about power and cooling. While the power bill for four computers running was not bad, the HEAT put out by the machines was terrible. The place in Ottawa did not have A/C, and, well, it made for some very warm summers. If there is one lesson to take from this blog post, remember: “Fans do not cool anything, they just move air.” The movement of air takes the energy and disperses it in the space. So, in the summers in Ottawa, I would have a fan running full blast with a window open for venting. Not great for keeping humidity out and me cool! I lost more than one power supply over the years at that location.

The servers eventually moved down to the basement onto IKEA wooden racks, which was a large improvement over being in the other bedroom. The large open space let the heat dissipate, and the house as a result was much cooler.

I kept this general setup when I moved to Niagara for a couple of years. Then, in late 2007 and early 2008, I started really planning out my server collection. I began to apply proper IT principles, and at this point things were much better. I picked up a large baker’s rack from Costco, and the servers were upgraded to Q6600s, since the motherboards and the CPUs themselves were dirt cheap ($187 CAD per CPU in October 2008!). I still have those motherboards and CPUs to this day… and they still work. Each of the servers had a maximum of 8 GB of RAM; I was going to do a fan-out model if I needed more capacity. I intentionally chose CPUs, memory, and motherboards that were “long life” components. In 2009, nearly 12 months after putting things together, I cobbled together a third VM server running VMware.

It was during this time in Niagara that I learned a few things: heatsinks can and do come off running CPUs, VMware 3.5 and VMFS3 liked to lose your data, and never trust 10-year-old CD-ROMs (yes, CD-ROMs, not DVDs) for backups. When developing the new setup, I wanted some redundancy and better networking. That design spawned the three-level network I use today: a production network, a DMZ network, and a storage or backup (“black”) network. There was no lights-out management, but the setup ran fairly happily.

The storage solution used Gigabit Ethernet and NFS on good old Linux. One of the big challenges was performance. Even with separate networks, the best the NFS server could give was anywhere from 20 to 30 MB/s for reads and writes. Not stellar, but enough to allow me to run 8 to 12 VMs. All the servers were on the baker’s rack, and for the time it seemed OK. However, the I/O performance was always a problem.
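For readers who want to try something similar, here is a minimal sketch of the kind of NFS export and ESX datastore mount involved. The export path, subnet, server address, and datastore name are placeholders for illustration, not my actual configuration:

    # /etc/exports on the Linux file server (hypothetical path and subnet)
    /export/vmstore  192.168.20.0/24(rw,async,no_root_squash,no_subtree_check)

    # reload the export table
    exportfs -ra

    # on an ESX host, attach the share as an NFS datastore
    esxcfg-nas -a -o 192.168.20.10 -s /export/vmstore nfs-vmstore

Note that the async option flatters the write numbers at the cost of data safety if the server crashes, which is worth thinking about when the clients are VM hosts.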

And so began my quest… Performance, Performance, Performance

Working for HP does have some advantages. You get internal access to some of the smartest people anywhere, and there are thousands of articles on performance tuning for various technologies. I began to put my HP knowledge to use and started slowly making upgrades to improve the storage performance.

In addition, when I was a full-time consultant, I either worked at home or at a customer location. As a result, my home Internet and network needed to be running 24/7. If they were not, it meant lost work time and potential disruptions to clients. Over the years I had been using a number of inexpensive D-Link routers for my Internet connection. In the summer, the heat would rise just enough to randomly lock up the router. In 2010, I decided it was finally time to start using eBay, and little did I know that eBay would become a source of great inspiration, learning, and, well, a strain on the wallet too! After a lot of investigation, I purchased a Cisco 851. This little router has just enough IOS to let you learn it, and it runs well enough that you can pretty much forget about your Internet connection. That was probably one of my best tech purchases. Even after three years, that router is still running just fine, with uptimes greater than one year. It truly is set and forget. Since then, I have never had an issue with my Internet connection.

Off the success of my new router, I decided that it was time to upgrade the network. Being a good HPer, I wanted something “Enterprise ready”, but not something that would break the bank. Since HP employees in Canada get no discounts on server, storage, or networking gear, it would be a purchase of faith in my employer. I ordered the HP 1810G-24. About five days later, the switch was in my hands, and after all of 15 minutes of configuring it, I had my three VLANs up and running. I then quickly swapped out my old D-Link “green” Gigabit switches for the HP. After a total of about 20 minutes I had gone from an unmanaged network to a mid-market branch-office network. There was a performance improvement: my NFS throughput increased to around 45 MB/s, in some cases nearly double what it was on the D-Links. It just goes to show that the networking gear you install does make a difference.
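On the Linux side, carrying those VLANs into a host over a single tagged port looks roughly like the sketch below. The VLAN IDs (10 for production, 20 for storage, 30 for the DMZ), interface names, and addresses are assumptions for illustration, not my actual numbering:

    # create 802.1Q tagged interfaces on top of eth0 (hypothetical VLAN IDs)
    ip link add link eth0 name eth0.10 type vlan id 10   # production
    ip link add link eth0 name eth0.20 type vlan id 20   # storage / "black" network
    ip link add link eth0 name eth0.30 type vlan id 30   # DMZ

    # bring them up and assign addresses
    ip addr add 192.168.10.5/24 dev eth0.10 && ip link set eth0.10 up
    ip addr add 192.168.20.5/24 dev eth0.20 && ip link set eth0.20 up
    ip addr add 192.168.30.5/24 dev eth0.30 && ip link set eth0.30 up

The matching switch port then just needs to be tagged for the same VLAN IDs in the 1810G’s web interface.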

A few months passed, and I was still not happy with the storage I/O. While it was better, boot storms would easily overwhelm the connections, and trying to boot more than one server at a time was painful. I had purchased an eSATA enclosure and was running four relatively fast SATA drives, but performance was well below Gigabit speeds. The file server itself was an older single-core AMD 3500+ machine with 2 GB of RAM. Not fast, but I would have thought fast enough for better than 45 MB/s network performance in VMware.

So, in my hunt, I decided it was time to take the Fibre Channel plunge. I read about several Fibre Channel target packages for Linux, and SCST turned out to be the best one for my needs: fast, scalable, easy to install. Perfect! My eBay habit kicked into high gear, and I picked up an old Brocade SilkWorm 3852, 16 x 2 Gbit/s of Fibre Channel goodness. I also picked up a lot of four Brocade/QLogic Fibre Channel HBAs. On the spur of the moment, I also picked up an old NetApp 2 Gb/s Fibre Channel disk shelf with 2 TB of capacity split across 14 disks. I created a software RAID 6, installed SCST, configured the zones on the switch… and voila! The performance in a virtual machine went from 45 MB/s to 180 MB/s. I could now actually use the servers at home. This was a new lease on life for the equipment, and during that time I was able to upgrade to Exchange 2010 and Windows Server 2008 R2; I even had a working implementation of OCS R2 as well. What a difference storage makes.
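To give a rough idea of what that build looks like, here is a sketch of the RAID 6 creation and an SCST configuration exposing the array as a Fibre Channel LUN. The device names, target WWN, and LUN name are placeholders, and the example assumes a QLogic HBA driven by SCST’s qla2x00t target driver rather than my exact hardware:

    # build a 14-disk software RAID 6 from the shelf's drives (hypothetical device names)
    mdadm --create /dev/md0 --level=6 --raid-devices=14 /dev/sd[b-o]

    # /etc/scst.conf: export the array as a block-backed virtual disk over FC
    HANDLER vdisk_blockio {
        DEVICE vmstore {
            filename /dev/md0
        }
    }

    TARGET_DRIVER qla2x00t {
        TARGET 50:01:43:80:00:00:00:01 {
            LUN 0 vmstore
            enabled 1
        }
    }

    # apply the configuration
    scstadmin -config /etc/scst.conf

The initiator side then only needs a zone on the switch that includes both the target and initiator WWPNs, after which the host sees the LUN like any other Fibre Channel disk.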

In Part 2, we will talk about the planning and design for the new and improved home data center. In Part 3, we will discuss the actual equipment, the choices made, and the performance today.