
Building your own Data Center … at home! Part 2

It’s been a while since I originally posted Part 1 of this series.  However, it’s time to fill in Part 2.  Part 1 covered the history of my “at home” Data Center.  In Part 2, we will look at the choices made for the new Data Center I implemented at home between November 2012 and early 2013.

Power and Cooling

The number one enemy of all computer equipment is heat.  It is an unavoidable side effect of using electrical devices: as they use energy to perform tasks, they radiate heat.  There is no getting away from that.  In the case of a Home Data Center, this is critical.

A common solution is to use a fan to blow air around and make things cooler.  A fan, however, simply moves air; it does nothing to actually “cool” anything, except push warm air out so cooler air can come in and absorb more energy.  In a small closet or area where people tend to put a collection of PC’s, this cannot happen.  The warm air is constantly recycled, and as a result heat builds up.

What can we do to combat this?  It comes down to air circulation and the ability to replace warm air with cold air.  In other words, we have to be able to evacuate warm air, not simply push it around.  Simply putting a portable AC beside the equipment does not help unless there’s a way to vent the heat from the AC to the outside and set up some kind of feedback system so the air can maintain its temperature.  The main drawback of an AC doing this is that it will use a lot of extra energy, and it pretty much needs to run 24/7 to keep heat from building up.

Now, some will say “I only have a couple PC’s, this does not matter much.”  Quite the contrary.  Even a couple of PC’s in a closet will radiate heat.  The older the PC, the more heat it is likely to give off.  Those 50$ P4 PC’s at the local computer shop might be plentiful, and good as a Firewall or NAS, but they will generate more heat than a state of the art Mini-ITX system with a low-end Haswell processor.  In Part 1, I talked about my first lab, a collection of PC’s in one room.  In Ottawa, where I lived, the summers can be VERY hot and VERY humid.  Think 35C and a humidex of 45+.

I had 5 PC’s in one room; they were all old Pentium 200’s and Celeron 400’s.  Those machines put out a lot of heat, and being on the top floor of the place I had at the time made it VERY warm.  Eventually I moved them to a rack in the basement, and my top floor became bearable.  I had my first run-in with fans then, and learned the hard way, after one UPS died from heat exposure, that fans do not cool anything.

So even if you have a few PC’s, know that they WILL put out heat.   Another consideration is NOISE!  A couple PC’s will have a slight hum which you or your family may or may not find acceptable.  Enterprise servers from HP and Dell have loud fans designed to move a lot of air.  Network switches for enterprise typically have fans too.  This WILL make noise.  Make sure to pick a location that can shield you from the noise.  More energy efficient equipment makes this task much easier.

At my current location, I used to have several beige box PC’s on a bakers rack.  The weight itself made it pretty scary; however, having systems spread over 3 levels with no way to manage heat led to several heat build-up issues.  I had a few lower-end consumer motherboards pop capacitors from the heat.  The systems that were “enterprise grade” never once had an issue.  The lesson here is that heat will kill systems.  It’s no fun waking up at 6am with a dead file server and dead VM’s because a capacitor blew in the middle of the night.

Lastly, overloading circuits in a warm environment can lead to fires.  It’s absolutely critical that you make sure the wiring is done correctly.  For a couple PC’s a small UPS is fine.  Power strips are not really safe.  For a bigger environment, look at getting some more expensive server-grade UPSes.  Not only do they let you have power during an outage, they will make your environment that much safer too.  Please get a certified electrician and an inspection if you need new circuits.  Yes it will cost you more money, but your life could depend on this.  Never load any circuit beyond 50%; in fact, below 50% is safest.  If you are getting to 50% load on a circuit, you are likely pulling at least 900W, and it’s time to consider better options if that is the case.  Your maximum safe power density at home is likely going to top out at roughly 1600W, spread across roughly 3x110v 15A circuits.
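To put rough numbers on that 50% rule, here is a minimal sketch.  It assumes a standard North American 15A circuit at 120V; your panel, breaker ratings and local code may differ, so treat it as illustration only.

# Rough circuit-loading arithmetic behind the 50% rule described above.
# Assumptions: 120V nominal voltage, 15A breaker (typical North American circuit).
VOLTS = 120
AMPS = 15

circuit_capacity_w = VOLTS * AMPS          # 1800W theoretical maximum
safe_load_w = circuit_capacity_w * 0.50    # 900W at the 50% rule

print(f"Circuit capacity: {circuit_capacity_w}W, 50% load: {safe_load_w:.0f}W")
# A rack pulling much more than ~900W should be split across additional circuits.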

What has this experience taught me?

  • Proper racking for circulation is a must.  Spreading systems over a few racks with no way to control the ingress and egress of air is just bad for hardware.
  • Making sure that the racking has proper circulation is also a must.
  • Place the systems in an area where heat energy can dissipate, and you have the ability to egress warm air and ingress cooler air.
  • Due to the home always being a warmer environment, without modifying the house itself to have a data center, buy equipment that is rated for 45C+ continuous operation.
  • Be power conscious.  More power = more heat, more power = higher electricity bill.
  • Get a certified electrician to install any wiring you need and get it inspected. DO NOT SKIP THIS!
  • Assume you will get equipment failures.  Eventually heat will kill a component; it is only a matter of time.

For the last bullet, making purchasing choices based on reliability and cost is important.  Yes, at home budgets are tight.  If you want to run something 24/7, every day, think of the consequences.

Weight Considerations

Another issue to look at is weight.  If you are going to go to racking, understand that you now will have a heavy load over a section of your floor.  Now, while most wooden floors are engineered to hold a lot of load, they are NOT engineered to hold a rack full of computer equipment.  A couple PC’s and a monitor, sure.  But if you start getting into anything more exotic, especially when it comes to storage, then be prepared to manage the weight.  If you are going to get a real server rack, make sure your floor is rated for at least 2000lbs.  That way, you can load it up to your heart’s content.

As I mentioned before, I previously had equipment on a bakers rack.  That rack had some HP DL servers (585’s and 380’s) and a couple of disk enclosures.  I also had a few other PC’s at the time and a monitor, plus network switches.  The bakers rack could handle about 600lbs a shelf.  It was not the most stable thing, especially if your floor has a slant.  There were many times I wondered just how safe I was messing around with equipment from the back or front.  A word to the wise: DO NOT mess around with bakers racks or shelving from Ikea.  It is just not designed to hold more than one PC or so.  I opted for real server racks since I could control the load and manage the cooling a lot better than with any other option.

Managing weight tips,

  • If you are going to put more than a few servers in a condensed space, move them to the basement or onto a concrete floor.
  • Try to understand your weight requirements.  If you are thinking of storage or other exotic servers at home, they will weigh more, and a wood floor two stories up is likely not safe.
  • If you use any racking, load the heaviest equipment at the bottom and the lightest at the top.
  • Know that once you start loading your rack or space up, moving equipment is HARD! Think out your cabling requirements ahead of time before putting your equipment in its final location.
  • You will need about 25 sq ft of space for a 42U rack.  This includes space in front, to the sides and behind the unit.  Keep in mind that you will need to service equipment at some point, so you will need the room to get in there and make changes.  This should be thought of as very important since disassembling everything is not likely to be easy.
  • If you are going to be using “enterprise” servers like HP’s or Dell’s, a server rack is your best option since it can safely mount the equipment with minimum fuss or risk.

Server Consoles

Unless you want to have a very basic set up, a KVM is a recommended choice.  Simple KVM’s allow one monitor, keyboard and mouse to be shared across a few PC’s.  A simple 20$ KVM will work for most cases where 1 to 4 PC’s are needed.  I got by with that set up for years.  You can also rack up your PC monitor.  It will take some room, but if expansion is not that important, a monitor on a shelf in the rack or on the table where the PC’s are is a good idea.

If you are going with a rack option, you might want to explore a couple of interesting options.  First, rackmount monitors and keyboards are easily available on eBay and other locations.  They will range from 100$ to the sky’s the limit.  I would recommend an inexpensive HP unit if you are going this route.  I decided to go with an LCD rack mount monitor, a separate keyboard tray and an old HP/Dell 8 port KVM.  This made the monitor nice and clean looking, and it also let me do some “cool” looking configurations once it was mounted.  You can find them on eBay for as low as 125$ on occasion.  This will cost about the same as an all-in-one monitor/keyboard configuration.  The LCD I have cannot do gaming, but its 1024×768 resolution is good enough for installations and working with the consoles if needed.

Not all KVM’s are created equal.  If you have a KVM that is for PS/2 mice and keyboards, note it may not work at all with a USB to PS/2 adapter for a keyboard or mouse.  This may require purchasing a different KVM or expensive converter cables.  Old equipment tends to work fine, while newer, non-PS/2 equipment typically will have some kind of quirk with PS/2-only KVM’s.

On all of my servers I opted for IPMI.  I can manage the power and the console directly from my PC across the network.  This is a life saver as there’s no need to head to the basement to fiddle on the console.  On older PC’s, this might not be an option.  You can look for “IP KVM PCI” on eBay for help, and there are some older boards that generally are ok.  I do strongly recommend this option if you have a little extra money to spend.  Otherwise, and I did this for years, trips to the basement are a-ok.  However, once you go IPMI, you will never want to go back!
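As a minimal sketch of what IPMI management looks like from the comfort of your desk, here is a small wrapper around the standard ipmitool CLI.  The BMC address and credentials are placeholders, and this is just an illustration, not the exact tooling I use.

# Minimal sketch: remote power control of a server's BMC using ipmitool.
# The BMC address and credentials below are placeholders.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.50", "-U", "admin", "-P", "changeme"]

def power_status():
    # Equivalent to: ipmitool -I lanplus -H <bmc> -U <user> -P <pass> chassis power status
    result = subprocess.run(BMC + ["chassis", "power", "status"],
                            capture_output=True, text=True)
    return result.stdout.strip()

def power_cycle():
    # Hard power-cycle the box without a trip to the basement.
    subprocess.run(BMC + ["chassis", "power", "cycle"], check=True)

print(power_status())   # e.g. "Chassis Power is on"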

The platform

Since we have now covered the facility portion of building a data center at home, we can switch focus to the platforms we want to run.  Generally speaking, many enthusiasts and IT pros will want to run a mixture of Windows, Linux, FreeBSD and possibly Solaris at home.  These platforms are good to learn on and provide great access for skills development.

We generally have a few choices when it comes to the platform if we want to use multiple operating systems.  One approach is using actual equipment dedicated to the task.  For example, setting up a few physical PC’s running Linux is pretty easy to do.  Getting an older machine to run Solaris is also easy enough to do.  Depending on the size of your home lab, it may make sense to dedicate a couple of systems to this, especially if you want the experience of loading hardware from bare metal.

However, this set up comes at a price.  Expansion requires more hardware and more power usage.  So over time, the costs of running it will go up, not to mention prices for different types of equipment can and do vary.  A simple choice that will work on most PC’s used in the home in the last couple of years is virtualization.

There are a tonne of good virtualization platforms for you to choose from.  I myself use a combination of VMware ESXi and Microsoft Hyper-V 2012R2.  What you want to do with your environment will dictate what platform you choose.  I would strongly recommend against using a 5 year old PC for this.  Best bang for buck will come from using a relatively new PC with a good fast SATA hard drive and at least 8GB of RAM.  16GB is pretty much the minimum I recommend for anyone, as it’s enough to host about 8 or so VM’s with ok performance.  Keep in mind that if the VM’s are lightly used, i.e. one thing at a time, a SATA hard drive will be ok.  However, it will not be fast.  I would recommend using VirtualBox or something similar if you want to occasionally dabble but would like to use that PC for something else.  Hyper-V in Windows 8 is pretty good too.

My best advice for choosing a platform is,

  • Decide if you are just going to do a few things at a time.  If so, a PC or two dedicated to blowing up and reinstalling might be the cheapest and simplest option.
  • If you want to run multiple operating systems at the same time, look at one of the free Hypervisors out there.  If your needs are simple and light-weight, I always recommend going with VirtualBox.  If you require something more robust or complex, free versions of Hyper-V and VMware ESXi will work just fine.
  • At least 16GB of RAM will make your environment fairly usable if you want to run up to around 8 VM’s at a time (see the sizing sketch after this list).
  • Disk performance will never be fast on a PC with a single hard drive.  Memory and storage speed will cause the biggest roadblocks when setting up a home lab.  Generally, the more memory and hard drives, the better, but keep in mind this is for home.
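To show where the “16GB for roughly 8 VM’s” rule of thumb comes from, here is a rough sizing sketch.  The hypervisor overhead and per-VM memory figures are assumptions for illustration, not measurements from my environment.

# Rough memory sizing for a small home virtualization host.
# All figures are assumptions; adjust them for your own workloads.
host_ram_gb = 16
hypervisor_overhead_gb = 2     # assumed reserve for ESXi/Hyper-V and management
ram_per_vm_gb = 1.75           # assumed light-duty lab VM (small Linux/Windows guest)

usable_gb = host_ram_gb - hypervisor_overhead_gb
max_vms = int(usable_gb // ram_per_vm_gb)
print(f"{usable_gb}GB usable -> roughly {max_vms} VM's at {ram_per_vm_gb}GB each")
# Prints: 14GB usable -> roughly 8 VM's at 1.75GB each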

Some people are fans of the HP Microserver.  I definitely recommend this product.  It’s simple, well supported and gives you good longer term performance for a decent price.  There are other products on the market, but the HP Microserver is by far one of the best you can get for a home lab.

Storage and Networking

The heart of your home lab will be your Storage and your Network.  These are often areas overlooked by people building their own home data center.  A switch is a switch!  That 4 disk NAS works great!  Think again.

Not all network switches are created equally.  While I would not advocate spending a tonne of money on a large Cisco or HP switch, good networking will improve the performance and the reliability of your home set up.  For example, that cheap 40$ router that does WiFi, USB and, say, cable internet is likely not very fast.  What’s worse, it’s likely not rated for a warm environment, and they have a tendency to die or act flaky under load.  I once had a DSL modem that only acted right if cold peas surrounded it.  Not a good idea!

To combat this, I do recommend spending a little more on network equipment, including routers and switches.  Be careful to read the specs.  99% of the switches for the “home” or “pro” market will not deliver 1Gbps per port.  I was looking a while back, and I noticed a 48 port switch for nearly $150.  This looks like a great deal until you look at the specs.  It was only rated for 16Gbps of switching.  This means only 8 of the 48 ports will get “full speed” (network speed is duplex, meaning the speed is in both directions).  So if the switching capacity in Gbps is less than twice the number of gigabit ports, the switch cannot run every port at full speed.  Awesome deal for $150, but not for performance.  The $300 24 port managed switch I picked up provides 48Gbps, which is a full-duplex 1Gbps per port, meaning I’ll get fairly consistent performance from the switch.  If you can, try avoiding “green” consumer switches.  They will drop performance all the time, and many Hypervisor vendors will tell you the performance will stink.  It has to do with how power saving is implemented.  Enterprise switches with “green” features, on the other hand, will save you money.  My “green” enterprise switch has saved over 138000 watt hours over 9 months.  That’s about 165$ where I live.
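The arithmetic behind that switching-capacity comparison is simple enough to sketch out.  The two switches below are the ones described above, and “full speed” is treated as 1Gbps in each direction per port.

# Non-blocking check: a gigabit port needs 2Gbps of switching fabric
# (1Gbps in each direction) to run at full speed.
def full_speed_ports(fabric_gbps, port_speed_gbps=1):
    return int(fabric_gbps // (port_speed_gbps * 2))

print(full_speed_ports(16))   # 8  -> the ~$150 48 port switch: only 8 ports at line rate
print(full_speed_ports(48))   # 24 -> the ~$300 24 port switch: all 24 ports at line rate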

The same can be said for storage.  Aside from memory, storage is the single most important part of a data center.  Whether at home or in the office, we all need storage.  The faster and more reliable the storage, the better our lives are.  Generally, IMHO, those inexpensive 4 disk NAS units are ok enough for running a small number of VM’s and hosting files for the home.  The processor will limit the performance, and typically I would recommend a unit that can create iSCSI disks.  This will give you the best performance.  If it only offers SMB or CIFS access, the performance will be ok, but you will need to use VirtualBox or Hyper-V, as VMware ESXi does not support it.  The maximum connectivity most of those inexpensive units provide is a 1Gbps connection.  Since the processor is usually slower, you will not always get that speed, especially if you have various RAID options enabled.  Expect to get around 70% performance on that link, and know that if you have multiple VM’s trying to update or run at the same time, the performance is going to make you want to get a coffee.  I always advise purchasing the best storage for your budget and needs.  I do recommend the 4 bay NAS units from Seagate, Western Digital, Drobo and Thecus.  You will get good performance at an ok price.  The type of hard drive to use in these set ups is not really an issue since you will not be able to max out the performance anyway.  Purchasing 4 “green” drives will get you about the same real-world performance as 4 faster drives of the same size.  Remember, these units have some weight, and the more disks you have, the more they weigh.  I have 28 disks, since I need to maximize random performance.
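To put that 1Gbps link and the rough 70% figure into perspective, here is a quick sketch.  The 70% efficiency number is the rule of thumb from above, not a benchmark of any particular NAS.

# Rough usable throughput of a single 1Gbps link to an entry-level NAS.
link_gbps = 1
theoretical_mb_s = link_gbps * 1000 / 8      # 125 MB/s on the wire
efficiency = 0.70                            # ~70% after protocol and NAS CPU overhead
usable_mb_s = theoretical_mb_s * efficiency  # ~87 MB/s

for vms in (1, 4, 8):
    print(f"{vms} busy VM's -> ~{usable_mb_s / vms:.0f} MB/s each")
# One busy VM sees ~88 MB/s; eight busy VM's share ~11 MB/s each.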

Finally, a word on RAID

RAID does not mean no backups.  It just means that if a drive dies in your NAS, it will be able to keep running and not lose any data for the time being.  “For the time being” is the important part.  Over the years I cannot tell you how many times I lost data that I thought was safe, either on a disk somewhere as a backup or with RAID.  Make sure you have a plan to keep multiple copies of the data that is important to you, and do not trust it all to one location.  Something as simple as bad hardware can wipe it out.  Make sure you take regular backups of your data and keep the backups in a safe location.  I do recommend using a Cloud backup for your NAS if possible.  Some people have data caps with their ISP’s that prevent this.  I would still recommend keeping multiple copies of your most needed documents in a minimum of 2 locations, preferably at least 3 to be safe.  Test your backups on a regular basis.  Nothing is worse than going to restore something and finding it does not work.

I use Windows Server 2012R2 Essentials for home backups of my PC’s.  The data is then copied from the home backup server to another location for safe keeping.  This way I have 3 copies: my PC, my home backup server, and my Disaster Recovery location.  This method has saved me numerous times.  I can always go back and re-create my PC from any time in the last 6 months.  So if I install a bad Windows patch, 20 minutes later I’m back to the previous configuration.  No muss or fuss.
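As a generic illustration of that “copy the backups somewhere else” step (this is not how Windows Server Essentials replicates data, and the paths are made up), a minimal sketch might look like this:

# Minimal sketch: mirror the backup folder to a second location so there is
# always an extra copy beyond the original PC.  Paths are placeholders.
import shutil
from pathlib import Path

SOURCE = Path(r"D:\ServerBackups")        # backups collected on the home server
DESTINATION = Path(r"\\drsite\backups")   # offsite / disaster-recovery share

# copytree with dirs_exist_ok=True (Python 3.8+) re-copies everything each run;
# a real job would be incremental, this only shows keeping the extra copy.
shutil.copytree(SOURCE, DESTINATION / SOURCE.name, dirs_exist_ok=True)
print(f"Copied {SOURCE} -> {DESTINATION / SOURCE.name}")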

I take data protection very seriously.  As I have mentioned before, equipment will fail.  Are you ready?

Closing Thoughts on Part 2

This was a pretty high-level look at the factors I used to design the Home Data Center I have. Key elements include,

  • Power & Cooling
  • Racking
  • Server Consoles & Remote Access
  • The Platform: To Virtualize or not?
  • Storage and Networking

There are other choices as well, but these are the major ones to consider.

In Part 3 I will step you through the build and explain how things are configured for me today.

Squid vs Apache: Reverse Proxy Champ?

So it begins.  Another year of making the slow transition to a new version of RHEL.  With it comes refreshing some of the basics for the infrastructure here.  I traditionally use an Apache Reverse Proxy for filtering content into the server subnet here.  It is fast and has served the purpose for the last 2 and a half years.  The trouble has been that moving to newer versions of Apache typically has not happened, since the configuration file is several hundred lines long and debugging it is a pain at 2am on a Sunday night.

I decided to take the plunge and moved things over to Squid.  I had a hack in Apache that let me do http AND https.  Squid supported everything out of the box.  The difference?  17 lines for Squid, and 744 for Apache.
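For anyone curious what a short Squid reverse-proxy (accelerator) configuration looks like, here is a minimal sketch.  The hostnames, backend IP and certificate paths are placeholders, and this is not the actual 17-line configuration running here.

# Minimal Squid reverse-proxy sketch (Squid 3.x).  All names and paths are placeholders.
http_port 80 accel defaultsite=www.example.com
https_port 443 accel cert=/etc/squid/server.crt key=/etc/squid/server.key defaultsite=www.example.com

# The backend web server Squid forwards requests to.
cache_peer 192.168.10.20 parent 80 0 no-query originserver name=backendweb

# Only accept requests for the published site and send them to the backend.
acl our_sites dstdomain www.example.com
http_access allow our_sites
cache_peer_access backendweb allow our_sites
http_access deny all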

So far so good.  It has some quirks, but nothing that cannot be worked around.  If you are considering Squid for a reverse proxy, you should not be disappointed!

Now just to keep figuring out RHEL 6.0 ….

Windows 2008R2 SP1 Upgrades

Over the last week I have spent the time to upgrade the existing Windows 2008 infrastructure to Windows 2008R2 SP1.  I had been running 2008R2 on a couple of systems, but decided it was time to refresh the environment.  The upgrade process worked like a charm, and all of the systems updated correctly. The database server was in sad shape (it was a VM that was over 3 years old), so it was time to reinstall the OS.  Fortunately, MySQL 5.5 works just fine on 2008R2.

As for patching, I managed to also patch the systems to 2008R2 SP1 without any issues.  Exchange 2010 and the AD systems took the updates just fine.  2008R2 has proven to run a little better and faster under vSphere 4.1U1.  Even the WDDM video driver works perfectly fine!

In combination with Windows 7, the file transfers are very fast.  All in all worth the time to upgrade.

What happens when Murphy strikes…

Ahh, the joys of the Internet.  After suffering 3 days without real Internet access, I can say it’s good to be back again.  My Business Telco DSL provider had a 3 day outage.  Now, if this was the height of summer and I wanted to spend more time outdoors, this would have been perfect.  Not so in this case.

When I designed the infrastructure for ITInTheDataCenter.com, I knew the WAN would be the weak link.  With only one connection, it’s all my eggs in one basket.  It goes away about 2 or 3 times a year, so it’s usually tolerable, and usually for no more than 4 hours at a time.  This time it was 72 hours; that’s a little much.

So, I managed to procure a back-up low-speed connection for occasional use.  I decided on Rogers Portable Internet.  It works.  The WiMax modem is a little strange, in that it only likes 10Mb half-duplex Ethernet connections, but otherwise it works ok.  For 40$ a month (that I’ll only activate when needed) it’ll save me headaches when travelling, or when I’m out with customers and need access.

The moral of this story… Murphy will strike, it’s just a matter of when.  Always have a backup.