Category Archives: Data Center

Hacking the HP 5406zl…

The HP 5406zl

The venerable HP 5406zl.  This switch has been around for many years; in fact, it was introduced back in 2004.  Over time it has seen a number of upgrades, and the current base model 5406zl provides 379Gbps of non-blocking goodness.  A while back I acquired one of these to replace the Cisco SG200-26 I had.  The 5406zl is a fully modular switch, and great deals can be had on eBay if you know where and what to look for.

One of the modules for the 5406zl is the Advanced Services Module.  I have two of these.  They are x86 servers on a blade, letting you run VMware, Hyper-V, or Xen.  Normally you have to buy them from HP pre-configured.  In reality you do not; you just have to be a bit creative.

These blades (J4797A) come with a dual-core 2.53GHz CPU, 4GB of RAM, and a 256GB 2.5″ hard drive.  You can easily upgrade the RAM from 4GB to 8GB (it is DDR3 SODIMM), and you can swap the hard drive for an SSD.

I have a J4797A which is supposed to run Citrix XenServer.  I simply upgraded the RAM, installed VMware 5.1 to the SSD, and voilà: a $2000 VMware blade for around $250CAD all in.  While not super speedy, these blades work great for DNS, firewall, and TACACS duties.  They even come with HP's Lifetime Warranty.

Oh, and if you did not hear, the latest K15.16.0006 (now .0008) firmware enables the "Premium" features for free.  Even more reason to find one of these switches on eBay.

IPv6 and the need for IPAM

For many, the thought of moving to IPv6 is a theoretical exercise.  The thinking goes that the existing network, running IPv4 with a mixture of routable and private IP addresses, is more than enough for today and tomorrow.  Why complicate things?  The plain fact is that IPv4 exhaustion is coming, and organizations will need the appropriate hands-on skills for working with IPv6.

For the unfamiliar, IPv6 is the successor to IPv4.  IP, or "Internet Protocol," is the underlying technology that allows you to read this blog post.  At its simplest, IP assigns your computer a location on the network, a location that gives routers a way to get information to and from your system.

One could then assume IPv6 is a simple upgrade, since TCP, UDP, and ICMP continue to operate in a similar manner.  The short answer is "no."  IPv6 writes its addresses in base 16, while IPv4 uses dotted base-10 notation.  For network admins and architects, the simple and familiar becomes abcd:abcd:abcd:abcd:abcd:abcd:abcd:abcd, eight groups of four hex digits.  With the address space being trillions of trillions of trillions of times bigger, concepts like NAT go away.  You do not need them when every star in every galaxy in existence could have its own 4.3 billion addresses.
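To see the notation difference concretely, here is a quick sketch using Python's standard ipaddress module (the 2001:db8 prefix used here is the reserved documentation range, not a real network):

```python
import ipaddress

# IPv6 addresses are eight groups of four hex digits; runs of
# zeroes can be compressed with "::".
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")

print(addr)           # compressed form: 2001:db8::1
print(addr.exploded)  # full form: 2001:0db8:0000:0000:0000:0000:0000:0001

# And the scale difference: IPv6 holds 2^96 entire IPv4 internets.
print(2 ** 128 // 2 ** 32 == 2 ** 96)  # True
```

The compressed form is what you will type day to day; the exploded form is what tools and databases often store.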

Why no NAT?  There is no need, since every address is routable.  Applications, in addition to using ports, could use IPv6 addressing schemes for control and backplane operations as well as data transport.  Our method of layer 4 to 7 communication fundamentally changes.

To start on this journey, a good IPAM (IP Address Management) solution is needed.  Beyond spreadsheets or Active Directory, think of how your organization will handle this transition.  It is coming, and the sooner organizations prepare, the better.  IPAM brings the benefit of managing the IP space effectively; combine that with Software Defined Networking (SDN) and you get some very powerful ways to reduce the cost of the transition and end up with a better managed network.
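As a small illustration of why spreadsheets stop scaling, here is a hedged sketch (Python's standard ipaddress module, with the documentation prefix 2001:db8::/48 standing in for a real allocation) of carving per-site /64s out of an organization's /48:

```python
import ipaddress

# Hypothetical allocation: the organization holds a /48, and each
# site or VLAN receives its own /64 out of it.
org = ipaddress.IPv6Network("2001:db8:abcd::/48")

# A single /48 contains 65,536 possible /64s.
print(2 ** (64 - 48))  # 65536

# Enumerate the first few /64s in order.
gen = org.subnets(new_prefix=64)
print([str(next(gen)) for _ in range(3)])
# ['2001:db8:abcd::/64', '2001:db8:abcd:1::/64', '2001:db8:abcd:2::/64']
```

A real IPAM product does this bookkeeping, plus who owns what and what is in use, across thousands of prefixes.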

It's best to start now rather than later.  BlueCat provides some very robust software that delivers IPAM functionality and SDN components, taking network and address management to the next level.  If you are thinking of modernizing your network, they should be on your list of products to review.

Not getting the most out of your Internet speed?

For the last few months, Internet speeds here have been less than impressive.  With a business connection, the advertised speed earlier this year was 45mb/s down and 10mb/s up.  That increased in late May and early June to 60mb/s down and 10mb/s up.  Normally, 45mb/s was what we could hit all day long.  With the upgrade, we were hoping to get that extra 15mb/s out of our connection.

For the last 4 years, practically since the day I first got the business line, I have been using a very reliable Cisco 851 to do the routing.  The little machine is simple, and it never once hung or died like some of the consumer routers from Speedstream or Netgear did on me.  Knowing the router is very low end, I only really ran some NAT on it, and that was about it.

In March, after years of the same, I decided to enable the firewall on the unit.  Now some will say, "No way! You lived that long without a firewall on the Internet?"  The truth is, since the router was doing NAT, it only allowed in traffic on a specific set of ports in response to outbound connections, but it was not configured to do anything about spoofed packets or other problems.  I run a DMZ firewall that does screening and other IPS, so I was not too concerned.  However, I wanted to start doing a bit more filtering at the edge.

I enabled the firewall on the Cisco, and after a few days I noticed the speeds hovering at 30mb/s.  Speedtest after Speedtest, it did not matter; the speed was the same.  In early August, I contacted my provider, who also supplies a Cisco 851 for my network access, and asked them to swap some gear and test for me.  Sure enough, they swapped the 851 for an 867VAE.  When they performed their tests from the new router, they got the full speed.  Hooking the old Cisco up behind the new router, the speeds fell back to 30mb/s.  We had identified the culprit!

I had a choice: eBay a bigger, more power-hungry Cisco, or find something enterprise class but not as expensive.  After looking at Mikrotik (which is very good, BTW), I settled on the EdgeMAX Lite by Ubiquiti.

The price and the performance have been very good!  The initial setup and upgrade of the firmware to 1.5.0 was challenging, but once that was done, the network speeds improved.  I now run the same configuration as on the Cisco, firewall and NAT enabled, and get 60mb/s all the time.  The CPU on the machine sits around 6% at full speed.  This is the new model with the proper venting, so I expect it to perform really quite well for the next while.

If you are suffering from poor Internet speed, and you have a fast connection like we do, seriously consider the EdgeMAX Lite (ERL).  The price and performance cannot be beat!

Follow-Up: Inexpensive FXO/FXS cards and Bell Canada Caller-ID

I promised an update on the status of the inexpensive FXO/FXS card I had ordered.

The card arrived in early May, after some very quick shipping.  The packaging was good, and the card came undamaged.  It is your typical Wildcard AEX410 card.

I mentioned trying this on VMware to see if I could virtualize it.  Well, as it turns out, even vSphere 5.5 cannot use this card in VT-d mode.  The card is a PCI design that sits behind a PCIe bridge, something VMware says will not work.  I tried a number of settings, but no luck; the card would kernel panic the VM every time.

In early August, some lightning storms had the pleasure of taking out one of my trusty SPA3102s.  These are not the most amazing VoIP gateways, but they are good for Caller-ID.  I have struggled for years looking for an FXO system that will work with Bell Canada's Caller-ID.  So far, out of all the products I have tried (AudioCodes, SPA, Grandstream, Wildcard), the only one that reads Caller-ID from Bell Canada is the SPA3102.

With the end of my trusty unit, I put in a physical server to host my VoIP PBX, along with the Wildcard AEX410.  The Wildcard works just fine in that system with the same V2P (yes, Virtual to Physical!) converted system.  Since even the Wildcard will not read the Caller-ID, I have a replacement SPA3102 daisy-chained to the Wildcard FXO port.  If the power goes out, the SPA will still let the call work, which is great.  So far, this combination gives me clear voice calls on the PSTN line, something the SPA itself cannot do, and I get Caller-ID.

The Wildcard works great, except that if you want Caller-ID in Canada, you will need to go with someone other than Bell Canada.

Building your own Data Center … at home! Part 2

It's been a while since I originally posted Part 1 of this series.  However, it's time to fill in Part 2.  Part 1 covered the history of my "at home" Data Center.  In this Part 2, we will look at the choices made for the new Data Center I implemented at home between November 2012 and early 2013.

Power and Cooling

The number one enemy of all computer equipment is heat.  It is, however, a fact of using electrical devices: as they use energy to perform tasks, they radiate heat.  There is no getting away from that.  In the case of a Home Data Center, this is critical.

A common solution is to use a fan to blow air around and make things cooler.  Well, a fan simply moves air; it does nothing to actually "cool" anything, except move warm air out, allowing cooler air to come in and absorb more energy.  However, in a small closet or area where people tend to put a collection of PCs, this cannot happen.  The warm air is constantly recycled, and as a result heat builds up.

What can we do to combat this?  It comes down to air circulation and the ability to replace warm air with cold air.  In other words, we have to be able to evacuate warm air, not simply push it around.  Simply putting a portable AC beside the equipment does not help unless there is a way to vent the heat from the AC to the outside, and some kind of feedback system so the air can maintain its temperature.  The main drawback of an AC doing this is that it will use a lot of extra energy, and it pretty much needs to run 24/7 to keep heat from building up.

Now, some will say, "I only have a couple of PCs, this does not matter much."  Quite the contrary.  Even a couple of PCs in a closet will radiate heat.  The older the PC, the more likely it is to give off more heat.  Those $50 P4 PCs at the local computer shop might be plentiful, and good as a firewall or NAS, but they will generate more heat than a state-of-the-art Mini-ITX system with a low-end Haswell processor.  In Part 1, I talked about my first lab being a collection of PCs in one room.  In Ottawa, where I lived, the summers can be VERY hot and VERY humid.  Think 35C and a humidex of 45+.

I had 5 PCs in one room; they were all old Pentium 200s and Celeron 400s.  Those machines put out a lot of heat, and being on the top floor of the place I had at the time made it VERY warm.  Eventually I moved them to a rack in the basement, and my top floor became bearable.  I had my first run-in with fans then, and learned the hard way, after one UPS died from heat exposure, that fans do not cool anything.

So even if you have only a few PCs, know that they WILL put out heat.  Another consideration is NOISE!  A couple of PCs will have a slight hum, which you or your family may or may not find acceptable.  Enterprise servers from HP and Dell have loud fans designed to move a lot of air.  Enterprise network switches typically have fans too.  This WILL make noise.  Make sure to pick a location that can shield you from the noise.  More energy-efficient equipment makes this task much easier.

At my current location, I used to have several beige-box PCs on a baker's rack.  The weight itself made it pretty scary; moreover, having systems spread over 3 levels with no way to manage heat led to several heat build-up issues.  I had a few lower-end consumer motherboards pop capacitors from the heat.  The systems that were "enterprise grade" never once had an issue.  The lesson here is that heat will kill systems.  It's no fun waking up at 6am with a dead file server and dead VMs because a capacitor blew in the middle of the night.

Lastly, overloading circuits in a warm environment can lead to fires.  It's absolutely critical that you make sure the wiring is done correctly.  For a couple of PCs, a small UPS is fine; power strips are not really safe.  For a bigger environment, look at getting some more expensive server-grade UPSes.  Not only do they give you power during an outage, they will make your environment that much safer too.  Please get a certified electrician and an inspection if you need new circuits.  Yes, it will cost you more money, but your life could depend on this.  Never load any circuit beyond 50%; in fact, staying below 50% is safest.  If you are getting to 50% load on a circuit, you are likely pulling at least 900W, and it's time to consider better options.  Your maximum safe power density at home is likely going to top out at roughly 1600W, spread across roughly three 110V 15A circuits.
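The 50% rule above is easy to sanity-check with basic arithmetic (a sketch assuming a nominal 120V, 15A North American branch circuit, which is where the 900W figure comes from):

```python
# Back-of-envelope circuit math for the figures above.
volts, amps = 120, 15               # a typical North American branch circuit
capacity_w = volts * amps           # absolute maximum the circuit can carry
safe_continuous_w = capacity_w / 2  # the 50% rule of thumb

print(capacity_w)         # 1800
print(safe_continuous_w)  # 900.0
```

If your rack's power draw exceeds that safe figure, it is time for another circuit, not a bigger power strip.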

What has this experience taught me?

  • Proper racking for circulation is a must.  Spreading systems over a few racks with no way to control the ingress and egress of air is just bad for hardware.
  • Making sure that the racking has proper circulation is also a must.
  • Place the systems in an area where heat energy can dissipate, and you have the ability to egress warm air and ingress cooler air.
  • Due to the home always being a warmer environment, without modifying the house itself to have a data center, buy equipment that is rated for 45C+ continuous operation.
  • Be power conscious.  More power = more heat, and more power = a higher electricity bill.
  • Get a certified electrician to install any wiring you need and get it inspected. DO NOT SKIP THIS!
  • Assume you will get equipment failure, eventually heat will kill a component, it is only a matter of time.

For the last bullet, making purchasing choices based on reliability and cost is important.  Yes, at home, budgets are tight.  If you want to run something 24/7, every day, think of the consequences.

Weight Considerations

Another issue to look at is weight.  If you are going to use racking, understand that you will now have a heavy load over a section of your floor.  Now, while most wooden floors are engineered to hold a lot of load, they are NOT engineered to hold a rack full of computer equipment.  A couple of PCs and a monitor, sure.  But if you start getting into anything more exotic, especially when it comes to storage, be prepared to manage the weight.  If you are going to get a real server rack, make sure your floor is rated for at least 2000lbs.  That way, you can load it up to your heart's content.

As I mentioned before, I previously had equipment on a baker's rack.  That rack held some HP DL servers (585s and 380s) and a couple of disk enclosures.  I also had a few other PCs at the time, plus a monitor and network switches.  The baker's rack could handle about 600lbs per shelf.  It was not the most stable thing, especially if your floor has a slant.  There were many times I wondered just how safe I was messing around with equipment from the back or front.  A word to the wise: DO NOT mess around with baker's racks or shelving from Ikea.  They are just not designed to hold more than one PC or so.  I opted for real server racks, since I could control the load and manage the cooling a lot better than with any other option.

Tips for managing weight:

  • If you are going to put more than a few servers in a condensed space, move it to the basement or a concrete floor.
  • Try to understand your weight requirements.  If you are thinking of storage or other exotic servers at home, they will weigh more, and a wood floor two stories up is likely not safe.
  • If you use any racking, load the heaviest loads at the bottom to lightest at top.
  • Know that once you start loading your rack or space up, moving equipment is HARD! Think out your cabling requirements ahead of time before putting your equipment in its final location.
  • You will need about 25 sq ft of space for a 42U rack.  This includes space in front, to the sides and behind the unit.  Keep in mind that you will need to service equipment at some point, so you will need the room to get in there and make changes.  This should be thought of as very important since disassembling everything is not likely to be easy.
  • If you are going to be using “enterprise” servers like HP’s or Dell’s a server rack is your best option since it can safely mount the equipment with minimum fuss or risk.

Server Consoles

Unless you want a very basic setup, a KVM is a recommended choice.  Simple KVMs allow one monitor, keyboard, and mouse to be shared across a few PCs.  A simple $20 KVM will work for most cases where 1 to 4 PCs are needed; I got by with that setup for years.  You can also rack up your PC monitor.  It will take some room, but if expansion is not that important, a monitor on a shelf in the rack, or on the table where the PCs are, is a good idea.

If you are going with a rack option, you might want to explore a couple of interesting options.  First, rackmount monitors and keyboards are easily available on eBay and elsewhere.  They range from $100 to the sky's the limit.  I would recommend an inexpensive HP unit if you are going this route.  I decided to go with an LCD rackmount monitor, a separate keyboard tray, and an old HP/Dell 8-port KVM.  This made the monitor nice and clean looking, and it let me do some "cool" looking configurations once it was mounted.  You can find them on eBay for as low as $125 on occasion, which will cost about the same as an all-in-one monitor/keyboard configuration.  The LCD I have cannot do gaming, but its 1024×768 resolution is good enough for installations and working with consoles if needed.

Not all KVMs are created equal.  If you have a KVM designed for PS/2 mice and keyboards, note that it may not work at all with a USB-to-PS/2 adapter for a keyboard or mouse.  This may require purchasing a different KVM or expensive converter cables.  Old equipment tends to work fine, while newer, non-PS/2 equipment will typically have some kind of quirk with PS/2-only KVMs.

On all of my servers I opted for IPMI.  I can manage the power and the console directly from my PC across the network.  This is a lifesaver, as there is no need to head to the basement to fiddle with the console.  On older PCs, this might not be an option; you can look for "IP KVM PCI" on eBay for help, and there are some older boards that are generally OK.  I strongly recommend this option if you have a little extra money to spend.  Otherwise, and I did this for years, trips to the basement are A-OK.  However, once you go IPMI, you will never want to go back!

The platform

Since we have now covered the facility portion of building a data center at home, we can switch focus to the platforms we want to run.  Generally speaking, many enthusiasts and IT pros will want to run a mixture of Windows, Linux, FreeBSD, and possibly Solaris at home.  These platforms are good to learn on and provide great opportunities for skills development.

We generally have a few choices when it comes to the platform if we want to use multiple operating systems.  One approach is using actual equipment dedicated to the task.  For example, setting up a few physical PCs running Linux is pretty easy to do.  Getting an older machine to run Solaris is easy enough as well.  Depending on the size of your home lab, it may make sense to dedicate a couple of systems to this, especially if you want experience loading hardware from bare metal.

However, this setup comes at a price.  Expansion requires more hardware and more power.  So over time, the costs of running it will go up, not to mention that prices for different types of equipment can and do vary.  A simpler choice, one that will work on most home PCs from the last couple of years, is virtualization.

There are a tonne of good virtualization platforms for you to choose from.  I myself use a combination of VMware ESXi and Microsoft Hyper-V 2012R2.  What you want to do with your environment will dictate what platform you choose.  I would strongly recommend against using a 5-year-old PC for this.  The best bang for the buck will come from using a relatively new PC with a good, fast SATA hard drive and at least 8GB of RAM.  16GB is pretty much the minimum I recommend for anyone, as it is enough to host about 8 or so VMs with OK performance.  Keep in mind that if the VMs are lightly used, i.e. one thing at a time, a SATA hard drive will be OK; however, it will not be fast.  I would recommend VirtualBox or something similar if you want to occasionally dabble but would like to use that PC for something else.  Hyper-V in Windows 8 is pretty good too.

My best advice for choosing a platform:

  • Decide if you are just going to do a few things at a time.  If so, a PC or two dedicated to blowing up and reinstalling might be the cheapest and simplest option.
  • If you want to run multiple operating systems at the same time, look at one of the free hypervisors out there.  If your needs are simple and lightweight, I always recommend going with VirtualBox.  If you require something more robust or complex, the free versions of Hyper-V and VMware ESXi will work just fine.
  • At least 16GB of RAM will make your environment fairly usable if you want to run up to around 8 VM’s at a time.
  • Disk performance will never be fast on a PC with a single hard drive.  Memory and storage speed will be the biggest roadblocks in setting up a home lab.  Generally, the more memory and hard drives, the better, keeping in mind this is for home.

Some people are fans of the HP Microserver.  I definitely recommend this product.  It’s simple, well supported and gives you good longer term performance for a decent price.  There are other products on the market, but the HP Microserver is by far one of the best you can get for a home lab.

Storage and Networking

The heart of your home lab will be your storage and your network.  These areas are often overlooked by people building their own home data center.  A switch is a switch!  That 4-disk NAS works great!  Think again.

Not all network switches are created equal.  While I would not advocate spending a tonne of money on a large Cisco or HP switch, good networking will improve the performance and reliability of your home setup.  For example, that cheap $40 router that does WiFi, USB, and, say, cable internet is likely not very fast.  What's worse, it is likely not rated for a warm environment, and such devices have a tendency to die or act flaky under load.  I once had a DSL modem that only behaved if it was surrounded by cold peas.  Not a good idea!

To combat this, I recommend spending a little more on network equipment, including routers and switches.  Be careful to read the specs.  99% of the switches for the "home" or "pro" market will not deliver 1Gbps per port.  I was looking a while back and noticed a 48-port switch for nearly $150.  This looks like a great deal until you read the specs: it was only rated for 16Gbps of switching.  This means only 8 of the 48 ports will get "full speed" (network speed is full duplex, meaning traffic flows at the rated speed in both directions at once).  So if the switching capacity is less than twice the port count in Gbps, the performance is not going to be great.  An awesome deal at $150, but not for performance.  The $300 24-port managed switch I picked up provides 48Gbps, which is 1Gbps per port, meaning I will get fairly consistent performance from the switch.  If you can, try to avoid "green" consumer switches.  They will drop performance all the time, and many hypervisor vendors will tell you the performance will stink; it has to do with how power saving is implemented.  Enterprise switches with "green" features, on the other hand, will save you money.  My "green" enterprise switch has saved over 138,000 watt-hours over 9 months.  That's about $165 where I live.
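The port-versus-fabric math above reduces to one line (a quick sketch; the function name here is mine, not a vendor term):

```python
# Gigabit Ethernet is full duplex: a non-blocking port needs 2 Gbps
# of switching fabric (1 Gbps in each direction).
def nonblocking_ports(fabric_gbps, port_speed_gbps=1):
    return fabric_gbps // (2 * port_speed_gbps)

# The $150 48-port switch rated for 16 Gbps of switching:
print(nonblocking_ports(16))  # 8 of its 48 ports at full speed
# The $300 24-port managed switch rated for 48 Gbps:
print(nonblocking_ports(48))  # all 24 ports at full speed
```

Run the spec sheet's switching capacity through this before buying, and the "great deal" switches sort themselves out quickly.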

The same can be said for storage.  Aside from memory, storage is the single most important part of a data center.  Whether at home or in the office, we all need storage; the faster and more reliable the storage, the better our lives are.  Generally, IMHO, those inexpensive 4-disk NAS units are OK enough for running a small number of VMs and hosting files for the home.  The processor will limit the performance, and typically I would recommend a unit that can create iSCSI disks; this will give you the best performance.  If it only offers SMB or CIFS access, the performance will be OK, but you will need to use VirtualBox or Hyper-V, as VMware ESXi does not support it.  The maximum connectivity most of those inexpensive units provide is a 1Gbps link.  Since the processor is usually slower, you will not always get that speed, especially with various RAID options enabled.  Expect to get around 70% of that link's performance, and know that if you have multiple VMs trying to update or run at the same time, the performance is going to make you want to go get a coffee.  I always advise purchasing the best storage for your budget and needs.  I recommend the 4-bay NAS units from Seagate, Western Digital, Drobo, and Thecus; you will get good performance at an OK price.  The type of hard drive to use in these setups is not really an issue, since you will not be able to max out the performance anyway: 4 "green" drives will get you about the same as 4 faster drives of the same capacity.  Remember, these units have some weight, and the more disks you have, the more they weigh.  I have 28 disks, since I need to maximize random performance.
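The 70% figure above translates into concrete MB/s like this (simple arithmetic, no library assumptions):

```python
# What a 1 Gbps NAS link realistically delivers.
link_mbps = 1000                              # 1 Gbps connection
wire_max_mb_per_s = link_mbps / 8             # theoretical ceiling in MB/s
realistic_mb_per_s = wire_max_mb_per_s * 0.7  # ~70% in practice, as above

print(wire_max_mb_per_s)   # 125.0
print(realistic_mb_per_s)  # 87.5
```

Divide that realistic figure among all the VMs hitting the NAS at once and it becomes clear why a single 1Gbps link fills up fast.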

Finally, a word on RAID

RAID does not mean no backups.  It just means that if a drive dies in your NAS, the unit will be able to keep running and not lose any data for the time being.  "For the time being" is the important part.  Over the years, I cannot tell you how many times I lost data that I thought was safe, either on a disk somewhere as a backup or with RAID.  Make sure you have a plan to keep multiple copies of the data that is important to you, and do not trust it all to one location; something as simple as bad hardware can wipe it out.

Make sure you take regular backups of your data and keep the backup in a safe location.  I recommend using a cloud backup for your NAS if possible, though some people have data caps with their ISPs that prevent this.  I would still recommend keeping copies of your most needed documents in a minimum of 2 locations, preferably at least 3 to be safe.  Test your backups on a regular basis; nothing is worse than going to restore something and finding it does not work.

I use Windows Server 2012R2 Essentials for home backups of my PCs.  The data is then copied from the home backup server to another location for safekeeping.  This way I have 3 copies: my PC, my home backup server, and my disaster recovery location.  This method has saved me numerous times.  I can always go back and re-create my PC from any point in the last 6 months.  So if I install a bad Windows patch, 20 minutes later I am back to the previous configuration.  No muss, no fuss.
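To make "test your backups" actionable, here is a minimal sketch (standard library only; the function names and the idea of checksum comparison are mine, not any backup product's feature) that compares a backup directory against the original:

```python
import hashlib
from pathlib import Path

# Hypothetical sketch: verify a backup copy against the original,
# file by file, using SHA-256 checksums.
def checksum(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source, backup):
    """Return the relative paths of files missing or different in backup."""
    source, backup = Path(source), Path(backup)
    problems = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = backup / src.relative_to(source)
        if not dst.is_file() or checksum(src) != checksum(dst):
            problems.append(str(src.relative_to(source)))
    return problems
```

Run something like this against each copy after a backup completes; an empty list means every file matched, and anything else tells you exactly what to re-copy before you actually need the restore.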

I take data protection very seriously.  As I have mentioned before, equipment will fail; are you ready?

Closing Thoughts on Part 2

This was a pretty high-level look at the factors I used to design my Home Data Center.  The key elements include:

  • Power & Cooling
  • Racking
  • Server Consoles & Remote Access
  • The Platform: To Virtualize or not?
  • Storage and Networking

There are other choices as well, but these are the major ones to consider.

In Part 3 I will step you through the build and explain how things are configured for me today.

Building your own Data Center … at home! Part 1

For many enterprises, building a data center is a major undertaking.  It takes planning, an understanding of IT and facilities, and most importantly, the proper budget to execute.  When this shifts from the corporate or enterprise environment, however, most setups in the home office are cobbled together.  Not as much thought typically goes into what IT pros use at home.  Maybe it is because, having a large data center at work, one feels the need not to spend as much time and effort on servers that are just for "playing around" or "learning" purposes.  Well, the time has come to change all of that!  Read on for a bit of history on how I came to build my own Tier 1 data center at home…

A little bit of history…

Like many IT pros, over the years I have had many incarnations of the "data center."  My first was nearly 18 years ago: a 486DX33 connected to a 386DX40, both running Windows 95 (and before that, OS/2!).  The whole thing worked with a 75ft Ethernet cable and a 10mbit/s half-duplex hub, the kind that shows the activity on it.  While it was not fast, it did let me share files from one computer to another.  At the time, I was working at a small computer shop that was using those old coax ARCnet adapters.  It worked OK for them, and I wanted something similar at home.

Since then, I have had a collection of machines: Sparc LXes, SPARCstation 5s, and a collection of Pentium 166/200 machines.  A big upgrade was the Abit BP6, which let 2 (!!) Celeron 400s run in SMP with Linux.  Eventually those machines became separate boxes that I used to develop Linux applications on.  They all sat on the floor of one bedroom of the place where I lived in Ottawa.  It was cute, but I learned the hard way about power and cooling.  While the power bill for 4 computers running was not bad, the HEAT put out by the machines was terrible.  The place in Ottawa did not have A/C, and, well, it made for warm summers.  If there is one lesson to take from this blog post, remember: "Fans do not cool anything, they just move air."  The movement of air takes the energy and disperses it in the space.  So, in the summers in Ottawa, I would have a fan running full blast with a window open for venting.  Not great for keeping humidity out and me cool!  I lost more than one power supply over the years at that location.

The servers eventually moved down to the basement onto IKEA wooden racks, which was a large improvement over being in the other bedroom.  The large open space let the heat dissipate, and the house as a result was much cooler.

I kept this general setup when I moved to Niagara for a couple of years.  Then, in late 2007 and early 2008, I started really planning out my server collection.  I began to apply IT principles, and at that point things were much better.  I picked up a large baker's rack from Costco, and the servers were upgraded to Q6600s, since the motherboards and the CPUs themselves were dirt cheap ($187CDN/CPU in October 2008!).  I still have those motherboards and CPUs to this day, and they still work.  Each of the servers had a maximum of 8GB of RAM; I was going to do a fan-out model if I needed more capacity.  I intentionally chose CPUs, memory, and motherboards that were "long life" components.  In 2009, nearly 12 months after putting things together, I cobbled together a 3rd VM server running VMware.

It was during this time in Niagara that I learned a few things: heatsinks can and do come off running CPUs, VMware 3.5 and VMFS3 liked to lose your data, and never trust 10-year-old CD-ROMs (yes, CD-ROMs, not DVDs) for backup.  When developing the new setup, I wanted some redundancy and better networking.  That design spawned the 3-level network I use today: a production network, a DMZ network, and a storage/backup "black" network.  There was no lights-out management, but the setup ran fairly happily.

The storage solution used Gigabit Ethernet and NFS.  Good old Linux.  One of the big challenges was the performance.  Even with separate networks, the best the NFS server could give was anywhere from 20 to 30MB/s for reads and writes.  Not stellar, but enough to allow me to run 8 to 12 VMs.  All the servers were on the baker's rack, and for the time it seemed OK.  However, the I/O performance was always a problem.

And so began my quest…. Performance Performance Performance

Working for HP does have some advantages.  You get internal access to some of the smartest people anywhere, and there are thousands of articles on performance tuning for various technologies.  I began to put my HP knowledge to use and started slowly making upgrades to improve the storage performance.

In addition, when I was a full-time consultant, I either worked at home or at a customer location.  As a result, my home internet and network needed to be running 24/7.  If not, it meant lost work time and potential disruptions to clients.  Over the years I had been using a number of inexpensive D-Link routers for my internet connection.  In the summer, the heat would rise just enough to randomly lock up the router.  In 2010, I decided it was finally time to start using eBay, and little did I know eBay would become a source of great inspiration, learning, and, well, a strain on the wallet too!  After a lot of investigation, I purchased a Cisco 851.  This little router has just enough IOS to let you learn it, and runs well enough that you can pretty much forget about your Internet connection.  That was probably one of my best tech purchases. Even after 3 years, that router is still running just fine; it has had uptimes greater than one year.  It truly is set and forget.  Since then, I have never had an issue with my Internet connection.

Off the success of my new router, I decided it was time to upgrade the network.  Being a good HP'er, I wanted something "Enterprise ready", but not something that would break the bank.  Since HP employees in Canada get no discounts on server, storage or network gear, it would be a purchase of faith in my employer.  I ordered the HP 1810G-24. About 5 days later, the switch was in my hands, and after all of 15 minutes of configuring it, I had my 3 VLAN's up and running.  I then quickly swapped out my old D-Link "green" Gigabit switches for the HP.  After a total of about 20 minutes I had gone from an unmanaged network to a mid-market branch office network.  There was a performance improvement, too: my NFS performance increased to around 45MB/s, in some cases nearly double what it was on the D-Links.  It just goes to show that the networking you install does make a difference.

A few months passed, and I was still not happy with the storage I/O.  While better, boot storms would easily overwhelm the connections, and trying to boot more than one server at a time was painful.  I had purchased an eSATA enclosure and was running 4 relatively fast SATA drives, but performance was well below Gigabit speeds.  The file server itself was an older AMD 3500, a single-core machine with 2GB of RAM.  Not fast, but I would have thought fast enough for better than 45MB/s network performance in VMware.

So, in my hunt, I decided it was time to take the Fiber Channel plunge.  I read about several Fiber Channel packages for Linux, and SCST turned out to be the best one for my needs.  Fast, scalable, easy to install.  Perfect!  My eBay habit kicked into high gear and I picked up an old Brocade Silkworm 3852: 16 ports of 2Gbit/s Fiber Channel goodness.  I also picked up a lot of 4 Brocade/QLogic Fiber Channel HBA's.  On the spur of the moment, I also picked up an old NetApp 2Gb/s Fiber Channel disk shelf with 2TB of capacity split over 14 disks.  I created a software RAID 6, installed SCST, configured the zones on the switch… and voilà!  The performance in a virtual machine went from 45MB/s to 180MB/s.  I could now actually use the servers at home.  This was a new lease on life for the equipment, and during that time I was able to upgrade to Exchange 2010 and Windows Server 2008 R2; I even had a working implementation of OCS R2 as well.  What a difference storage makes.
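For anyone who has never zoned a Brocade switch before, the steps are short. The sketch below uses Fabric OS CLI commands as I remember them; the alias and zone names are made up, and the domain,port members ("1,0" and "1,15") are placeholders for wherever your HBA and target ports actually land on the switch.

```
# Brocade Fabric OS zoning sketch -- names and port numbers are placeholders.
alicreate "esx_host1", "1,0"            # alias for the ESX host's HBA port
alicreate "scst_target", "1,15"         # alias for the SCST target port
zonecreate "z_host1_scst", "esx_host1; scst_target"
cfgcreate "home_san_cfg", "z_host1_scst"
cfgenable "home_san_cfg"                # activate the new configuration
cfgsave                                 # persist the defined configuration
```

Single-initiator zoning like this (one host port per zone) keeps fabric chatter down and is the usual best practice, even on a home SAN.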

In Part 2, we will talk about the planning, and design for the new and improved Home Data Center.  In Part 3 we will discuss the actual equipment, choices made, and the performance today.

Why I chose DL585’s for my storage server needs…

Ahh, the HP DL585.  The mid-size workhorse for many enterprises.  These venerable machines are now up to their 7th generation.  In the last 6 months, I have worked with two early-generation DL585's: a G1 and a G2.  The machines offer gobs of internal bandwidth.  The G1 machine has only PCI-X, while the G2 features a mix of both PCI-X and PCIe.  These machines bring back fond memories of my old days.  Working for HP does have some benefits, like understanding the details of the machines and what they can and cannot do.

In December, I went through the process of replacing my old NFS-based storage for VMware with Fiber Channel.  That went well, but I knew I could get even better performance.  Also, storage is easily the most intense service that I run; it has killed several consumer and "enthusiast" motherboards.  The need to push a lot of I/O 24/7 just burns those machines out.  In April I picked up a DL585 G1 to replace my dual-core E7600 setup.  Even though the DL585 G1 has slower CPU's (8 AMD cores at 2.2GHz vs. 2 Intel cores at 3.0GHz), throughput increased by about 30%. That was using older PCI-X QLogic fiber channel cards.

One drawback of the DL585 G1 is that it is loud and puts out a lot of heat.  Not the best setup, even in a cold room.  So, I embarked on replacing that trusty machine with a slightly newer DL585 G2.  Still supported by HP, the DL585 G2 is a great low-cost mid-range server.  It is about 30% faster than the G1, and its CPU's can be upgraded to any AMD Opteron 8000-series CPU (including quad-core Barcelona and Istanbul CPU's).  It also runs very quietly and uses about 30% less power than the G1.  The gobs of throughput are still there as well.

Compared to the Dell and IBM systems, the DL585 is a steal.  Pricing for one of these systems was considerably less than the equivalent Intel server.  For someone on a smaller hardware budget, these systems are a great fit.  Are they going to outperform Sandy Bridge or Nehalem Xeons?  No, but if I had the money, that would be a different conversation.

For those searching the web about AMD CPU's and virtualization: the Opteron 800-series CPU's from AMD DO support 64-bit virtualization on VMware.  The DL585 G1 runs ESXi 4.1U1 just fine.

Linux Software RAID 6

Software RAID or hardware RAID? That is a long-running question every architect faces when designing storage infrastructure for customers. In the enterprise, this is usually a pretty straightforward choice: hardware RAID. The next question is the RAID level. RAID 5, RAID 6, RAID 10 and some other more exotic vendor-specific levels are typically available. The choice of level is typically driven by application, risk and budget concerns.

Now, what is the home user to do? Intel offers semi-hardware RAID on their current motherboards, which for most users will be adequate. (Just make sure you have good backups if using consumer hard drives!) If you are using Linux, other choices are available, and software RAID is an interesting option. Recently, I implemented a 14-disk Fiber Channel RAID 6 solution. Despite the write penalty that RAID 6 introduces, the performance is still very good. Writes exceeding 150MB/s on a 2Gb/s fiber channel SAN make my old SAN solution seem ancient!

If you are thinking of using Linux and RAID 6, give it a try! With Fiber Channel and SAS disks the performance should be quite good without breaking the bank.
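To give a sense of how little work is involved, here is a sketch of how an array like my 14-disk one is created with mdadm. The device names (/dev/sdb through /dev/sdo) and the mdadm.conf path are placeholders; they will differ on your system, and these commands require root and will destroy any data on the member disks.

```shell
# Create a 14-disk RAID 6 array (12 data disks + 2 parity disks).
# Device names are placeholders -- substitute your own.
mdadm --create /dev/md0 --level=6 --raid-devices=14 /dev/sd[b-o]

# Watch the initial resync progress (it will take a while on 2TB).
cat /proc/mdstat

# Persist the array definition so it assembles at boot.
# (On Debian-based systems the file is /etc/mdadm/mdadm.conf.)
mdadm --detail --scan >> /etc/mdadm.conf
```

The two parity disks are what let the array survive any two simultaneous drive failures, which matters when you are running 14 used eBay disks.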

DIY SAN: Building your DIY SAN is as easy as …

Over the past year I have primarily been using NFS to host the storage for my virtual machines. NFS and even software iSCSI are regarded as the simplest and easiest ways to get up and running with VMware.

Lately, however, the performance has just not been what one would expect from a RAID setup. I have an aging but still useful storage array based on HighPoint's 3122 card. Locally on my storage server, I get ~110MB/s for writes and reads. Over NFS and iSCSI, even with good network infrastructure (HP 1810G-24), the performance was only around 45MB/s on average.

After much reading, I found out that NFS performance on VMware is limited: there is only one control channel, and VMware forces opens and closes on each read/write transaction, killing NFS performance. The same goes for software iSCSI. VMware does this to protect data integrity, so it uses O_DIRECT for everything, and this makes the performance less than optimal.

I bit the bullet and hit eBay. After acquiring the SAN switch, the Fiber Channel HBA's and fiber cables, I now have a SAN. The bottleneck is now my old storage array, as I can easily get 110MB/s per VM. I ended up picking up:

  • Brocade Silkworm 3852 2Gb/s 16-port SAN switch ($250)
  • QLogic 2462 4Gb/s 2-port HBA ($150)
  • Emulex LPe1050EX (2Gb/s) / LP1150-E (4Gb/s) x 3 ($300)

Using SCST 2.0, I was able to create Fiber Channel targets with the QLogic HBA. All works well. My next storage adventure will be upgrading the array itself to beefier, more IOPS-scalable hardware. That is a 2011 project!
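For anyone curious what the SCST side looks like: with SCST 2.x the target definition lives in /etc/scst.conf and is applied with scstadmin. The fragment below is an illustrative sketch only; the device path, target WWN and LUN name are made up, and qla2x00t is the QLogic driver in target mode.

```
# /etc/scst.conf -- illustrative sketch, not an exact working config.
# Exports a software RAID block device as LUN 0 over Fiber Channel.

HANDLER vdisk_fileio {
        DEVICE raid6_vol {
                filename /dev/md0
        }
}

TARGET_DRIVER qla2x00t {
        TARGET 21:00:00:xx:xx:xx:xx:xx {
                enabled 1
                LUN 0 raid6_vol
        }
}
```

With the fabric zoned so the host HBA can see that target port, the ESX host picks up the LUN on its next storage rescan.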

If anyone asks: no more NFS or software iSCSI for me. Though VMware will say the contrary, those technologies just do not scale enough unless you have expensive EMC or NetApp gear with custom code to eliminate the VMware O_DIRECT problem. Those using open source such as BSD and Linux will be stuck with less-than-stellar NFS and iSCSI performance until those implementations are updated. For now, I'm sticking with Fiber Channel, and I'm not really looking at going back!

A little about power …

Power is probably the most important element in ensuring a stable computing experience, yet it is commonly taken for granted.  In today's modern data center, it is not uncommon to run into power limitations.  Vendor equipment is becoming more efficient all the time; HP even offers blade servers that allow for power capping.  These features allow cramped data centers to achieve even higher density.

Recently, I began to run into some basic power issues myself.  Here, I have a collection of UPS units, all from APC, ranging from models about 8 years old to newer, more top-of-the-line units.  Even though my computing density is not that high, reliable power delivery is a must, and being out in the country adds to this.  Over the last several weekends, and even some weeknights, there have been localized power outages in my area, some only a few seconds long, others 15 minutes or more.  One of these outages actually caused corruption on my storage array, and I lost a few VM's on XFS filesystems.  Even though XFS reported no errors, the files were damaged.  Moral of the story: make sure you have good UPS units.

I ended up purchasing 2 of APC's SN1000 UPS units from Tiger Direct.  These are currently $84CDN and are a great deal.  Originally, my older BackUPS 350 and BackUPS 750 were running the systems here, but as I added more networking and storage equipment, the runtime came down considerably.  The new units give me a runtime of about 30-40 minutes, considerably longer than the 3 to 10 minutes I had before.  The SN1000 is the older model of the SC1000; these originally sold for over $300CDN.

I plan to eBay the old UPS units; no sense keeping them.  I figure the $200CDN in higher-end UPS gear will save me hours of rebuild time the next time the power goes down hard.  If it can happen here, it can happen to anyone.