AlienVault OSSEC and Disconnected Clients … Fixing this the easy way!

For those who run AlienVault, the built-in HIDS is provided by the OSSEC system.  This HIDS works pretty well, but occasionally you might run into a problem where the client becomes “Disconnected.”  Ordinarily, this is not a problem; it just means the device is not on the network.

However, sometimes you will find in your OSSEC logs the dreaded “Duplicate counter for” message.  Most of the advice on various forums talks about extracting keys and re-installing the agents.  Not a great fix if you have more than a few agents to manage.

Here are some quick tips,

  • On the OSSEC/USM server, stopping the OSSEC system and removing all of the files in /var/ossec/queue/rids is OK if you have Windows clients.  They will rebuild the IDs all by themselves.  However, this is taking a hammer to a job where a rock will do.  Just remove the rids file matching the “ID” of the affected agent.  Do NOT restart the OSSEC server yet!
  • On the client with the OSSEC HIDS installed, remove all of the files in /var/ossec/queue/rids, then perform a /var/ossec/bin/ossec-control restart BUT make sure the OSSEC server is NOT running!
  • After you have fixed up the clients, start the OSSEC server.
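The client-side steps above can be sketched as a small shell function.  This is a hedged sketch: /var/ossec is the default install path, and `fix_rids` is just a name made up for illustration.

```shell
# fix_rids: clear a client's OSSEC counter (rids) files, then restart the
# agent.  Run this on the CLIENT, and only while the OSSEC server is stopped.
fix_rids() {
    ossec_home="${1:-/var/ossec}"          # default OSSEC install path
    rm -f "$ossec_home"/queue/rids/*       # remove all counter files
    # Restart the agent so it negotiates fresh counters with the server:
    if [ -x "$ossec_home/bin/ossec-control" ]; then
        "$ossec_home/bin/ossec-control" restart
    fi
}
```

On the server side, remember to remove only the affected agent's ID file rather than wiping the whole rids directory.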

Voilà!  If you do a /var/ossec/bin/list_agents -c you will see them connect after a few seconds.  You can check the /var/ossec/logs/ossec.log file and the Duplicate error will no longer appear!

Windows 10 Build 10586.. Microsoft and Time Travel!

For those who have upgraded to the latest Windows 10 build, there's a bit of time travelling going on at MS.  The “old” Command Prompt shows 2016 as the Copyright date, whereas PowerShell and other elements of the system show Copyright 2015.  Looks like someone got ahead of themselves!  Happy Almost New Year!

As a bonus, Cortana appears to finally be working for Canadians!  The “Siri”-like experience is upon us.

IPv6 and the need for IPAM

For many, the thought of moving to IPv6 is a theoretical exercise.  The thinking goes that the existing network, running IPv4 with a mixture of routable and private IP addresses, is more than enough for today and tomorrow.  Why complicate things?  The straight fact is that IPv4 exhaustion is coming, and organizations will need the appropriate “hands on” skills in working with IPv6.

For the unfamiliar, IPv6 is the successor to IPv4.  IP or “Internet Protocol” is the underlying technology that allows you to be reading this blog post.  At its simplest, IPv4 assigns your computer a location on the network, a location that allows routers a way to get information to and from your system.

One could then assume IPv6 is a simple upgrade, since TCP, UDP and ICMP continue to operate in a similar manner.  The short answer is “no.”  IPv6 uses a base-16 notation to denote its addresses, whereas IPv4 uses a dotted-quad base-10 notation.  For network admins and architects, the simple and familiar dotted quad becomes something like abcd:abcd:abcd:abcd:abcd:abcd:abcd:abcd.  With the address space being trillions of trillions of trillions of times bigger, concepts like NAT go away.  You do not need them when every star in every galaxy in existence could have its own 4.3 billion addresses.

Why no NAT?  There is no need, since every address is routable.  Applications, in addition to using ports, could use IPv6 addressing schemes for control and backplane operations as well as data transport.  Our method of layer 4 to 7 communication fundamentally changes.

To start on this journey, a good IPAM (IP Address Management) solution is needed.  Beyond spreadsheets or Active Directory, think of how your organization will handle this transition.  It is coming, and the sooner organizations prepare, the better.  IPAM brings the benefit of managing the IP space effectively; combine that with Software Defined Networking (SDN) and you get some very powerful ways to reduce the costs of transition and a better managed network out of it.

It’s best to start now, rather than later.  BlueCat provides some very robust software that happens to provide IPAM functionality and SDN components that take network and address management to the next level.  If you are thinking of modernizing your network, they should be on your list of products to review.

Follow-Up: Inexpensive FXO/FXS cards and Bell Canada Caller-ID

I promised an update on the status of the inexpensive FXO/FXS card I had ordered.

The card arrived in early May, after some very quick shipping.  The packaging was good, and the card came undamaged.  It is your typical Wildcard AEX410 card.

I mentioned trying this on VMware to see if I could virtualize it.  Well, as it turns out, even vSphere 5.5 cannot use this card in VT-d mode.  The card is a PCI design that sits behind a PCIe bridge, and that’s something VMware says will not work.  I tried a number of settings, but no luck.  The card would kernel panic the VM every time.

In early August, some lightning storms had the pleasure of taking out one of my trusty SPA3102s.  These are not the most amazing VoIP gateways, but they were good for Caller-ID.  I have struggled for years looking for an FXO system that will work with Bell Canada’s Caller-ID.  So far, out of all the products (AudioCodes, SPA, Grandstream, Wildcard), the only product that reads Caller-ID from Bell Canada is the SPA3102.

With the demise of my trusty unit, I put in a physical server to host my VoIP PBX, along with the Wildcard AEX410.  The Wildcard works just fine in that system with the same V2P (yes, Virtual to Physical!) converted system.  Since even the Wildcard will not read the Caller-ID, I have a replacement SPA3102 daisy chained to the Wildcard FXO port.  If the power goes out, the SPA will still let the call work, which is great.  So far, this combination gives me clear voice calls on the PSTN line, something the SPA itself cannot do, and I get Caller-ID.

The Wildcard works great, except that if you want Caller-ID in Canada, you will need to go with someone other than Bell Canada.

Inexpensive Asterisk FXO/FXS cards

Just a quick update to inform folks who could be looking for Asterisk-compatible FXO/FXS cards that, on eBay, ChinaRoby has them for the very low price of approximately $130 US for PCI-E versions with free shipping.  The price is less than $50 for PCI versions of the same.  Apparently there have been many others who have been getting in on this deal.  For comparison, the same card in Canada is almost $500.  So, there is a tonne of savings here.

I have two voice gateways that are getting a little long in the tooth.  I decided it was time to spend a little to upgrade to better, hardware-based FXO/FXS solutions.  The devices I have work OK, but there is a lot of echo at times, even after hours and days of tuning.  One device gives flawless voice, but Caller-ID does not work.  The other has great Caller-ID but tin-sounding voice quality.

I am planning to run this in a vSphere 5.5 setup with VT-d passthrough.  Here’s hoping the new card eliminates those problems, and two ugly power warts from my wall.  I will post an update once I get the cards and can comment on their performance.

Building your own Data Center … at home! Part 2

It’s been a while since I originally posted Part 1 of this series.  However, it’s time to fill in Part 2.  Part 1 covered the history of my “at home” Data Center.  In this Part 2, we will look at the choices made for the new Data Center I implemented at home between November 2012 and early 2013.

Power and Cooling

The number one enemy of all computer equipment is heat.  It is, however, a function of using electrical devices: as they use energy to perform tasks, they radiate heat.  There is no getting away from that.  In the case of a Home Data Center, this is critical.

A common solution is to use a fan to blow air around and make things cooler.  Well, a fan simply moves air; it does nothing to actually “cool” anything, except move warm air out, allowing cooler air to come in and absorb more energy.  However, in a small closet or area where people tend to put a collection of PCs, this cannot happen.  The warm air is constantly recycled, and as a result heat builds up.

What can we do to combat this?  It comes down to air circulation and the ability to replace warm air with cold air.  In other words, we have to be able to evacuate warm air, not simply push it around.  Simply putting a portable AC beside the equipment does not help, not unless there’s a way to vent the heat from the AC to the outside and set up some kind of feedback system where the air can maintain its temperature.  The main drawback of an AC doing this is that it will use a lot of extra energy, and it pretty much needs to run 24/7 to keep heat from building up.

Now, some will say, “I only have a couple PCs, this does not matter much.”  Quite the contrary.  Even a couple of PCs in a closet will radiate heat.  The older the PC, the more likely it is to give off more heat.  Those $50 P4 PCs at the local computer shop might be plentiful, and good as a firewall or NAS, but they will generate more heat than a state-of-the-art Mini-ITX system with a low-end Haswell processor.  In Part 1, I talked about my first lab being a collection of PCs in one room.  In Ottawa, where I lived, the summers can be VERY hot and VERY humid.  Think 35C and a humidex of 45+.

I had 5 PCs in one room; they were all old Pentium 200s and Celeron 400s.  Those machines put out a lot of heat, and being on the top floor of the place I had at the time made it VERY warm.  Eventually I moved them to a rack in the basement, and my top floor became bearable.  I had my first run-in with fans then, and learned the hard way, after one UPS died from heat exposure, that fans do not cool anything.

So even if you have a few PCs, know that they WILL put out heat.  Another consideration is NOISE!  A couple of PCs will have a slight hum, which you or your family may or may not find acceptable.  Enterprise servers from HP and Dell have loud fans designed to move a lot of air.  Enterprise network switches typically have fans too.  This WILL make noise.  Make sure to pick a location that can shield you from the noise.  More energy-efficient equipment makes this task much easier.

At my current location, I used to have several beige box PCs on a bakers rack.  The weight itself made it pretty scary; moreover, having systems spread over 3 levels with no way to manage heat led to several heat build-up issues.  I had a few lower-end consumer motherboards pop capacitors from the heat.  The systems that were “enterprise grade” never once had an issue.  The lesson here is that heat will kill systems.  It’s no fun waking up at 6am with a dead file server and dead VMs because a capacitor blew in the middle of the night.

Lastly, overloading circuits in a warm environment can lead to fires.  It’s absolutely critical that you make sure the wiring is done correctly.  For a couple of PCs, a small UPS is fine.  Power strips are not really safe.  For a bigger environment, look at getting some more expensive server-grade UPSes.  Not only do they let you have power during an outage, they will make your environment that much safer too.  Please get a certified electrician and an inspection if you need new circuits.  Yes, it will cost you more money, but your life could depend on this.  Never load any circuit beyond 50%; in fact, below 50% is safest.  If you are getting to 50% load on a circuit, you are likely pulling at least 900 W, and it’s time to consider better options if that is the case.  Your maximum safe power density at home is likely going to top out at roughly 1600 W across 3 x 110 V 15 A circuits.
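The 50% guideline above is easy to sanity-check with a little arithmetic.  This sketch assumes the 110 V, 15 A residential circuits the post uses; your panel may differ, so verify with an electrician.

```shell
volts=110
amps=15
circuit_watts=$((volts * amps))      # 1650 W at full load
safe_watts=$((circuit_watts / 2))    # 825 W at the 50% guideline
echo "Full rating: ${circuit_watts} W per circuit"
echo "Stay under:  ${safe_watts} W per circuit"
```

At 120 V nominal, the same math gives 1800 W and 900 W, which is likely where the ~900 W figure above comes from.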

What has this experience taught me?

  • Proper racking for circulation is a must.  Spreading systems over a few racks with no way to control the ingress and egress of air is just bad for hardware.
  • Making sure that the racking has proper circulation is also a must.
  • Place the systems in an area where heat energy can dissipate, and you have the ability to egress warm air and ingress cooler air.
  • Because the home is always a warmer environment, unless you modify the house itself to house a data center, buy equipment that is rated for 45C+ continuous operation.
  • Be power conscious.  More power = more heat; more power = a higher electricity bill.
  • Get a certified electrician to install any wiring you need and get it inspected. DO NOT SKIP THIS!
  • Assume you will get equipment failure, eventually heat will kill a component, it is only a matter of time.

For the last bullet, making purchasing choices based on reliability and cost is important.  Yes, at home, budgets are tight.  If you want to run something 24/7, every day, think of the consequences.

Weight Considerations

Another issue to look at is weight.  If you are going to go with racking, understand that you now will have a heavy load over a section of your floor.  Now, while most wooden floors are engineered to hold a lot of load, they are NOT engineered to hold a rack full of computer equipment.  A couple of PCs and a monitor, sure.  But if you start getting into anything more exotic, especially when it comes to storage, then be prepared to manage the weight.  If you are going to get a real server rack, make sure your floor is rated for at least 2000 lbs.  That way, you can load it up to your heart’s content.

As I mentioned before, I previously had equipment on a bakers rack.  That rack had some HP DL servers (585s and 380s) and a couple of disk enclosures.  I also had a few other PCs at the time and a monitor, plus network switches.  The bakers rack could handle about 600 lbs a shelf.  It was not the most stable thing, especially if your floor has a slant.  There were many times I wondered just how safe I was messing around with equipment from the back or front.  A word to the wise: DO NOT mess around with bakers racks or shelving from Ikea.  They are just not designed to hold more than one PC or so.  I opted for real server racks, since I could control the load and manage the cooling a lot better than with any other option.

Managing weight tips,

  • If you are going to put more than a few servers in a condensed space, move it to the basement or a concrete floor.
  • Try to understand your weight requirements.  If you are thinking of storage or other exotic servers at home, they will weigh more and a wood floor, 2 stories up, is likely not safe.
  • If you use any racking, load the heaviest loads at the bottom to lightest at top.
  • Know that once you start loading your rack or space up, moving equipment is HARD! Think out your cabling requirements ahead of time before putting your equipment in its final location.
  • You will need about 25 sq ft of space for a 42U rack.  This includes space in front, to the sides and behind the unit.  Keep in mind that you will need to service equipment at some point, so you will need the room to get in there and make changes.  This should be thought of as very important since disassembling everything is not likely to be easy.
  • If you are going to be using “enterprise” servers like HP’s or Dell’s, a server rack is your best option, since it can safely mount the equipment with minimum fuss or risk.

Server Consoles

Unless you want a very basic setup, a KVM is a recommended choice.  Simple KVMs allow one monitor, keyboard and mouse to be shared across a few PCs.  A simple $20 KVM will work for most cases where 1 to 4 PCs are needed.  I got by with that setup for years.  You can rack up your PC monitor.  It will take some room, but if expansion is not that important, a monitor on a shelf in the rack, or on the table where the PCs are, is a good idea.

If you are going with a rack option, you might want to explore a couple of interesting options.  First, rackmount monitors and keyboards are easily available on eBay and other locations.  They will range from $100 to the sky’s the limit.  I would recommend an inexpensive HP unit if you are going this route.  I decided to go with an LCD rackmount monitor, a separate keyboard tray and an old HP/Dell 8-port KVM.  This made the monitor nice and clean looking, and it also let me do some “cool” looking configurations once it was mounted.  You can find them on eBay for as low as $125 on occasion.  This will cost about the same as an all-in-one monitor/keyboard configuration.  The LCD I have cannot do gaming, but its 1024×768 resolution is good enough for installations and working with the consoles if needed.

Not all KVMs are created equal.  If you have a KVM that is for PS/2 mice and keyboards, note it may not work at all with a USB-to-PS/2 adapter for a keyboard or mouse.  This may require purchasing a different KVM or expensive converter cables.  Old equipment tends to work fine, while newer, non-PS/2 equipment typically will have some kind of quirk with PS/2-only KVMs.

On all of my servers I opted for IPMI.  I can manage the power and the console directly from my PC across the network.  This is a lifesaver, as there’s no need to head to the basement to fiddle with the console.  On older PCs, this might not be an option.  You can look for “IP KVM PCI” on eBay for help, and there are some older boards that are generally OK.  I do strongly recommend this option if you have a little extra money to spend.  Otherwise, and I did this for years, trips to the basement are a-OK.  However, once you go IPMI, you will never want to go back!

The platform

Since we have now covered off the facility portion of building a data center at home, we can switch focus to the platforms we want to run.  Generally speaking, many enthusiasts and IT pros will want to run a mixture of Windows, Linux, FreeBSD and possibly Solaris at home.  These platforms are good to learn on and provide great access for skills development.

We generally have a few choices when it comes to the platform if we want to use multiple operating systems.  One approach is using actual equipment dedicated to the task.  For example, setting up a few physical PCs running Linux is pretty easy to do.  Getting an older machine to run Solaris is easy enough to do as well.  Depending on the size of your home lab, it may make sense to dedicate a couple of systems to this, especially if you want experience loading hardware from bare metal.

However, this setup comes at a price.  Expansion requires more hardware and more power usage.  So over time, the costs of running it will go up, not to mention prices for different types of equipment can and do vary.  A simple choice that will work on most PCs used in the home in the last couple of years is virtualization.

There are a tonne of good virtualization platforms for you to choose from.  I myself use a combination of VMware ESXi and Microsoft Hyper-V 2012R2.  What you want to do with your environment will dictate what platform you choose.  I would strongly recommend against using a 5-year-old PC for this.  Best bang for buck will come from using a relatively new PC with a good, fast SATA hard drive and at least 8GB of RAM.  16GB is pretty much the minimum I recommend for anyone, as it’s enough to host about 8 or so VMs with OK performance.  Keep in mind that if the VMs are lightly used, i.e. one thing at a time, a SATA hard drive will be OK.  However, it will not be fast.  I would recommend using VirtualBox or something similar if you want to occasionally dabble but would like to use that PC for something else.  Hyper-V in Windows 8 is pretty good too.

My best advice for choosing a platform is,

  • Decide if you are just going to do a few things at a time.  If so, a PC or two dedicated to blowing up and reinstalling might be the cheapest and simplest option.
  • If you want to run multiple operating systems at the same time, look at one of the free hypervisors out there.  If your use is simple and lightweight, I always recommend going with VirtualBox.  If you require something more robust or complex, the free versions of Hyper-V and VMware ESXi will work just fine.
  • At least 16GB of RAM will make your environment fairly usable if you want to run up to around 8 VM’s at a time.
  • Disk performance will never be fast on a PC with a single hard drive.  Memory and storage speed will cause the biggest roadblocks when setting up a home lab.  Generally, the more memory and hard drives, the better, but keep in mind this is for home.

Some people are fans of the HP Microserver.  I definitely recommend this product.  It’s simple, well supported and gives you good longer term performance for a decent price.  There are other products on the market, but the HP Microserver is by far one of the best you can get for a home lab.

Storage and Networking

The heart of your home lab will be your storage and your network.  These are areas often overlooked by people building their own home data center.  A switch is a switch!  That 4-disk NAS works great!  Think again.

Not all network switches are created equal.  While I would not advocate spending a tonne of money on a large Cisco or HP switch, good networking will improve the performance and reliability of your home setup.  For example, that cheap $40 router that does WiFi, USB and, say, cable internet is likely not very fast.  What’s worse, it’s likely not rated for a warm environment, and such devices have a tendency to die or act flaky under load.  I once had a DSL modem that only acted right if cold peas surrounded it.  Not a good idea!

To combat this, I do recommend spending a little more on network equipment, including routers and switches.  Be careful to read the specs.  99% of the switches for the “home” or “pro” market will not deliver 1Gbps per port.  I was looking a while back, and I noticed a 48-port switch for nearly $150.  This looks like a great deal until you look at the specs.  It was only rated for 16Gbps of switching capacity.  This means only 8 of the 48 ports will get “full speed” (network speed is duplex, meaning the speed is in both directions).  So if you see switching capacity in Gbps that is less than twice the number of gigabit ports, the performance is not going to be great.  Awesome deal for $150, but not for performance.  The $300 24-port managed switch I picked up provides 48Gbps, which is 1Gbps each way per port, meaning I’ll get fairly consistent performance from the switch.  If you can, try avoiding “green” consumer switches.  They will drop performance all the time, and many hypervisor vendors will tell you the performance will stink.  It has to do with how power saving is implemented.  Enterprise switches with “green” features, on the other hand, will save you money.  My “green” enterprise switch has saved over 138000 watt-hours over 9 months.  That’s about $165 where I live.
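The switching-capacity math above is worth writing out.  A quick sketch, using the port counts and fabric ratings from the two switches in this post:

```shell
# Gigabit Ethernet is full duplex, so a non-blocking switch needs 2 Gbps of
# switching fabric per port.
ports=48
fabric_gbps=16                          # the $150 bargain switch
line_rate_ports=$((fabric_gbps / 2))    # only 8 ports can run at full speed
needed_gbps=$((ports * 2))              # 96 Gbps for a true non-blocking 48-port
echo "$line_rate_ports of $ports ports at line rate; $needed_gbps Gbps needed"
# The $300 24-port switch: 24 x 2 = 48 Gbps, exactly what it provides.
```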

The same can be said for storage.  Aside from memory, storage is the single most important part of a data center.  Whether at home or in the office, we all need storage.  The faster and more reliable the storage, the better our lives are.  Generally, IMHO, those inexpensive 4-disk NAS units are OK enough for running a small number of VMs and hosting files for the home.  The processor will limit the performance, and typically I would recommend a unit that creates “iSCSI” disks.  This will give you the best performance.  If it only offers SMB or CIFS access, the performance will be OK, but you will need to use VirtualBox or Hyper-V, as VMware ESXi does not support it.  The maximum connectivity most of those inexpensive units provide is a 1Gbps connection.  Since the processor is usually slower, you will not always get that speed, especially if you have various RAID options enabled.  Expect to get around 70% performance on that link, and know that if you have multiple VMs trying to update or run at the same time, the performance is going to make you want to go get a coffee.  I always advise purchasing the best storage for your budget and needs.  I do recommend the 4-bay NAS units from Seagate, Western Digital, Drobo and Thecus.  You will get good performance at an OK price.  The type of hard drive to use in these setups is not really an issue, since you will not be able to max out the performance anyway.  Purchasing 4 “green” drives with a lower capacity will get you about the same as 4 faster drives of the same size.  Remember, these units have some weight, and the more disks you have, the more they weigh.  I have 28, since I need to maximize random performance.
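That 70% figure translates to real-world numbers like this.  A rough sketch; actual throughput depends on the NAS processor and RAID level:

```shell
link_mbps=1000                        # a 1 Gbps NAS connection
wire_MBps=$((link_mbps / 8))          # 125 MB/s theoretical maximum
real_MBps=$((wire_MBps * 70 / 100))   # ~87 MB/s at the 70% estimate
echo "Expect roughly ${real_MBps} MB/s from a busy 1 Gbps NAS link"
```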

Finally, a word on RAID

RAID does not mean no backups.  It just means that if a drive does die in your NAS, it will be able to keep running and not lose any data for the time being.  “For the time being” is the important part.  Over the years, I cannot tell you how many times I lost data that I thought was safe, either on a disk somewhere as a backup or with RAID.  Make sure you have a plan to keep multiple copies of the data that is important to you, and do not trust it all to one location.  Something as simple as bad hardware can wipe it out.

Make sure you take regular backups of your data and keep the backup in a safe location.  I do recommend using a Cloud backup for your NAS if possible.  Some people have data caps with their ISPs that prevent this.  I would still recommend keeping multiple copies of your most needed documents in a minimum of 2 locations, preferably at least 3 to be safe.  Test your backups on a regular basis.  Nothing is worse than going to restore something and finding it does not work.

I use Windows Server 2012R2 Essentials for home backups of my PCs.  The data is then copied from the home backup server to another location for safekeeping.  This way I have 3 copies: my PC, my home backup server, and my disaster recovery location.  This method has saved me numerous times.  I can always go back and re-create my PC from any time in the last 6 months.  So if I install a bad Windows patch, it’s 20 minutes later and I’m back to the previous configuration.  No muss, no fuss.

I take data protection very seriously.  As I have mentioned before, equipment will fail.  Are you ready?

Closing Thoughts on Part 2

This was a pretty high-level look at the factors I used to design the Home Data Center I have. Key elements include,

  • Power & Cooling
  • Racking
  • Server Consoles & Remote Access
  • The Platform: To Virtualize or not?
  • Storage and Networking

There are other choices as well, but these are the major ones to consider.

In Part 3 I will step you through the build and explain how things are configured for me today.

Building your own Data Center … at home! Part 1

For many enterprises, building a data center is a major undertaking.  It takes planning, an understanding of IT and facilities planning, and most importantly, the proper budget to execute.  When this shifts from the corporate or enterprise environment, however, most setups in the home office are cobbled together.  Not as much thought typically goes into what IT pros use at home.  Maybe it is because, having a large Data Center at work, one feels the need not to spend as much time and effort on servers that are just for “playing around” or “for learning” purposes.  Well, the time has come to change all of that!  Read on for a bit of history on how I came to build my own Tier 1 Data Center at home…

A little bit of history…

Like many IT pros, over the years I have had many incarnations of the “data center.”  My first was back nearly 18 years ago: a 486DX33 connected to a 386DX40, both running Windows 95 (and before that, OS/2!).  The whole thing worked with a 75ft Ethernet cable and a 10 Mbit/s half-duplex hub.  The kind that shows the activity on it.  While it was not fast, it did let me share files from one computer to another.  At the time, I was working at a small computer place that was using those old coax ARCnet adapters.  It worked OK for them, and I wanted something similar at home.

Since then, I have had a collection of machines: SPARC LXes, SPARCstation 5s, and a collection of Pentium 166/200 machines.  A big upgrade was the Abit BP6, which let 2 (!!) Celeron 400s run in SMP with Linux.  Eventually those machines became some separate boxes that I used to develop Linux applications on.  They all sat on the floor of one bedroom of the place where I lived in Ottawa.  It was cute, but I learned the hard way about power and cooling.  While the power bill for 4 computers running was not bad, the HEAT put out by the machines was terrible.  The place in Ottawa did not have A/C, and, well, it made for summers that were warm.  If there is one lesson to take from this blog post, remember… “Fans do not cool anything, they just move air.”  The movement of air takes the energy and disperses it in the space.  So, in the summers in Ottawa I would have a fan running full blast with a window open for venting.  Not great for keeping humidity out and me cool!  I lost more than one power supply over the years at that location.

The servers eventually moved down to the basement into IKEA wooden racks, which was a large improvement over being in the other bedroom.  The large open space let the heat dissipate, and the house as a result was much cooler.

I kept this general setup when I moved to Niagara for a couple of years.  Then, in late 2007 to early 2008, I started really planning out my server collection.  I did start to apply IT principles, and at this point, things were much better.  I picked up a large bakers rack from Costco, and the servers were upgraded to Q6600s, since the motherboards and the CPUs themselves were dirt cheap ($187 CDN/CPU in October 2008!).  I still have those motherboards and CPUs to this day… and they still work.  Each of the servers had a maximum of 8GB of RAM.  I was going to do a fan-out model if I needed more capacity.  I intentionally chose CPUs, memory and motherboards that were “long life” components.  In 2009, nearly 12 months after putting things together, I cobbled together a 3rd VM server running VMware.

It was during this time in Niagara I learned a few things.  Heatsinks can and do come off running CPUs, VMware 3.5 and VMFS3 liked to lose your data, and never trust 10-year-old CD-ROMs (yes, CD-ROMs, not DVDs) for backup.  When developing the new setup, I wanted some redundancy and better networking.  That design spawned the 3-level network I use today: a production network, a DMZ network, and a storage or backup/black network.  There was no lights-out management, but the setup ran fairly happily.

The storage solution was using Gigabit Ethernet and NFS.  Good old Linux.  One of the big challenges was the performance.  Even with separate networks, the best the NFS server could give was anywhere from 20 to 30MB/s for reads and writes.  Not stellar, but enough to allow me to run 8 to 12 VMs.  All the servers were on the bakers rack, and for the time it seemed OK.  However, the I/O performance was always a problem.

And so began my quest…. Performance Performance Performance

Working for HP does have some advantages.  You get internal access to some of the smartest people anywhere, and there are thousands of articles on performance for various technologies.  I began to put my HP knowledge to use, and started slowly doing upgrades to improve the storage performance.

In addition, when I was a full-time consultant, I either worked at home or at a customer location.  As a result, my home internet and network needed to be running 24/7.  If not, it meant loss of work time and potential disruptions to clients.  Over the years I had been using a number of inexpensive D-Link routers for my internet connection.  In the summer, the heat would rise just enough to randomly lock up the router.  In 2010, I decided it was finally time to start using eBay, and little did I know eBay would become a source of great inspiration, learning, and, well, a strain on the wallet too!  After a lot of investigation, I purchased a Cisco 851.  This little router has just enough IOS to let you learn it, and runs well enough to pretty much forget about your Internet connection.  That was probably one of my best tech purchases.  Even after 3 years, that router is still running just fine.  It has had uptimes greater than one year.  It truly is set and forget.  Since then, I have never had an issue with my Internet connection.

Off of the success of my new router, I decided that it was time to upgrade the network.  Being a good HP'er, I wanted something "Enterprise ready", but not something that would break the bank.  Since HP employees in Canada get no discounts on server, storage or network gear, it would be a purchase of faith in my employer.  I ordered the HP 1810G-24. About 5 days later, the switch was in my hands.  After all of 15 minutes of configuring it, I had my 3 VLANs up and running.  I then quickly swapped out my old D-Link "green" Gigabit switches for the HP.  After a total of about 20 minutes I had gone from an unmanaged network to a mid-market branch-office network.  There was a performance improvement.  My NFS performance increased to around 45MB/s.  That was nearly double, in some cases, what it was on the D-Links.  It just goes to show that what networking you install does make a difference.

A few months passed, and I was still not happy with the storage I/O.  While better, boot storms would easily overwhelm the connections, and trying to boot more than one server at a time was painful.  I had purchased an eSATA enclosure, and was running 4 relatively fast SATA drives, but performance was well below Gigabit speeds.  The file server itself was an older AMD 3500, single-core machine with 2GB of RAM.  Not fast, but I would have thought fast enough for better than 45MB/s network performance in VMware.

So, in my hunt, I decided it was time to take the Fiber Channel plunge.  I read about several Fiber Channel packages for Linux, and SCST turned out to be the best one for my needs.  Fast, scalable, easy to install.  Perfect!  My eBay habit kicked into high gear and I picked up an old Brocade Silkworm 3852.  16 x 2Gbit/s of Fiber Channel goodness.  I also picked up a lot of 4 Brocade/QLogic Fiber Channel HBAs.  On the spur of the moment, I also picked up an old NetApp 2Gb/s Fiber Channel disk shelf with 2TB of capacity split over 14 disks.  I created a software RAID 6 array, installed SCST, configured the zones on the switch… and voilà!  The performance in a virtual machine went from 45MB/s to 180MB/s.  I could now actually use the servers at home.  This was a new lease on life for the equipment, and during that time I was able to upgrade to Exchange 2010 and Windows Server 2008 R2; I even had a working implementation of OCS R2 as well.  What a difference storage makes.
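For anyone curious what the SCST side of that looks like, exporting a software RAID device over a QLogic HBA takes only a few lines of /etc/scst.conf.  Treat this as a sketch: the md device name, vdisk name, and target WWN are all placeholders, and the exact syntax depends on your SCST version (this is the scstadmin 2.x style).

```
# Export the RAID 6 md device as a single LUN via the blockio vdisk handler.
HANDLER vdisk_blockio {
        DEVICE raid6_disk {
                filename /dev/md0
        }
}

# qla2x00t is SCST's target-mode driver for QLogic HBAs.
TARGET_DRIVER qla2x00t {
        TARGET 50:01:43:80:00:00:00:01 {
                enabled 1
                LUN 0 raid6_disk
        }
}
```

With the zone on the Brocade switch mapping the initiator to that target WWN, the VMware hosts simply see a new Fiber Channel LUN.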

In Part 2, we will talk about the planning, and design for the new and improved Home Data Center.  In Part 3 we will discuss the actual equipment, choices made, and the performance today.

Windows Server 2012, VDI and nVidia GTX 650

So it has been a while since my last post.  Many interesting things have been going on, but let's chat about one of the more interesting technology projects I have been up to.

For years now, I have been a fan of VDI solutions.  I believe they offer customers the ability to significantly reduce spending on IT operations and maintenance for PCs.  Of course, VDI removes the large "fat" PC at the desk, and replaces it with a "thin" client that uses backend server power to render the screen.

When I first got this working in 1997 with X11 and Linux, I thought it was very cool and interesting.  I have played with Citrix Metaframe back in 1999, and Windows NT Terminal Services Edition too.  That dates me doesn’t it?

Well, fast forward to 2012.  I wanted to take the opportunity to move my PC at home to a "thin" client infrastructure.  Using Windows Server 2012, Hyper-V and a Windows 7 VM of my desktop, I gave it a try.  Things I can confirm:

  • RemoteFX DOES work with a GeForce GTX 650.  As advertised, it works with nVidia's latest drivers (WDDM 1.2, DirectX 11) and Windows Server 2012.  I could not make this work with Windows Server 2008 R2.
  • AMD Radeon 4870s do not work with RemoteFX on Windows Server 2008 R2 or Windows Server 2012.  Just too old, and AMD calls this a "legacy" card.  Since this card is not DirectX 11, I expected it would not work with 2012, but not working with Windows Server 2008 R2 was a bit of a surprise.
  • RemoteFX with a vGPU works for most applications.  I have a modern, fully switched, enterprise-class network, and where the system faltered was on video: Flash videos and YouTube HTML5.  While it works when you have it in a window, it does not work so well in full-screen mode.  QuickTime videos play, but there is a lot of tearing at 720p.

So for me, VDI at home is not quite ready for prime time.  Before anyone says “you just need more horsepower”, this was on a Xeon 1620 system with 32GB of RAM.  That should be more than enough to host one desktop with good performance.  Otherwise, performance was ok.

For my thin client testing, I borrowed an HP t610 Thin Client.  It works fabulously.  It is quiet, and you do not even notice it is running.  HP has a winner with that little machine.

Here is hoping SP1 for Windows Server 2012 improves the performance so I can try this again.  In the meantime, my desktop will stick to its high-fat diet.

Cisco 7960 & Asterisk – This is what IP telephony should be!

Folks who know me will understand I have this passion for most technology items that are either telephony or network related.  This goes back to my youth, where I was enamored by the magic of 2400 baud modems and BBSs.   It is also one of the reasons I went to work for Nortel for so many years.

For the past 2 years I have been running my own IP PBX using Asterisk and a combination of interesting telephony devices to make and receive PSTN calls.  I have been using a Bluetooth headset for work-related calls, but after having my 3rd headset die in under two years, it's clear they are not meant to be daily drivers.  So, I went out and picked up one of the IP phone products that my customers use quite frequently: the Cisco 7960 IP Phone.

Yes, I am an HP guy, and I tried to get one of the HP 41xx series phones, but no dice.  The HP products are great; they just require the use of MS Lync 2010.  While I do support Lync here, in an effort to have a working phone for work I went the Cisco eBay route.  For $100 including the power brick, I have an almost-new phone on my desk.

The difference between this IP phone and my other ATAs and softphones is clear.  Not only did the phone require less than an hour to get running in SIP mode, I have had people comment on the quality of the sound in both speakerphone and handset modes.  The sound is clean, clear and crisp.  I usually had some complaints of echo; not so with the Cisco phone.

Using Endpoint Manager in Asterisk and FreePBX made getting the phone up and running a cinch.  I already had all of the SIP firmware images for the 79xx series, so it was just grab and go.  I only had to change the extension to "nat=no" and voilà, the phone registered no problem.
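For reference, that setting lives in the extension's chan_sip peer definition.  Here is a minimal sketch; the extension number and secret are placeholders, and a FreePBX-generated config will carry many more options than this:

```
[7960]
type=friend
host=dynamic
secret=changeme        ; placeholder -- set a real SIP secret
nat=no                 ; phone is on the local LAN, so don't rewrite SIP headers
qualify=yes            ; periodically verify the phone is reachable
```

With nat=no, Asterisk trusts the addresses in the phone's SIP headers, which is exactly what you want when the phone and PBX share a LAN with no NAT in between.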

My only complaint so far: no backlight!  I do wish the screen had one, and more expensive phones do have the feature, but for $100 I'm not going to complain.

All in all, this is what IP telephony should be.  Simple, and easy to use!

Squid vs Apache: Reverse Proxy Champ?

So it begins.  Another year of making the slow transition to a new version of RHEL.  With it comes refreshing some of the basics for the infrastructure here.  I traditionally use an Apache reverse proxy for filtering content into the server subnet here.  It is fast and has served the purpose for the last two and a half years.  The trouble has been that moving to newer versions of Apache has typically not happened, since the configuration file is several hundred lines long and debugging it is a pain at 2am on a Sunday night.

I decided to take the plunge and moved things over to Squid.  I had a hack in Apache that let me do HTTP AND HTTPS.  Squid supported everything out of the box.  The difference?  17 lines for Squid, and 744 for Apache.
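A Squid reverse-proxy (accelerator) configuration really is that compact.  As a sketch, with the site name, certificate paths, and backend address all placeholders, the core of it looks something like this:

```
# Accept HTTP and HTTPS as an accelerator (reverse proxy).
http_port 80 accel defaultsite=www.example.com
https_port 443 accel cert=/etc/squid/server.pem key=/etc/squid/server.key defaultsite=www.example.com

# The backend web server on the internal subnet (placeholder address).
cache_peer 192.168.1.20 parent 80 0 no-query originserver name=backend

# Only forward requests for our own site; deny everything else.
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access backend allow our_site
http_access deny all
```

The accel/originserver pair is what flips Squid from forward-proxy to reverse-proxy mode, and the final deny keeps it from being abused as an open proxy.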

So far so good.  It has some quirks, but nothing that cannot be worked around.  If you are considering Squid for a reverse proxy, you should not be disappointed!

Now just to keep figuring out RHEL 6.0 ….
