Category Archives: Microsoft

Windows 10 Build 10586.. Microsoft and Time Travel!

For those who have upgraded to the latest Windows 10 build, there's a bit of time travelling going on at Microsoft. The “old” Command Prompt shows 2016 as its copyright date, whereas PowerShell and other elements of the system show Copyright 2015. Looks like someone got ahead of themselves! Happy Almost New Year!

As a bonus, Cortana appears to finally be working for Canadians!  The “Siri”-like experience is upon us.

Windows 8.1 Update 1: What Windows 8 should have been

Windows 8.1 Update 1 was released on April 8, 2014. This update brings a number of new features to Windows 8, including, finally, the ability to pin Metro applications to the Taskbar on the Desktop. Ironically, this is also the day that Windows XP was officially retired by Microsoft. Clients have been moving with much haste to Windows 7, preferring to leave Windows 8 for later. This primarily has to do with the amount of change Windows has undergone between Windows 7 and 8. For many customers, familiarity and usability will be king in the corporate environment. In many ways, with Update 1 of Windows 8.1, that familiarity and usability are coming back.

The next update will bring a return of the old “Start” button. The new twist is that it will be a mash-up of the new Start Screen, with items such as live tiles, and the old cascaded menu behaviour that Windows 7 and XP users have grown to love.

But, for the moment, is it time to look at Windows 8.1 in the corporate environment? Yes. Most customers already on Windows 7 should be planning a move to Windows 8.1 sometime in the next 18 to 24 months. I recommend this time frame because, by then, Windows 8.1 and its future updates will finally provide the consistency that enterprise organizations need. This is not a make-work project: many of the newest devices coming out need Windows 8 functionality to perform at their best. Touch on Windows 7 is terrible in comparison.

Many organizations may opt for so-called “long-life” equipment from the big vendors. “Long-life” equipment will push back the need for Windows 8, Windows 9 or Windows X for 5 to 6 years; however, this may cost them more in the end. We are at an inflection point in terms of human-machine interaction. Touch is just the first step towards gesture control on the PC. For desktop users this will not be much of a selling point; however, gesture-controlled applications such as mapping and interactive presentation systems will put pressure on IT departments to meet this need. It can be solved with special one-off cases, but this again increases the cost of supporting a platform such as Windows 7.

The good news is that migration from Windows 7 to 8.1+ is much easier than it was from XP to 7. Applications and drivers should be mostly compatible, and the real planning should go into helping users with new features, such as when to use the new Start screen, or implementing features like BitLocker. Choosing to stay on Windows 7 for the long term? You have until January 14, 2020 before it reaches the same status as XP, but keep in mind that mainstream support for Windows 7 ends in January 2015, after which you will need to pay Microsoft for support requests.

As Windows 8.1 continues to mature, it will leave behind the “Vista” status that Windows 8 had.  Windows 8.1 with Update 1 really does feel more like Windows 9 than Windows 8.

Do not forget, Windows Server 2012R2 just got the update as well. Though not as publicized, this update includes Active Directory changes to better support Office 365 deployments.

Building your own Data Center … at home! Part 2

It's been a while since I originally posted Part 1 of this series. However, it's time to fill in Part 2. Part 1 covered the history of my “at home” Data Center. In this Part 2, we will look at the choices made for the new Data Center I implemented at home between November 2012 and early 2013.

Power and Cooling

The number one enemy of all computer equipment is heat. It's an unavoidable consequence of using electrical devices: as they use energy to perform tasks, they radiate heat. There is no getting away from that. In the case of a Home Data Center, this is critical.

A common solution is to use a fan to blow air around and make things cooler. Well, a fan simply moves air; it does nothing to actually “cool” anything, except move warm air out so cooler air can come in and absorb more energy. However, in a small closet or area where people tend to put a collection of PC's, this cannot happen. The warm air is constantly recirculated, and as a result heat builds up.

What can we do to combat this? It comes down to air circulation and the ability to replace warm air with cold air. In other words, we have to be able to evacuate warm air, not simply push it around. Simply putting a portable AC beside the equipment does not help unless there's a way to vent the heat from the AC to the outside and set up some kind of feedback system so the air can maintain its temperature. The main drawback of an AC doing this is that it will use a lot of extra energy, and it pretty much needs to run 24/7 to keep heat from building up.
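
To put some numbers on that: every watt the equipment draws eventually becomes heat the room or the AC has to remove. Here is a minimal sketch of the conversion; the 500W figure is an assumed example, not a measurement:

```python
# Every watt the equipment draws ends up as heat the room has to shed.
# Rule of thumb: 1 W of continuous load is roughly 3.412 BTU/hr of heat.
LAB_DRAW_W = 500          # assumed continuous draw for a small home lab
BTU_PER_HR_PER_W = 3.412

heat_btu_hr = LAB_DRAW_W * BTU_PER_HR_PER_W
print(f"{LAB_DRAW_W} W of gear gives off roughly {heat_btu_hr:.0f} BTU/hr of heat")
```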

Now, some will say “I only have a couple of PC's, this does not matter much.” Quite the contrary. Even a couple of PC's in a closet will radiate heat. The older the PC, the more heat it is likely to give off. Those $50 P4 PC's at the local computer shop might be plentiful, and good as a firewall or NAS, but they will generate more heat than a state-of-the-art Mini-ITX system with a low-end Haswell processor. In Part 1, I talked about my first lab being a collection of PC's in one room. In Ottawa, where I lived, the summers can be VERY hot and VERY humid. Think 35C and a humidex of 45+.

I had 5 PC's in one room, and they were all old Pentium 200's and Celeron 400's. Those machines put out a lot of heat, and being on the top floor of the place I had at the time made it VERY warm. Eventually I moved them to a rack in the basement, and my top floor became bearable. I had my first run-in with fans then, and learned the hard way, after one UPS died from heat exposure, that fans do not cool anything.

So even if you have only a few PC's, know that they WILL put out heat. Another consideration is NOISE! A couple of PC's will have a slight hum which you or your family may or may not find acceptable. Enterprise servers from HP and Dell have loud fans designed to move a lot of air. Enterprise network switches typically have fans too. This WILL make noise. Make sure to pick a location that can shield you from the noise. More energy-efficient equipment makes this task much easier.

At my current location, I used to have several beige-box PC's on a baker's rack. The weight itself made it pretty scary; worse, having systems spread over 3 levels with no way to manage heat led to several heat build-up issues. I had a few lower-end consumer motherboards pop capacitors from the heat. The systems that were “enterprise grade” never once had an issue. The lesson here is that heat will kill systems. It's no fun waking up at 6am with a dead file server and dead VM's because a capacitor blew in the middle of the night.

Lastly, overloading circuits in a warm environment can lead to fires. It's absolutely critical that you make sure the wiring is done correctly. For a couple of PC's, a small UPS is fine. Power strips are not really safe. For a bigger environment, look at getting some more expensive server-grade UPSes. Not only do they keep you powered during an outage, they will make your environment that much safer too. Please get a certified electrician and an inspection if you need new circuits. Yes, it will cost you more money, but your life could depend on this. Never load any circuit beyond 50%; in fact, below 50% is safest. If you are getting to 50% load on a circuit, you are likely pulling at least 900W, and it's time to consider better options. Your maximum safe power density at home is likely going to top out at roughly 1600W, which is roughly three 110V 15A circuits.
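
As a rough illustration of that 50% rule, here is a minimal sketch; the voltage, breaker rating and device wattages are assumptions for a typical North American setup, so check your own panel and local code:

```python
# Rough circuit-load sanity check for a home lab.
# Assumed values: 120V nominal, 15A breakers, 50% safety target (per the rule above).
VOLTS = 120
BREAKER_AMPS = 15
SAFETY_FACTOR = 0.5

circuit_capacity_w = VOLTS * BREAKER_AMPS           # 1800 W theoretical per circuit
safe_load_w = circuit_capacity_w * SAFETY_FACTOR    # 900 W, the "time to rethink" point

# Hypothetical lab: wattage drawn by each device (measured or from the nameplate)
devices_w = [350, 250, 120, 90, 45]                 # servers, NAS, switch, router, etc.
total_w = sum(devices_w)

print(f"Per-circuit capacity: {circuit_capacity_w} W, safe target: {safe_load_w:.0f} W")
print(f"Lab draw: {total_w} W -> circuits needed at 50% load: {total_w / safe_load_w:.1f}")
```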

What has this experience taught me?

  • Proper racking for circulation is a must.  Spreading systems over a few racks with no way to control the ingress and egress of air is just bad for hardware.
  • Making sure that the racking has proper circulation is also a must.
  • Place the systems in an area where heat energy can dissipate, and where you have the ability to exhaust warm air and bring in cooler air.
  • Because the home is always a warmer environment, unless you modify the house itself to house a data center, buy equipment that is rated for 45C+ continuous operation.
  • Be power conscious.  More power = more heat, and more power = a higher electricity bill.
  • Get a certified electrician to install any wiring you need and get it inspected. DO NOT SKIP THIS!
  • Assume you will get equipment failures; eventually heat will kill a component, it is only a matter of time.

For the last bullet, making purchasing choices based on reliability and cost is important. Yes, at home, budgets are tight. If you want to run something 24/7, every day, think of the consequences.

Weight Considerations

Another issue to look at is weight. If you are going to go with racking, understand that you will now have a heavy load over a section of your floor. Now, while most wooden floors are engineered to hold a lot of load, they are NOT engineered to hold a rack full of computer equipment. A couple of PC's and a monitor, sure. But if you start getting into anything more exotic, especially when it comes to storage, then be prepared to manage the weight. If you are going to get a real server rack, make sure your floor is rated for at least 2000lbs. That way, you can load it up to your heart's content.
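
For a sense of scale, here is a minimal sketch that totals up a hypothetical rack's weight against that floor rating; every weight below is an illustrative assumption, not a measurement:

```python
# Hypothetical rack loading estimate: compare total weight to a floor rating.
FLOOR_RATING_LBS = 2000              # the rating suggested above for a real server rack

rack_contents_lbs = {
    "42U rack (empty)": 300,         # assumed enclosure weight
    "2U servers x4": 4 * 60,         # assumed ~60 lbs each
    "disk shelf": 80,
    "UPS": 120,
    "switches and cabling": 50,
}

total_lbs = sum(rack_contents_lbs.values())
print(f"Estimated load: {total_lbs} lbs "
      f"({total_lbs / FLOOR_RATING_LBS:.0%} of a {FLOOR_RATING_LBS} lbs rating)")
```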

As I mentioned before, I previously had equipment on a baker's rack. That rack had some HP DL servers (585's and 380's) and a couple of disk enclosures. I also had a few other PC's at the time and a monitor, plus network switches. The baker's rack could handle about 600lbs a shelf. It was not the most stable thing, especially if your floor has a slant. There were many times I wondered just how safe I was, messing around with equipment from the back or front. A word to the wise: DO NOT mess around with baker's racks or shelving from Ikea. They are just not designed to hold more than one PC or so. I opted for real server racks since I could control the load and manage the cooling a lot better than with any other option.

Tips for managing weight:

  • If you are going to put more than a few servers in a condensed space, move them to the basement or onto a concrete floor.
  • Try to understand your weight requirements.  If you are thinking of storage or other exotic servers at home, they will weigh more, and a wood floor two storeys up is likely not safe.
  • If you use any racking, load the heaviest items at the bottom and the lightest at the top.
  • Know that once you start loading your rack or space up, moving equipment is HARD! Think out your cabling requirements ahead of time, before putting your equipment in its final location.
  • You will need about 25 sq ft of space for a 42U rack (see the sketch after this list).  This includes space in front, to the sides and behind the unit.  Keep in mind that you will need to service equipment at some point, so you will need the room to get in there and make changes.  This is very important, since disassembling everything is not likely to be easy.
  • If you are going to be using “enterprise” servers like HP's or Dell's, a server rack is your best option, since it can safely mount the equipment with minimum fuss or risk.
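
Here is a minimal sketch of where a figure like 25 sq ft comes from; the rack dimensions and clearances are assumptions, so adjust them for your own enclosure:

```python
# Rough footprint estimate for a 42U rack plus service clearance.
# Assumed dimensions: 24" wide x 42" deep enclosure, 3 ft front clearance,
# 2.5 ft rear clearance, and 6" on each side.
RACK_W_FT, RACK_D_FT = 24 / 12, 42 / 12
FRONT_FT, REAR_FT, SIDE_FT = 3.0, 2.5, 0.5

service_w = RACK_W_FT + 2 * SIDE_FT
service_d = RACK_D_FT + FRONT_FT + REAR_FT
print(f"Rack alone: {RACK_W_FT * RACK_D_FT:.1f} sq ft, "
      f"with service clearance: {service_w * service_d:.1f} sq ft")
```

With these assumed numbers the answer lands around 27 sq ft, in the same ballpark as the guideline above.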

Server Consoles

Unless you want a very basic setup, a KVM is a recommended choice. Simple KVM's allow one monitor, keyboard and mouse to be shared across a few PC's. A simple $20 KVM will work for most cases where 1 to 4 PC's need to be managed. I got by with that setup for years. You can also rack up your PC monitor. It will take some room, but if expansion is not that important, a monitor on a shelf in the rack, or on the table where the PC's are, is a good idea.

If you are going with a rack option, you might want to explore a couple of interesting options. First, rackmount monitors and keyboards are easily available on eBay and other locations. They will range from $100 to the sky's the limit. I would recommend an inexpensive HP unit if you are going this route. I decided to go with an LCD rack-mount monitor, a separate keyboard tray and an old HP/Dell 8 port KVM. This made the monitor nice and clean looking, and it also let me do some “cool” looking configurations once it was mounted. You can find them on eBay for as low as $125 on occasion. This will cost about the same as an all-in-one monitor/keyboard configuration. The LCD I have cannot do gaming, but its 1024×768 resolution is good enough for installations and working with the consoles if needed.

Not all KVM's are created equal. If you have a KVM that is made for PS/2 mice and keyboards, note that it may not work at all with a USB to PS/2 adapter for a keyboard or mouse. This may require purchasing a different KVM or expensive converter cables. Old equipment tends to work fine, while newer, non-PS/2 equipment will typically have some kind of quirk with PS/2-only KVM's.

On all of my servers I opted for IPMI. I can manage the power and the console directly from my PC across the network. This is a lifesaver, as there's no need to head to the basement to fiddle with the console. On older PC's, this might not be an option. You can look for “IP KVM PCI” cards on eBay, and there are some older boards that are generally OK. I strongly recommend this option if you have a little extra money to spend. Otherwise, and I did this for years, trips to the basement are a-ok. However, once you go IPMI, you will never want to go back!
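
The post doesn't say which management client is in use; as one illustration, the common open-source ipmitool client can be driven from a small Python wrapper like the sketch below. The BMC address, the credentials and the presence of ipmitool on the PATH are all assumptions:

```python
# Minimal sketch: query and control server power over IPMI using the common
# open-source "ipmitool" client (assumed to be installed and on the PATH).
import subprocess

BMC_HOST = "10.0.0.50"      # hypothetical BMC address
BMC_USER = "admin"          # hypothetical credentials
BMC_PASS = "changeme"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over the LAN interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
    # ipmi("chassis", "power", "cycle")         # uncomment to power-cycle the host
```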

The platform

Since we have now covered the facility portion of building a data center at home, we can switch focus to the platforms we want to run. Generally speaking, many enthusiasts and IT pros will want to run a mixture of Windows, Linux, FreeBSD and possibly Solaris at home. These platforms are good to learn on and are great for skills development.

We generally have a few choices when it comes to the platform if we want to use multiple operating systems. One approach is using actual equipment dedicated to the task. For example, setting up a few physical PC's running Linux is pretty easy to do. Getting an older machine to run Solaris is also easy enough. Depending on the size of your home lab, it may make sense to dedicate a couple of systems to this, especially if you want experience loading hardware from bare metal.

However, this setup comes at a price. Expansion requires more hardware and more power. So over time, the cost of running it will go up, not to mention that prices for different types of equipment can and do vary. A simpler choice that will work on most PC's sold in the last couple of years is virtualization.

There are a tonne of good virtualization platforms for you to choose from. I myself use a combination of VMware ESXi and Microsoft Hyper-V 2012R2. What you want to do with your environment will dictate what platform you choose. I would strongly recommend against using a 5-year-old PC for this. The best bang for your buck will come from using a relatively new PC with a good, fast SATA hard drive and at least 8GB of RAM. 16GB is pretty much the minimum I recommend for anyone, as it's enough to host about 8 or so VM's with OK performance. Keep in mind that if the VM's are lightly used, i.e. doing one thing at a time, a SATA hard drive will be OK. However, it will not be fast. I would recommend using VirtualBox or something similar if you want to occasionally dabble but would like to use that PC for something else. Hyper-V in Windows 8 is pretty good too.
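
As a rough way to sanity-check that “16GB for about 8 VM's” guideline, here is a minimal sketch; the hypervisor overhead and per-VM figures are assumptions, so adjust them for your own workloads:

```python
# Rough host-RAM budgeting for a home lab hypervisor.
# Assumed figures: 2 GB reserved for the hypervisor/host OS, ~1.75 GB per light VM.
HOST_RAM_GB = 16
HYPERVISOR_OVERHEAD_GB = 2
RAM_PER_VM_GB = 1.75          # assumed average for lightly used lab VMs

usable_gb = HOST_RAM_GB - HYPERVISOR_OVERHEAD_GB
max_vms = int(usable_gb // RAM_PER_VM_GB)
print(f"{HOST_RAM_GB} GB host -> roughly {max_vms} light VMs "
      f"({usable_gb} GB usable at ~{RAM_PER_VM_GB} GB each)")
```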

My best advice for choosing a platform:

  • Decide if you are just going to do a few things at a time.  If so, a PC or two dedicated to blowing up and reinstalling might be the cheapest and simplest option.
  • If you want to run multiple operating systems at the same time, look at one of the free hypervisors out there.  If your needs are simple and lightweight, I always recommend going with VirtualBox.  If you need something more robust or complex, the free versions of Hyper-V and VMware ESXi will work just fine.
  • At least 16GB of RAM will make your environment fairly usable if you want to run up to around 8 VM's at a time.
  • Disk performance will never be fast on a PC with a single hard drive.  Memory and storage speed will be the biggest roadblocks to setting up a home lab.  Generally, the more memory and hard drives the better, keeping in mind this is for home.

Some people are fans of the HP Microserver. I definitely recommend this product. It's simple, well supported and gives you good long-term performance for a decent price. There are other products on the market, but the HP Microserver is by far one of the best you can get for a home lab.

Storage and Networking

The heart of your home lab will be your storage and your network. These are areas often overlooked by people building their own home data center. A switch is a switch! That 4-disk NAS works great! Think again.

Not all network switches are created equal. While I would not advocate spending a tonne of money on a large Cisco or HP switch, good networking will improve the performance and reliability of your home setup. For example, that cheap $40 router that does WiFi, USB and, say, cable internet is likely not very fast. What's worse, it's likely not rated for a warm environment, and they have a tendency to die or act flaky under load. I once had a DSL modem that only behaved if it was surrounded by cold peas. Not a good idea!

To combat this, I do recommend spending a little more on network equipment, including routers and switches. Be careful to read the specs. 99% of the switches for the “home” or “pro” market will not deliver 1Gbps per port. I was looking a while back and noticed a 48 port switch for nearly $150. This looks like a great deal until you look at the specs: it was only rated for 16Gbps of switching capacity. Since gigabit Ethernet is full duplex, each port needs 2Gbps of switching capacity to run at full speed in both directions, so only 8 of the 48 ports could run at full speed. In other words, if the switching capacity in Gbps is less than twice the number of gigabit ports, performance under load is not going to be great. An awesome deal at $150, but not for performance. The $300 24 port managed switch I picked up provides 48Gbps, which is 1Gbps full duplex per port, meaning I'll get fairly consistent performance from the switch. If you can, try to avoid “green” consumer switches. They will drop performance all the time, and many hypervisor vendors will tell you the performance will stink; it has to do with how power saving is implemented. Enterprise switches with “green” features, on the other hand, will save you money. My “green” enterprise switch has saved over 138000 watt hours over 9 months. That's about $165 where I live.
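
To make that oversubscription math concrete, here is a minimal sketch; the two switches below mirror the examples above, and the figures are illustrative:

```python
# How many gigabit ports can actually run at full speed, given a switch's
# switching (fabric) capacity?  Full-duplex gigabit needs 2 Gbps per port.
def non_blocking_ports(fabric_gbps: float, port_speed_gbps: float = 1.0) -> int:
    return int(fabric_gbps // (2 * port_speed_gbps))

for name, ports, fabric in [("budget 48-port", 48, 16), ("managed 24-port", 24, 48)]:
    full_speed = min(non_blocking_ports(fabric), ports)
    print(f"{name}: {fabric} Gbps fabric -> {full_speed} of {ports} ports at full duplex line rate")
```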

The same can be said for storage. Aside from memory, storage is the single most important part of a data center. Whether at home or in the office, we all need storage, and the faster and more reliable the storage, the better our lives are. Generally, IMHO, those inexpensive 4-disk NAS units are good enough for running a small number of VM's and hosting files for the home. The processor will limit the performance, and I would typically recommend a unit that can create iSCSI disks; this will give you the best performance. If it only offers SMB or CIFS access, the performance will be OK, but you will need to use VirtualBox or Hyper-V, as VMware ESXi does not support it. The maximum connectivity most of those inexpensive units provide is a 1Gbps link. Since the processor is usually slower, you will not always get that speed, especially if you have various RAID options enabled. Expect to get around 70% of that link's performance, and know that if you have multiple VM's trying to update or run at the same time, the performance is going to make you want to go get a coffee. I always advise purchasing the best storage for your budget and needs. I do recommend the 4-bay NAS units from Seagate, Western Digital, Drobo and Thecus. You will get good performance at an OK price. The type of hard drive to use in these setups is not really an issue, since you will not be able to max out the performance anyway; purchasing 4 slower “green” drives will get you about the same result as 4 faster drives of the same size. Remember, these units have some weight, and the more disks you have, the more they weigh. I have 28 disks, since I need to maximize random performance.
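
As a quick illustration of what that 1Gbps link works out to in practice, here is a minimal sketch; the 70% efficiency figure is the rough estimate from above, not a measurement:

```python
# What does a 1 Gbps NAS link look like in MB/s, shared across several busy VMs?
LINK_GBPS = 1.0
EFFICIENCY = 0.70            # rough real-world estimate for a low-end NAS under load

theoretical_mbs = LINK_GBPS * 1000 / 8          # 125 MB/s
effective_mbs = theoretical_mbs * EFFICIENCY    # ~87 MB/s

for busy_vms in (1, 4, 8):
    print(f"{busy_vms} busy VM(s): ~{effective_mbs / busy_vms:.0f} MB/s each "
          f"(of {effective_mbs:.0f} MB/s effective)")
```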

Finally, a word on RAID

RAID is not a backup. It just means that if a drive dies in your NAS, the unit will be able to keep running without losing data, for the time being. “For the time being” is the important part. Over the years I cannot tell you how many times I lost data that I thought was safe, either on a disk somewhere as a backup or protected by RAID. Make sure you have a plan to keep multiple copies of the data that is important to you, and do not trust it all to one location; something as simple as bad hardware can wipe it out. Take regular backups of your data and keep them in a safe location. I do recommend using a cloud backup for your NAS if possible, although some people have data caps with their ISP's that prevent this. I would still recommend keeping copies of your most needed documents in a minimum of 2 locations, preferably at least 3 to be safe. Test your backups on a regular basis; nothing is worse than going to restore something and finding it does not work.

I use Windows Server 2012R2 Essentials for home backups of my PC's. The data is then copied from the home backup server to another location for safe keeping. This way I have 3 copies: my PC, my home backup server, and my disaster recovery location. This method has saved me numerous times. I can always go back and re-create my PC from any point in the last 6 months, so if I install a bad Windows patch, twenty minutes later I'm back to the previous configuration. No muss, no fuss.
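
Here is a minimal sketch of that “multiple copies, multiple locations” rule as a quick self-check; the inventory below is a made-up example mirroring the setup described above:

```python
# Quick 3-2-1-style self-check: at least 3 copies, in at least 2 distinct locations,
# with at least 1 copy off-site.  The inventory below is hypothetical.
copies = [
    {"name": "My PC",              "location": "home-office", "offsite": False},
    {"name": "Home backup server", "location": "basement",    "offsite": False},
    {"name": "DR copy",            "location": "offsite",     "offsite": True},
]

total = len(copies)
locations = {c["location"] for c in copies}
offsite = sum(c["offsite"] for c in copies)

ok = total >= 3 and len(locations) >= 2 and offsite >= 1
print(f"{total} copies, {len(locations)} locations, {offsite} off-site -> "
      f"{'looks reasonable' if ok else 'needs work'}")
```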

I take data protection very seriously. As I have mentioned before, equipment will fail. Are you ready?

Closing Thoughts on Part 2

This was a pretty high-level look at the factors I used to design the Home Data Center I have. Key elements include:

  • Power & Cooling
  • Racking
  • Server Consoles & Remote Access
  • The Platform: To Virtualize or not?
  • Storage and Networking

There are other choices as well, but these are the major ones to consider.

In Part 3 I will step you through the build and explain how things are configured for me today.

Building your own Data Center … at home! Part 1

For many enterprises, building a data center is a major undertaking. It takes planning, an understanding of IT and facilities, and most importantly, the proper budget to execute. When this shifts away from the corporate or enterprise environment, however, most home-office setups are cobbled together. Not as much thought typically goes into what IT Pros use at home. Maybe it is because, having a large Data Center at work, one feels less need to spend time and effort on servers that are just for “playing around” or “learning” purposes. Well, the time has come to change all of that! Read on for a bit of history on how I came to build my own Tier 1 Data Center at home…

A little bit of history…

Like many IT Pros, over the years I have had many incarnations of the “data center”. My first was nearly 18 years ago: a 486DX33 connected to a 386DX40, both running Windows 95 (and before that, OS/2!). The whole thing worked over a 75ft Ethernet cable and a 10 Mbit/s half-duplex hub, the kind with activity lights on the front. While it was not fast, it did let me share files from one computer to another. At the time, I was working at a small computer shop that was using those old coax ARCNET adapters. It worked OK for them, and I wanted something similar at home.

Since then, I have had a collection of machines: SPARC LX's, SPARCstation 5's, and a collection of Pentium 166/200 machines. A big upgrade was the Abit BP6, which let two (!!) Celeron 400's run in SMP under Linux. Eventually those machines became separate boxes that I used to develop Linux applications on. They all sat on the floor of one bedroom of the place where I lived in Ottawa. It was cute, but I learned the hard way about power and cooling. While the power bill for 4 computers running was not bad, the HEAT put out by the machines was terrible. The place in Ottawa did not have A/C, and, well, it made for warm summers. If there is one lesson to take from this blog post, remember… “Fans do not cool anything, they just move air.” The movement of air takes the energy and disperses it in the space. So, in the summers in Ottawa I would have a fan running full blast with a window open for venting. Not great for keeping humidity out and me cool! I lost more than one power supply over the years at that location.

The servers eventually moved down to the basement onto IKEA wooden racks, which was a large improvement over being in the other bedroom. The large open space let the heat dissipate, and the house as a result was much cooler.

I kept this general setup when I moved to Niagara for a couple of years. Then, in late 2007 and early 2008, I started really planning out my server collection. I started to apply IT principles, and at this point things were much better. I picked up a large baker's rack from Costco, and the servers were upgraded to Q6600's, since the motherboards and the CPU's themselves were dirt cheap ($187 CDN per CPU in October 2008!). I still have those motherboards and CPU's to this day… and they still work. Each of the servers had a maximum of 8GB of RAM; I was going to do a fan-out model if I needed more capacity. I intentionally chose CPU's, memory and motherboards that were “long life” components. In 2009, nearly 12 months after putting things together, I cobbled together a 3rd VM server running VMware.

It was during this time in Niagara that I learned a few things: heatsinks can and do come off running CPU's, VMware 3.5 and VMFS3 liked to lose your data, and never trust 10-year-old CD-ROMs (yes, CD-ROMs, not DVD's) for backup. When developing the new setup I wanted some redundancy and better networking. That design spawned the three-level network I use today: a production network, a DMZ network, and a storage or backup (“black”) network. There was no lights-out management, but the setup ran fairly happily.

The storage solution used Gigabit Ethernet and NFS. Good old Linux. One of the big challenges was the performance. Even with separate networks, the best the NFS server could give was anywhere from 20 to 30MB/s for reads and writes. Not stellar, but enough to allow me to run 8 to 12 VM's. All the servers were on the baker's rack, and for the time it seemed OK. However, the I/O performance was always a problem.

And so began my quest… Performance, Performance, Performance

Working for HP does have some advantages. You get internal access to some of the smartest people anywhere, and there are thousands of articles on performance tuning for various technologies. I began to put my HP knowledge to use, and started slowly doing upgrades to improve the storage performance.

In addition, when I was a full-time consultant, I either worked at home or at a customer location. As a result, my home internet and network needed to be running 24/7. If not, it meant lost work time and potential disruptions to clients. Over the years I had been using a number of inexpensive D-Link routers for my internet connection. In the summer, the heat would rise just enough to randomly lock up the router. In 2010, I decided it was finally time to start using eBay, and little did I know eBay would become a source of great inspiration, learning, and, well, a strain on the wallet too! After a lot of investigation, I purchased a Cisco 851. This little router has just enough IOS to let you learn it, and runs well enough that you can pretty much forget about your Internet connection. That was probably one of my best tech purchases. Even after 3 years, that router is still running just fine. It has had uptimes greater than one year. It truly is set and forget. Since then, I have never had an issue with my Internet connection.

On the heels of my new router's success, I decided that it was time to upgrade the network. Being a good HP'er, I wanted something “Enterprise ready”, but not something that would break the bank. Since HP employees in Canada get no discounts on server, storage or network gear, it would be a purchase of faith in my employer. I ordered the HP 1810G-24. About 5 days later, the switch was in my hands, and after all of 15 minutes of configuration I had my 3 VLAN's up and running. I then quickly swapped out my old D-Link “green” Gigabit switches for the HP. After a total of about 20 minutes I had gone from an unmanaged network to a mid-market branch-office network. There was a performance improvement: my NFS performance increased to around 45MB/s, which was, in some cases, nearly double what it was on the D-Links. It just goes to show that the networking you install does make a difference.

A few months passed, and I was still not happy with the storage I/O. While better, boot storms would easily overwhelm the connections, and trying to boot more than one server at a time was painful. I had purchased an eSATA enclosure and was running 4 relatively fast SATA drives, but performance was well below Gigabit speeds. The file server itself was an older AMD 3500, a single-core machine with 2GB of RAM. Not fast, but I would have thought fast enough for better than 45MB/s network performance in VMware.

So, in my hunt, I decided it was time to take the Fibre Channel plunge. I read about several Fibre Channel target packages for Linux, and SCST turned out to be the best one for my needs. Fast, scalable, easy to install. Perfect! My eBay habit kicked into high gear and I picked up an old Brocade Silkworm 3852: 16 ports of 2Gbit/s Fibre Channel goodness. I also picked up a lot of 4 Brocade/QLogic Fibre Channel HBA's. On the spur of the moment, I also picked up an old NetApp 2Gb/s Fibre Channel disk shelf with 2TB of capacity split over 14 disks. I created a software RAID 6, installed SCST, configured the zones on the switch… and voilà! The performance in a virtual machine went from 45MB/s to 180MB/s. I could now actually use the servers at home. This was a new lease on life for the equipment, and during that time I was able to upgrade to Exchange 2010 and Windows Server 2008R2; I even had a working implementation of OCS R2 as well. What a difference storage makes.

In Part 2, we will talk about the planning, and design for the new and improved Home Data Center.  In Part 3 we will discuss the actual equipment, choices made, and the performance today.

Windows 2008R2 SP1 Upgrades

Over the last week I spent the time to upgrade the existing Windows 2008 infrastructure to Windows 2008R2 SP1. I had been running 2008R2 on a couple of systems, but decided it was time to refresh the environment. The upgrade process worked like a charm, and all of the systems updated correctly. The database server was in sad shape (it was a VM that was over 3 years old), so it was time to reinstall the OS. Fortunately, MySQL 5.5 works just fine on 2008R2.

As for patching, I managed to also patch the systems to 2008R2 SP1 without any issues.  Exchange 2010 and the AD systems took the updates just fine.  2008R2 has proven to run a little better and faster under vSphere 4.1U1.  Even the WDDM video driver works perfectly fine!

In combination with Windows 7, the file transfers are very fast.  All in all worth the time to upgrade.

Virtual PC 2007, Virtual Windows XP & Vista

Lately in the tech press there's been a lot of interest in Microsoft's new “Virtual XP Mode” for Windows 7. This addition has the possibility (and some headaches) of allowing older, XP-only software to work just fine in the Windows 7 environment. Not running the Windows 7 RC here, I decided to see if it could be made to work on Vista.

The answer is Yes.

You can download Virtual PC 2007 for Vista here.

You can download the “Virtual XP Image” from Microsoft here.

You’ll need a little utility to extract the VHD from the Virtual XP Image you downloaded from Microsoft.  You can find that here.

Simply install Virtual PC 2007 for Vista, extract the .VHD file from the MS download, and then create a new virtual machine in VPC 2007 with the XP image. And it works! Install the Virtual Machine Additions and the performance will be just fine.

My only complaint with VPC 2007 is that it does NOT have USB support. The whole reason for me to use it is to get my old NEC SuperScript 1400 laser printer working in Vista x64. I have a hack for Vista x86 (32-bit), but not for x64, and not having printing is a pain. Yes, I could buy a printer, but this one works just fine.

To get the printer working, I did a little virtualization magic. I converted the .VHD image from MS into a .VMDK file and ran it with VMware Player 2.5. I installed the Windows VMware Tools, and everything works just fine. I now have my old USB printer working on Vista x64, and I expect to use the same trick on Windows 7 when I upgrade. Incidentally, the USB support is pretty slick in VMware Player 2.5. Unity mode rocks too.
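
The post doesn't name the conversion tool used. As one hedged illustration, the open-source qemu-img utility can convert between the VHD format (which it calls “vpc”) and VMDK; a small Python wrapper might look like the sketch below. The presence of qemu-img and the file names are assumptions:

```python
# Minimal sketch: convert a VHD (qemu-img format name "vpc") to VMDK using the
# open-source qemu-img tool.  Assumes qemu-img is installed and on the PATH.
import subprocess

SRC_VHD = "Virtual Windows XP.vhd"     # hypothetical extracted image name
DST_VMDK = "Virtual Windows XP.vmdk"

subprocess.run(
    ["qemu-img", "convert", "-f", "vpc", "-O", "vmdk", SRC_VHD, DST_VMDK],
    check=True,  # raise if the conversion fails
)
print(f"Wrote {DST_VMDK}")
```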

MS Certification & Hyper-V

This week I achieved Microsoft Certified Professional status when I passed my MCTS exam for Hyper-V. I find certifications valuable for showing credibility to customers. They demonstrate that someone has the minimum level of knowledge needed to run a product, which is a good thing, especially in the consulting world.

Having spent time learning and working with Hyper-V, it's clear it is a different type of product than VMware VI3 or vSphere 4. Hyper-V delivers what I would call good OS-level virtualization. While VMware is still the market leader, I think Hyper-V could challenge it, mostly in the smaller enterprise and SMB space. The addition of MS Hyper-V R2 for free (which includes High Availability features and Live Migration similar to VMotion) is going to have customers kicking the tires on the product.

I myself still prefer VMware. From a fit-and-polish perspective, I find it offers more features, and, IMHO, VMware is still more mature and proven. I certainly won't be able to switch to Hyper-V for ITInTheDataCenter.com, simply because I would require a lot more hardware. I think the lack of memory oversubscription and ballooning support will hold some customers (like me) back from the product.

Comments?

Active Directory and BSOD

Ever had one of those days? During routine maintenance to move some VM's around to different disks (in an effort to get ready for some new storage), my Active Directory system went down, hard.

The volume was migrated correctly using Storage vMotion, or so I thought. I went to reboot the server after the move to test it, and about 10 seconds after getting to the desktop, Windows 2008 BSOD'd with the error message “A device attached to the system is not functioning properly”. So, I booted off the second plex of the mirror. The same thing happened. Now this had me concerned; typically booting the second plex gets things going. This was more fundamental.

I booted into Directory Services Restore Mode and hunted through log files and event logs. I carefully worked through each error as it came up, then decided to sleep on it and just rebuild the disk plex, to be safe. What concerned me was that it would boot into DSRM, but not into Safe Mode. Definitely something was up!

Rebooting in the morning with a new plex did not fix it. At this point, I started going through some MS material and noticed that even in DSRM, Active Directory should start. Looking through AD's event log, I found the error “The log file is corrupt”, and AD would not start. I've seen this before with Exchange, so I tried to repair the AD logs.

Once I removed the corrupted log files and rebooted, the system came up and is working properly. How a volmgr error and AD are related, I'm not sure. Sometimes it helps to sleep on it.