Proper Planning of Virtual Servers (Part 2)

Consolidating server hardware is usually an attempt to reduce the investment in hardware that would otherwise be required in a data centre. Where you have several processes that don’t consume many resources in themselves, it is always tempting to combine them to reduce your hardware outlay. But this presents a number of problems:

A planned interruption to apply a patch to one service may require an interruption to all services if a reboot is needed. Not only are unneeded service interruptions obviously undesirable, but schedules for planned work instantly become much more complex.

Change and Release management become much more complex when several services run on the same system and possibly share the same system components (for example, the Java Runtime Environment or the .NET Framework). What do you do when two components on the same system use the same framework, with one validated for, and needing, an upgrade while the other lags behind on that approval?

It should be clear that all of these scenarios require at least some planning, with the first scenario I presented needing the least and the final one needing the most; exactly how much depends on what kind of deployment you ultimately sketch out.

Some parts of the planning are pretty much settled for you from the start. For RAM, the more the better, within what the host platform can physically support and what the guests sharing it require. With the obvious caveat that you probably don’t need to assign 4GB to a guest that only runs a dedicated DNS server, RAM is always a no-brainer.
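To make that concrete, here is a minimal back-of-the-envelope sketch of the kind of RAM sum worth doing before committing to a host. The host size, reserve and per-guest allocations are invented figures for illustration, not recommendations:

```python
# Illustrative only: the figures are made up, substitute your own measurements.
host_ram_gb = 16          # physical RAM in the host
host_reserve_gb = 2       # keep some back for the host OS and hypervisor overhead

# Hypothetical guests, with the RAM you intend to assign to each.
guests = {
    "dns01":      0.5,    # a dedicated DNS server does not need 4GB
    "intranet01": 2.0,
    "sql01":      4.0,
}

assigned = sum(guests.values())
available = host_ram_gb - host_reserve_gb

print(f"Assigned to guests: {assigned} GB of {available} GB available")
if assigned > available:
    print("Over-committed: reduce allocations or add RAM to the host")
```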

CPUs, again, are pretty much settled for you based on what the host will physically support and how your virtualisation software allows you to allocate physical CPUs to guest machines. This is another no-brainer, if only because your options are limited to a handful anyway.

I personally suggest using at least a dual-processor machine for your Virtual Server host, and working from utilisation figures somewhere between “average” and “worst case” when calculating the CPU loading for each guest, to minimise contention between virtual machines sharing the same CPU. For processor-intensive workloads, I suggest planning on one physical CPU per virtual machine.
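As a rough illustration of what “somewhere between average and worst case” means in practice, here is a small sketch. The guest names, utilisation figures, planning factor and safety margin are all assumptions made up for the example:

```python
# Illustrative only: utilisation figures are invented for the example.
physical_cpus = 2

# Per-guest CPU utilisation, expressed as a fraction of one physical CPU,
# measured (or estimated) as "average" and "worst case".
guests = {
    "dns01":      {"avg": 0.05, "worst": 0.15},
    "intranet01": {"avg": 0.20, "worst": 0.60},
    "sql01":      {"avg": 0.40, "worst": 0.90},
}

# Plan on a figure somewhere between average and worst case for each guest.
planning_factor = 0.5   # 0 = pure average, 1 = pure worst case
planned_load = sum(
    g["avg"] + planning_factor * (g["worst"] - g["avg"]) for g in guests.values()
)

print(f"Planned CPU load: {planned_load:.2f} of {physical_cpus} physical CPUs")
if planned_load > physical_cpus * 0.75:   # arbitrary safety margin for this sketch
    print("High contention likely: give CPU-heavy guests a dedicated physical CPU")
```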

An area where planning can really make a difference is the virtual disks. On a poorly put together system it isn’t uncommon to see virtual disks from several guest systems thrown onto a single hard disk with little or no thought. This causes immense performance problems as the systems grow in size and complexity, because of the large number of reads and writes being made to the same hard disk by different processes.
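A quick capacity sum makes the point. The IOPS figures below are placeholders rather than measurements; a real exercise would use numbers gathered from your own monitoring:

```python
# Illustrative only: the IOPS figures are placeholders, not measurements.
single_spindle_iops = 150          # rough ballpark for one fast SAS spindle

# Hypothetical per-guest I/O load if every virtual disk lands on one spindle.
guest_iops = {
    "dns01":      10,
    "intranet01": 60,
    "sql01":      180,
}

total = sum(guest_iops.values())
print(f"Combined load: {total} IOPS against roughly {single_spindle_iops} IOPS per spindle")
if total > single_spindle_iops:
    print("One spindle cannot keep up: spread the virtual disks across more spindles")
```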

There is a perception, perpetuated by some SAN sales teams, that SANs are magic. Just buy a few disks and throw your storage and backup needs onto them as you will, goes the mantra, and all will be fine because of our magic caching system. The same thinking pervades a lot of virtual machine disk planning, and it is equally false wherever you hear it.

Actually, I think the same sorts of tools and thinking that make for a good SAN deployment also make for a good virtual server storage deployment. In both cases, the physical location of the stored data is “abstracted” from the server using it in some way, and in both cases we will tend to see several different servers using the same storage hardware.

By the way, before doing anything different to your server configuration, make sure that backups (Carbonite server backup, for example) are part of the overall system management plan. And, if I may digress, Carbonite also has a solution for backing up home computers; I have been using them for some time and suggest you do the same, and a Carbonite offer code for home makes it considerably more affordable.

So, take the “virtual” (or “SAN”) out of the equation for the moment. What matters is the kind of server application you’re trying to deploy and the use it is going to get (testing, production, disaster recovery simulation, and so on).

You’ve got some data and you’ve got some disks. Now we’re just doing standard server deployment planning, and there are performance and reliability advantages in placing different parts of a server application on different disks. If it makes sense to put the OS, transaction logs and main database files onto different physical disk spindles on a physical SQL or Exchange server, it makes just as much sense on a virtual server that is being deployed to replace that physical server.
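One way to keep yourself honest is to record the intended layout and check it against the separation rules. The file and spindle names below are hypothetical, purely to show the shape of the check:

```python
# Illustrative only: paths and spindle names are hypothetical.
# Map each virtual disk of a guest to the physical spindle (or RAID set) holding it.
layout = {
    "sql01_os.vhd":   "spindle_A",
    "sql01_logs.vhd": "spindle_B",
    "sql01_data.vhd": "spindle_C",
}

# The same rule as on a physical server: the transaction logs, the database
# store and the OS disk should not share spindles with one another.
must_be_separate = [
    ("sql01_logs.vhd", "sql01_data.vhd"),
    ("sql01_os.vhd",   "sql01_data.vhd"),
    ("sql01_os.vhd",   "sql01_logs.vhd"),
]

for a, b in must_be_separate:
    if layout[a] == layout[b]:
        print(f"WARNING: {a} and {b} share {layout[a]}")
    else:
        print(f"OK: {a} and {b} are on separate spindles")
```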

So the basic rules might be as follows (note that most of them carry a silent suffix of “unless you’re using a test system where performance is not an issue”):

  • NEVER store virtual disks on the same spindles as the host operating system.
  • NEVER use software RAID on a host server!
  • NEVER use software RAID on a ‘production’ guest server.
  • Run – don’t walk – away from any vendor who suggests throwing it all onto the largest hardware / SAN RAID 5 that you can afford and letting God sort it out. They’re either incompetent or actively out to screw you.
  • Think very carefully before you place virtual disks from different guests on the same spindle. I don’t want to say “never”, because the actual impact depends on your pattern of use for these virtual disks. Again: test, don’t assume (see the timing sketch after this list)!
  • When deploying virtual disks, the same rules apply as they would for the same process on physical disks. For example, do not allow a database store to use the same disk spindles as its transaction logs.
  • Consider how you will back up each server. Be especially wary of “magic” backup processes that work at the host server level, and test to ensure that you can restore your guest servers to working condition.
  • If two or more servers are designed to provide redundancy for each other (e.g. domain controllers, primary and secondary DNS, etc) then NEVER place them on the same host machine!
  • If copying virtual machine images to “rapid deploy” servers, then be very careful about things like duplicating SIDs. I know it’s boring and adds to the deployment time, but I still suggest using sysprep for these kinds of images.
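For the “test, don’t assume” rule above, here is a rough-and-ready timing sketch. The paths are hypothetical; point them at storage locations you suspect of sharing spindles and compare a lone writer against two concurrent writers:

```python
# Rough-and-ready I/O contention test: compare one writer against two
# concurrent writers on storage you suspect of sharing spindles.
import os
import threading
import time

def write_test(path, size_mb=256):
    """Write size_mb of data to path and flush it to disk."""
    block = os.urandom(1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())

def timed(paths):
    """Run write_test against each path concurrently and return elapsed seconds."""
    threads = [threading.Thread(target=write_test, args=(p,)) for p in paths]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

# Hypothetical paths: substitute locations on the disks you actually plan to use.
solo = timed(["/mnt/guest1_disks/testfile"])
shared = timed(["/mnt/guest1_disks/testfile", "/mnt/guest2_disks/testfile"])
print(f"Single writer: {solo:.1f}s, two concurrent writers: {shared:.1f}s")
```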
