Proper Planning of Virtual Servers (Part 2)

Consolidating server hardware is usually an attempt to reduce the investment in hardware that would otherwise be required in a data centre. Where you have several processes that don’t consume many resources in themselves, it is always tempting to combine them to reduce your hardware outlay. But this presents a number of problems:

A planned interruption to apply a patch to one service may require an interruption to all services if a reboot is needed. Not only are unneeded interruptions to services obviously undesirable, but schedules for planned work instantly become much more complex.

Change and Release management become much more complex when several services run on the same system and possibly share the same system components (for example, the Java Runtime Environment or the .NET Framework). What do you do when two components on the same system use the same framework, with one validated for, and needing, an upgrade and the other lagging behind on that approval?

It should be clear that all of these scenarios require at least some planning, with the first one I present needing the least and the final scenario needing the most; exactly how much depends on what kind of deployment you ultimately sketch out.

Some parts of the planning are pretty much settled for you from the start. For RAM, the more the better, limited by what the host platform can physically support and what the guests sharing it require. With the obvious caveat that you probably don’t need to assign 4GB to a guest that only runs a dedicated DNS server, RAM is always a no-brainer.

CPUs are again pretty much settled for you, based on what the host will physically support and how your virtualisation software allows you to allocate physical CPUs to guest machines. This is also a no-brainer, if only because your options here are limited to a few to begin with.

I personally suggest using at least a dual-processor machine for your Virtual Server host, and working from utilisation figures somewhere between “average” and “worst case” when calculating CPU loading for each guest, in order to minimise contention between virtual machines sharing the same CPU. For processor-intensive tasks, I suggest planning on one physical CPU per virtual machine.
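To make the RAM and CPU budgeting concrete, here is a minimal sketch of that kind of capacity arithmetic. All the host figures, guest names, and load estimates are hypothetical examples of mine, not figures from any vendor tool:

```python
# Rough capacity-planning sketch. All guest names and figures below are
# hypothetical; plug in your own "average to worst case" estimates.

HOST_RAM_GB = 16          # what the host platform physically supports
HOST_PHYSICAL_CPUS = 4    # e.g. a dual-socket, dual-core box

# For each guest: RAM to assign, and estimated CPU load expressed as a
# fraction of one physical CPU.
guests = {
    "dns":      {"ram_gb": 0.5, "cpu_load": 0.05},
    "intranet": {"ram_gb": 2.0, "cpu_load": 0.30},
    "sql":      {"ram_gb": 4.0, "cpu_load": 1.00},  # processor-intensive:
}                                                   # plan one physical CPU for it

total_ram = sum(g["ram_gb"] for g in guests.values())
total_cpu = sum(g["cpu_load"] for g in guests.values())

# Leave headroom for the host OS itself (2GB RAM and one CPU here,
# again just an illustrative allowance).
assert total_ram <= HOST_RAM_GB - 2, "not enough RAM for these guests"
assert total_cpu <= HOST_PHYSICAL_CPUS - 1, "too much CPU contention likely"

print(f"RAM committed: {total_ram}GB of {HOST_RAM_GB}GB")
print(f"CPU committed: {total_cpu:.2f} of {HOST_PHYSICAL_CPUS} physical CPUs")
```

The point is not the arithmetic itself but the discipline: write the estimates down per guest, sum them, and check them against the host’s physical limits before you deploy anything.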

An area where planning can really make a difference is the virtual disks. On a poorly put-together system it isn’t uncommon to see virtual disks from several guest systems thrown onto one hard disk with little or no thought. This causes immense performance problems as the systems grow in size and complexity, due to the large number of reads and writes being made to the same hard disk by different processes.

There is a perception, perpetuated by some SAN sales teams, that SANs are magic. Just buy a few disks and throw your storage and backup needs onto them as you will, goes the mantra, and all will be fine because of our magic caching system. The same thinking pervades virtual machine disk planning, and it is equally false wherever you hear it.

Actually, I think it helps to apply the same sorts of tools and thinking to a good virtual server deployment as to a good SAN deployment. In both cases, the physical location of the stored data is “abstracted” in some way from the server using it, and in both cases we tend to see several different servers using the same storage hardware.


So, take the “virtual” (or “SAN”) out of the equation for the moment. What matters is the kind of server application you’re trying to deploy and the use it is going to get (testing, production, DR simulation, etc.).

You’ve got some data and you’ve got some disks. Now we’re just doing standard server deployment planning; there are advantages in performance and reliability in placing different parts of a server application on different disks. If it makes sense to put the OS, transaction logs and main database files onto different physical disk spindles in a physical SQL or Exchange server, it makes just as much sense on a virtual server that is being deployed to replace that physical server.
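As a small illustration of that principle, here is how the classic physical SQL Server layout might translate into virtual disk files. Every file name and drive letter below is hypothetical; the only rule being demonstrated is that no two of these virtual disks should share a physical spindle:

```python
# Hypothetical layout for a virtualised SQL Server guest: each virtual
# disk file is placed on a different physical spindle, mirroring the
# OS / transaction-log / database split you would use on physical disks.
layout = {
    "guest-os.vhd": "D:",  # guest operating system
    "logs.vhd":     "E:",  # transaction logs
    "data.vhd":     "F:",  # main database files
}

# The rule: no two of these virtual disks may share a spindle.
spindles = list(layout.values())
assert len(spindles) == len(set(spindles)), "two virtual disks share a spindle"

for vhd, drive in layout.items():
    print(f"{vhd} -> {drive}")
```

A trivial check, but writing the intended layout down in this form makes it obvious when someone later drops a fourth virtual disk onto a spindle that is already spoken for.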

So the basic rules might be: (note that most of these answers are suffixed with a silent “unless you’re using a test system where performance is not an issue”).

  • NEVER store virtual disks on the same spindles as the host operating system.
  • NEVER use software RAID on a host server!
  • NEVER use software RAID on a ‘production’ guest server.
  • Run – don’t walk – away from any vendor who suggests throwing it all onto the largest hardware / SAN RAID 5 that you can afford and letting God sort it out. They’re either incompetent or actively out to screw you.
  • Think very carefully before you place virtual disks from different guests on the same spindle. I don’t want to say “Never” because the actual impact depends on your pattern of use for these virtual disks. Again: Test, don’t assume!
  • When deploying virtual disks, the same rules apply as they would for the same process on physical disks. For example, do not allow a database store to use the same disk spindles as its transaction logs.
  • Consider how you will back up each server. Be especially wary of “magic” backup processes that work at the host server level, and test to ensure that you can restore your guest servers to working condition.
  • If two or more servers are designed to provide redundancy for each other (e.g. domain controllers, primary and secondary DNS, etc) then NEVER place them on the same host machine!
  • If copying virtual machine images to “rapid deploy” servers, then be very careful about things like duplicating SIDs. I know it’s boring and adds to the deployment time, but I still suggest using sysprep for these kinds of images.
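The redundancy rule in the list above lends itself to an automated sanity check. This is my own sketch, with hypothetical server and host names, of how you might verify that no redundant pair has been placed on the same host:

```python
# Placement sanity check (hypothetical names throughout): servers that
# provide redundancy for each other must never share a host machine.
placement = {
    "dc1":  "host-a",   # domain controllers
    "dc2":  "host-b",
    "dns1": "host-a",   # primary and secondary DNS
    "dns2": "host-b",
}

redundant_pairs = [("dc1", "dc2"), ("dns1", "dns2")]

for a, b in redundant_pairs:
    assert placement[a] != placement[b], f"{a} and {b} share a host"

print("placement OK")
```

Run against your planned placement before deployment, a check like this catches the classic mistake of losing both domain controllers to a single host failure.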

Proper Planning of Virtual Servers (Part 1)

One of the more common areas of confusion with Virtual Servers is how to deploy them properly, to ensure good performance and reliability. Some people are scared of Virtual Server technology and refuse to believe it can ever perform well enough to justify the investment. Others see Virtualisation as the magic bullet and end up throwing lots of money at technology they don’t really understand, with disappointing results.

While the Microsoft Virtual Server product is quite new, virtualisation in general is a mature technology, with Virtual PC being a long-established product and VMware having a large range of virtualisation products to fit most needs and budgets.

When deciding whether or not to use virtualisation in a data centre, you should first formulate a list of aims you expect your virtualisation project to achieve. This should be painfully obvious, but I’ve seen plenty of IT projects implemented because “it looked cool”, and those are usually the ones that end badly and run way over their original budget.

One common aim is the migration of old legacy servers onto new hardware. This frequently means old “line of business” apps running on NT4 or old versions of Linux that cannot easily be upgraded to more modern OSes, and where maintenance on the current hardware platform has become an issue.

These deployments are usually pretty painless because the Virtualisation process is simply providing a compatibility layer to allow an old OS to run on new hardware for which it would not normally have drivers.

These old server applications typically will not stretch the abilities of modern hardware, so you can probably put two or three applications of this kind together on one system without too much thought and get away with it. The word “probably” is important here. Test, don’t assume!

Where users want to upgrade a server in place, but feel a clean install of components is either “required” or at least preferred, then virtualisation can make sense to reduce the hardware requirements for the temporary server used to hold the data while the “proper” hardware is being upgraded.

This hasn’t really come into play a lot so far, but right now we’re seeing people buy “64 bit ready” hardware and running 32 bit versions of operating systems and applications on it while they wait for 64 bit versions of those apps to appear and grow in maturity. This is a common scenario for people who are using SQL Server or Exchange Server on Windows at the moment… 64 bit versions of Exchange are still in beta at the time of writing, and 64 bit versions of SQL Server 2005 are available but very new, and all good DB admins are cautious about doing too many new things at once to data they actually care about.


Once the tipping point arrives for these systems, it will not be possible to upgrade in place, and setting up a temporary system on a virtual server to move the “live” install onto while the proper host system is being rebuilt makes a lot of sense.

In a way, I see the performance issues in this situation as quite similar to those you would encounter when moving a legacy system. You need to do some planning, of course, but you can probably get away with “throwing” the system onto your virtual server host any way you can, because with luck it won’t be there for very long. You can also limit the number of systems being virtualised at any one time to whatever capacity your virtual host can cope with.

Online Backup Before You Defrag Your Computer (Part 2)

I can’t understand the fascination of most otherwise perfectly normal Windows users with the Defrag option in Windows. It’s reviled as not being powerful enough, credited with all kinds of performance improvements, and even suspected of concealing superpowers to fix all kinds of problems.

I don’t understand why. I can’t think of any other community of users so obsessed with how ‘fragmented’ their computer hard disks are. And the issue only seems to be coming to a boil once more with Windows Vista, where Microsoft has taken some hard decisions in redesigning and updating this operating system background task.

Before you do anything to your computer, I highly suggest that you back everything up online first. You never know what can happen when you defrag; this is something I can attest to, having had issues occur before. I recommend online backup because it protects everything on your computer, including your operating system, all the installed software, and all the files you have created and stored on it.

External hard drives are not enough: if some sort of disaster strikes your home or business, all your files are still on the premises, which does not spare them at all. It is best to have everything stored and backed up in a remote location as well.

Now back to the topic of defragging. How do you tell whether your disk is fragmented? Obviously, by asking your defragger of choice whether it considers your hard disk fragmented. But what if you don’t trust your defragger for some reason? Most people would go and get a life, but for those of you in the grip of Windows disk-fragmentation paranoia, this isn’t enough. What you do instead is purchase several commercial defraggers and use them to test each other.

Then you get very unhappy because you can’t seem to get your disk defragged properly. You run one product and let it do what it wants, and when it’s finished you test it with another product, which finds some fragments; so you run the second program, test with the first, and go around in circles until you give up and ask for help. Why? The more cynical people in the audience (or those who use other operating systems and haven’t contracted this paranoia) may be asking themselves just how much of a problem a fragmented hard disk really is, and whether these products are a prime example of snake oil.

Frankly, I’m not sure the products are snake oil, but some of the hysteria surrounding the way these utilities are sold to ‘home users’ does make me wonder. In some special cases, workstation drive fragmentation can be a real issue that needs to be carefully addressed, and servers should have all aspects of drive health checked on a regular basis, but for most home users the built-in tools provided by Windows will be more than enough to keep things running well.