
Proper Planning of Virtual Servers (Part 2)

Consolidating server hardware is usually an attempt to reduce the investment in hardware that would otherwise be required in a data centre. Where you have several processes that don’t consume many resources in themselves, it is always tempting to combine them to reduce your hardware outlay. But this presents a number of problems:

A planned interruption to apply a patch to one service may require an interruption to all services if a reboot is required. Not only are unneeded interruptions to services obviously undesirable, but schedules for planned work instantly become much more complex.

Change and Release management become much more complex when several services run on the same system and possibly share the same system components (for example, the Java Runtime Environment or .Net framework). What do you do when you have two components on the same system using the same framework, with one validated for, and needing, an upgrade and the other lagging behind on this approval?

It should be clear that all of these scenarios require at least some planning, with the first one I present needing the least and the final scenario needing the most – exactly how much depends on what kind of deployment you ultimately sketch out.

Some parts of the planning are pretty much settled for you from the start. With RAM, the more the better, within what the host platform can physically support and what the guests sharing it require. With the obvious caveat that you probably don’t need to assign 4GB to a guest that only runs a dedicated DNS server, RAM is always a no-brainer.
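The RAM sizing logic above amounts to simple arithmetic: sum what your guests need, leave headroom for the host, and check it fits. A minimal sketch – all guest names and figures here are hypothetical examples, not recommendations:

```python
# Sketch: check that guest RAM assignments fit the host's physical RAM.
HOST_RAM_GB = 16          # what the host platform physically supports
HOST_OS_RESERVE_GB = 2    # headroom for the host OS itself

guests = {
    "dns1": 1,            # a dedicated DNS server needs very little
    "intranet-web": 2,
    "sql-reporting": 6,
}

assigned = sum(guests.values())
available = HOST_RAM_GB - HOST_OS_RESERVE_GB
print(f"Assigned {assigned} GB of {available} GB available")
assert assigned <= available, "Over-committed: reduce guest RAM or add physical RAM"
```

Trivial as it looks, writing the allocation down like this forces you to account for the host's own reserve, which is the part people forget.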

CPUs, again, are pretty much settled for you based on what the host will physically support and how your virtualisation software allows you to allocate physical CPUs to guest machines. This again is a no-brainer, if only because your options are limited to a few.

I personally suggest using at least a dual-processor machine for your Virtual Server host, and making sure you work on utilisation figures somewhere between “average” and “worst case” when calculating CPU loading for each guest, in order to minimise contention between virtual machines sharing the same CPU. For processor-intensive tasks I suggest planning on one physical CPU per virtual machine.
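The “somewhere between average and worst case” calculation can be sketched as a weighted figure per guest. The guest names, utilisation numbers and the 0.5 weight below are hypothetical; substitute your own measurements:

```python
# Sketch: estimate CPU demand per guest using a figure between "average"
# and "worst case" utilisation, then compare against physical CPUs.
guests = {
    # name: (average_util, worst_case_util) as fractions of one CPU
    "file-server": (0.10, 0.40),
    "build-agent": (0.50, 1.00),   # processor-intensive: plan one physical CPU
    "dns1": (0.02, 0.10),
}
PHYSICAL_CPUS = 2
WEIGHT = 0.5  # 0 = plan on average, 1 = plan on worst case

demand = sum(avg + WEIGHT * (worst - avg) for avg, worst in guests.values())
print(f"Planned CPU demand: {demand:.2f} of {PHYSICAL_CPUS} CPUs")
assert demand <= PHYSICAL_CPUS, "Expect contention: add CPUs or move guests"
```

Sliding `WEIGHT` towards 1 buys safety margin at the cost of hardware; where you set it depends on how badly a contended guest would hurt you.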

An area where planning can really make a difference is the virtual disks. On a poorly put-together system it isn’t uncommon to see virtual disks from several guest systems thrown onto a hard disk with little or no thought. This causes immense performance problems as the systems grow in size and complexity, due to the large number of reads and writes being made to the same hard disk by different processes.

There appears to be a perception, perpetuated by some SAN sales teams, that SANs are magic. Just buy a few disks and throw your storage and backup needs onto them as you will, goes the mantra, and all will be fine because of our magic caching system. The same thoughts seem to pervade virtual machine disk planning, and are equally false wherever you hear them.

Actually, I think the same sorts of tools and thinking apply to a good virtual server deployment as to a good SAN deployment. In both cases, the physical location of the stored data is “abstracted” in some way from the server using it, and in both cases we will tend to see several different servers using the same storage hardware.


So, take the “virtual” (or “SAN”) out of the equation for the moment. What matters is the kind of server app you’re trying to deploy and the use it is going to get (testing, production use, DR simulation, etc).

You’ve got some data and you’ve got some disks. Now we’re just doing standard server deployment planning; there are advantages in performance and reliability in placing different parts of a server application on different disks. If it makes sense to put the OS, transaction logs and main database files onto different physical disk spindles in a physical SQL or Exchange server, it makes just as much sense on a virtual server that is being deployed to replace that physical server.

So the basic rules might be as follows (note that most of them carry a silent suffix: “unless you’re using a test system where performance is not an issue”):

  • NEVER store virtual disks on the same spindles as the host operating system.
  • NEVER use software RAID on a host server!
  • NEVER use software RAID on a ‘production’ guest server.
  • Run – don’t walk – away from any vendor who suggests throwing it all onto the largest hardware / SAN RAID 5 that you can afford and letting God sort it out. They’re either incompetent or actively out to screw you.
  • Think very carefully before you place virtual disks from different guests on the same spindle. I don’t want to say “Never” because the actual impact depends on your pattern of use for these virtual disks. Again: Test, don’t assume!
  • When deploying virtual disks, the same rules apply as they would for the same process on physical disks. For example, do not allow a database store to use the same disk spindles as its transaction logs.
  • Consider how you will back up each server. Be especially wary of “magic” backup processes that work at the host server level, and test to ensure that you can restore your guest servers to working condition.
  • If two or more servers are designed to provide redundancy for each other (e.g. domain controllers, primary and secondary DNS, etc) then NEVER place them on the same host machine!
  • If copying virtual machine images to “rapid deploy” servers, then be very careful about things like duplicating SIDs. I know it’s boring and adds to the deployment time, but I still suggest using sysprep for these kinds of images.
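Several of the rules above are mechanical enough to check from a simple inventory of where each virtual disk lives and which host runs each guest. A sketch of such a checker – the inventory format, disk names and hosts are all invented for illustration, and two deliberate violations are included:

```python
# Sketch: flag rule violations from a hand-maintained inventory.
vdisks = {
    # virtual disk: physical spindle it lives on
    "dc1/os.vhd":   "spindle-1",
    "dc2/os.vhd":   "spindle-2",
    "sql/db.vhd":   "spindle-3",
    "sql/logs.vhd": "spindle-3",   # violation: logs share the database spindle
}
HOST_OS_SPINDLE = "spindle-0"      # where the host OS itself lives
redundant_pairs = [("dc1", "dc2")] # e.g. two domain controllers
guest_hosts = {"dc1": "hostA", "dc2": "hostA", "sql": "hostB"}  # violation

warnings = []
for disk, spindle in vdisks.items():
    if spindle == HOST_OS_SPINDLE:
        warnings.append(f"{disk} sits on the host OS spindle")
if vdisks["sql/db.vhd"] == vdisks["sql/logs.vhd"]:
    warnings.append("sql database and transaction logs share a spindle")
for a, b in redundant_pairs:
    if guest_hosts[a] == guest_hosts[b]:
        warnings.append(f"{a} and {b} provide redundancy but share a host")

for w in warnings:
    print("WARNING:", w)
```

The point is not the code but the habit: if the placement rules live in a written inventory rather than in someone’s head, violations can be caught before deployment instead of during a performance crisis.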

Proper Planning of Virtual Servers (Part 1)

One of the more common areas of confusion with Virtual Servers is how to deploy them properly, to ensure good performance and reliability. Some people are scared of Virtual Server technology and refuse to believe it can ever perform well enough to justify the investment. Others see Virtualisation as the magic bullet and end up throwing lots of money at technology they don’t really understand, with disappointing results.

While the Microsoft Virtual Server product is quite new, Virtualisation is a mature technology in general, with Virtual PC being a long-established product and VMware having a large range of virtualisation products to fit most needs and budgets.

When deciding whether or not to use virtualisation in a data centre, you should first of all formulate a list of aims you expect your virtualisation project to achieve. This should be painfully obvious, but I’ve seen lots of IT projects implemented because “it looked cool”, and these are usually the ones that end badly and cost way over their original budget.

One common aim is the migration of old legacy servers onto new hardware. This frequently means old “line of business” apps running on NT4 or old versions of Linux that cannot easily be upgraded to more modern OSes, and where maintenance on the current hardware platform has become an issue.

These deployments are usually pretty painless because the Virtualisation process is simply providing a compatibility layer to allow an old OS to run on new hardware for which it would not normally have drivers.

These old server applications typically will not stretch the abilities of modern hardware, so you can probably stick two or three applications of this kind together on one system without too much thought and get away with it. The use of the word “probably” is important here. Test, don’t assume!

Where users want to upgrade a server in place, but feel a clean install of components is either “required” or at least preferred, then virtualisation can make sense to reduce the hardware requirements for the temporary server used to hold the data while the “proper” hardware is being upgraded.

This hasn’t really come into play a lot so far, but right now we’re seeing people buy “64 bit ready” hardware and running 32 bit versions of operating systems and applications on it while they wait for 64 bit versions of those apps to appear and grow in maturity. This is a common scenario for people who are using SQL Server or Exchange Server on Windows at the moment… 64 bit versions of Exchange are still in beta at the time of writing, and 64 bit versions of SQL Server 2005 are available but very new, and all good DB admins are cautious about doing too many new things at once to data they actually care about.


Once the tipping point arrives for these systems, it will not be possible to upgrade in place, and setting up a temporary system on a virtual server to move the “live” install onto while the proper host system is being rebuilt makes a lot of sense.

In a way, I see the performance issues for this situation as quite similar to those you would encounter when moving a legacy system. You need to do some planning of course, but you can probably get away with “throwing” the system onto your virtual server host any way you can, because with luck it won’t be there for very long. You can also limit the number of systems being virtualised at any one time to whatever capacity your virtual host can cope with.