Microsoft Virtualization: part 2 (host virtualisation)

Earlier this evening I kicked off a series of posts on the various technologies that are collectively known as Microsoft Virtualization and the first area I’m going to examine is that of server, or host, virtualisation.

Whilst competitors like VMware have been working in the x86 virtualisation space since 1998, Microsoft got into virtualisation through the acquisition of Connectix in 2003. Connectix had a product called Virtual PC and, whilst the Mac version was dropped just as Mac OS X started to grow in popularity (with its place in the market taken by Parallels Desktop for Mac and VMware Fusion), there have been two incarnations of Virtual PC for Windows under Microsoft ownership – Virtual PC 2004 and Virtual PC 2007.

Virtual PC provides a host virtualisation capability (cf. VMware Workstation) but is aimed at desktop virtualisation (the subject of a future post). It also has a bastard stepchild (my words, albeit based on the inference of a Microsoft employee) called Virtual Server, which uses the same virtual machine and virtual hard disk technology but runs as a service rather than as an application (comparable with VMware Server), with a web management interface that I find clunky – as Microsoft’s Matt McSpirit once described it, it’s a bit like Marmite: you either love it or hate it.

Virtual Server has run its course and the final version is Virtual Server 2005 R2 SP1. Its main problem is the hosted architecture, whereby the virtualisation stack runs on top of a full operating system and requires very inefficient context switches between user and kernel mode in order to access the server hardware – that, and the fact that it only supports 32-bit guest operating systems.

With the launch of Windows Server 2008 came a beta of Hyper-V – which, in my view, is the first enterprise-ready virtualisation product that Microsoft has released. The final product shipped on 26 June 2008 (as Microsoft’s James O’Neill pointed out, the last product to ship under Bill Gates’ tenure as a full-time Microsoft employee) and provides a solid and performant hypervisor-based virtualisation platform within the Windows Server 2008 operating system. Unlike the monolithic hypervisor in VMware ESX, which includes device drivers for a limited set of supported hardware, Hyper-V uses a microkernelised model, with a high-performance VMBus for communication between guest (child) VMs and the host (parent) partition, which uses the same device drivers as Windows Server 2008 to communicate with the hardware. At the time of writing, there are 419 server models certified for Hyper-V in the Windows Server Catalog.

Architecturally, Hyper-V has almost nothing in common with Virtual PC and Virtual Server, although it does use the same virtual hard disk (.VHD) format and virtual machines can be migrated from the legacy platforms to Hyper-V (although, once the VM additions have been removed and replaced with the Hyper-V integration components, they cannot be taken back into a Virtual PC/Virtual Server environment). Available only in 64-bit editions of Windows Server 2008, Hyper-V makes use of hardware-assisted virtualisation as well as security features to protect against buffer overflow attacks.
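Because the .VHD format is common to all three products, it’s straightforward to inspect a virtual hard disk programmatically. As a minimal sketch (field offsets are taken from Microsoft’s published VHD specification; the function name is my own), the 512-byte footer at the end of the file identifies the disk type and virtual size:

```python
import struct

# Disk type values defined in the published VHD specification
VHD_DISK_TYPES = {0: "none", 2: "fixed", 3: "dynamic", 4: "differencing"}

def read_vhd_footer(path):
    """Parse the 512-byte footer at the end of a .vhd file."""
    with open(path, "rb") as f:
        f.seek(-512, 2)            # the footer is the last 512 bytes of the file
        footer = f.read(512)
    if footer[0:8] != b"conectix":
        raise ValueError("not a VHD: 'conectix' cookie missing from footer")
    # All multi-byte fields in the footer are big-endian
    current_size = struct.unpack(">Q", footer[48:56])[0]   # virtual size in bytes
    disk_type = struct.unpack(">I", footer[60:64])[0]
    return {"type": VHD_DISK_TYPES.get(disk_type, "unknown"),
            "virtual_size_bytes": current_size}
```

Dynamically expanding and differencing disks also carry a header describing the block allocation table, but the footer alone is enough to tell what kind of disk you’re dealing with.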

I’ve written extensively about Hyper-V on this blog, but these are the main posts that I would highlight:

Whilst Hyper-V is a remarkably solid product, to some extent the virtualisation market is moving on from host virtualisation (although it is an enabler for various related technologies) and there are those who are wary of it because it’s from Microsoft and it’s a version 1 product. Then there are those who highlight its supposed weaknesses… mostly FUD from VMware (for example, a few days back a colleague told me that he couldn’t implement Hyper-V in an enterprise environment because it doesn’t support failover – a completely incorrect statement).

When configured to use Windows Server 2008’s failover clustering technologies, Hyper-V can save the state of a virtual machine and restart it on another node, using a technology known as quick migration. Live migration (where the contents of memory are copied on the fly, resulting in seamless failover between cluster nodes in a similar manner to VMware VMotion) is a feature that was removed from the first release of Hyper-V. Whilst this has attracted much comment, many organisations who are using virtualisation in a production environment will only fail virtual machines over in a controlled manner – although there will be some exceptions where live migration is required. Nevertheless, at the recent Microsoft Virtualization launch event, Microsoft demonstrated live migration and said it will be in the next release of Hyper-V.

Memory management is another area that has attracted attention – VMware’s ESX product has the ability to overcommit memory as well as to transparently share pages of memory. Hyper-V does not offer this and Microsoft has openly criticised both techniques: with memory overcommitment, the guest operating system thinks it is managing memory paging whilst the virtual memory manager is actually swapping pages to disk; and transparent page sharing breaks fundamental rules of isolation between virtual machines.

Even so, quoting from Steven Bink’s interview with Bob Muglia, Vice President of Microsoft’s Server and Tools division:

“We talked about VMware ESX and its features like shared memory between VMs, ‘we definitely need to put that in our product’. Later he said it will be in the next release – like hot add memory, disk and NICs will be and live migration of course, which didn’t make it in this release.”

[some minor edits made for the purposes of grammar]

Based on the comments that have been made elsewhere about shared memory management, this should probably be read as “we need something like that” and not “we need to do what VMware has done”.

Then there is scalability. At launch, Microsoft cited 4-core, 4-way servers as the sweet spot for virtualisation, with up to 16 cores supported, running up to 128 virtual machines. Now that Intel has launched its new 6-core Xeon 7400 processors (codenamed Dunnington), an update has been released to allow Hyper-V to support 24 cores (and 192 VMs), as described in Microsoft knowledge base article 956710. Given the speed with which that update was released, I’d expect to see similar improvements in line with processor technology enhancements.
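Incidentally, a quick check on those numbers shows that the update simply scaled the VM ceiling in line with the core count – both configurations work out at 8 VMs per core:

```python
def vms_per_core(cores, vms):
    """Supported virtual machine ceiling per physical core."""
    return vms / cores

# Limits quoted above, before and after the KB 956710 update
print(vms_per_core(16, 128))  # 8.0 - original release
print(vms_per_core(24, 192))  # 8.0 - after the 24-core update
```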

One thing is for sure, Microsoft will make some significant improvements in the next full release of Hyper-V. At the Microsoft Virtualization launch, as he demonstrated live migration, Bob Muglia spoke of the new features in the next release of Windows Server 2008 and Hyper-V (which I interpreted as meaning that Hyper-V v2 will be included in Windows Server 2008 R2, currently scheduled for release in early 2010). Muglia continued by saying that:

“There’s actually quite a few new features there which we’ll talk about both at the upcoming PDC (Professional Developer’s Conference) in late October, as well as at WinHEC which is the first week of November. We’ll go into a lot of detail on Server 2008 R2 at that time.”

In the meantime, there is a new development – the standalone Hyper-V Server. Originally positioned as a $28 product for the OEM and enterprise channels, this will now be a free of charge download and is due to be released within 30 days of the Microsoft Virtualization launch (so, any day now).

Hyper-V Server is a “bare-metal” virtualisation product and is not a Windows product (do the marketing people at Microsoft really think that Microsoft Hyper-V Server will not be confused with the Hyper-V role in Microsoft Windows Server?).

With just a command line interface (as in server core installations of Windows Server 2008), it includes a configuration utility for basic setup tasks like renaming the computer, joining a domain and updating network settings, but it is intended to be remotely managed using the Hyper-V Manager MMC on Windows Server 2008 or Windows Vista SP1, or with System Center Virtual Machine Manager (SCVMM) 2008.

Whilst it looks similar to server core and uses some Windows features (e.g. the same driver model and update mechanism), it has a single role – Microsoft Hyper-V – and does not support features in Windows Server 2008 Enterprise Edition like failover clustering (so no quick migration), although the virtual machines can be moved to Windows Server 2008 Hyper-V if required at a later date. Hyper-V Server is also limited to 4 CPU sockets and 32GB of memory (as for Windows Server 2008 Standard Edition). I’m told that Hyper-V Server has a 100MB memory footprint and uses around 1TB of disk (which sounds a lot for a hypervisor – we’ll see when I get my hands on it in a few days’ time).

Unlike Windows Server 2008 Standard, Enterprise and Datacenter Editions, Hyper-V Server will not require client access licenses (although the virtual machine workloads may) and it does not include any virtualisation rights.

That just about covers Microsoft’s host virtualisation products. The next post in this series will look at various options for desktop virtualisation. In the meantime, I’ll be spending the day at VMware’s Virtualisation Forum in London, to see what’s happening on their side of the fence.

One thought on “Microsoft Virtualization: part 2 (host virtualisation)”


  1. Well, that 100MB memory footprint and 1TB of disk was complete nonsense… I’ve just installed Hyper-V server and it’s 2.7GB on disk and running Task Manager shows that it’s using just over 300MB of RAM. It’s also running 30-odd processes, just shy of 50 services and has another 30-or-so services that are stopped. Hyper-V Server is certainly not just the Hyper-V hypervisor without Windows…
