The SVVP Wizard clears up a support question around virtualising Microsoft products on other platforms

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Earlier this week, I picked up an e-mail from one of my colleagues in which he asked:

“Do Microsoft officially support Exchange 2007 on VMware ESX virtual machines?”

That seems a fair enough question – and not an uncommon one either in a world where many organisations operate a virtualise-first policy and so are reluctant to deploy infrastructure applications such as Exchange on physical hardware.

One of our colleagues who specialises in messaging technologies referred us to a post on the Exchange Team blog (“Should you virtualise your Exchange Server 2007 SP1 environment?” – of course, “should you” and “can you” are very different questions, and it may be that the best way to consolidate mailbox servers is fewer, larger servers rather than lots of little virtualised ones), as well as to the excessively wordy Microsoft Support Policies and Recommendations for Exchange Servers in Hardware Virtualization Environments article on TechNet.

After reading Microsoft knowledge base article 957006 (which Clive Watson referred me to a few months ago), I was pretty confident that Exchange running virtualised was supported, as long as the virtualisation platform was either Hyper-V or another technology covered by the Server Virtualization Validation Program (SVVP). But we wanted better than “pretty confident” – if the supportability of an environment that we design is called into question later, it could be very costly, and I wanted a 100% cast-iron guarantee.

Then I read Matt McSpirit’s blog post about the SVVP Wizard. This three-step process not only confirmed that the environment was covered but also gave me the low-down on exactly which features were and were not supported.

So, if you’re still not sure if a Microsoft product is supported in a virtualised environment, I recommend checking out the SVVP Wizard.

VMware launches vSphere

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Earlier today, VMware launched their latest virtualisation platform: vSphere. vSphere is what was once known as Virtual Infrastructure (VI) 4 and, for those who are unfamiliar with the name, the idea is that “virtual infrastructure” is more a description of what VMware’s products do than a product name. The company is trying to put forward the message that, after more than 10 years of virtualisation (originally just running multiple operating systems on a workstation, then on a hypervisor, then moving to a virtual infrastructure), this fourth generation of products will transform the datacentre into a private “cloud” – in what they refer to as an evolutionary step but a revolutionary approach.
VMware vSphere

Most of the launch was presented by VMware President and CEO, Paul Maritz, who welcomed a succession of leaders from Cisco, Intel, and HP to the stage in what sometimes felt like a bizarre “who’s who in The Valley” networking event, but had to be ousted from the stage by his own CTO, Stephen Herrod, in order to keep the presentation on schedule! Later on, he was joined by Michael Dell, before VMware Chairman, Joseph M. Tucci, closed the event.

VMware President and CEO, Paul Maritz, speaking at the vSphere 4 launch

For a man whose career included a 14-year spell at Microsoft, where he was regarded as the number 3 behind Bill Gates and Steve Ballmer, it seemed odd to me that Maritz referred to the Redmond giant several times during his presentation but never by name – always as a remark about relative capabilities. That seemed to indicate that, far from being a market leader that is comfortable with its portfolio, VMware is actually starting to regard Microsoft (and presumably Citrix too) as credible competition – not to be disregarded, but certainly not to be named in the launch of a new version of their premier product!

Maritz also seemed to take a dig at some other vendors, citing IBM as claiming to have invented the hypervisor [my colleagues from ICL may disagree] but not having realised its potential, and referring to some clouds as “the ultimate Californian hotels” where you can check in but not check out, as they are highly proprietary computing systems. I can only imagine that he’s referring to offerings from Amazon and Google here, as Microsoft’s Windows Azure is built on the same infrastructure that runs in on-premise datacentres – Windows Server and the .NET Framework, extended for the cloud – in just the same way that vSphere is VMware’s cloud operating system, be that an internal, external or hybrid and elastic cloud which spans on-premise and service-based computing paradigms.

It’s all about the cloud

So, ignoring the politics, what is vSphere about? VMware view vSphere as a better way to build computing platforms, from small and medium businesses (SMBs) to the cloud. Maritz explained that “the cloud” is a useful shorthand for the important attributes of this platform, built from industry standard components (referring to cloud offerings from Amazon, Google, and “other” vendors – Microsoft then!), offering scalable, on-demand, flexible, well-managed, lights-out configurations. Whilst today’s datacentres are seen by VMware as pillars of complexity (albeit secure and well-understood complexity), VMware see the need for something evolutionary, something that remains secure and open – and they want to provide the bridge between datacentre and cloud: severing complex links, jacking up the software to separate it from the hardware, and sliding in a new level of software (vSphere), whereby the applications, middleware and operating system see an aggregated pool of hardware as a single giant computing resource. Not just single machines, but an evolutionary roadmap from today’s datacentres. A platform. An ecosystem. One which offers compute, storage, network, security, availability and management capabilities, extendable by partners.

If you take away the marketing rhetoric, VMware’s vision of the future is not dissimilar to Microsoft’s. Both see devices becoming less significant as the focus shifts towards the end user. Both have a vision for cloud-centric services, backed up with on-premise computing where business requirements demand it. And both seem to believe the same analysts who say that 70% of IT budgets today are spent on things that do not differentiate businesses from their competition.

Of course, VMware claim to be further ahead of the competition. That’s no surprise – but both VMware and Microsoft plan to bring their cloud offerings to fruition within the next six months (whilst VMware have announced product availability for vSphere, they haven’t said when their vCloud service provider partners will be ready; meanwhile, Windows Azure’s availability will follow an iterative approach and the initial offering will not include all of the eventual capabilities). And, whilst vSphere has some cool new features that further differentiate it from the Microsoft and Citrix virtualisation offerings, that particular technology gap is closing too.

Not just for the enterprise

Whilst VMware aim to revolutionise the “plumbing”, they also claim that the advanced features make their solutions applicable to the low end of the market, announcing an Essentials product to provide “always on IT in a box”, using a small vSphere configuration with just a few servers, priced from $166 per CPU (or $995 for 3 servers).

Clients – not desktops

For the last year or so, VMware have been pushing VDI as an approach and, in some environments, that seems to be gaining some traction. Moving away from desktops and focusing on people rather than devices, VDI has become VMware View, part of the vClient initiative which takes the “desktop” into the cloud.

Some great new features

If Maritz’s clumsy Eagles references weren’t bad enough, Stephen Herrod’s section included a truly awful video with a gold disc delivered in Olympic relay style and “additional security” for the demo that featured “the presidential Blackberry”. It was truly cringe-worthy but Herrod did at least show off some of the technology as he talked through the efficiency, control, and choice marketing message:

  • Efficiency:
    • The ability to handle:
      • 2x the number of virtual processors per virtual machine (up from 4 to 8).
      • 2.5x more virtual NICs per virtual machine (up from 4 to 10).
      • 4x more memory per virtual machine (up from 64GB to 255GB).
      • 3x increase in network throughput (up from 9Gb/s to 30Gb/s).
      • 3x increase in the maximum recorded IOPS (up to over 300,000).
    • The ability to create vSphere clusters to build a giant computer with up to
      • 32 hosts.
      • 2048 cores.
      • 1280 VMs.
      • 3 million IOPS.
      • 32TB RAM.
      • 16PB storage.
    • vStorage thin provisioning – saving up to 50% storage through data de-duplication.
    • Distributed power management – resulting in 50% power savings during VMmark testing and allowing servers to be turned on/off without affecting SLAs. Just moving from VI3 to vSphere 4 should be expected to result in a 20% saving.
  • Control:
    • Host profiles make the giant computer easy to extend and scale with desired configuration management functionality.
    • Fault tolerance for zero downtime and zero data loss on failover. A shadow VM is created as a replica running on a second host, re-executing every piece of IO to keep the two VMs in lockstep. If one fails, there is seamless cutover and another VM is spawned so that it continues to be protected.
    • VMsafe APIs provide new always-on security offerings, including vShield Zones, which maintain compliance without diverting non-compliant machines to a different network – instead zoning them within vSphere so that they can continue to run efficiently within the shared infrastructure whilst security compliance issues are addressed.
  • Choice:
    • An extensive hardware compatibility list.
    • 4x the number of operating systems supported as “the leading competitor”.
    • Dynamic provisioning.
    • Storage VMotion – the ability to move a VM between storage arrays in the same way as VMotion moves VMs between hosts.

Packaging and pricing

It took 1,000 VMware engineers in 14 offices across the globe, and three million engineering hours, to add 150 new features in the development of VMware vSphere 4.

VMware claim that vSphere is “the best platform for building cloud infrastructures”. But that’s exactly it – a platform for building the infrastructure. Something has to run on top of that infrastructure too! Nevertheless, VMware does look to have a great new product set, and features like vStorage thin provisioning, VMsafe APIs, Storage VMotion and Fault Tolerance are big steps forward. On the other hand, vSphere is still very expensive – at a time when IT budgets are being squeezed.

VMware vSphere 4 will be available in a number of product editions (Essentials, Essentials Plus, Standard, Advanced, Enterprise and Enterprise Plus) with per-CPU pricing starting at $166 and rising to $3495, not including the cost of vCenter for management of the infrastructure ($1495 to $4995 per instance) and a mandatory support subscription.

A comparison chart for the various product features is also available.

General availability of vSphere 4 is expected during the second quarter of 2009.

As XenServer goes free, new management tools are promised, and VMware gets another thorn in its side

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I may be a Microsoft MVP for Virtual Machine technology, so my bias towards Hyper-V is pretty obvious (I’m also a VMware Certified Professional and I do try to remain fairly objective), but I did have to giggle when a colleague tipped me off this morning about some new developments at Citrix, just as VMware are gearing up for next week’s VMworld Europe conference in Cannes.

Officially under embargo until next week (not an embargo that I’ve signed up to, though), ZDNet is reporting that Citrix is to offer XenServer for free (XenServer is a commercial product based on the open source Xen project). From my standpoint, this is great news because Citrix and Microsoft already work very closely (Citrix developed the Linux integration for Hyper-V) and Citrix will be selling new management tools which will improve the management experience for both XenServer and Hyper-V. In addition, Microsoft SCVMM will support XenServer (always expected, but never officially announced), meaning that SCVMM will further improve its position as a “manager of managers” and provide a single point of focus for managing all three of the major hypervisors.

VMware, of course, will respond and tell us that this is not simply a question of software cost (to some extent, they are correct, but many of the high-end features that they offer over the competition are just the icing on top of the cake), that they have moved on from the hypervisor, and that their cloud-centric Virtual Datacentre Operating System model will be the way forward. That may be so but, with server virtualisation now moving into mainstream computing environments and with Citrix and Microsoft already working closely together (along with Novell and now Red Hat), this is almost certainly not good news for VMware’s market dominance.

Further reading

Running VMware Server on top of Hyper-V… or not

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few days ago, I came across a couple of blog posts about how VMware Server won’t run on top of Hyper-V. Frankly, I’m amazed that any hosted virtualisation product (like VMware Server) will run on top of any hypervisor – I always understood that hosted virtualisation required so many hacks to work at all that, if it saw something that wasn’t the real CPU (i.e. a hypervisor handling access to the processor’s hardware virtualisation support), it might be expected to fall over in a heap – and it seems that VMware even coded VMware Server 2.0 to check for the existence of Hyper-V and fail gracefully. Quite what happens with VMware Server on top of ESX or XenServer, I don’t know – but I wouldn’t expect it to work any better.

Bizarrely, Virtual PC and Virtual Server will run on Hyper-V (I have even installed Hyper-V on top of Hyper-V whilst recording the installation process for a Microsoft TechNet video!) and, for the record, ESX will run in VMware Workstation too (i.e. hypervisor on top of hosted virtualisation). As for Hyper-V in a VMware Workstation VM – I’ve not got around to trying it yet, but Microsoft’s Matt McSpirit is not hopeful.

Regardless of the above, Steve Graegart did come up with a neat solution for those instances when you really must run a hosted virtualisation solution and Hyper-V on the same box. It involves dual-booting, which is a pain in the proverbial but, according to Steve, it works:

  1. Open a command prompt and create a new boot loader entry by copying the default one: bcdedit /copy {default} /d "Boot without Hypervisor"
  2. After successful execution, copy the GUID (the ID of the new boot loader entry), including the curly braces, to the clipboard.
  3. Set the hypervisorlaunchtype property to off for the new entry: bcdedit /set {guid} hypervisorlaunchtype off, using the GUID you previously copied to the clipboard in place of {guid}.

After this, you should have a boot-time selection of whether or not to start Hyper-V (and hence whether or not an alternative virtualisation solution will run as expected).
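For reference, here’s what the whole sequence looks like. It’s a minimal sketch based on Steve’s steps, run from an elevated PowerShell prompt (the braces are quoted because PowerShell would otherwise strip them – at a plain command prompt the quotes can be dropped), and the GUID shown is just a placeholder for whatever bcdedit returns on your system:

  # 1. Copy the default boot entry to create a new one that will boot without the hypervisor
  bcdedit /copy '{default}' /d "Boot without Hypervisor"

  # 2. bcdedit prints the GUID of the new entry - use it to turn the hypervisor off
  #    for that entry only (the GUID below is a placeholder)
  bcdedit /set '{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}' hypervisorlaunchtype off

  # 3. Optionally, check the new entry before rebooting
  bcdedit /enum '{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}'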

Microsoft Virtualization: part 6 (management)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Today’s release of System Center Virtual Machine Manager 2008 is a perfect opportunity to continue my series of blog posts on Microsoft Virtualization technologies by highlighting the management components.

Microsoft view of virtualisation

System Center is at the heart of the Microsoft Virtualization portfolio and this is where Microsoft’s strength lies, as management is absolutely critical to the successful implementation of virtualisation technologies. Arguably, no other virtualisation vendor has such a complete management portfolio for all the different forms of virtualisation (although competitors may have additional products in certain niche areas) – and no-one else that I’m aware of is able to manage physical and virtual systems with the same tools and in the same view:

  • First up is System Center Configuration Manager (SCCM) 2007, providing patch management and deployment; operating system and application configuration management; and software upgrades.
  • System Center Virtual Machine Manager (SCVMM) provides virtual machine management, server consolidation and resource utilisation optimisation, as well as physical-to-virtual (P2V) and limited virtual-to-virtual (V2V) conversion (predictably, from VMware to Microsoft, but not back again) – and everything it does is scriptable through PowerShell (see the sketch after this list).
  • System Center Operations Manager (SCOM) 2007 (due for a second release in the first quarter of 2009) provides end-to-end service management; server and application health monitoring and management (regardless of whether the server is physical or virtual); and performance monitoring and analysis.
  • System Center Data Protection Manager (SCDPM) completes the picture, providing live host virtual machine backup with in-guest consistency and rapid recovery (basically, quiescing VMs, before taking a snapshot and restarting the VM whilst backup continues – in a similar manner to VMware Consolidated Backup but also with the ability to act as a traditional backup solution).

But hang on – isn’t that four products to license? Yes, but there are ways to do this in a very cost-effective manner – albeit requiring some knowledge of Microsoft’s licensing policies which can be very confusing at times, so I’ll have a go at explaining things…

From the client management license perspective, SCCM is part of the core CAL suite that is available to volume license customers (i.e. most enterprises who are looking at Microsoft Virtualization). In addition, the Enterprise CAL suite includes SCOM (and many other products).

Looking at server management, and quoting a post I wrote a few months ago about licensing System Center products:

The most cost-effective way to license multiple System Center products is generally through the purchase of a System Center server management suite licence:

Unlike SCVMM 2007 (which was only available as part of the Server Management Suite Enterprise, or SMSE), SCVMM 2008 is available as a standalone product. It should be noted, though, that based on Microsoft’s example pricing, SCVMM 2008 (at $1304) is only marginally less expensive than the SMSE (at $1497) – both quoted prices include two years of software assurance. For reference, the lowest price for VMware Virtual Center Management Server (VCMS) on the VMware website this morning is $6044; whilst that is not a direct comparison, as it includes 1 year of Gold 12×5 support, it is considerably more expensive and has lower functionality.

It should be noted that the SMSE is virtualisation-technology-agnostic and grants unlimited virtualisation rights. By assigning an SMSE to the physical server, it can be:

  • Patched/updated (SCCM).
  • Monitored (SCOM).
  • Backed Up (SCDPM).
  • VMM host (SCVMM).
  • VMM server (SCVMM).

One of the advantages of using SCVMM and SCOM together is the performance and resource optimisation (PRO) functionality. Stefan Stranger has a good example of PRO in a blog post from earlier this year – basically, SCVMM uses the management pack framework in SCOM to detect issues with the underlying infrastructure and suggest appropriate actions for an administrator to take – for example, moving a virtual machine workload to another physical host, as demonstrated by Dell integrating SCVMM with their hardware management tools at the Microsoft Management Summit earlier this year.

I’ll end this post with a table which shows the relative feature sets of VMware Virtual Infrastructure Enterprise and the Windows Server 2008 Hyper-V/Server Management Suite Enterprise combination:

Feature | VMware Virtual Infrastructure Enterprise | Microsoft Windows Server 2008/Server Management Suite Enterprise
Bare-metal hypervisor | ESX/ESXi | Hyper-V
Centralised VM management | Virtual Center | SCVMM
Manage ESX/ESXi and Hyper-V | – | SCVMM
VM backup | VCB | SCDPM
High availability/failover | Virtual Center | Windows Server Clustering
VM migration | VMotion | Quick Migration
Offline VM patching | Update Manager | VMM (with Offline Virtual Machine Servicing Tool)
Guest operating system patching/configuration management | – | SCCM
End-to-end operating system monitoring | – | SCOM
Intelligent placement | DRS | SCVMM
Integrated physical and virtual management | – | SMSE

This table is based on one from Microsoft and, in fairness, there are a few features that VMware would cite that Microsoft doesn’t yet have (memory management and live migration are the usual ones). It’s true to say that VMware is also making acquisitions and developing products for additional virtualisation scenarios (and has a new version of Virtual Infrastructure on the way – VI4), but the features and functionality in this table are the ones that the majority of organisations will look for today. VMware has some great products (read my post from the recent VMware Virtualization Forum) – but if I were an IT Manager looking to virtualise my infrastructure, I’d be thinking hard about whether I really should be spending all that money on the VMware solution when I could use the same hardware with less expensive software from Microsoft – and manage my virtual estate using the same tools (and processes) that I use for the physical infrastructure, reducing the overall management cost. VMware may have maturity on their side but, when push comes to shove, the total cost of ownership is going to be a major consideration in any technology selection.

Microsoft virtualisation news

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Some time back, there was talk of System Center Virtual Machine Manager 2008 (then called SCVMM vNext) shipping within 90 days of Hyper-V. This link was later denied, or at least downplayed (depending upon who you spoke to at Microsoft) but it seems that SCVMM 2008 is expected to ship in September… that’s ooh… about 90 days after Hyper-V. Of course, speculating on product release dates is always a risky business, but Rakesh Malhotra should know (he runs the SCVMM program management team).

On a related note, he also explains why SCVMM requires Virtual Center in order to integrate with VMware ESX (a question I asked a few days back, after the release of the VMware Infrastructure Toolkit for Windows v1.0 – PowerShell cmdlets for VI).
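For anyone who hasn’t looked at the VI Toolkit yet, it provides PowerShell cmdlets that talk to Virtual Center (or directly to an ESX host). A quick sketch – the server name and credentials are placeholders:

  # Load the VI Toolkit snap-in and connect to Virtual Center
  Add-PSSnapin "VMware.VimAutomation.Core"
  Connect-VIServer -Server "virtualcenter01" -User "administrator" -Password "********"

  # List the virtual machines and their power state
  Get-VM | Select-Object Name, PowerState

  # List the hosts and their connection state
  Get-VMHost | Select-Object Name, State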

Last, but not least, a Microsoft Virtualization User Group has been formed and has an inaugural meeting planned at Microsoft’s London (Victoria) offices on 24 September.

Virtualised hardware hotel

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I was at a VMware event yesterday where they proudly played this video…

…it’s a bit of fun (and the music is really catchy – even if the lip sync is a bit out!) and was apparently first shown at VMworld a few months back.

It’s not just VMware that can offer this type of solution though – I did use VMware Virtual Infrastructure (VI) in the design I produced for a server consolidation exercise with a “big four” accountancy firm a couple of years back, but it was very expensive and required a huge leap of faith on the part of both the customer and the datacentre managed service provider. Now that we’re in the second half of 2008, I’m not sure I would be using VMware products in my “virtualised hardware hotel”. For a lot less money, I could do the same thing with Windows Server 2008 and Hyper-V, together with System Center Virtual Machine Manager 2008. Some people will argue that the VMware products have maturity on their side and I’ll concede that it’s true – VMware did create the x86 virtualisation market – but a hypervisor (or virtualisation layer, in VMware-speak) is a commodity now, and the simple fact is that I really can’t justify advising my clients to spend the extra money on ESX and Virtual Center, especially as the Microsoft offerings under the System Center banner can be used to manage my virtual and physical infrastructure as one.

If only Microsoft produced viral videos like this, I could share one with you… so come on Redmond… give me something to play back at the VMware boys (and girls).

Microsoft releases Hyper-V to manufacturing

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

When Windows Server 2008 shipped with only a beta version of the new “Hyper-V” virtualisation role in the box, Microsoft undertook to release a final version within 180 days. I’ve commented before that, based on my impressions of the product, I didn’t think it would take that long and, as Microsoft ran at least two virtualisation briefings in the UK this week, I figured that something was just about to happen (on the other hand, I guess they could just have been squeezing the events into the 2007/8 marketing budget before the year-end on 30 June).

The big news is that Microsoft has released Hyper-V to manufacturing today.

[Update: New customers and partners can download Hyper-V. Customers who have deployed Windows Server 2008 can receive Hyper-V from Windows Update starting from 8 July 2008.]
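For anyone installing it fresh, Hyper-V is enabled like any other Windows Server 2008 role – through Server Manager, or from the command line. A rough sketch (the Server Core package name is from memory, so verify it with oclist before running anything):

  # Full installation of Windows Server 2008 (after applying the RTM update)
  ServerManagerCmd.exe -install Hyper-V -restart

  # Server Core installation - the role is an optional component enabled with ocsetup
  # (package names are case-sensitive; check the exact name with oclist)
  ocsetup Microsoft-Hyper-V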

Why choose Hyper-V?

I’ve made no secret of the fact that I think Hyper-V is one of the most significant developments in Windows Server 2008 (even though the hypervisor itself is a very small piece of code), and, whilst many customers and colleagues have indicated that VMware has a competitive advantage through product maturity, Microsoft really are breaking down the barriers that, until now, have set VMware ESX apart from anything coming out of Redmond.

When I asked Byron Surace, a Senior Product Manager for Microsoft’s Windows Server Virtualization group, why he believes that customers will adopt Hyper-V in the face of more established products, like ESX, he put it down to two main factors:

  • Customers now see server virtualisation as a commodity feature (so they expect it to be part of the operating system).
  • The issue of management (which I believe is the real challenge for organisations adopting a virtualisation strategy) – and this is where Microsoft System Center has a real competitive advantage, with the ability to manage both the physical and virtual servers (and the running workload) within the same toolset, rather than treating the virtual machine as a “container”.

When asked to comment on Hyper-V being a version 1 product (which means it will be seen by many as immature), Surace made the distinction between a “typical” v1 product and something “special”. After all, why ship a product a month before your self-imposed deadline is up? Because customer evidence (based on over 1.3 million beta testers, 120 TAP participants and 140 RDP customers) and analyst feedback to date is positive (expect to see many head to head comparisons between ESX and Hyper-V over the coming months). Quoting Surace:

“Virtualisation is here to stay, not a fad. [… it is a] major initiative [and a] pillar in Windows Server 2008.”

I do not doubt Microsoft’s commitment to virtualisation. Research from as recently as October 2007 indicates that only 7% of servers are currently virtualised, but that is expected to grow to 17% over the next two years. Whilst there are other products to consider (e.g. Citrix XenServer), VMware products currently account for 70% of the x86 virtualisation market (4.9% overall) and VMware are looking to protect their dominant position. One strategy appears to be pushing out plenty of FUD – for example, highlighting an article that compares Hyper-V to VMware Server (which is ridiculous, as VMware Server is a hosted platform – more analogous to the legacy Microsoft Virtual Server product, albeit more fully-featured with SMP and 64-bit support) and commenting that live migration has been dropped (even though quick migration is still present). The simple fact is that VMware ESX and Microsoft Hyper-V are like chalk and cheese:

  • ESX has a monolithic hypervisor, whilst Hyper-V takes the same approach as the rest of the industry (including Citrix/Xen and Sun) with its microkernelised architecture, which Microsoft consider to be more secure (Hyper-V includes no third-party code, whilst VMware integrates device drivers into its hypervisor).
  • VMware use a proprietary virtual disk format whilst Microsoft’s virtual hard disk (.VHD) specification has long since been offered up as an open standard (and is used by competing products like Citrix XenServer).
  • Hyper-V is included within the price of most Windows Server 2008 SKUs, whilst ESX is an expensive layer of middleware.
  • ESX doesn’t yet support 64-bit Windows Server 2008 (although that is expected in the next update).

None of this means that ESX, together with the rest of VMware’s Virtual Infrastructure (VI), are not good products but for many organisations Hyper-V offers everything that they need without the hefty ESX/VI price tag. Is the extra 10% really that important? And when you consider management, is VMware Virtual Infrastructure as fully-featured as the Microsoft Hyper-V and System Center combination? Then consider that server virtualisation is just one part of Microsoft’s overall virtualisation strategy, which includes server, desktop, application, presentation and profile virtualisation, within an overarching management framework.

Guest operating system support

At RTM, the list of supported guest operating systems has been expanded to include:

  • Windows Server 2008, 32- or 64-bit (1-, 2- or 4-way SMP).
  • Windows Server 2003, 32- or 64-bit (1- or 2-way SMP).
  • Windows Vista with SP1, 32- or 64-bit (1- or 2-way SMP).
  • Windows XP with SP3 32-bit (1- or 2-way SMP), with SP2 64-bit (1- or 2-way SMP), or with SP2 32-bit (1 vCPU only).
  • Windows 2000 Server with SP4 (1 vCPU only).
  • SUSE Linux Enterprise Server 10 with SP1 or SP2, 32- or 64-bit.

Whilst this is a list of supported systems (i.e. those with integration components to make full use of Hyper-V’s synthetic device driver model), others may work (in emulation mode), although my experience of installing the Linux integration components is that it is not always straightforward. Meanwhile, for many, the main omissions from that list will be Red Hat and Debian-based Linux distributions (e.g. Ubuntu). Microsoft isn’t yet making an official statement on support for other flavours of Linux (and the Microsoft-Novell partnership makes SUSE an obvious choice), but they are pushing the concept of a virtualisation ecosystem where customers don’t need to run one virtualisation technology for Linux/Unix operating systems and another for Windows – and it’s logical to assume that this ecosystem should also include the leading Linux distribution (I’ve seen at least one Microsoft slide listing RHEL as a supported guest operating system for Hyper-V). That said, Red Hat’s recent announcement that they will switch their allegiance from Xen to KVM could raise some questions (it seems that Red Hat has never been fully on board with the Xen hypervisor).

Performance and scalability

Microsoft are claiming that Hyper-V disk throughput is 150% that of VMware ESX Server – largely down to the synthetic device driver model (with virtualisation service clients in child partitions communicating with virtualisation service providers in the parent partition over a high-speed VMBus, to access disk and network resources using native Windows drivers). The virtualisation overhead appears minimal: in Microsoft and QLogic’s testing of three workloads on two identical servers (one running Hyper-V and the other running directly on hardware), the virtualised system maintained between 88 and 97% of the IOPS that the native system could sustain, and when switching to iSCSI there was less than a single percentage point of difference (although the overall throughput was much lower). Intel’s vConsolidate testing suggests that moving from 2-core to 4-core CPUs can yield a 47% performance improvement, with both disk and network IO scaling in a linear fashion.

Hardware requirements are modest too (Hyper-V requires a 64-bit processor with standard enhancements such as NX/XD and the Intel VT/AMD-V hardware virtualisation assistance) and a wide range of commodity servers are listed for Hyper-V in the Windows Server Catalog. According to Microsoft, when comparing Hyper-V with Microsoft Virtual Server (both running Windows Server 2003, with 16 single vCPU VMs on an 8-core server), disk-intensive operations saw a 178% improvement, CPU-intensive operations returned a 21% improvement and network-intensive operations saw a 107% improvement (in addition to the network improvements that the Hyper-V virtual switch presents over Virtual Server’s network hub arrangements).

Ready for action

As for whether Hyper-V is ready for production workloads, Microsoft’s experience would indicate that it is – they have moved key workloads such as Active Directory, File Services, Web Services (IIS), some line of business applications and even Exchange Server onto Hyper-V. By the end of the month (just a few days away) they aim to have 25% of their infrastructure virtualised on Hyper-V – key websites such as MSDN and TechNet have been on the new platform for several weeks now (combined, these two sites account for over 4 million hits each day).

It’s not just Microsoft that thinks Hyper-V is ready for action – around 120 customers have committed to Microsoft’s Rapid Deployment Programme (RDP) and, here in the UK, Paul Smith (the retail fashion and luxury goods designer and manufacturer) will shortly be running Active Directory, File Services, Print Services, Exchange Server, Terminal Services, Certificate Services, Web Services and Management servers on a 6-node Hyper-V cluster stretched between two data centres. A single 6-node cluster may not sound like much to many enterprises, but when 30 of your 53 servers are running on that infrastructure it’s pretty much business-critical.

Looking to the future

So, what does the future hold for Hyper-V? Well, Microsoft have already announced a standalone version of Hyper-V (without the rest of Windows) but are not yet ready to be drawn on when that might ship.

In the meantime, System Center Virtual Machine Manager 2008 will ship later this year, including support for managing Virtual Server, Hyper-V and VMware ESX hosts.

In addition, whilst Microsoft are keeping tight-lipped about what to expect in future Windows versions, Hyper-V is a key role for Windows Server, and so the next release (expected in 2010) will almost certainly include additional functionality in support of virtualisation. I’d expect new features to include those that were demonstrated and then removed earlier in Hyper-V’s lifecycle (live migration and the ability to hot-add virtual hardware); a file system designed for clustered disks would be a major step forward too.

In conclusion…

Hyper-V may be a version 1 product but I really do think it is an outstanding achievement and a major step forward for Microsoft. As I’ve written before, expect Microsoft to make a serious dent in VMware’s x86 [and x64] virtualisation market dominance over the next couple of years.

Low-cost enterprise virtualisation from XenSource

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

As I write this, I’m on the train to attend a Microsoft event about creating and managing a virtual environment on the Microsoft platform (that’s something that I’m doing right now to support some of my business unit’s internal systems). I’m also on the Windows Server Virtualization TAP program (most of the information I get from that is under NDA – I’m saving it all up to blog when it becomes public!) and I have a good working knowledge of VMware’s product set, including some of the (non-technical) issues that a virtualisation project can face. With that in mind, I thought I’d take the time to attend one of XenSource‘s Unify Your Virtual World events yesterday to look at how this commercial spinoff from the open source Xen project fits into the picture.

From my point of view, the day didn’t start well: the location was a hotel next to London Heathrow airport with tiny parking spaces at an extortionate price (at least XenSource picked up the bill for that); there was poor signage to find the XenSource event; and stale pastries for breakfast. However, I was pleased to see that, low-key as the event was, the presenters were accessible (indeed John Glendinning, XenSource VP for Worldwide Sales, was actively floor-walking). And once the presentation got started, things really picked up, with practical demonstrations supplemented by PowerPoint slides (not OpenOffice Impress, as I would expect from an open source advocate) only to set the scene and provide value, rather than the typical “death by PowerPoint” product pitch with only a few short demonstrations.

XenSource was founded in 2005 by the creators and leaders of the Xen hypervisor open source project and, in that short time, it has grown to the point where it is now a credible contender in the x86 virtualisation space – so much so that they are currently in the process of being acquired by Citrix Systems. Rather than trying to dominate the entire market, XenSource’s goal is clear: they provide a core virtualisation engine, with partners providing the surrounding products for storage, backup, migration, etc., ensuring that there are multiple choices for enterprises that deploy the XenSource virtualisation products. The XenSource “engine” is a next-generation hypervisor which delivers high performance computing through its use of paravirtualisation and hardware assist technologies. They also try to cast off the view of “it’s Linux so it must be difficult” with their “10 minutes to Xen” model, with no base operating system or RPMs to install, demonstrating the installation of a Xen server on bare metal hardware in around 10 minutes from a PXE boot (other deployment options are available).

From an architectural standpoint, the Xen hypervisor is very similar to Microsoft’s forthcoming Windows Server Virtualization model, providing an environment known as Domain 0. Memory and CPU access is facilitated by the hypervisor, providing direct access to hardware in most cases, although for Windows VMs to make use of this the hardware must support Intel VT or AMD-V (virtualisation hardware assistance). Storage and network access use a high-performance memory bus to access the Domain 0 environment, which itself makes use of standard Linux device drivers, ensuring broad hardware support.

One of the problems with running multiple virtual machines on a single physical server is the control of access to hardware. In a virtualisation environment that makes use of emulated drivers (e.g. VMware Server, Microsoft Virtual Server) the guest operating system is not aware that it is running in a virtual environment and any hardware calls are trapped by the virtual machine management layer which manages interaction with the hardware. The paravirtualised model used for Linux VMs allows the guest operating system to become aware that it is virtualised (known as enlightenment) and therefore to make a hypercall (i.e. a call to the hypervisor) that can interact directly with hardware. For non-paravirtualised operating systems that use the high performance memory bus (e.g. current versions of Windows), full virtualisation is invoked whereby the virtual machine believes it owns the hardware but in reality the hardware call is trapped by the virtualisation assist technology in the processor and passed to the hypervisor for action. For this reason, Intel VT or AMD-V capabilities are essential for Windows virtualisation with Xen.

XenSource view the VMware ESX Server model of hypervisor-based virtualisation as “first generation” – effectively using a mini-operating system kernel that includes custom device drivers and requires binary patching at runtime with a resulting performance overhead. In contrast, the “second generation” hypervisor model allows for co-operation between guests and the hypervisor, providing improved resource management and input/output performance. Furthermore, because the device drivers are outside the hypervisor, it has a small footprint (and consequentially small attack surface from a security standpoint) whilst supporting a broad range of hardware and providing significant performance gains.

XenSource claim that paravirtualised Linux on Xen carries only a 0.5-2% performance overhead (i.e. near-native performance) and even fully virtualised Windows on Xen has only a 2-6% overhead (which is comparable with competing virtualisation products).

There are three XenSource products:

  • XenExpress – a production-ready, entry level system for a standalone server (free of charge).
  • XenServer – a mid-range, multi-server virtualisation platform.
  • XenEnterprise – high capacity dynamic virtualisation for the enterprise.

Because the three products share the same codebase (unlike Microsoft Virtual PC/Virtual Server or VMware Workstation/Server/ESX Server), upgrade is as simple as supplying a license key to unlock new functionality. For XenServer and XenEnterprise, there are both perpetual and annual licensing options (licensed per pair of physical CPU sockets) at a significantly reduced cost when compared with VMware Virtual Infrastructure 3 (VI3).

The version 4 XenSource products were released in August 2007 with an update planned for the last quarter of 2007. New features in version 4 include:

  • XenMotion (XenEnterprise only) for seamless movement of virtual machines between hosts without any noticeable downtime (cf. VMware VMotion).
  • XenResourcePools (XenEnterprise only) to join virtual servers and manage virtualised resources as a logical group, supporting automatic VM placement and XenMotion with shared storage (volume-based iSCSI and file-based NFS, using the .vhd disk format), authentication, authorisation and resource configuration (similar to the model in VMware Virtual Center).
  • Xen64, a true 64-bit hypervisor providing scalability and support for enterprise applications in either a 32- or 64-bit environment with quality of service controls on resources, dynamic guest configuration and supporting up to:
    • 128GB RAM (32GB per guest, hotplug addition for supported Linux operating systems).
    • 1-32 pCPUs (1-8 vCPUs per guest).
    • 1-8 NICs (1-7 NICs per guest – hotplug addition and removal).
    • 1-128 storage repositories (16TB per repository with hotpluggable disks).
  • XenCenter, which provides a graphical virtualisation management interface, with guided wizards and guest templates for host and resource pool configuration on multiple servers, storage and networking configuration and management, VM lifecycle management, and import/export (cf. VMware Virtual Center). Whilst CLI commands are also available, XenCenter is a Microsoft .NET application for Windows operating systems which makes use of the latest Windows user interface standards. Because XenCenter makes use of a distributed configuration database, there is no dependency on a single SQL Server and management can fail over between virtual host servers.
  • XenAPI, a secure and remoteable programming interface for third-party and customer integration with existing products and processes, including the xe commands for system control (see the sketch after this list).
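To give a flavour of the CLI, the xe command can be run locally in Domain 0 or remotely (XenCenter also installs a Windows xe.exe). The remote switches are as I remember them, so treat this as a sketch – the host name, password and UUID are all placeholders:

  # List the virtual machines on a XenServer host
  xe -s xenserver01 -u root -pw "********" vm-list

  # Start a virtual machine, using a UUID taken from the vm-list output (placeholder here)
  xe -s xenserver01 -u root -pw "********" vm-start uuid=<uuid-from-vm-list>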

One example of the XenSource approach to providing additional functionality through partnerships is the agreement with Symantec whereby Symantec (formerly Veritas) Storage Foundation will be embedded into XenEnterprise (providing dynamic fibre-channel multipathing for redundancy, load balancing, resilience and speed); a new product called XenEnterprise High Availability will be developed for virtual machine failover; and Veritas NetBackup will be offered for data protection and backup of critical applications running on XenEnterprise virtual machines (via the NetBackup Agent, also supporting snapshots when used with Symantec Storage Foundation). Rather than re-certify systems for virtualisation, XenSource will accept Symantec’s certified plugins for common OEM architectures and, because Symantec Storage Foundation is already widely deployed, existing investments can be maintained.

In terms of demonstrations, I was impressed by what I saw. XenSource demonstrated a bare metal installation in around 10 minutes and were able to show all the standard virtualisation demonstrations (e.g. running a ping, copying files, or watching a video whilst performing a live migration with no noticeable break in service). The XenCenter console can be switched between VNC and RDP communications, and Xen makes use of its own .xva Xen virtual appliance format with Microsoft .vhd virtual hard disks. Conversion from VMware .vmdk files is possible using the supplied migration tools (there are Linux P2V tools included with the XenSource products but, for Windows migrations, it’s necessary to use products from partners such as PlateSpin and LeoStream) and templated installations can also be performed, with simple conversion between running VMs and templates. When cloning virtual machines, there are options for “fat clones”, whereby the whole disk is copied, or thin provisioning, using the same image and a differencing drive. Virtual machines can use emulated drivers, or XenSource Tools can be installed for greater control from the console. Storage can be local, NFS-based or iSCSI-based, with fibre channel storage and logical volume management expected in the next release.

It’s clear that XenSource see VMware as their main competitor in the enterprise space, and it looks to me as if they have a good product which provides most of the functionality in VMware VI3 Enterprise Edition (and all of the functionality in VMware VI3 Standard Edition) at a significantly lower price point. The Citrix acquisition will provide the brand ownership that many sceptics will want to see before they buy an open source product, the partnership model should yield results in terms of flexibility in operations, and it’s clear that the development pace is rapid. With XenSource going from strength to strength and Microsoft Windows Server Virtualization due to arrive around the middle of next year, VMware need to come up with something good if they want to retain their dominance of the x86 virtualisation market.

VMware ESX Server and HP MSA1500 – Active/Active or Active/Passive?

This content is 18 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Recently, I’ve been working on a design for a virtual infrastructure, based on VMware Virtual Infrastructure 3 with HP ProLiant servers and a small SAN – an HP MSA1500cs with MSA30 (Ultra320 SCSI) and MSA20 (SATA) disk shelves.

The MSA is intended as a stopgap solution until we have an enterprise SAN in place but it’s an inexpensive workgroup solution which will allow us to get the virtual infrastructure up and running, providing a mixture of SATA LUNs (for VCB, disk images, templates, etc.) and SCSI LUNs (for production virtual machines). The MSA’s Achilles’ heel is the controller, which only provides a single 2Gbps fibre channel connection – a serious bottleneck. Whilst two MSA1500 controllers can be used, the default configuration is active-passive; however HP now has firmware for active-active configurations when used with certain operating systems – what was unclear to me was how VMware ESX Server would see this.

I asked the question in the VMTN community forums thread entitled Active-Active MSA controller config. with VI3 and MSA1500 and got some helpful responses indicating that an active-active configuration was possible; however, as another user pointed out, the recommended most recently used (MRU) path policy seemed to be at odds with VMware’s fixed path advice for active-active controller configurations.

Thanks to the instructor on my VMware training course this week, I learned that, although the MSA controllers are active-active (i.e. they are both up and running – rather than one of them remaining in standby mode), they are not active-active from a VMware perspective – i.e. each controller can present a different set of LUNs to the ESX server but there is only one path to a LUN at any one time. Therefore, to ESX Server they are still active-passive. I also found the following on another post which seems to have been removed from the VMTN site (at least, I couldn’t get the link from Google to work) but Google had a cached copy of it:

“The active/active description”… “seems to imply that they are active/active in the sense that both are doing work but perhaps driving different LUN’s? i.e. if you have 10 volumes defined you might have 5 driven by controller A and 5 driven by controller B. Should either A or B fail all ten are going to be driven by the surviving controller. This is active/active yes [but] this is also the definition of active/passive in ESX words (i.e. only one controller have access to one LUN at any given time).”

Based on the above quote, it seems that MSA1500 solutions can be used with VMware products in an active-active configuration (which should, theoretically, double the throughput) but the recommended MRU path policy must be used, as only one controller can access a LUN at any given time.
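As an aside, if you’d rather check (or reset) the path policy across the MSA LUNs from a script than click through the VI Client, something like the following should do it with the VI Toolkit’s PowerShell cmdlets – assuming Get-ScsiLun and Set-ScsiLun are available in the toolkit version you have (they may not be in the earliest releases), and with placeholder server and host names:

  # Connect to Virtual Center (placeholder server name)
  Connect-VIServer -Server "virtualcenter01"

  # Show the current multipath policy for each disk LUN on a host
  Get-VMHost "esx01" | Get-ScsiLun -LunType disk | Select-Object CanonicalName, MultipathPolicy

  # Put a LUN back on the recommended MRU policy if it has been changed
  Get-VMHost "esx01" | Get-ScsiLun -CanonicalName "vmhba1:0:1" | Set-ScsiLun -MultipathPolicy "MostRecentlyUsed"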