Microsoft Hyper-V: A reminder of where we’re at

Earlier this week I saw a tweet from the MIX 2011 conference highlighting that Microsoft’s Office 365 software-as-a-service platform runs entirely on the company’s Hyper-V hypervisor.

There are those (generally people with a big investment in VMware technologies) who say Microsoft’s hypervisor lacks the features to make it suitable for use in the enterprise. I don’t know how much bigger you have to get than Office 365, but the choice of hypervisor is becoming less and less relevant as we move away from infrastructure and concentrate more on the platform.

Even so, now that Hyper-V has reached the magical version 3 milestone (the point at which people generally start to accept Microsoft products), I thought it was worth a post to look back at where Hyper-V has come from, and where it’s at now.

Looking at some of the technical features (with a short scripted illustration after the list):

  • Dynamic memory requires Windows Server 2003 SP2 or later in the guest (and is not yet supported for Linux guests). It’s important to understand the difference between over-subscription and over-commitment.
  • Performance differences between hypervisors are now so small that performance is no longer a meaningful differentiator.
  • Hyper-V uses Windows failover clustering for high availability – the same technology that underpins live migration.
  • In terms of storage scalability, it’s up to the customer to choose how to slice and dice storage – with partner support for multipathing, hardware snapshotting, etc. Hyper-V users can have one LUN for each VM, or one LUN for 1,000 VMs (of course, no-one would actually do that).
  • Networking also relies on the partner ecosystem – for example, HP provides software to allow NIC teaming on its servers, and a Hyper-V virtual switch can be bound to the resulting team.
  • In terms of data protection, the Volume Shadow Copy Service on the host is used and there are a number of choices to make around agent placement. A single agent can be deployed to the host, with all guests protected (allowing whole-machine recovery), or guests can have their own agents to allow backups at the application level (for Exchange, SQL Server, etc.).
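
All of this is scriptable too: the Hyper-V WMI provider exposes each VM as an Msvm_ComputerSystem instance in the root\virtualization namespace. As a minimal sketch (PowerShell, run on the host):

# List Hyper-V VMs and their state via the v1 WMI provider (Windows Server 2008 R2).
# The Caption filter excludes the host's own Msvm_ComputerSystem entry.
# EnabledState: 2 = running, 3 = off, 32768 = paused, 32769 = saved
Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem |
    Where-Object { $_.Caption -eq 'Virtual Machine' } |
    Select-Object ElementName, EnabledState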

I’m sure that competitor products may have a longer list of features but, in terms of capability, Hyper-V is “good enough” for most scenarios I can think of. I’d be interested to hear what barriers to enterprise adoption people see for Hyper-V.

Hyper-V R2 service pack 1, Dynamic Memory, RemoteFX and virtual desktops

I have to admit that I’ve tuned out a bit on the virtualisation front over the last year. It seems that some vendors are ramming VDI down our throats as the answer to everything; meanwhile others are confusing virtualisation with “the cloud”. I’m also doing less hands-on work with technology these days and I struggle to make a business case to fly over to Redmond for the MVP Summit, so I was glad to be invited to join a call and take a look at some of the improvements Microsoft has made to Hyper-V as part of Windows Server 2008 R2 service pack 1.

Dynamic memory

There was a time when VMware criticised Microsoft for not having any Live Migration capabilities in Hyper-V but we’ve had them for a while now (since Windows Server 2008 R2).  Then there’s the whole device drivers in the hypervisor vs. drivers in the parent partition argument (I prefer hardware flexibility, even if there is the occasional bit of troubleshooting required, over a monolithic hypervisor and locked-down hardware compatibility list).  More recently the criticism has been directed at dynamic memory and I have to admit Microsoft didn’t help themselves with this either: first it was in the product, then it was out; and some evangelists and Product Managers said dynamic memory allocation was A Bad Thing:

“Sadly, another “me too” feature (dynamic memory) has definitely been dropped from the R2 release. I asked Microsoft’s Jeff Woolsey, Principal Group Program Manager for Hyper-V, what the problem was and he responded that memory overcommitment results in a significant performance hit if the memory is fully utilised and that even VMware (whose ESX hypervisor does have this functionality) advises against its use in production environments. I can see that it’s not a huge factor in server consolidation exercises, but for VDI scenarios (using the new RDS functionality), it could have made a significant difference in consolidation ratios.”

In case you’re wondering, that’s taken from my notes from when this feature was dropped from Hyper-V in the R2 release candidate (it had previously been demonstrated in the beta). Now that Microsoft has dynamic memory working, it’s apparently A Good Thing (Microsoft’s PR works like that – a feature is bad when Microsoft doesn’t have it, right up to the point when they do…).

To be fair, it turns out Microsoft’s dynamic memory is not the same as VMware’s – it’s all about over-subscription vs. over-commitment. Whereas VMware will over-commit memory and then de-duplicate to reclaim what it needs, Microsoft takes the approach of only providing each VM with enough memory to start up, monitoring performance, adding memory as required, and taking it back when applications are closed.

As for those consolidation ratio improvements: Michael Kleef, one of Microsoft’s Technical Program Managers in the Server and Cloud Division, has found that dynamic memory can deliver a 40% improvement in VDI density (Michael also spoke about this at TechEd Europe last year). Microsoft’s tests were conducted using the Login Virtual Session Indexer (LoginVSI) tool, which is designed to script virtual workloads and is used by many vendors to test virtualised infrastructure.

It turns out that, when implementing VDI solutions, disk I/O is the first bottleneck, memory comes next, and only after that is fixed will you hit a processor bottleneck. Instead of allocating 1GB of RAM to each Windows 7 VM, Microsoft used dynamic memory with VMs starting at 512MB (a configuration that is supported on Hyper-V). There’s no need to wait for an algorithm to compute where memory can be reclaimed – instead, the minimum requirement is provided and additional memory is allocated on demand – and Microsoft claims that other solutions rely on weakened operating system security to reach this level of density. There’s no need to tweak the hypervisor either.

Microsoft’s tests were conducted using HP and Dell servers with 96GB of RAM (the sweet spot above which larger DIMMs are required and the infrastructure cost rises significantly). Using Dell’s reference architecture for Hyper-V R2, Microsoft managed to run the same workload on just 8 blades (instead of 12) using service pack 1 and dynamic memory, without ever exhausting server capacity or hitting the limits of unacceptable response times.

Dynamic memory reclamation uses Hyper-V/Windows’ ability to hot-add/remove memory, with the system constantly monitoring virtual machines for memory pressure (expanding using the configured memory buffer) or excess memory, at which point they become candidates for memory removal (not immediately, in case the user restarts an application). Whilst it’s particularly useful in a VDI scenario, Microsoft says it also works well with web workloads and server operating systems, delivering a 25-50% density improvement.
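
For those who like to script these things, the settings behind this are exposed through the Hyper-V WMI API. The following is a sketch only, assuming the SP1 additions to Msvm_MemorySettingData (DynamicMemoryEnabled, TargetMemoryBuffer), a VM with no snapshots, and a placeholder VM name – check the property names on MSDN before relying on it:

# Sketch: enable dynamic memory on a VM ('TestVM' is a placeholder name)
$ns  = 'root\virtualization'
$vm  = Get-WmiObject -Namespace $ns -Class Msvm_ComputerSystem -Filter "ElementName='TestVM'"
$mem = Get-WmiObject -Namespace $ns -Class Msvm_MemorySettingData |
       Where-Object { $_.InstanceID -like "*$($vm.Name)*" }  # assumes no snapshots

$mem.DynamicMemoryEnabled = $true   # SP1 property
$mem.VirtualQuantity      = 512     # startup RAM (MB)
$mem.Limit                = 2048    # maximum RAM (MB)
$mem.TargetMemoryBuffer   = 20      # buffer (%)

# Apply the change via the management service
$svc = Get-WmiObject -Namespace $ns -Class Msvm_VirtualSystemManagementService
$svc.ModifyVirtualSystemResources($vm.__PATH, @($mem.GetText(1))) | Out-Null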

More Windows 7 VMs per logical CPU

Dynamic memory is just one of the new virtualisation features in Windows Server 2008 R2 service pack 1.  Another is a new support limit of 12 VMs per logical processor for exclusively Windows 7 workloads (it remains at 8 for other workloads) – so, for example, a dual-socket, quad-core host with 8 logical processors is now supported running up to 96 Windows 7 VMs, memory permitting.  And Windows 7 service pack 1 includes the necessary client-side components to take advantage of the server-side improvements.

RemoteFX

The other major improvement in Windows Server 2008 R2 service pack 1 is RemoteFX, a server-side graphics acceleration technology.  Due to improvements in the Remote Desktop Protocol (RDP), now at version 7.1, Microsoft is able to provide a more efficient encode/decode pipeline, together with enhanced USB redirection including support for phones, audio, webcams, etc. – all inside an RDP session.

Most of the RemoteFX benefits apply to VDI scenarios but one part also benefits session virtualisation (previously known as Terminal Services) – that’s the RDP encode/decode pipeline which Microsoft says is a game changer.

Microsoft has always claimed that Hyper-V’s architecture makes it scalable, with no device drivers inside the hypervisor (native device drivers exist only in the parent partition) and a VMBus used for communications between virtual machines and the parent partition.  Using this approach, virtual machines can now use a virtual GPU driver to provide the Direct3D or DirectX capabilities that some modern applications require – e.g. certain Silverlight or Internet Explorer 9 features.  Using the GPU installed in the server, RemoteFX allows VMs to request content via the virtual GPU and the VMBus, render it on the physical GPU, and pass the results back to the VM.

The new RemoteFX encode/decode pipeline uses a render, capture and compress (RCC) process to render on the GPU but to encode the protocol using either the GPU, the CPU or an application-specific integrated circuit (ASIC).  Using an ASIC is analogous to TCP offloading, in that no work is required from the CPU.  There’s also a decode ASIC – so clients can use RDP 7.1 in an ultra-thin client package (a solid-state ASIC) with RemoteFX decoding.

Summary

Windows 7 and Windows Server 2008 R2 service pack 1 is mostly a rollup of hotfixes, but it also delivers some major virtualisation improvements that should help Microsoft to establish itself as a credible competitor in the VDI space. Of course, the hypervisor is just one part of a complex infrastructure and Microsoft still relies on partners to provide parts of the solution – but by using products like Citrix XenDesktop as a session broker, and tools from AppSense for user state virtualisation, it’s finally possible to deliver a credible VDI solution on the Microsoft stack.

Extending certificate validity to avoid mouse/video refresh issues with the Hyper-V Virtual Machine Connection

In order to avoid man-in-the-middle attacks, Hyper-V’s Virtual Machine Connection (vmconnect.exe) requires certificates for a successful connection.  At some point, those certificates expire, resulting in an error message when connecting to virtual machines, as described in Microsoft knowledge base article 967902, which also includes details of an update to resolve the issue, introducing an annual certificate renewal process.

Unfortunately, there is a bug in that annual certificate renewal process which can affect the refresh of mouse/video connections. The bug only applies to certain use cases with VMConnect (i.e. Remote Desktop connections are unaffected) and there are two possible workarounds:

  1. Save and restore the virtual machine (a temporary workaround, until the certificate expires again in a year).
  2. Install new self-signed certificates on each host. It may not be the most elegant fix, but it is simple, and has a long-term effect.

Microsoft has not created an update to resolve this new issue, which only applies in certain use cases; instead they have produced a sample script that uses makecert.exe to create new self-signed certificates for the Hyper-V Virtual Machine Management Service (VMMS) that don’t expire until 2050.  This script should be run on every affected host; running it several times will result in multiple certificates, which is untidy, but will not cause issues.

After installing the new certificate (in the Local Computer store, at Trusted Root Certification Authorities\Certificates and at Personal\Certificates), the VMMS should be configured to use it and then restarted. Obviously, this will affect all virtual machines running on the host, so the activity should only be carried out during a scheduled maintenance window. For organisations that do not want to use self-signed certificates, it’s also possible to use a certificate issued by a certificate authority (CA).
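
I won’t reproduce Microsoft’s sample script here but, as a sketch, the heart of it looks something like this (makecert.exe ships with the Windows SDK; the exact switches and subject name in Microsoft’s script may differ):

# Sketch: create a long-lived self-signed certificate in the Local Computer\Personal
# store (-r = self-signed, -pe = exportable private key, -e = expiry date)
$hostname = [System.Net.Dns]::GetHostName()
makecert -r -pe -n "CN=$hostname" -ss My -sr LocalMachine -sky exchange -e 01/01/2050

# The certificate also needs to be placed in Trusted Root Certification Authorities
# (e.g. with the Certificates MMC snap-in) and the VMMS configured to use it, then:
Restart-Service vmms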

More details will shortly become available in Microsoft knowledge base article 2413735.

Hyper-V R2 Dynamic Memory: over-subscription vs. over-commitment

There’s been a lot of talk about how Microsoft’s Dynamic Memory capability in Hyper-V R2 compares with similar features from VMware – including the pros and cons of each approach. Because that’s been so well-covered elsewhere, I’ll avoid it here (for the VMware perspective, check out Eric Gray’s vCritical; for Microsoft, Ben Armstrong is your man; and, for a completely unbiased and objective view… well, good luck finding one). I did, however, see an interesting quote from one of Ben’s TechEd sessions in New Zealand recently:

  • “Over-subscription is what airlines do by selling more seats than places in a plane.
  • Over-commitment is what happens when all those passengers actually show up to use their seat.”

[Ben Armstrong at TechEd New Zealand]

One of my fellow Virtual Machine MVPs, Ronald Beekelaar, extended this analogy and it seemed good to share it more widely…

There is nothing wrong with over-subscription – it happens in many real-world scenarios (public transport, libraries, doctors’ surgeries, hospitals, utility companies, telephone systems, etc.) and these services work well, most of the time. The issues occur when everyone who could use the service actually tries to do so at the same time – at which point the service is over-committed.

What do we do when we have over-commitment? We add more resources (run extra buses, add carriages to a train, add books to the library, open a new hospital ward, lay more telephone cables, etc.) – and in the world of virtualisation, we add one or more hosts and migrate some of the conflicting workloads away.

Another of my “How Do I?” videos available on the Microsoft TechNet website

Over the last couple of years, I’ve produced a few of the TechNet How Do I? videos for Microsoft and have linked to most (if not all) of them from this blog as they’ve gone live.

A few days ago, I was reviewing my community activities for the year and noticed one that had slipped through the net: running Hyper-V Server from a USB drive.

Unfortunately it doesn’t look as though I’ll be doing any more of these as the company I did the work through has lost the contract (and Microsoft produces a lot of this sort of content in house, reducing the scope for outsiders like me).  It was a nice gig, while it lasted – if a little time-consuming… hopefully the videos are useful to people!

Desktop virtualisation shake-up at Microsoft

What an afternoon… for a few days now, I’ve been trying to find out what the big announcement from Microsoft and Citrix would be in the desktop virtualisation hour webcast later today. I was keeping quiet until after the webcast but now the embargo has lifted, Microsoft has issued a press release, and the news is all over the web:

  • Coming in Windows Server 2008 R2 service pack 1 (for which no date has been announced yet) will be the dynamic memory functionality that was in early releases of Hyper-V for Windows Server 2008 R2 but was later pulled from the product.  Also in SP1 will be a new graphics acceleration platform, known as RemoteFX, based on desktop-remoting technology that Microsoft obtained in 2008 when it acquired Calista Technologies, allowing rich media content to be accessed over the Remote Desktop Protocol and enabling users of virtual desktops and applications to receive a rich 3D, multimedia experience when working remotely.
  • Microsoft and Citrix are offering a “Rescue for VMware VDI” promotion, which allows VMware View customers to trade in up to 500 licenses at no additional cost, and the “VDI Kick Start” promotion, which offers new customers a more than 50 percent discount off the estimated retail price.
  • There are virtualisation licensing changes too: from July, Windows Client Software Assurance customers will no longer have to buy a separate license to access their Windows operating system in a VDI environment, as virtual desktop access rights will now be a Software Assurance (SA) benefit – effectively, if you have SA, you get Windows on screen, no matter what processor it is running on!  There will also be new roaming usage rights: Windows Client Software Assurance and new Virtual Desktop Access (the new name for VECD) customers will have the right to access their virtual Windows desktop and their Microsoft Office applications hosted on VDI technology from secondary, non-corporate network devices, such as home PCs and kiosks.
  • Citrix will ensure that XenDesktop HDX technology will be interoperable with and will extend RemoteFX within 6 months.
  • Oh yes, and Windows XP Mode (i.e. Windows Virtual PC) will no longer require hardware virtualisation technology (although, frankly, I find that piece of news a little less exciting as I’d really like to see Virtual PC replaced by a client-side hypervisor).

Windows Server 2008 R2 Hyper-V crash turns out to be an Intel driver issue

A few weeks ago, I rebuilt a recently decommissioned server to run as an infrastructure test and development rig at home.  I installed Windows Server 2008 R2, enabled the Hyper-V role and all was good until I started to configure my networks, during which I experienced a “blue screen of death” (BSOD) – never a good thing on your virtualisation host, especially when it does the same thing again on reboot:

“Oh dear, my freshly built Windows Server 2008 R2 machine has just thrown 3 BSODs in a row… after running normally for an hour or so :-(“

The server is a Dell PowerEdge 840 (a small, workgroup server that I bought a couple of years ago) with 8GB RAM and a quad core Xeon CPU.  The hardware is nothing special – but fine for my infrastructure testing – and it had been running with Windows Server 2008 Hyper-V since new (with no issues) but this was the first time I’d tried R2. 

I have 3 network adapters in the server: a built-in Broadcom NetXtreme Gigabit card (which I’ve reserved for remote access) and 2 Intel PRO/100s (for VM workloads).  Ideally I’d use Gigabit Ethernet cards for the VM workload too, but this is only my home network and they were what I had available!

Trying to find out the cause of the problem, I ran WhoCrashed, which gave me the following information:

This was likely caused by the following module: efe5b32e.sys
Bugcheck code: 0xD1 (0x0, 0x2, 0x0, 0xFFFFF88002C4A3F1)
Error: DRIVER_IRQL_NOT_LESS_OR_EQUAL
Dump file: C:\Windows\Minidump\020410-15397-01.dmp
file path: C:\Windows\system32\drivers\efe5b32e.sys
product: Intel(R) PRO/100 Adapter
company: Intel Corporation
description: Intel(R) PRO/100 Adapter NDIS 5.1 driver

That confirmed that the issue was with the Intel NIC driver, which sounded right as, after enabling the Hyper-V role, I connected an Ethernet cable to one of the Intel NICs and got a BSOD each time the server came up. If I disconnected the cable, no BSOD.  Back to the twitters:

“Does anyone know of any problems with Intel NICs and Hyper-V R2 (that might cause a BSOD)?”

I switched the in-box (Microsoft) drivers for some (older) Intel ones.  That didn’t fix things, so I switched back to the latest drivers.  Eventually I found that the issue was caused by the “Allow management operating system to share this network adapter” checkbox and that, if the NIC was live when I selected this, I could reproduce the error:

“Found the source of yesterday’s WS08R2 Hyper-V crash… any idea why enabling this option http://twitpic.com/11b64y would trip a BSOD?”

Even though I could work around the issue (because I don’t want to share a NIC between the parent partition and the children anyway – I have the Broadcom NIC for remote access) it seemed strange that this behaviour should occur.  There was no NIC teaming involved and the server was still a straightforward UK installation (aside from enabling Hyper-V and setting up virtual networks). 

Based on suggestions from other Virtual Machine MVPs I also:

  • Flashed the NICs to the latest release of the Intel Boot Agent (these cards don’t have a BIOS).
  • Updated the Broadcom NIC to the latest drivers too.
  • Attempted to turn off jumbo frames, but the option was not available in the properties so I could rule that out.

Thankfully, @stufox (from Microsoft in New Zealand) saw my tweets and was kind enough to step in to offer assistance.  It took us a few days, thanks to timezone differences and my work schedule, but we got there in the end.

First up, I sent Stu a minidump from the crash, which he worked on with one of the Windows Server kernel developers. They suggested running Driver Verifier (verifier.exe) against the various physical network adapter drivers (and against vmswitch.sys).  More details of this tool can be found in Microsoft knowledge base article 244617.
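
For anyone who hasn’t used Driver Verifier before, the process went something like this (a sketch – check the knowledge base article for the full syntax):

# Enable standard verification for the suspect drivers, then reboot
verifier /standard /driver efe5b32e.sys vmswitch.sys

# Inspect the counters for the verified drivers while the system runs
verifier /query

# Turn verification off again once the data has been gathered
verifier /reset

With verification enabled, the response to the verifier /query command was as follows: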

09/02/2010, 23:19:33
Level: 000009BB
RaiseIrqls: 0
AcquireSpinLocks: 44317
SynchronizeExecutions: 2
AllocationsAttempted: 152850
AllocationsSucceeded: 152850
AllocationsSucceededSpecialPool: 152850
AllocationsWithNoTag: 0
AllocationsFailed: 0
AllocationsFailedDeliberately: 0
Trims: 41047
UnTrackedPool: 141544
 
Verified drivers:
 
Name: efe5b32e.sys, loads: 1, unloads: 0
CurrentPagedPoolAllocations: 0
CurrentNonPagedPoolAllocations: 0
PeakPagedPoolAllocations: 0
PeakNonPagedPoolAllocations: 0
PagedPoolUsageInBytes: 0
NonPagedPoolUsageInBytes: 0
PeakPagedPoolUsageInBytes: 0
PeakNonPagedPoolUsageInBytes: 0
 
Name: ndis.sys, loads: 1, unloads: 0
CurrentPagedPoolAllocations: 6
CurrentNonPagedPoolAllocations: 1926
PeakPagedPoolAllocations: 8
PeakNonPagedPoolAllocations: 1928
PagedPoolUsageInBytes: 984
NonPagedPoolUsageInBytes: 1381456
PeakPagedPoolUsageInBytes: 1296
PeakNonPagedPoolUsageInBytes: 1381968
 
Name: b57nd60a.sys, loads: 1, unloads: 0
CurrentPagedPoolAllocations: 0
CurrentNonPagedPoolAllocations: 3
PeakPagedPoolAllocations: 0
PeakNonPagedPoolAllocations: 3
PagedPoolUsageInBytes: 0
NonPagedPoolUsageInBytes: 188448
PeakPagedPoolUsageInBytes: 0
PeakNonPagedPoolUsageInBytes: 188448
 
Name: vmswitch.sys, loads: 1, unloads: 0
CurrentPagedPoolAllocations: 1
CurrentNonPagedPoolAllocations: 18
PeakPagedPoolAllocations: 2
PeakNonPagedPoolAllocations: 24
PagedPoolUsageInBytes: 108
NonPagedPoolUsageInBytes: 50352
PeakPagedPoolUsageInBytes: 632
PeakNonPagedPoolUsageInBytes: 54464

To be honest, I haven’t a clue what half of that means but the guys at Microsoft did – and they also asked me for a kernel dump (Dirk A D Smith has written an article at Network World that gives a good description of the various types of memory dump: minidump; kernel; and full). Transmitting this file caused some issues (it was 256MB in size – too big for e-mail) but it compressed well, and 7-Zip allowed me to split it into chunks to get under the 50MB file size limit on Windows Live SkyDrive.  Using this, Stu and his kernel developer colleagues were able to see that there is a bug in the Intel driver I’m using, but it turns out there is another workaround too – turning off Large Send Offload in the network adapter properties.  Since I did this, the server has run without a hiccup (as I would have expected).
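
Turning off Large Send Offload is just a checkbox on the adapter’s advanced properties tab, but it can also be scripted against the registry. This is a sketch only – *LsoV1IPv4 and friends are standardised NDIS keywords but the exact set exposed depends on the driver, and the “PRO/100” match is specific to my cards:

# Sketch: clear the Large Send Offload advanced properties on Intel PRO/100 adapters.
# The adapter (or the server) must be restarted for the change to take effect.
$class = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}'
Get-ChildItem $class -ErrorAction SilentlyContinue | ForEach-Object {
    $props = Get-ItemProperty $_.PSPath
    if ($props.DriverDesc -like '*PRO/100*') {
        foreach ($keyword in '*LsoV1IPv4', '*LsoV2IPv4', '*LsoV2IPv6') {
            if ($props.PSObject.Properties[$keyword]) {
                Set-ItemProperty $_.PSPath -Name $keyword -Value '0'
            }
        }
    }
}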

“Thanks to @stufox for helping me fix the BSOD on my Hyper-V R2 server. Turned out to be an Intel device driver issue – I will blog details”

It’s good to know that Hyper-V was not at fault here: sure, it shows that a rogue device driver can bring down a Windows system, but that’s hardly breaking news – the good thing about the Hyper-V architecture is that I can easily update network device drivers.  And, let’s face it, I was running enterprise-class software on a workgroup server with some old, unsupported hardware – you could say that I was asking for trouble…

Adventures with Intel Virtualization Technology (VT)

A couple of weeks ago, David Saxon and I ran a Windows 7 Skills Update workshop for some of our colleagues, based on a course obtained from the Microsoft Partner Training and Readiness Resource Center.  My plan was to use David’s excellent training skills to deliver the course (which I recorded) before he left the organisation to take up a new challenge.  Ironically, working for an IT company means that it’s not always easy to get hold of kit for labs, and David called in a number of favours to obtain 8 brand new PCs and monitors for us to run the labs on.  Each machine was supplied with a quad-core CPU and 8GB of RAM but, when we tried to enable the Hyper-V role in Windows Server 2008 R2, it failed because these computers didn’t support Intel’s Virtualization Technology (VT).

“No VT?”, I said, “but these are Intel Core 2 Quad processors… ah…” – I remembered seeing something about how some Core 2 Quads don’t provide Intel VT support, even though the Core 2 Duos do.  These were the Q8300 2.5GHz chips and, according to an Intel document, the specification was changed in June to correct this and enable VT.

I should have known better – after all, I’m an MVP in Virtual Machine technology – but I’ll put my hands up: I didn’t check the specifications of the machines that David had ordered (and, anyway, I would have expected modern CPUs to include VT).  Mea culpa.

As the PCs had been manufactured in August, I thought there was a chance that they used the new CPUs but did not have BIOS support for VT.  If that had been the case, it might have been possible to enable it (more on that in a moment) but running both CPU-Z and SecurAble confirmed that these processors definitely didn’t support VT.
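
(As an aside, Sysinternals Coreinfo can perform a similar check from the command line – at least in recent versions, where the -v switch reports virtualisation-related features, with an asterisk indicating that a feature is present:)

# Sketch: check for Intel VT (VMX) support with Sysinternals Coreinfo
# (download it from the Sysinternals site first)
.\Coreinfo.exe -v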

In this case, it really was the CPU not providing the necessary features, but there are also documented cases of PCs with VT-capable processors not allowing it to be enabled in the BIOS.  Typically the OEM (most notably Sony) claims that they are consumer models and that VT is an enterprise feature but, with Windows 7’s XP Mode relying on Virtual PC 7, which has a dependency on Intel VT or AMD-V, that argument no longer holds water (XP Mode is definitely a consumer feature – it’s certainly not suitable for enterprise deployment, regardless of Microsoft’s Windows 7 marketing message around application compatibility).

However, with a little bit of perseverance, it may be possible to force VT support on PCs where the functionality is there but not exposed in the BIOS.  Another friend and colleague, Garry Martin, alerted me to a forum post he found where a utility was posted to enable VT on certain Fujitsu machines that have been restricted in this way.  I should say that if you try this, then you do so at your own risk and I will not accept any responsibility for the consequences.  Indeed, I decided not to try it on my problem machines because they were a different model and also, I didn’t fancy explaining to our Equipment Management team how the brand new PCs that we’d borrowed for a couple of days had been “bricked”.  In fact, I’d think it highly unlikely that this tool works on anything other than the model described in the forum post (and almost certainly not with a different OEM’s equipment or with a different BIOS).

Incidentally, Ed Bott has researched which Intel desktop and mobile CPUs support VT and which do not.  As far as I know, all recent server CPUs (i.e. Xeon processors) support VT.

Physical disks can only be added to Hyper-V VMs when the disk is offline

I don’t often work with passthrough disks in Hyper-V but, after configuring my Netgear ReadyNAS as an iSCSI target earlier this evening, I wanted to use it as storage for a new virtual machine. Try as I might, I could not get Hyper-V Manager to accept a physical disk as a target, despite having tried both SCSI and IDE disk controllers. Then I read the information text next to the Physical hard disk dropdown in the VM settings:

“If the physical hard disk you want to use is not listed, make sure that the disk is offline. Use Disk Management on the physical computer to manage physical hard disks.”

Doh! A classic case of RTFM (my excuse is that it’s getting late here). After taking the disk offline, I could select it and attach it to the virtual machine.
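
For the record, the same thing can be scripted with diskpart (disk 1 is an assumption here – check the output of list disk first):

# Sketch: take disk 1 offline so that Hyper-V Manager will offer it as a
# passthrough disk. Verify the disk number with 'list disk' before running this.
Set-Content -Path "$env:TEMP\offline.txt" -Value 'select disk 1', 'offline disk'
diskpart /s "$env:TEMP\offline.txt"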

Creating a Hyper-V workstation

A couple of years back, I was running Windows Server 2008 on my everyday notebook PC so that I could work with Hyper-V. That wasn’t really ideal and, these days, I’m back on a client OS – Windows 7 as it happens…

Even so, I’ve been discussing the concept of a developer workstation with my friend and colleague, Garry Martin, for some time now. I say “developer”, because Garry works in our application services business but the setup I came up with is equally valid for sysadmins who need a client side virtualisation solution (and no, type 2 hypervisors do not cut it – I want to run on bare metal!).

I finally got Hyper-V Server running from a USB flash drive a few days before Microsoft announced that it is a supported scenario (although I still haven’t seen the OEM document that describes the supported process). That provided the base for my solution… something that anyone with suitable hardware can use to boot from USB into a development environment, without any impact on their corporate build. Since then, I’ve confirmed that the RTM version of Hyper-V Server 2008 R2 works too (my testing was on the RC).

Next, I integrated the network card drivers for my system before starting to customise the Hyper-V Server installation. This is just the same as working with a server core installation of Windows Server, but these days sconfig.vbs makes life easier (e.g. when setting up the computer name, network, remote management, Windows updates, etc.), although it was still necessary to manually invoke the control intl.cpl and control timedate.cpl commands to convince Hyper-V that I’m in the UK, not Redmond…
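
Driver integration is a straightforward job for DISM; as a sketch (all of the paths here are assumptions for illustration – point it at your own image and driver folder):

# Sketch: inject NIC drivers into an offline Windows image before first boot
dism /Mount-Wim /WimFile:D:\sources\install.wim /Index:1 /MountDir:C:\mount
dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\nic /Recurse
dism /Unmount-Wim /MountDir:C:\mount /Commit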

Other changes that I made included:

The real beauty of this installation is that, now I’ve got everything working, it’s encapsulated in a single virtual hard disk (.VHD) image that can be given to any of our developers or technical specialists. I can take my bootable USB thumb drive to any machine and boot up my environment but, if I used an external hard disk instead, I could even take my virtual machine images with me – and Garry has done some research into which drives/flash memory to use, which should appear as a guest post later this week. Creating and managing VMs can be done via PowerShell (remember, this setup is mobile and is unlikely to be accessible from a management workstation) and access to running VMs is possible from PowerShell or Remote Desktop. I could even install Firefox if I wanted (actually, I’ve not tried that on Hyper-V Server, but it works on a server core installation of Windows Server 2008).
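
There was no native Hyper-V PowerShell module in this timeframe (the community PowerShell management library for Hyper-V on CodePlex is one option), so raw WMI does the job. As a sketch, starting a VM looks something like this (‘TestVM’ is a placeholder):

# Sketch: start a VM through the Hyper-V WMI provider
$vm = Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem `
        -Filter "ElementName='TestVM'"
$vm.RequestStateChange(2) | Out-Null   # 2 = start; 3 = turn off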

Of course, what I’d really like is for Microsoft to produce a proper client-side hypervisor, like Citrix XenClient but, until that day, this setup seems to work pretty well.