I recently bought a new server in order to consolidate various machines onto one host. The intention here is to license Microsoft Hyper-V Server when it is released but, as that’s not available to me right now, I thought I’d use the latest Windows Server 2008 (Server Core) build with the Hyper-V role enabled. Everything was looking good until I built the server, installed Hyper-V (using the
ocsetup Microsoft-Hyper-V command) and realised that although I had a functioning Hyper-V server, I had no way to manage it.
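For anyone following the same route, the installation steps on Server Core look something like this (a sketch, not a definitive sequence: oclist confirms whether the role is present, the start /w wrapper simply makes the command prompt wait for ocsetup to finish, and note that the package name is case-sensitive):

```shell
rem Check whether the Hyper-V role is already installed
oclist | findstr Hyper-V

rem Install the Hyper-V role (package name is case-sensitive)
start /w ocsetup Microsoft-Hyper-V

rem A reboot is required to complete the installation
shutdown /r /t 0
```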
According to the release notes for the Hyper-V beta:
"To manage Hyper-V on a server core installation, you can do the following:
- Use Hyper-V Manager to connect to the server core installation remotely from a full installation of Windows Server 2008 on which the Hyper-V role is installed.
- Use the WMI interface."
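The WMI route is at least workable for quick scripted checks from another Windows machine. As a sketch (CORESRV is a hypothetical server name, and I'm assuming the beta's Hyper-V WMI provider in the root\virtualization namespace), something like this lists the virtual machines on the remote host:

```shell
rem Query the Hyper-V WMI provider on a remote Server Core host
rem (CORESRV is a placeholder; substitute your own server name)
wmic /node:CORESRV /namespace:\\root\virtualization ^
     path Msvm_ComputerSystem get ElementName,EnabledState
```

Note that Msvm_ComputerSystem returns the physical host as well as the child partitions, so expect the parent to appear in the output too. It's a long way short of a management console though.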
I wanted to run Hyper-V on Server Core because my experience of running Virtual Server on Windows Server 2003 has been that patching the host is a major issue, involving downtime on each guest virtual machine. Similarly, unless I migrate the workload to another server, applying updates to the parent partition on Hyper-V will also result in downtime for each child partition. By using Server Core, I reduce the attack surface and therefore the likelihood of a critical patch being applicable to my server. If I need another Windows Server 2008 machine with Hyper-V installed just to manage the box, then that's not helping me much – even a version of Hyper-V Manager that runs on a Windows client machine and administers the server remotely would be a huge step forward!
I’ve raised a feedback request highlighting this as a potential issue that restricts the scenarios in which Hyper-V will be deployed; however, I’m expecting it to be closed as "by design", so I’m not holding out much hope of this being fixed before product release.