Useful links: September 2008

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A list of items I’ve come across this month that I found potentially useful, interesting, or just plain funny:

I’ve been running these “useful links” posts for a few months now but it really would make sense for me to use a service like Delicious instead. I’ve been trying to work out a way to get Delicious to publish links on a weekly or monthly basis but that might not work out. Just keep watching this space.

Bluetooth communications between an Apple iPhone 3G and a Ford Audio system


My company car is due for replacement and the lease company has arranged a demonstration car of my choice for a week – so last Wednesday a shiny Ford Mondeo 2.2TDCi Titanium X Sport Estate was delivered to my house. (For readers outside Europe who don’t know what a Mondeo is, here’s an American review of the range-topping Titanium X Sport model – it might also be useful to note that “car” and “estate” are English words for what’s known as “automobile” and “wagon” in some parts of the world.)

Whilst I’m not a great fan of the fake aluminium that car manufacturers seem to plaster all over the interior of cars these days, this car represents a reasonable balance between performance, economy and the need to transport a family complete with all their associated paraphernalia (or garden rubbish for the tip…) – and it’s pretty well-specced too. One of the features that I was particularly interested in was the Bluetooth and Voice Control system.

(The Ford website features a longer version of this video but I couldn’t easily embed it here – and, for the benefit of those with no sense of irony, this is not serious – it is a parody of a fantastic programme that the BBC used to run called Tomorrow’s World.)

My current car has a fully-fitted carphone kit for use with my work phone (a Nokia 6021) but if anyone calls me on my iPhone 3G I have to use another solution. Not so with the Mondeo. In fact, I couldn’t get the Nokia to recognise the Ford Audio system (even though it’s one of the handsets that Ford has tested) but the iPhone was quite happy to pair following the instructions in the owner’s handbook:

  1. The Bluetooth feature must be activated on the phone and on the audio unit. For additional information, refer to your phone user guide.
  2. The private mode must not be activated on the Bluetooth phone.
  3. Search for audio device.
  4. Select Ford Audio.
  5. The Bluetooth PIN number 0000 must be entered on the phone keypad.

[Ford Mondeo Owners Handbook (2008)]
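For anyone curious how that handbook flow hangs together, here's a toy Python sketch of the legacy PIN-based pairing handshake – all class and method names here are my own invention for illustration, not any real Bluetooth API:

```python
# Hypothetical sketch of the legacy Bluetooth PIN-pairing flow the handbook
# describes; classes and methods are illustrative, not a real Bluetooth API.

class AudioUnit:
    """Stands in for the Ford Audio head unit advertising itself over Bluetooth."""
    name = "Ford Audio"
    pin = "0000"          # fixed PIN from the owner's handbook

    def pair(self, offered_pin: str) -> bool:
        return offered_pin == self.pin

class Phone:
    def __init__(self, bluetooth_on=True, private_mode=False):
        self.bluetooth_on = bluetooth_on
        self.private_mode = private_mode   # must be off for the unit to see us

    def discover_and_pair(self, unit: AudioUnit, pin: str) -> bool:
        if not self.bluetooth_on or self.private_mode:
            return False                   # steps 1-2: radio on, not hidden
        return unit.pair(pin)              # steps 3-5: search, select, enter PIN

iphone = Phone()
print(iphone.discover_and_pair(AudioUnit(), "0000"))   # True
```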

Sony/Ford Audio System paired with iPhone

Once paired, I could use the car’s controls to make calls, and incoming calls on the iPhone were picked up by the car too.

Ford are not the only manufacturer to have such a system, but it is the first time I’ve had it fitted as standard on a car (on my Saab 9-3 I would have needed to specify an expensive stereo with built in satellite navigation to get the Bluetooth functionality) – and Ford do claim to be the only manufacturer to offer the system on small cars too:

Ford is the only manufacturer to offer a Bluetooth with Voice Control System on our smaller cars as well as our larger ones. It’s available on the Fiesta, Fusion, new Focus, new Focus CC, C-MAX, Mondeo, S-MAX, Galaxy, Fiesta Van, Transit Van, Transit Minibus, Transit Connect and Transit Chassis Cab.

(There are some light commercials on that list too.)

The downsides are that my phone has to have Bluetooth activated (and to be discoverable – leaving me subject to potential bluejacking). There’s also a bit of an echo (on both ends of the call) – something I haven’t experienced with the fitted car kit I use with the Nokia in my normal car – but it’s not bad enough to be a problem and, most importantly, the road noise at 70mph didn’t seem to cause too big a problem whilst making a call.

Sony/Ford Audio System picking up contacts from somewhere - not sure where though!

So, what doesn’t work with the iPhone? Despite the audio system somehow managing to detect a couple of my contacts (which I can then select by pressing a button to dial), the Bluetooth Voice Control doesn’t seem to be able to read the phone directory – but it does work if I dial by number, as shown in the pictures below:

Ford Converse+ System and Bluetooth Voice Control

Call on iPhone placed using Ford Bluetooth Voice Control

Also, it would be nice to make the car’s audio system play the music on my iPhone over Bluetooth – except that Apple hasn’t given us A2DP (stereo Bluetooth Audio), so to connect the iPhone to the stereo requires use of a standard 3.5mm headset cable to the Aux socket on the car’s audio system (unavailable on the car I tested because that has a DVD system installed in the front seat headrests instead).

As for whether I will lease this car… well, the jury’s still out on that one. It drives well and I get a lot of toys for my money but the VW Passat Estate, Audi A4 Avant (or possibly A6) and BMW 3 series touring are all on my shortlist. Does anyone know if the iPhone works with the built-in systems in these cars?

How tone mapping can transform an HDR image


A few weeks back, I wrote about my efforts to create a photographic image with high dynamic range (HDR). Since then, I’ve learned that Adobe Photoshop’s approach to HDR is really little more than exposure blending. I had been reasonably pleased with the results (at least on screen) but then I gave Photomatix Pro a try.

Pointe de Trévignon HDR from Photomatix Pro

The initial HDR image that Photomatix Pro produced was disappointing, with deep shadows and washed out skies, but then I read in the help text that this was effectively in an unprocessed state, that my monitor cannot display the full range of information and that, in order to reveal highlight and shadow detail, I need to apply tone-mapping. Photomatix Pro did that for me and – wow! What a difference!

Pointe de Trévignon HDR from Photomatix Pro after tone-mapping

I thought this looked a little too surreal on screen so I reduced the luminosity (it’s actually much better when printed) but you can see how the detail is preserved throughout the entire exposure.

Pointe de Trévignon HDR from Photomatix Pro after tone-mapping

If I find myself creating other HDRs, I’ll probably purchase a copy of Photomatix Pro (and probably the Photoshop plugin version too) – until then I can continue to experiment with a fully-functional trial (but the resulting images will be watermarked – these screen shots are low-resolution previews). In the meantime, I’m going to try and get my head around the technical details of dynamic range, tone mapping and HDR imaging.
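For the technically curious, the simplest global tone-mapping operators are surprisingly small. This Python sketch uses Reinhard’s classic operator to compress an unbounded HDR luminance range into a displayable range – Photomatix Pro’s actual algorithms are proprietary and far more sophisticated (they work locally, not globally), so this is purely illustrative:

```python
# A minimal global tone-mapping sketch (Reinhard's operator), showing how an
# unbounded HDR luminance range is compressed into displayable [0, 1).
# Photomatix Pro's real operators are more sophisticated (and local).

def reinhard(luminance):
    """Map HDR luminance values (0..inf) into [0, 1)."""
    return [l / (1.0 + l) for l in luminance]

hdr = [0.01, 0.5, 1.0, 10.0, 1000.0]    # deep shadow .. blown-out sky
ldr = reinhard(hdr)
print([round(v, 3) for v in ldr])        # [0.01, 0.333, 0.5, 0.909, 0.999]
```

Note how a 100,000:1 input range ends up within the displayable range while the ordering of tones (and therefore detail) is preserved.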

A quick Google tip


Many Google services will try to detect your location and customise the results using the local language. This caught me out a few days ago when I was using a guest Wi-Fi connection at Microsoft and got the Google homepage in Dutch (presumably Microsoft UK’s Internet access is routed via the Netherlands) and tonight I fired up Google Reader in my hotel room and found it was in German (I’m using an iBAHN Internet connection).

Not to worry – just manually change the top-level domain in the browser address bar to a specific country’s version and the interface should switch to the chosen locale.
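If you wanted to script the same fix, a quick (purely illustrative) Python helper could rewrite the country-code domain in a Google URL – the `force_locale` function name is my own:

```python
# Illustrative helper: rewrite the country-code TLD in a Google URL to pin
# the interface to a chosen locale. force_locale is a made-up name.
from urllib.parse import urlsplit, urlunsplit

def force_locale(url: str, tld: str) -> str:
    """Replace whatever follows 'google.' in the hostname with another TLD."""
    parts = urlsplit(url)
    prefix, _, _ = parts.netloc.partition("google.")
    new_host = prefix + "google." + tld
    return urlunsplit((parts.scheme, new_host, parts.path,
                       parts.query, parts.fragment))

print(force_locale("http://www.google.nl/reader", "co.uk"))
# http://www.google.co.uk/reader
```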

VMware Virtualization Forum 2008


A few days back, I wrote a post about why I don’t write much about VMware. This VMware-blindness is not entirely through choice – and I do believe that I need to keep up to speed on their view of the virtualisation market in order to remain objective when I talk about competitive offerings from Microsoft.

With that in mind, I spent today at the VMware Virtualization Forum 2008 – a free of charge event for VMware customers. I went to a similar event two years ago (when I was working on a VI3 deployment) and found it very worthwhile so I was pleased to be back again this year (I don’t know what happened in 2007!). It’s not a technical event – this is pure sales and marketing, with all the usual sponsors – but it did give me an overview of many of the announcements from last week’s VMworld conference in Las Vegas.

The event started late and, after keeping everyone waiting for 20 minutes, Chris Hammond, VMware’s Regional Director for the UK and Ireland, had the audacity to ask delegates to make sure they arrive in the session rooms on time! There was the expected plug for the European version of the VMworld conference and then the keynote began, presented by Lewis Gee, VMware’s Vice President of field operations for EMEA. After a disclaimer that the future products discussed were shared for the purposes of illustrating VMware’s vision/roadmap and may be subject to change, he began to look at how VMware has reached its current market position and some of the developments for the future.

Key points from the keynote were:

  • The growth of the virtualisation market:
    • x86 servers have proliferated. Typically a single application is running on each server, working at 5-10% utilisation.
    • Virtual machines allowed the partitioning of servers so that many virtual server instances could exist on a single physical server.
    • Virtualisation was initially used in test and development environments but is now increasingly seen in production. The net result is a consolidation of physical servers but there is still significant growth.
    • As virtualisation technology started to take hold customers began to look at the datacentre as a whole – VMware’s answer was Virtual Infrastructure.
    • Even now, there is significant investment in server hardware – in 2007, 8 million servers shipped (7.6m of which were x86/x64 servers).
    • The base component for VMware’s datacentre offerings is VMware’s bare-metal hypervisor – in paid (ESX) and free (ESXi) form. Whilst VMware now faces stiff competition from Microsoft and others, they were keen to highlight that their platform is proven and cited 120,000 customers over 7 years, with 85% of customers using it in production, and years of uptime at some customer sites.
  • ESXi uses just 32MB on disk – less code means fewer bugs, fewer patches, and VMware argues that their monolithic approach has no dependence on the operating system or arbitrary drivers (in contrast to the approach taken by Microsoft and others).
  • In the face of new competitive pressures (although I doubt they would admit it), VMware are now focusing on improved management capabilities using a model that they call vServices:
    • At the operating system level, VMware Converter provides P2V capabilities to allow physical operating system instances to be migrated to the virtual infrastructure.
    • Application vServices provide availability, security, and scalability using established technologies such as VMotion, Distributed Resource Scheduler (DRS), VMware High Availability (HA) and Storage VMotion.
    • Infrastructure vServices include vCompute, vStorage and vNetwork – for example, at VMworld, Cisco and VMware announced the Nexus 1000v switch – allowing network teams to look at the infrastructure as a whole regardless of whether it is physical or virtual.
    • Security vServices are another example of where VMware is partnering with other companies and the VMSafe functionality allows security vendors to integrate their solutions as another layer inside VI.
    • All of this is supplemented with Management vServices (not just from VMware but also integration with third party products).
    • Finally, of course the whole virtual infrastructure runs on a physical datacentre infrastructure (i.e. hardware).
  • All of this signals a move from a virtual infrastructure to what VMware is calling the Virtual Datacenter Operating System. It’s pure marketing speak but the concept is one of transforming the datacentre into an internal “cloud”. The idea is that the internal cloud is an elastic, self-managing and self-healing utility which should support federation with external clouds of computing and free IT from the constraints of static hardware-mapped applications (all of which sounds remarkably like Gartner’s presentation at the Microsoft Virtualization launch).
  • Looking to the future, VMware plan to release:
    • vCloud – enabling local cloud to off-premise cloud migration (elastic capacity – using local resources where available and supplementing these with additional resources from external sources as demand requires). There are not yet many cloud providers but expect to see this as an emerging service offering.
    • vCenter AppSpeed – providing quality of service (QoS) for a given service (made up of a number of applications, virtual machines, users, servers and datacentres); based around a model of discover, monitor and remediate.
    • VMware Fault Tolerance – application protection against hardware failures with no downtime (application and operating system independent) using mirrored virtual machine images for full business continuity.
  • Up to this point, the presentation had focused on datacentre developments but VMware are not taking their eyes off desktop virtualisation. VMware see desktops following users rather than devices (as do Microsoft) and VMware Virtual Desktop Infrastructure (VDI) is being repositioned as VMware View – providing users with access to the same environment wherever the desktop is hosted (thin client, PC, or mobile).
  • In summary, after 10 years in desktop virtualisation, 7 years in server virtualisation and 5 years in building a management portfolio (mostly through acquisition), VMware is now looking to the cloud:
    • Virtual Datacenter OS is about the internal cloud – providing efficient and flexible use of applications and resources.
    • vCloud is the initiative to allow the virtual datacentre to scale outside the firewall, including federation with the cloud.
    • VMware View is about solving the desktop dilemma – making the delivery of IT people and information centric.

VMware Virtual Datacenter OS
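As an aside, the consolidation arithmetic behind that 5-10% utilisation figure from the keynote is easy to sketch – the 70% target utilisation here is my own illustrative assumption, not a VMware number:

```python
# Back-of-envelope consolidation maths: how many single-application servers,
# each running at a given utilisation, fit on one host at a safe target
# utilisation? Percentages are integers to keep the arithmetic exact.
def consolidation_ratio(per_server_util_pct: int, target_util_pct: int) -> int:
    return target_util_pct // per_server_util_pct

print(consolidation_ratio(5, 70))    # 14 servers per host at 5% utilisation
print(consolidation_ratio(10, 70))   # 7 servers per host at 10% utilisation
```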

Next up was a customer presentation – with Lawrence Clark from Carphone Warehouse recounting his experiences of implementing virtualisation before Richard Garsthagen (Senior Evangelist at VMware) and Lee Dilworth (Specialist System Engineer at VMware) gave a short demonstration of some of the technical features (although there was very little there that was new for anyone who has seen VMware Virtual Infrastructure before – just VMotion, HA, DRS and the new VMware Fault Tolerance functionality).

Following this were a number of breakout sessions – and the first one I attended was Rory Clements’ presentation on transforming the datacentre through virtualisation. Rory gave an overview of VMware’s datacentre virtualisation products, based around a model of:

  1. Separate, using a hypervisor such as ESXi to run multiple virtual machines on a single physical server, with the benefits of encapsulation, hardware independence, partitioning and security through isolation.
  2. Consolidate, adding management products to the hypervisor layer, resulting in savings on capital expenditure as more and more servers are virtualised, running on a shared storage platform and using dynamic resource scheduling.
  3. Aggregate (capacity on demand), creating a virtual infrastructure with resource pooling, managed dynamically to guarantee application performance (a bold statement if ever I heard one!). Features such as VMotion can be used to remove planned downtime through live failover and HA provides a clustering capability for any application (although I do consider HA to be a misnomer in this case as it does require a restart).
  4. Automate (self-managing datacenter), enabling business agility with products/features such as: Stage Manager to automate the application provisioning cycle; Site Recovery Manager to create a workflow for disaster recovery and automatically fail entire datacentres over between sites; dynamic power management to move workloads and shut down virtualisation hosts that are not required to service current demands; and Update Manager, using DRS to dynamically reallocate workloads, then put a host into maintenance mode, patch, restart and bring the server online, before repeating with the rest of the nodes in the cluster.
  5. Liberate (computing clouds on and off premise) – create a virtual datacentre operating system with the vServices covered in the keynote session.
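The Update Manager workflow in step 4 – use DRS to evacuate a host, patch it, bring it back, then repeat around the cluster – can be sketched as a simple simulation (all names here are illustrative, not a VMware API):

```python
# Toy simulation of a rolling cluster patch: evacuate each host in turn
# (DRS live-migrates its VMs elsewhere), patch it, then move on. Names are
# illustrative only.
def rolling_patch(cluster: dict) -> dict:
    """cluster maps host name -> list of VM names; returns the patched cluster."""
    patched = set()
    for host in list(cluster):
        evacuees = cluster[host]
        targets = [h for h in cluster if h != host]
        for i, vm in enumerate(evacuees):           # DRS: migrate VMs away
            cluster[targets[i % len(targets)]].append(vm)
        cluster[host] = []                           # maintenance mode: host empty
        patched.add(host)                            # patch + restart happen here
    assert all(h in patched for h in cluster)        # every node got patched
    return cluster

c = rolling_patch({"esx1": ["vm1", "vm2"], "esx2": ["vm3"]})
print(sorted(vm for vms in c.values() for vm in vms))   # ['vm1', 'vm2', 'vm3']
```

No virtual machine is ever offline while its host is being patched – the trade-off is needing enough spare capacity on the remaining nodes to absorb the evacuated workloads.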

In a thinly veiled swipe at competitive products that was not entirely based on fact, Rory indicated that Microsoft were only at the first stage – entirely missing the point that they too have a strong virtualisation management solution and can cluster virtualisation hosts (even though the failover is not seamless). I don’t expect VMware to promote a competitive solution but the lack of honesty in the pitch did have a touch of the “used-car salesman” approach to it…

In the next session, Stéphane Broquère, a senior product marketing manager at VMware and formerly CEO at Dunes Technologies (acquired by VMware for their virtualisation lifecycle management software) talked about virtual datacentre automation with VMware:

  • Stéphane spoke of the virtual datacentre operating system as an elastic, self-managing, self-healing software substrate between the hardware pool and applications with software provisioned and allocated to hardware upon demand.
  • Looking specifically at the vManagement technologies, he described vApps using the DMTF Open Virtualization Format (OVF) to provide metadata which describes the application, the service and what it requires – e.g. name, ports, response times, encryption, recovery point objective and VM lifetime.
  • vCenter is the renamed VirtualCenter with a mixture of existing and new functionality. Some products were skipped over (i.e. ConfigControl, CapacityIQ, Chargeback, Orchestrator, AppSpeed) but Stéphane spent some time looking at three products in particular:
    • Lifecycle Manager automates virtual machine provisioning, and provides intelligent deployment, tracking and decommissioning to ensure that a stable datacentre environment is maintained through proper approvals, standard configuration procedures and better accountability – all of which should lead to increased uptime.
    • Lab Manager provides a self-provisioning portal with an image library of virtual machine configurations, controlled with policies and quotas and running on a shared pool of resources (under centralised control).
    • Stage Manager targets release management by placing virtual machine images into a configuration (a service) and then promoting or demoting configurations between environments (e.g. integration, testing, staging, UAT and production) based on the rights assigned to a user. Images can also be archived or cloned to create a copy for further testing.
  • Over time, the various vCenter products (many of which are the result of acquisitions) can be expected to come together and there will be some consolidation (e.g. of the various workflow engines). In addition, VMware will continue to provide APIs and SDKs and collaborate with partners to extend.
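To make the vApp idea above more concrete, here’s a hypothetical sketch of the sort of service-level metadata described – the keys are descriptive stand-ins of my own, not actual OVF element names:

```python
# Illustrative sketch of the service-level metadata a vApp descriptor might
# carry; keys are descriptive stand-ins, not real OVF element names.
vapp_metadata = {
    "name": "order-entry",
    "ports": [443, 8443],
    "response_time_ms": 250,            # target QoS
    "encryption_required": True,
    "recovery_point_objective_min": 15,
    "vm_lifetime_days": 90,             # decommission after this
}

def violates_rpo(last_backup_age_min: int, meta: dict) -> bool:
    """True when the newest backup is older than the declared RPO."""
    return last_backup_age_min > meta["recovery_point_objective_min"]

print(violates_rpo(30, vapp_metadata))   # True: backups are too stale
```

The point of carrying this metadata with the workload is that management tooling can act on it automatically – for instance, flagging an RPO breach as above.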

After lunch, Lee Dilworth spoke about business continuity and disaster recovery, looking again at VMware HA, VMotion, DRS and Update Manager as well as other features that are not always considered like snapshots and network port trunking.

He also spoke of:

  • Experimental VM failure monitoring capabilities that monitor guest operating systems for failure and the ability to interpret hardware health information from a server management card.
  • Storage VMotion – redistributing workloads to optimise the storage configuration, providing online migration of virtual machine disks to a new data store with zero downtime, using a redo log file to capture in-flight transactions whilst the file copy is taking place (e.g. when migrating between storage arrays).
  • VMware Fault Tolerance – providing continuous availability, although still not an ideal technology for stretch clusters. It should also be noted that VMware Fault Tolerance is limited to a single vCPU and the shadow virtual machine is still live (consuming memory and CPU resources) so is probably not something that should be applied to all virtual machines.
  • vCenter Data Recovery (formerly VMware Consolidated Backup) – providing agentless disk-based backup and recovery, with virtual machine or file level restoration, incremental backups and data de-duplication to save space.
  • Site Recovery Manager (SRM) – allowing seamless failover between datacentres to restart hundreds of virtual machines on another site in the event of an unplanned or planned outage. Importantly, SRM requires a separate Virtual Center management framework on each site and replicated fibre channel or iSCSI LUNs (NFS will follow next year). It is not a replacement for existing storage replication products (it is an orchestration tool to integrate with existing replication products) nor is it a geo-clustering solution.
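The Storage VMotion mechanism in the list above – copy the disk while a redo log captures in-flight writes, then replay the log before switching over – can be modelled in a few lines of illustrative Python:

```python
# Toy model of the Storage VMotion mechanism: bulk-copy disk blocks while a
# redo log captures writes that land mid-copy, then replay the log at the end.
def storage_vmotion(source: dict) -> dict:
    redo_log = []
    dest = {}
    for i, blk in enumerate(list(source)):   # bulk copy, one block at a time
        dest[blk] = source[blk]
        if i == 1:                           # a guest write arrives mid-copy...
            source["a"] = "new"
            redo_log.append(("a", "new"))    # ...and is captured in the redo log
    for blk, data in redo_log:               # replay the log, then switch over
        dest[blk] = data
    return dest

src = {"a": "old", "b": "x", "c": "y"}
print(storage_vmotion(src) == src)           # True: no writes were lost
```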

In the final session of the day, Reg Hall spoke about using virtual desktop infrastructure technologies to provide a desktop service from the datacentre. Key points were:

  • VMware has three desktop solutions:
    • Virtual Desktop Infrastructure (VDI), consisting of the Virtual Desktop Manager (VDM) connection broker and standard Virtual Infrastructure functionality. Connecting users access a security server and, once authenticated, VDM manages access to a pool of virtual desktops.
    • Assured Computing Environment (ACE), providing a portable desktop that is managed and secured with a central policy.
    • ThinApp (formerly Thinstall) for application virtualisation, allowing an application to be packaged once and deployed everywhere (on a physical PC, blade server, VDI, thin client, ACE VM, etc.) – although I’m told (by Microsoft) that VMware’s suggestion to use the product to run multiple versions of Internet Explorer side by side would be in breach of the EULA (I am not a lawyer).
  • Highlighting VMware’s memory management as an advantage over competitive solutions (and totally missing the point that by not buying VMware products, the money saved will buy a lot of extra memory), VMware cited double memory overcommitment when running virtual desktops; however their own performance tuning best practice guidance says “Avoid high memory overcomittment [sic.]. Make sure the host has more memory than the total amount of memory that will be used by ESX plus the sum of the working set sizes that will be used by all the virtual machines.”.
  • Assuming that hypervisors will become a target for attackers (a fair assumption), VMSafe provides a hardened virtual machine to protect other workloads through inspection of the virtual infrastructure.
  • As VDI becomes VMware View, desktops will follow users with the same operating system, application and data combinations available from a thin client, thick client (using client virtualisation – e.g. a desktop hypervisor) or even on a mobile device with the ability to check a desktop in/out for synchronisation with the datacentre or offline working.
  • VDM will become View Manager and View Composer will manage the process of providing a single master virtual machine with many linked clones, appearing as individual systems but actually a single, scalable virtual image. At this point, patching becomes trivial, with patches applied to the master effectively being applied throughout the VMware View.
  • Other developments include improvements to connection protocols (moving away from RDP); improved 3D graphics virtualisation and a universal client (a device-independent client virtualisation layer).
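The linked-clone concept behind View Composer maps neatly onto layered mappings – this sketch (using Python’s `ChainMap` as a stand-in for the delta-over-master disk layering) shows why patching the master patches every clone:

```python
# Sketch of the linked-clone idea: each clone is a thin delta layered over a
# single master image, so a patch applied to the master reaches every clone
# while clone-local changes are preserved. ChainMap stands in for the layering.
from collections import ChainMap

master = {"os": "XP SP2", "app": "office"}
clone1 = ChainMap({}, master)               # per-clone delta starts empty
clone2 = ChainMap({"app": "cad"}, master)   # a clone-local change

master["os"] = "XP SP3"                     # patch applied to the master image
print(clone1["os"], clone2["os"])           # XP SP3 XP SP3 - both see the patch
print(clone2["app"])                        # cad - local delta survives
```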

Overall, the event provided me with what I needed – an overview of the current VMware products, along with a view of what is coming onstream over the next year or so. It’s a shame that the VMware Virtualization Forum suffered from poor organisation, lousy catering (don’t invite me to arrive somewhere before 9, then not provide breakfast, and start late!) and a lack of a proper closedown (the sessions ended, then there were drinks, but no closing presentation) – but the real point is not the event logistics but where this company is headed.

Behind all the marketing rhetoric, VMware is clearly doing some good things. They do have a lead on the competition at the moment but I’ve not seen any evidence that the lead is as advanced as the statements that we’ve-been-doing-hypervisor-based-virtualisation-for-7-years-and-Microsoft-is-only-just-getting-started would indicate. As one VMware employee told me, at last year’s event there was a lot of “tyre-kicking” as customers started to warm up to the idea of virtualisation whereas this year they want to know how to do specific things. That in itself is a very telling story – just because you have the best technology doesn’t mean that’s what customers are ready to deploy and, by the time customers are hosting datacentres in the cloud and running hypervisor-based client operating systems, VMware won’t be the only company offering the technology that lends itself to that type of service.

Microsoft Virtualization: part 2 (host virtualisation)


Earlier this evening I kicked off a series of posts on the various technologies that are collectively known as Microsoft Virtualization and the first area I’m going to examine is that of server, or host, virtualisation.

Whilst competitors like VMware have been working in the x86 virtualisation space since 1998, Microsoft got into virtualisation through acquisition of Connectix in 2003. Connectix had a product called Virtual PC and, whilst the Mac version was dropped just as MacOS X started to grow in popularity (with its place in the market taken by Parallels Desktop for Mac and VMware Fusion), there have been two incarnations of Virtual PC for Windows under Microsoft ownership – Virtual PC 2004 and Virtual PC 2007.

Virtual PC provides a host virtualisation capability (cf. VMware Workstation) but is aimed at desktop virtualisation (the subject for a future post). It does have a bastard stepchild (my words, albeit based on the inference of a Microsoft employee) called Virtual Server, which uses the same virtual machine and virtual hard disk technology but is implemented to run as a service rather than as an application (comparable with VMware Server) with a web management interface (which I find clunky – as Microsoft’s Matt McSpirit once described it, it’s a bit like Marmite – you either love it or hate it).

Virtual Server ran its course and the latest version is Virtual Server 2005 R2 SP1. The main problem with Virtual Server is the hosted architecture, whereby the virtualisation stack runs on top of a full operating system and involves very inefficient context switches between user and kernel mode in order to access the server hardware – that and the fact that it only supports 32-bit guest operating systems.

With the launch of Windows Server 2008, came a beta of Hyper-V – which, in my view, is the first enterprise-ready virtualisation product that Microsoft has released. The final product shipped on 26 June 2008 (as Microsoft’s James O’Neill pointed out, the last product to ship under Bill Gates’ tenure as a full-time Microsoft employee) and provides a solid and performant hypervisor-based virtualisation platform within the Windows Server 2008 operating system. Unlike the monolithic hypervisor in VMware ESX which includes device drivers for a limited set of supported hardware, Hyper-V uses a microkernelised model, with a high performance VMbus for communication between guest (child) VMs and the host (parent) partition, which uses the same device drivers as Windows Server 2008 to communicate with the hardware. At the time of writing, there are 419 server models certified for Hyper-V in the Windows Server Catalog.

Architecturally, Hyper-V has almost nothing in common with Virtual PC and Virtual Server, although it does use the same virtual hard disk (.VHD) format and virtual machines can be migrated from the legacy platforms to Hyper-V (although, once the VM additions have been removed and replaced with the Hyper-V integration components, they cannot be taken back into a Virtual PC/Virtual Server environment). Available only in 64-bit editions of Windows Server 2008, Hyper-V makes use of hardware assisted virtualisation as well as security features to protect against buffer overflow attacks.

I’ve written extensively about Hyper-V on this blog but the main posts I would highlight for information on Hyper-V are:

Whilst Hyper-V is a remarkably solid product, to some extent the virtualisation market is moving on from host virtualisation (although it is an enabler for various related technologies) and there are those who are wary of it because it’s from Microsoft and it’s a version 1 product. Then there are those who highlight its supposed weaknesses… mostly FUD from VMware (for example, a few days back a colleague told me that he couldn’t implement Hyper-V in an enterprise environment because it doesn’t support failover – a completely incorrect statement).

When configured to use Windows Server 2008’s failover clustering technologies, Hyper-V can save the state of a virtual machine and restart it on another node, using a technology known as quick migration. Live migration (where the contents of memory are copied on the fly, resulting in seamless failover between cluster nodes in a similar manner to VMware VMotion) is a feature that was removed from the first release of Hyper-V. Whilst this has attracted much comment, many organisations who are using virtualisation in a production environment will only fail virtual machines over in a controlled manner – although there will be some exceptions where live migration is required. Nevertheless, at the recent Microsoft Virtualization launch event, Microsoft demonstrated live migration and said it will be in the next release of Hyper-V.
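The practical difference between the two approaches comes down to downtime during the transfer – a toy model (with entirely illustrative numbers) makes the gap obvious:

```python
# Toy comparison of the two failover models described above: quick migration
# pauses the VM for the whole state transfer, whereas live migration pre-copies
# memory in rounds while the VM runs, pausing only for the final dirty delta.
def quick_migration_downtime(mem_mb: float, mb_per_s: float) -> float:
    return mem_mb / mb_per_s                 # VM is paused for the full copy

def live_migration_downtime(mem_mb: float, mb_per_s: float,
                            dirty_fraction: float, rounds: int) -> float:
    remaining = mem_mb
    for _ in range(rounds):                  # iterative pre-copy, VM still running
        remaining *= dirty_fraction          # only re-dirtied pages remain
    return remaining / mb_per_s              # pause only for the final delta

print(round(quick_migration_downtime(4096, 100), 1))          # 41.0 seconds
print(round(live_migration_downtime(4096, 100, 0.1, 3), 2))   # 0.04 seconds
```

For controlled, planned failovers the quick-migration pause is often acceptable; it is unplanned or frequent rebalancing where the live approach earns its keep.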

Memory management is another area that has attracted attention – VMware’s ESX product has the ability to overcommit memory as well as to transparently share pages of memory. Hyper-V does not offer this and Microsoft has openly criticised memory overcommitment because the guest operating system thinks it is managing memory paging whilst the virtual memory manager is actually swapping pages to disk; meanwhile, transparent page sharing breaks fundamental rules of isolation between virtual machines.
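VMware’s own sizing rule – host memory should exceed hypervisor overhead plus the sum of the virtual machines’ working sets, or the hypervisor starts swapping guest pages behind the guests’ backs – can be expressed as a quick (illustrative) check:

```python
# Illustrative check of the sizing rule from VMware's own tuning guidance:
# overcommitment is safe only while host memory exceeds hypervisor overhead
# plus the sum of the VMs' working sets; beyond that, hypervisor-level
# swapping of guest pages becomes likely.
def overcommit_report(host_mb, hypervisor_mb, vm_allocations, working_sets):
    allocated = sum(vm_allocations)
    needed = hypervisor_mb + sum(working_sets)
    return {
        "overcommit_factor": round(allocated / host_mb, 2),
        "hypervisor_swapping_likely": needed > host_mb,
    }

print(overcommit_report(8192, 512, [4096, 4096, 4096], [2048, 2048, 2048]))
# {'overcommit_factor': 1.5, 'hypervisor_swapping_likely': False}
```

In other words, 1.5x overcommitment is fine while working sets stay modest – but the same allocation becomes dangerous the moment the guests actually touch most of their memory.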

Even so, quoting from Steven Bink’s interview with Bob Muglia, Vice President of Microsoft’s Server and Tools division:

“We talked about VMware ESX and its features like shared memory between VMs, ‘we definitely need to put that in our product’. Later he said it will be in the next release – like hot add memory, disk and NICs will be and live migration of course, which didn’t make it in this release.”

[some minor edits made for the purposes of grammar]

Based on the comments that have been made elsewhere about shared memory management, this should probably be read as “we need something like that” and not “we need to do what VMware has done”.

Then there is scalability. At launch, Microsoft cited 4-core, 4-way servers as the sweet spot for virtualisation, with up to 16 cores supported, running up to 128 virtual machines. Now that Intel has launched its new 6-core Xeon 7400 processors (codenamed Dunnington), an update has been released to allow Hyper-V to support 24 cores (and 192 VMs – maintaining the same ratio of eight virtual machines per core), as described in Microsoft knowledge base article 956710. Given the speed with which that update was released, I'd expect to see similar improvements in line with processor technology enhancements.

One thing is for sure: Microsoft will make some significant improvements in the next full release of Hyper-V. At the Microsoft Virtualization launch, as he demonstrated live migration, Bob Muglia spoke of the new features in the next release of Windows Server 2008 and Hyper-V (which I interpreted as meaning that Hyper-V v2 will be included in Windows Server 2008 R2, currently scheduled for release in early 2010). Muglia continued by saying that:

“There’s actually quite a few new features there which we’ll talk about both at the upcoming PDC (Professional Developer’s Conference) in late October, as well as at WinHEC which is the first week of November. We’ll go into a lot of detail on Server 2008 R2 at that time.”

In the meantime, there is a new development – the standalone Hyper-V Server. Originally positioned as a $28 product for the OEM and enterprise channels, this will now be a free of charge download and is due to be released within 30 days of the Microsoft Virtualization launch (so, any day now).

As detailed in the video above, Hyper-V Server is a “bare-metal” virtualisation product and is not a Windows product (do the marketing people at Microsoft really think that Microsoft Hyper-V Server will not be confused with the Hyper-V role in Microsoft Windows Server?).

With just a command line interface (as in server core installations of Windows Server 2008), it includes a configuration utility for basic setup tasks like renaming the computer, joining a domain, updating network settings, etc. but is intended to be remotely managed using the Hyper-V Manager MMC on Windows Server 2008 or Windows Vista SP1, or with System Center Virtual Machine Manager (SCVMM) 2008.

Whilst it looks similar to server core and uses some Windows features (e.g. the same driver model and update mechanism), it has a single role – Microsoft Hyper-V – and does not support features in Windows Server 2008 Enterprise Edition like failover clustering (so no quick migration), although the virtual machines can be moved to Windows Server 2008 Hyper-V at a later date if required. Hyper-V Server is also limited to 4 CPU sockets and 32GB of memory (as for Windows Server 2008 Standard Edition). I'm told that Hyper-V Server has a 100MB memory footprint and uses around 1GB of disk (which sounds a lot for a hypervisor – we'll see when I get my hands on it in a few days' time).

Unlike Windows Server 2008 Standard, Enterprise and Datacenter Editions, Hyper-V Server will not require client access licenses (although the virtual machine workloads may) and it does not include any virtualisation rights.

That just about covers Microsoft’s host virtualisation products. The next post in this series will look at various options for desktop virtualisation. In the meantime, I’ll be spending the day at VMware’s Virtualisation Forum in London, to see what’s happening on their side of the fence.

Microsoft Virtualization: part 1 (introduction)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Sitting at Microsoft’s London offices waiting for this evening’s Microsoft Virtualization User Group (MVUG) event to start reminded me that I still haven’t written up my notes on the various technologies that make up Microsoft’s virtualisation portfolio. It’s been three months since I spent a couple of days in Reading learning about this, and tonight was a great recap – along with some news (some of which I can’t write about just yet – wait for PDC is all I can say! – and some of which I can).

A few weeks back, I highlighted in my post on virtualisation as an infrastructure architecture consideration that Microsoft's virtualisation strategy is much broader than just server virtualisation or virtual desktop infrastructure, and introduced the following diagram, based on one which appears in many Microsoft slidedecks:

Microsoft view of virtualisation

At the heart of the strategy is System Center and, whilst VMware will highlight a number of technical weaknesses in the Microsoft products (some of which are of little consequence in reality), this is where Microsoft's strength lies – especially with System Center Virtual Machine Manager (SCVMM) 2008 just about to be released (more on that soon) – as management is absolutely critical to the successful implementation of a virtualisation strategy.

Over the next few days I’ll discuss the various components included in this diagram and highlight some of the key points about the various streams: server; desktop; application; and presentation virtualisation – as well as how they are all brought together in System Center.

Active Directory design considerations: part 8 (summary and further information)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Over the last few days, I’ve written a series of posts about design considerations for Microsoft Active Directory (AD), based on the MCS Talks: Enterprise Infrastructure series of webcasts. Just to summarise, the posts so far have been:

  1. Introduction.
  2. Forest and domain design.
  3. Organisational Units.
  4. Group policy objects.
  5. Security groups.
  6. Domain controller placement and site design.
  7. Domain controller configuration and DNS.

Just to finish the series it’s worth noting that implementing Active Directory is an iterative process. As business and technical application requirements change, so might the optimum directory configuration, particularly after major infrastructure changes such as a network upgrade.

The MCS Talks series is still running (and there are additional resources to complement the second session on core infrastructure). I also have some notes from the third and fourth sessions on messaging and security that are ready to share so, if you're finding this information useful, make sure you have subscribed to the RSS feed!

Active Directory design considerations: part 7 (domain controller configuration and DNS)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Continuing the series of posts about design considerations for Microsoft Active Directory (AD), based around the MCS Talks: Enterprise Infrastructure series of webcasts, this post discusses the design considerations for domain controller configuration and DNS, which is critical to any Active Directory deployment.

Whilst the CPU specification for each server running as a domain controller affects query performance, so does the disk configuration. Active Directory's disk usage is mostly reads, and the few writes are written to transaction logs before being committed to the database. For this reason, the separation of the logs (mostly written) from the database files (mostly read) can improve disk throughput.

Unlike Exchange Server (where the decision to separate transaction logs from database files is mostly about resilience), AD's multi-master replication model already provides resilience – so the separation of logs and database files on a domain controller is about performance.

Having said that, in the same way that network improvements have allowed for domain controller consolidation, the move to a 64-bit version of Windows Server allows a larger addressable memory space and may even allow the entire AD database to be cached in RAM.
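For reference, the logs and database can be relocated on an existing domain controller with the ntdsutil files commands (on Windows Server 2008 the AD DS service must first be stopped; on Windows Server 2003 the DC must be booted into Directory Services Restore Mode). The drive letters and paths below are purely hypothetical examples:

```cmd
REM Stop AD DS first (Windows Server 2008 onwards; /y suppresses the
REM prompt about dependent services):
net stop ntds /y

REM Move the database and transaction logs onto separate spindles
REM (D:\NTDS and E:\NTDSLogs are hypothetical paths):
ntdsutil "activate instance ntds" files "move DB to D:\NTDS" "move logs to E:\NTDSLogs" quit quit

REM Restart AD DS:
net start ntds
```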

One critical piece of advice relates to domain controllers running in a virtualised environment: Microsoft recommends that DCs (even RODCs) are never snapshotted, due to the potential to re-introduce out-of-date changes into AD if that snapshot is restored at a later date. Also, DCs should be configured to synchronise their time with the PDC emulator (the default) and not with the virtualisation host.
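In practice this means disabling the time synchronisation integration service in the VM's settings and then confirming that the DC takes its time from the domain hierarchy – a sketch using the standard w32tm tool:

```cmd
REM Configure the virtualised DC to take its time from the domain
REM hierarchy (i.e. ultimately from the PDC emulator), not another source:
w32tm /config /syncfromflags:domhier /update

REM Force a resynchronisation and check the current time source:
w32tm /resync
w32tm /query /status
```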

As I mentioned previously, DNS is critical to the correct operation of Active Directory and, whilst other DNS servers may be used, Microsoft recommends the use of AD-integrated DNS where possible as this provides a distributed, highly available DNS (effectively, DNS is as available as AD is). This can cause a political debate in some organisations, particularly where there is a heterogeneous network and the non-Windows computers do not use Active Directory. It is possible to configure Windows computers to use Windows DNS (AD integrated) and non-Windows computers to use another DNS implementation but this gets messy where shared subnets are involved (reverse lookup zones will be incomplete). For this reason, wherever possible, consolidation into a single organisational DNS should be considered.

Due to the overhead of managing root hints, Microsoft also recommends the use of the forwarding model and Windows Server 2003 introduced conditional forwarding, which is particularly useful where there are multiple forests, each of which is authoritative for its own zone. Windows Server 2008 improves conditional forwarding by storing conditional forwarding information in AD, rather than on each server (which created additional management overhead) although the standard forwarding is still defined on a per-server basis.
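As an example, a conditional forwarder for another forest's zone can be created with dnscmd – the zone name and IP address here are hypothetical:

```cmd
REM Forward queries for otherforest.example to that forest's DNS server
REM (otherforest.example and 192.168.1.10 are hypothetical):
dnscmd /ZoneAdd otherforest.example /Forwarder 192.168.1.10

REM Alternatively, on Windows Server 2008, store the forwarder in AD so
REM that it replicates to other DCs rather than being per-server:
REM dnscmd /ZoneAdd otherforest.example /DsForwarder 192.168.1.10
```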

Enabling sleep/hibernation mode on a server with the Hyper-V role enabled

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

One of the problems with running Hyper-V on the notebook PC that I use for work is a lack of hibernation/sleep support. Mark Harrison has posted a partial solution which allows him to hibernate/sleep until he starts running virtual machines:

By setting the Hypervisor/Virtual Machine Support Driver to manual startup (editing the Start value at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\hvboot and setting it to 3), Mark found that Hyper-V can be left installed but not running; net start hvboot can then be used when Hyper-V is required. From that point on, sleep/hibernation will be unavailable (until the computer is restarted).
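For anyone who prefers not to edit the registry directly, the same change can be expressed with the sc.exe service control utility (note the space after start= is required):

```cmd
REM Set the hvboot service to manual startup (equivalent to Start=3
REM in the registry):
sc config hvboot start= demand

REM Later, when Hyper-V is actually needed (after this, sleep/hibernation
REM is unavailable until the next restart):
net start hvboot
```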

Unfortunately the main VM I run is the one with the (32-bit only) VPN connection to work that I use to access all of my corporate applications (which I need to access on a daily basis) so this solution doesn’t help me much, but I thought it might be useful to others.