Using psexec to make registry changes on a remote computer

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

So, here’s the problem. I’m in the UK and I want to send a 15MB file to someone in Australia. My Windows Live SkyDrive and Mesh accounts have 5MB limits (and there is no Mac client for Mesh for a point-to-point connection). I have an FTP server I can use, but I need to create a new user account and I’m many miles away from the server. Of course, being Internet-facing, the FTP server is in a DMZ, so I’m careful about which services it runs; however, I can use a Remote Desktop Connection to another computer and then a second Remote Desktop session to access the FTP server from inside the firewall. At least, I should have been able to, had I enabled Remote Desktop on the server… which I hadn’t.

I tried to connect to the registry remotely and enable Remote Desktop using the method that Daniel Petri describes but that failed:

Error connecting network registry
Unable to connect to ipaddress. Make sure you have permission to administer this computer.

I wasn’t sure what was preventing access to the remote registry (the target is a fully patched Windows Server 2003 R2 computer) but I needed another method of access. That method was the Microsoft Sysinternals tool psexec, which allowed me to bypass whatever security I was having trouble with and run commands on the remote server. First I edited the registry to allow Remote Desktop connections:

psexec \\ipaddress -u username -p password reg add "hklm\system\currentcontrolset\control\terminal server" /f /v fDenyTSConnections /t REG_DWORD /d 0

and was pleased to see that:

reg exited on ipaddress with error code 0.

Next I checked the value I’d just set:

psexec \\ipaddress -u username -p password reg query "hklm\system\currentcontrolset\control\terminal server"

Before I restarted the server:

psexec \\ipaddress -u username -p password shutdown -f -r -t 0

After this, I could RDP onto the console and make the changes that I needed.
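The three-step sequence above (set the value, confirm it, restart) lends itself to scripting. Here’s a minimal Python sketch that builds and runs the same psexec command lines – it assumes psexec.exe is on the PATH, and the host, username and password are placeholders:

```python
import subprocess

def psexec_cmd(host, user, password, *remote_cmd):
    """Build a psexec command line for running remote_cmd on host."""
    return ["psexec", f"\\\\{host}", "-u", user, "-p", password, *remote_cmd]

TS_KEY = r"hklm\system\currentcontrolset\control\terminal server"

def enable_remote_desktop(host, user, password):
    steps = [
        # 1. Set fDenyTSConnections to 0 to allow Remote Desktop connections
        psexec_cmd(host, user, password,
                   "reg", "add", TS_KEY, "/f", "/v", "fDenyTSConnections",
                   "/t", "REG_DWORD", "/d", "0"),
        # 2. Query the key back to confirm the value was written
        psexec_cmd(host, user, password, "reg", "query", TS_KEY),
        # 3. Force an immediate restart so the change takes effect
        psexec_cmd(host, user, password, "shutdown", "-f", "-r", "-t", "0"),
    ]
    for cmd in steps:
        subprocess.run(cmd, check=True)  # raises CalledProcessError on non-zero exit
```

The `check=True` mirrors what I was looking for manually: reg exiting with error code 0 before moving on to the restart.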

If all this command-line exercise seems a little daunting, it looks as though Phil Morgan’s RD Enable XP will also optionally call psexec to do the same thing…

Microsoft Virtualization: part 3 (desktop virtualisation)

Before the weekend, I started a series of posts on the various technologies that are collectively known as Microsoft Virtualization. So far, I’ve looked at host/server virtualisation and in this post, I’ll look at the various forms of desktop virtualisation that Microsoft offers.

Whilst VMware have a virtual desktop infrastructure (VDI) solution built around Virtual Infrastructure (VI), Microsoft’s options for virtualising the desktop are more varied – although it should be noted that they do not yet have a desktop broker, instead recommending partner products such as Citrix XenDesktop or Quest vWorkspace (formerly Provision Networks Virtual Access Suite). With Hyper-V providing the virtualisation platform, and System Center Virtual Machine Manager, Configuration Manager and Operations Manager providing management of virtualised Vista clients, this is what some people at Microsoft have referred to as Microsoft VDI (although that’s not yet an official marketing concept).

Licensed by access device (PC or thin client) with the ability to run up to four virtual operating system instances per license, the Vista Enterprise Centralized Desktop (VECD) is actually platform agnostic (i.e. VECD can be used with VMware, Xen or other third-party virtualisation solutions). VECD is part of the Microsoft Desktop Optimization Pack (MDOP) and so requires a Software Assurance (SA) subscription.

With a broker to provide granular authentication and support for the Citrix Independent Computing Architecture (ICA) protocol (for better multimedia support than the Remote Desktop Protocol), users can connect to a Windows Vista desktop from any suitable access device.

To access this virtualised infrastructure there are a number of options – from thin-client terminal devices to Windows Fundamentals for Legacy PCs (WinFLP) – an operating system based on Windows XP Embedded and intended for use on older hardware. WinFLP is not a full general-purpose operating system, but provides suitable capabilities for security, management, document-viewing and the Microsoft .NET Framework, together with RDP client support and the ability to install other clients (e.g. Citrix ICA). Running on old or low-specification hardware, WinFLP is an ideal endpoint for a VDI, but it is a Software Assurance benefit – without SA, the closest alternative is to strip down and lock down Windows XP.

VDI is just one part of the desktop virtualisation solution though – since Microsoft’s purchase of Connectix in 2003, Virtual PC has been available for running virtualised operating system instances on the desktop. With the purchase of Kidaro in March 2008, Microsoft gained an enterprise desktop virtualisation solution, which has now become known as Microsoft Enterprise Desktop Virtualisation (MED-V) and is expected to become part of MDOP in the first half of 2009.

Effectively, MED-V provides a managed workspace, with automatic installation, image delivery and update; centralised management and reporting; usage policies and data transfer controls; and complete end-user transparency (i.e. users do not need to know that part of their desktop is virtualised).

The best way I can describe MED-V is as something like VMware ACE (for a locked-down virtual desktop) combined with the Unity feature from VMware Fusion (or Coherence from Parallels Desktop for Mac), whereby guest application instances appear to be running natively on the host operating system desktop.

MED-V runs within Virtual PC but integration with the host operating system is seamless (although MED-V applications can optionally be distinguished with a coloured border) – even down to system tray level, with simulated Task Manager entries.

A centralised repository is provided for virtual machine images with a variety of distribution methods possible – even a USB flash drive – and a management console is provided in order to control the user experience. Authentication is via Active Directory permissions, with MED-V icons published to the host desktop.

MED-V can be used to run applications with compatibility issues on a virtual Windows XP desktop running on Windows Vista until application compatibility fixes can be provided (e.g. using Application Compatibility Toolkit shims, or third party solutions such as those from ChangeBASE). Furthermore, whereas using application virtualisation to run two versions of Internet Explorer side-by-side involves breaching the end user licensing agreement (EULA), the MED-V solution (or any operating system-level virtualisation solution) provides a workaround, even allowing the use of lists to spawn an alternative browser for those applications that require it (e.g. Internet Explorer 7 on the desktop, with Internet Explorer 6 launched for certain legacy web applications).

Using technologies such as MED-V for desktop virtualisation allows a corporate desktop to be run on a “dirty” host (although network administrators will almost certainly have kittens). From a security standpoint, MED-V uses a key exchange mechanism to ensure security of client-server communications and the virtual hard disk (.VHD) image itself is encrypted, with the ability to set an expiry date after which the virtual machine is inoperable. Restrictions over access to clipboard controls (copy, paste, print screen, etc.) may be applied to limit interaction between guest and host machines – even to the point that it may be possible to copy data in one direction but not the other.

At this time, MED-V is 32-bit only, although future releases will support 64-bit host operating systems. (I also expect to see hypervisor-based virtualisation in a future Windows client release – although I’ve not seen anything from Microsoft to substantiate this, it is a logical progression to replace Virtual PC in the way that Hyper-V has replaced Virtual Server.)

Desktop virtualisation has a lot of potential to aid organisations in the move to Windows Vista but, unlike VMware, who see VDI as a replacement for the desktop, Microsoft’s desktop virtualisation solutions are far more holistic, integrating with application and presentation virtualisation to provide a variety of options for application delivery.

In the next post in this series, I’ll take a closer look at application virtualisation.

Windows Server 2008 Hyper-V vs. Hyper-V Server 2008

Last summer, I wrote a post to help people understand the various versions of Hyper-V and now that Hyper-V Server has been launched, it’s got even more confusing.

The following table is lifted from the Microsoft website and should help to clear up which version of Hyper-V Server or Windows Server with the Hyper-V role enabled will allow various functionality:

| Requirement | Hyper-V Server 2008 | Windows Server 2008 Standard Edition | Windows Server 2008 Enterprise Edition | Windows Server 2008 Datacenter Edition |
| --- | --- | --- | --- | --- |
| Server consolidation | Yes | Yes | Yes | Yes |
| Test and development | Yes | Yes | Yes | Yes |
| Mixed operating system virtualisation (Windows and Linux) | Yes | Yes | Yes | Yes |
| Local graphical user interface | No | Yes | Yes | Yes |
| High availability clustering | No | No | Yes | Yes |
| Quick migration | No | No | Yes | Yes |
| Large memory support (host >32GB RAM) | No | No | Yes | Yes |
| Support for >4 processors (host) | No | No | Yes | Yes |
| Ability to add further server roles | No | Yes | Yes | Yes |
| Virtualisation rights (per assigned server license) | Each guest should be licensed independently of the host | 1 physical and 1 virtual | 1 physical and 4 virtual | 1 physical and unlimited virtual |

MVP = Mark’s Very Pleased

I’ve just heard that my Microsoft Most Valuable Professional (MVP) Award nomination for 2009 was successful and I can now say I’m an MVP for Virtual Machine technology.

Thank you to everyone who reads, links to and comments on this blog as, without your support, I wouldn’t write this stuff and therefore wouldn’t be getting the recognition from Microsoft that I have.

For those of you who skip over the Microsoft-focused content, don’t worry – it doesn’t mean that it will all be Microsoft from now on – I’ll still continue to write about whatever flavour of technology I find interesting at any given time, and I’ll still be trying to remain objective!

Hyper-V Server has RTMed – SCVMM due by the end of the month

I’ve just heard that Microsoft Hyper-V Server – the free version of Hyper-V with no reliance on Windows Server – has shipped. Hyper-V Server will be available for download later today from the Microsoft website.

For more information about Hyper-V Server, check out the blog post I wrote a few days ago on host virtualisation using Microsoft Virtualization technologies.

System Center Virtual Machine Manager 2008 has not been released yet but Microsoft do say that it will be ready by the end of October (they had previously indicated that it would ship within 30 days of the Microsoft Virtualization launch last month).

Useful links: September 2008

A list of items I’ve come across this month that I found potentially useful, interesting, or just plain funny:

I’ve been running these “useful links” posts for a few months now but it really would make sense for me to use a service like Delicious instead. I’ve been trying to work out a way to get Delicious to publish links on a weekly or monthly basis but that might not work out. Just keep watching this space.

Bluetooth communications between an Apple iPhone 3G and a Ford Audio system

My company car is due for replacement and the lease company has arranged a demonstration car of my choice for a week – so last Wednesday a shiny Ford Mondeo 2.2TDCi Titanium X Sport Estate was delivered to my house. (For readers outside Europe who don’t know what a Mondeo is, here’s an American review of the range-topping Titanium X Sport model – it might also be useful to note that “car” and “estate” are English words for what’s known as “automobile” and “wagon” in some parts of the world.)

Whilst I’m not a great fan of the fake aluminium that car manufacturers seem to plaster all over the interior of cars these days, this car represents a reasonable balance between performance, economy and the need to transport a family complete with all their associated paraphernalia (or garden rubbish for the tip…) – and it’s pretty well-specced too. One of the features that I was particularly interested in was the Bluetooth and Voice Control system.

(The Ford website features a longer version of this video but I couldn’t easily embed it here – and, for the benefit of those with no sense of irony, this is not serious – it is a parody of a fantastic programme that the BBC used to run called Tomorrow’s World.)

My current car has a fully-fitted carphone kit for use with my work phone (a Nokia 6021) but if anyone calls me on my iPhone 3G I have to use another solution. Not so with the Mondeo. In fact, I couldn’t get the Nokia to recognise the Ford Audio system (even though it’s one of the handsets that Ford has tested) but the iPhone was quite happy to pair following the instructions in the owner’s handbook:

  1. The Bluetooth feature must be activated on the phone and on the audio unit. For additional information, refer to your phone user guide.
  2. The private mode must not be activated on the Bluetooth phone.
  3. Search for audio device.
  4. Select Ford Audio.
  5. The Bluetooth PIN number 0000 must be entered on the phone keypad.

[Ford Mondeo Owners Handbook (2008)]

Once paired, I could use the car’s controls to make calls, and incoming calls on the iPhone were picked up by the car too.

Ford are not the only manufacturer to have such a system, but it is the first time I’ve had it fitted as standard on a car (on my Saab 9-3 I would have needed to specify an expensive stereo with built-in satellite navigation to get the Bluetooth functionality) – and Ford do claim to be the only manufacturer to offer the system on small cars too:

Ford is the only manufacturer to offer a Bluetooth with Voice Control System on our smaller cars as well as our larger ones. It’s available on the Fiesta, Fusion, new Focus, new Focus CC, C-MAX, Mondeo, S-MAX, Galaxy, Fiesta Van, Transit Van, Transit Minibus, Transit Connect and Transit Chassis Cab.

(There are some light commercials on that list too.)

The downsides are that my phone has to have Bluetooth activated (and to be discoverable – leaving me subject to potential bluejacking). There’s also a bit of an echo (on both ends of the call) – something I haven’t experienced with the fitted car kit I use with the Nokia in my normal car – but it’s not bad enough to be a problem and, most importantly, the road noise at 70mph didn’t seem to cause too big a problem whilst making a call.

So, what doesn’t work with the iPhone? Despite the audio system somehow managing to detect a couple of my contacts (which I can then select by pressing a button to dial), the Bluetooth Voice Control doesn’t seem to be able to read the phone directory – but it does work if I dial by number, as shown in the pictures below:

Ford Converse+ System and Bluetooth Voice Control

Call on iPhone placed using Ford Bluetooth Voice Control

Also, it would be nice to have the car’s audio system play the music on my iPhone over Bluetooth – except that Apple hasn’t given us A2DP (stereo Bluetooth audio), so connecting the iPhone to the stereo requires a standard 3.5mm cable to the Aux socket on the car’s audio system (unavailable on the car I tested, because that has a DVD system installed in the front seat headrests instead).

As for whether I will lease this car… well, the jury’s still out on that one. It drives well and I get a lot of toys for my money, but the VW Passat Estate, Audi A4 Avant (or possibly A6) and BMW 3 Series Touring are all on my shortlist. Does anyone know if the iPhone works with the built-in systems in these cars?

How tone mapping can transform an HDR image

A few weeks back, I wrote about my efforts to create a photographic image with high dynamic range (HDR). Since then, I’ve learned that Adobe Photoshop’s approach to HDR is really little more than exposure blending. I had been reasonably pleased with the results (at least on screen) but then I gave Photomatix Pro a try.

Pointe de Trévignon HDR from Photomatix Pro

The initial HDR image that Photomatix Pro produced was disappointing, with deep shadows and washed out skies, but then I read in the help text that this was effectively in an unprocessed state, that my monitor cannot display the full range of information and that, in order to reveal highlight and shadow detail, I need to apply tone-mapping. Photomatix Pro did that for me and – wow! What a difference!

Pointe de Trévignon HDR from Photomatix Pro after tone-mapping

I thought this looked a little too surreal on screen so I reduced the luminosity (it’s actually much better when printed) but you can see how the detail is preserved throughout the entire exposure.

Pointe de Trévignon HDR from Photomatix Pro after tone-mapping

If I find myself creating other HDRs, I’ll probably purchase a copy of Photomatix Pro (and probably the Photoshop plugin version too) – until then I can continue to experiment with a fully-functional trial (but the resulting images will be watermarked – these screen shots are low-resolution previews). In the meantime, I’m going to try and get my head around the technical details of dynamic range, tone mapping and HDR imaging.
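Photomatix doesn’t document its operators, but the basic idea behind global tone mapping can be illustrated with the classic Reinhard operator, which compresses an unbounded luminance range into [0, 1) so that both shadow and highlight detail survive on a normal display. A minimal pure-Python sketch (the luminance values are made up for illustration):

```python
def reinhard(luminance):
    """Global Reinhard tone-mapping operator: maps [0, inf) into [0, 1)."""
    return luminance / (1.0 + luminance)

def tone_map(pixels):
    """Apply the operator to a list of linear luminance samples."""
    return [reinhard(l) for l in pixels]

# An HDR scene spans many stops: deep shadow (0.01) up to bright sky (100)
hdr_samples = [0.01, 0.5, 1.0, 10.0, 100.0]
ldr_samples = tone_map(hdr_samples)
# Bright values are compressed hard while shadows are left almost untouched,
# which is why detail at both ends of the range is preserved in the result.
```

Real tone-mapping tools use far more sophisticated (often local, per-region) operators, but the compression principle is the same.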

A quick Google tip

Many Google services on the google.com domain will try to detect your location and customise the results using the local language. This caught me out a few days ago when I was using a guest Wi-Fi connection at Microsoft and got the Google homepage in Dutch (presumably Microsoft UK’s Internet access is routed via the Netherlands) and tonight I fired up Google Reader in my hotel room and found it was in German (I’m using an iBAHN Internet connection).

Not to worry – just manually change the top level domain in the browser address bar to use a specific country (e.g. I switched from http://www.google.com/reader/ to http://www.google.co.uk/reader/) and the interface should switch to the chosen locale.
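The manual fix can also be expressed programmatically. Here’s a small Python sketch that rewrites a google.com URL to a country-specific domain (the function name and default locale are my own, for illustration):

```python
from urllib.parse import urlsplit, urlunsplit

def force_google_locale(url, tld="co.uk"):
    """Rewrite a google.com URL to use a country-specific Google domain."""
    parts = urlsplit(url)
    host = parts.netloc
    if host.endswith("google.com"):
        # e.g. www.google.com -> www.google.co.uk
        host = host[: -len("com")] + tld
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))
```

For example, `force_google_locale("http://www.google.com/reader/")` gives back the google.co.uk address I switched to; non-Google URLs pass through unchanged.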

VMware Virtualization Forum 2008

A few days back, I wrote a post about why I don’t write much about VMware. This VMware-blindness is not entirely through choice – and I do believe that I need to keep up to speed on their view of the virtualisation market in order to remain objective when I talk about competitive offerings from Microsoft.

With that in mind, I spent today at the VMware Virtualization Forum 2008 – a free of charge event for VMware customers. I went to a similar event two years ago (when I was working on a VI3 deployment) and found it very worthwhile so I was pleased to be back again this year (I don’t know what happened in 2007!). It’s not a technical event – this is pure sales and marketing, with all the usual sponsors – but it did give me an overview of many of the announcements from last week’s VMworld conference in Las Vegas.

The event started late and, after keeping everyone waiting for 20 minutes, Chris Hammond, VMware’s Regional Director for the UK and Ireland, had the audacity to ask delegates to make sure they arrive in the session rooms on time! There was the expected plug for the European version of the VMworld conference and then the keynote began, presented by Lewis Gee, VMware’s Vice President of field operations for EMEA. After a disclaimer that any future products discussed were for the purposes of sharing VMware’s vision/roadmap and may be subject to change, he began to look at how VMware has reached its current market position and some of the developments for the future.

Key points from the keynote were:

  • The growth of the virtualisation market:
    • x86 servers have proliferated. Typically a single application is running on each server, working at 5-10% utilisation.
    • Virtual machines allowed the partitioning of servers so that many virtual server instances could exist on a single physical server.
    • Virtualisation was initially used in test and development environments but is now increasingly seen in production. The net result is a consolidation of physical servers, but there is still significant growth.
    • As virtualisation technology started to take hold customers began to look at the datacentre as a whole – VMware’s answer was Virtual Infrastructure.
    • Even now, there is significant investment in server hardware – in 2007, 8 million servers shipped (7.6m of which were x86/x64 servers).
    • The base component for VMware’s datacentre offerings is VMware’s bare-metal hypervisor – in paid (ESX) and free (ESXi) form. Whilst VMware now faces stiff competition from Microsoft and others, they were keen to highlight that their platform is proven, citing 120,000 customers over 7 years, 85% of customers using it in production, and years of uptime at some customer sites.
  • ESXi uses just 32MB on disk – less code means fewer bugs, fewer patches, and VMware argues that their monolithic approach has no dependence on the operating system or arbitrary drivers (in contrast to the approach taken by Microsoft and others).
  • In the face of new competitive pressures (although I doubt they would admit it), VMware are now focusing on improved management capabilities using a model that they call vServices:
    • At the operating system level, VMware Converter provides P2V capabilities to allow physical operating system instances to be migrated to the virtual infrastructure.
    • Application vServices provide availability, security, and scalability using established technologies such as VMotion, Distributed Resource Scheduler (DRS), VMware High Availability (HA) and Storage VMotion.
    • Infrastructure vServices include vCompute, vStorage and vNetwork – for example, at VMworld, Cisco and VMware announced the Nexus 1000v switch – allowing network teams to look at the infrastructure as a whole regardless of whether it is physical or virtual.
    • Security vServices are another example of where VMware is partnering with other companies and the VMSafe functionality allows security vendors to integrate their solutions as another layer inside VI.
    • All of this is supplemented with Management vServices (not just from VMware but also integration with third party products).
    • Finally, of course the whole virtual infrastructure runs on a physical datacentre infrastructure (i.e. hardware).
  • All of this signals a move from a virtual infrastructure to what VMware is calling the Virtual Datacenter Operating System. It’s pure marketing speak but the concept is one of transforming the datacentre into an internal “cloud”. The idea is that the internal cloud is an elastic, self-managing and self-healing utility which should support federation with external clouds of computing and free IT from the constraints of static hardware-mapped applications (all of which sounds remarkably like Gartner’s presentation at the Microsoft Virtualization launch).
  • Looking to the future, VMware plan to release:
    • vCloud – enabling local cloud to off-premise cloud migration (elastic capacity – using local resources where available and supplementing these with additional resources from external sources as demand requires). There are not yet many cloud providers but expect to see this as an emerging service offering.
    • vCenter AppSpeed – providing quality of service (QoS) for a given service (made up of a number of applications, virtual machines, users, servers and datacentres); based around a model of discover, monitor and remediate.
    • VMware Fault Tolerance – application protection against hardware failures with no downtime (application and operating system independent) using mirrored virtual machine images for full business continuity.
  • Up to this point, the presentation had focused on datacentre developments but VMware are not taking their eyes off desktop virtualisation. VMware see desktops following users rather than devices (as do Microsoft) and VMware Virtual Desktop Infrastructure (VDI) is being repositioned as VMware View – providing users with access to the same environment wherever the desktop is hosted (thin client, PC, or mobile).
  • In summary, after 10 years in desktop virtualisation, 7 years in server virtualisation and 5 years in building a management portfolio (mostly through acquisition), VMware is now looking to the cloud:
    • Virtual Datacenter OS is about the internal cloud – providing efficient and flexible use of applications and resources.
    • vCloud is the initiative to allow the virtual datacentre to scale outside the firewall, including federation with the cloud.
    • VMware View is about solving the desktop dilemma – making the delivery of IT people and information centric.

VMware Virtual Datacenter OS

Next up was a customer presentation – with Lawrence Clark from Carphone Warehouse recounting his experiences of implementing virtualisation before Richard Garsthagen (Senior Evangelist at VMware) and Lee Dilworth (Specialist System Engineer at VMware) gave a short demonstration of some of the technical features (although there was very little there that was new for anyone who has seen VMware Virtual Infrastructure before – just VMotion, HA, DRS and the new VMware Fault Tolerance functionality).

Following this were a number of breakout sessions – the first one I attended was Rory Clements’ presentation on transforming the datacentre through virtualisation. Rory gave an overview of VMware’s datacentre virtualisation products, based around a model of:

  1. Separate, using a hypervisor such as ESXi to run multiple virtual machines on a single physical server, with the benefits of encapsulation, hardware independence, partitioning and security through isolation.
  2. Consolidate, adding management products to the hypervisor layer, resulting in savings on capital expenditure as more and more servers are virtualised, running on a shared storage platform and using dynamic resource scheduling.
  3. Aggregate (capacity on demand), creating a virtual infrastructure with resource pooling, managed dynamically to guarantee application performance (a bold statement if ever I heard one!). Features such as VMotion can be used to remove planned downtime through live failover and HA provides a clustering capability for any application (although I do consider HA to be a misnomer in this case as it does require a restart).
  4. Automate (self-managing datacentre), enabling business agility with products/features such as: Stage Manager to automate the application provisioning cycle; Site Recovery Manager to create a workflow for disaster recovery and automatically fail entire datacentres over between sites; dynamic power management to move workloads and shut down virtualisation hosts that are not required to service current demands; and Update Manager, using DRS to dynamically reallocate workloads, then put a host into maintenance mode, patch, restart and bring the server online, before repeating with the rest of the nodes in the cluster.
  5. Liberate (computing clouds on and off premise) – create a virtual datacentre operating system with the vServices covered in the keynote session.

In a thinly veiled swipe at competitive products that was not entirely based on fact, Rory indicated that Microsoft were only at the first stage – entirely missing the point that they too have a strong virtualisation management solution and can cluster virtualisation hosts (even though the failover is not seamless). I don’t expect VMware to promote a competitive solution but the lack of honesty in the pitch did have a touch of the “used-car salesman” approach to it…

In the next session, Stéphane Broquère, a senior product marketing manager at VMware and formerly CEO at Dunes Technologies (acquired by VMware for their virtualisation lifecycle management software) talked about virtual datacentre automation with VMware:

  • Stéphane spoke of the virtual datacentre operating system as an elastic, self-managing, self-healing software substrate between the hardware pool and applications with software provisioned and allocated to hardware upon demand.
  • Looking specifically at the vManagement technologies, he described vApps using the DMTF Open Virtualization Format (OVF) to provide metadata which describes the application or service and what it requires – e.g. name, ports, response times, encryption, recovery point objective and VM lifetime.
  • vCenter is the renamed VirtualCenter with a mixture of existing and new functionality. Some products were skipped over (i.e. ConfigControl, CapacityIQ, Chargeback, Orchestrator, AppSpeed) but Stéphane spent some time looking at three products in particular:
    • Lifecycle Manager automates virtual machine provisioning, and provides intelligent deployment, tracking and decommissioning to ensure that a stable datacentre environment is maintained through proper approvals, standard configuration procedures and better accountability – all of which should lead to increased uptime.
    • Lab Manager provides a self-provisioning portal with an image library of virtual machine configurations, controlled with policies and quotas and running on a shared pool of resources (under centralised control).
    • Stage Manager targets release management by placing virtual machine images into a configuration (a service) and then promoting or demoting configurations between environments (e.g. integration, testing, staging, UAT and production) based on the rights assigned to a user. Images can also be archived or cloned to create a copy for further testing.
  • Over time, the various vCenter products (many of which are the result of acquisitions) can be expected to come together and there will be some consolidation (e.g. of the various workflow engines). In addition, VMware will continue to provide APIs and SDKs and collaborate with partners to extend the platform.
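To make the vApp point above a little more concrete: OVF is an XML packaging format, and a descriptor carries exactly the kind of application metadata Stéphane described. The fragment below is a minimal, hypothetical sketch – the element names follow the DMTF OVF envelope, but the identifiers and values are invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative OVF descriptor fragment (values invented for this example) -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <!-- The virtual disk(s) that make up the packaged application -->
    <File ovf:id="disk1" ovf:href="webtier-disk1.vmdk"/>
  </References>
  <VirtualSystem ovf:id="webtier">
    <Info>A single web-tier virtual machine</Info>
    <Name>webtier</Name>
    <!-- Hardware, network and product metadata sections would follow here -->
  </VirtualSystem>
</Envelope>
```

Because the descriptor is plain XML with a published schema, any compliant platform (not just VMware's) can read the same package – which is the point of OVF being a DMTF standard rather than a proprietary format.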

After lunch, Lee Dilworth spoke about business continuity and disaster recovery, looking again at VMware HA, VMotion, DRS and Update Manager, as well as other features that are not always considered, such as snapshots and network port trunking.

He also spoke of:

  • Experimental VM failure monitoring capabilities that monitor guest operating systems for failure and the ability to interpret hardware health information from a server management card.
  • Storage VMotion – redistributing workloads to optimise the storage configuration, providing online migration of virtual machine disks to a new data store with zero downtime, using a redo log file to capture in-flight transactions whilst the file copy is taking place (e.g. when migrating between storage arrays).
  • VMware Fault Tolerance – providing continuous availability, although still not an ideal technology for stretch clusters. It should also be noted that VMware Fault Tolerance is limited to a single vCPU and the shadow virtual machine is still live (consuming memory and CPU resources) so is probably not something that should be applied to all virtual machines.
  • vCenter Data Recovery (formerly VMware Consolidated Backup) – providing agentless disk-based backup and recovery, with virtual machine or file level restoration, incremental backups and data de-duplication to save space.
  • Site Recovery Manager (SRM) – allowing seamless failover between datacentres to restart hundreds of virtual machines on another site in the event of an unplanned or planned outage. Importantly, SRM requires a separate Virtual Center management framework on each site and replicated fibre channel or iSCSI LUNs (NFS will follow next year). It is not a replacement for existing storage replication products (it is an orchestration tool to integrate with existing replication products) nor is it a geo-clustering solution.

In the final session of the day, Reg Hall spoke about using virtual desktop infrastructure technologies to provide a desktop service from the datacentre. Key points were:

  • VMware has three desktop solutions:
    • Virtual Desktop Infrastructure (VDI), consisting of the virtual desktop manager (VDM) connection broker and standard Virtual Infrastructure functionality. Connecting users access a security server and, once authenticated, VDM manages access to a pool of virtual desktops.
    • Assured Computing Environment (ACE), providing a portable desktop that is managed and secured with a central policy.
    • ThinApp (formerly Thinstall) for application virtualisation, allowing an application to be packaged once and deployed everywhere (on a physical PC, blade server, VDI, thin client, ACE VM, etc.) – although I’m told (by Microsoft) that VMware’s suggestion to use the product to run multiple versions of Internet Explorer side by side would be in breach of the EULA (I am not a lawyer).
  • Highlighting VMware’s memory management as an advantage over competitive solutions (and totally missing the point that by not buying VMware products, the money saved will buy a lot of extra memory), VMware cited double memory overcommitment when running virtual desktops; however their own performance tuning best practice guidance says “Avoid high memory overcomittment [sic]. Make sure the host has more memory than the total amount of memory that will be used by ESX plus the sum of the working set sizes that will be used by all the virtual machines.”.
  • Assuming that hypervisors will become a target for attackers (a fair assumption), VMSafe provides a hardened virtual machine to protect other workloads through inspection of the virtual infrastructure.
  • As VDI becomes VMware View, desktops will follow users with the same operating system, application and data combinations available from a thin client, thick client (using client virtualisation – e.g. a desktop hypervisor) or even on a mobile device with the ability to check a desktop in/out for synchronisation with the datacentre or offline working.
  • VDM will become View Manager and View Composer will manage the process of providing a single master virtual machine with many linked clones, appearing as individual systems but actually a single, scalable virtual image. At this point, patching becomes trivial, with patches applied to the master effectively being applied throughout the VMware View.
  • Other developments include improvements to connection protocols (moving away from RDP); improved 3D graphics virtualisation and a universal client (a device-independent client virtualisation layer).
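The memory guidance quoted above boils down to a simple inequality: host memory must exceed the ESX overhead plus the sum of the virtual machines' working-set sizes. A minimal sketch of that check (all figures invented – a hypothetical 32GB host running ten desktops):

```python
def memory_is_safe(host_memory_gb, esx_overhead_gb, vm_working_sets_gb):
    """Return True if host memory exceeds the ESX overhead plus the sum
    of all virtual machine working-set sizes, per the quoted guidance."""
    return host_memory_gb > esx_overhead_gb + sum(vm_working_sets_gb)

# Hypothetical example: a 32GB host, ~2GB ESX overhead, ten desktops
# with ~2GB working sets each: 2 + (10 x 2) = 22GB < 32GB, so OK.
print(memory_is_safe(32, 2, [2] * 10))   # True

# The same desktops on a 16GB host would breach the guidance (22GB > 16GB).
print(memory_is_safe(16, 2, [2] * 10))   # False
```

The arithmetic makes the marketing tension obvious: "double memory overcommitment" means provisioning far more virtual machine memory than the host physically has, which is exactly what the tuning guide warns against.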

Overall, the event provided me with what I needed – an overview of the current VMware products, along with a view of what is coming onstream over the next year or so. It’s a shame that the VMware Virtualization Forum suffered from poor organisation, lousy catering (don’t invite me to arrive somewhere before 9, then not provide breakfast, and start late!) and a lack of a proper closedown (the sessions ended, then there were drinks, but no closing presentation) – but the real point is not the event logistics but where this company is headed.

Behind all the marketing rhetoric, VMware is clearly doing some good things. They do have a lead on the competition at the moment but I’ve not seen any evidence that the lead is as advanced as the statements that we’ve-been-doing-hypervisor-based-virtualisation-for-7-years-and-Microsoft-is-only-just-getting-started would indicate. As one VMware employee told me, at last year’s event there was a lot of “tyre-kicking” as customers started to warm up to the idea of virtualisation whereas this year they want to know how to do specific things. That in itself is a very telling story – just because you have the best technology doesn’t mean that’s what customers are ready to deploy and, by the time customers are hosting datacentres in the cloud and running hypervisor-based client operating systems, VMware won’t be the only company offering the technology that lends itself to that type of service.