Thick, thin, virtualised, whatever: it’s how you manage the desktop that counts

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In the second of my post-TechEd blog posts, I’ll take a look at one of the sessions I attended where Microsoft’s Eduardo Kassner spoke about various architectures for desktop delivery in relation to Microsoft’s vision for the Windows optimised desktop (CLI305). Again, I’ll stick with highlights in note form as, if I write up the session in full, it won’t be much fun to read!

  • Kassner started out by looking at who defines the desktop environment, graphing desktop performance against configuration control:
    • At the outset, the IT department (or the end user) installs approved applications and both configuration and performance are optimal.
    • Then the user installs some “cool shareware”, perhaps some other approved applications or personal software (e.g. iTunes) and it feels like performance has bogged down a little.
    • As time goes on, the PC may suffer a virus attack, the organisation lacks an accurate inventory of the installed applications, and the configuration is generally unknown. Performance suffers as a result of the unmanaged change.
    • Eventually, without control, update or maintenance, the PC becomes “sluggish”.
  • Complaints about desktop environments typically come down to: slow environment; application failures; complicated management; complicated maintenance; difficulty in updating builds, etc.
  • Looking at how well we manage systems: image management; patch management; hardware/software inventory; roles/profiles/personas; operating system or application deployment; and application lifecycle are all about desktop configuration. And the related processes are equally applicable to a “rich client”, “terminal client” or a “virtual client”.
  • Whatever the architecture, the list of required capabilities is the same: audit; compliance; configuration management; inventory management; application lifecycle; role based security and configuration; quality of service.
  • Something else to consider is that hardware and software combinations grow over time: new generations of hardware are launched (each with new management capabilities) and new operating system releases support alternative means of increasing performance, managing updates and configuration – in 2008, Gartner wrote:

    “Extending a notebook PC life cycle beyond three years can result in a 14% TCO increase”

    [source: Gartner, Age Matters When Considering PC TCO]

    and a few months earlier, they wrote that:

    “Optimum PC replacement decisions are based on the operating system (OS) and on functional compatibility, usually four years”

    [source: Gartner, Operational Considerations in Determining PC Replacement Life Cycle]

    Looking across a variety of analyst reports though, three years seems to be the optimal point (there are some variations depending on the considerations made, but the general window is 2-5 years).

  • Regardless of the PC replacement cycle, the market is looking at two ways to “solve” the problem of running multiple operating system versions on multiple generations of hardware: “thin client” and “VDI” (also known as hosted virtual desktops) but Kassner does not agree that these technologies alone can resolve the issues:
    • In 1999, thin client shipments were 700,000 against a market size of 133m PCs [source: IDC 1999 Enterprise Thin Client Year in Review] – that’s around 0.6% of the worldwide desktop market.
    • In 2008, thin clients accounted for 3m units out of an overall market of 248m units [source: Gartner, 2008 PC Market Size Worldwide] – that’s around 1.2% of the market, but still a very tiny proportion.
    • So what about the other 98.8% of the market? Kassner used 8 years’ worth of analyst reports to demonstrate that the TCO for a well-managed traditional desktop client and for a Windows-based terminal was almost identical – although considerably lower than for an unmanaged desktop. The interesting point was that in recent years the analysts stopped referring to the different architectures and just compared degrees of management! Then he compared VDI scenarios, showing that there was a 10% variance in TCO between a VDI desktop and a wide-open “regular desktop”, but when that desktop was locked down and well-managed the delta was only 2% – not enough to cover the setup cost of a VDI infrastructure! Kassner did stress that he wasn’t saying VDI was no good at all – just that it is not for everyone and that a similar benefit can be achieved by simply virtualising the applications:
    • “Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model.”

      [source: Gartner, TCO of Traditional Software Distribution vs. Application Virtualization]

  • Having argued that thick vs. thin vs. VDI makes very little difference to desktop TCO, Kassner continued by commenting that the software plus services platform provides more options than ever, with access to applications from traditional PC, smartphone and web interfaces and a mixture of corporately owned and non-corporate assets (e.g. employees’ home PCs, or offshore contractor PCs). Indeed, application compatibility drives client device options and this depends upon the supported development stack and presentation capabilities of the device – a smartphone (perhaps the first example of IT consumerisation – and also a “thin client” device in its own right) is an example of a device that provides just a subset of the overall feature set and so is not as “forgiving” as a PC – one size does not fit all!
  • Kassner then went on to discuss opportunities for saving money with rich clients; but his summary was that it’s still a configuration management discussion:
    • Using a combination of group policy, a corporate base image, data synchronisation and well-defined security policies, we can create a well-managed desktop.
    • For this well-managed desktop, whether it is running on a rich client, a remote desktop client, with virtualised applications, using VDI or as a blade PC, we still need the same processes for image management, patch management, hardware/software inventory, operating system or application deployment, and application lifecycle management.
    • Once we can apply the well-managed desktop to various user roles (e.g. mobile, office, or task-based workers) on corporate or non-corporate assets, we can say that we have an optimised desktop.
  • Analysts indicate that “The PC of 2012 Will Morph Into the Composite Work Space” [source: Gartner], combining client hypervisors, application virtualisation, persistent personalisation and policy controls: effectively separating the various components for hardware, operating system and applications.  Looking at Microsoft’s view on this (after all, this was a Microsoft presentation!), there are two products to look at – both of which are Software Assurance benefits from the Microsoft Desktop Optimization Pack (MDOP) (although competitive products are available):
    • Application virtualisation (Microsoft App-V or similar) creates a package of an application and streams it to the desktop, eliminating the software installation process and isolating each application. This technology can be used to resolve conflicts between applications as well as to simplify application delivery and testing.
    • Desktop virtualisation (MED-V with Virtual PC or similar) creates a container with a full operating system environment to resolve incompatibilities between applications and an alternative operating system, running two environments on the same PC (and, although Eduardo Kassner did not mention this in his presentation, it’s this management of multiple environments that creates a headache without suitable management toolsets – which is why I do not recommend Windows 7 XP Mode for the enterprise).
  • Having looked at the various architectures and their (lack of) effect on TCO, Kassner moved on to discuss Microsoft’s strategy.
    • In short, dependencies create complexity, so by breaking apart the hardware, operating system, applications and user data/settings the resulting separation creates flexibility.
    • Using familiar technologies: we can manage the user data and settings with folder redirection, roaming profiles and group policy; we can separate applications using App-V, RemoteApps or MED-V, and we can run multiple operating systems (although Microsoft has yet to introduce a client-side hypervisor, or a solution capable of 64-bit guest support) on a variety of hardware platforms (thin, thick, or mobile) – creating what Microsoft refers to as the Windows Optimized Desktop.
    • Microsoft’s guidance is to take the processes that produce a well-managed client to build a sustainable desktop strategy, then to define a number of roles (real roles – not departments, or jobs – e.g. mobile, office, anywhere, task, contract/offshore) and select the appropriate distribution strategy (or strategies). To help with this, there is a Windows Optimized Desktop solution accelerator (soon to become the Windows Optimized Desktop Toolkit).

There’s quite a bit more detail in the slides but these notes cover the main points. However you look at it, the architecture for desktop delivery is not that relevant – it’s how it’s managed that counts.

Maintaining a common user profile across different Windows versions


I wish I could take the credit for this, but I can’t: last week one of my colleagues (Brad Mallard) showed me a trick he has for creating a single user profile for multiple Microsoft operating systems. Michael Pietroforte wrote about the different user profile formats for Windows XP and Vista back in 2007 but Brad’s tip takes this a step further…

Using Group Policy Preferences, Brad suggests creating a system variable to record the operating system version for a given client computer (e.g. %osversion%) and assigning it to the computer account. Then, in Active Directory Users and Computers (ADUC/dsa.msc), set the user’s profile path to \\servername\sharename\%username%.%osversion%. ADUC will resolve the %username% portion but not the %osversion% part, so what remains will be something like \\bigfileserver\userprofiles\mark.wilson.%osversion%.
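
If you want to see the moving parts without Group Policy Preferences, here’s a minimal sketch from an elevated command prompt – the variable name and value (OSVERSION/Win7) are only examples, and in practice the preference item sets the variable appropriately for each computer:

rem Create a machine-wide environment variable recording the operating system version
rem (this is what the Group Policy Preferences item does for each computer account)
setx OSVERSION "Win7" /M

rem The profile path set in ADUC, e.g. \\bigfileserver\userprofiles\mark.wilson.%osversion%,
rem then resolves to \\bigfileserver\userprofiles\mark.wilson.Win7 at logon on this machine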

Using this method, one user can hotdesk between several locations with different desktop operating systems (e.g. Windows XP and Windows 7). Each time they log on to a machine with a different operating system, a new profile will be created in a subfolder of their user name. Technically, that’s two profiles – but at least they are in one location for management purposes. Combine this with folder redirection for documents, IE favorites, etc. and it should be possible to present a consistent view between two operating system releases.

Mark Russinovich explains “the machine SID duplication myth”


One of my colleagues just flagged a blog post from Microsoft (ex-SysInternals) Technical Fellow Mark Russinovich that I’d been meaning to read when I had a little more time, in which he discusses “the machine SID duplication myth”. It seems that all of the effort we put into de-duplicating SIDs on Windows NT-based systems (NT, 2000, XP, 2003, Vista, 2008, 7 and 2008 R2) over the years was not really required…

To be honest, I don’t think anyone ever said it was required – just that having multiple machines with the same security identifier sounded like a problem waiting to happen and that generating unique SIDs was best practice.

The full post is worth a read but, in summary, the new best practice is:

“Microsoft’s official policy on SID duplication will also now change and look for Sysprep to be updated in the future to skip SID generation as an option. Note that Sysprep resets other machine-specific state that, if duplicated, can cause problems for certain applications like Windows Server Update Services (WSUS), so Microsoft’s support policy will still require cloned systems to be made unique with Sysprep.”

As you were then…

Windows native boot from VHD roundup


This is the first of several planned posts based on knowledge gained at Tech·Ed last week – but this one is necessarily brief. Mark Minasi, who presented the session that this content is based on, owns the copyright on the materials he presented (although Microsoft still distributed them to delegates). Consequently, I can’t write his session up as fully as I would like; however this post captures some of the key points (along with some narrative of my own) as I see nothing that’s not already in the public domain (and some of which has already been written about on this blog). The value in Mark’s presentation was that it pulled together various items of information into one place and explained it in a way that was simple to follow – consequently I’m not repeating the full details, just the high level overview, with some extra links where I feel they add value (Mark seems like a decent fellow – he’s only trying to protect his income and I suspect the real problem would be if I presented his materials as my own – I’m sure he would understand the fine line I’m attempting to walk here):

  • The session was titled “How Windows Storage is Changing: Everything is going VHD (CLI302)” and that’s pretty spot on – the virtual hard disk (.VHD) file format allows an entire disk partition (but not a whole drive with multiple partitions) to be packaged in a single file, complete with folder structure and NTFS permissions: Microsoft’s Storage Server uses .VHD files for iSCSI targets; Windows Backup has been able to perform Complete PC backups to .VHD files since Vista; and with Windows 7 we have the ability to natively boot Windows from a VHD file. Just to be clear – this is not client/server virtualisation (as in with a hypervisor) – this is storage virtualisation (presenting the VHD container as a logical volume, stored on a physical disk).
  • To understand native .VHD booting, it’s useful to understand recent changes in the boot process: boot.ini is no more – instead we have a Boot Configuration Data (BCD) store and a system reserved partition (incidentally, that’s the same one that is used for BitLocker, and is automatically created in Windows 7, with no drive letter assigned).
  • Running Windows Backup from the command line with wbadmin.exe requires the use of the -allcritical switch to ensure that the System Reserved partition is backed up.
  • As Mike Kolitz described back in May, access to .VHD file contents from Windows 7 and Server 2008 R2 is provided by a completely new mini-port driver in the storage stack for VHD files. This VHD driver enables requests to files in the VHD to be sent to the host NTFS file system on the physical partition where the VHD file is located. VHD operations can also be performed on a remote share.
  • The steps for creating a .VHD file, attaching (mounting) it, assigning a drive letter and formatting the volume can be found in my previous post on running Windows from a USB flash drive (as well as elsewhere on the ’net) – there’s also a brief sketch of these commands after this list.
  • The diskpart.exe command can be used to view the details of the VHD once mounted (detail disk) and it will be identified as a Msft Virtual Disk SCSI Disk Device.
  • The System Reserved partition may be populated using the bcdboot.exe command. After this partition has been created, the remainder of the disk can be partitioned and formatted, then a pre-configured .VHD can be copied to the second (non-system) partition. After editing the BCD and rebooting, the physical drive will be something like D: or E: (depending on the presence of optical drives) and the .VHD will be C:.
  • There are various methods for creating a pre-configured .VHD, including booting a reference PC from Windows PE and using imagex.exe (from the Windows Automated Installation Kit) to capture the disk contents to a .WIM file, then mounting the target .VHD and deploying the .WIM image to it. Alternatively, there is a SysInternals tool called Disk2VHD.
  • The changes to the BCD are also documented in a previous post on this site but Mark also highlighted the existence of the [locate] parameter instead of specifying a drive manually (James O’Neill uses it in his post on booting from VHD and the joy of BCDEdit).
  • There are GUI tools for editing the BCD, but bcdedit.exe is worth getting to know:

    “The GUI BCDEdit commands are rather like having a 3 metre ladder for a 5 metre wall” … “Step into the light, come to the command line, in the command line there is freedom my friends.”

    [Mark Minasi at TechEd Europe 2009]

  • Booting from VHD is a great feature but it does have its limitations: for instance I can’t use it on my everyday notebook PC because the current release doesn’t support hibernation or BitLocker.
  • To finish up his presentation, Mark demonstrated an unsupported method for installing Windows directly to .VHD: run Windows setup and press Shift and F10 to break out into a command prompt; wipe and partition the hard drive, create and attach a new .VHD; ignore Windows setup’s protests that it can’t be installed to the .VHD – click the Next button anyway and it should work (although it may be patched in a future release).
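
For reference, the basic native VHD boot workflow described above can be sketched from an elevated command prompt as follows. This is only an outline based on the steps in this post – the file name, size, volume letters and the BCD identifier are examples, and the detailed write-ups linked elsewhere on this blog cover the specifics:

diskpart
create vdisk file=C:\vhd\win7.vhd maximum=25600 type=expandable
select vdisk file=C:\vhd\win7.vhd
attach vdisk
create partition primary
format fs=ntfs quick label="Win7VHD"
assign letter=V
exit

rem Apply a captured image to the mounted VHD (or copy a pre-configured .VHD into place)
imagex /apply d:\images\win7.wim 1 V:\

rem Populate the System Reserved partition (here assigned S:) and add a BCD entry for the VHD
rem (replace {guid} with the identifier returned by the bcdedit /copy command)
bcdboot V:\Windows /s S:
bcdedit /copy {current} /d "Windows 7 (VHD boot)"
bcdedit /set {guid} device vhd=[locate]\vhd\win7.vhd
bcdedit /set {guid} osdevice vhd=[locate]\vhd\win7.vhd
bcdedit /set {guid} detecthal on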

Finally, if the contents of this post are interesting, this blog recently featured two guest posts from my friend and colleague, Garry Martin that build on the concepts described above: in the first post, Garry described the process for booting Windows 7 from VHD on a Windows XP system; the second went deep into an unsupported, but nevertheless useful, method for booting Windows 7 or Server 2008 R2 from a VHD on removable media… perhaps a USB flash drive? There are also some useful links in Mike Ormond’s post on native VHD booting and Jon Galloway has a whole bunch of tips even if he is still searching for his virtual machine nirvana.

Tech·Ed Europe 2009


Those who follow me on Twitter may have noticed that I’ve spent the last week at Microsoft’s European technical education conference – Tech·Ed Europe – which was held in Berlin this year.

It was a great week to be in Berlin as it coincided with Germany’s celebrations for the 20th anniversary of the fall of the Berlin Wall (der Mauerfall) and I was lucky enough to be at the Brandenburg Gate, standing in the rain, a couple of hundred metres away from political heavyweights past and present, watching a line of 1000 “dominos” tumbling to signify the fall of the wall. I don’t want to give the impression that Tech·Ed is just a jolly though – actually it’s far from it and I spent half my weekend travelling to get there, before attending sessions from 9am to around 7pm most days and then networking in the evenings. This was my first Tech·Ed since 2001, for various family and business reasons, and it was both tremendously rewarding and very hard work.

Firstly, I should try and give some indication of the size of the event: more than 7200 people spread over several halls in a convention centre; more than 110 partners in the exhibition hall; hundreds of Microsoft staff and volunteers in the Technical Learning Center; around 600 sessions in something like 20 session rooms – only 21 of which can fit into any one attendee’s agenda; a keynote with seating for all 7200 people; catering for everyone too (including the 460 staff); and a lot of walking to/from sessions and around the centre.

So, what sort of content is covered in the sessions? This year Tech·Ed had a mixture of IT Pro and Developer content but over the years it’s been held as separate developer and IT Pro events on consecutive weeks – and, if I go back far enough, there used to be a separate IT Pro conference (the Microsoft Exchange Conference, later renamed IT Forum). This year there didn’t seem to be as much for coders at Tech·Ed, but they have a Professional Developer Conference (PDC) in Los Angeles next week; web developers have their own conference too (MIX); and, if IT management is more your forte, the Microsoft Management Summit (MMS) is intended for you. Microsoft’s description of Tech·Ed is as follows:

Tech·Ed Europe
“Provides developers and IT professionals the most comprehensive technical education across Microsoft’s current and soon-to-release suite of products, solutions and services. This event provides hands-on learning, deep product exploration and opportunities to connect with industry and Microsoft experts one-to-one. If you are developing, deploying, managing, securing and mobilising Microsoft solutions, Tech·Ed Europe is the conference that will help you solve today’s real-world IT challenges and prepare for tomorrow’s innovations.”

This week I attended a wide variety of sessions covering topics as diverse as using hacker techniques to aid in IT administration, troubleshooting Windows using SysInternals tools, managing and monitoring UNIX and Linux systems using System Center Operations Manager, and looking at why the various architectures for desktop delivery don’t matter so much as the way in which they are managed. Meanwhile, colleagues focused on a variety of messaging and collaboration topics, or on directory services. I’m pleased to say that I learned a lot this week. So much, indeed, that by Friday lunchtime I was struggling to take any more in – thankfully one of the benefits of attending the event is a year’s subscription to TechNet Online, giving me access to recorded versions of the sessions.

When I first attended Tech·Ed, back in 1998, my career was only just getting going. These days, I have 15 years industry experience and I now know many of the event organisers, Microsoft staff, and speakers – and one of the reasons is the tremendous networking opportunity that events like this afford. I didn’t spend much time around the trade stands but I did make sure I introduced myself to key speakers whose subject material crosses my own expertise. I also met up with a whole load of people from the community and was able to associate many faces with names – people like Sander Berkouwer and Tamás Lepenye (who I knew from our online interactions but had not previously had the chance to meet in person) as well as Steven Bink (who I first met a couple of years ago, but it’s always good to see him around). But, by far the most fortuitous interaction for me was meeting Microsoft Technical Fellow Mark Russinovich on Friday morning. I was walking into the conference centre and realised that Active Directory expert John Craddock (whom I had shared a taxi with on the way from the airport earlier in the week) was next to me – and then I saw he was with Mark, who is probably the best known Windows operating system internals expert (with the possible exception of Dave Cutler) and I took the opportunity to introduce myself. Mark won’t have a clue who I am (apart from the hopeless groupie who asked him to pose for a picture later in the day) but, nevertheless, I was able to introduce myself. Then, there was the Springboard Community Partei – a real opportunity to meet with international speakers and authors like Mark Minasi, as well as key Microsoft staff like Stephen Rose (Microsoft Springboard), Ward Ralston (Windows Server 2008 R2 Group Product Manager) and Mark Russinovich (although I didn’t actually see him at the party, this video shows he was there) – as well as MVPs like Sander Berkouwer, Aidan Finn and Thomas Lee. These are the events that lead to lasting relationships – and those relationships provide real value in the IT world. Name dropping in a blog post is one thing – but the IT world in which we live is a small place – Aidan is writing a book with Mark Minasi and you never know what opportunities may arise in future.

So, back to the point – Tech·Ed is one of my favourite IT events and I would love to attend it more frequently. At the stage my career has reached I no longer need week-long training courses on technical products, but 75 minute sessions to give an overview of a specific topic area are perfect – and, at around £2000 for a week of technical education and networking opportunity, Tech·Ed is something I’d like to persuade my employer to invest in more frequently…

…I’ll have to wait and see on that, but Tech·Ed 2010 will be held in Berlin again next November – fingers crossed I’ll be one of the attendees.

A quick look at Microsoft Surface


A couple of weeks back I managed to get a close look at a Microsoft Surface table. Although Surface has been around for a while now, it was the first time I’d been “hands on” with one and, considering it’s really a bunch of cameras, and a PC running Windows Vista in a cabinet a bit like a 1980s Space Invaders game, it was actually pretty cool.

One thing I hadn’t appreciated previously is that Surface uses a totally different technology to a multitouch monitor: rather than relying on capacitance, the Surface table is sensitive to anything that reflects or absorbs infrared light. It uses an infrared emitter and a series of cameras to detect light reflected by something on the surface, then processes the image and detects shapes. There’s also an API so that software can decide what to do with the resulting image, and a DLP projector to project the user interface on the glass (with an infrared filter so as not to confuse the input system). At the moment, the Surface display is only 1024×768 pixels but that didn’t seem to be restrictive in any way – even with such a physically large display.

In some ways Surface behaves like a touch device and, because it has multiple cameras, it can perform stereoscopic three-dimensional gestures; however, as it lacks direct touch capabilities, there is no concept of a hover/mouse-over. Indeed, the Surface team’s API was taken and extended in the Microsoft .NET Framework version 4 to work with Windows Touch and, at some point in the future, the Surface and Windows Touch APIs will converge.

The Surface technology is unable to accommodate pressure sensitivity directly but the underlying processor is just a PC and has USB ports, so peripherals could be used to extend the available applications (e.g. a fingerprint reader, card reader, etc.).

Surface can also recognise the type of object on the glass (e.g. finger, blob, byte tag) and it returns an identifier along with X and Y co-ordinates and orientation. When I placed my hand on the device, it was recognised as five fingers and a blob. Similarly, objects can be given a tag (with a value), allowing for object interaction with the table. Surface is also Bluetooth and Wi-Fi enabled so it’s possible to place a device on the surface and communicate with it, for example copying photos from the surface to a phone, or exchanging assets between two phones via the software running on the table. Finally, because Surface understands the concepts of flick and inertia, it’s possible to write applications that make use of this (such as the demonstration application that allows a globe to be spun on the Surface display, creating a rippled water effect that it feels like you are interacting with, simulating gravity, adding sprung connections between items on the display, or making them appear to be magnetic).

One technology that takes this interaction even further (sometimes mistakenly referred to as Surface v2) is Microsoft’s SecondLight, which uses another set of technologies to differentiate between the polarisation properties of light so images may be layered in three dimensions. That has the potential to extend the possibilities of a Surface-like device even further and offer very rich interaction between devices on the Surface.

At present, Surface is only available for commercial use, with a development SKU offering a 5-seat licence for the SDK and the commercial unit priced at £8,500. I’m told that, if a developer can write Windows Presentation Foundation (WPF) code, they can write Surface applications and, because Surface runs WPF or XNA, just as an Xbox or a PC does, it does have the potential for games development.

With touch now a part of the operating system in Windows 7, we should begin to see increasing use of touch technologies although there is a key difference between Surface and Windows Touch as the vertically mounted or table form factor affects the user interface and device interaction – for example, Surface also detects the direction from which it is being touched and shows the user interface in the correct orientation. In addition, Surface needs to be able to cope with interaction from multiple users with multiple focus points (imagine having multiple mice on a traditional PC!).

My hour with Surface was inspiring. The key takeaways were that this is a multi-touch, multi-user, multi-directional device with advanced object interaction capabilities. Where it has been used in a commercial context (e.g. AT&T stores) it has mostly been a novelty; however there can be business benefits too. In short, before deploying Surface, it’s important to look further than just the hardware costs and the software development costs, considering broader benefits such as brand awareness, increased footfall, etc. Furthermore, because Surface runs Windows, some of the existing assets from another application (e.g. a kiosk) should be fairly simple to port to a new user interface.

I get the feeling that touch is really starting to go somewhere and is about to break out of its niche, finding mainstream computing uses and opening up new possibilities for device interaction. Surface was a research project that caught Bill Gates’ attention; however there are other touch technologies that will build on this and take it forward. With Windows Touch built into the operating system and exciting new developments such as SecondLight, this could be an interesting space to watch over the next couple of years.

Using a Windows System Image backup to transfer a configuration between computers


One of my colleagues left our organisation a couple of weeks ago and his notebook PC was up for grabs (kind of like vultures looking for prey, my manager and I were trying to grab the best bits of his relinquished IT assets…). To be honest, the PC is only marginally better than the one I had already but it did have a slightly faster processor (Intel Core 2 Duo Mobile P8400 vs. T7500), a larger hard disk, and was in better physical condition (I’ll try not to drop this one!). I did need to transfer my configuration to the “new” machine quickly though (i.e. between the start and the end of our team meeting today!) so that my “old” machine could be reallocated to someone in need of a more modern PC.

I could have messed around with user state migration onto a fresh build; however I’m flying out to TechEd Europe at the weekend and I wanted to be sure that I had all my applications working so I tried a different approach. The two computers are similar, but not identical (both Fujitsu-Siemens Lifebooks – one is an S7210 and the other is an S7220) so I decided to try creating a Windows System Image and restoring it onto a different machine, then letting Plug and Play sort out the hardware. It’s a bit messy (with new network adapters etc.) but the theory was sound.

Not only was the theory sound, but it worked. After booting the “new” machine from the Windows 7 Repair Disc that I was prompted to create at the end of the backup, I restored my system, complete with all applications and data. Plug and Play did indeed identify all of my hardware, with Microsoft Update supplying a missing display driver (which would have happened automatically if I had been online at the time). Windows even managed to reactivate itself as the product key was still valid, so my system is reporting itself as genuine (note that Windows licences remain with individual computers; however, in this case both machines were licensed for Windows 7 using a volume licence product key).

It’s important to note that this effectively cloned the machine (yes, I could have used any number of disk imaging products for this, but I was using the out-of-the-box tools) and so I was careful not to have both machines on the network at the same time. Indeed the last step (before passing the “old” machine on to my manager) was to securely erase my data partition, which I did using the cipher command, before booting into the Windows Recovery Environment one more time to run up diskpart and remove all of the disk partitions.
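
For anyone wanting to do something similar, the commands involved look roughly like this – the drive letter and disk number are examples, and note that cipher /w only scrubs free space, so delete the files from the partition first:

rem Overwrite the free space on the (now emptied) data partition
cipher /w:D:\

rem Then, from a Windows Recovery Environment command prompt, remove all partitions
diskpart
select disk 0
clean
exit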

The only remaining hurdle is moving the (so far empty) BitLocker Drive Encryption Partition from its current location in the middle of my hard disk (which was the end of the smaller disk in my old machine) but that should be possible as I haven’t actually encrypted the drive on this PC.

Not bad for a few hours’ work, especially as there was no downtime involved (I was able to use the “old” machine to deliver my presentation whilst the “new” one was being prepared).

Useful Links: October 2009


A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

Protecting my netbook with BitLocker


One of the reasons I run Windows 7 Ultimate edition on my netbook is to take advantage of features like BitLocker. For those who are not aware of this technology, BitLocker has offered whole drive encryption for fixed hard disks since Windows Vista but Windows 7 also includes encryption capabilities for removable devices (BitLocker To Go).

Even though I don’t keep much data on my netbook, it’s exactly the sort of device that’s likely to be lost or stolen and it seems like a perfect candidate for data encryption – although my main concern was that I might encrypt the device and then lock myself out (and I’m not the only one who’s had those concerns). Luckily there are options for key recovery – ranging from storing a copy of the key in a file or in Active Directory (not applicable for me as my netbook is not domain-joined), to the most basic approach of printing the key on a piece of paper and keeping it in a safe place (i.e. not the carry case for my computer!).

So, armed with the knowledge that I had backed up all my critical data, just in case something went wrong, last weekend I “BitLockered” my netbook and I’m pleased to say it was really straightforward (especially as Windows 7 creates the necessary drive partition at install-time). It would have been even easier if my computer had a trusted platform module (TPM) chip but, even so, Windows can be configured to allow encryption without a TPM – I just need to supply a startup key when I turn the computer on – in this case I used a small capacity USB thumb drive to store the key, then remove it from the computer after the drive has been unlocked. In effect, I can only start (or resume) the computer with that USB “key” – or enter the recovery key to disable the encryption entirely.

There are two common ways to allow Windows to use BitLocker without a TPM: one involves editing the Local Security Policy and the other uses a few registry tweaks, which is the one I chose:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\FVE]
"UseAdvancedStartup"=dword:00000001
"EnableBDEWithNoTPM"=dword:00000001
"UseTPM"=dword:00000002
"UseTPMPIN"=dword:00000002
"UseTPMKey"=dword:00000002
"UseTPMKeyPIN"=dword:00000002

To revert to the default settings, use a .reg file with the following contents:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\FVE]
"UseAdvancedStartup"=-
"EnableBDEWithNoTPM"=-
"UseTPM"=-
"UseTPMPIN"=-
"UseTPMKey"=-
"UseTPMKeyPIN"=-

Then, using Windows Explorer, right click the drive you want to encrypt and select the option to turn on BitLocker, and follow the wizard. Make sure you store a copy of the recovery key, as this will be required to recover the data on a BitLocker protected drive.
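
Alternatively, the same operation can be driven from an elevated command prompt using manage-bde.exe – a rough sketch, assuming C: is the operating system drive and E: is the USB drive that will hold the startup key:

rem Turn on BitLocker for C:, storing a startup key on the USB drive and
rem generating a numerical recovery password to save or print
manage-bde -on C: -StartupKey E:\ -RecoveryPassword

rem Check encryption progress from the command line
manage-bde -status C: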

It took a while for my drive to encrypt and, despite almost every reference for this that I’ve seen saying that a dialogue box will be presented showing encryption progress, I didn’t see this – all I saw was that the drive was almost full and lots of hard drive activity, then I got my free space back and the icon for the drive had a padlock and a key on it. Now, if I right click the drive there are options to manage BitLocker, including duplicating the startup key and saving/printing a new copy of the recovery key.

All in all, it was pretty painless and I haven’t noticed any performance degradation but if someone does take a fancy to my netbook, they won’t be able to access the data on it.

For more information, see the Windows BitLocker Drive Encryption Step-by-Step Guide on the Microsoft TechNet website and the BitLocker drive encryption team blog (although that hasn’t been updated in a while). Michael Pietroforte has also compared BitLocker with TrueCrypt, concluding that TrueCrypt lets you choose your own recovery passphrase, whereas BitLocker can be managed with Group Policy and its keys can be stored in Active Directory Domain Services. There’s more information on storing BitLocker keys in Active Directory on the TechNet website (domain controllers must be running Windows Server 2003 SP1 or higher and schema extensions are required).

Apple’s new multitouch mouse misses the point


Last week Apple updated its product line, ahead of Microsoft’s Windows 7 launch, and one of the new announcements was a replacement for the “Mighty Mouse”, which was quietly killed off a few weeks back after years of doing anything but living up to its name (as Adam Pash notes in Lifehacker’s coverage of Apple’s new lineup).

I first heard about Apple’s new “Magic Mouse” on Twitter:

“RT @real_microsoft: RT @Mirweis Once again #Apple seems to have nosed ahead of #Microsoft with the multitouch mouse: ”

[@michaelsfo]

and Apple’s latest mouse is a multitouch device that uses gestures to control the screen. As should be expected, it looks great but, as TechRadar reported, it doesn’t support a key gesture – the pinch zoom that we first saw on the iPhone and that Apple has made synonymous with multitouch through its advertising.

Furthermore, there’s no touch screen on any of Apple’s refreshed line-up. In fact, the iMac changes are mostly evolutionary (and there’s a new unibody entry-level MacBook). Meanwhile, with the launch of Windows 7, Microsoft now has advanced touch capability available within the operating system. A multitouch mouse is cool – seriously cool – but the real advantages of touch come with touch screens and other displays that take concepts like the Microsoft Surface table into mainstream computing uses.

Some people might not think touch is really a big deal, or that it’s just a bit gimmicky right now – but step back and take a look at what’s happened with smartphones: in 2007, Apple launched the iPhone and all we’ve seen since then is an endless stream of competing devices – each with multitouch capabilities. Now that’s crossing over into the PC marketplace and, unlike tablet PCs, or early Windows Mobile devices, there’s no need for a stylus and that’s why I believe touch will become much more significant than it has been previously. Only yesterday, I watched my young sons (both of whom are under 5) using one of Ikea’s play kiosks and they instantly knew what to do to colour in a picture on screen. As soon as prices drop, I’ll be buying a multitouch monitor for them to use with a PC at home as I expect touch to replace the mouse as the interface that their generation uses to access computing devices.

Far from nosing ahead of Microsoft, I believe Apple has missed the point with its new mouse (please excuse the, entirely accidental, pun). Just as in the years when they insisted that mice only needed a single button (indeed, one of the problems that made the Mighty Mouse so unreliable was that it offered all the functionality of a multi-button mouse with several contact switches under a single button shell in order to maintain the appearance of a single-button mouse), now they are implementing touch on trackpads and mice, rather than on screen. Sure, fingerprints on glass don’t look good but that hasn’t held back the iPhone – and nor would it the iMac or MacBook if they implemented multitouch on screen. For now, at least, Apple is holding off on touchscreen displays, whilst mainstream PC manufacturers such as Dell are embracing the potential for multitouch applications that the latest version of Windows offers. As for the criticism that multitouch monitors are spendy and Apple’s mouse is not, the monitors will come down in price pretty quickly and, based on my experience with Apple’s previous mouse, I won’t be rushing out to spend £55 on the latest model.

As it happens, I bought a mouse to match my white MacBook a couple of weeks ago. Ironically, it’s from Microsoft – the Arc mouse – and it manages to look good, feel good, and fold up for transportation with its (tiny) transceiver neatly connected (with a magnet) to the underside. It seems that Jonathan Ive is not the only person who can design functional and stylish computer hardware (most of the time).