The Windows Network Connection Status Icon (NCSI)

Last night, whilst working in the Premier Inn close to the office, I noticed the browser going to an interesting URI after I connected to the hotel Wi-Fi.  That URI was http://www.msftconnecttest.com/redirect and a little more research tells me it’s used by Windows 10 to detect whether the PC has an Internet connection or not.

The feature is actually the Network Connection Status Icon (NCSI) and, more accurately, the URIs used are:

  • http://www.msftncsi.com/ncsi.txt (which returns the plain text “Microsoft NCSI”)
  • http://www.msftconnecttest.com/connecttest.txt (which returns the plain text “Microsoft Connect Test”)

The URI I saw actually redirects to MSN, whereas the ones above return static text to indicate a successful connection.
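
Out of curiosity, it’s easy to check what those endpoints return from PowerShell – a quick sketch (Invoke-WebRequest needs PowerShell 3.0 or later):

  (Invoke-WebRequest -Uri 'http://www.msftconnecttest.com/connecttest.txt' -UseBasicParsing).Content   # "Microsoft Connect Test"
  (Invoke-WebRequest -Uri 'http://www.msftncsi.com/ncsi.txt' -UseBasicParsing).Content                 # "Microsoft NCSI"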

For those who want to know more, there’s a detailed technical reference on TechNet (which dates back to Windows Vista) and an extensive blog post on the Network Connection Status Icon.

Short takes: deleting bit.ly Bitlinks; backing up and restoring Sticky Notes; accessing cmdlets after installing Azure PowerShell

Another collection of short notes to add to my digital memory…

Deleting bit.ly links

Every now and again, I spot some spam links in my Twitter feed – usually prefixed [delicious]. That suggests to me that there is an issue with Delicious or with Twitterfeed (the increasingly unreliable service I use to read certain RSS feeds and tweet on my behalf) and, despite password resets (passwords are so insecure), it still happens.

A few days ago I spotted some of these spam links still in my bit.ly links (the link shortener behind my mwil.it links, which also owns Twitterfeed) and I wanted to permanently remove them.

Unfortunately, according to the “how do I delete a Bitlink” bit.ly knowledge base article – you can’t.

Where does Windows store Sticky Notes?

Last Friday (the 13th) I wrote about saving my work before my PC was rebuilt.

One thing I forgot about was the plethora of Sticky Notes on my desktop so, today, I was searching for advice on where to find them (in my backup) so I could restore them.

It turns out that Sticky Notes are stored in user profiles, under %appdata%\Microsoft\Sticky Notes, in a file called StickyNotes.snt. Be aware, though, that the folder is not created until the Sticky Notes application has been run at least once. Restoring my old notes was as easy as:

  1. Run the Sticky Notes desktop application in Windows.
  2. Close Sticky Notes.
  3. Overwrite the StickyNotes.snt file with a previous copy.
  4. Re-open Sticky Notes.
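
The same restore can be scripted – a minimal sketch, assuming the backup copy of the profile is at D:\Backup (a hypothetical path) and Sticky Notes is not running:

  # copy the backed-up notes file over the freshly-created one
  Copy-Item -Path 'D:\Backup\AppData\Roaming\Microsoft\Sticky Notes\StickyNotes.snt' `
            -Destination "$env:APPDATA\Microsoft\Sticky Notes\StickyNotes.snt" -Force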

Azure PowerShell installation requires a restart (or explicit loading of modules)

This week has involved a fair amount of restoring tools/settings to a rebuilt PC (did I mention that mine died in a heap last Friday? If only the hardware and software were supplied by the same vendor – oh they are!). After installing the Azure PowerShell package from the SCCM Software Center, I found that cmdlets returned errors like:

Get-AzureRmResource : The term ‘Get-AzureRmResource’ is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

After some RTFMing, I found this:

This can be corrected by restarting the machine or importing the cmdlets from C:\Program Files\WindowsPowerShell\Modules\Azure\XXXX\ as follows (where XXXX is the version of PowerShell installed):

  import-module "C:\Program Files\WindowsPowerShell\Modules\Azure\XXXX\azure.psd1"
  import-module "C:\Program Files\WindowsPowerShell\Modules\Azure\XXXX\expressroute\expressroute.psd1"
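
If a restart isn’t convenient, it’s also worth checking what the installer actually put on disk before importing anything – a sketch (module names and the XXXX version folder vary with the package installed):

  Get-Module -ListAvailable -Name Azure*        # list the Azure modules PowerShell can already see
  Get-ChildItem 'C:\Program Files\WindowsPowerShell\Modules' -Recurse -Filter *.psd1 |
      Select-Object -ExpandProperty FullName    # find the module manifests to pass to Import-Module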

Short takes: Unicode characters in Windows; OS X Remote Disc goes AWOL

More micro-posts from the collection of open tabs in my browser…

Unicode characters in Windows

Sometimes, when tweeting, it’s useful to be able to type the Unicode horizontal ellipsis (…) rather than three full stops (...). It might look similar, but that’s two fewer characters out of 140.  I remember that, back in the early days of Windows, I could enter special characters using the numeric keypad and it seems that still works (sort of): FileFormat.Info has some useful information on entering Unicode characters in Microsoft Windows.
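
For reference, the horizontal ellipsis is code point U+2026, which is easy to confirm (or generate) from PowerShell – a trivial sketch:

  [char]0x2026      # produces the … character
  [int][char]'…'    # 8230, i.e. 0x2026, confirming the code point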

Mac OS X Remote Disc goes AWOL whilst installing Adobe Lightroom

My new Mac Mini doesn’t have an optical drive. That’s not generally a problem except I needed to install Lightroom on it, so I used OS X’s Remote Disc technology to share the DVD drive from my old MacBook across the network.  The software installation was progressing nicely until, right at the end, the Adobe installer wanted me to insert the disc! As I was already connected to a logical disc, I had no way forward but to abandon the installation, connect a USB DVD drive and try again.  Seems it’s not the universal solution to accessing optical media that I had hoped…

To add insult to injury, I then found (thanks to the Lightroom Queen) that the Lightroom downloads on the Adobe website are the full program, so I could have downloaded the software and installed it locally – all I really needed was my licence key!

Accessing my iCloud photostream from a Windows PC

I use a lot of Apple products and, not surprisingly, when iOS5 was released, I upgraded my iPhone and my iPad. One of the big advancements with iOS5 is the integration with iCloud, Apple’s cloud service for synchronising data between devices so, when I took a look a few days later, I was a bit confused. From a Windows PC I logged in and saw links for Mail, Contacts, Calendar, Find My iPhone and iWork – all with familiar icons – but what I couldn’t fathom was where my photostream was. It certainly wasn’t visible in iCloud…

It turns out that there is a separate application needed to sync an iCloud photostream with a Windows PC. I installed it, it crashed (something to do with being behind our proxy servers at work, I think) but after a PC reboot and connection to my home network, photos from my iOS devices started showing up in the %userprofile%\My Pictures\Photo Stream\My Photo Stream folder.  The iCloud Control Panel for Windows also integrates with Safari 5.1.1 or Internet Explorer 8 bookmarks and with Outlook 2007 Contacts and Calendars.

All I need now is the ability to sync ActiveSync contacts from my iPad (the ones I have in Office 365)… I wish.

An alternative enterprise desktop

Earlier this week, I participated in an online event that looked at the use of Linux (specifically Ubuntu) as an alternative to Microsoft Windows on the enterprise desktop.

It seems that every year is touted as the year of Linux on the desktop – so why hasn’t it happened yet? Or maybe 2011 really is the year of Linux on the desktop and we’ll all be using Google Chrome OS soon. Somehow I don’t think so.

You see, the trouble with any of the “operating system wars” arguments is that they miss the point entirely. There is a triad of people, process and technology at stake – and the operating system is just one small part of one of those elements. It’s the same when people start to compare desktop delivery methods – thick, thin, virtualised, whatever – it’s how you manage the desktop that counts.

From an end user perspective, many users don’t really care whether their PC runs Windows, Linux, or whatever-the-next-great-thing-is. What they require (and what the business requires – because salaries are expensive) is a system that is “usable”. Usability is in itself a subjective term, but it generally includes a large degree of familiarity – familiarity with the systems that they use outside work. Just look at the resistance to major user interface changes like the Microsoft Office “ribbon” – now think what happens when you change everything that users know about using a PC. End users also want something that works with everything else they use (i.e. an integrated experience, rather than jumping between disparate systems). And those who are motivated by the technology don’t want to feel that there is a two-tier system whereby some people get a fully-featured desktop experience and others get an old, cascaded PC with a “light” operating system on it.

From an IT management standpoint, we want to reduce costs. Not just hardware and software costs but the costs of support (people, process and technology). A “free” desktop operating system is just a very small part of the mix; supporting old hardware gets expensive; and the people costs associated with major infrastructure deployments (whether that’s a virtual desktop or a change of operating system) can be huge. Then there’s application compatibility – probably the most significant headache in any transformation. Yes, there is room for a solution that is “fit for purpose” and that may not be the same solution for everyone – but it does still need to be manageable – and it needs to meet all of the organisation’s requirements from a governance, risk and compliance perspective.

Even so, the days of allocating a Windows PC to everyone in an effort to standardise every single desktop device are starting to draw to a close. IT consumerisation is bringing new pressures to the enterprise – not just new device classes but also a proliferation of operating system environments. Cloud services (for example, consuming software as a service) are a potential enabler – helping to get over the hurdles of application compatibility by boiling everything down to the lowest common denominator (a browser). The cloud is undoubtedly here to stay and will certainly evolve but even SaaS is not as simple as it sounds, with multiple browser choices, extensions, plug-ins, etc. It seems that, time and time again, it’s the same old legacy applications (generally specified by business IT functions, not corporate IT) that make life difficult and prevent the CIO from achieving the utopia that they seek.

2011 won’t be the year of Linux on the desktop – but it might just be the year when we stop worrying about standardisation so much; the year when we accept that one size might not fit all; and the year when we finally start to think about applications and data, rather than devices and operating systems.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Hardware specific application targeting with MDT 2010

Guest Post
[Editor’s note: this post was originally published on Garry Martin’s blog on 28 October 2009. As Garry’s closing down his own site but the content is still valid, he asked me if I’d like to post it here and I gratefully accepted!]

I’m running a Proof of Concept (PoC) at work at the moment which is making use of Microsoft Deployment Toolkit (MDT) 2010. Whilst most of the drivers we need can be managed by using the Out-of-Box Drivers Import function, some are delivered by the OEM as .EXE or .MSI packages. Whilst we could use multiple Task Sequences to manage these, or even select the applications individually at build time, our preference was to use some sort of hardware specific targeting.

Process

First of all, we needed to uniquely identify the hardware, and for this purpose we used the Plug and Play (PnP) Device ID, or hardware ID as it is sometimes called.

To determine the hardware IDs for a device by using Device Manager:

  1. Install the device on a test computer
  2. Open Device Manager
  3. Find your device in the list
  4. Right-click the entry for your device, and then click Properties
  5. In the Device Properties dialog box, click the Details tab
  6. In the Property list, click Hardware Ids
  7. Under Value, make a note of the characters displayed. They are arranged from the most specific at the top to the most general at the bottom. You can select one or more items in the list, and then press CTRL+C to copy them to the clipboard.

In our case, the Sierra Wireless MC8755 Device gave us USB\VID_1199&PID_6802&REV_0001 as the most specific value and USB\VID_1199&PID_6802 as the least specific, so we made a note of these before continuing.
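
Device Manager isn’t the only way to get at these IDs – a WMI query from PowerShell returns the same information (a sketch; the filter is just an example matching the device above):

  Get-WmiObject -Class Win32_PnPEntity |
      Where-Object { $_.DeviceID -like 'USB\VID_1199*' } |    # illustrative filter
      Select-Object -Property Name, DeviceID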

Next, we downloaded the Sierra Wireless MC87xx 3G Watcher .MSI package from our notebook OEM support site. Sierra Wireless have instructions for performing a silent install of the 3G Watcher package, so we used those to understand the installation command we would need to use.

So, we had a unique ID for targeting, the installation package, and the installation command line we would need to use. Now we needed to configure MDT to deploy it. First, we create a New Application.

  1. In the MDT 2010 Deployment Workbench console tree, right-click Applications, and click New Application
  2. On the Application Type page, click Next to install an application and copy its source files to the deployment share
  3. On the Details page, type the application’s name in the Application Name box, and click Next
  4. On the Source page, type the path or browse to the folder containing the application’s source files, and click Next
  5. On the Destination page, click Next to use the default name for the application in the deployment share
  6. On the Command Details page, type the command you want to use to install the application, and click Next. We used the following (a quick way to test the command locally is sketched after this list):
    msiexec.exe /i 3GWatcher.msi /qn
  7. On the Summary page, review the application’s details, and click Next
  8. On the Confirmation page, click Finish to close the New Application Wizard.
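
Before relying on the command from step 6, it’s worth a quick local test that the package really does install silently – a sketch, run from the folder containing the .MSI:

  # msiexec returns 0 on success (3010 means a reboot is required)
  $install = Start-Process -FilePath msiexec.exe -ArgumentList '/i 3GWatcher.msi /qn' -Wait -PassThru
  $install.ExitCode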

Next we modify the Task Sequence and create our query.

  1. In the MDT 2010 Deployment Workbench console tree, click Task Sequences
  2. In the details pane, right-click the name of the Task Sequence you want to add the Application to, and then click Properties
  3. In the Task Sequence Properties dialog box, click the Task Sequence tab
  4. Expand State Restore and click on Install Applications
  5. Click the Add button, and select General, then Install Application
  6. On the Properties tab for Install Application, type the application’s name in the Name box, and click the Options tab
  7. On the Options tab, click the Add button and select If statement
  8. In the If Statement Properties dialog box, ensure All Conditions is selected and click OK
  9. On the Options tab, click the Add button and select Query WMI

This is where we’ll now use a WMI query that will provide our Hardware Specific Application Targeting. You’ll need to modify this for your particular hardware, but we previously discovered that our least specific Device ID value was USB\VID_1199&PID_6802 so we will use this to help form our query.

  1. In the Task Sequence WMI Condition dialog box, ensure the WMI namespace is root\cimv2 and type the following in the WQL Query text box, clicking OK when finished:
    SELECT * FROM Win32_PNPEntity WHERE DeviceID LIKE '%VID_1199&PID_6802%'
  2. Click OK to exit the Task Sequences dialog box
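
The condition can also be tested outside MDT, on a machine with the device installed, by running the same query from PowerShell – a quick sketch:

  Get-WmiObject -Namespace root\cimv2 -Query "SELECT * FROM Win32_PNPEntity WHERE DeviceID LIKE '%VID_1199&PID_6802%'"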

And that’s it. When you deploy a computer using the modified Task Sequence, the WMI query will run and, if matched, install the application. If a match can’t be found, the application won’t be installed. Hardware Specific Application Targeting in a nutshell.

Keeping Windows alive with curated computing

Like it or loathe it, there’s no denying that the walled garden approach Apple has adopted for application development on iOS (the operating system used for the iPhone, iPad and now new iPods) has been successful. Forrester Research talk about this approach using the term “Curated Computing” – a general term for an environment where there is a gatekeeper controlling the availability of applications for a given platform. So, does this reflect a fundamental shift in the way that we buy applications? I believe it does.

Whilst iOS, Android (Google’s competing mobile operating system) and Windows Phone 7 (the new arrival from Microsoft) have all adopted the curated computing approach (albeit with tighter controls over entry to Apple’s AppStore), the majority of the world’s computers are slightly less mobile. And they run Windows. Unfortunately, Windows’ biggest strength (its massive ecosystem of compatible hardware and software) is also its nemesis – a whole load of the applications that run on Windows are, to put it bluntly, a bit crap!

This is a problem for Microsoft. On the one hand, it gives their operating system a bad name (somewhat unfairly, in my opinion – Windows is associated with its infamous “Blue Screen of Death”, yet we rarely hear about Linux/Mac OS X kernel panics or iOS lockups); but, on the other hand, it’s the same broad device and application support that has made Windows such a success over the last 20 years.

What we’re starting to see is a shift in the way that people approach personal computing. Over the next few years there will be an explosion in the number of mobile devices (smart phones and tablets) used to access corporate infrastructure, along with a general acceptance of bring your own computer (BYOC) schemes – maybe not for all organisations but for a significant number. And that shift gives us the opportunity to tidy things up a bit.

[Image: Remove the apps at the left side of the diagram and only the good ones will be left...]

A few weeks ago, Jon Honeyball was explaining a concept to me and, like many of the concepts that Jon puts forward, it makes perfect sense (and infuriates me that I’d never looked at things this way before). If we think of the quality of software applications, we can consider that, statistically, they follow a normal distribution. That is to say, the applications on the left of the curve tend towards the software that we don’t want on our systems – from malware through to poorly-coded applications. Meanwhile, on the right of the curve are the better applications, right through to the Microsoft and Adobe applications that are in broad use and generally set a high standard in terms of quality.  The peak of the curve represents the point with the most apps – basically, most applications can be described as “okay”. What Microsoft has to do is lose the leftmost 50% of applications from this curve, instantly raising the quality bar for Windows applications. One way to do this is curated computing.

Whilst Apple have been criticised for the lack of transparency in their application approval process (and there are some bad applications available for iOS too), this is basically what they have managed to achieve through their AppStore.

If Microsoft can do the same with Windows Phone 7, and then take that operating system and apply it to other device types (say, a tablet – or even the next version of their PC client operating system) they might well manage to save their share of the personal computing marketplace as we enter the brave new world of user-specific, rather than device-specific computing.

At the moment, the corporate line is that Windows 7 is Microsoft’s client operating system but, even though some Windows 7 tablets can be expected, they miss the mark by some way.

Time after time, we’ve seen Microsoft stick to their message (i.e. that their way is the best and that everyone else is wrong), right up to the point when they announce a new product or feature that seems like a complete U-turn.  That’s why I wouldn’t be too surprised to see them come up with a new approach to tablets in the medium term… one that uses an application store model and a new user interface. One can only live in hope.

Installing Windows from a network server without Windows Deployment Services

I’d like to start this post with a statement:

Windows Deployment Services (WDS) is a useful role in Windows Server 2008 R2.  It’s free (to licensed Windows users), supports multicasting, and is a perfectly good method of pushing Windows images to clients…

Unfortunately that statement has a caveat:

… but it needs to be installed on an Active Directory-member computer.

For some, that’s a non-starter.  And sometimes, you just want a quick and dirty solution.

I have a small dedicated server at home to run Active Directory along with basic network services (DNS, DHCP, etc.) for my home IT.  I also run Philippe Jounin’s excellent TFTP Daemon (service edition) on it in order to support image loads on my Cisco 7940 IP Phone.

In order to rebuild the Hyper-V server that I use for infrastructure test and development, I wanted to boot across the network and install Windows Server 2008 R2 – and a few days ago I found Mark Kubacki’s post about TFTPd32 and DHCP Server – Windows Deployment Services without WDS. Perfect!  No need to install another role on my little Atom-powered server – particularly as, once this server is built, I’ll probably install WDS on it  to deploy images to my various test virtual machines!

So, this is the process – with thanks to Mark Kubacki, and to Ryan T Adams (who wrote about installing Vista without a CD Drive using TFTP – for instance, installing Windows on a netbook) who were gracious enough to blog about their experiences and give me something to build upon:

  1. Download tftpboot.exe from Ryan T Adams’ site and run it to extract the contents to a suitable hard drive location (i.e. the TFTP root folder).  Unfortunately, you probably won’t need most of this 154MB download (more on that in a moment) but it will get you started.
  2. Start tftpd32.exe (or copy the files to your TFTP root, if you are already running a TFTP service, as I was) and add tftpd32.exe (or tftpd32_svc.exe) as a Windows Firewall exception (you could just disable the firewall but I don’t recommend that approach).
  3. Either set TFTPD32 to act as a DHCP server and specify the boot file options (as Ryan describes), or configure DHCP options 066 and 067 (boot server host name and boot file name) on another DHCP server (Mark shows how to do this for the Windows DHCP Server role – and there’s a scripted example after these steps) using the IP address of the TFTP server and the boot file name of boot\pxeboot.com.
  4. Make sure that the TFTP Server is set to include PXE capability in the advanced TFTP options and that its DHCP Server capability is turned off if you are using another DHCP server.
  5. Restart the TFTP Server (or service) to pick up the configuration changes.
  6. Boot a computer (or virtual machine) from its network card, press F12 when prompted and wait for Windows PE to load, then map a drive to another machine on the network which is sharing the Windows media (I use Slysoft Virtual Clone Drive to mount an operating system ISO file and I’ve shared the virtual drive).
  7. Switch to the newly mapped drive and type setup.exe to run Windows Setup.
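
Coming back to step 3: if you’re using the Windows DHCP Server role, the two options can also be set from the command line with netsh – a sketch in which the server and scope addresses are purely illustrative:

  # 066 = boot server host name (the TFTP server), 067 = boot file name
  netsh dhcp server scope 192.168.1.0 set optionvalue 066 STRING "192.168.1.10"
  netsh dhcp server scope 192.168.1.0 set optionvalue 067 STRING "boot\pxeboot.com"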

Unfortunately, the version of the Windows Preinstallation Environment (Windows PE) that Ryan has supplied in tftpboot.exe is a 32-bit version (of Windows PE 2.0, I think).  When I tried to use this to install Windows Server 2008 R2 (which is 64-bit only), I was greeted with the following message:

This version of Z:\setup.exe is not compatible with the version of Windows you’re running.  Check your computer’s system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

I needed a 64-bit version of Windows PE.  No problem.  That’s included in the Windows Automated Installation Kit (WAIK), so I overwrote Ryan’s winpe.wim with the one from %programfiles%\Windows AIK\Tools\PETools\amd64, and restarted the computer I wanted to build.  This time Windows Setup ran with no issues and Windows Server was installed successfully.
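
For completeness, the swap itself is a single file copy – a sketch, assuming the WAIK is in its default location and that the TFTP root is C:\tftproot (adjust the destination to wherever tftpboot.exe extracted the original winpe.wim):

  Copy-Item -Path "$env:ProgramFiles\Windows AIK\Tools\PETools\amd64\winpe.wim" `
            -Destination 'C:\tftproot\boot\winpe.wim' -Force    # destination path is illustrative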

Even though I used TFTPD32, this method could be used to install Windows from just about any TFTP server (it could even be running on a totally different operating system, I guess), or even to load another WIM file (i.e. not Windows PE) from a network boot. I’m sure if I had more time I could come up with all sorts of scenarios (boot Windows directly from the network?) but, for now, I’ll stick to using this method as a WDS replacement.

Thick, thin, virtualised, whatever: it’s how you manage the desktop that counts

In the second of my post-TechEd blog posts, I’ll take a look at one of the sessions I attended where Microsoft’s Eduardo Kassner spoke about various architectures for desktop delivery in relation to Microsoft’s vision for the Windows optimised desktop (CLI305). Again, I’ll stick with highlights in note form as, if I write up the session in full, it won’t be much fun to read!

  • Kassner started out by looking at who defines the desktop environment, graphing desktop performance against configuration control:
    • At the outset, the IT department (or the end user) installs approved applications and both configuration and performance are optimal.
    • Then the user installs some “cool shareware”, perhaps some other approved applications or personal software (e.g. iTunes) and it feels like performance has bogged down a little.
    • As time goes on, the PC may suffer from a virus attack, and the organisation needs an inventory of the installed applications, and the configuration is generally unknown. Performance suffers as a result of the unmanaged change.
    • Eventually, without control, update or maintenance, the PC becomes “sluggish”.
  • Complaints about desktop environments typically come down to: slow environment; application failures; complicated management; complicated maintenance; difficulty in updating builds, etc.
  • Looking at how well we manage systems: image management; patch management; hardware/software inventory; roles/profiles/personas; operating system or application deployment; and application lifecycle are all about desktop configuration. And the related processes are equally applicable to a “rich client”, “terminal client” or a “virtual client”.
  • Whatever the architecture, the list of required capabilities is the same: audit; compliance; configuration management; inventory management; application lifecycle; role based security and configuration; quality of service.
  • Something else to consider is that hardware and software combinations grow over time: new generations of hardware are launched (each with new management capabilities) and new operating system releases support alternative means of increasing performance, managing updates and configuration – in 2008, Gartner wrote:

    “Extending a notebook PC life cycle beyond three years can result in a 14% TCO increase”

    [source: Gartner, Age Matters When Considering PC TCO]

    and a few months earlier, they wrote that:

    “Optimum PC replacement decisions are based on the operating system (OS) and on functional compatibility, usually four years”

    [source: Gartner, Operational Considerations in Determining PC Replacement Life Cycle]

    Although when looking across a variety of analyst reports, three years seems to be the optimal point (there are some variations depending on the considerations made, but the general window is 2-5 years).

  • Regardless of the PC replacement cycle, the market is looking at two ways to “solve” the problem of running multiple operating system versions on multiple generations of hardware: “thin client” and “VDI” (also known as hosted virtual desktops) but Kassner does not agree that these technologies alone can resolve the issues:
    • In 1999, thin client shipments were 700,000 against a market size of 133m PCs [source: IDC 1999 Enterprise Thin Client Year in Review] – that’s around 0.6% of the worldwide desktop market.
    • In 2008, thin clients accounted for 3m units out of an overall market of 248m units [source: Gartner, 2008 PC Market Size Worldwide] – that’s up to 1.2% of the market, but still a very tiny proportion.
    • So what about the other 98.8% of the market? Kassner used 8 years’ worth of analyst reports to demonstrate that the TCO between a well-managed traditional desktop client and a Windows-based terminal was almost identical – although considerably lower than an unmanaged desktop. The interesting point was that in recent years the analysts stopped referring to the different architectures and just compared degrees of management! Then he compared VDI scenarios: showing that there was a 10% variance in TCO between a VDI desktop and a wide-open “regular desktop” but when that desktop was locked down and well-managed the delta was only 2%. That 2% saving is not enough to cover the setup cost of a VDI infrastructure! Kassner did stress that he wasn’t saying VDI was no good at all – just that it was not for all and that a similar benefit can be achieved from simply virtualising the applications:
    • “Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model.”

      [source: Gartner, TCO of Traditional Software Distribution vs. Application Virtualization]

  • Having argued that thick vs. thin vs. VDI makes very little difference to desktop TCO, Kassner continued by commenting that the software plus services platform provides more options than ever, with access to applications from traditional PC, smartphone and web interfaces and a mixture of corporately owned and non-corporate assets (e.g. employees’ home PCs, or offshore contractor PCs). Indeed, application compatibility drives client device options and this depends upon the supported development stack and presentation capabilities of the device – a smartphone (perhaps the first example of IT consumerisation – and also a “thin client” device in its own right) is an example of a device that provides just a subset of the overall feature set and so is not as “forgiving” as a PC – one size does not fit all!
  • Kassner then went on to discuss opportunities for saving money with rich clients; but his summary was that it’s still a configuration management discussion:
    • Using a combination of group policy, a corporate base image, data synchronisation and well-defined security policies, we can create a well-managed desktop.
    • For this well-managed desktop, whether it is running on a rich client, a remote desktop client, with virtualised applications, using VDI or as a blade PC, we still need the same processes for image management, patch management, hardware/software inventory, operating system or application deployment, and application lifecycle management.
    • Once we can apply the well-managed desktop to various user roles (e.g. mobile, office, or task-based workers) on corporate or non-corporate assets, we can say that we have an optimised desktop.
  • Analysts indicate that “The PC of 2012 Will Morph Into the Composite Work Space” [source: Gartner], combining client hypervisors, application virtualisation, persistent personalisation and policy controls: effectively separating the various components for hardware, operating system and applications.  Looking at Microsoft’s view on this (after all, this was a Microsoft presentation!), there are two products to look at – both of which are Software Assurance benefits from the Microsoft Desktop Optimization Pack (MDOP) (although competitive products are available):
    • Application virtualisation (Microsoft App-V or similar) creates a package of an application and streams it to the desktop, eliminating the software installation process and isolating each application. This technology can be used to resolve conflicts between applications as well as to simplify application delivery and testing.
    • Desktop virtualisation (MED-V with Virtual PC or similar) creates a container with a full operating system environment to resolve incompatibility between applications and an alternative operating system, running two environments on the same PC [and, although Eduardo Kassner did not mention this in his presentation, it’s this management of multiple environments that provides a management headache, without suitable management toolsets – which is why I do not recommend Windows 7 XP Mode for the enterprise].
  • Having looked at the various architectures and their (lack of) effect on TCO, Kassner moved on to discuss Microsoft’s strategy.
    • In short, dependencies create complexity, so by breaking apart the hardware, operating system, applications and user data/settings the resulting separation creates flexibility.
    • Using familiar technologies: we can manage the user data and settings with folder redirection, roaming profiles and group policy; we can separate applications using App-V, RemoteApps or MED-V, and we can run multiple operating systems (although Microsoft has yet to introduce a client-side hypervisor, or a solution capable of 64-bit guest support) on a variety of hardware platforms (thin, thick, or mobile) – creating what Microsoft refers to as the Windows Optimized Desktop.
    • Microsoft’s guidance is to take the processes that produce a well-managed client to build a sustainable desktop strategy, then to define a number of roles (real roles – not departments, or jobs – e.g. mobile, office, anywhere, task, contract/offshore) and select the appropriate distribution strategy (or strategies). To help with this, there is a Windows Optimized Desktop solution accelerator (soon to become the Windows Optimized Desktop Toolkit).

There’s quite a bit more detail in the slides but these notes cover the main points. However you look at it, the architecture for desktop delivery is not that relevant – it’s how it’s managed that counts.

Maintaining a common user profile across different Windows versions

I wish I could take the credit for this, but I can’t: last week one of my colleagues (Brad Mallard) showed me a trick he has for creating a single user profile for multiple Microsoft operating systems. Michael Pietroforte wrote about the different user profile formats for Windows XP and Vista back in 2007 but Brad’s tip takes this a step further…

Using Group Policy Preferences, Brad suggests creating a system environment variable to record the operating system version for a given client computer (e.g. %osversion%) and assigning it to the computer account. Then, in Active Directory Users and Computers (ADUC/dsa.msc), set the user’s profile path to \\servername\sharename\%username%.%osversion%. ADUC will resolve the %username% portion but not the %osversion% part, so what remains will be something like \\bigfileserver\userprofiles\mark.wilson.%osversion%.
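
Group Policy Preferences is the tidiest way to create that variable but, for completeness, a computer startup script could set the same thing – a minimal sketch (the variable name and value format are whatever you choose, as long as they match the profile path):

  $os = (Get-WmiObject -Class Win32_OperatingSystem).Version            # e.g. 6.1.7601 on Windows 7
  [Environment]::SetEnvironmentVariable('osversion', $os, 'Machine')    # persist as a system environment variable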

Using this method, one user can hotdesk between several locations with different desktop operating systems (e.g. Windows XP and Windows 7). Each time they log on to a machine with a different operating system, a new profile folder will be created for that combination of user name and operating system version. Technically, that’s two profiles – but at least they are in one location for management purposes. Combine this with folder redirection for documents, IE favorites, etc. and it should be possible to present a consistent view across the two operating system releases.