Using Windows Autopilot to deploy PCs in the middle of a pandemic

A year ago, who would have thought that so many people would still be working from home because of COVID-19? That a pandemic response would lead to such a huge impact on the way we live? That we’d be having discussions about the future role of the office?

Lots of things changed in 2020. Some of them may never change back.

Changes to PC operating system deployment methods

There is a saying (attributed to the Greek philosopher, Heraclitus) that the one constant in life is change…

Over nearly 30 years working in IT, I’ve worked on a lot of PC rollouts. And the technology keeps on changing:

  • Back in 1994, I was using Laplink software with parallel cables (so much faster than serial connections) to push Windows for Workgroups 3.11 onto PCs for the UK Ministry of Defence.
  • In 2001, Ghost (which by then had been acquired by Symantec and sold as Norton Ghost) was the way to do it. Working with a Microsoft partner called Conchango, my team at Polo Ralph Lauren rolled out 4000 new and rebuilt PCs across 8 European countries, supporting multiple languages and PC hardware types with just two images.
  • By 2005, I was working for Conchango and using early versions of the Microsoft Business Desktop Deployment (BDD) solution accelerator to push standard operating environment (SOE) images to PCs for a UK retail and hospitality company.
  • By 2007, BDD had become Microsoft Deployment (later renamed the Microsoft Deployment Toolkit), which went on to integrate closely with System Center Configuration Manager.

After this, the PC deployment stuff gets a bit fuzzy. My career had moved in a different direction and, these days, I’m less worried about the detail (I have subject matter experts to rely on). My concerns are around the practicalities of meeting business requirements by making appropriate technology selections.

Which brings me back to the current day.

A set of business requirements

Imagine it’s early 2021 and you’re faced with this set of requirements:

  • Must deploy new Windows 10 PCs to a significant proportion of the business’ staff.
  • Must comply with UK restrictions and guidance in relation to the COVID-19 novel coronavirus.
  • Should follow Microsoft’s current recommended practice.
  • Must maintain compliance with all company standards for security and for information management. In particular, must not impact the company’s existing ISO 27001, ISO 9001 or Cyber Essentials Plus certifications.
  • Should not involve significant administrative overhead.

A solution, built around Windows Autopilot

The good news is that this is all possible. And it’s really straightforward to achieve using a combination of Microsoft technologies.

  • Azure Active Directory provides a universal identity platform, including conditional access and multifactor authentication.
  • Windows Autopilot takes a standard Windows 10 image (no need for customised “gold builds”) and applies appropriate policies to configure and secure it in accordance with organisational requirements. It does this by working with other Microsoft Endpoint Manager (MEM) components, like Intune (see the sketch after this list).
  • OneDrive keeps user profile data backed up to the cloud, with common folders redirected so they remain synced, regardless of the PC being used.
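
If you’re curious what the registration side looks like behind the scenes, devices are known to the Autopilot service by their hardware hashes, and the service can be queried (or populated) through Microsoft Graph. The following is a minimal sketch, assuming an Azure AD app registration with an admin-consented application permission such as DeviceManagementServiceConfig.Read.All; the tenant, client ID and secret are placeholders, and the endpoint and permission names should be checked against the current Microsoft Graph documentation:

```python
# Minimal sketch: list Windows Autopilot device identities via Microsoft Graph.
# Assumes an Azure AD app registration with an application permission such as
# DeviceManagementServiceConfig.Read.All (admin-consented). All IDs are placeholders.
import msal
import requests

TENANT_ID = "contoso.onmicrosoft.com"                 # placeholder tenant
CLIENT_ID = "00000000-0000-0000-0000-000000000000"    # placeholder app registration
CLIENT_SECRET = "use-a-key-vault-in-practice"         # placeholder secret

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in token:
    raise RuntimeError(f"Token acquisition failed: {token.get('error_description')}")

# Autopilot device identities (verify the endpoint against the Graph reference).
url = "https://graph.microsoft.com/v1.0/deviceManagement/windowsAutopilotDeviceIdentities"
response = requests.get(url, headers={"Authorization": f"Bearer {token['access_token']}"})
response.raise_for_status()

for device in response.json().get("value", []):
    print(device.get("serialNumber"), device.get("model"), device.get("enrollmentState"))
```

In practice, the OEM or reseller typically registers the hardware hashes on the organisation’s behalf when the devices are purchased, so the IT team only needs to assign an Autopilot profile; something like the script above is mostly useful for checking what has landed in the tenant.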

What does it look like?

My colleague, Thom McKiernan (@ThomMcK), created a great unboxing video of his experience, opening up and getting started with his Surface Pro 7+:

(I tried to do the same with my Surface Laptop 3 but unboxing videos are clearly not my thing.)

Why does this matter?

The important thing for me is not the tech. It’s the impact that this had on our business. To be clear:

We deployed new PCs to staff, during a national lockdown, without the IT department touching a single PC.

For me, it took around 10 minutes from opening the box to sitting at a usable desktop with Microsoft Teams and Edge. (What else do you need to work in 2021?)

That would have been unthinkable a few years ago.

It seems that, on an almost daily basis, I talk to clients who are struggling with technology to allow staff to work from home. It always seems to come back to legacy VPNs or virtual desktop “solutions” that are holding the IT department back.

So, if you’re looking at how your organisation manages its end user device deployments, I recommend taking a look at Windows Autopilot. Perhaps you’re already licensed for Microsoft 365, in which case you have the tools. And, if you need some help to get it all working, well, you know who to ask…

Featured image created from Microsoft press images.

Providing fast mailbox access to Exchange Online in virtualised desktop scenarios

In last week’s post that provided a logical view on end user computing (EUC) architecture, I mentioned two sets of challenges that I commonly see with customers:

  1. “We invested heavily in thin client technologies and now we’re finding them to be over-engineered and expensive with multiple layers of technology to manage and control.”
  2. “We have a managed Windows desktop running <insert legacy version of Windows and Office here> but the business wants more flexibility than we can provide.”

What I didn’t say is that I’m seeing a lot of Microsoft customers who have a combination of these and who are refreshing parts of their EUC provisioning without looking at the whole picture – for example, moving email from Exchange to Exchange Online but not adopting other Office 365 workloads and not updating their Office client applications (most notably Outlook).

In the last month, I’ve seen at least three organisations who have:

  • An investment in non-persistent virtualised desktops (using technology products from Citrix and others).
  • A stated objective to move email to Exchange Online.
  • Office 365 Enterprise E3 or higher subscriptions (i.e. the licences for Office 365 ProPlus – subscription-based, evergreen Office clients) but no immediate intention to update Office from current levels (typically Office 2010).

These organisations are, in my opinion, making life unnecessarily difficult for themselves.

The technical challenges with such a solution come down to some basic facts:

  • If you move your email to the cloud, it’s further away in network terms and you will introduce latency (the sketch after this list shows a quick way to get a feel for how much).
  • Microsoft and Citrix both recommend caching Exchange mailbox data in Outlook.
  • Office 365 is designed to work with recent (2013 and 2016) versions of Office products. Previous versions may work, but with reduced functionality. For example, Outlook 2013 and later have the ability to control the amount of data cached locally – Outlook 2010 does not.
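
To get a feel for the latency point, a rough check like the one below can be run from a XenApp server or virtual desktop to compare connection times to Exchange Online with those to an on-premises mailbox server. It’s a sketch, not a benchmark; outlook.office365.com is the public Exchange Online endpoint and the internal hostname is a placeholder:

```python
# Rough sketch: time TCP connections to mail endpoints from a XenApp/VDI host
# to get an indication of relative network latency. Not a benchmark.
import socket
import statistics
import time

def connect_times_ms(host: str, port: int = 443, samples: int = 5) -> list[float]:
    """Measure TCP connect time in milliseconds over several attempts."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        results.append((time.perf_counter() - start) * 1000)
    return results

# The second hostname is a placeholder for an on-premises Exchange server.
for host in ("outlook.office365.com", "exchange.internal.example.com"):
    try:
        times = connect_times_ms(host)
        print(f"{host}: median {statistics.median(times):.1f} ms over {len(times)} connects")
    except OSError as error:
        print(f"{host}: connection failed ({error})")
```

A few tens of milliseconds per round trip soon mounts up for a chatty mail protocol when nothing is cached locally – which is exactly what users experience as a sluggish mailbox.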

Citrix’s advice (in the Citrix Deployment Guide for Microsoft Office 365 for Citrix XenApp and XenDesktop 7.x) is to use Outlook Cached Exchange Mode; however, they also state “For XenApp or non-persistent VDI models the Cached Exchange Mode .OST file is best located on an SMB file share within the XenApp local network”. My experience suggests that, where Citrix customers do not use Outlook Cached Exchange Mode, they will have a poor user experience connecting to mailboxes.

Often, a migration to Office 365 (e.g. to make use of cloud services for email, collaboration, etc.) is best combined with Office application updates. Whilst Outlook 2013 and later versions can control the amount of data that is cached, in a virtualised environment this represents a user experience trade-off between reducing login times and reducing the impact of slow network access to the mailbox.
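
Where cached mode is used on non-persistent desktops, limiting the sync window (rather than disabling caching altogether) is the usual compromise. As a sketch of what that setting looks like, the Outlook administrative templates expose a “Cached Exchange Mode Sync Settings” policy stored as a SyncWindowSetting registry value; the path and permitted values below are assumptions to verify against the ADMX templates for the Office version in use, and in production this would be set via Group Policy or Intune rather than a script:

```python
# Sketch (Windows only): limit Outlook 2016's Cached Exchange Mode sync window
# to one month of mail. Path and value name are taken from the Office ADMX
# templates - verify for your Office version. Normally deployed via Group Policy.
import winreg

KEY_PATH = r"Software\Policies\Microsoft\Office\16.0\Outlook\Cached Mode"
SYNC_WINDOW_MONTHS = 1  # typical permitted values include 0 (all), 1, 3, 6, 12, 24

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    winreg.SetValueEx(key, "SyncWindowSetting", 0, winreg.REG_DWORD, SYNC_WINDOW_MONTHS)
    print(f"Cached Exchange Mode sync window limited to {SYNC_WINDOW_MONTHS} month(s)")
```

Smaller sync windows mean smaller .OST files (and faster logons) at the cost of more calls back to the mailbox.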

Put simply: you can’t have fast mailbox access to Exchange Online without caching on virtualised desktops, unless you want to add another layer of software complexity.

So, where does that leave customers who are unable or unwilling to follow Microsoft’s and Citrix’s advice? Effectively, there are two alternative approaches that may be considered:

  • The use of Outlook on the Web to access mailboxes using a browser. The latest versions of Outlook on the Web (formerly known as Outlook Web Access) are extremely well-featured and many users find that they are able to use the browser client to meet their requirements.
  • Third-party solutions, such as those from FSLogix, can be used to create “profile containers” for user data, such as cached mailbox data (there’s a configuration sketch below).
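
For completeness, FSLogix profile containers are configured through a handful of machine-level settings. The sketch below shows the commonly documented Enabled and VHDLocations values; the registry path, value names and the file share are assumptions to check against the FSLogix documentation for the version being deployed:

```python
# Sketch (Windows only, run elevated): enable FSLogix profile containers and
# point them at an SMB share. Value names (Enabled, VHDLocations) are the
# commonly documented ones - verify against the FSLogix docs. Share is a placeholder.
import winreg

KEY_PATH = r"SOFTWARE\FSLogix\Profiles"
VHD_LOCATION = r"\\fileserver\fslogix-profiles$"  # placeholder UNC path

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "VHDLocations", 0, winreg.REG_MULTI_SZ, [VHD_LOCATION])
```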

Using faster (SSD) disks for XenApp servers and improving the speed of the network connection (including the Internet connection) may also help but these are likely to be expensive options.

Alternatively, take a look at the bigger picture – go back to basics and look at how best to provide business users with a more flexible approach to end user computing.

A logical view on end user computing architecture

Over the last couple of years, I’ve worked with a variety of customers looking to transform the way they deliver end user computing services. Typically they fall into two camps:

  1. “We invested heavily in thin client technologies and now we’re finding them to be over-engineered and expensive with multiple layers of technology to manage and control.”
  2. “We have a managed Windows desktop running <insert legacy version of Windows and Office here> but the business wants more flexibility than we can provide.”

There are others too (like the ones who bought into a Mobile Device Management platform that’s no longer working for them) but the two examples above are far and away the most common issues I see. When helping customers to understand their options for providing end user computing services, I like to step up a level from the technology – to get back to the logical building blocks of an end user computing solution. And, over time, I’ve developed and refined a diagram that seems to resonate pretty well with customers as a framework around which to build end user solutions.

Logical view on the end user computing landscape

Starting at the bottom left of the diagram, I’ll describe each of the main blocks in turn (a simple checklist-style summary follows the list):

  • Identity and access: I start here because identity is absolutely key in any modern enterprise. If you’re still thinking about devices and operating systems – you’re doing it wrong (more on that later). Instead, the model is built around using someone’s identity to determine what applications they can access and how data is protected. Identity platforms work across cloud and on-premises environments, provide additional factors for authentication, self-service functionality (e.g. for password and group management), single sign-on to both corporate and cloud applications, integration with consumer and partner directory services and the ability to federate (i.e. to use a security token service to authenticate on-premises).
  • Data protection: with identity frameworks in place, let’s turn our attention to the data. Arguably there should be many more building blocks here but the main ones are around digital rights management, data loss prevention and endpoint security (firewalls, anti-virus, encryption, etc.).
  • Connectivity: until we all consume all of our services from the cloud, we generally need some form of connectivity to “the mothership”, whether that’s a client-less solution (like Microsoft DirectAccess) or another form of VPN. And of course that needs to run over some kind of network – typically a Wi-Fi or 4G connection but maybe Ethernet.
  • Devices: Arguably, there’s far too much attention paid to different types of devices here but there are considerations around form factor and ownership. Ultimately, with the correct levels of management control, it shouldn’t matter who owns the device but, for now, there’s a distinction between corporately-owned and user-owned devices. And what’s the “other” for? I use it as a placeholder to discuss embedded systems, etc.
  • Desktop operating system: Windows, macOS, Linux… increasingly it doesn’t matter what the OS is as apps run cross-platform or even in a browser.
  • Mobile operating system: iOS, Android (maybe Windows Mobile). Again, it’s just a platform to run a browser – though there are considerations around native applications, app stores, etc. (we’ll come back to those in a short while).
  • Application delivery: this is where the “fun” starts. Often, this will be influenced by some technical debt – and many organisations will use more than one of the technologies listed. Apps may be locally installed – and they can be managed using a variety of management tools. In my world it’s System Center Configuration Manager, Intune and the major mobile app stores but, for others, there may be a different set of tools. Then there’s virtualised/containerised applications, remote desktops and published applications, trusted apps that run from a file share and, finally, the panacea that is a browser-delivered app. Which makes me think… maybe this diagram needs to consider add-ins and extensions… for now, let’s keep it simple.
  • Device and asset management: until we live in a world of entirely user-owned devices, there are assets to manage. Then, sadly, we have to control devices – whoever they belong to – whether that’s policy-driven device and application management, more traditional configuration management, or just the provision of a catalogue of approved applications. Then there’s alerting, perhaps backups (though maybe not if the data is stored away from devices) and something I’ve referred to as “desktop optimisation” which is really the management tools for some of the delivery methods and tools described elsewhere.
  • Productivity services: name your poison – Office 365 or G-Suite – it doesn’t matter; these are the things that people do in their productivity apps. You may disagree with some of the categories (would Slack fit into enterprise social networking, or is it team sites?) but ultimately it’s about an extensible set of productivity services that end users can consume.
  • Input/output services: I print very little but others print a lot. Similarly, there’s scanning to be done. The paperless office is still not here…
  • Environmental management: over time, this will fade away in favour of mobile device and application management solutions but, today, many organisations still need to consider how they control the configuration of desktop operating systems – in the Windows world that might mean Group Policy and for other platforms it could be scripted.
  • Business data and applications: all of the stuff above means nothing if organisations can’t unlock the potential of their data – whether it’s in the CRM or ERP system, end user-driven reporting and BI, workflow or another line of business system.
  • High availability and business continuity: You’ll notice that this block has no subcomponents. For me, it’s nothing more than a consideration. If the end user computing architecture has been designed to be device and platform agnostic, then replacing a device should be straightforward – no need to maintain whole infrastructures for business continuity purposes. Similarly, if all I need is a device with an Internet connection and a browser, then the high availability conversation moves away from the end user computing platform and into how we provide the services that end users need to access.
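
One way I’ve found to make the model actionable is to treat the blocks as a checklist and map each one to the technologies a customer already has in place, so that the gaps stand out. The sketch below is a hypothetical illustration of that idea; the block names follow the diagram, while the example mappings are placeholders rather than recommendations:

```python
# Sketch: the logical EUC building blocks as a checklist, mapped to whatever a
# customer currently uses, so that gaps stand out. Example mappings are placeholders.
EUC_BLOCKS = [
    "Identity and access",
    "Data protection",
    "Connectivity",
    "Devices",
    "Desktop operating system",
    "Mobile operating system",
    "Application delivery",
    "Device and asset management",
    "Productivity services",
    "Input/output services",
    "Environmental management",
    "Business data and applications",
    "High availability and business continuity",
]

current_state = {  # hypothetical customer mapping
    "Identity and access": ["Azure Active Directory", "AD FS"],
    "Application delivery": ["System Center Configuration Manager", "Citrix XenApp"],
    "Productivity services": ["Office 365"],
}

for block in EUC_BLOCKS:
    technologies = current_state.get(block)
    status = ", ".join(technologies) if technologies else "GAP: nothing mapped yet"
    print(f"{block:45} {status}")
```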

I’m sure the model will continue to develop over time – it’s far from perfect and some items will be de-emphasised over the years (for example the differentiation between mobile and desktop operating systems will become less important) whilst others will need to be added, but it seems a reasonable starting point around which to start a discussion.