Using Microsoft Bookings to manage device rollouts

Microsoft Bookings, showing available services

End-user computing (EUC) refreshes can present significant logistical challenges for an organisation. Whilst technologies like Windows 10 Autopilot will take us to a place where users can self-provision, there's often more involved, and some training is required to help users adopt the technology (and any associated business changes).

Over the last few years, I've worked on projects that used a variety of systems to manage the allocation of training/handover sessions, but they've always been lacking in some way. We tried a PowerApps app and a SharePoint calendar extension – and then Microsoft made its Bookings app available on Office 365 Enterprise subscriptions (it was previously only available for Business subscriptions).

Microsoft Bookings is designed for small businesses and the example given in the Microsoft documentation is a pet grooming parlour. You could equally apply the app to other scenarios though: a hairdressing salon; bike repairs; or IT Services.

I can’t see Microsoft Bookings on my tenant!

That’s because, by default, it’s not there for Enterprise customers. Most of my customers use an E3 or E5 subscription and I was able to successfully test on a trial E3 tenant. My E1 was no good though…

The process to add the Business Apps (free) – including Bookings – to an Enterprise tenant will depend on whether it's Credit Card (PAYG), Enterprise Agreement (EA) or Cloud Solutions Provider (CSP) licensed, but it's fully documented by Microsoft. When I enabled it on my test tenant, I received an invoice for £0.00.

So, how do I configure Microsoft Bookings?

The app is built around a calendar on a website, with a number of services and assigned staff. Each “staff member” needs to have a valid email address but they don’t need to be a real person – all of the email messages could be directed to a single mailbox, which also reduces the number of licences needed to operate the solution.

It took some thought to work out how to apply this to my end user device handover scenario, but I set up:

  • A calendar for the project.
  • A service for the handover sessions. This controls when sessions are offered (e.g. available times and staff).
  • A number of dummy “staff” for the number of slots in each session (e.g. 10 people in each session, 10 slots so 10 “staff”).
Microsoft Bookings, showing confirmation of a booked service

Microsoft Bookings, calendar view

Once all of the staff available for a session are booked (i.e. all of the slots for a session are full), it's no longer offered in the calendar. There's no mechanism for preventing multiple/duplicate bookings, but a simple manual check – exporting a .TSV file of all the bookings each day – allows those to be identified and remediated.

(Incidentally, Excel wouldn’t open a TSV file for me. What I could do though was open the file in Notepad and copy/paste it to Excel, for sorting and identification of multiple bookings from the same email address.)

Further reading

These blog posts are a couple of years old now but helped a lot:

Microsoft Ignite | The Tour: London Recap

One of the most valuable personal development activities in my early career was a trip to the Microsoft TechEd conference in Amsterdam. I learned a lot – not just technically but about making the most of events to gather information, make new industry contacts, and generally top up my knowledge. Indeed, even as a relatively junior consultant, I found that dipping into multiple topics for an hour or so gave me a really good grounding to discover more (or just enough to know something about the topic) – far more so than an instructor-led training course.

Over the years, I attended further TechEds in Amsterdam, Barcelona and Berlin. I fought off the "oh, Mark's on another jolly" comments by sharing information – and, incidentally, conference attendance is no "jolly": there may be drinks and even parties, but those come after long days of serious mental cramming, often on top of broken sleep in a cheap hotel miles from the conference centre.

Microsoft TechEd is no more. Over the years, as the budgets were cut, the standard of the conference dropped and in the UK we had a local event called Future Decoded. I attended several of these – and it was at Future Decoded that I discovered risual – where I’ve been working for almost four years now.

Now, Future Decoded has also fallen by the wayside and Microsoft has focused on taking its principal technical conference – Microsoft Ignite – on tour, delivering global content locally.

So, a few weeks ago, I found myself at the ExCeL conference centre in London’s Docklands, looking forward to a couple of days at “Microsoft Ignite | The Tour: London”.

Conference format

Just like TechEd, and at Future Decoded (in the days before I had to use my time between keynotes on stand duty!), the event was broken up into tracks with sessions lasting around an hour. Because that was an hour of content (and Microsoft event talks are often scheduled as an hour, plus 15 minutes Q&A), it was pretty intense, and opportunities to ask questions were generally limited to trying to grab the speaker after their talk, or at the “Ask the Experts” stands in the main hall.

One difference to Microsoft conferences I’ve previously attended was the lack of “level 400” sessions: every session I saw was level 100-300 (mostly 200/300). That’s fine – that’s the level of content I would expect but there may be some who are looking for more detail. If it’s detail you’re after then Ignite doesn’t seem to be the place.

Also, I noticed that Day 2 had fewer delegates and lacked some of the “hype” from Day 1: whereas the Day 1 welcome talk was over-subscribed, the Day 2 equivalent was almost empty and light on content (not even giving airtime to the conference sponsors). Nevertheless, it was easy to get around the venue (apart from a couple of pinch points).

Personal highlights

I managed to cover 11 topics over two days (plus a fair amount of networking). The track format of the event was intended to let a delegate follow a complete learning path but, as someone who’s a generalist (that’s what Architects have to be), I spread myself around to cover:

  • Dealing with a massive onset of data ingestion (Jeramiah Dooley/@jdooley_clt).
  • Enterprise network connectivity in a cloud-first world (Paul Collinge/@pcollingemsft).
  • Building a world without passwords.
  • Discovering Azure Tooling and Utilities (Simona Cotin/@simona_cotin).
  • Selecting the right data storage strategy for your cloud application (Jeramiah Dooley/@jdooley_clt).
  • Governance in Azure (Sam Cogan/@samcogan).
  • Planning and implementing hybrid network connectivity (Thomas Maurer/@ThomasMaurer).
  • Transform device management with Windows Autopilot, Intune and OneDrive (Michael Niehaus/@mniehaus and Mizanur Rahman).
  • Maintaining your hybrid environment (Niel Peterson/@nepeters).
  • Windows Server 2019 Deep Dive (Jeff Woolsey/@wsv_guy).
  • Consolidating infrastructure with the Azure Kubernetes Service (Erik St Martin/@erikstmartin).

In the past, I’d have written a blog post for each topic. I was going to say that I simply don’t have the time to do that these days but by the time I’d finished writing this post, I thought maybe I could have split it up a bit more! Regardless, here are some snippets of information from my time at Microsoft Ignite | The Tour: London. There’s more information in the slide decks – which are available for download, along with the content for the many sessions I didn’t attend.

Data ingestion

Ingesting data can be broken into:

  • Real-time ingestion.
  • Real-time analysis (see trends as they happen – and make changes to create a competitive differentiator).
  • Producing actions as patterns emerge.
  • Automating reactions in external services.
  • Making data consumable (in whatever form people need to use it).

Azure has many services to assist with this – take a look at IoT Hub, Azure Event Hubs, Azure Databricks and more.

Enterprise network connectivity for the cloud

Cloud traffic is increasing whilst traffic that remains internal to the corporate network is in decline. Traditional management approaches are no longer fit for purpose.

Office applications use multiple persistent connections – this causes challenges for proxy servers which generally degrade the Office 365 user experience. Remediation is possible, with:

  • Differentiated traffic – follow Microsoft advice to manage known endpoints, including via the Office 365 IP address and URL web service (see the example after this list).
  • Let Microsoft route traffic (data is in a region, not a place). Use DNS resolution to egress connections close to the user (a list of all Microsoft peering locations is available). Optimise the route length and avoid hairpins.
  • Assess network security using application-level security, reducing IP ranges and ports and evaluating the service to see if some activities can be performed in Office 365, rather than at the network edge (e.g. DLP, AV scanning).
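
As an aside, the Office 365 IP address and URL web service mentioned above is straightforward to query. A minimal PowerShell sketch against the worldwide service instance:

```powershell
# Generate a client request ID (required by the web service for tracking)
$clientRequestId = [guid]::NewGuid().Guid

# Query the worldwide Office 365 service instance
$uri = "https://endpoints.office.com/endpoints/worldwide?clientrequestid=$clientRequestId"
$endpoints = Invoke-RestMethod -Uri $uri

# Show the endpoints in the "Optimize" category - the ones to route directly,
# bypassing proxies and inspection devices
$endpoints | Where-Object { $_.category -eq 'Optimize' } |
    Select-Object serviceArea, urls, ips
```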

For Azure:

  • Azure ExpressRoute is a connection to the edge of the Microsoft global backbone (not to a datacentre). It offers two lines for resilience and two peering types at the gateway – private and public (Microsoft) peering.
  • Azure Virtual WAN can be used to build a hub for a region and to connect sites.
  • Replace branch office routers with software-defined (SDWAN) devices and break out where appropriate.
Microsoft global network

Passwordless authentication

Basically, there are three options:

  • Windows Hello.
  • Microsoft Authenticator.
  • FIDO2 Keys.

Azure tooling and utilities

Useful resources include:

Selecting data storage for a cloud application

What to use? It depends! Classify data by:

  • Type of data:
    • Structured (fits into a table)
    • Semi-structured (may fit in a table but may also use outside metadata, external tables, etc.)
    • Unstructured (documents, images, videos, etc.)
  • Properties of the data:
    • Volume (how much)
    • Velocity (change rate)
    • Variety (sources, types, etc.)
  Item               Type             Volume  Velocity  Variety
  Product catalogue  Semi-structured  High    Low       Low
  Product photos     Unstructured     High    Low       Low
  Sales data         Semi-structured  Medium  High      High

How to match data to storage:

  • Storage-driven: build apps on what you have.
  • Cloud-driven: deploy to the storage that makes sense.
  • Function-driven: build what you need; storage comes with it.

Governance in Azure

It’s important to understand what’s running in an Azure subscription – consider cost, security and compliance:

  • Review (and set a baseline):
    • Tools include: Resource Graph; Cost Management; Security Center; Secure Score (see the Resource Graph example after this list).
  • Organise (housekeeping to create a subscription hierarchy, classify subscriptions and resources, and apply access rights consistently):
    • Tools include: Management Groups; Tags; RBAC.
  • Audit:
    • Make changes to implement governance without impacting people/work. Develop policies, apply budgets and audit the impact of the policies.
    • Tools include: Cost Management; Azure Policy.
  • Enforce:
    • Change policies to enforcement, add resolution actions and enforce budgets.
    • Consider what will happen in cases of non-compliance.
    • Tools include: Azure Policy; Cost Management; Azure Blueprints.
  • (Loop back to review)
    • Have we achieved what we wanted to?
    • Understand what is being spent and why.
    • Know that only approved resources are deployed.
    • Be sure of adhering to security practices.
    • Identify opportunities for further improvement.
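
To illustrate the "review" step, the deployed estate can be queried with Azure Resource Graph from PowerShell – a minimal sketch, assuming the Az.ResourceGraph module is installed:

```powershell
# Requires the Az.ResourceGraph module: Install-Module -Name Az.ResourceGraph
Connect-AzAccount

# Summarise everything deployed in the subscription by resource type -
# a quick way to establish a baseline for review
Search-AzGraph -Query 'Resources | summarize count() by type | order by count_ desc'
```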

Planning and implementing hybrid network connectivity

Moving to the cloud allows for fast deployment but planning is just as important as it ever was. Meanwhile, startups can be cloud-only but most established organisations have some legacy and need to keep some workloads on-premises, with secure and reliable hybrid communication.

Considerations include:

  • Extension of the internal protected network:
    • Should workloads in Azure only be accessible from the Internal network?
    • Are Azure-hosted workloads restricted from accessing the Internet?
    • Should Azure have a single entry and egress point?
    • Can the connection traverse the public Internet (compliance/regulation)?
  • IP addressing:
    • Existing addresses on-premises; public IP addresses.
    • Namespaces and name resolution.
  • Multiple regions:
    • Where are the users (multiple on-premises sites); where are the workloads (multiple Azure regions); how will connectivity work (should each site have its own connectivity)?
  • Azure virtual networks:
    • Form an isolated boundary with secure communications.
    • Azure-assigned IP addresses (no need for a DHCP server).
    • Segmented with subnets.
    • Network Security Groups (NSGs) create boundaries around subnets.
  • Connectivity:
    • Site to site (S2S) VPNs at up to 1Gbps
      • Encrypted traffic over the public Internet to the GatewaySubnet in Azure, which hosts VPN Gateway VMs.
      • 99.9% SLA on the Gateway in Azure (not the connection).
      • Don’t deploy production workloads on the GatewaySubnet; /26, /27 or /28 subnets recommended; don’t apply NSGs to the GatewaySubnet – i.e. let Azure manage it.
    • Dedicated connections (Azure ExpressRoute): private connection at up to 10Gbps to Azure with:
      • Private peering (to access Azure).
      • Microsoft peering (for Office 365, Dynamics 365 and Azure public IPs).
      • 99.9% SLA on the entire connection.
    • Other connectivity services:
      • Azure ExpressRoute Direct: a 100Gbps direct connection to Azure.
      • Azure ExpressRoute Global Reach: using the Microsoft network to connect multiple local on-premises locations.
      • Azure Virtual WAN: branch to branch and branch to Azure connectivity with software-defined networks.
  • Hybrid networking technologies:

Modern Device Management (Autopilot, Intune and OneDrive)

The old way of managing PC builds:

  1. Build an image with customisations and drivers
  2. Deploy to a new computer, overwriting what was on it
  3. This is expensive and time-consuming – and the device already ships with a perfectly good OS

Instead, how about:

  1. Unbox PC
  2. Transform with minimal user interaction
  3. Device is ready for productive use

The transformation is:

  • Take OEM-optimised Windows 10:
    • Windows 10 Pro and drivers.
    • Clean OS.
  • Plus software, settings, updates, features, user data (with OneDrive for Business).
  • Ready for productive use.

The goal is to reduce the overall cost of deploying devices. Ship to a user with half a page of instructions…

Windows Autopilot overview

Autopilot deployment is cloud-driven and will eventually be centralised through Intune:

  1. Register device:
    • From OEM or Channel (manufacturer, model and serial number).
    • Automatically (existing Intune-managed devices).
    • Manually, using a PowerShell script to generate a CSV file with the serial number and hardware hash, which is then uploaded to the Intune portal (see the sketch after this list).
  2. Assign Autopilot profile:
    • Use Azure AD Groups to assign/target.
    • The profile includes settings such as deployment mode, BitLocker encryption, device naming, out of box experience (OOBE).
    • An Azure AD device object is created for each imported Autopilot device.
  3. Deploy:
    • Needs Azure AD Premium P1/P2
    • Scenarios include:
      • User-driven with Azure AD:
        • Boot to OOBE, choose language, locale, keyboard and provide credentials.
        • The device is joined to Azure AD, enrolled to Intune and policies are applied.
        • User signs on and user-assigned items from Intune policy are applied.
        • Once the desktop loads, everything is present (including file links in OneDrive) – the time taken depends on the software being pushed.
      • Self-deploying (e.g. kiosk, digital signage):
        • No credentials required; device authenticates with Azure AD using TPM 2.0.
      • User-driven with hybrid Azure AD join:
        • Requires the Offline Domain Join Connector to create the AD DS computer account.
        • Device connected to the corporate network (in order to access AD DS), registered with Autopilot, then as before.
        • Users sign on to Azure AD and then to AD DS during deployment. If both use the same UPN, it keeps things simple for users!
      • Autopilot for existing devices (Windows 7 to 10 upgrades):
        • Back up data in advance (e.g. with OneDrive).
        • Deploy generic Windows 10.
        • Run Autopilot user-driven mode (hardware hashes can't be harvested from Windows 7, so a JSON config file is placed in the image – the offline equivalent of a profile; Intune will ignore the unknown device and Autopilot will use the file instead. After Windows 10 is deployed, Intune will notice a PC in the group and apply the profile, so it will work if the PC is reset in future).
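
For completeness, the manual registration route mentioned in step 1 uses a script published to the PowerShell Gallery – a minimal sketch (the output path is illustrative):

```powershell
# Install the Get-WindowsAutoPilotInfo script from the PowerShell Gallery
Install-Script -Name Get-WindowsAutoPilotInfo -Force

# Capture this device's serial number and hardware hash to a CSV file,
# ready for upload to the Intune portal (the output path is illustrative)
Get-WindowsAutoPilotInfo.ps1 -OutputFile C:\Temp\AutopilotHWID.csv
```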

Autopilot roadmap (1903) includes:

  • “White glove” pre-provisioning for end users: QR code to track, print welcome letter and shipping label!
  • Enrolment status page (ESP) improvements.
  • Cortana voiceover disabled on OOBE.
  • Self-updating Autopilot (update Autopilot without waiting to update Windows).

Maintaining your hybrid environment

Common requirements in an IaaS environment include wanting to use a policy-based configuration with a single management and monitoring solution and auto-remediation.

Azure Automation allows configuration and inventory; monitoring and insights; and response and automation. The Azure Portal provides a single pane of glass for hybrid management (Windows or Linux; any cloud or on-premises).

For configuration and state management, use Azure Automation State Configuration (built on PowerShell Desired State Configuration).
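
For anyone who hasn't seen Desired State Configuration, a trivial configuration looks something like this (a minimal sketch – the node name and feature are illustrative):

```powershell
# A trivial configuration: make sure IIS is present on a node named "webserver"
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'webserver' {
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Compiling produces a MOF file, which can be imported into
# Azure Automation State Configuration and assigned to nodes
WebServerBaseline -OutputPath C:\DSC
```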

Inventory can be managed with Log Analytics extensions for Windows or Linux. An Azure Monitoring Agent is available for on-premises or other clouds. Inventory is not instant though – it can take 3-10 minutes for Log Analytics to ingest the data. Changes can be visualised (for state tracking purposes) in the Azure Portal.

Azure Monitor and Log Analytics can be used for data-driven insights, unified monitoring and workflow integration.

Responding to alerts can be achieved with Azure Automation Runbooks, which store scripts in Azure and run them in Azure. Scripts can use PowerShell or Python, so both Windows and Linux are supported. A webhook can be triggered with an HTTP POST request. A hybrid runbook worker can be used to run scripts on-premises or in another cloud.

It’s possible to use the Azure VM agent to run a command on a VM from the Azure portal without logging in!
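
The same capability is exposed to scripting through the Az PowerShell module – a quick sketch, with the resource names assumed:

```powershell
# Run a PowerShell script on an Azure VM via the VM agent - no RDP session needed
# (resource group, VM and script names are illustrative)
Invoke-AzVMRunCommand `
    -ResourceGroupName 'demo-rg' `
    -VMName 'demo-vm' `
    -CommandId 'RunPowerShellScript' `
    -ScriptPath '.\Get-Uptime.ps1'
```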

Windows Server 2019

Windows Server strategy starts with Azure. Windows Server 2019 is focused on:

  • Hybrid:
    • Backup/connect/replicate VMs.
    • Storage Migration Service to migrate unstructured data into Azure IaaS or another on-premises location (from 2003+ to 2016/19).
      1. Inventory (interrogate storage, network security, SMB shares and data).
      2. Transfer (pairings of source and destination), including ACLs, users and groups. Details are logged in a CSV file.
      3. Cutover (make the new server look like the old one – same name and IP address). Validate before cutover – ensure everything will be OK. Read-only process (except change of name and IP at the end for the old server).
    • Azure File Sync: centralise file storage in Azure and transform existing file servers into hot caches of data.
    • Azure Network Adapter to connect servers directly to Azure networks (see above).
  • Hyper-converged infrastructure (HCI):
    • The server market is still growing and is increasingly SSD-based.
    • Traditional rack looked like SAN, storage fabric, hypervisors, appliances (e.g. load balancer) and top of rack Ethernet switches.
    • Now we use standard x86 servers with local drives and software-defined everything. Manage with Admin Center in Windows Server (see below).
    • Windows Server now has support for persistent memory: DIMM-based; still there after a power-cycle.
    • The Windows Server Software Defined (WSSD) programme is the Microsoft approach to software-defined infrastructure.
  • Security: shielded VMs for Linux (VM as a black box, even for an administrator); integrated Windows Defender ATP; Exploit Guard; System Guard Runtime.
  • Application innovation: semi-annual updates are designed for containers. Windows Server 2019 is the latest LTSC release, so it includes the 1709/1803 additions:
    • Enable developers and IT pros to create cloud-native apps and modernise traditional apps using containers and microservices.
    • Linux containers on a Windows host.
    • Service Fabric and Kubernetes for container orchestration.
    • Windows Subsystem for Linux.
    • Optimised images for Server Core and Nano Server.

Windows Admin Center is core to the future of Windows Server management and, because it’s based on remote management, servers can be core or full installations – even containers (logs and console). Download from http://aka.ms/WACDownload

  • 50MB download, no need for a server. Runs in a browser and is included in the Windows/Windows Server licence.
  • Runs on a layer of PowerShell. Use the >_ icon to see the raw PowerShell used by Admin Center (copy and paste to use elsewhere).
  • Extensible platform.

What’s next?

  • More cloud integration
  • Update cadence is:
    • Insider builds every 2 weeks.
    • Semi-annual channel every 6 months (specifically for containers):
      • 1709/1803/1809/19xx.
    • Long-term servicing channel
      • Every 2-3 years.
      • 2016, 2019 (in September 2018), etc.

Windows Server 2008 and 2008 R2 reach the end of support in January 2020 but customers can move Windows Server 2008/2008 R2 servers to Azure and get 3 years of security updates for free (on-premises support is chargeable).

Further reading: What’s New in Windows Server 2019.

Containers/Azure Kubernetes Service

Containers:

  • Are fully-packaged applications that use a standard image format for better resource isolation and utilisation.
  • Are ready to deploy via an API call.
  • Are not virtual machines (for Linux).
  • Do not use hardware virtualisation.
  • Offer no hard security boundary (for Linux).
  • Can be more cost effective/reliable.
  • Have no GUI.

Kubernetes is:

  • An open source system for auto-deployment, scaling and management of containerised apps.
  • Container Orchestrator to manage scheduling; affinity/anti-affinity; health monitoring; failover; scaling; networking; service discovery.
  • Modular and pluggable.
  • Self-healing.
  • Designed by Google based on a system they use to run billions of containers per week.
  • Described in “Phippy goes to the zoo”.

Azure container offers include:

  • Azure Container Instances (ACI): containers on demand (Linux or Windows) with no need to provision VMs or clusters; per-second billing; integration with other Azure services; a public IP; persistent storage (see the sketch after this list).
  • Azure App Service for Linux: a fully-managed PaaS for containers including workflows and advanced features for web applications.
  • Azure Kubernetes Service (AKS): a managed Kubernetes offering.
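
As a flavour of how little is needed to get a container running on ACI, here's a sketch using the Az.ContainerInstance PowerShell module (parameter names match early releases of the module and may differ in later versions):

```powershell
# Create a container group running a sample image with a public IP
# (resource group and container names are illustrative)
New-AzContainerGroup `
    -ResourceGroupName 'demo-rg' `
    -Name 'hello-aci' `
    -Image 'mcr.microsoft.com/azuredocs/aci-helloworld' `
    -OsType Linux `
    -IpAddressType Public
```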

Wrap-up

So, there you have it. An extremely long blog post with some highlights from my attendance at Microsoft Ignite | The Tour: London. It’s taken a while to write up so I hope the notes are useful to someone else!

Caching OneDrive for Business content when Files On-Demand is enabled

Not surprisingly, given who I work for, I’m a heavy user of Microsoft technologies. I have a Microsoft Surface Pro, running the latest versions of Windows 10 and Office 365 ProPlus, joined to Azure Active Directory and managed with Intune. I use all of the Office 365 productivity apps. I AM A MICROSOFT POWER USER!

Enough of the drama! Let’s bring this down a level…

…I’m just a guy, using a laptop, trying to get a job done. It’s a tool.

OneDrive icon

Most of my files are stored in OneDrive for Business. There’s lots more space there than the typical SSD has available, so Microsoft introduced a feature called Files On-Demand, whereby you see the whole list of files but each file is only downloaded when you try to access it.

That sounds great, unless you travel a lot and work on trains and other places where network connectivity is less than ideal.

In my case, I have around 50GB of data in OneDrive and 90GB of free space on my Surface’s SSD so I have the potential to cache it all locally. I used to do this by turning off Files On-Demand but the latest build I’m running has disabled that capability for me.

It’s not feasible to touch every file to force it to be cached, and I thought about asking my admins to reverse the setting that forces the use of Files On-Demand, but then I found another way around it…

If I right-click on a OneDrive file or folder in Windows Explorer there’s the option to “Always keep on this device”. [Update: Peter Bryant (@PJBryant) has flagged a method using the command line too – it seems there are new attributes P and U for Files On-Demand]

By applying this to one of the top-level folders in my OneDrive, I was able to force the files to be cached – regardless of whether Files On-Demand is enabled or not. Now, I can access all of the files in that folder (and any subfolders), even when I’m not connected to the Internet.
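
For reference, based on the command-line method Peter flagged, something like this should achieve the same result (a sketch – the folder path is illustrative and the pinned/unpinned attribute behaviour may vary between OneDrive client builds):

```powershell
# Pin a folder (recursively) so OneDrive always keeps its contents on this device
# (the folder path is illustrative)
attrib +p -u "$env:USERPROFILE\OneDrive - Contoso\Projects" /s /d

# Later, mark it online-only again to free up local disk space
attrib -p +u "$env:USERPROFILE\OneDrive - Contoso\Projects" /s /d
```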

Explaining Office 365, with particular reference to the crossover between OneDrive, SharePoint and Teams


For most of my career, I’ve worked primarily with Microsoft products. And for the last three years, I’ve worked in a consulting, services and education organisation that’s entirely focused on extracting value for our customers from their investments in Microsoft technology (often via an Enterprise Agreement, or similar). So, living in my Microsoft-focused bubble, it’s easy to forget that there are organisations out there for whom deploying Microsoft products is not the first choice. And I’ve found myself in a few online conversations where people are perplexed about Office 365 and which tool to use when.

I used to use the Office 365 Wheel from OnPoint solutions until I discovered Matt Wade’s “Periodic Table of Office 365”, which attempts to describe Office 365’s “ecosystem of applications in the cloud” in infographic format:

The web version even lets you select by licence – so, for most of my customers, Enterprise E3 or E5.

But, as I said, I’ve also been in a few discussions recently where I’ve tried to help others (often those who are familiar with Google’s tools) to understand where SharePoint, OneDrive for Business and Microsoft Teams fit in – i.e. which is used in what scenario?

A few weeks ago, I found myself trying to do that on the WB-40 Podcast WhatsApp group, where one member had asked for help with the various “file” constructs and another had replied that “not even Microsoft” knew that. Challenge accepted.

So, in short form for social media, I replied to the effect that:

  1. Teams is unfinished (IMHO) but built on top of Office 365 Groups (and very closely linked to SharePoint).
  2. SharePoint can be used for many things including a repository for team-based information – regardless of what those teams are (projects, hierarchy, function).
  3. OneDrive is a personal document store.

In effect, OneDrive can be used to replace “home drives” and SharePoint to provide wider collaboration features/capabilities when a document moves from being “something I’m working on” to “something I’m ready to collaborate on”. Teams layers over that to provide a chat-based workspace and more.

And then I added a caveat to say that all of the above is the way we work and many others do but there is not one single approach that fits all. And don’t even get me started with Yammer…

The key point for me is that organisations really should have an information management strategy and associated architecture, regardless of the technology choices made.

And, just in case it helps, this is how one UK Government department approaches things (I would credit my source, but don’t want to get anyone into trouble):

They split up documents into a lifecycle:

  1. Documents start life with a user, so can go in OneDrive.
    • As the user collaborates with colleagues those colleagues can gain shared access to the document in OneDrive.
    • They proposed the use of 2-year deletion policies on all OneDrive for Business files [I would question why… storage is not an issue with Enterprise versions of Office 365, and arbitrary time-based deletion is problematic when you go back to a document for a reference and find it’s gone…].
  2. If the original document leads to a scoped piece of work, then the documents are moved to an Office 365 Group, as that neatly fits in with a number of resources that are common to collaboration: Planner, Calendar, File Storage (SharePoint), etc. And Office 365 Groups underpin Teams.
    • However, this type of data is time limited.
    • They proposed the use of 2-year deletion policies on all O365 Groups [again, why?].
  3. If a document became part of organisational policy/guidance, etc. then the proposal was to create permanent SharePoint sites for document management or potentially to move such documents to the organisation’s Intranet service [which could be running on SharePoint Online], or other relevant location.

So, you can see the lifecycle properties:

  1. User (limited need to know).
  2. Group (wider need to know).
  3. Organisation (everyone can know).

This plan has the potential to allow the organisation to manage data in a better way and minimise the costs of the additional storage required for SharePoint. But core to that is turning the idea that OneDrive for Business is for personal use on its head. It’s a valid place to store business data, but users should manage the lifecycle of data better. And this needs to be plain for users to understand, so they can spend the minimum amount of time managing the data.

[i.e. they don’t like the idea that OneDrive for Business is a personal data store – it’s a data store provided to users as part of their job and they don’t like “personal” being part of that definition. My fourpenn’orth is that the limits of “personal” and “work” are increasingly eroded, but I can see that organisations have legal and regulatory concerns about the data held in systems that they manage.]

So, which Office 365 tool to use? There is no “one size fits all” but some of the above may help when you’re defining a strategy/architecture for managing that information…

How to stay current with Windows as a Service and Office 365 ProPlus


For many organisations, particularly those at “enterprise” scale, Windows and Office have tended to be updated infrequently, usually as major projects with associated capital expenditure. Meanwhile, operational IT functions that manage “business as usual” often avoid change because that change brings risks around the introduction of new technology that may have consequential effects. This approach is becoming increasingly untenable in a world of regular updates to software sold on a subscription basis.

This post looks at the impact of regularly updating Windows and Office in an organisation, and at how we need to modify our approach to reflect the world of Windows as a Service and “evergreen” Office 365.

Why do we need to stay current?

A good question. After all, surely if Windows and Office are working as required then there’s no need to change anything, is there? Unfortunately, things aren’t that simple and there are benefits of remaining current for many business stakeholders:

  • For the CIO: improved management, performance, stability and support for the latest hardware.
  • For the CSO: enhanced security against modern threats and zero-day attacks.
  • For end users: access to the latest features and capabilities for better productivity and creativity.

Every Windows release evolves the operating system architecture to better defend against attacks – not just patching! And Windows and Office updates support new ways of working: inking, voice control, improved navigation, etc.

So, updates are good – right?

How often do I need to update?

We’re no longer in a world of 5+5 years (mainstream+extended) support. Microsoft has publicly stated its intention to ship two feature updates to Windows each year (in Spring and Autumn). The latest of these is Windows 10 1803 (also known as Redstone 4), which actually shipped in April. Expect the next one in/around September 2018 (1809). Internally to Microsoft, there are new builds daily; and even publicly there are “Insider” Preview builds for evaluation.

That means that we need to stop thinking about Windows feature updates as projects and start thinking about them as process – i.e. make updating Windows (and Office, and supporting infrastructure) part of the business as usual norm.

OK, but what if I don’t update?

Put simply, if you choose not to stay up-to-date, you’ll build up a problem for later. The point of having predictable releases is that they should help with planning.

But each release is only supported for 18 months. That means that you need to be thinking about getting users on n-2 releases updated before it gets too close to their end of support. Today, that means:

  • Running 1703, take action to update.
  • Running 1709, plan to update.
  • Running 1803, trailblazer!

We’re no longer looking at major updates every 3-5 years; instead an approach of continuous service improvement is required. This lessens the impact of each change.

So that’s Windows, what about Office?

For those using Office 365 ProPlus (i.e. licensing the latest versions of Office applications through an Office 365 subscription), Windows and Office updates are aligned (not to the day, but to the Spring and Autumn cadence):

So, keep Office updated in line with Windows and you should be in a good place. Build a process that gives confidence and trust to move the two at the same time… the traditional approach of deploying Windows and Office separately often comes down to testing and deployment processes.

What about my deployment tools? Will they support the latest updates?

According to Microsoft, there are more than 100 million devices managed with System Center Configuration Manager (SCCM) and SCCM also needs to be kept up-to-date to support upcoming releases.

SCCM releases are not every 6 months – they should be every 4 months or so – and the intention is to update SCCM to support the next version of Windows/Office ahead of when they become available:

Again, start to prepare as early as possible – and think of this as a process, not a project. Deploy first to a limited set of users, then push more broadly:

Why has Microsoft made us work this way?

The world has changed. With Office existing on multiple platforms, and systems under constant threat of attack from those who wish to steal our data (and money), it’s become necessary to move from a major update every 3-5 years to a continuous approach that executes every few months – providing high levels of stability and access to the latest features/functionality.

Across Windows, Office, Azure and System Center, Microsoft is continually improving security, reliability and performance whilst integrating cloud services to add functionality and to simplify the process of staying current.

How can I move from managing updates as a project to making it part of the process?

As mentioned previously, adopting Windows as a Service involves a cultural shift from periodic projects to a regular process.

Organisations need to be continually planning and preparing for the next update using Insider Preview to understand the impact of upcoming changes and the potential provided by new features, including any training needs.

Applications, devices and infrastructure can be tested using targeted pilot deployments and then, once the update is generally available and known to work in the environment, a broader deployment can be instigated:

Aim to deploy to users following the model below for each stage:

  • Plan and prepare: 1%.
  • Targeted deployment: 9%.
  • Broad deployment: 90%.

Remember, this is about feature updates, not a new version of Windows. The underlying architecture will evolve over time but Windows as a Service is about smaller, incremental change rather than the big step changes we’ve seen in the past.

But what about testing applications with each new release of Windows?

Of course, applications need to be tested against new releases – and there will be dependencies on support from other vendors too – but it’s important that the flow of releases should not be held up by application testing. If you test every application before updating Windows, it will be difficult to hit the rollout cadence. Instead, proactively assess which applications are used by the majority of users and address these first. Aim to move 80-90% of users to the latest release(s) and reactively address issues with the remaining apps (maybe using a succession of mini-pilots) but don’t stop the process because there are still a few apps to get ready!

You can also use alternative deployment methods (such as virtualised applications or published applications) to work around compatibility issues.

It’s worth noting that most Windows 7-compatible apps will be compatible with Windows 10. The same app development platform (UWP), driver servicing model, etc. are used. Some device drivers may not exist for Windows 10 but most do and availability through Windows Update has improved for drivers and firmware. BIOS support is getting better too.

In addition, there are around a million applications registered in the Ready For Windows database, which can be used for spot-checking ISVs’ Windows 10 support for each application and its prevalence in the wild.

New cloud-enabled capabilities to guide your Windows 10 deployment

Windows Analytics is a cloud-based set of services that collects information from within Windows and provides actionable information to proactively improve your Windows (and Office) environment.

Using Azure Log Analytics, Windows Analytics can advise on:

  • Readiness (Windows 10 Professional): planning and addressing actions for upgrade from Windows 7 and 8.1 as well as Windows 10 feature updates.
  • Compliance (Windows 10 Professional): for regular (monthly) updates.
  • Device health (Windows 10 Professional and Enterprise): assessing issues across estate (e.g. problematic device drivers).
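
For anyone setting this up, devices are associated with the workspace via a Commercial ID. A hedged sketch of the registry values involved (the GUID is a placeholder for the ID shown in your own Windows Analytics solution):

```powershell
# Placeholder GUID - use the Commercial ID shown in your Windows Analytics solution
$commercialId = '00000000-0000-0000-0000-000000000000'

# Associate the device with the Log Analytics workspace
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\DataCollection'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name CommercialId -Value $commercialId

# Ensure the diagnostic data level is at least Basic (1)
$policy = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\DataCollection'
New-Item -Path $policy -Force | Out-Null
Set-ItemProperty -Path $policy -Name AllowTelemetry -Value 1 -Type DWord
```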

OK, so I understand why I need to continuously update Windows, but how do I do it?

Microsoft recommends using a system of deployment rings (which might be implemented as groups in SCCM) to roll out to users in the 1% (Insider), 9% (Pilot) and 90% (Broad) deployments mentioned above. This approach allows for a consistent but controllable rollout.

Peer-to-peer download technologies embedded in Windows will minimise network usage, and recent versions support express updates (only downloading deltas), whilst the impact on users can be minimised through scheduling.

When it comes to tools, there are a few options available:

  • Windows Update is the same service used by consumers to download updates at the rate governed by Microsoft.
  • Windows Update for Business is a version of Windows Update that allows an organisation to control its release schedule and set up deployment rings without any infrastructure (see the sketch after this list).
  • Windows Server Update Services (WSUS) allows feature updates to be deployed when approved, and BranchCache can be used to minimise network impact.
  • Finally, SCCM can work with WSUS and offers Task Sequences, etc. to provide greater control over deployment.
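
To give a flavour of the Windows Update for Business approach, the deferrals behind each deployment ring come down to a handful of policy values. A sketch of the equivalent registry settings (normally delivered via Group Policy or MDM – the deferral periods here are arbitrary examples):

```powershell
# Windows Update for Business deferral policy, written directly to the registry
# (normally delivered via Group Policy or MDM)
$wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
New-Item -Path $wu -Force | Out-Null

# Defer feature updates by 60 days for this ring (values are arbitrary examples)
Set-ItemProperty -Path $wu -Name DeferFeatureUpdates -Value 1 -Type DWord
Set-ItemProperty -Path $wu -Name DeferFeatureUpdatesPeriodInDays -Value 60 -Type DWord

# Defer the monthly quality updates by 7 days
Set-ItemProperty -Path $wu -Name DeferQualityUpdates -Value 1 -Type DWord
Set-ItemProperty -Path $wu -Name DeferQualityUpdatesPeriodInDays -Value 7 -Type DWord
```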

What about the normal “Patch Tuesday” updates?

Twice-annual feature updates don’t replace the need to patch more regularly and Microsoft continues to release cumulative updates each month to resolve security and quality issues.

In effect, we should receive one feature update then five quality updates in each cycle:

Where can I find more information?

The following resources may be useful:


The contents of this post are based on a webcast delivered by Bruno Nowak (@BrunoNowak), Director of Product Marketing (Microsoft 365) at Microsoft.

Weeknote 16: Anonymous? (Week 17, 2018)


This week has been another one split between two end-user computing projects – one at the strategy/business case stage and another that’s slowly rolling out and proving that the main constraint on any project is the business’s ability to cope with the change.

I can’t say it’s all enjoyable at the moment – indeed I had to apply a great deal of restraint not to respond to lengthy email threads that asked “why aren’t we doing it this way”… but the inefficiencies of email are another subject, for another day.

So, instead of a recap of the week’s activities, I’ll focus on some experiences I’ve had recently with “anonymous” surveys. I’m generally quite cynical of these because if I have to log on to the platform to provide a response then it’s not truly anonymous – a point I highlighted to my colleagues in HR who ask a weekly “pulse” question. “It’s not on your record”, I was told – yet progress is logged against me (tasks due, tasks completed, etc.) and only accessible when I’m logged in to the HR system. It’s the same for SharePoint surveys – if I need to use my Active Directory credentials, then it’s not anonymous.

I’m approaching my third anniversary at risual and I picked up an idea for soliciting feedback (for my annual review) from colleagues, partners and customers from my colleague James Connolly, who has been using a survey tool for a couple of years now. Rather than use one of the tools on the wider Internet, like SurveyMonkey or Typeform, I decided to try Microsoft Forms – a newish Office 365 capability. It was really simple to create a form (and to make it anonymous, once I worked out how) but what I’ve been most impressed with is the reporting: the ability to export all responses to Excel for analysis, or to view either an aggregated view of responses or the detail of each individual response within Microsoft Forms.

I went to great pains to make sure that the form is truly anonymous – not requiring logon – though I did invite people to leave their name if they were happy for me to contact them about their responses. Even so, with a sample size of around 50 people invited to complete the form and a 50% response rate, I can take a guess at who some of the responses are from. By the same token, there are others where I wish I knew who wrote the feedback, so I could ask them to elaborate some more!

I won’t be doing anything with the results, except saying “this is what my colleagues and customers think of me and this is where I need to improve”, but it does reinforce my thinking that very little in life is truly anonymous.

Next week includes a speaking gig at a Microsoft Modern Workplace popup event (though I’m not entirely comfortable with the demonstrations), more Windows 10 device rollouts and maybe, just maybe, some time to write some blog posts that aren’t just about my week…

UK Government Protective Marking and the Microsoft Cloud


I recently heard a consultant from another Microsoft partner talking about storing “IL3” information in Azure. That rang alarm bells with me, because Impact Levels (ILs) haven’t been a “thing” for UK Government data since April 2014. For the record, here’s the official guidance on the UK Government data security classifications, and this video explains why the system was changed:

Meanwhile, this one is a good example of what it means in practice:

So, what does that mean for storing data in Azure, Dynamics 365 and Office 365? Basically, information classified OFFICIAL can be stored in the Microsoft Cloud – for more information, refer to the Microsoft Trust Center. And, because OFFICIAL-SENSITIVE is not another classification (it’s merely highlighting information where additional care may be needed), that’s fine too.

I’ve worked with many UK Government organisations (local/regional, and central) and most are looking to the cloud as a means to reduce costs and improve services. The fact that more than 90% of public data is classified OFFICIAL (indeed, that’s the default for anything in Government) is no reason to avoid using the cloud.

Adopting cloud services means being ready for constant change


There’s a news story today about how Microsoft may be repositioning some (or all) of Skype for Business as Microsoft Teams (the collaborative group-based chat service built on various Office 365 services, Skype for Business in particular).

The details of that story are kind of irrelevant to this post; it’s the reaction I got on Twitter that I felt the need to comment on (when I hit 5 tweeted replies I thought a blog post might be more appropriate).

Change is part of consuming cloud services. There’s a service agreement and a subscription/licensing agreement – customers consume the service as the provider defines it. The service provider will generally give notice of change but you normally have to accept it (or leave). There is no option to stay on legacy versions of software for months or years at a time because you’re not ready to update your ways of working or other connected systems.

That is a big shift and many IT departments have not adjusted their thinking to adopt this new way of working.

I’ve seen many organisations move to cloud services (mostly Office 365 and Azure) and stick with their current approach. They do things like trying to map drive letters to OneDrive because that’s what users are used to, instead of showing them new (and often better) ways of working. They try to use old versions of Office with the latest services and wonder why the user experience is degraded. They think about the on-premises workloads (Exchange, Lync/Skype for Business, SharePoint) instead of the potential provided by the whole productivity platform that they have bought licences to use. They try to turn parts of the service off or hide them from users.

My former colleague Steve Harwood (@SteeveeH) did some work with one of risual’s customers to define a governance structure for Office 365. It’s great work – and maybe I’ll blog about it separately – but the point is that organisations need to think differently for the cloud.

Buying services from Microsoft, Amazon, Google, Salesforce, et al is not like buying them from the managed services provider that does its best to maintain a steady state and avoid change at all costs (or often at great cost!). Moving to the cloud means constant change. You may not have servers to keep up to date once your apps are sold on an “evergreen” subscription basis but you will need to keep client software up to date – not just traditional installed apps but mobile apps and browsers too. And when the service gains a new feature, it’s there for adoption. You may have the ability to hide it but that’s just a sticking plaster solution.

Often the cry is “but we need to train the users”. Do you really? Many of today’s business end users have grown up with technology. They are familiar with using services at home far more advanced than those provided by many workplaces. Intuitive user interfaces can go a long way and there’s no need to provide formal training for many IT changes. Instead, keep abreast of the advertised changes from your service provider (for example the Message Center in Office 365) and decide what the impact is of each new feature. Very few will need a full training package! Some well-written communications, combined with self-help forums and updated FAQs at the Service Desk will often be enough but there’s also the opportunity to offer access to Massive Open Online Courses (MOOCs) where training needs are more extensive.

There are, of course, examples of where service providers have rolled out new features with inadequate testing, or with too little notice but these are edge cases and generally there’s time to react. The problem comes when organisations stick their proverbial heads in the sand and try to ignore the inevitable change.

Providing fast mailbox access to Exchange Online in virtualised desktop scenarios


In last week’s post that provided a logical view on end user computing (EUC) architecture, I mentioned two sets of challenges that I commonly see with customers:

  1. “We invested heavily in thin client technologies and now we’re finding them to be over-engineered and expensive with multiple layers of technology to manage and control.”
  2. “We have a managed Windows desktop running <insert legacy version of Windows and Office here> but the business wants more flexibility than we can provide.”

What I didn’t say is that I’m seeing a lot of Microsoft customers who have a combination of these and who are refreshing parts of their EUC provisioning without looking at the whole picture – for example, moving email from Exchange to Exchange Online but not adopting other Office 365 workloads and not updating their Office client applications (most notably Outlook).

In the last month, I’ve seen at least three organisations who have:

  • An investment in non-persistent virtualised desktops (using technology products from Citrix and others).
  • A stated objective to move email to Exchange Online.
  • Office 365 Enterprise E3 or higher subscriptions (i.e. the licences for Office 365 ProPlus – for subscription-based, evergreen Office clients) but no immediate intention to update Office from current levels (typically Office 2010).

These organisations are, in my opinion, making life unnecessarily difficult for themselves.

The technical challenges with such a solution come down to some basic facts:

  • If you move your email to the cloud, it’s further away in network terms. You will introduce latency.
  • Microsoft and Citrix both recommend caching Exchange mailbox data in Outlook.
  • Office 365 is designed to work with recent (2013 and 2016) versions of Office products. Previous versions may work, but with reduced functionality. For example, Outlook 2013 and later have the ability to control the amount of data cached locally – Outlook 2010 does not.

Citrix’s advice (in the Citrix Deployment Guide for Microsoft Office 365 for Citrix XenApp and XenDesktop 7.x) is to use Outlook Cached Exchange Mode; however, it also states: “For XenApp or non-persistent VDI models the Cached Exchange Mode .OST file is best located on an SMB file share within the XenApp local network”. My experience suggests that, where Citrix customers do not use Outlook Cached Exchange Mode, they will have a poor user experience connecting to mailboxes.

Often, a migration to Office 365 (e.g. to make use of cloud services for email, collaboration, etc.) is best combined with Office application updates. Whilst Outlook 2013 and later versions can control the amount of data that is cached, in a virtualised environment this represents a user experience trade-off between reducing login times and reducing the impact of slow network access to the mailbox.
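
For what it’s worth, that cached window can also be set by policy. A sketch of the registry value behind Outlook 2016’s “Mail to keep offline” slider (an assumption that policy-based configuration is in use – the key may need creating first):

```powershell
# Limit Cached Exchange Mode to the last three months of mail (0 = whole mailbox)
$key = 'HKCU:\Software\Policies\Microsoft\Office\16.0\Outlook\Cached Mode'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name SyncWindowSetting -Value 3 -Type DWord
```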

Put simply: you can’t have fast mailbox access to Exchange Online without caching on virtualised desktops, unless you want to add another layer of software complexity.

So, where does that leave customers who are unable or unwilling to follow Microsoft’s and Citrix’s advice? Effectively, there are two alternative approaches that may be considered:

  • The use of Outlook on the Web to access mailboxes using a browser. The latest versions of Outlook on the Web (formerly known as Outlook Web Access) are extremely well-featured and many users find that they are able to use the browser client to meet their requirements.
  • Third-party solutions, such as those from FSLogix, can be used to create “profile containers” for user data, such as cached mailbox data.

Using faster (SSD) disks for XenApp servers and improving the speed of the network connection (including the Internet connection) may also help but these are likely to be expensive options.

Alternatively, take a look at the bigger picture – go back to basics and look at how best to provide business users with a more flexible approach to end user computing.

Finding the PlanId for a Microsoft Planner Plan


Yesterday, I wrote about creating Microsoft Planner tasks from email using Microsoft Flow. At the time, my flow wasn’t quite working because, for some reason, Flow wouldn’t pull through the details of all of my plans. I even deleted and recreated a plan, but Flow would only show me one. And entering a Custom Value with the name of my plan in my flow resulted in a Schema error for field PlanId in entity Task: Field failed schema validation.

That was until I found a very useful nugget of information in the PowerApps Community forums. To find the PlanId, open the corresponding plan in a browser – the last part of the URL contains the PlanId:

Finding the PlanID for a Microsoft Planner Plan
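
If you find yourself doing this often, extracting the PlanId is trivial to script – a minimal PowerShell sketch (the URL is illustrative):

```powershell
# Paste the plan's URL from the browser; the PlanId is the last path segment
# (the URL below is illustrative)
$url = 'https://tasks.office.com/contoso.onmicrosoft.com/Home/PlanViews/exampleplanid-123'
$planId = ([uri]$url).Segments[-1].TrimEnd('/')
$planId
```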

Put that into your flow and the corresponding list of BucketIds should then be visible:

Bucket Id located based on the Plan Id

Now my flow runs and puts the plain-text contents of an email into the subject of a new task. Unfortunately, I’m still working out how to populate other fields in the task and I think I may have hit the current limits of the Microsoft Flow/Planner integration.