Weeknote 20/2020: back to work

Looking back on another week of tech exploits during the COVID-19 coronavirus chaos…

The end of my furlough

The week started off with exam study, working towards Microsoft exam AZ-300 (as mentioned last week). That was somewhat derailed when I was asked to return to work from Wednesday, ending my Furlough Leave at very short notice. With 2.5 days lost from my study plan, it shouldn’t have been a surprise that I ended my working week with a late-night exam failure (though it was still a disappointment).

Returning to work is positive though – whilst being paid to stay at home may seem ideal to some, it didn’t work so well for me. I wanted to make sure I made good use of my time, catching up on personal development activities that I’d normally struggle to fit in. But I was also acutely aware that there were things I could be doing to support colleagues but which I wasn’t allowed to. And, ultimately, I’m really glad to be employed during this period of economic uncertainty.

Smart cities

It looks like one of my main activities for the next few weeks will be working on a Data Strategy for a combined authority, so I spent Tuesday afternoon trying to think about some of the challenges that an organisation with responsibility for transportation and economic growth across a region might face. That led me to some great resources on smart cities including these:

  • There are some inspirational initiatives featured in this video from The Economist.
  • Finally (and if you only have a few minutes to spare), this short video from Vinci Energies provides an overview of what smart cities are really about.

Remote workshop delivery

I also had my first experience of taking part in a series of workshops delivered using Microsoft Teams. Teams is a tool that I use extensively, but normally for internal meetings and ad-hoc calls with clients, not for delivering consulting engagements.

Whilst the workshops would undoubtedly have been easier to run face-to-face, that’s just not possible in the current climate, so the adaptation was necessary.

The rules are the same, whatever the format – preparation is key. Understand what you’re looking to get out of the session and be ready with content to drive the conversation if it’s not quite headed where you need it to go.

Editing/deleting posts in Microsoft Teams private channels

On the subject of Microsoft Teams, I was confused earlier this week when I couldn’t edit one of my own posts in a private channel. Thanks to some advice from Steve Goodman (@SteveGoodman), I found that the ability to delete and/or edit messages is set separately on a private channel (normal channels inherit from the team).

The Microsoft Office app

Thanks to Alun Rogers (@AlunRogers), I discovered the Microsoft Office app this week. It’s a great companion to Office 365, searching across all apps – similar to Delve, but in an app rather than in-browser. The Microsoft Office app is available for download from the Microsoft Store.

Azure Network Watcher

And, whilst on the subject of nuggets of usefulness in the Microsoft stable, Azure Network Watcher is worth a look – it provides tools to monitor, diagnose and view metrics for resources on an Azure virtual network.

A little piece of history

I found an old map book on my shelf this week: a Halford’s Pocket Touring Atlas of Great Britain and Ireland, priced at sixpence. I love poring over maps – they provide a fascinating insight into the development of the landscape and the built environment.

That’s all for now

Those are just a few highlights (and a lowlight) from the week – there’s much more on my Twitter feed.

“Disaster Recovery” and related thoughts…

Backup, Archive, High Availability, Disaster Recovery, Business Continuity. All related. Yet all different.

One of my colleagues was recently faced with needing to run “a DR [disaster recovery] workshop” for a client. My initial impression was:

  • What disasters are they planning for?
  • I’ll bet they are thinking about Coronavirus and working remotely. That’s not really DR.
  • Or are they really thinking about a backup strategy?

So I decided to turn some of my rambling thoughts into a blog post. Each of these topics could be a post in its own right – I’m just scraping the surface here…

Let’s start with backup (and recovery)

Backups (of data) are a fairly simple concept. Anything that would create a problem if it was lost should be backed up. For example, my digital photos are considered to not exist at all unless they are synchronised (or backed up) to at least two other places (some network-attached storage, and the cloud).

In a business context, we run backups in order to be able to recover (restore) our content (configuration or data) within a given window. We may have weekly full backups and daily incremental or differential backups (perhaps with more regular snapshots), then retain parent, grandparent and great-grandparent copies of the full backups (four weeks) and keep each of these as (lunar) monthly backups for a year. That’s just an example – each organisation will have its own backup/retention policies and those backups may be stored on or off-site, on tape or disk.
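
As a worked example of that kind of rotation, here’s a minimal sketch of a grandfather-father-son retention check in Python. The retention periods are illustrative only – they are not a recommendation, and certainly not anyone’s real policy:

```python
from datetime import date, timedelta

def retention_class(backup_date: date, today: date) -> str:
    """Classify a full backup under a simple grandfather-father-son scheme."""
    age = (today - backup_date).days
    if age <= 7:
        return "son"          # daily backups, kept for a week
    if age <= 28 and backup_date.weekday() == 6:
        return "father"       # weekly (Sunday) fulls, kept for ~4 weeks
    if age <= 365 and backup_date.day == 1:
        return "grandfather"  # monthly fulls, kept for a year
    return "expired"          # eligible for deletion/media recycling

if __name__ == "__main__":
    today = date(2020, 5, 17)
    for days_old in (1, 14, 46, 400):
        d = today - timedelta(days=days_old)
        print(d, retention_class(d, today))
```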

In summary, backups are about making sure we have an up to date copy of our important configuration information and data, so we can recover it if the primary copy is lost or damaged.

And for bonus content, some services we might consider in a modern infrastructure context include Azure Backup or AWS Backup.

Backups must be verified and periodically tested in order to have any use.

Archiving information

When I wrote about backups above, I mentioned keeping multiple copies covering various points in time. Whilst some may consider this adequate for archival, archival is the storage of data for long-term preservation of read-only access – for example, documents that must be retained for an extended period (perhaps 7, 10, 25 or 99 years). Once that would have been paper documents, in boxes. Now it might be digital files (or database contents) on tape or disk (potentially cloud storage).

Archival might still use backup software and associated retention policies, but we’ll think carefully about the medium we store it on. For very long-term physical storage we might need to consider the media formats (paper is bulky, so it’s transferred to microfiche; old magnetic media degrades, so it’s moved to optical storage; then that hardware becomes obsolete, so it’s migrated to another format). If storing on disk (on-premises or in the cloud), we can use slower (cheaper) disks and accept that restoration from the archive may take additional time.

In summary, archival is about long-term data storage, generally measured in many years and archives might be stored off-line, or near-line.

Technologies we might use for archival are similar to backups, but we could consider lower-cost storage – e.g. Azure Storage‘s Cool or Archive tiers or Amazon S3 Glacier.
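
As an illustration of that last point, moving a blob to the Archive tier in Azure Storage is a one-line operation with the azure-storage-blob Python SDK. This is a sketch – the connection string, container and blob names are placeholders:

```python
from azure.storage.blob import BlobClient

# Placeholder connection details - substitute your own storage account.
conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"

blob = BlobClient.from_connection_string(
    conn_str, container_name="archive", blob_name="2013-accounts.pdf"
)

# Move the blob to the Archive tier: the cheapest storage, but it must be
# rehydrated to Hot/Cool before it can be read again (which can take hours -
# hence "accept that restoration from the archive may take additional time").
blob.set_standard_blob_tier("Archive")
```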

Keeping systems highly available

High Availability (HA) is about making sure that our systems are available for as much time as possible – or certainly within a given service level agreement (SLA).

Traditionally, we used technologies like a redundant array of inexpensive disks (RAID), error-checking memory, or redundant power supplies. We might also have created server clusters or farms. All of these methods have the intention of removing single points of failure (SPOFs).

In the cloud, we leave a lot of the infrastructure considerations to the cloud service provider and we design for failure in other ways.

  • We assume that virtual machines will fail and create availability sets.
  • We plan to scale out across multiple hosts for applications that can take advantage of that architecture.
  • We store data in multiple regions.
  • We may even consider multiple clouds.

Again, the level of redundancy built into the app and its supporting infrastructure must be designed according to requirements – as defined by the SLA. There may be no point in providing an expensive four nines uptime for an application that’s used once a month by one person, who works normal office hours. But, then again, what if that application is business critical – like payroll? Again, refer to the SLA – and maybe think about business continuity too… more on that in a moment.
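
As an aside, it’s easy to quantify what those “nines” mean as a downtime budget – it’s simple arithmetic, sketched below (the composite-availability comment at the end is my addition, but it’s why dependencies matter):

```python
# Downtime budget implied by an availability SLA.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for sla in (0.99, 0.999, 0.9999):
    downtime = MINUTES_PER_YEAR * (1 - sla)
    print(f"{sla:.2%}: ~{downtime:,.0f} minutes/year (~{downtime / 60:,.1f} hours)")

# Composite availability: an app depending on two services that each offer
# 99.95% can expect roughly 0.9995 * 0.9995 = 99.90% overall - which is why
# you must also consider the other systems upon which an application relies.
```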

Some of my clients have tried to implement Windows Server clusters in Azure. I’ve yet to be convinced and still consider that it’s old-world thinking applied in a contemporary scenario. There are better ways to design a highly available file service in 2020.

In summary, high availability is about ensuring that an application or service is available within the requirements of the associated service level agreement.

Technologies might include some of the hardware considerations I listed earlier, but these days we’re probably thinking more about the cloud approaches described above: availability sets, scale-out architectures, multi-region data storage and even multi-cloud.

Remember to also consider other applications/systems upon which an application relies.

Also, quoting from some of Microsoft’s training materials:

“To achieve four 9’s (99.99%), you probably can’t rely on manual intervention to recover from failures. The application must be self-diagnosing and self-healing.

Beyond four 9’s, it is challenging to detect outages quickly enough to meet the SLA.

Think about the time window that your SLA is measured against. The smaller the window, the tighter the tolerances. It probably doesn’t make sense to define your SLA in terms of hourly or daily uptime.”

Microsoft Learn: Design for recoverability and availability in Azure: High Availability

Disaster recovery

As the name suggests, Disaster Recovery (DR) is about recovering from a disaster, whatever that might be.

It could be physical damage to a piece of hardware (a switch, a server) that requires replacement or recovery from backup. It could be a whole server room or datacentre that’s been damaged or destroyed. It could be data loss as a result of malicious or accidental actions by an employee.

This is where DR plans come into play – firstly analysing the risks that might lead to disaster (including possible data loss and major downtime scenarios) and then looking at recovery objectives – the application’s recovery point objective (RPO) and recovery time objective (RTO).

Quoting Microsoft’s training materials again:

[Illustration: the recovery point objective and recovery time objective, shown in hours from the time of the disaster.]

“Recovery Point Objective (RPO): The maximum duration of acceptable data loss. RPO is measured in units of time, not volume: “30 minutes of data”, “four hours of data”, and so on. RPO is about limiting and recovering from data loss, not data theft.

Recovery Time Objective (RTO): The maximum duration of acceptable downtime, where “downtime” needs to be defined by your specification. For example, if the acceptable downtime duration is eight hours in the event of a disaster, then your RTO is eight hours.”

Microsoft Learn: Design for recoverability and availability in Azure: Disaster Recovery

For example, I may have a database that needs to be able to withstand no more than 15 minutes’ data loss and an associated SLA that dictates no more than 4 hours’ downtime in a given period. For that, my RPO is 15 minutes and the RTO is 4 hours. I need to make sure that I take snapshots (e.g. of transaction logs for replay) at least every 15 minutes and that my restoration process to get from offline to fully recovered takes no more than 4 hours (which will, of course, determine the technologies used).
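
That logic can be expressed as a trivial compliance check – a sketch using the hypothetical figures from the example above:

```python
from datetime import timedelta

# Objectives from the example above.
rpo = timedelta(minutes=15)  # maximum acceptable data loss
rto = timedelta(hours=4)     # maximum acceptable downtime

# Measured characteristics of the recovery design (illustrative figures).
snapshot_interval = timedelta(minutes=10)    # transaction log snapshots
measured_restore_time = timedelta(hours=3)   # from a timed DR test

assert snapshot_interval <= rpo, "snapshots too infrequent to meet the RPO"
assert measured_restore_time <= rto, "restoration too slow to meet the RTO"
print("Recovery design meets the stated RPO and RTO")
```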

Considerations when creating a DR plan might include:

  • What are the requirements for each application/service?
  • How are systems linked – what are the dependencies between applications/services?
  • How will you recover within the required RPO and RTO constraints?
  • How can replicated data be switched over?
  • Are there multiple environments (e.g. dev, test and production)?
  • How will you recover from logical errors in a database that might impact several generations of backup, or that may have spread through multiple data replicas?
  • What about cloud services – do you need to back up SaaS data (e.g. Office 365)? (Possibly not, if you’re happy with a retention-period-based restoration from a “recycle bin” or similar – but what if an administrator deletes some data?)

As can be seen, there are many factors here – more than I can go into in this blog post, but a disaster recovery strategy needs to consider backup/recovery, archive, availability (high or otherwise), technology and service (it may help to think about some of the ITIL service design processes).

In summary, disaster recovery is about having a plan to be able to recover from an event that results in downtime and data loss.

Technologies that might help include Azure Site Recovery. Applications can also be designed with data replication and recovery in mind, for example, using geo-replication capabilities in Azure Storage/Amazon S3 or Azure SQL Database/Amazon RDS, or using a globally-distributed database such as Azure Cosmos DB. And DR plans must be periodically tested.

Business continuity

Finally, Business Continuity (BC). This is something that many organisations will have had to contend with over the last few weeks and months.

BC is often confused with DR but they are different. Business continuity is about continuing to conduct business when something goes wrong. That may be how to carry on working whilst recovering from a disaster. Or it may be how to adapt processes to allow a workforce to continue functioning in compliance with social distancing regulations.

Again, BC needs a plan. But many of those plans will be reconsidered now – if your BC arrangements are that, in the event of an office closure, people go to a hosted DR site with some spare equipment that will be made available within an agreed timescale, that might not help in the event of a global pandemic, when everyone else wants to use that facility. Instead, how will your workforce continue to work at home? Which systems are important? How will you provide secure remote access to those systems? (How will you serve customers whilst employees are also looking after children?) The list goes on.

Technology may help with BC, but technology alone will not provide a solution. The use of modern approaches to End User Computing will certainly make secure remote and mobile working a possibility (indeed, organisations that have taken a modern approach will probably already be familiar with those practices) but a lot of the issues will relate to people and process.

In summary, Business Continuity plans may be invoked if there is a disaster but they are about adapting business processes to maintain service in times of disruption.

Wrapping up

As I was writing this post, I thought about many tangents that I could go off and cover. I’m pretty sure the topic could be a book and this post scrapes the surface. Nevertheless, I hope my thoughts are useful and show that disaster recovery cannot be considered in isolation.

Microsoft Online Services: tenants, subscriptions and domain names

I often come across confusion with clients trying to understand the differences between tenants, subscriptions and domain names when deploying Microsoft services. This post attempts to clear up some misunderstandings and to – hopefully – make things a little clearer.

Each organisation has a Microsoft Online Services tenant which has a unique DNS name in the format organisationname.onmicrosoft.com. This is unique to the tenant and cannot be changed. Of course, a company can establish multiple organisations, each with its own tenant but these will always be independent of one another and need to be managed separately.

It’s important to remember that each tenant has a single Azure Active Directory (Azure AD) instance. There is a 1:1 relationship between the Azure AD directory and the tenant. The directory is identified by a unique tenant ID, represented in GUID format. Azure AD can be synchronised with an existing on-premises Active Directory Domain Services (AD DS) directory using the Azure AD Connect software.

Multiple service offerings (services) can be deployed into the tenant: Office 365; Intune; Dynamics 365; Azure. Some of these services support multiple subscriptions that may be deployed for several reasons, including separation of administrative control. Quoting from the Microsoft documentation:

“An Azure subscription has a trust relationship with Azure Active Directory (Azure AD). A subscription trusts Azure AD to authenticate users, services, and devices.

Multiple subscriptions can trust the same Azure AD directory. Each subscription can only trust a single directory.”

Associate or add an Azure subscription to your Azure Active Directory tenant

Multiple custom (DNS) domain names can be applied to services – so mycompany.com, mycompany.co.uk and myoldcompanyname.com could all be directed to the same services – but there is still only one onmicrosoft.com tenant name per tenant.
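
To make those relationships concrete, here’s a small sketch modelling them in code (the names and values are purely illustrative): one Azure AD directory per tenant, many subscriptions trusting that directory, and many custom domains alongside the fixed onmicrosoft.com name.

```python
from dataclasses import dataclass, field

@dataclass
class Tenant:
    initial_domain: str        # *.onmicrosoft.com - fixed, cannot be changed
    azure_ad_tenant_id: str    # one directory per tenant (1:1), GUID format
    custom_domains: list[str] = field(default_factory=list)  # many allowed
    subscriptions: list[str] = field(default_factory=list)   # many allowed

mycompany = Tenant(
    initial_domain="mycompany.onmicrosoft.com",
    azure_ad_tenant_id="00000000-0000-0000-0000-000000000000",
    custom_domains=["mycompany.com", "mycompany.co.uk", "myoldcompanyname.com"],
    subscriptions=["Azure - Production", "Azure - Dev/Test", "Office 365 E3"],
)
```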

Further reading

Subscriptions, licenses, accounts, and tenants for Microsoft’s cloud offerings.

A logical view on a virtual datacentre services architecture

A couple of years ago, I wrote a post about a logical view of an End-User Computing (EUC) architecture (which provides a platform for Modern Workplace). It’s served me well and the model continues to be developed (although the changes are subtle so it’s not really worth writing a new post for the 2019 version).

Building on the original EUC/Modern Workplace framework, I started to think what it might look like for datacentre services – and this is something I came up with last year that’s starting to take shape.

Just as for the EUC model, I’ve tried to step up a level from the technology – to get back to the logical building blocks of the solution so that I can apply them according to a specific client’s requirements. I know that it’s far from complete – just look at an Azure or AWS feature list and you can come up with many more classifications for cloud services – but I think it provides the basics and a starting point for a conversation:

[Diagram: logical view of a virtual datacentre environment.]

Starting at the bottom left of the diagram, I’ll describe each of the main blocks in turn:

  • Whether hosted on-premises, co-located or making use of public cloud capabilities, Connectivity is a key consideration for datacentre services. This element of the solution includes the WAN connectivity between sites, site-to-site VPN connections to secure access to the datacentre, Internet breakout and network security at the endpoints – specifically the firewalls and other network security appliances in the datacentre.
  • Whilst many of the service building blocks (SBBs) in the virtual datacentre services architecture are equally applicable for co-located or on-premises datacentres, there are some specific Cloud Considerations. Firstly, cloud solutions must be designed for failure – i.e. to design out any elements that may lead to non-availability of services (or at least to fail within agreed service levels). Depending on the organisation(s) consuming the services, there may also be considerations around data location. Finally, and most significantly, the cloud provider(s) must practise trustworthy computing and, ideally, will conform to the UK National Cyber Security Centre (NCSC)’s 14 cloud security principles (or equivalent).
  • Just as for the EUC/Modern Workplace architecture, Identity and Access is key to the provision of virtual datacentre services. A directory service is at the heart of the solution, combined with a model for limiting the scope of access to resources. Together with Role Based Access Control (RBAC), this allows for fine-grained access permissions to be defined. Some form of remote access is required – both to access services running in the datacentre and for management purposes. Meanwhile, identity integration is concerned with integrating the datacentre directory service with existing (on-premises) identity solutions and providing SSO for applications, both in the virtual datacentre and elsewhere in the cloud (i.e. SaaS applications).
  • Data Protection takes place throughout the solution – but key considerations include intrusion detection and endpoint security. Just as for end-user devices, endpoint security covers such aspects as firewalls, anti-virus/malware protection and encryption of data at rest.
  • In the centre of the diagram, the Fabric is based on the US National Institute of Standards and Technology (NIST)’s established definition of essential characteristics for cloud computing.
  • The NIST guidance referred to above also defines three service models for cloud computing: Infrastructure as a Service (IaaS); Platform as a Service (PaaS) and Software as a Service (SaaS).
  • In the case of IaaS, there are considerations around the choice of Operating System. Supported operating systems will depend on the cloud service provider.
  • Many cloud service providers will also provide one or more Marketplaces with both first and third-party (ISV) products ranging from firewalls and security appliances to pre-configured application servers.
  • Application Services are the real reason that the virtual datacentre services exist, and applications may be web, mobile or API-based. There may also be traditional hosted server applications – especially where IaaS is in use.
  • The whole stack is wrapped with a suite of Management Tools. These exist to ensure that the cloud services are effectively managed in line with expected practices and cover all of the operational tasks that would be expected for any datacentre including: licensing; resource management; billing; HA and disaster recovery/business continuity; backup and recovery; configuration management; software updates; automation; management policies and monitoring/alerting.

If you have feedback – for example, a glaring hole or suggestions for changes, please feel free to leave a comment below.

Microsoft Ignite | The Tour: London Recap

One of the most valuable personal development activities in my early career was a trip to the Microsoft TechEd conference in Amsterdam. I learned a lot – not just technically but about making the most of events to gather information, make new industry contacts, and generally top up my knowledge. Indeed, even as a relatively junior consultant, I found that dipping into multiple topics for an hour or so gave me a really good grounding to discover more (or just enough to know something about the topic) – far more so than an instructor-led training course.

Over the years, I attended further “TechEd”s in Amsterdam, Barcelona and Berlin. I fought off the “oh Mark’s on another jolly” comments by sharing information – incidentally, conference attendance is no “jolly” – there may be drinks and even parties but those are after long days of serious mental cramming, often on top of broken sleep in a cheap hotel miles from the conference centre.

Microsoft TechEd is no more. Over the years, as the budgets were cut, the standard of the conference dropped and in the UK we had a local event called Future Decoded. I attended several of these – and it was at Future Decoded that I discovered risual – where I’ve been working for almost four years now.

Now, Future Decoded has also fallen by the wayside and Microsoft has focused on taking its principal technical conference – Microsoft Ignite – on tour, delivering global content locally.

So, a few weeks ago, I found myself at the ExCeL conference centre in London’s Docklands, looking forward to a couple of days at “Microsoft Ignite | The Tour: London”.

Conference format

Just like TechEd, and at Future Decoded (in the days before I had to use my time between keynotes on stand duty!), the event was broken up into tracks with sessions lasting around an hour. Because that was an hour of content (and Microsoft event talks are often scheduled as an hour, plus 15 minutes Q&A), it was pretty intense, and opportunities to ask questions were generally limited to trying to grab the speaker after their talk, or at the “Ask the Experts” stands in the main hall.

One difference to Microsoft conferences I’ve previously attended was the lack of “level 400” sessions: every session I saw was level 100-300 (mostly 200/300). That’s fine – that’s the level of content I would expect but there may be some who are looking for more detail. If it’s detail you’re after then Ignite doesn’t seem to be the place.

Also, I noticed that Day 2 had fewer delegates and lacked some of the “hype” from Day 1: whereas the Day 1 welcome talk was over-subscribed, the Day 2 equivalent was almost empty and light on content (not even giving airtime to the conference sponsors). Nevertheless, it was easy to get around the venue (apart from a couple of pinch points).

Personal highlights

I managed to cover 11 topics over two days (plus a fair amount of networking). The track format of the event was intended to let a delegate follow a complete learning path but, as someone who’s a generalist (that’s what Architects have to be), I spread myself around to cover:

  • Dealing with a massive onset of data ingestion (Jeramiah Dooley/@jdooley_clt).
  • Enterprise network connectivity in a cloud-first world (Paul Collinge/@pcollingemsft).
  • Building a world without passwords.
  • Discovering Azure Tooling and Utilities (Simona Cotin/@simona_cotin).
  • Selecting the right data storage strategy for your cloud application (Jeramiah Dooley/@jdooley_clt).
  • Governance in Azure (Sam Cogan/@samcogan).
  • Planning and implementing hybrid network connectivity (Thomas Maurer/@ThomasMaurer).
  • Transform device management with Windows Autopilot, Intune and OneDrive (Michael Niehaus/@mniehaus and Mizanur Rahman).
  • Maintaining your hybrid environment (Niel Peterson/@nepeters).
  • Windows Server 2019 Deep Dive (Jeff Woolsey/@wsv_guy).
  • Consolidating infrastructure with the Azure Kubernetes Service (Erik St Martin/@erikstmartin).

In the past, I’d have written a blog post for each topic. I was going to say that I simply don’t have the time to do that these days but by the time I’d finished writing this post, I thought maybe I could have split it up a bit more! Regardless, here are some snippets of information from my time at Microsoft Ignite | The Tour: London. There’s more information in the slide decks – which are available for download, along with the content for the many sessions I didn’t attend.

Data ingestion

Ingesting data can be broken into:

  • Real-time ingestion.
  • Real-time analysis (see trends as they happen – and make changes to create a competitive differentiator).
  • Producing actions as patterns emerge.
  • Automating reactions in external services.
  • Making data consumable (in whatever form people need to use it).

Azure has many services to assist with this – take a look at IoT Hub, Azure Event Hubs, Azure Databricks and more.
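
For a flavour of the ingestion side, here’s a minimal sketch sending events to Azure Event Hubs with the azure-eventhub Python SDK – the connection string, hub name and payloads are all placeholders:

```python
import json
from azure.eventhub import EventData, EventHubProducerClient

# Placeholder connection details - substitute your own namespace.
conn_str = "Endpoint=sb://mynamespace.servicebus.windows.net/;..."

producer = EventHubProducerClient.from_connection_string(
    conn_str, eventhub_name="telemetry"
)

with producer:
    batch = producer.create_batch()  # batches respect the size limit
    for reading in ({"sensor": 1, "temp": 21.5}, {"sensor": 2, "temp": 19.8}):
        batch.add(EventData(json.dumps(reading)))
    producer.send_batch(batch)       # one network call for the whole batch
```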

Enterprise network connectivity for the cloud

Cloud traffic is increasing whilst traffic that remains internal to the corporate network is in decline. Traditional management approaches are no longer fit for purpose.

Office applications use multiple persistent connections – this causes challenges for proxy servers which generally degrade the Office 365 user experience. Remediation is possible, with:

  • Differentiated traffic – follow Microsoft advice to manage known endpoints, including the Office 365 IP address and URL web service (see the sketch after this list).
  • Let Microsoft route traffic (data is in a region, not a place). Use DNS resolution to egress connections close to the user (a list of all Microsoft peering locations is available). Optimise the route length and avoid hairpins.
  • Assess network security using application-level security, reducing IP ranges and ports and evaluating the service to see if some activities can be performed in Office 365, rather than at the network edge (e.g. DLP, AV scanning).
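
On the first of those points, the endpoint data is published via a web service that can be queried programmatically. A quick sketch (the worldwide URL and the required clientrequestid GUID are as documented by Microsoft; the filtering is my illustration):

```python
import uuid
import requests

# The Office 365 IP address and URL web service (worldwide instance).
url = "https://endpoints.office.com/endpoints/worldwide"
params = {"clientrequestid": str(uuid.uuid4())}  # required correlation GUID

endpoint_sets = requests.get(url, params=params, timeout=30).json()

# The "Optimize" category marks the endpoints that benefit most from
# direct routing (i.e. bypassing proxies and inspection devices).
for entry in endpoint_sets:
    if entry.get("category") == "Optimize":
        print(entry.get("serviceAreaDisplayName"), entry.get("urls", []))
```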

For Azure:

  • Azure ExpressRoute is a connection to the edge of the Microsoft global backbone (not to a datacentre). It offers two lines for resilience and two peering types at the gateway – private peering and Microsoft peering.
  • Azure Virtual WAN can be used to build a hub for a region and to connect sites.
  • Replace branch office routers with software-defined (SDWAN) devices and break out where appropriate.
[Diagram: the Microsoft global network.]

Passwordless authentication

Basically, there are three options:

  • Windows Hello.
  • Microsoft Authenticator.
  • FIDO2 Keys.

Azure tooling and utilities

Useful resources include:

Selecting data storage for a cloud application

What to use? It depends! Classify data by:

  • Type of data:
    • Structured (fits into a table)
    • Semi-structured (may fit in a table but may also use outside metadata, external tables, etc.)
    • Unstructured (documents, images, videos, etc.)
  • Properties of the data:
    • Volume (how much)
    • Velocity (change rate)
    • Variety (sources, types, etc.)
Item              | Type            | Volume | Velocity | Variety
Product catalogue | Semi-structured | High   | Low      | Low
Product photos    | Unstructured    | High   | Low      | Low
Sales data        | Semi-structured | Medium | High     | High

How to match data to storage:

  • Storage-driven: build apps on what you have.
  • Cloud-driven: deploy to the storage that makes sense.
  • Function-driven: build what you need; storage comes with it.
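
To make that concrete, here’s a toy sketch mapping the classifications above to typical Azure stores – the mapping is my illustration rather than anything from the session, and real choices also depend on volume, velocity and variety:

```python
# Illustrative mapping from data classification to a typical Azure store.
SUGGESTED_STORES = {
    "structured": "Azure SQL Database",    # fits into tables
    "semi-structured": "Azure Cosmos DB",  # flexible schema, outside metadata
    "unstructured": "Azure Blob Storage",  # documents, images, videos
}

def suggest_store(data_type: str) -> str:
    return SUGGESTED_STORES.get(data_type.lower(), "unclassified - review first")

for item, data_type in [("Product catalogue", "semi-structured"),
                        ("Product photos", "unstructured"),
                        ("Sales data", "semi-structured")]:
    print(f"{item}: {suggest_store(data_type)}")
```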

Governance in Azure

It’s important to understand what’s running in an Azure subscription – consider cost, security and compliance:

  • Review (and set a baseline):
    • Tools include: Resource Graph; Cost Management; Security Center; Secure Score.
  • Organise (housekeeping to create a subscription hierarchy, classify subscriptions and resources, and apply access rights consistently):
    • Tools include: Management Groups; Tags; RBAC.
  • Audit:
    • Make changes to implement governance without impacting people/work. Develop policies, apply budgets and audit the impact of the policies.
    • Tools include: Cost Management; Azure Policy.
  • Enforce:
    • Change policies to enforcement, add resolution actions and enforce budgets.
    • Consider what will happen in the event of non-compliance.
    • Tools include: Azure Policy; Cost Management; Azure Blueprints.
  • (Loop back to review)
    • Have we achieved what we wanted to?
    • Understand what is being spent and why.
    • Know that only approved resources are deployed.
    • Be sure of adhering to security practices.
    • Opportunities for further improvement.

Planning and implementing hybrid network connectivity

Moving to the cloud allows for fast deployment but planning is just as important as it ever was. Meanwhile, startups can be cloud-only but most established organisations have some legacy and need to keep some workloads on-premises, with secure and reliable hybrid communication.

Considerations include:

  • Extension of the internal protected network:
    • Should workloads in Azure only be accessible from the Internal network?
    • Are Azure-hosted workloads restricted from accessing the Internet?
    • Should Azure have a single entry and egress point?
    • Can the connection traverse the public Internet (compliance/regulation)?
  • IP addressing:
    • Existing addresses on-premises; public IP addresses.
    • Namespaces and name resolution.
  • Multiple regions:
    • Where are the users (multiple on-premises sites); where are the workloads (multiple Azure regions); how will connectivity work (should each site have its own connectivity)?
  • Azure virtual networks:
    • Form an isolated boundary with secure communications.
    • Azure-assigned IP addresses (no need for a DHCP server).
    • Segmented with subnets.
    • Network Security Groups (NSGs) create boundaries around subnets.
  • Connectivity:
    • Site to site (S2S) VPNs at up to 1Gbps
      • Encrypted traffic over the public Internet to the GatewaySubnet in Azure, which hosts VPN Gateway VMs.
      • 99.9% SLA on the Gateway in Azure (not the connection).
      • Don’t deploy production workloads on the GatewaySubnet; /26, /27 or /28 subnets recommended (see the sizing sketch after this list); don’t apply NSGs to the GatewaySubnet – i.e. let Azure manage it.
    • Dedicated connections (Azure ExpressRoute): private connection at up to 10Gbps to Azure with:
      • Private peering (to access Azure).
      • Microsoft peering (for Office 365, Dynamics 365 and Azure public IPs).
      • 99.9% SLA on the entire connection.
    • Other connectivity services:
      • Azure ExpressRoute Direct: a 100Gbps direct connection to Azure.
      • Azure ExpressRoute Global Reach: using the Microsoft network to connect multiple local on-premises locations.
      • Azure Virtual WAN: branch to branch and branch to Azure connectivity with software-defined networks.
  • Hybrid networking technologies: see the VPN Gateway, ExpressRoute and Virtual WAN services described above.
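
One small practical point from the list above is GatewaySubnet sizing. Python’s ipaddress module makes the arithmetic trivial – the note about Azure reserving five addresses per subnet reflects Azure’s documented behaviour, not anything from the session:

```python
import ipaddress

# Azure reserves 5 IP addresses in every subnet (the network address, the
# default gateway, two for Azure DNS, and the broadcast address).
for prefix in ("10.0.255.0/26", "10.0.255.0/27", "10.0.255.0/28"):
    subnet = ipaddress.ip_network(prefix)
    print(f"{prefix}: {subnet.num_addresses - 5} usable addresses in Azure")
```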

Modern Device Management (Autopilot, Intune and OneDrive)

The old way of managing PC builds:

  1. Build an image with customisations and drivers
  2. Deploy to a new computer, overwriting what was on it
  3. Expensive and time-consuming – and the device already has a perfectly good OS.

Instead, how about:

  1. Unbox PC
  2. Transform with minimal user interaction
  3. Device is ready for productive use

The transformation is:

  • Take OEM-optimised Windows 10:
    • Windows 10 Pro and drivers.
    • Clean OS.
  • Plus software, settings, updates, features, user data (with OneDrive for Business).
  • Ready for productive use.

The goal is to reduce the overall cost of deploying devices. Ship to a user with half a page of instructions…

Windows Autopilot overview

Autopilot deployment is cloud driven and will eventually be centralised through Intune:

  1. Register device:
    • From OEM or Channel (manufacturer, model and serial number).
    • Automatically (existing Intune-managed devices).
    • Manually using a PowerShell script to generate a CSV file with serial number and hardware hash, which is then uploaded to the Intune portal.
  2. Assign Autopilot profile:
    • Use Azure AD Groups to assign/target.
    • The profile includes settings such as deployment mode, BitLocker encryption, device naming, out of box experience (OOBE).
    • An Azure AD device object is created for each imported Autopilot device.
  3. Deploy:
    • Needs Azure AD Premium P1/P2
    • Scenarios include:
      • User-driven with Azure AD:
        • Boot to OOBE, choose language, locale, keyboard and provide credentials.
        • The device is joined to Azure AD, enrolled to Intune and policies are applied.
        • User signs on and user-assigned items from Intune policy are applied.
        • Once the desktop loads, everything is present (including file links in OneDrive) – time depends on the software being pushed.
      • Self-deploying (e.g. kiosk, digital signage):
        • No credentials required; device authenticates with Azure AD using TPM 2.0.
      • User-driven with hybrid Azure AD join:
        • Requires Offline Domain Join Connector to create AD DS computer account.
        • Device connected to the corporate network (in order to access AD DS), registered with Autopilot, then as before.
        • Sign on to Azure AD and then to AD DS during deployment. If they use the same UPN then it makes things simple for users!
      • Autopilot for existing devices (Windows 7 to 10 upgrades):
        • Backup data in advance (e.g. with OneDrive)
        • Deploy generic Windows 10.
        • Run Autopilot user-driven mode (you can’t harvest hardware hashes in Windows 7, so use a JSON config file in the image – the offline equivalent of a profile. Intune will ignore the unknown device and Autopilot will use the file instead; after deployment of Windows 10, Intune will notice a PC in the group and apply the profile, so it will work if the PC is reset in future).

Autopilot roadmap (1903) includes:

  • “White glove” pre-provisioning for end users: QR code to track, print welcome letter and shipping label!
  • Enrolment status page (ESP) improvements.
  • Cortana voiceover disabled on OOBE.
  • Self-updating Autopilot (update Autopilot without waiting to update Windows).

Maintaining your hybrid environment

Common requirements in an IaaS environment include wanting to use a policy-based configuration with a single management and monitoring solution and auto-remediation.

Azure Automation allows configuration and inventory; monitoring and insights; and response and automation. The Azure Portal provides a single pane of glass for hybrid management (Windows or Linux; any cloud or on-premises).

For configuration and state management, use Azure Automation State Configuration (built on PowerShell Desired State Configuration).

Inventory can be managed with Log Analytics extensions for Windows or Linux. An Azure Monitoring Agent is available for on-premises or other clouds. Inventory is not instant though – it can take 3-10 minutes for Log Analytics to ingest the data. Changes can be visualised (for state tracking purposes) in the Azure Portal.

Azure Monitor and Log Analytics can be used for data-driven insights, unified monitoring and workflow integration.

Responding to alerts can be achieved with Azure Automation Runbooks, which store scripts in Azure and run them in Azure. Scripts can use PowerShell or Python, so both Windows and Linux are supported. A webhook can be triggered with an HTTP POST request. A hybrid runbook worker can be used to run on-premises or in another cloud.
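
For example, triggering a runbook via its webhook really is just an HTTP POST – a minimal sketch, where the webhook URL (which embeds a security token when Azure Automation generates it) and the payload are placeholders:

```python
import requests

# Placeholder webhook URL - generated by Azure Automation for the runbook.
webhook_url = "https://<region>.azure-automation.net/webhooks?token=..."

# Optional payload; the runbook receives this via its WebhookData parameter.
payload = {"vmName": "web01", "action": "restart"}

response = requests.post(webhook_url, json=payload, timeout=30)
response.raise_for_status()  # expect 202 Accepted - the job has been queued
print("Automation job queued:", response.text)
```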

It’s possible to use the Azure VM agent to run a command on a VM from the Azure portal, without logging in!

Windows Server 2019

Windows Server strategy starts with Azure. Windows Server 2019 is focused on:

  • Hybrid:
    • Backup/connect/replicate VMs.
    • Storage Migration Service to migrate unstructured data into Azure IaaS or another on-premises location (from 2003+ to 2016/19).
      1. Inventory (interrogate storage, network security, SMB shares and data).
      2. Transfer (pairings of source and destination), including ACLs, users and groups. Details are logged in a CSV file.
      3. Cutover (make the new server look like the old one – same name and IP address). Validate before cutover – ensure everything will be OK. Read-only process (except change of name and IP at the end for the old server).
    • Azure File Sync: centralise file storage in Azure and transform existing file servers into hot caches of data.
    • Azure Network Adapter to connect servers directly to Azure networks (see above).
  • Hyper-converged infrastructure (HCI):
    • The server market is still growing and is increasingly SSD-based.
    • A traditional rack looked like: a SAN, storage fabric, hypervisors, appliances (e.g. load balancers) and top-of-rack Ethernet switches.
    • Now we use standard x86 servers with local drives and software-defined everything. Manage with Admin Center in Windows Server (see below).
    • Windows Server now has support for persistent memory: DIMM-based; still there after a power-cycle.
    • The Windows Server Software Defined (WSSD) programme is the Microsoft approach to software-defined infrastructure.
  • Security: shielded VMs for Linux (VM as a black box, even for an administrator); integrated Windows Defender ATP; Exploit Guard; System Guard Runtime.
  • Application innovation: semi-annual updates are designed for containers. Windows Server 2019 is the latest LTSC release, so it includes the 1709/1803 additions:
    • Enable developers and IT Pros to create cloud-native apps and modernise traditional apps using containers and microservices.
    • Linux containers on a Windows host.
    • Service Fabric and Kubernetes for container orchestration.
    • Windows Subsystem for Linux.
    • Optimised images for Server Core and Nano Server.

Windows Admin Center is core to the future of Windows Server management and, because it’s based on remote management, servers can be core or full installations – even containers (logs and console). Download from http://aka.ms/WACDownload

  • 50MB download, no need for a server. Runs in a browser and is included in Windows/Windows Server licence
  • Runs on a layer of PowerShell. Use the >_ icon to see the raw PowerShell used by Admin Center (copy and paste to use elsewhere).
  • Extensible platform.

What’s next?

  • More cloud integration
  • Update cadence is:
    • Insider builds every 2 weeks.
    • Semi-annual channel every 6 months (specifically for containers):
      • 1709/1803/1809/19xx.
    • Long-term servicing channel
      • Every 2-3 years.
      • 2016, 2019 (in September 2018), etc.

Windows Server 2008 and 2008 R2 reach the end of support in January 2020 but customers can move Windows Server 2008/2008 R2 servers to Azure and get 3 years of security updates for free (on-premises support is chargeable).

Further reading: What’s New in Windows Server 2019.

Containers/Azure Kubernetes Service

Containers:

  • Are fully-packaged applications that use a standard image format for better resource isolation and utilisation.
  • Are ready to deploy via an API call.
  • Are not virtual machines (for Linux).
  • Do not use hardware virtualisation.
  • Offer no hard security boundary (for Linux).
  • Can be more cost effective/reliable.
  • Have no GUI.

Kubernetes is:

  • An open source system for auto-deployment, scaling and management of containerized apps.
  • Container Orchestrator to manage scheduling; affinity/anti-affinity; health monitoring; failover; scaling; networking; service discovery.
  • Modular and pluggable.
  • Self-healing.
  • Designed by Google based on a system they use to run billions of containers per week.
  • Described in “Phippy goes to the zoo”.

Azure container offers include:

  • Azure Container Instances (ACI): containers on demand (Linux or Windows) with no need to provision VMs or clusters; per-second billing; integration with other Azure services; a public IP; persistent storage.
  • Azure App Service for Linux: a fully-managed PaaS for containers including workflows and advanced features for web applications.
  • Azure Kubernetes Service (AKS): a managed Kubernetes offering.

Wrap-up

So, there you have it. An extremely long blog post with some highlights from my attendance at Microsoft Ignite | The Tour: London. It’s taken a while to write up so I hope the notes are useful to someone else!

UK Government Protective Marking and the Microsoft Cloud

I recently heard a Consultant from another Microsoft partner talking about storing “IL3” information in Azure. That rang alarm bells with me, because Impact Levels (ILs) haven’t been a “thing” for UK Government data since April 2014. For the record, here’s the official guidance on the UK Government data security classifications and this video explains why the system was changed:

Meanwhile, this one is a good example of what it means in practice:

So, what does that mean for storing data in Azure, Dynamics 365 and Office 365? Basically, information classified OFFICIAL can be stored in the Microsoft Cloud – for more information, refer to the Microsoft Trust Center. And, because OFFICIAL-SENSITIVE is not another classification (it’s merely highlighting information where additional care may be needed), that’s fine too.

I’ve worked with many UK Government organisations (local/regional, and central) and most are looking to the cloud as a means to reduce costs and improve services. The fact that more than 90% of public data is classified OFFICIAL (indeed, that’s the default for anything in Government) is no reason to avoid using the cloud.

Weeknote 6: User group and MVP events; a new smartwatch; ghost trains; and the start of Christmas (Week 48, 2017)

Milton Keynes – Rochdale – London – Leicester. Not quite New York – London – Paris but those are the towns and cities on my itinerary this week.

Every now and again, I find myself counting down the days to the weekend. This week has been different. It was manic, squeezing work in around lots of other activities but it was mostly enjoyable too.

The week at work

My work week started off with an opportunity to input to a report that I find quite exciting. I can’t say too much at the moment (though it should be released within the next couple of weeks and I’ll be shouting about it then) but it’s one of those activities that makes me think “I’d like to do more of this” (I already get referred to as the extra member of the risual marketing team, which I think they mean as a good thing!).

Bills have to be paid though (i.e. I need to keep my utilisation up!), so I’ve also had some consulting in the mix, writing a strategy for a customer who needs to modernise their datacentre.

On Wednesday evening, I managed to fit in a UK Azure User Group (@UKAzure) meeting in London, with Paul Andrew (@MrPaulAndrew) talking about Azure Data Factory – another opportunity to fill some gaps in my knowledge.

Then, back to work on Thursday, squeezing in a full day’s work before heading to the National Space Centre in Leicester in the afternoon for the UK MVP Community Connection. I’m not an MVP anymore (I haven’t been since 2011) but I am a member of the MVP Reconnect Programme, which means I still get invited to some of the events – and the two I’ve been to so far have been really worthwhile. One of my favourite sessions at the last event was Tony Wells from Resource IT (the guys who create the Microsoft Abbreviation Dictionary) talking about storytelling. This time we had a 3-hour workshop with an opportunity to put some of the techniques into practice.

The evening started with drinks in the space tower, then an IMAX film before dinner (and a quiz) in the Space Centre, surrounded by the exhibits. We returned the next day for a Microsoft business update, talks on ethics and diversity, on extending our audience reach and on mixed reality.

Unfortunately, my Friday afternoon was hijacked by other work… and the work week also spilt over into the weekend – something I generally try to avoid and which took the shine off things somewhat…

Social

I’ve had a full-on week with family too: my eldest son is one of six from Milton Keynes who have been selected to attend the Kandersteg International Scout Centre (KISC) in 2019 and, together with ten more who are off to the World Scout Jamboree in West Virginia, we have a lot of fund-raising to do (about £45,000 in total). That meant selling raffle tickets in the shopping centre for the opportunity to win a car on Monday evening, and a meeting on Tuesday evening to talk about fundraising ideas…

So, that’s out every evening, and a long day every day this week… by Friday I was ready to collapse in a heap.

The weekend

No cyclocross this weekend (well, there was, but it clashed with football), so I was on a different sort of Dad duty, running the line and trying not to anger parents from the other team with my ropey knowledge of the offside rule.

It’s also December now, so my family have declared that Christmas celebrations can begin. Right from the moment I returned home on Friday evening I was accused of not being Christmassy enough and I was forced to listen to “Christmas Music” on the drive to my son’s football match (the compromise was that it could be my Christmas playlist).

Even I was amused to be followed in my car by a certain jolly chap:

My part in decorating the house consists of getting everything down from the loft, putting up the tree and lights, and then finding myself somewhere to hide for a couple of hours until it all looks lovely and sparkly. Unfortunately, the hiding time was actually spent polishing a presentation for Monday and fighting with Concur to complete my expenses… not exactly what I had in mind…

New tech

A couple of weeks ago, I mentioned that we now have a teenager in the house and my eldest son has managed to save enough birthday money to buy a smartwatch. He was thinking of a Garmin device until I reminded him how bad their software is when we sync our bike computers so he went for a Samsung Gear Sport. It looks pretty good if you have an Android phone. I have an iPhone and an Apple Watch (as you may recall from my recent tales of woe) but if I was an Android guy, I think the Gear Sport would be my choice…

Ghost trains

I forgot to add this tale to last week’s week note but I was travelling back home from Stafford recently when I noticed a re-branded Virgin Pendolino at the platform. My train wasn’t due for another 10 minutes so I didn’t check out where this one was going, so I was a little surprised to pass it again as I arrived in Milton Keynes two hours later, after I’d gone the long way (via Birmingham) and changed trains…

Checking on Realtime Trains showed me that I could have caught a direct train from Stafford, but it wasn’t on the public timetable. Indeed, although it stops at several stations, it’s listed as an empty coaching stock working (which is presumably why it is pathed on the slow lines including the Northampton loop). So, in addition to trains that stop at Milton Keynes only to set down (southbound) or pick up (northbound), it seems that Virgin run “ghost trains” too!

Listening

I listen to a lot of podcasts when I’m in the car. This week I spent a lot of time in the car. I recommend these two episodes:

Twitter highlights

I’m no GDPR expert but this looked useful:

Company branding is great until it makes the information you give out next-to-useless:

Credit is due to the social media team handling the @PremierInn account for Whitbread – they quickly confirmed that it is a J not an I (though I had worked it out).

@HolidayInn were equally on the ball when I complained about a lack of power sockets (and traffic noise insulation) at their Leicester City Centre hotel. Thankfully the replies were limited to Twitter and email – not midnight calls as my colleague Gavin Morrisson found when he tweeted about another Holiday Inn!

This made me smirk (I haven’t “elevated” my Mac yet…):

If you don’t get the joke, this should provide context.

I like this definition of “digital [transformation]”:

This short video looks at how we need to “debug the gender gap”:

The full film is available to stream/download from various sources… I intend to watch.

And, to wrap up with some humour, I enjoyed Chaz Hutton’s latest Post-it sketch:

(for more like this, check out InstaChaz on Instagram)

Finit

That’s it for now… more next week…

Weeknote 5: Playing with Azure; Black Friday; substandard online deliveries; and the usual tech/cycling/family mix (Week 47, 2017)

This weeknote is a bit of a rush-job – mostly because it’s Sunday afternoon and I’m writing this at the side of a public swimming pool whilst supervising a pool party… it will be late tonight when I get to finish it!

The week

There’s not a huge amount to say about this week though. It’s been as manic as usual, with a mixture of paid consulting days, pre-sales and time at Microsoft.

The time at Microsoft was excellent though – I spent Tuesday in their London offices, where Daniel Baker (@AzureDan) gave an excellent run-through of some of the capabilities in Azure. I like to think I have a reasonable amount of Azure experience – I was really looking to top up my knowledge with recent developments, as well as to learn a bit more about some of the advanced workloads – but I learned a lot that day. I think Dan is planning some more videos (so watch his Twitter feed) but his “Build a Company in a Day” slides are available for download.

On the topic of Azure, I managed to finish the sentiment analysis demo I’ve been working on, based on a conversation with my colleague Matt Bradley (@MattOnMicrosoft) – Daniel Baker also touched on it in his Build a Company in a Day workshop. It uses an Azure Logic App to:

  1. Monitor Twitter on a given topic;
  2. Detect sentiment with Azure Cognitive Services Text Analytics;
  3. Push data into a Power BI dataset for visualisation;
  4. Send an email if the sentiment is below a certain value.

It’s a bit rough-and-ready (my Power BI skills are best described as “nascent”) but it’s not a bad demo – and it costs me pennies to run. You can also do something similar with Microsoft Flow instead of an Azure Logic App.
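
The Logic App does all of this without code but, for the curious, the sentiment-scoring step looks something like the sketch below using the Text Analytics (Cognitive Services) Python SDK. The endpoint, key and alert threshold are placeholders – this isn’t my actual demo, which is built entirely in the Logic App designer:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder endpoint/key for a Cognitive Services resource.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

tweets = ["Loving the new release!", "This outage is a disaster."]
ALERT_THRESHOLD = 0.3  # hypothetical - mirrors the "send an email" rule

for doc in client.analyze_sentiment(tweets):
    positive = doc.confidence_scores.positive
    print(f"{doc.sentiment:>8} ({positive:.2f}): {tweets[int(doc.id)]}")
    if positive < ALERT_THRESHOLD:
        print("  -> this is where the alert email would be sent")
```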

Black Friday

I hate Black Friday. Just an excuse to shift some excess stock onto greedy consumers ahead of Christmas…

…but it didn’t stop me buying things:

  • An Amazon Fire TV Stick to make our TV smart again (it has fewer and fewer apps available because it’s more than 3 years old…). Primarily I was after YouTube but my youngest is very excited about the Manchester City app!
  • Another set of Bluetooth speakers (because the kids keep “borrowing” my Bose Soundlink Mini 2).
  • Some Amazon buttons at a knock-down £1.99 (instead of £4.99) for IoT hacking.
  • A limited edition GCN cycle jersey that can come back to me from my family as a Christmas present!

The weekend

My weekend involved: cycling (my son was racing cyclocross again in the Central CX League); an evening out with my wife (disappointing restaurant in the next town followed by great gin in our local pub); a small hangover; some Zwift (to blow away the cobwebs – and although it was sunny outside, the chances of hitting black ice made the idea of a real road bike ride a bit risky); the pool party I mentioned earlier (belated 13th birthday celebrations for my eldest); 7 adolescent kids eating an enormous quantity of food back at ours; and… relax.

Other stuff

My eldest son discovered that the pressure washer can make bicycle bar tape white again! (I wrote a few years back about using baby wipes to clean bar tape but cyclocross mud goes way beyond even their magical properties.)

After posting my 7 days 7 photos efforts last week, I saw this:

I’ll get my coat.

I also learned a new term: “bikeshedding” (nothing to do with cycling… or smoking… or other teenage activities…):

It’s scary to see how much we’re cluttering space – not just our planet:

There’s a new DNS service in town:

I’ve switched the home connection from OpenDNS (now owned by Cisco) to 9.9.9.9 and will report back in a while…

This ad tells a great story:

Curve is now available to ordinary employees and not just business-people!

We recently switched back to Tesco for our online grocery shopping (we left years ago because it seemed someone was taking one or two items from every order, hoping we wouldn’t notice). Well, it seems things have improved in some ways, but not in others…

On the subject of less-than-wonderful online shopping experiences, after I criticised John Lewis for limiting website functionality instead of bursting to the cloud:

It seems they got their own back by shipping my wife’s Christmas present with Hermes, who dumped it on the front doorstep (outside the notified delivery timeframe) and left a card to say it had been delivered to a secure location:

It may be silly but this made me laugh:

Finally, for this week, I borrowed my son’s wireless charger to top up my iPhone. Charging devices without cables – it’s witchcraft, I tell you! Witchcraft!

Next week, I’ll be back with my customer in Rochdale, consulting on what risual calls the “Optimised Service Vision” so it was interesting to see Matt Ballantine’s slides on Bringing Service Design to IT Service. I haven’t seen Matt present these but it looks like our thinking is quite closely aligned…

That’s all folks!

That’s all for this week. I’m off to watch some more Halt and Catch Fire before I get some sleep in preparation for what looks like a busy week…

Weeknote 4: music; teenagers; creating a chatbot; tech, more tech and tech TV; 7 day photo challenge; and cycling (Week 46, 2017)

Another week, another weeknote…

There’s not much to say about work this week – I’ve mostly been writing documentation. I did spend a good chunk of Monday booking hotels and travel, only to find 12 days of consulting drop out of my diary again on Friday (cue hotel cancellations, etc.) but I guess that’s just life!

Family life: grime, rap and teens!

Outside work, it’s been good to be close to home and get involved in family life again.

I had the amusement of my 11 year-old and his friends rapping to their grime music in my car on the way to/from football training this week (we’re at the age where it’s “Dad, can we have my music on please?”) but there’s only so much Big Shaq I can take so I played some Eminem on the way back. It was quite endearing to hear my son say “I didn’t know you knew about Eminem!” after I dropped his mates off. I should make the most of these moments as the adulation is dropping off now he’s approaching his teens!

Talking of teens, my eldest turned 13 this week, which was a big day in the Wilson household:


I’m not sure how this little fella grew into this strong chap (or where the time in between has gone) but we introduced him to the Harry Enfield “Kevin the teenager” videos a few months ago. I thought they were funny when I was younger but couldn’t believe how accurate they are now I’m a parent. Our boys clearly understood the message too and looked a bit sheepish!

Tech

I did play with some tech this week – and I managed to create my very own chatbot without writing any code:

Virtual Mark (MarkBot1) uses the Microsoft QnA Maker and runs in Microsoft Azure. The process is described in James Marshall’s blog post and it’s very straightforward. I’m using Azure Functions and so far this serverless solution has cost me absolutely nothing to run!

It’s also interesting to read some of the queries that the bot has been asked, which have led to me extending its knowledge base a few times now. A question-and-answer chatbot is probably better suited to a set of tightly bounded questions on a topic (the range of things people can ask about me is pretty broad) but it’s a nice demo…
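For anyone curious what sits underneath the no-code experience: a published QnA Maker knowledge base is just an HTTP endpoint that takes a question and returns scored answers. Here’s a minimal sketch of querying one from Python – the host, knowledge base ID and endpoint key are placeholders, not real values (yours come from the service’s publish settings):

```python
# Hypothetical query against a published QnA Maker knowledge base.
# HOST, KB_ID and ENDPOINT_KEY are placeholders - substitute your own.
import requests

HOST = "https://your-qna-service.azurewebsites.net/qnamaker"
KB_ID = "your-knowledge-base-id"
ENDPOINT_KEY = "your-endpoint-key"

response = requests.post(
    f"{HOST}/knowledgebases/{KB_ID}/generateAnswer",
    headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
    json={"question": "Who is Mark?"},
)
response.raise_for_status()

# The service returns candidate answers with confidence scores (0-100).
for answer in response.json().get("answers", []):
    print(f"{answer['score']:.0f}: {answer['answer']}")
```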

I also upgraded my work PC to the latest Windows 10 and Office builds (1709 and 1710 respectively). That gave me the ability to use a digital pen as a presentation clicker, which is nice, in a geek-novelty kind of way:

Tech TV

I have an Amazon Prime membership, which includes access to Amazon Prime Instant Video – including several TV shows that would otherwise only be available in the US. One I enjoy is Mr Robot – which, although completely weird at times, is also strangely addictive – and this week’s episode was particularly good (scoring 9.9 on IMDb). Whilst I was waiting for the next episode to come around, I found that I’d missed a whole season of Halt and Catch Fire too (I binge-watched the first three after they were recommended to me by Howard van Rooijen/@HowardvRooijen). Series 4 is the final one and that’s what’s presently keeping me from my sleep… but it’s really good!

I don’t have Netflix, but Silicon Cowboys has been recommended to me by Derek Goodridge (@workerthread). Just like the first series of Halt and Catch Fire, it’s the story of the original IBM PC clone manufacturers – Compaq – but in documentary format, rather than as a drama series.

iPhone images

Regular readers may recall that a few weeks ago I found myself needing to buy a new iPhone after falling into the sea with my old one in my pocket, twisting my ankle in the process…

People have been telling me for ages that “the latest iPhone has a great camera” and, in daylight, I’m really impressed by the clarity and also the bokeh effect. It’s still a mobile phone camera with a tiny sensor though, and that means it’s still really poor at night. If a full-frame DSLR struggles at times, an iPhone will be challenged, I guess – but I’m still finding that I’m inspired to use the camera more.

7 Days 7 Photos

Last week, I mentioned the 7 days, 7 photos challenge. I’ve completed mine now. The photos are supposed to be posted without explanation but, now that I have a set of seven, I thought I’d explain what I used and why. I get the feeling that some people are just posting seven pictures, one a day, but mine really do relate to what I was doing each day – and I tried to nominate people for the challenge each day based on their relevance to the subject…

Day 1

7 Days 7 Photos Day 1

I spotted this pub as I walked to Farringdon station. I wondered if “the Clerk and Well” was the origin of the name “Clerkenwell” and it turns out that it is. Anyway, I liked the view of the traditional London pub (I was on my way home from another one!) and challenged my brother, who’s a publican…

Day 2

7 Days 7 Photos Day 2

I liked the form in this photograph of my son’s CX bike on the roof of my car. It didn’t look so clean when we got back from cyclocross training though! I challenged my friend Andy, whose 40th birthday was the reason for my ride from London to Paris a few years ago…

Day 3

7 Days 7 Photos Day 3

Not technically a single photo – let’s call it a triptych. I used the Diptic app (as recommended by Ben Seymour/@bseymour) to create this collage. I felt it was a little too personal to nominate my friend Kieran, whose medals are in the lower-left image, so I nominated my friend James, who was leading the Scouts in our local Remembrance Day parade.

Day 4

7 Days 7 Photos Day 4

I found some failed backups on my Synology NAS this week. For some reason, Hyper Backup complained it didn’t have enough storage (I’m pretty sure it wasn’t Azure that ran out of space!) so I ran several backups, each one adding another folder until I had all of my new photos in the backup set. I felt the need to challenge a friend who works in IT – so I challenged my friend Stuart.

Day 5

7 Days 7 Photos Day 5

My son was cake-baking, for Children in Need, I think – or maybe it was my other son, baking his birthday cake. I can’t really remember. I challenged a friend who runs a local cafe and regularly bakes muffins…

Day 6

7 Days 7 Photos Day 6

Self-explanatory. My son’s own creation for his birthday. I challenged my wife for this one.

Day 7

7 Days 7 Photos Day 7

The last image is from an evening helping out at Scouts. My photos of attempts to purify water through distillation were not that great, so I took a picture of the Scout badge instead, and nominated my friend Phil, who’s another of the local Scout leaders.

(All seven of these pictures were taken on an iPhone 8 Plus using the native camera app, then edited in Snapseed and uploaded to Flickr)

Other stuff

I like this:

And I remember shelves of tapes like these (though mine were all very neatly written, or computer-generated, even back in the 1980s):

On the topic of music, look up Master Boot Record on Spotify:

And this “Soundtrack for Coding” is pretty good for writing documentation too…

I added second-factor authentication to my WordPress blog this week. I couldn’t find anything that uses the Microsoft Authenticator, but this 2FA WordPress plugin from miniOrange uses Google Authenticator and was very easy to set up.
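Under the hood, Google Authenticator-style 2FA is just TOTP (RFC 6238): the plugin and the app share a Base32 secret at setup time (usually via a QR code) and each independently derives a six-digit code from the current time. Here’s a minimal sketch using only Python’s standard library – the secret shown is a well-known documentation example, not a real one:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current TOTP code from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period              # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret for illustration only - never reuse a published secret.
print(totp("JBSWY3DPEHPK3PXP"))
```

The server simply computes the same code (usually allowing a window of one or two time steps for clock drift) and compares it with what the user typed.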

Some UK libraries have started loaning BBC micro:bits but unfortunately not yet in my manor:

Being at home all week meant I went to see my GP about my twisted ankle (from the falling-into-the-sea incident). One referral later and I was able to see a physio… who’s already working wonders on helping to repair my damaged ligaments. And he says I can ride my bike too… so I’ll be back on Zwift even if cyclocross racing is out for the rest of the season.

Cycling

On the subject of Zwift, they announced a price rise this week. I understand that these things happen but it’s gone up 50% in the US (and slightly more than that here in the UK). All that really does is drive me to use Zwift in the winter and cancel my membership in the summer. A more reasonable monthly fee might make me more inclined to sign up for 12 months at a time, creating recurring revenue for Zwift. Very strange business model, IMHO.

I particularly liked the last line of this article:

“Five minutes after the race
That was sooo fun! When can I do it again?!”

I may not have been riding cyclocross this weekend, but my son was, and Sunday was the popular Central Cyclocross League race at RAF Halton. With mud, sand, gravel and steep banks, long woodland sections and more, it looked epic. Maybe I’ll get to ride next year!

I did get to play with one of the RAF’s cranes (attached to a flatbed truck) though – amazing how much control there is – and had a go on the road safety rig too.

And of course, what else to eat at a cyclocross event but Belgian fries, mayo and waffles!

Finally, my friends at Kids Racing (@kidsracing) have some new kit in. Check out the video they filmed at the MK Bowl a couple of weeks back – and if you have kids in need of new cycling kit, maybe head over to HUP CC.

Wrap-up

That’s it for this week. Next week I have a bit more variation in my work (including another Microsoft event – Azure Ready in the UK) and I’m hoping to actually get some blog posts written… see you on the other side!

Adopting cloud services means being ready for constant change

There’s a news story today about how Microsoft may be repositioning some (or all) of Skype for Business as Microsoft Teams (the collaborative group-based chat service built on various Office 365 services, Skype for Business in particular).

The details of that story are kind of irrelevant to this post; it’s the reaction I got on Twitter that I felt the need to comment on (when I hit 5 tweeted replies I thought a blog post might be more appropriate).

Change is part of consuming cloud services. There’s a service agreement and a subscription/licensing agreement – customers consume the service as the provider defines it. The service provider will generally give notice of change but you normally have to accept it (or leave). There is no option to stay on legacy versions of software for months or years at a time because you’re not ready to update your ways of working or other connected systems.

That is a big shift and many IT departments have not adjusted their thinking to adopt this new way of working.

I’ve seen many organisations move to cloud services (mostly Office 365 and Azure) whilst sticking with their current approach. They do things like trying to map drive letters to OneDrive because that’s what users are used to, instead of showing them new (and often better) ways of working. They try to use old versions of Office with the latest services and wonder why the user experience is degraded. They think about the on-premises workloads (Exchange, Lync/Skype for Business, SharePoint) instead of the potential provided by the whole productivity platform that they have bought licences to use. They try to turn parts of the service off or hide them from users.

My former colleague Steve Harwood (@SteeveeH) did some work with one of risual’s customers to define a governance structure for Office 365. It’s great work – and maybe I’ll blog about it separately – but the point is that organisations need to think differently for the cloud.

Buying services from Microsoft, Amazon, Google, Salesforce, et al. is not like buying them from a managed services provider that does its best to maintain a steady state and avoid change at all costs (or often at great cost!). Moving to the cloud means constant change. You may not have servers to keep up to date once your apps are sold on an “evergreen” subscription basis, but you will need to keep client software up to date – not just traditional installed apps but mobile apps and browsers too. And when the service gains a new feature, it’s there for adoption. You may have the ability to hide it but that’s just a sticking-plaster solution.

Often the cry is “but we need to train the users”. Do you really? Many of today’s business end users have grown up with technology. They are familiar with using services at home far more advanced than those provided by many workplaces. Intuitive user interfaces can go a long way and there’s no need to provide formal training for many IT changes. Instead, keep abreast of the advertised changes from your service provider (for example the Message Center in Office 365) and decide what the impact is of each new feature. Very few will need a full training package! Some well-written communications, combined with self-help forums and updated FAQs at the Service Desk will often be enough but there’s also the opportunity to offer access to Massive Open Online Courses (MOOCs) where training needs are more extensive.
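As a sketch of what “keeping abreast” might look like in practice, Message Center posts can be pulled programmatically rather than checked by hand. The example below assumes the Microsoft Graph service announcement API (the successor to the original Service Communications API) and a valid access token with the ServiceMessage.Read.All permission – acquiring the token is out of scope here:

```python
# Hypothetical pull of Office 365 Message Center posts via Microsoft Graph.
# ACCESS_TOKEN is a placeholder; obtaining one (e.g. via MSAL) is not shown.
import requests

ACCESS_TOKEN = "your-access-token"

response = requests.get(
    "https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/messages",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$top": "10"},
)
response.raise_for_status()

# Print recent change announcements so they can be triaged for impact.
for message in response.json().get("value", []):
    print(f"{message['lastModifiedDateTime']}  {message['title']}")
```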

There are, of course, examples of service providers rolling out new features with inadequate testing, or with too little notice, but these are edge cases and generally there’s time to react. The problem comes when organisations stick their proverbial heads in the sand and try to ignore the inevitable change.