Microsoft Ignite | The Tour: London Recap

One of the most valuable personal development activities in my early career was a trip to the Microsoft TechEd conference in Amsterdam. I learned a lot – not just technically but about making the most of events to gather information, make new industry contacts, and generally top up my knowledge. Indeed, even as a relatively junior consultant, I found that dipping into multiple topics for an hour or so gave me a really good grounding to discover more (or just enough to know something about the topic) – far more so than an instructor-led training course.

Over the years, I attended further “TechEd”s in Amsterdam, Barcelona and Berlin. I fought off the “oh Mark’s on another jolly” comments by sharing information – incidentally, conference attendance is no “jolly” – there may be drinks and even parties but those are after long days of serious mental cramming, often on top of broken sleep in a cheap hotel miles from the conference centre.

Microsoft TechEd is no more. Over the years, as the budgets were cut, the standard of the conference dropped and in the UK we had a local event called Future Decoded. I attended several of these – and it was at Future Decoded that I discovered risual – where I’ve been working for almost four years now.

Now, Future Decoded has also fallen by the wayside and Microsoft has focused on taking its principal technical conference – Microsoft Ignite – on tour, delivering global content locally.

So, a few weeks ago, I found myself at the ExCeL conference centre in London’s Docklands, looking forward to a couple of days at “Microsoft Ignite | The Tour: London”.

Conference format

Just like at TechEd and Future Decoded (in the days before I had to use my time between keynotes on stand duty!), the event was broken up into tracks, with sessions lasting around an hour. Because that was an hour of solid content (and Microsoft event talks are often scheduled as an hour, plus 15 minutes of Q&A), it was pretty intense, and opportunities to ask questions were generally limited to trying to grab the speaker after their talk, or at the “Ask the Experts” stands in the main hall.

One difference to Microsoft conferences I’ve previously attended was the lack of “level 400” sessions: every session I saw was level 100-300 (mostly 200/300). That’s fine – that’s the level of content I would expect but there may be some who are looking for more detail. If it’s detail you’re after then Ignite doesn’t seem to be the place.

Also, I noticed that Day 2 had fewer delegates and lacked some of the “hype” from Day 1: whereas the Day 1 welcome talk was over-subscribed, the Day 2 equivalent was almost empty and light on content (not even giving airtime to the conference sponsors). Nevertheless, it was easy to get around the venue (apart from a couple of pinch points).

Personal highlights

I managed to cover 11 topics over two days (plus a fair amount of networking). The track format of the event was intended to let a delegate follow a complete learning path but, as someone who’s a generalist (that’s what Architects have to be), I spread myself around to cover:

  • Dealing with a massive onset of data ingestion (Jeramiah Dooley/@jdooley_clt).
  • Enterprise network connectivity in a cloud-first world (Paul Collinge/@pcollingemsft).
  • Building a world without passwords.
  • Discovering Azure Tooling and Utilities (Simona Cotin/@simona_cotin).
  • Selecting the right data storage strategy for your cloud application (Jeramiah Dooley/@jdooley_clt).
  • Governance in Azure (Sam Cogan/@samcogan).
  • Planning and implementing hybrid network connectivity (Thomas Maurer/@ThomasMaurer).
  • Transform device management with Windows Autopilot, Intune and OneDrive (Michael Niehaus/@mniehaus and Mizanur Rahman).
  • Maintaining your hybrid environment (Niel Peterson/@nepeters).
  • Windows Server 2019 Deep Dive (Jeff Woolsey/@wsv_guy).
  • Consolidating infrastructure with the Azure Kubernetes Service (Erik St Martin/@erikstmartin).

In the past, I’d have written a blog post for each topic. I was going to say that I simply don’t have the time to do that these days but by the time I’d finished writing this post, I thought maybe I could have split it up a bit more! Regardless, here are some snippets of information from my time at Microsoft Ignite | The Tour: London. There’s more information in the slide decks – which are available for download, along with the content for the many sessions I didn’t attend.

Data ingestion

Data ingestion can be broken down into:

  • Real-time ingestion.
  • Real-time analysis (see trends as they happen – and make changes to create a competitive differentiator).
  • Producing actions as patterns emerge.
  • Automating reactions in external services.
  • Making data consumable (in whatever form people need to use it).

Azure has many services to assist with this – take a look at IoT Hub, Azure Event Hubs, Azure Databricks and more.
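As a minimal sketch of the real-time ingestion step, here's how events might be pushed into Azure Event Hubs using the azure-eventhub Python SDK (the connection string, hub name and telemetry payload are placeholders of my own, not anything from the session):

```python
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection details - substitute values from your own Event Hubs namespace
producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hubs-connection-string>",
    eventhub_name="telemetry",
)

with producer:
    # Batch events client-side, then send them to the hub in a single call
    batch = producer.create_batch()
    batch.add(EventData('{"deviceId": "sensor-01", "temperature": 21.5}'))
    batch.add(EventData('{"deviceId": "sensor-02", "temperature": 19.8}'))
    producer.send_batch(batch)
```

From there, the same stream can feed something like Azure Databricks for the analysis and action steps.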

Enterprise network connectivity for the cloud

Cloud traffic is increasing whilst traffic that remains internal to the corporate network is in decline. Traditional management approaches are no longer fit for purpose.

Office applications use multiple persistent connections – this causes challenges for proxy servers which generally degrade the Office 365 user experience. Remediation is possible, with:

  • Differentiated traffic – follow Microsoft advice to manage known endpoints, including the Office 365 IP address and URL web service (see the sketch after this list).
  • Let Microsoft route traffic (data is in a region, not a place). Use DNS resolution to egress connections close to the user (a list of all Microsoft peering locations is available). Optimise the route length and avoid hairpins.
  • Assess network security using application-level security, reducing IP ranges and ports and evaluating the service to see if some activities can be performed in Office 365, rather than at the network edge (e.g. DLP, AV scanning).
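As an illustration of the differentiated traffic point (my own sketch, not from the session), the Office 365 IP address and URL web service can be queried over REST to pull back the endpoints in the “Optimize” category, i.e. those that should be routed directly rather than through a proxy:

```python
import uuid
import requests

# The web service expects a client request ID (any GUID) on each call
url = "https://endpoints.office.com/endpoints/worldwide"
endpoints = requests.get(url, params={"clientrequestid": str(uuid.uuid4())}).json()

# 'Optimize' endpoints are the latency-sensitive ones to send direct, bypassing the proxy
for entry in (e for e in endpoints if e.get("category") == "Optimize"):
    print(entry.get("serviceArea"), entry.get("urls", []), entry.get("ips", []))
```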

For Azure:

  • Azure ExpressRoute is a connection to the edge of the Microsoft global backbone (not to a datacentre). It offers two links for resilience and two peering types at the gateway – private and Microsoft (public) peering.
  • Azure Virtual WAN can be used to build a hub for a region and to connect sites.
  • Replace branch office routers with software-defined (SDWAN) devices and break out where appropriate.

Microsoft global network

Passwordless authentication

Basically, there are three options:

  • Windows Hello.
  • Microsoft Authenticator.
  • FIDO2 Keys.

Azure tooling and utilities

A number of useful resources were shared – see the slide deck for the full list of links.

Selecting data storage for a cloud application

What to use? It depends! Classify data by:

  • Type of data:
    • Structured (fits into a table)
    • Semi-structured (may fit in a table but may also use outside metadata, external tables, etc.)
    • Unstructured (documents, images, videos, etc.)
  • Properties of the data:
    • Volume (how much)
    • Velocity (change rate)
    • Variety (sources, types, etc.)
Item               Type             Volume  Velocity  Variety
Product catalogue  Semi-structured  High    Low       Low
Product photos     Unstructured     High    Low       Low
Sales data         Semi-structured  Medium  High      High

How to match data to storage:

  • Storage-driven: build apps on what you have.
  • Cloud-driven: deploy to the storage that makes sense.
  • Function-driven: build what you need; storage comes with it.
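Taking the product photos row above as an example, unstructured data like images maps naturally onto Azure Blob storage; here's a minimal sketch with the azure-storage-blob SDK (the connection string, container and file names are my own placeholders):

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string - use your own storage account's
service = BlobServiceClient.from_connection_string("<storage-connection-string>")

# Assumes the container already exists
container = service.get_container_client("product-photos")

# Upload one image as a blob, overwriting any existing blob of the same name
with open("red-widget.jpg", "rb") as image:
    container.upload_blob(name="red-widget.jpg", data=image, overwrite=True)
```

The semi-structured product catalogue, by contrast, would probably be a better fit for a document store such as Azure Cosmos DB.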

Governance in Azure

It’s important to understand what’s running in an Azure subscription – consider cost, security and compliance:

  • Review (and set a baseline):
    • Tools include: Resource Graph; Cost Management; Security Center; Secure Score.
  • Organise (housekeeping to create a subscription hierarchy, classify subscriptions and resources, and apply access rights consistently):
    • Tools include: Management Groups; Tags; RBAC (see the sketch after this list).
  • Audit:
    • Make changes to implement governance without impacting people/work. Develop policies, apply budgets and audit the impact of the policies.
    • Tools include: Cost Management; Azure Policy.
  • Enforce:
    • Change policies to enforcement, add resolution actions and enforce budgets.
    • Consider what will happen in the case of non-compliance.
    • Tools include: Azure Policy; Cost Management; Azure Blueprints.
  • (Loop back to review)
    • Have we achieved what we wanted to?
    • Understand what is being spent and why.
    • Know that only approved resources are deployed.
    • Be confident that security practices are being followed.
    • Identify opportunities for further improvement.
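To make the “organise” step concrete, here's a hedged sketch of applying classification tags to a resource group with the Azure SDK for Python (the group name, subscription and tag values are illustrative, not from the session):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
resources = ResourceManagementClient(credential, "<subscription-id>")

# Tag a resource group so cost management and policy reporting can group by owner and cost centre
resources.resource_groups.create_or_update(
    "rg-finance-prod",
    {
        "location": "uksouth",
        "tags": {"costCentre": "FIN-001", "owner": "mark", "environment": "production"},
    },
)
```

Azure Policy can then audit, or enforce, the presence of those tags across the subscription hierarchy.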

Planning and implementing hybrid network connectivity

Moving to the cloud allows for fast deployment but planning is just as important as it ever was. Meanwhile, startups can be cloud-only but most established organisations have some legacy and need to keep some workloads on-premises, with secure and reliable hybrid communication.

Considerations include:

  • Extension of the internal protected network:
    • Should workloads in Azure only be accessible from the Internal network?
    • Are Azure-hosted workloads restricted from accessing the Internet?
    • Should Azure have a single entry and egress point?
    • Can the connection traverse the public Internet (compliance/regulation)?
  • IP addressing:
    • Existing addresses on-premises; public IP addresses.
    • Namespaces and name resolution.
  • Multiple regions:
    • Where are the users (multiple on-premises sites); where are the workloads (multiple Azure regions); how will connectivity work (should each site have its own connectivity)?
  • Azure virtual networks:
    • Form an isolated boundary with secure communications.
    • Azure-assigned IP addresses (no need for a DHCP server).
    • Segmented with subnets.
    • Network Security Groups (NSGs) create boundaries around subnets.
  • Connectivity:
    • Site to site (S2S) VPNs at up to 1Gbps
      • Encrypted traffic over the public Internet to the GatewaySubnet in Azure, which hosts VPN Gateway VMs.
      • 99.9% SLA on the Gateway in Azure (not the connection).
      • Don’t deploy production workloads on the GatewaySubnet; /26, /27 or /28 subnets are recommended; don’t apply NSGs to the GatewaySubnet – i.e. let Azure manage it (see the sketch after this list).
    • Dedicated connections (Azure ExpressRoute): private connection at up to 10Gbps to Azure with:
      • Private peering (to access Azure).
      • Microsoft peering (for Office 365, Dynamics 365 and Azure public IPs).
      • 99.95% SLA on the entire connection.
    • Other connectivity services:
      • Azure ExpressRoute Direct: a 100Gbps direct connection to Azure.
      • Azure ExpressRoute Global Reach: using the Microsoft network to connect multiple local on-premises locations.
      • Azure Virtual WAN: branch to branch and branch to Azure connectivity with software-defined networks.
  • Hybrid networking technologies: choose between the VPN, ExpressRoute and Virtual WAN options above based on these considerations.
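To illustrate the virtual network considerations above (an isolated address space, segmented subnets and a right-sized GatewaySubnet with no NSG applied), here's a minimal sketch using the azure-mgmt-network SDK; the resource group, VNet name and address ranges are my own placeholders rather than anything from the session:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network = NetworkManagementClient(credential, "<subscription-id>")

# Hub VNet: a /27 GatewaySubnet (no NSG - let Azure manage it) plus a workload subnet
network.virtual_networks.begin_create_or_update(
    "rg-hub-network",
    "vnet-hub-uks",
    {
        "location": "uksouth",
        "address_space": {"address_prefixes": ["10.10.0.0/16"]},
        "subnets": [
            {"name": "GatewaySubnet", "address_prefix": "10.10.0.0/27"},
            {"name": "snet-workloads", "address_prefix": "10.10.1.0/24"},
        ],
    },
).result()  # block until the deployment completes
```

The VPN or ExpressRoute gateway is then deployed into the GatewaySubnet, with NSGs applied to the workload subnets only.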

Modern Device Management (Autopilot, Intune and OneDrive)

The old way of managing PC builds:

  1. Build an image with customisations and drivers
  2. Deploy to a new computer, overwriting what was on it
  3. Expensive and time-consuming – and the device already shipped with a perfectly good OS

Instead, how about:

  1. Unbox PC
  2. Transform with minimal user interaction
  3. Device is ready for productive use

The transformation is:

  • Take OEM-optimised Windows 10:
    • Windows 10 Pro and drivers.
    • Clean OS.
  • Plus software, settings, updates, features, user data (with OneDrive for Business).
  • Ready for productive use.

The goal is to reduce the overall cost of deploying devices. Ship to a user with half a page of instructions…

Windows Autopilot overview

Autopilot deployment is cloud driven and will eventually be centralised through Intune:

  1. Register device:
    • From OEM or Channel (manufacturer, model and serial number).
    • Automatically (existing Intune-managed devices).
    • Manually using a PowerShell script to generate a CSV file with serial number and hardware hash, which is then uploaded to the Intune portal.
  2. Assign Autopilot profile:
    • Use Azure AD Groups to assign/target.
    • The profile includes settings such as deployment mode, BitLocker encryption, device naming, out of box experience (OOBE).
    • An Azure AD device object is created for each imported Autopilot device.
  3. Deploy:
    • Needs Azure AD Premium P1/P2
    • Scenarios include:
      • User-driven with Azure AD:
        • Boot to OOBE, choose language, locale, keyboard and provide credentials.
        • The device is joined to Azure AD, enrolled to Intune and policies are applied.
        • User signs on and user-assigned items from Intune policy are applied.
        • Once the desktop loads, everything is present (including file links in OneDrive) – the time taken depends on the software being pushed.
      • Self-deploying (e.g. kiosk, digital signage):
        • No credentials required; device authenticates with Azure AD using TPM 2.0.
      • User-driven with hybrid Azure AD join:
        • Requires Offline Domain Join Connector to create AD DS computer account.
        • Device connected to the corporate network (in order to access AD DS), registered with Autopilot, then as before.
        • Sign on to Azure AD and then to AD DS during deployment. If they use the same UPN then it makes things simple for users!
      • Autopilot for existing devices (Windows 7 to 10 upgrades):
        • Back up data in advance (e.g. with OneDrive)
        • Deploy generic Windows 10.
        • Run Autopilot user-driven mode (hardware hashes can’t be harvested from Windows 7, so use a JSON config file in the image – the offline equivalent of a profile. Intune will ignore the unknown device and Autopilot will use the file instead; after Windows 10 is deployed, Intune will notice a PC in the group and apply the profile, so it will work if the PC is reset in future).

Autopilot roadmap (1903) includes:

  • “White glove” pre-provisioning for end users: QR code to track, print welcome letter and shipping label!
  • Enrolment status page (ESP) improvements.
  • Cortana voiceover disabled on OOBE.
  • Self-updating Autopilot (update Autopilot without waiting to update Windows).

Maintaining your hybrid environment

Common requirements in an IaaS environment include wanting to use a policy-based configuration with a single management and monitoring solution and auto-remediation.

Azure Automation allows configuration and inventory; monitoring and insights; and response and automation. The Azure Portal provides a single pane of glass for hybrid management (Windows or Linux; any cloud or on-premises).

For configuration and state management, use Azure Automation State Configuration (built on PowerShell Desired State Configuration).

Inventory can be managed with Log Analytics extensions for Windows or Linux. An Azure Monitoring Agent is available for on-premises or other clouds. Inventory is not instant though – it can take 3-10 minutes for Log Analytics to ingest the data. Changes can be visualised (for state tracking purposes) in the Azure Portal.

Azure Monitor and Log Analytics can be used for data-driven insights, unified monitoring and workflow integration.

Responding to alerts can be achieved with Azure Automation Runbooks, which store scripts in Azure and run them in Azure. Scripts can use PowerShell or Python, so both Windows and Linux are supported. A webhook can be triggered with an HTTP POST request. A Hybrid Runbook Worker can be used to run on-premises or in another cloud.
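As a small illustration of the webhook approach, a runbook can be started with nothing more than an HTTP POST; the URL and payload below are placeholders of my own (the real URL, with its embedded token, is generated when the webhook is created in Azure Automation):

```python
import requests

# Hypothetical webhook URL - copy the real one when creating the webhook (it's only shown once)
webhook_url = "https://<automation-webhook-endpoint>/webhooks?token=<token>"

# The body is passed to the runbook as webhook data for the script to parse
response = requests.post(webhook_url, json={"vmName": "vm-app-01", "action": "restart-service"})
print(response.status_code)  # 202 indicates the runbook job was queued
```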

It’s possible to use the Azure VM agent to run a command on a VM from the Azure portal, without logging in to the VM!
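The same capability is also exposed programmatically; here's a hedged sketch using the azure-mgmt-compute SDK (the resource group, VM name and command are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "<subscription-id>")

# Run a PowerShell command inside the VM via the Azure VM agent - no RDP session needed
poller = compute.virtual_machines.begin_run_command(
    "rg-app-servers",
    "vm-app-01",
    {
        "command_id": "RunPowerShellScript",
        "script": ["Get-Service -Name W32Time | Restart-Service -PassThru"],
    },
)
print(poller.result().value[0].message)  # output captured from inside the VM
```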

Windows Server 2019

Windows Server strategy starts with Azure. Windows Server 2019 is focused on:

  • Hybrid:
    • Backup/connect/replicate VMs.
    • Storage Migration Service to migrate unstructured data into Azure IaaS or another on-premises location (from 2003+ to 2016/19).
      1. Inventory (interrogate storage, network security, SMB shares and data).
      2. Transfer (pairings of source and destination), including ACLs, users and groups. Details are logged in a CSV file.
      3. Cutover (make the new server look like the old one – same name and IP address). Validate before cutover – ensure everything will be OK. Read-only process (except change of name and IP at the end for the old server).
    • Azure File Sync: centralise file storage in Azure and transform existing file servers into hot caches of data.
    • Azure Network Adapter to connect servers directly to Azure networks (see above).
  • Hyper-converged infrastructure (HCI):
    • The server market is still growing and is increasingly SSD-based.
    • Traditional rack looked like SAN, storage fabric, hypervisors, appliances (e.g. load balancer) and top of rack Ethernet switches.
    • Now we use standard x86 servers with local drives and software-defined everything. Manage with Admin Center in Windows Server (see below).
    • Windows Server now has support for persistent memory: DIMM-based; still there after a power-cycle.
    • The Windows Server Software Defined (WSSD) programme is the Microsoft approach to software-defined infrastructure.
  • Security: shielded VMs for Linux (VM as a black box, even for an administrator); integrated Windows Defender ATP; Exploit Guard; System Guard Runtime.
  • Application innovation: semi-annual updates are designed for containers. Windows Server 2019 is the latest LTSC channel so it has the 1709/1803 additions:
    • Enable developers and IT Pros to create cloud-native apps and modernise traditional apps using containers and microservices.
    • Linux containers on a Windows host.
    • Service Fabric and Kubernetes for container orchestration.
    • Windows Subsystem for Linux.
    • Optimised images for Server Core and Nano Server.

Windows Admin Center is core to the future of Windows Server management and, because it’s based on remote management, servers can be core or full installations – even containers (logs and console). Download from http://aka.ms/WACDownload

  • 50MB download, no need for a server. Runs in a browser and is included in the Windows/Windows Server licence.
  • Runs on a layer of PowerShell. Use the >_ icon to see the raw PowerShell used by Admin Center (copy and paste to use elsewhere).
  • Extensible platform.

What’s next?

  • More cloud integration
  • Update cadence is:
    • Insider builds every 2 weeks.
    • Semi-annual channel every 6 months (specifically for containers):
      • 1709/1803/1809/19xx.
    • Long-term servicing channel
      • Every 2-3 years.
      • 2016, 2019 (in September 2018), etc.

Windows Server 2008 and 2008 R2 reach the end of support in January 2020 but customers can move Windows Server 2008/2008 R2 servers to Azure and get 3 years of security updates for free (on-premises support is chargeable).

Further reading: What’s New in Windows Server 2019.

Containers/Azure Kubernetes Service

Containers:

  • Are fully-packaged applications that use a standard image format for better resource isolation and utilisation.
  • Are ready to deploy via an API call.
  • Are not Virtual machines (for Linux).
  • Do not use hardware virtualisation.
  • Offer no hard security boundary (for Linux).
  • Can be more cost effective/reliable.
  • Have no GUI.

Kubernetes is:

  • An open source system for auto-deployment, scaling and management of containerized apps.
  • Container Orchestrator to manage scheduling; affinity/anti-affinity; health monitoring; failover; scaling; networking; service discovery.
  • Modular and pluggable.
  • Self-healing.
  • Designed by Google based on a system they use to run billions of containers per week.
  • Described in “Phippy goes to the zoo”.

Azure container offers include:

  • Azure Container Instances (ACI): containers on demand (Linux or Windows) with no need to provision VMs or clusters; per-second billing; integration with other Azure services; a public IP; persistent storage.
  • Azure App Service for Linux: a fully-managed PaaS for containers including workflows and advanced features for web applications.
  • Azure Kubernetes Service (AKS): a managed Kubernetes offering (see the sketch below).
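As a quick illustration of working with an AKS cluster, once az aks get-credentials has merged the cluster into your kubeconfig, the official Kubernetes Python client can query it like any other cluster (my own sketch, not from the session):

```python
from kubernetes import client, config

# Load the kubeconfig that az aks get-credentials has already populated
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```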

Wrap-up

So, there you have it. An extremely long blog post with some highlights from my attendance at Microsoft Ignite | The Tour: London. It’s taken a while to write up so I hope the notes are useful to someone else!

UK Government Protective Marking and the Microsoft Cloud


I recently heard a Consultant from another Microsoft partner talking about storing “IL3” information in Azure. That rang alarm bells with me, because Impact Levels (ILs) haven’t been a “thing” for UK Government data since April 2014. For the record, here’s the official guidance on the UK Government data security classifications and this video explains why the system was changed:

Meanwhile, this one is a good example of what it means in practice:

So, what does that mean for storing data in Azure, Dynamics 365 and Office 365? Basically, information classified OFFICIAL can be stored in the Microsoft Cloud – for more information, refer to the Microsoft Trust Center. And, because OFFICIAL-SENSITIVE is not another classification (it’s merely highlighting information where additional care may be needed), that’s fine too.

I’ve worked with many UK Government organisations (local/regional, and central) and most are looking to the cloud as a means to reduce costs and improve services. The fact that more than 90% of public data is classified OFFICIAL (indeed, that’s the default for anything in Government) is no reason to avoid using the cloud.

Seven technology trends to watch 2017-2020


Just over a week ago, risual held its bi-annual summit at the risual HQ in Stafford – the whole company back in the office for a day of learning with a new format: a mini-conference called risual:NXT.

I was given the task of running the technical track – with 6 speakers presenting on a variety of topics covering all of our technical practices: Cloud Infrastructure; Dynamics; Data Platform; Unified Intelligent Communications and Messaging; Business Productivity; and DevOps – but I was also privileged to be asked to present a keynote session on technology trends. Unfortunately, my 35-40 minutes of content had to be squeezed into 22 minutes… so this blog post summarises some of the points I wanted to get across but really didn’t have the time.

1. The cloud was the future once

For all but a very small number of organisations, not using the cloud means falling behind. Customers may argue that they can’t use cloud service because of regulatory or other reasons but that’s rarely the case – even the UK Police have recently been given the green light (the blue light?) to store information in Microsoft’s UK data centres.

Don’t get me wrong – hybrid cloud is more than tactical. It will remain part of the landscape for a while to come… that’s why Microsoft now has Azure Stack to provide a means for customers to run a true private cloud that looks and works like Azure in their own datacentres.

Thankfully, there are fewer and fewer CIOs who don’t see the cloud forming part of their landscape – even if it’s just commodity services like email in Office 365. But we need to think beyond lifting and shifting virtual machines to IaaS and running email in Office 365.

Organisations need to transform their cloud operations because that’s where the benefits are – embrace the productivity tools in Office 365 (no longer just cloud versions of Exchange/Lync/SharePoint but a full collaboration stack) and look to build new solutions around advanced workloads in Azure. Microsoft is way ahead in the PaaS space – machine learning (ML), advanced analytics, the Internet of Things (IoT) – there are so many scenarios for exploiting cloud services that simply wouldn’t be possible on-premises without massive investment.

And for those who still think they can compete with the scale at which Microsoft (and Amazon, and Google) operate, this video might provide some food for thought…

(and for a similar video from a security perspective…)

2. Data: the fuel of the future

I hate referring to data as “the new oil”. Oil is a finite resource. Data is anything but finite! It is a fuel though…

Data is what provides an economic advantage – there are businesses without data and those with. Data is the business currency of the future. Think about it: Facebook and Google are entirely based on data that’s freely given up by users (remember, if you’re not paying for a service – you are the service). Amazon wouldn’t be where it is without data.

So, thinking about what we do with that data: the first wave of the Internet was about connecting computers, the second was about connecting people, and the third is about connecting devices.

Despite what you might read, IoT is not about connected kettles/fridges. It’s not even really about home automation with smart lightbulbs, thermostats and door locks. It’s about gathering information from billions of sensors out there. Then, we take that data and use it to make intelligent decisions and apply them in the real world. Artificial intelligence and machine learning feed on data – they are yin and yang to each other. We use data to train algorithms, then we use the algorithms to process more data.

The Microsoft Data Platform is about analytics and data driving a new wave of insights and opening up possibilities for new ways of working.

James Watt’s 18th Century steam engine led to an industrial revolution. The intelligent cloud is today’s version – moving us to the intelligence revolution.

3. Blockchain

Bitcoin is just one implementation of something known as the blockchain – in this case, as a digital currency.

But Blockchain is not just for monetary transactions – it’s more than that. It can be used for anything transactional. Blockchain is about a distributed ledger. Effectively, it allows parties to trust one another without knowing each other. The ledger is a record of every transaction, signed and tamper-proof.

The magic of the blockchain is that each block commits to the one before it, so the longer the chain gets, the harder it becomes to rewrite – effectively, the more the chain is used, the more tamper-resistant it gets, which is what gives it its integrity.
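To show why that holds, here's a toy hash chain in Python (my own simplification; real blockchains add distribution, consensus and digital signatures on top):

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Each block commits to its predecessor by including that block's hash."""
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block({"from": "farm", "to": "abattoir", "herd": 42}, previous_hash="0" * 64)
second = make_block({"from": "abattoir", "to": "butcher", "herd": 42}, genesis["hash"])

# Tampering with the first block changes its hash, which no longer matches the
# previous_hash recorded in the second block - the break is trivial to detect
genesis["data"]["herd"] = 99
recomputed = hashlib.sha256(
    json.dumps(
        {k: genesis[k] for k in ("timestamp", "data", "previous_hash")}, sort_keys=True
    ).encode()
).hexdigest()
print(recomputed == second["previous_hash"])  # False
```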

(Read more in Jamie Skella’s “A blockchain explanation your parents could understand”.)

Blockchain is seen as strategic by Microsoft and by the UK government. It’s early days, but expect it to appear wherever the integrity and resilience of transactional data matter – databases, or anything transactional, can be signed with a blockchain.

A group of livestock farmers in Arkansas is using blockchain technology so customers can tell where their dinner comes from. They are applying blockchain technology to trace products from ‘farm to fork’ aiming to provide consumers with information about the origin and quality of the meat they buy.

Blockchain is finding new applications in the enterprise and Microsoft has announced the CoCo Framework to improve performance, confidentiality and governance characteristics of enterprise blockchain networks (read more in Simon Bisson’s article for InfoWorld). There’s also Blockchain as a service (in Azure) – and you can find more about Microsoft’s plans by reading up on “Project Bletchley”.

(BTW, Bletchley is a town in Buckinghamshire that’s now absorbed into Milton Keynes. Bletchley Park was the primary location of the UK Government’s wartime code-cracking efforts that are said to have shortened WW2 by around 2 years. Not a bad name for a cryptographic technology, hey?)

4. Into the third dimension

So, we’ve had the ability to “print” in 3 dimensions for a while but now 3D is going further. We’re taking the physical world into the virtual world and augmenting it with information.

Microsoft doesn’t like the term augmented reality (because it’s being used for silly faces on photos) and they have coined the term mixed reality to describe taking untethered computing devices and creating a seamless overlap between physical and virtual worlds.

To make use of this we need to be able to scan and render 3D images, then move them into a virtual world. 3D is built into the next Windows 10 release (the Fall Creators Update, due on 17 October 2017). This will bring Paint 3D, a 3D gallery and View 3D for our phones – so we can scan any object and import it into a virtual world. With the adoption rates of new Windows 10 releases, that puts 3D on millions of PCs.

This Christmas will see lots of consumer headsets in the market. Mixed reality will really take off after that. Microsoft is way ahead in the plumbing – all whilst we didn’t notice. They held their HoloLens product back to be big in business (so that it wasn’t a solution looking for a problem). Now it can be applied to field worker scenarios, or to visualising things before they are built.

To give an example: recently, I had a builder quote for a loft extension at home. He described how the stairs would work and sketched a room layout – but what if I could have visualised it in a headset? Then imagine picking the paint, sofas, furniture, wallpaper, etc.

The video below shows how Ford and Microsoft have worked together to use mixed reality to shorten and improve product development:

5. The new dawn of artificial intelligence

All of the legends of AI are set by sci-fi (Metropolis, 2001: A Space Odyssey, Terminator). But AI is not about killing us all! Humans vs. machines? Deep Blue beating people at chess, Watson at Jeopardy, then Google taking on Go. Heading into the economy and displacing jobs. Automation of business processes/economic activity. Mass unemployment?

Let’s take a more optimistic view! It’s not about sentient/thinking machines or giving human rights to machines. That stuff is interesting but we don’t know where consciousness comes from!

AI is a toolbox of high-value tools and techniques. We can apply these to problems and appreciate the fundamental shift from programming machines to machines that learn.

AI is not about programming logical steps – we can’t do that when we’re recognising images, speech, etc. Instead, our inspiration is biology, neural networks, etc. – using maths to train complex layers of neural networks led to deep learning.

Image recognition was “magic” a few years ago but now it’s part of everyday life. Nvidia’s shares are growing massively due to GPU requirements for deep learning and autonomous vehicles. And Microsoft is democratising AI (in its own applications – with an intelligent cloud, intelligent agents and bots).

NVIDIA Corporation stock price growth fuelled by demand for GPUs

So, about those bots…

A bot is a web app with a conversational user interface. We use them because natural language processing (NLP) and AI are here today. And because messaging apps rule the world. With bots, we can use human language as a new user interface; bots are the new apps – our digital assistants.

We can employ bots in several scenarios today – including customer service and productivity – and this video is just one example, with Microsoft Cortana built into a consumer product:

The device is similar to Amazon’s popular Echo smart speaker and a skills kit is used to teach Cortana about an app: ask “<skill name> to do something”. The beauty of Cortana is that it’s cross-platform, so the skill can show up wherever Cortana does. More recently, Amazon and Microsoft have announced Cortana-Alexa integration (meanwhile, Siri continues to frustrate…)

AI is about augmentation, not replacement. It’s true that bots may replace humans for many jobs – but new jobs will emerge. And it’s already here. It’s mainstream. We use recommendations for playlists, music, etc. We’re recognising people, emotions, etc. in images. We already use AI every day…

6. From silicon to cells

Every cell has a “programme” – DNA. And researchers have found that they can write code in DNA and control proteins/chemical processes. They can compile code to DNA and execute, creating molecular circuits. Literally programming biology.

This is absolutely amazing. Back when I was an MVP, I got the chance to see Microsoft Research talk about this in Cambridge. It blew my mind. That was in 2010. Now it’s getting closer to reality and Microsoft and the University of Washington have successfully used DNA for storage:

The benefits of DNA are that it’s very dense and it lasts for thousands of years so can always be read. And we’re just storing 0s and 1s – that’s much simpler than what DNA stores in nature.

7. Quantum computing

With massive data storage… the next step is faster computing – that’s where Quantum computing comes in.

I’m a geek and this one is tough to understand… so here’s another video:

Quantum computing is starting to gain momentum. Dominated by maths (quantum mechanics), it requires thinking in equations, not translating into physical things in your head. It has concepts like superposition (multiple states at the same time) and entanglement. Instead of gates being turned on/off it’s about controlling particles with nanotechnology.

Whereas a classical bit is either on or off, a quantum bit (a qubit) holds multiple states at the same time. That can be used to solve difficult problems (the RSA 2048 challenge problem would take a billion years on a supercomputer but just 100 seconds on a 250-bit quantum computer). This can be applied to encryption and security, health and pharma, energy, biotech, environment, materials and engineering, AI and ML.

There’s a race for quantum computing hardware taking place and China sees this as a massively strategic direction. Meanwhile, the UK is already an academic centre of excellence – now looking to bring quantum computing to market. We’ll have usable devices in 2-3 years (where “usable” means that they won’t be cracking encryption, but will have initial applications in chemistry and biology).

Microsoft Research is leading a consortium called Station Q and, later this year, Microsoft will release a new quantum computing programming language, along with a quantum computing simulator. With these, developers will be able to both develop and debug quantum programs implementing quantum algorithms.

Predicting the future?

Amazon, Google and Microsoft each invest over $12bn p.a. on R&D. As demonstrated in the video above, their datacentres are not something that many organisations can afford to build but they will drive down the cost of computing. That drives down the cost for the rest of us to rent cloud services, which means more data, more AI – and the cycle continues.

I’ve shared 7 “technology bets” – and there are others, like the use of graphene, that I haven’t covered – my list is very much influenced by my work with Microsoft technologies and services. We can’t always predict the future but all of these are real… the only bet is how big they are. Some are mainstream, some are up and coming – and some will literally change the world.

Credit: Thanks to Rob Fraser at Microsoft for the initial inspiration – and to Alun Rogers (@AlunRogers) for helping place some of these themes into context.

Adopting cloud services means being ready for constant change


There’s a news story today about how Microsoft may be repositioning some (or all) of Skype for Business as Microsoft Teams (the collaborative group-based chat service built on various Office 365 services but Skype for Business in particular).

The details of that story are kind of irrelevant to this post; it’s the reaction I got on Twitter that I felt the need to comment on (when I hit 5 tweeted replies I thought a blog post might be more appropriate).

Change is part of consuming cloud services. There’s a service agreement and a subscription/licensing agreement – customers consume the service as the provider defines it. The service provider will generally give notice of change but you normally have to accept it (or leave). There is no option to stay on legacy versions of software for months or years at a time because you’re not ready to update your ways of working or other connected systems.

That is a big shift and many IT departments have not adjusted their thinking to adopt this new way of working.

I’ve seen many organisations move to cloud services (mostly Office 365 and Azure) but stick with their current approach. They do things like trying to map drive letters to OneDrive because that’s what users are used to, instead of showing them new (and often better) ways of working. They try to use old versions of Office with the latest services and wonder why the user experience is degraded. They think about the on-premises workloads (Exchange, Lync/Skype for Business, SharePoint) instead of the potential provided by the whole productivity platform that they have bought licences to use. They try to turn parts of the service off or hide them from users.

My former colleague Steve Harwood (@SteeveeH) did some work with one of risual’s customers to define a governance structure for Office 365. It’s great work – and maybe I’ll blog about it separately – but the point is that organisations need to think differently for the cloud.

Buying services from Microsoft, Amazon, Google, Salesforce, et al is not like buying them from the managed services provider that does its best to maintain a steady state and avoid change at all costs (or often at great cost!). Moving to the cloud means constant change. You may not have servers to keep up to date once your apps are sold on an “evergreen” subscription basis but you will need to keep client software up to date – not just traditional installed apps but mobile apps and browsers too. And when the service gains a new feature, it’s there for adoption. You may have the ability to hide it but that’s just a sticking plaster solution.

Often the cry is “but we need to train the users”. Do you really? Many of today’s business end users have grown up with technology. They are familiar with using services at home far more advanced than those provided by many workplaces. Intuitive user interfaces can go a long way and there’s no need to provide formal training for many IT changes. Instead, keep abreast of the advertised changes from your service provider (for example the Message Center in Office 365) and decide what the impact is of each new feature. Very few will need a full training package! Some well-written communications, combined with self-help forums and updated FAQs at the Service Desk will often be enough but there’s also the opportunity to offer access to Massive Open Online Courses (MOOCs) where training needs are more extensive.

There are, of course, examples of where service providers have rolled out new features with inadequate testing, or with too little notice but these are edge cases and generally there’s time to react. The problem comes when organisations stick their proverbial heads in the sand and try to ignore the inevitable change.

Serverless and the death of DevOps


A couple of weeks back, I took a trip to London after work to attend the latest CloudCamp meet-up. It’s been a while since I last went to CloudCamp but I was intrigued by the title of the event: “Serverless and the death of DevOps?”. The death of DevOps? Surely not. Most organisations I’m working with are only just getting their heads around what DevOps is. Some are still confusing a cultural change with some tools (hey, we’ll adopt some new tools and rebrand our AppDev function as DevOps). If anything, DevOps is at the top of the hype curve; it can’t possibly be dead!

Well, 5 minutes into the event and, after Simon Wardley (@SWardley)’s introduction, I could see where he was coming from. Mix the following up with some “Wardley Mapping” and you can see that what’s being discussed is not really the death of DevOps (as a concept where development and operations teams work in a more integrated fashion) but it may well be a new cloud computing paradigm, in the form of “serverless” computing (AWS Lambda, Azure Functions, etc.):

  • Back in the beginning of computing, systems were hard-wired (e.g. Colossus).
  • Then, they evolved and we had custom-built computing (e.g. Leo) with the concept of applications and an operating system.
  • This evolved and new products (like the IBM 650) were born with novel architectural practices, based around the concept of compute as a product.
  • These systems had a high mean time to recover (MTTR) so the architecture of the day was designed around N+1, DR tests, scaling up.
  • Evolution continued and novel architectural practices became emerging practices, then good practices. Computing became more resilient.
  • Next came frameworks. We had applications and an emerging coding practice based around these frameworks, running on an operating system using good architectural practice, all built around the concept of compute as a product (a server).
  • All was happy.
  • Then along came the cloud. Compute was no longer a product but a utility. It brought new benefits of efficiency, pooling resources, agility. Computing had new sources of worth.
  • And organisations said “make my legacy cloudy” [actually, this is as far as many have got to…].
  • Some people asked “but shouldn’t architecture evolve too?” And, after the initial cries of “burn him, heretic”, a new novel architectural practice emerged, built around a low MTTR. It took seconds to get a new virtual machine, distributed systems were designed for failure, indeed chaos monkeys were introduced to the environment to introduce failure and ensure resilience. We introduced co-evolution (which has been practiced in other fields throughout history) and we called it DevOps.
  • This evolved until it became good architectural practice for the utility world and the old practices for a product world became legacy.
  • The legacy world was held back by inertia but the cloud was about user needs, measurement, automation, collaboration and fast feedback.
  • Then a new tribe began to rise up. Using commodity operating systems and functions as a framework. This framework is becoming a utility. And it will move from emerging to good practice, then best practice and “serverless” will be the future.
  • The old world will become legacy. Even the wonderful world of “DevOps”.
  • But, for now, if we say that “DevOps” is legacy, the response will be “burn him, heretic”.

So that’s the rise of serverless and the “death of DevOps”.

[Simon Wardley does a much better job of this… hopefully, there’s a video out there of him explaining the above somewhere…]

Do we need another as-a-service to describe functions?


Last week saw quarterly earnings reports for major cloud vendors and this tweet caught my eye:

You see, despite Azure growing by 93%, this suggests that Amazon has the cloud market sewn up. Except I’m not sure they do…

I think it would be interesting to see this separated into infrastructure-, platform- and software-as-a-service (IaaS/PaaS/SaaS). I suggest that would present three very different stories. And I’d expect that Amazon would only really be way out front for IaaS.

My friend and former colleague, Garry Martin (@GarryMartin) questioned the relevance of those “legacy” distinctions but I think they still have value today.

In the early days of what we now recognise as cloud computing, every vendor was applying their own brand of cloud-washing. It still happens today, with vendors claiming to offer IaaS when really they have a hosted service and a traditional delivery model.

Back in 2011, the US National Institute of Standards and Technology (NIST) defined cloud computing, including the service models of IaaS, PaaS and SaaS. Those service models, along with the (also abused) deployment models (public cloud, private cloud, etc.) have served us well but are they really legacy?

I don’t think they are. Six years is a long time in IT, let alone in the cloud, but I think IaaS, PaaS and SaaS are as relevant today as they were when NIST wrote their definition.

When asked how “serverless” technologies like AWS Lambda, Azure Functions or Google Cloud Functions fit in, I say they’re just PaaS. Done right.

Some people want to add another service model/definition for Function-as-a-Service (FaaS). But why? What value does it add? Functions are just PaaS, but we’ve finally evolved to a place where we’re moving past the point of caring about what the code runs on and letting the cloud manage that for us. That’s what PaaS is supposed to have been doing for years (after all, should I really need to define the number of instances to run my web application? That all sounds a bit like virtual machines to me…)
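To make that concrete, this is roughly what a function looks like with Azure Functions’ Python programming model (a sketch of my own; there’s no server, instance count or scale set to declare, just code behind an HTTP trigger):

```python
import azure.functions as func

app = func.FunctionApp()

@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # No infrastructure to define - the platform provisions and scales whatever runs this
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```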

To my mind, “serverless” is just the ultimate platform as a service and we really don’t need another service model to describe it.

To quote a haiku from Onsi Fakhouri (@onsijoe):

“Here is my code
Run it in the cloud for me
I don’t care how”

Or, as Simon Wardley (@swardley) “fixed” this Cloud Foundry diagram:

Designing for failure does not necessarily mean multi-cloud


Earlier this week, Amazon Web Services’ S3 storage service suffered an outage that affected many websites (including popular sites to check if a website is down for everyone or just you!).

Unsurprisingly, this led to a lot of discussion about designing for failure – or not, it would seem in many cases, including the architecture behind Amazon’s own status pages:

The Amazon and Azure models are slightly different but in the past we’ve seen outages to the Azure identity system (for example) impact on other Microsoft services (Office 365). When that happened, Microsoft’s Office 365 status page didn’t update because of a caching/CDN issue. It seems Amazon didn’t learn from Microsoft’s mistakes!

Randy Bias (@RandyBias) is a former Director at OpenStack and a respected expert on many cloud concepts. Randy and I exchanged many tweets on the topic of the AWS outage but, after multiple replies, I thought a blog post might be more appropriate. You see, I hold the view that not all systems need to be highly available. Sometimes, failure is OK. It all comes down to requirements:

And, as my colleague Tim Siddle highlighted:

I agree. 100%.

So, what does that architecture look like? Well, it will vary according to the provider:

So, if we want to make sure our application can survive a region failure, there are ways to design around this. Just be ready for the solution we sold to the business based on using commodity cloud services to start to look rather expensive. Where on-premises we typically have two datacentres with resilient connections, we’ll want to do the same in the cloud. But, just as not all systems are in all datacentres on-premises, that might also be the case in the cloud. If it’s a service for which some downtime can be tolerated, then we might not need to worry about a multi-region architecture. In cases where we’re not at all concerned about downtime, we might not even use an availability set.

Other times – i.e. if the application is a web service for which an outage would cause reputational or financial damage – we may have a requirement for higher availability.  That’s where so many of the services impacted by Tuesday’s AWS outage went wrong:

Of course, we might spread resources around regions for other reasons too – like placing them closer to users – but that comes back to my point about requirements. If there’s a requirement for fast, low-latency access then we need to design in the dedicated links (e.g. AWS Direct Connect or Azure ExpressRoute) and we’ll probably have more than one of them too, each terminating in a different region, with load balancers and all sorts of other considerations.

Because a cloud provider could be one of those single points of failure, many people are advocating multi-cloud architectures. But, if you think multi-region is expensive, get ready for some seriously complex architecture and associated costs in a multi-cloud environment. Just as in the on-premises world, many enterprises use a single managed services provider (albeit with multiple datacentres), in the cloud many of us will continue to use a single cloud provider.  Designing for failure does not necessarily mean multi-cloud.

Of course, a single-cloud solution has its risks. Randy is absolutely spot on in his reply below:

It could be argued that one man’s “lock-in” is another’s “making the most of our existing technology investments”. If I have a Microsoft Enterprise Agreement, I want to make sure that I use the software and services that I’m paying for. And running a parallel infrastructure on another cloud is probably not doing that. Not unless I can justify to the CFO why I’m running redundant systems just in case one goes down for a few hours.

That doesn’t mean we can avoid designing with the future in mind. We must always have an exit strategy and, where possible, think about designing systems with a level of abstraction to make them cloud-agnostic.

Ultimately though it all comes back to requirements – and the ability to pay. We might like an Aston Martin but if the budget is more BMW then we’ll need to make some compromises – with an associated risk, signed off by senior management, of course.

[Updated 2 March 2017 16:15 to include the Mark Twomey tweet that I missed out in the original edit]

IT transformation: why timing is crucial


In my work, I regularly find myself discussing transformation with customers who are thinking of moving some or all of their IT services to “the cloud”.  Previously, I’ve talked about a project where a phased approach was taken because of a hard deadline that was driving the whole programme:

  1. Lift and shift to infrastructure-as-a-service (IaaS) and software-as-a-service (SaaS).
  2. Look for service enhancements (transform) – for example re-architect using platform-as-a-service (PaaS).
  3. Iterate/align with sector-wide strategy for the vertical market.

The trouble with this approach is that, once phase 1 is over, the impetus to execute on later phases is less apparent. Organisations change, people move on, priorities shift. And that’s one reason why I now firmly believe that transformation has to happen throughout the project, in parallel with any migration to the cloud – not at the end.

My colleague Colin Hughes (@colinp_hughes) represented this in diagrammatical form in a recent presentation (unfortunately I can’t reproduce it on my personal blog) but it was interesting to listen to episode 6 of Matt Ballantine and Chris Weston’s WB-40 podcast when they were discussing a very similar topic.

In the podcast, Matt and Chris reinforced my view that just moving to the cloud is unlikely to save costs (independently of course – they’re probably not at all bothered about whether I agree or not!). Even if on the surface it appears that there are some savings, the costs may just have been moved elsewhere. Of course, there may be other advantages – like a better service, improved resilience, or other benefits (like reduced technical debt) – but just moving to IaaS is unlikely to be significantly less expensive.

Sure, we can move commodity services (email, etc.) to services like Office 365 but there’s limited advantage to be gained from just moving file servers, web servers, application servers, database servers, etc. from one datacentre to another (virtual) datacentre!

Instead, take the time to think about what applications need; how they could work differently; what the impact of using platform services would be; whether to make use of a microservices-based approach*; and whether you could even go further and re-architect to use so-called “serverless” computing* (e.g. Azure Functions or AWS Lambda).

But perhaps the most important point: digital transformation is not just about the IT – we need to re-design the business processes too if we’re really going to make a difference!

 

* I plan to explore these concepts in more detail in future blog posts.

Not all software consumed remotely is a cloud service


Helping a customer to move away from physical datacentres and into the cloud has been an exciting project to work on but my scope was purely the Microsoft workstream: migrating to Office 365 and a virtual datacentre in Azure. There’s much more to be done to move towards the consumption of software as a service (SaaS) in a disaggregated model – and many more providers to consider.

What’s become evident to me in recent weeks is that lots of software is still consumed in a traditional manner but as a hosted service. Take, for example, a financial services organisation that was ready to allow my customer access to their “private cloud” over a VPN from the virtual datacentre in Azure but then we hit a roadblock for routing the traffic. The Azure virtual datacentre is an extension of the customer’s network – using private IP addresses – but the service provider wanted to work with public IPs, which led to some extra routers being deployed (and some NATting of addresses somewhere along the way). Then along came another provider – with human resources applications accessed over unsecure HTTP (!). Not surprisingly, access across the Internet was not allowed and again we were relying on site-to-site VPNs to create a tunnel but the private IPs on our side were something the provider couldn’t cope with. More network wizardry was required.

I’m sure there’s a more elegant way to deal with this but my point is this: not all software consumed remotely is a cloud service. It may be licensed per user on a subscription model but if I can’t easily connect to the service from a client application (which will often be a browser) then it’s not really SaaS. And don’t get me started on the abuse of the term “private cloud”.

There’s a diagram I often use when talking to customers about different types of cloud deployments. It’s been around for years (and it’s not mine) but it’s based on the old NIST definitions.

Cloud computing delivery models

One customer highlighted to me recently that there are probably some extra columns between on-premises and IaaS for hosted and co-lo services but neither of these are “cloud”. They are old IT – and not really much more than a different sort of “on-premises”.

Critically, the NIST description of SaaS reads:

“The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.”

The sooner that hosted services are offered in a multi-tenant model that facilitates consumption on demand and broad network access the better. Until then, we’ll be stuck in a world of site-to-site VPNs and NATted IP addresses…

Short takes: Amazon Web Services 101, Adobe Marketing Cloud and Milton Keynes Geek Night (#MKGN)


What a crazy week. On top of a busy work schedule, I’ve also found myself at some tech events that really deserve a full write-up but, for now, will have to make do with a summary…

Amazon Web Services 101

One of the events I attended this week was a “lunch and learn” session to give an introduction/overview of Amazon Web Services – kind of like a breakfast briefing, but at a more sociable hour of the day!

I already blogged about Amazon’s reference architecture for utility computing but I wanted to mention Ryan Shuttleworth’s (@RyanAWS) explanation of how Amazon Web Services (AWS) came about.

Contrary to popular belief, AWS didn’t grow out of spare capacity in the retail business but from building a service-oriented infrastructure for a scalable development environment, initially to provide development services to internal teams and then to expose the Amazon catalogue as a web service. Over time, Amazon found that developers were hungry for more and they moved towards the AWS mission to:

“Enable business and developers to use web services* to build scalable, sophisticated applications”

*What people now call “the cloud”

In fact, far from being the catalyst for AWS, Amazon’s retail business is just another AWS customer.

Adobe Marketing Cloud

Most people will be familiar with Adobe for their design and print products, whether that’s Photoshop, Lightroom, or a humble PDF reader. I was invited to attend an event earlier this week to hear about the Adobe Marketing Cloud, which aims to become for marketers what Creative Suite has become for design professionals. Whilst the use of “cloud” grates with me as a blatant abuse of a buzzword (if I’m generous, I suppose it is a SaaS suite of products…), Adobe has been acquiring companies (I think I heard $3bn mentioned as the total cost) and integrating technology to create a set of analytics, social, advertising, targeting and web experience management solutions and a real-time dashboard.

Milton Keynes Geek Night

MK Geek Night #mkgn

The third event I attended this week was the quarterly Milton Keynes Geek Night (this was the third one) – and this did not disappoint – it was well up to the standard I’ve come to expect from David Hughes (@DavidHughes) and Richard Wiggins (@RichardWiggins).

The evening kicked off with Dave Addey (@DaveAddey) of UK Train Times app fame, talking about what makes a good mobile app. Starting out from a 2010 Sunday Times article about the app gold rush, Dave explained why few people become smartphone app millionaires, and how to assess whether your idea is worth pursuing:

  • Is your mobile app idea really a good idea? (i.e. is it universal, is it international, and does it have lasting appeal – or, put bluntly, will you sell enough copies to make it worthwhile?)
  • Is it suitable to become a mobile app? (will it fill “dead time”, does it know where you go and use that to add value, is it “always there”, does it have ongoing use)
  • And how should you make it? (cross platform framework, native app, HTML, or hybrid?)

Dave’s talk warrants a blog post of its own – and hopefully I’ll return to the subject one day – but, for now, those are the highlights.

Next up were the 5 minute talks, with Matt Clements (@MattClementsUK) talking about empowering business with APIs to:

  1. Increase sales by driving traffic.
  2. Improve your brand awareness by working with others.
  3. Increase innovation, by allowing others to interface with your platform.
  4. Create partnerships, with symbiotic relationships to develop complementary products.
  5. Create satisfied customers – by focusing on the part you’re good at and letting others build on it with their expertise.

Then Adam Onishi (@OnishiWeb) gave a personal, and honest, talk about burnout, its effects, recognising the problem, and learning to deal with it.

And Jo Lankester (@JoSnow) talked about real-world responsive design and the lessons she has learned:

  1. Improve the process – collaborate from the outset.
  2. Don’t forget who you’re designing for – consider the users, in which context they will use a feature, and how they will use it.
  3. Learn to let go – not everything can be perfect.

Then, there were the usual one-minute slots from sponsors and others with a quick message, before the second keynote – from Aral Balkan (@Aral), talking about the high cost of free.

In an entertaining talk, loaded with sarcasm, profanity (used to good effect) but, most of all, intelligent insight, Aral explained the various business models we follow in the world of consumer technology:

  • Free – with consequential loss of privacy.
  • Paid – with consequential loss of audience (i.e. niche) and user experience.
  • Open – with consequential loss of good user experience, and a propensity to allow OEMs and operators to mess things up.

This was another talk that warrants a blog post of its own (although I’m told the session audio was recorded – so hopefully I’ll be able to put up a link soon) but Aral moved on to talk about a real alternative with mainstream consumer appeal that happens to be open. To achieve this, Aral says we need a revolution in open source culture in that open source and great user experience do not have to be mutually exclusive. We must bring design thinking to open source. Design-led open source.  Without this, Aral says, we don’t have an alternative to Twitter, Facebook, whatever-the-next-big-platform-is doing what they want to with our data. And that alternative needs to be open. Because if it’s just free, the cost is too high.

The next MK Geek Night will be on 21 March, and the date is already in my diary (just waiting for the Eventbrite notice!)

Photo credit: David Hughes, on Flickr. Used with permission.