What did we learn at Microsoft Ignite 2023?

Right now, there’s a whole load of journalists and influencers writing about what Microsoft announced at Ignite. I’m not a journalist, and Microsoft has long since stopped considering me as an influencer. Even so, I’m going to take a look at the key messages. Not the technology announcements – for those there’s the Microsoft Ignite 2023 Book of News – but the real things IT Leaders need to take away from this event.

Microsoft’s investment in OpenAI

It’s all about AI. I know, you’re tired of the hype, suffering from AI fatigue, but for Microsoft, it really is about AI. And if you were unconvinced of just how important AI is to Microsoft’s strategy, its move a week later to snap up key staff from an imploding OpenAI should be all the evidence you need:

Tortoise Media’s Barney Macintyre (@barneymac) summed it up brilliantly:

“Satya Nadella, chief executive of Microsoft, has played a blinder. Altman’s firing raised the risk that he would lose a key ally at a company into which Microsoft has invested $13 billion. After it became clear the board wouldn’t accept his reinstatement, Nadella offered jobs to Altman, Brockman and other loyalist researchers thinking about leaving.

The upshot: a new AI lab, filled with talent and wholly owned by Microsoft – without the bossy board. An $86 billion subsidiary for a $13 billion investment.”

But the soap opera continued and, by the middle of the week, Altman was back at OpenAI, apparently with the blessing of Microsoft!

If nothing else, this whole saga should reinforce just how important OpenAI is to Microsoft.

The age of the Copilot

Copilot is Microsoft’s brand for a set of assistive technologies that sit alongside applications and provide an agent experience, built on ChatGPT, DALL-E and other models. Copilots are going to be everywhere. So much so that there is a “stack” for Copilot, and Satya Nadella described Microsoft as “a Copilot company”.

That stack consists of:

  • The AI infrastructure in Azure – all Copilots are built on AzureAI.
  • Foundation models from OpenAI, including the Azure OpenAI Service to provide access in a protected manner but also new OpenAI models, fine-tuning, hosted APIs, and an open source model catalogue – including Models as a Service in Azure.
  • Your data – and Microsoft is pushing Fabric as all the data management tools in one SaaS experience, with onwards flow to Microsoft 365 for improved decision-making, Purview for data governance, and Copilots to assist. One place to unify, prepare and model data (for AI to act upon).
  • Applications, with tools like Microsoft Teams becoming more than just communication and collaboration but a “multi-player canvas for business processes”.
  • A new Copilot Studio to extend and customise Microsoft Copilot, with 1100 prebuilt plugins and connectors for every Azure data service and many common enterprise data platforms.
  • All wrapped with a set of AI safety and security measures – both in the platform (model and safety system) and in application (metaprompts, grounding and user experience).

In addition to this, Bing Chat is now re-branded as Copilot – with an enterprise version at no additional cost to eligible Entra ID users. On LinkedIn this week, one Microsoft exec posted that “Copilot is going to be the new UI for work”.

In short, Copilots will be everywhere.

Azure as the world’s computer

Of course, other cloud platforms exist, but I’m writing about Microsoft here. So what did they announce that makes Azure even more powerful and suited to running these new AI workloads?

  • Re-affirming the commitment to zero-carbon power sources and then becoming carbon negative.
  • Manufacturing their own hollow-core fibre to drive up speeds.
  • Azure Boost (offloading server virtualisation processes from the hypervisor to hardware).
  • Taking the innovation from Intel and AMD but also introducing new Microsoft silicon: Azure Cobalt (ARM-based CPU series) and Azure Maia (AI accelerator in the form of an LLM training and inference chip).
  • More AI models and APIs. New tooling (Azure AI Studio).
  • Improvements in the data layer with enhancements to Microsoft Fabric. The “Microsoft Intelligent Data Platform” now has four pillars: databases, analytics, AI, and governance.
  • Extending Copilot across every role and function (as I briefly discussed in the previous section).

In summary, and looking forward

Microsoft is powering ahead on the back of its AI investments. And, as tired of the hype as we may all be, it would be foolish to ignore it. Copilots look to be the next generation of assistive technology that will help drive productivity. Just as robots have become commonplace on production lines and impacted “blue collar” roles, AI is the productivity enhancement that will impact “white collar” jobs.

In time we’ll see AI and mixed reality coming together to make sense of our intent and the world around us. Voice, gestures, and where we look become new inputs – the world becomes our prompt and interface.

Featured images: screenshots from the Microsoft Ignite keynote stream, under fair use for copyright purposes.

Learning to be intelligent about artificial intelligence

This week promises to be a huge one in the world of Artificial Intelligence (AI). I should caveat that: almost every week brings a barrage of news about AI. And, depending on which articles you read, AI is either going to:

  • Take away all our jobs or create exciting new jobs.
  • Solve global issues like climate change or hasten climate change through massive data centre power and water requirements.
  • Lead to the demise of society as we know it or create a new utopia.

A week of high profile AI events

So, why is this week so special?

  1. First of all, the G7 nations have agreed a set of Guiding Principles and a Code of Conduct on AI. This has been lauded by the European Commission as complementing the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act.
  2. Then, starting on Wednesday, the UK is hosting an AI Safety Summit at “the home of computing”, Bletchley Park. And this summit is already controversial with some questioning the diversity of the attendees, including Dr Sue Black, who famously championed saving Bletchley Park from redevelopment.
  3. The same day, Microsoft’s AI Copilots will become generally available to Enterprise users, and there’s a huge buzz around how the $30/user/month Copilot plays against other offers like Bing Chat Enterprise ($5/user/month), or even using public AI models.
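The pricing gap in that last point is easy to quantify. A back-of-the-envelope sketch in Python, using the per-seat list prices quoted above and a hypothetical 1,000-seat organisation (the seat count is my illustration, not a figure from either vendor):

```python
# Rough per-seat cost comparison of the two offers mentioned above.
# List prices are from the post; the seat count is hypothetical.
copilot_per_month = 30    # Microsoft's $30/user/month Copilot offer
bing_chat_per_month = 5   # Bing Chat Enterprise, $5/user/month
seats = 1000              # hypothetical organisation size

copilot_annual = copilot_per_month * 12 * seats      # $360,000
bing_chat_annual = bing_chat_per_month * 12 * seats  # $60,000
difference = copilot_annual - bing_chat_annual       # $300,000

print(copilot_annual, bing_chat_annual, difference)
```

At that scale, the $25/user/month difference compounds to a six-figure annual decision – which is why the buzz is less about the technology and more about whether the productivity gains justify the premium.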

All just another week in AI news. Or not, depending on how you view these things!

Is AI the big deal that it seems to be?

It’s only natural to ask questions about the potential that AI offers (specifically generative AI – gAI). It’s a topic I covered in a recent technology advice note.

In summary, I said that:

“gAI tools should be considered as assistive technologies that can help with researching, summarising and basic drafting but they are not a replacement for human expertise.

We need to train people on the limitations of gAI. We should learn lessons from social media, where nuanced narratives get reduced to polarised soundbites. Newspaper headlines do the same, but social media industrialised things. AI has the potential to be transformative. But we need to make sure that’s done in the right way.

Getting good results out of LLMs will be a skill – a new area of subject matter expertise (known as “prompt engineering”). Similarly, questioning the outputs of GANs to recognise fake imagery will require new awareness and critical thinking.”

Node 4 Technology Advice Note on Artificial Intelligence, September 2023.

Even as I’m writing this post, I can see a BBC headline that asks “Can Rishi Sunak’s big summit save us from AI nightmare?”. My response? Betteridge’s law probably applies here.

Could AI have saved a failed business?

Last weekend, The Sunday Times ran an article about the failed Babylon Healthcare organisation, titled “The app that promised an NHS ‘revolution’ then went down in flames”. The article is behind a paywall, but I’ve seen some extracts.

Two things appear to have caused Babylon’s downfall (at least in part). Not only did Babylon attract young and generally healthy patients to its telehealth services, but it also offered frictionless access.

So, it caused problems for traditional service providers, leaving them with an older, more frequently ill, and therefore more expensive sector of the population. And it caused problems for itself: who would have thought that if you offer people unlimited healthcare, they will use it?!

(In some cases, creating friction in provision of a service is a deliberate policy. I’m sure this is why my local GP doesn’t allow me to book appointments online. By making me queue up in person for one of a limited number of same-day appointments, or face a lengthy wait in a telephone queue, I’m less likely to make an appointment unless I really need it.)

The article talks about the pressures on Babylon to increase its use of artificial intelligence. It also seems to come to the conclusion that, had today’s generative AI tools been around when Babylon was launched, it would have been more successful. That’s a big jump for a consumer journalist to make: asserting that generative AI is better at predicting health outcomes than expert-system decision trees.

We need to be intelligent about how we use Artificial Intelligence

Let me be clear: generative AI makes stuff up. Literally. gAIs like ChatGPT work by predicting and generating the next word based on previous words – basically, on probability. And sometimes they get it wrong.
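To make “predicting the next word” concrete, here’s a deliberately tiny sketch in Python – a bigram model over a toy corpus, nothing like a real LLM, but it shows the probability-weighted sampling at the heart of the idea:

```python
import random
from collections import Counter, defaultdict

# Toy corpus and bigram counts – illustrative only, not a real language model.
corpus = "the meeting took place and the team took notes".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev, rng=random):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = bigrams[prev]
    words = list(counts)
    return rng.choices(words, weights=[counts[w] for w in words])[0]

# "took" was followed by "place" once and "notes" once, so each is
# predicted with probability 0.5 – the model guesses, it doesn't "know".
print(next_word("took"))
```

A real LLM conditions on far more context and billions of parameters, but the output is still a probability-weighted guess at the next token – which is exactly why made-up words and “changed minds” happen.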

Last week, I asked ChatGPT to summarise some meeting notes. The summary it produced included a typo – a made-up word:

“A meeting took tanke place between Node4 & the Infrastructure team at <client name redacted> to discuss future technology integration, project workloads, cost control measures, and hybrid cloud strategy.”

Or, as one of my friends found when he asked ChatGPT to confirm a simple percentage calculation, it initially said one thing and then “changed its mind”!

Don’t get me wrong – these tools can be fantastic for creating drafts, but they do need to be checked. Many people seem to think that an AI generates a response from a database of facts and therefore must be correct.

In conclusion

As we traverse the future landscape painted by artificial intelligence, it’s vital that we arm ourselves with a sound understanding of its potential and limitations. AI has often been regarded as a silver bullet for many of our modern challenges, a shortcut to progress and optimised efficiency. But as we’ve explored in this blog post – whether it’s the G7 nations’ principles, Microsoft’s AI Copilot, or a fallen Babylon Healthcare – AI is not a one-size-fits-all solution. It’s a tool, often brilliant but fallible, offering us both unprecedented opportunities and new forms of challenges.

The promises brought by AI are enormous. This week’s events underscore the urgency to familiarise ourselves with AI, acknowledge its potential, and intelligently navigate its limitations. From a set of AI guiding principles on a global scale, to raising awareness of gAI, and analysing the role of AI in business successes and failures – it’s clear that being informed about AI is no longer an option but a necessity.

gAI tools, transformative as they are, need to be used as assistive technologies and not as replacements for human intellect and expertise. Embracing AI should not mean renouncing critical thinking and caution. So, as we interact with AI, let’s do it intelligently, asking the right questions and recognising both its strengths and its limitations. This will enable us to harness its power effectively, while avoiding over-reliance or the creation of new, unforeseen problems.

It’s time we stop viewing AI through a lens of absolute salvation or doom, and start understanding it as a dynamic field that requires thoughtful and knowledgeable engagement. Evolution in human tech culture will not be judged by the power of our tools, but by our ability to skilfully and ethically wield them. So, let’s learn to be intelligent about how we use artificial intelligence.

Postscript

That conclusion was written by an AI, and edited by a human.

Featured image: screenshot from the BBC website, under fair use for copyright purposes.