OpenAI Atlas and the blurred line between search and synthesis

OpenAI’s new Atlas browser has certainly got people talking.

Some are excited — calling it a “Google killer” and a glimpse of how we’ll all navigate the web in future. Others are alarmed — pointing to privacy concerns, data collection prompts, and the idea of handing over browsing history and passwords to an AI company.

Jason Grant described his experience as “a giant dark pattern.” Matthew Dunn was more balanced — impressed by the features, but quick to warn businesses off using it. He’s right: if you wouldn’t paste confidential data into ChatGPT, you probably shouldn’t browse the company intranet through Atlas either.

Search vs. synthesis

When people say Atlas will replace Google, they’re missing the point. It’s not search in the traditional sense.

A search engine indexes existing content and returns links that might answer your question. Atlas — and systems like it — go a step further. They synthesise an answer, combining what’s on the web with what’s in your conversation and what they’ve “seen” before.

As Data Science Dojo explains, search engines are designed to find information that already exists, while synthesis engines are designed to create new information.

Or, as Vincent Hunt neatly puts it: “Search gives you links. Synthesis gives you insight.”

That shift sounds subtle, but it changes everything: how we ask questions, how we evaluate truth, and how much we trust the output.

As I said in my recent talk on AI Transformation at the Bletchley AI User Group, “Generative AI is not a search engine. It doesn’t retrieve facts. It generates language based on probabilities.” Google doesn’t know the truth either — it just gives you the most common answer to your question — but AI goes a step further. It merges, rewrites and repackages information. That can be powerful, but it’s also risky. It’s why I believe the AI-generated results that many search engines now return as default are inferior to traditional results, based on actual information sources.

Without strong governance, AI may be repurposing outdated content or drawing on biased data. Transparency matters — because trust is the real currency of AI adoption.

Why Atlas matters

In OpenAI’s announcement, Atlas is described as “bringing ChatGPT anywhere across the web — helping you in the window right where you are.”

It’s not just a search bar. It can summarise pages, compare options, fill out forms, or even complete tasks within websites. That’s a very different paradigm — one where the browser becomes a workspace, and the assistant becomes a collaborator.

A step towards agentic AI?

So, is Atlas really agentic? In part, yes.

Agentic AI describes systems that can act rather than just answer. They plan, execute and adapt — working on your behalf, not just waiting for your next prompt.

OpenAI’s own notes mention an agent mode that can “help you book reservations or edit documents you’re working on,” as reported by The Verge.

Others, like Practical Ecommerce, describe Atlas as “a push into agentic browsing — where the browser is now an AI agent too.”

It’s not full autonomy yet — more like assisted agency — but it’s a clear step in that direction.

Why it still needs caution

As exciting as it sounds, Atlas isn’t designed for enterprise use. It raises valid concerns about data privacy, security, and trust. You wouldn’t give a work browser access to sensitive credentials, and the same logic applies here.

As Matthew Dunn notes, ChatGPT “produces better output than Copilot, but with less security and privacy.” That’s a fair trade-off for some users, but not for organisations handling confidential information.

So, by all means, explore it — but do so with your eyes open.

And yes, I'll still give it a try (update: I decided not to install it after all)

For all the justified concerns about privacy and data handling, I’ll still give Atlas a try. Even though I have Copilot at work, I pay for ChatGPT Pro for activities that are not directly related to my confidential work.

Atlas might extend that usefulness into how I browse, not just how I prompt. The key, as ever, is knowing what data you’re sharing — and making that a conscious choice, not an accidental one.

[Updated 24/10/2025: After writing and publishing this post, I decided not to install Atlas. There are a lot of security concerns about the way the browser stores local data, which may easily be exploited. Nevertheless, both OpenAI Atlas and Perplexity Comet are interesting developments, and the narrative about the differences between an AI search (synthesis) and a traditional search is still valid.]

Featured image: created by ChatGPT.

Tonight’s talk at the Bletchley AI User Group, and a new AI Resources page

Tonight, I’ll be giving a talk on AI Transformation at the Bletchley AI User Group.

Slides

I gave up on bit.ly QR codes/links to OneDrive* and hosted the slides on my own website. They are also embedded below:

20251021_Mark_Wilson_Bletchley_AI_UG_AI_Transformation

Alternatively, you can save my bandwidth by picking them up from my OneDrive instead!

Feedback

If you were at the talk, some feedback would be much appreciated, please. There’s a Microsoft Form for that!

Resources

I also reached a point where I was seeing more and more new AI content every day and I just… had… to… stop… adding… more… into… the… presentation. A few minutes vibe coding with ChatGPT gave me a static single page website with a search capability and a JSON-based data source. And ChatGPT even did the analysis, classification and tagging for me…
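The core of a page like that is only a few lines. As a sketch (the function and data shape here are illustrative, not the actual implementation ChatGPT produced), the JSON data source holds tagged entries and a small filter function drives the search box:

```javascript
// Illustrative sketch of a JSON-backed client-side search.
// resources.json would hold entries like:
//   { "title": "...", "tags": ["governance"], "url": "..." }

function searchResources(resources, query) {
  const q = query.trim().toLowerCase();
  if (q === "") return resources; // empty query shows everything
  return resources.filter((r) =>
    r.title.toLowerCase().includes(q) ||
    r.tags.some((t) => t.toLowerCase().includes(q))
  );
}

// In the page itself, something along these lines wires it up:
//   fetch("resources.json")
//     .then((res) => res.json())
//     .then((resources) => {
//       input.addEventListener("input", () =>
//         render(searchResources(resources, input.value)));
//     });
```

Because everything is static – one HTML page, one JSON file – there's no backend to maintain, which is what makes the vibe-coded approach viable for a resources page.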

Anyway, my new AI Resources page is here and will be updated as and when I come across new artefacts.

*What was wrong with bit.ly?

Recent changes at bit.ly mean that they:

  • No longer support custom domain names on a free account (bye-bye mwil.it); and
  • Require a paid account to redirect short links after creation

The challenge I had was that I wanted to include a QR code for people to scan when I present the content, but that created a circular issue: I upload the slides, create a QR code, add the QR to the slides, upload the slides, the link changes… etc., etc.

(I wouldn’t mind paying for bit.ly, except that their plans are a bit expensive. This is a free website that creates a handful of short links each month and subscription fatigue is real…)

When software meets steel: agentic computing in the real world

I flew to Dublin last week as part of the team representing Node4 at a Microsoft Sales and Partner summit. But the event itself is not really relevant here — what struck me was the amount of robot tech I interacted with on the trip.

At Heathrow Terminal 5, I took one of the self-driving pods that connect the business car park with the terminal. Inside, Mitie’s robot cleaning machines were gliding quietly between travellers. And in Dublin Airport, our restaurant meal was brought out by a robot waitress called Bella.

It was only later that I realised these weren’t isolated novelties. They’re part of a pattern: we’re used to talking about agentic computing in a software sense, but it also presents itself as hardware in the physical world.

The journey begins: autonomous pods at Heathrow

The Heathrow pods have been around for over a decade, but they still feel futuristic. You call one on demand, climb in, and it glides directly to your stop. There’s no driver, no timetable, and almost no wait. The system uses far less energy than a bus or car, and the whole thing is orchestrated by software that dispatches pods, avoids collisions and monitors usage.

It’s a neat demonstration of automation in motion: you make a request, and a machine physically carries it out.

Quiet efficiency: Mitie’s cleaning “cobots”

Inside the terminal, Mitie’s autonomous cleaning robots were at work. These cobots use sensors and cameras to map the concourse, clean for hours, then return to charge before resuming their shifts. They handle repetitive tasks while human staff focus on the harder jobs.

You could easily miss them — and that’s the point. They’re designed to blend in. The building, in a sense, is starting to help maintain itself.

Meet Bella: the robot waitress

In Dublin, things got more personal. The restaurant’s “BellaBot” rolled over with trays of food, blinking her animated eyes and purring polite phrases. The QR code was hard to scan (black text on a brass plate lacks contrast) and the ordering app didn’t work, so human staff had to step in – but the experience was still surreal.

Bella’s design deliberately humanises the machine, using expressions and voice to make diners comfortable. For me, it was a bit too much. The technology was interesting; the personality, less so. I prefer my service robots less anthropomorphised.

This tension — between automation and human comfort — is one of the trickiest design challenges of our time.

A pattern emerges

Taken together, the pods, cleaning cobots and BellaBot reveal different layers of the same trend:

  • Mobility agents like the Heathrow pods move people and goods.
  • Maintenance agents like Mitie’s cobots quietly maintain infrastructure.
  • Service agents like BellaBot interact directly with us.

Each one extends software intelligence into the physical world. We’re no longer just automating data; we’re automating action.

And none of them works completely alone. The pods are overseen by a control centre. The cobots have human supervisors. Bella needs a human backup when the tech fails. This is automation with a safety net — hybrid systems that rely on graceful human fallback.

From airports to high streets

You don’t have to go through Heathrow or Dublin to see the same shift happening.

Closer to home, in Milton Keynes and Northampton (as well as in other towns and cities across the UK and more widely), small white Starship robots deliver groceries and takeaway food along pavements. They trundle quietly across zebra crossings, avoiding pedestrians and pets, using cameras and sensors to navigate. A smartphone app summons them; another unlocks the lid when your order arrives.

Like the airport pods, they make autonomy feel normal. Children wave to them. People barely notice them anymore. The line between software, service and physical action is blurring fast.

The thin end of the wedge

These examples show how automation is creeping into daily life — not replacing humans outright, but augmenting us.

The challenge now isn’t capability; it’s reliability. Systems like Bella’s ordering app work brilliantly until they don’t. What matters most is how smoothly they fail and how easily humans can step back in.

For now, that balance still needs work. But it’s clear where things are heading. The real frontier of AI isn’t in chatbots or copilots — it’s in physical agents that move, clean, deliver and serve. It’s software made tangible.

And while Bella’s blinking eyes may have been a step too far for me, it’s hard not to admire the direction of travel. The future isn’t just digital. It’s autonomous, electric, slightly quirky – and already waiting for you in the car park.

Featured image: created by ChatGPT.

Monthly retrospective: May 2025

I’ve been struggling to post retrospectives this year – they are pretty time consuming to write. But, you may have noticed the volume of content on the blog increasing lately. That’s because I finally have a workflow with ChatGPT prompts that help me draft content quickly, in my own style. (I even subscribe to ChatGPT now, and regular readers will know how I try to keep my subscription count down.) Don’t worry – it’s still human-edited (and there are parts of the web that ChatGPT can’t read – like my LinkedIn, Instagram and even parts of this blog) so it should still be authentic. It’s just less time-consuming to write – and hopefully better for you to read.

On the blog…

Home Assistant tinkering (again)

I’ve been continuing to fiddle with my smart home setup. This month’s project was replacing the ageing (and now unsupported) Volvo On Call integration in Home Assistant with the much better maintained HA Volvo Cars HACS integration. It works brilliantly – once you’ve jumped through the hoops to register for an API key via Volvo’s developer portal.

And no, that doesn’t mean I can now summon my car like KITT in Knight Rider – but I can check I locked it up and warm it up remotely. Which is almost as good. (As an aside, I saw KITT last month at the DTX conference in Manchester.)

Software-defined vehicles

On the subject of cars, I’ve been reflecting on how much modern cars depend on software – regardless of whether they’re petrol, diesel or electric. The EV vs. ICE debate often centres on simplicity and mechanics (fewer moving parts in an EV), but from my experience, the real pain points lie in the digital layer.

Take my own (Volvo V60, 2019 model year). Mechanically it’s fine and it’s an absolute luxury compared with the older cars that my wife and sons drive, but I’ve seen:

  • The digital dashboard reboot mid-drive
  • Apple CarPlay refusing to connect unless I “reboot” the vehicle
  • Road sign recognition systems confidently misreading speed limits

Right now, it’s back at the body shop (at their cost, thankfully) for corrosion issues on a supposedly premium marque. My next car will likely be electric – but it won’t be the drivetrain that convinces me. It’ll be the software experience. Or, more realistically, the lack of bad software. Though, based on Jonathan Phillips’ experience, new car software contains alarming typos in the UI, which indicates a lack of testing…

Thinking about the impact of generative AI

This update isn’t meant to be about AI – but it seems it is – because it’s become such a big part of my digital life now. And, increasingly, it’s something I spend more time discussing with my clients.

AI isn’t new. We’ve had robotic process automation (RPA), machine learning, data science and advanced analytics for years. I even studied neural networks at Poly’ in the early 1990s. But it’s generative AI that’s caught everyone’s imagination – and their budgets.

In Episode 239 of the WB-40 podcast (AI Leadership), I listened to Matt Cockbill talk about how it’s prompting a useful shift in how we think about technology. Less about “use cases” and more about “value cases” – how tech can improve outcomes, streamline services, and actually help achieve what the organisation set out to do.

The rush to digitise during COVID saw huge amounts of spending – enabling remote working or entrenching what was already there (hello, VDI). But now it feels like the purse strings are tightening, and some of that “why are we doing this again?” thinking is creeping back in. Just buying licences and rolling out tools is easy. Changing the way people work and deliver value? That’s the real work.

Meal planning… with a side of AI

I’ve also been experimenting with creating an AI-powered food coach to help me figure out what to eat, plan ahead, and avoid living off chocolate Hobnobs and toasted pitta. Still early days – but the idea of using an assistant to help nudge me towards healthier, simpler food is growing on me.

Reading: The Midnight Library

I don’t read much fiction – I’m more likely to be found trawling through a magazine or scrolling on my phone – but Matt Haig’s “The Midnight Library” really got me. OK, so technically, I didn’t read it – it was an impulse purchase to use some credits before cancelling my Audible account – but it was a great listen. Beautifully read by Carey Mulligan, it’s one of those rare books that manages to be both dark and uplifting. Some reviews suggest that not everyone feels the same way – and my reading it at a time of grief and loss may have had an impact – but I found it to be one of my best reads in a long time.

Without spoiling anything, the idea of a liminal space between life and death – where you can explore the infinite versions of yourself – is quietly brilliant. Highly recommended. So much so that I bought another copy (dead tree edition) for my wife.

On LinkedIn this month…

It’s been a lively month over on LinkedIn, with my posts ranging from AI hype to the quirks of Gen-Z slang (and a fair dose of Node4 promotion). These are just a few of the highlights – follow me to get the full experience:

  • Jony and Sam’s mysterious new venture
    I waded into the announcement from Jony Ive and Sam Altman with, let’s say, a healthy dose of scepticism. A $6.5bn “something” was teased with a bland video and a promo image that felt more 80s album cover than product launch. It may be big. But right now? Vapourware.
  • Is the em dash trolling us?
    I chipped in on the debate about AI-written content and the apparent overuse of em dashes (—) – often flagged as an “AI tell” – usually by people who a) don’t understand English grammar or b) don’t know where LLMs learned to write. (I am aware that I incorrectly use en dashes in these posts, because people seem to find them less “offensive”.) But what if the em dash is trolling us?
  • Skibidi-bibidi-what-now?
    One of the lighter moments came with a post about Gen-Z/Gen-Alpha slang. As a Gen-Xer with young adult kids, I found a “translator” of sorts – and it triggered a few conversations about how language evolves. No promises I’ll be dropping “rizz” into meetings just yet. Have a look.
  • Politeness and prompting
    Following a pub chat with Phil Kermeen, I shared a few thoughts on whether being polite to AI makes a difference. TL;DR: it does. Here’s the post.
  • Mid-market momentum
    Finally, there have been lots of posts around the Node4 2025 Mid-Market Report. It was a big effort from a lot of people, including me, and I’m really proud of what we’ve produced. It’s packed with insights, based on bespoke research of over 600 IT and Business leaders.

Photos

A few snaps from my Insta’ feed…

https://www.instagram.com/markwilsonuk/p/DJr5Ui8N94u

For more updates…

That’s all for now. I probably missed a few things, but it’s a decent summary of what I’ve been up to at home and at work. I no longer use X, but follow me on LinkedIn (professional), Instagram (visual) and this blog for more updates – depending on which content you like best. Maybe even all three!

Next month…

A trip to Hamburg (to the world’s largest model railway); ramping up the work on Node4’s future vision; and hopefully I’ll fill in some of the gaps between January and May’s retrospectives!

Featured image: created by ChatGPT

Who gets paid when the machines take over?

Yesterday evening, I was at the Bletchley AI User Group in Milton Keynes. One of the talks was from Stephanie Stasey (/in/missai) (aka Miss AI), titled “Gen AI vs. white collar workers and trad wives – building a robot to make my bed”.

It was delivered as a monologue – which sounds negative, but really isn’t. In fact, it was engaging, sharp, and packed with food for thought. Stephanie brought a fresh perspective to a topic we’re all thinking about: how AI is reshaping both the world of work and the way we live at home.

The labour that goes unnoticed (but not undone)

One part of the talk touched on “trad wives” – not a term I was especially familiar with, but the theme resonated.

If you’d asked my wife and me in our 20s about how we’d divide up household tasks, we might have offered up a fair and balanced plan. But real life doesn’t always match the theory.

These days, we both work part-time – partly because unpaid labour (childcare, cooking, washing, cleaning, all the life admin) still needs doing. And there don’t seem to be enough hours when the laptop is closed.

The system isn’t broken – it’s working exactly as designed

The point I’ve been turning over for a while is this: it feels like we’re on the edge of something big.

We could be on the brink of a fundamental shift in how we think about work – if those in power wanted to make radical changes. I’ll avoid a full political detour, though I’m disheartened by the rise of the right and how often “ordinary people” are reminded of their place. (My youngest son calls me a champagne socialist – perhaps not entirely unfairly.)

Still, AI presents us with a rare opportunity to do things differently.

But instead of rethinking how work and value are distributed, we’re told to brace for disruption. The current narrative is that AI is coming for our jobs. Or a variation on that theme: “Don’t worry,” we’re told, “it won’t take your job – but someone using AI might”. That line’s often repeated. It’s catchy. But it’s also glib – and not always true.

I’m close enough to retirement that the disruption shouldn’t hit me too hard. But for my children’s generation? The impact could be massive.

What if we taxed the agents?

So here’s a thought: what if we taxed the AI agents?

If a business creates an agent to do the work a person would normally do – or could reasonably do – then that agent is taxed, like a human worker would be. It’s still efficient, still scalable, but the benefits are shared.

And, how would we live, if the jobs go away? That’s where Universal Basic Income (UBI) comes in (funded by taxes on agents, as well as on human effort).

Put simply, UBI provides everyone with enough to cover their basic needs – no strings attached. People can still work (and many will). For extra income. For purpose. For contribution. It just doesn’t have to be 9-to-5, five days a week. It could be four. Or two. The work would evolve, but people wouldn’t be left behind. It also means that the current, complex, and often unjust benefits system could be removed (perhaps with some exceptions, but certainly for the majority).

What could possibly go right?

So yes, the conversation around AI is full of what could go wrong. But what if we focused on what could go right?

We’ve got a window here – a rare one – to rethink work, contribution, and value. But we need imagination. And leadership. And a willingness to ask who benefits when the machines clock in.

Further reading on UBI

If you’re interested in UBI and how it might work in practice, here are some useful resources:

Featured image: author’s own.

Postmortem: deploying my static website with Azure Static Web Apps (eventually)

This all started out as a bit of vibe coding* in ChatGPT…

Yesterday, whilst walking the dog, I was listening to the latest episode of WB-40. Something Julia Bellis said gave me an idea for a simple custom GPT to help people (well, mostly me) eat better. ChatGPT helped me to create a custom GPT – which we named The Real Food Coach.

With the GPT created, I asked ChatGPT for something else: help me build a one-page website to link to it. In minutes I had something presentable: HTML, styling, fonts, icons – all generated in a few minutes. Pretty slick.

When it came to hosting, ChatGPT suggested something I hadn’t used previously: Azure Static Web Apps, rather than the Azure Storage Account route I’d used for hosting in the past. It sounded modern and neat – automatic GitHub integration, free SSL, global CDN. So I followed its advice.

ChatGPT was great. Until it wasn’t.

A quick win turns into a slow burn

The proof of concept came together quickly – code committed to GitHub, site created in Azure, workflow generated. All looked good. But the deploys failed. Then failed again. And again.

What should have taken 10 minutes quickly spiralled into a full evening of troubleshooting.

The critical confusion

The issue came down to two settings that look similar – but aren’t:

  • Deployment source – where your code lives (e.g. GitHub)
  • Deployment authorisation policy – how Azure authenticates deployments (either via GitHub OIDC or a manually managed deployment token)

ChatGPT had told me to use GitHub for both. That was the mistake.

Using GitHub as the authorisation method relies on Azure injecting a secret (AZURE_STATIC_WEB_APPS_API_TOKEN) into GitHub, but that never happened. I tried regenerating it, reauthorising GitHub, even manually wiring in deployment tokens – all of which conflicted with the setup Azure had created.

The result? Deploys that failed with:

“No matching Static Web App was found or the API key was invalid”

Eventually, after several rounds of broken builds, missing secrets, and deleting and recreating the app, I questioned the advice ChatGPT had originally given. Sure enough, it confirmed that yes – the authorisation policy should have been set to Deployment Token, not GitHub.

Thanks, ChatGPT. Bit late.

The right setup

Once I created the app with GitHub as the deployment source and Deployment Token as the authorisation policy, everything clicked.

I copied the token from Azure, added it to GitHub secrets, updated the workflow to remove unnecessary build steps, and redeployed.

Success.
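For anyone hitting the same wall, the working workflow looked roughly like this. This is a sketch, not my exact file – `app_location`, `skip_app_build` and the secret name are assumptions that will vary with your repository:

```yaml
# GitHub Actions workflow for Azure Static Web Apps, authorised with
# a deployment token (not GitHub OIDC).
name: Deploy static site

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Azure/static-web-apps-deploy@v1
        with:
          # The token copied from the Azure portal into a GitHub secret
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          action: upload
          app_location: "/"      # plain HTML/CSS at the repo root
          skip_app_build: true   # no build step needed for a static page
```

The key point is that the secret is one you create and populate yourself – you’re not relying on Azure injecting it for you.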

Custom domain and tidy-up

Pointing my subdomain to the Static Web App was easy enough. I added the TXT record for domain verification, removed it once verified, and then added the CNAME. SSL was provisioned automatically by Azure.
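In DNS terms, the two records involved look something like this (names and values are placeholders, not the real ones):

```
; Temporary record – removed once Azure has verified domain ownership
realfood  IN  TXT    "<validation-code-from-azure>"

; Permanent record pointing the subdomain at the Static Web App
realfood  IN  CNAME  <generated-app-name>.azurestaticapps.net.
```
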

I now have a clean, simple landing page pointing to my custom GPT – fast to load, easy to maintain, and served via CDN with HTTPS.

Lessons learned

  • ChatGPT can take you far, fast – but it can also give you confidently wrong advice. Check the docs, and question your co-pilot.
  • Azure Static Web Apps is fantastic for a simple website – I’m even using the free tier for this.
  • Authorisation and deployment are not the same thing. Get them wrong, and everything breaks – even if the rest looks correct.
  • Start again sooner – sometimes it’s faster to delete and recreate than to debug a half-working config.
  • DNS setup was smooth, but your DNS provider might need you to delete the TXT record after verification before you can create a CNAME.

Where is this website?

You can check out The Real Food Coach at realfood.markwilson.co.uk – and chat with the GPT at chat.openai.com/g/g-682dea4039b08191ad13050d0df8882f-the-real-food-coach.

*Joe Tomkinson told me that’s what it is. I’d heard of vibe coding but I thought it was something real developers do. Turns out it’s more likely to be numpties like me…

Monthly retrospective: January 2025

Last year I tried a thing – another attempt at weeknotes. Weeknotes became monthly retrospectives. Monthly retrospectives sometimes became every two months… and then they dried up completely last summer. I’m sorry. I was busy and, to be honest, this blog is not as important to me as it once was.

But then, an anonymous commenter said that they miss them and asked me to fill the gap to the end of 2024. That might happen (or it might join the great list of unwritten blog posts in the sky), but let’s have another go at the present. So, 31 January, 2025. Monthly retrospective…

At work

Things have really stepped up a gear at work. Last year I started to work on a future vision around which the Office of the CTO could structure its “thought leadership” content. Some important people supported it and I found myself co-presenting to our executive board. The next steps will remain confidential, but it’s no bad thing for me. And, the follow-on work has given me a lot of exposure to some of the marketing activities – my last fortnight has been full of market analysis and ideal client profiles.

But the last fortnight was not just those things. I had the harebrained idea that, as productivity is one of the outcomes we seek for our clients, maybe we should “do something for National Productivity Week”. After writing a series of blog posts (see below), and a fun day recording video content with our brand team, it feels like a one-man social media takeover. In fact, we had so much content that some of it will now have to go out next week. But that’s OK – productivity is not just for one week of the year. These are the posts that are live on the Node4 website today:

And the last post, next week, will be about building sustainable productivity approaches.

There are also a couple of videos up on LinkedIn:

And, earlier in the month (actually, it sneaked out on YouTube before Christmas but I asked for it to be pulled for an edit), there was this one. Not my best work… but it did lead to the purchase of a teleprompter which has made later videos so much easier!!!

Learning

Also on the work front, this month I completed my ILM Level 5 Award in Leadership and Management. Node4 runs this as part of a 7-month programme of workshops, with two coursework assignments that relate to four of the workshops. Over the last 7 months, I’ve covered:

  • Developing your personal leadership brand.
  • Inclusive leadership and motivation skills.
  • Managing and implementing strategic change.
  • Developing a high-performance team culture.
  • Manager as a coach.
  • Personal impact and emotional intelligence.
  • High impact presentations.

At home

Home Automation

I bought myself a Shelly temperature and humidity monitor for the Man Cave. It’s Home Assistant compatible, of course, so lets me use cheap overnight energy to stop the cabin from getting too cold/damp.
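The automation this enables is simple. A minimal Home Assistant sketch – the entity names here are made up, and the thresholds and tariff window are assumptions for illustration:

```yaml
# If the Man Cave gets cold during the cheap overnight tariff window,
# switch a heater on.
automation:
  - alias: "Man Cave frost protection (off-peak)"
    trigger:
      - platform: numeric_state
        entity_id: sensor.man_cave_temperature   # the Shelly H&T sensor
        below: 8
    condition:
      - condition: time
        after: "00:30:00"
        before: "04:30:00"   # cheap-rate window
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.man_cave_heater
```

A matching automation turns the heater off at the end of the window, so the cabin only ever draws power at the cheap rate.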

Also on the home automation front, I picked up some cheap Tapo P100 smart plugs. Like my no-name Chinese ESP32-based plugs, they are a better form factor than my older Kasa HS100/110 plugs so they don’t take space from the adjacent socket. But they lack any kind of reporting for energy usage so I should have got a pack of the slightly more expensive P110 models instead. I also struggled to add them to Home Assistant. They were recognised but wouldn’t authenticate, unless I reset my TP-Link password (which seemed to be the workaround – even if the password was the same)!

Getting away from it all

Aside from the tech, Mrs Wilson and I got away to London for a weekend, to celebrate a friend’s birthday. We were almost blown away by the tail of Storm Éowyn at Primrose Hill viewpoint but had fun (I’d never been before, but it’s in so many films!).

Tomorrow, I’m off to France for the UCI Cyclocross World Championships. Not competing of course (and disappointed that British Cycling is not sending a Women’s team or an U23 Men’s team). Just spectating. And probably consuming quite a lot of beer. And frites.

Writing

There have been some personal blog posts this month too:

In pictures

Some snaps from my Instagram:

A few thoughts on the UK Government’s AI announcement

Most of the text in this post previously appeared on my LinkedIn feed. I thought it should have been here…

Sometimes, I read something on LinkedIn and repost or comment, before realising I’ve pretty much written an entire blog post. On my phone. Twice, because I navigated away and lost the first attempt. Maybe I should have put it in here, but it probably gets seen by more people on LinkedIn. Still, I own this platform, so I’m putting it up for posterity.

The post in question was one from the BBC’s Technology Editor, Zoe Kleinman. Zoe had posted insights about the UK Prime Minister’s “bold and ambitious plans to support the UK’s AI sector”.

Zoe’s post and articles are well worth a read, but I wanted to add some more:

“[…] I can see why the UK wants to position itself as an innovative place for growth, without being (quite as) reliant on US tech behemoths, but most of us have yet to establish what we want to use AI for.

Sure, “AI” is the perceived answer to everything at the moment – and there are some very large companies with very deep pockets pouring billions into “AI” – but it’s an arms race. “Big tech” hasn’t worked out how to make money from its AI investments yet. The tech giants just want to make sure they have a big slice of that pie when we do finally get there.

Putting aside the significant environmental and social challenges presented by AI (as mentioned in Zoe’s post […]), “we” (our companies and our countries) haven’t got a solid business case. We just know we can’t afford to be left behind…

We’ve used some AI technologies in a variety of forms for years (for example Machine Learning) – and the recent advances in generative AI (genAI) have democratised access to AI assistants and opened a huge opportunity. But genAI is just one type of AI, and we don’t fully understand the large language models that underpin it.

One thing that sticks in my mind is something I heard on a recent podcast, when Johannes Kleske commented something along the lines of “when it’s in the future, it’s AI. Once we have worked out what to do with it, it’s just software.”

More on the UK Prime Minister’s AI announcement

Artificial Intelligence: Plan to ‘unleash AI’ across UK revealed [BBC News]

Microsoft Ignite 2024 on a page

This content is 1 year old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

You probably noticed, but Microsoft held its Ignite conference in Chicago last week. As has become the norm, there’s a “Book of News” for all the major announcements, and the keynotes are available for online review. But there’s an awful lot to sort through. Luckily, CNET created a 15-minute summary of Satya Nadella’s keynote:

Major announcements from Ignite 2024

Last year, I wrote about how it was clear that Microsoft is all about Artificial Intelligence (AI) and this year is no different. The rest of this post focuses on the main announcements with a little bit of analysis from yours truly on what the implications might be.

Investing in security, particularly around Purview
Data governance is of central importance in the age of AI. Microsoft has announced updates to prevent oversharing, risky use of AI, and misuse of protected materials. With one of the major concerns being accidental access to badly-secured information, this will be an important development for those who make use of it.
Find out more: https://aka.ms/Ignite2024Security/

Zero Day Quest
A new hacking event with $4m in rewards. Bound to grab headlines!
Find out more: https://aka.ms/ZeroDayQuest

Copilot as the UI for AI
If there’s one thing to take away from Ignite, it’s that Microsoft sees Copilot as the UI for AI (it becomes the organising layer for work and how it gets done):

1. Every employee will have a Copilot that knows them and their work – enhancing productivity and saving time.
2. There will be agents to automate business processes.
3. And the IT department has a control system to manage, secure and measure the impact of Copilot.

Copilot Actions
Copilot Actions are intended to reduce the time spent on repetitive everyday tasks – they were described as “Outlook Rules for the age of AI” (but for the entire Microsoft 365 ecosystem). I’m sceptical about these but willing to be convinced. Let’s see how well they work in practice.
Find out more: https://aka.ms/CopilotActions

Copilot Agents
If 2023-4 were about generative AI, “agentic” computing is the term for 2025.

There will be agents within the context of a team – teammates scoped to specific roles – for example, a facilitator to keep meetings focused in Teams and manage follow-ups/action items; a Project Management Agent in Planner, to create a plan and oversee task assignments/content creation; self-service agents to provide information – augmenting HR and IT departments to answer questions and complete tasks; and a SharePoint Agent per site – providing instant access to real-time information.

Organisations can create their own agents using Copilot Studio – and the aim is that it should be as easy to create an agent as it is to create a document.
Find out more: https://aka.ms/AgentsInM365

Copilot Analytics
Answering criticism about the cost of licensing Copilot, Microsoft is providing analytics to correlate usage with business metrics. Organisations will be able to tune their Copilot usage to business KPIs and show how Copilot usage is translating into business outcomes.
Find out more: https://aka.ms/CopilotAnalytics

Mobile Application Management on Windows 365
Microsoft is clearly keen to push its “cloud PC” concept – Windows 365 – with new applications so that users can access a secure computing environment from iOS and Android devices. Having spent years working to bring clients away from expensive thin client infrastructure and back to properly managed “thick clients”, I’m not convinced about the “Cloud PC”, but maybe I’m just an old man shouting at the clouds…
Find out more: https://aka.ms/WindowsAppAndroid

Windows 365 Link
Windows 365 Link is a simple, secure, purpose-built access device (aka a thin PC). It’s admin-less and password-less, with security configurations enabled by default that cannot be turned off. The aim is that users can connect directly to their cloud PC with no data left locally (available from April 2025). If you’re going to invest in this approach, then it could be a useful device – but it’s not a Microsoft version of a Mac Mini – it’s all about the cloud.
Find out more: https://aka.ms/Windows365Link

Windows Resiliency Initiative
Does anyone remember “Trustworthy Computing”? Well, the Windows Resiliency Initiative is the latest attempt to make Windows more secure and reliable. It includes new features like Windows Hotpatch, to apply critical updates without a restart across an entire IT estate.
Find out more: https://aka.ms/WinWithSecurity

Azure Local
A rebranding and expansion of Azure Stack to bring Azure Arc to the edge. Organisations can run mission-critical workloads in distributed locations.
Find out more: https://aka.ms/AzureLocal

Azure Integrated HSM
Microsoft’s first in-house security chip hardens key management without impacting performance. This will be part of every new server deployed on Azure starting next year.
Find out more: https://aka.ms/AzureIntegratedHSM

Azure Boost
Microsoft’s first in-house data processing unit (DPU) is designed to accelerate data-centric workloads. It can run cloud storage workloads with 3x less power and 4x the performance.
Find out more: https://aka.ms/AzureBoostDPU

Preview of NVIDIA Blackwell AI infrastructure on Azure
By this point, even I’m yawning, but this is a fantastically fast computing environment for optimised AI training workloads. It’s not really something that most of us will use.
Find out more: https://aka.ms/NDGB200v6

Azure HBv5
Co-engineered with AMD, this was described as a new standard for high-performance computing and cited as being up to 8 times faster than any other cloud VM.
Find out more: https://aka.ms/AzureHBv5

Fabric
SQL Server is coming natively to Fabric in the form of Microsoft Fabric Databases. The aim here is to simplify operational databases, as Fabric already did for analytical requirements. It provides an enterprise data platform that serves all use cases, making use of open-source formats in the Fabric OneLake data lake. I have to admit, it does sound very interesting, but there will undoubtedly be some nuances that I’ll leave to my data-focused colleagues.
Find out more: https://aka.ms/Fabric

Azure AI Foundry
Described as a “first-class application server for the AI age” – unifying all models, tooling, safety and monitoring into a single experience, integrated with development tools as a standalone SDK and a portal. There are 1,800 models in the catalogue for model customisation and experimentation.
Find out more: https://aka.ms/MaaSExperimentation and https://aka.ms/CustomizationCollaborations

Azure AI Agent Service
Build, deploy and scale AI apps to automate business processes. Compared with Copilot Studio’s graphical approach, this provides a code-first approach for developers to create agents, grounded in data, wherever it is.
Find out more: https://ai.azure.com/

Other AI announcements
There will be AI reports and other management capabilities in Foundry, including evaluation of models.

Safety is important – with tools to build secure AI, including PromptShield to detect/block manipulation of outputs, and risk/safety evaluations for image content.

Quantum Computing
This will be the buzzword that replaces AI in the coming years. Quantum is undoubtedly significant, but it’s still highly experimental. Nevertheless, Microsoft is making progress in the quantum arms race, with the “world’s most powerful quantum computer” with 24 logical qubits – double the previous record.
Find out more: https://aka.ms/AQIgniteBlog

Featured image: screenshots from the Microsoft Ignite keynote stream, under fair use for copyright purposes.

Putting AI to work: making content more accessible

This content is 1 year old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’m really struggling with AI right now. On the one hand, it’s infuriating when it doesn’t help me with the task I’ve prompted it to do, and hallucinates total garbage instead. I’m also concerned for humanity, as the blind faith that some have in AI seems to me to be pretty dangerous. But, on another level, if we strip away the overenthusiastic promises and look at the little things that are now embedded in our productivity tools, it can be quite impressive.

Multiple takes

Today, I recorded a short video for an internal SharePoint site that supports some work I’m doing around Node4’s 2030 vision and strategy. It took about 8 takes and a lot of wrangling with technology for a simple talking head in front of a background. But I got there and, after an hour or so, I had the minute or so of content I wanted. I didn’t use professional video tools – just the things I have in Microsoft 365 and Windows 11 – I recorded myself in Teams against a custom background I’d created in PowerPoint, used Clipchamp to trim the video, and uploaded to Stream.

Copilot has entered the room

Then the cool stuff started. Microsoft Stream generated captions from the video. They weren’t quite right – but the edits were minimal and it let me make them. That means that I now have a working transcript and closed captions – assistive technologies to aid viewers.

But the impressive part was when I asked Copilot to summarise the content. This is what it came up with.

Key Topics:

  • Introduction to Vision2030: Mark Wilson, one of the Technology and Innovation Directors in Node4’s Office of the CTO, introduced Vision2030, a forward-looking strategy that outlines key trends and technologies expected to define the future of mid-market businesses. 0:00
  • Focus of Vision2030: Mark Wilson explained that Vision2030 focuses on integrating emerging innovations across all technology verticals in which Node4 operates, aiming to empower clients to anticipate and adapt to future technology trends, navigate complexity, and increase competitiveness and resilience. 0:21
  • Creating Lasting Value: Mark Wilson emphasized that Vision2030 is not just about adopting new technologies but also about helping clients create lasting value. 0:39
  • Future Content and Sales Plays: Mark Wilson mentioned that over the coming weeks and months, Node4 will create more content that builds on the core theme of enabling productivity and directly links to their sales plays, ensuring a clear connection from Vision through go-to-market to delivery. 0:45
  • Commitment to Mid-Market Businesses: Mark Wilson reiterated Node4’s commitment to guiding mid-market businesses through the transformation, ensuring they are well-positioned to succeed in the future digital economy. 0:57

Spot on. Sure, it had an edited transcript to work from, but now my colleagues don’t even need to watch the video. (Which raises the question of why I recorded it in the first place – to which the answer is: choice.)
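Copilot’s summary is generative, of course, and I have no insight into how it works under the hood. But the basic step it performs – turning an edited transcript into a short list of key points – is easy to illustrate with something far simpler. Here’s a minimal sketch of a naive frequency-based extractive summariser, purely for illustration (it’s not how Copilot does it, and the thresholds are arbitrary):

```python
import re
from collections import Counter

def summarise(text: str, max_sentences: int = 2) -> list[str]:
    """Very naive extractive summariser: score each sentence by the
    frequency of its words across the whole text, then return the
    top-scoring sentences in their original order."""
    # Split into sentences at punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Build word frequencies; dropping short words is a crude stop-word filter.
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return [s for s in sentences if s in top]  # restore original order
```

Real summarisers (generative or otherwise) do vastly more than this, but even a toy like the above shows why a clean transcript matters: the summary can only be as good as the text it scores.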

Changing the medium

So now, let’s take this a bit further… out of Copilot and Stream and into the real implications of this technology. Starting with a couple of observations:

  • When I’m driving, Apple CarPlay reads my messages to me. Or, I ask Siri to send a message, or to take a note.
  • When I’m in a group messaging situation, some people seem to have a propensity to create short-form audio.

I used to think that WhatsApp voice messages were the spawn of the devil. Why should I have to listen to someone drone on for 30 seconds when I could read a text message much more quickly? Is it because they couldn’t be bothered to type? Then someone suggested it might be because they struggle with writing. That puts a different lens on things.

Create and consume according to our individual preferences

Now, with technologies like this we can create content in audio/video or written form – and that same content can be consumed in audio/video or written form. We can each use our preferred methods to create a message, and the recipient can use their preferred method to consume it.

This is the stuff that really makes a difference – the little things that make someone’s life easier – all adding up to a bigger boost in our individual productivity, or just getting things done.

Featured image – author’s own screenshot