Tag: Artificial Intelligence

  • My 2026 anti-prediction: we won’t see an endless rise in generative AI

    My 2026 anti-prediction: we won’t see an endless rise in generative AI

    It’s the start of the year and everyone is writing their predictions. I’ve written a few for Node4 that will make their way onto the company socials — and in industry press, no doubt — but here’s one I’m publishing for myself:

    I think 2026 will be the year when tech companies quietly start to scale back on generative AI.

    Mark Wilson, 6 January 2026

    Over Christmas I was talking to a family member who, like many people, is convinced that AI — by which they really mean chatbots, copilots and agents — will just keep becoming more dominant.

    I’m not so sure. And I’m comfortable putting that on record now. But I don’t mean it’s all going away. Let me explain…

    Where the money comes from

    The biggest tech firms in the world are still pouring tens of billions into AI infrastructure. GPUs, custom silicon, data centres, power contracts, talent. That money has to come from somewhere. The uncomfortable truth is that many of the high-profile layoffs we’ve seen over the last two years aren’t about “AI replacing people”. They’re about reducing operating costs to fund AI investment. Humans out. CapEx in.

    That works for a while. But shareholders don’t accept “trust us, it’ll pay off eventually” indefinitely. At some point, the question becomes very simple: where is the sustainable revenue that justifies this level of spend?

    A land-grab without a business model

    Every hyperscaler and major platform vendor has invested as if generative AI is a winner-takes-most market. Own the models. Own the data. Own the developer ecosystem. Own the distribution. The logic is clear: if a viable business model emerges, they want the biggest possible slice of the pie.

    The problem is that the pie still hasn’t really materialised. We have impressive demos, widespread experimentation, and plenty of productivity anecdotes — but not many clear, repeatable use cases that consistently deliver real returns. Right now, it feels less like a gold rush and more like a game of chicken. Everyone keeps spending because they’re terrified of being the first to blink.

    Eventually, someone will.

    Slowing progress in the models themselves

    Another reason I’m sceptical is the pace of improvement itself. A lot of early excitement was based on the idea that bigger models would always mean better models. But that assumption is starting to wobble. Increasing amounts of AI-generated content are now fed back into new training datasets. Models learning from the outputs of other models.

    There is growing evidence that this can actually make them worse over time — less diverse, less accurate, more prone to error. Researchers call this model collapse. Whatever the name, it’s a reminder that data quality is finite, and simply scaling doesn’t guarantee progress.

    A noticeable shift in tone

    I also find it interesting how the tone has shifted. Not just from AI replacing humans to AI augmenting humans, but something broader. And I don’t mean the AI evangelists vs. the AI doomsayers back-and-forth that I see on LinkedIn every day either…

    A year or two ago, large language models were positioned as the future of AI. The centre of gravity. The thing everything else would orbit around. Listen carefully now, and the message from tech leaders is more cautious. LLMs are still important, but they’re increasingly framed as one tool among many.

    There’s more talk about smaller, domain-specific models. About optimisation rather than scale. About decision intelligence, automation, computer vision, edge AI, and good old-fashioned applied machine learning. In other words: AI that quietly does a job well, rather than AI that chats convincingly about it.

    That feels less like hype, and more like a course correction.

    A gradual change in direction

I don’t know whether this ends in a classic “bubble burst”. I’m a technologist, not an economist. What feels likely to me is a gradual change in direction. Investment doesn’t stop, but it becomes harder to justify. Projects get cut. Timelines stretch. Expectations reset. Some bets quietly fail.

    But there will be consequences. You can’t pour this much capital into something with limited realised outcomes and expect it to disappear without a trace.

    The natural resource crunch

    Then there’s the crucial element that should be worrying everyone: natural resources.

    Generative AI isn’t just expensive in financial terms. It’s expensive in physical ones. Energy, cooling, water, land, grid capacity. Even today’s hyperscale cloud providers are struggling in some regions. Power connections delayed. Capacity constrained. Grids already full.

    Water often gets mentioned — sometimes unfairly, because many data centres operate in closed-loop systems rather than constantly consuming new supply — but it still forms part of a broader environmental footprint that can’t be ignored.

    You can’t scale AI workloads indefinitely if the electricity and supporting infrastructure simply aren’t there. And while long-term solutions exist — nuclear, renewables, grid modernisation — they don’t move at the speed of venture capital or quarterly earnings calls.

    The problems AI can’t solve

    There are other headwinds too.

    Regulation is tightening, not loosening. Data quality remains a mess. Hallucinations are still a thing, however politely we rename them. The cost of inference hasn’t fallen as fast as many hoped. And for most organisations, the hardest problems are still boring ones: messy processes, poor data, unclear ownership, and a lack of change management.

    AI doesn’t fix those. It amplifies them.

    A more realistic 2026

    So no, I don’t think (generative) AI is “going away”. That would be daft. But I do think 2026 might be the year when generative AI stops being treated as the inevitable centrepiece of every tech strategy.

    Less breathless hype. Fewer moonshots. More realism.

And perhaps, finally, a shift from “how impressive is this model?” to “what problem does this actually solve, at a cost we can justify, in a world with finite resources?”

    I’m happy to be wrong. But if I am, someone needs to explain where the money, power, and patience are all coming from — and for how long.

    Featured image: created by ChatGPT.

  • OpenAI Atlas and the blurred line between search and synthesis

    OpenAI Atlas and the blurred line between search and synthesis

    OpenAI’s new Atlas browser has certainly got people talking.

    Some are excited — calling it a “Google killer” and a glimpse of how we’ll all navigate the web in future. Others are alarmed — pointing to privacy concerns, data collection prompts, and the idea of handing over browsing history and passwords to an AI company.

    Jason Grant described his experience as “a giant dark pattern.” Matthew Dunn was more balanced — impressed by the features, but quick to warn businesses off using it. He’s right: if you wouldn’t paste confidential data into ChatGPT, you probably shouldn’t browse the company intranet through Atlas either.

    Search vs. synthesis

    When people say Atlas will replace Google, they’re missing the point. It’s not search in the traditional sense.

    A search engine indexes existing content and returns links that might answer your question. Atlas — and systems like it — go a step further. They synthesise an answer, combining what’s on the web with what’s in your conversation and what they’ve “seen” before.

    As Data Science Dojo explains, search engines are designed to find information that already exists, while synthesis engines are designed to create new information.

    Or, as Vincent Hunt neatly puts it: “Search gives you links. Synthesis gives you insight.”

    That shift sounds subtle, but it changes everything: how we ask questions, how we evaluate truth, and how much we trust the output.

    As I said in my recent talk on AI Transformation at the Bletchley AI User Group, “Generative AI is not a search engine. It doesn’t retrieve facts. It generates language based on probabilities.” Google doesn’t know the truth either — it just gives you the most common answer to your question — but AI goes a step further. It merges, rewrites and repackages information. That can be powerful, but it’s also risky. It’s why I believe the AI-generated results that many search engines now return as default are inferior to traditional results, based on actual information sources.
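To make that last point concrete, here’s a deliberately over-simplified sketch of what “generates language based on probabilities” means. The prompt and the probabilities are entirely made up for illustration — real models work over tokens and billions of parameters — but the principle is the same: the model picks a likely continuation, it doesn’t look up a verified fact.

```javascript
// Toy illustration: a language model chooses the next word by probability.
// These probabilities are invented for the example, not from any real model.
const nextWordProbs = {
  "the capital of France is": { "Paris": 0.92, "Lyon": 0.05, "beautiful": 0.03 },
};

function mostLikelyNext(prompt) {
  const probs = nextWordProbs[prompt] || {};
  // Return the highest-probability continuation -- "the most common answer",
  // not a retrieved or verified fact.
  return Object.entries(probs).sort((a, b) => b[1] - a[1])[0]?.[0];
}

console.log(mostLikelyNext("the capital of France is")); // "Paris"
```

A confident answer, then — but confidence here is statistics, not knowledge, which is exactly why synthesis needs more scrutiny than search.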

    Without strong governance, AI may be repurposing outdated content or drawing on biased data. Transparency matters — because trust is the real currency of AI adoption.

    Why Atlas matters

    In OpenAI’s announcement, Atlas is described as “bringing ChatGPT anywhere across the web — helping you in the window right where you are.”

    It’s not just a search bar. It can summarise pages, compare options, fill out forms, or even complete tasks within websites. That’s a very different paradigm — one where the browser becomes a workspace, and the assistant becomes a collaborator.

    A step towards agentic AI?

    So, is Atlas really agentic? In part, yes.

    Agentic AI describes systems that can act rather than just answer. They plan, execute and adapt — working on your behalf, not just waiting for your next prompt.

    OpenAI’s own notes mention an agent mode that can “help you book reservations or edit documents you’re working on,” as reported by The Verge.

    Others, like Practical Ecommerce, describe Atlas as “a push into agentic browsing — where the browser is now an AI agent too.”

    It’s not full autonomy yet — more like assisted agency — but it’s a clear step in that direction.

    Why it still needs caution

    As exciting as it sounds, Atlas isn’t designed for enterprise use. It raises valid concerns about data privacy, security, and trust. You wouldn’t give a work browser access to sensitive credentials, and the same logic applies here.

    As Matthew Dunn notes, ChatGPT “produces better output than Copilot, but with less security and privacy.” That’s a fair trade-off for some users, but not for organisations handling confidential information.

    So, by all means, explore it — but do so with your eyes open.

And yes, I’ll still give it a try (update: I decided not to install it after all)

    For all the justified concerns about privacy and data handling, I’ll still give Atlas a try. Even though I have Copilot at work, I pay for ChatGPT Pro for activities that are not directly related to my confidential work.

    Atlas might extend that usefulness into how I browse, not just how I prompt. The key, as ever, is knowing what data you’re sharing — and making that a conscious choice, not an accidental one.

    [Updated 24/10/2025: After writing and publishing this post, I decided not to install Atlas. There are a lot of security concerns about the way the browser stores local data, which may easily be exploited. Nevertheless, both OpenAI Atlas and Perplexity Comet are interesting developments, and the narrative about the differences between an AI search (synthesis) and a traditional search is still valid.]

    Featured image: created by ChatGPT.

  • Tonight’s talk at the Bletchley AI User Group, and a new AI Resources page

    Tonight’s talk at the Bletchley AI User Group, and a new AI Resources page

    Tonight, I’ll be giving a talk on AI Transformation at the Bletchley AI User Group.

    Slides

    I gave up on bit.ly QR codes/links to OneDrive* and hosted the slides on my own website. They are also embedded below:

    Alternatively, you can save my bandwidth by picking them up from my OneDrive instead!

    Feedback

    If you were at the talk, some feedback would be much appreciated, please. There’s a Microsoft Form for that!

    Resources

    I also reached a point where I was seeing more and more new AI content every day and I just… had… to… stop… adding… more… into… the… presentation. A few minutes vibe coding with ChatGPT gave me a static single page website with a search capability and a JSON-based data source. And ChatGPT even did the analysis, classification and tagging for me…

    Anyway, my new AI Resources page is here and will be updated as and when I come across new artefacts.
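For the curious, the pattern behind a page like that is very simple: a JSON array of entries, and a filter function that runs in the browser. The schema and entries below are hypothetical — a sketch of the general approach, not the actual data source:

```javascript
// Hypothetical shape of a JSON-backed resources list:
// each entry has a title, a URL, and some tags for searching.
const resources = [
  { title: "Example AI governance guide", url: "https://example.com/governance", tags: ["governance", "risk"] },
  { title: "Example LLM primer", url: "https://example.com/llm", tags: ["llm", "basics"] },
];

// Minimal client-side search: match the query against titles and tags.
function search(query) {
  const q = query.toLowerCase();
  return resources.filter(r =>
    r.title.toLowerCase().includes(q) ||
    r.tags.some(t => t.includes(q))
  );
}

console.log(search("llm").map(r => r.title)); // ["Example LLM primer"]
```

Because everything is static — one HTML page, one JSON file, no backend — it’s cheap to host and trivial to update: add an entry to the JSON and the search picks it up.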

    *What was wrong with bit.ly?

Recent changes at bit.ly mean that they:

    • No longer support custom domain names on a free account (bye-bye mwil.it); and
    • Require a paid account to redirect short links after creation

    The challenge I had was that I wanted to include a QR code for people to scan when I present the content, but that created a circular issue: I upload the slides, create a QR code, add the QR to the slides, upload the slides, the link changes… etc., etc.

    (I wouldn’t mind paying for bit.ly, except that their plans are a bit expensive. This is a free website that creates a handful of short links each month and subscription fatigue is real…)

  • When software meets steel: agentic computing in the real world

    When software meets steel: agentic computing in the real world

    I flew to Dublin last week as part of the team representing Node4 at a Microsoft Sales and Partner summit. But the event itself is not really relevant here — what struck me was the amount of robot tech I interacted with on the trip.

    At Heathrow Terminal 5, I took one of the self-driving pods that connect the business car park with the terminal. Inside, Mitie’s robot cleaning machines were gliding quietly between travellers. And in Dublin Airport, our restaurant meal was brought out by a robot waitress called Bella.

    It was only later that I realised these weren’t isolated novelties. They’re part of a pattern: we’re used to talking about agentic computing in a software sense but it also presents itself through hardware in the physical world.

    The journey begins: autonomous pods at Heathrow

    The Heathrow pods have been around for over a decade, but they still feel futuristic. You call one on demand, climb in, and it glides directly to your stop. There’s no driver, no timetable, and almost no wait. The system uses far less energy than a bus or car, and the whole thing is orchestrated by software that dispatches pods, avoids collisions and monitors usage.

    It’s a neat demonstration of automation in motion: you make a request, and a machine physically carries it out.

    Quiet efficiency: Mitie’s cleaning “cobots”

    Inside the terminal, Mitie’s autonomous cleaning robots were at work. These cobots use sensors and cameras to map the concourse, clean for hours, then return to charge before resuming their shifts. They handle repetitive tasks while human staff focus on the harder jobs.

    You could easily miss them — and that’s the point. They’re designed to blend in. The building, in a sense, is starting to help maintain itself.

    Meet Bella: the robot waitress

    In Dublin, things got more personal. The restaurant’s “BellaBot” rolled over with trays of food, blinking her animated eyes and purring polite phrases. The QR code was hard to scan (black text on a brass plate lacks contrast) and the ordering app didn’t work so human staff had to step in — but the experience was still surreal.

    Bella’s design deliberately humanises the machine, using expressions and voice to make diners comfortable. For me, it was a bit too much. The technology was interesting; the personality, less so. I prefer my service robots less anthropomorphised.

    This tension — between automation and human comfort — is one of the trickiest design challenges of our time.

    A pattern emerges

    Taken together, the pods, cleaning cobots and BellaBot reveal different layers of the same trend:

    • Mobility agents like the Heathrow pods move people and goods.
    • Maintenance agents like Mitie’s cobots quietly maintain infrastructure.
    • Service agents like BellaBot interact directly with us.

    Each one extends software intelligence into the physical world. We’re no longer just automating data; we’re automating action.

    And none of them works completely alone. The pods are overseen by a control centre. The cobots have human supervisors. Bella needs a human backup when the tech fails. This is automation with a safety net — hybrid systems that rely on graceful human fallback.

    From airports to high streets

    You don’t have to go through Heathrow or Dublin to see the same shift happening.

    Closer to home, in Milton Keynes and Northampton (as well as in other towns and cities across the UK and more widely), small white Starship robots deliver groceries and takeaway food along pavements. They trundle quietly across zebra crossings, avoiding pedestrians and pets, using cameras and sensors to navigate. A smartphone app summons them; another unlocks the lid when your order arrives.

    Like the airport pods, they make autonomy feel normal. Children wave to them. People barely notice them anymore. The line between software, service and physical action is blurring fast.

    The thin end of the wedge

    These examples show how automation is creeping into daily life — not replacing humans outright, but augmenting us.

    The challenge now isn’t capability; it’s reliability. Systems like Bella’s ordering app work brilliantly until they don’t. What matters most is how smoothly they fail and how easily humans can step back in.

    For now, that balance still needs work. But it’s clear where things are heading. The real frontier of AI isn’t in chatbots or copilots — it’s in physical agents that move, clean, deliver and serve. It’s software made tangible.

    And while Bella’s blinking eyes may have been a step too far for me, it’s hard not to admire the direction of travel. The future isn’t just digital. It’s autonomous, electric, slightly quirky – and already waiting for you in the car park.

    Featured image: created by ChatGPT.

  • Monthly retrospective: May 2025

    Monthly retrospective: May 2025

    I’ve been struggling to post retrospectives this year – they are pretty time consuming to write. But, you may have noticed the volume of content on the blog increasing lately. That’s because I finally have a workflow with ChatGPT prompts that help me draft content quickly, in my own style. (I even subscribe to ChatGPT now, and regular readers will know how I try to keep my subscription count down.) Don’t worry – it’s still human-edited (and there are parts of the web that ChatGPT can’t read – like my LinkedIn, Instagram and even parts of this blog) so it should still be authentic. It’s just less time-consuming to write – and hopefully better for you to read.

    On the blog…

    Home Assistant tinkering (again)

    I’ve been continuing to fiddle with my smart home setup. This month’s project was replacing the ageing (and now unsupported) Volvo On Call integration in Home Assistant with the much better maintained HA Volvo Cars HACS integration. It works brilliantly – once you’ve jumped through the hoops to register for an API key via Volvo’s developer portal.

    And no, that doesn’t mean I can now summon my car like KITT in Knight Rider – but I can check I locked it up and warm it up remotely. Which is almost as good. (As an aside, I saw KITT last month at the DTX conference in Manchester.)

    Software-defined vehicles

On the subject of cars, I’ve been reflecting on how much modern cars depend on software – regardless of whether they’re petrol, diesel or electric. The EV vs. ICE debate often centres on simplicity and mechanics (fewer moving parts in an EV), but from my experience, the real pain points lie in the digital layer.

Take my own car (a Volvo V60, 2019 model year). Mechanically it’s fine and it’s an absolute luxury compared with the older cars that my wife and sons drive, but I’ve seen:

    • The digital dashboard reboot mid-drive
    • Apple CarPlay refusing to connect unless I “reboot” the vehicle
    • Road sign recognition systems confidently misreading speed limits

    Right now, it’s back at the body shop (at their cost, thankfully) for corrosion issues on a supposedly premium marque. My next car will likely be electric – but it won’t be the drivetrain that convinces me. It’ll be the software experience. Or, more realistically, the lack of bad software. Though, based on Jonathan Phillips’ experience, new car software contains alarming typos in the UI, which indicates a lack of testing…

    Thinking about the impact of generative AI

    This update isn’t meant to be about AI – but it seems it is – because it’s become such a big part of my digital life now. And, increasingly, it’s something I spend more time discussing with my clients.

    AI isn’t new. We’ve had robotic process automation (RPA), machine learning, data science and advanced analytics for years. I even studied neural networks at Poly’ in the early 1990s. But it’s generative AI that’s caught everyone’s imagination – and their budgets.

    In Episode 239 of the WB-40 podcast (AI Leadership), I listened to Matt Cockbill talk about how it’s prompting a useful shift in how we think about technology. Less about “use cases” and more about “value cases” – how tech can improve outcomes, streamline services, and actually help achieve what the organisation set out to do.

    The rush to digitise during COVID saw huge amounts of spending – enabling remote working or entrenching what was already there (hello, VDI). But now it feels like the purse strings are tightening, and some of that “why are we doing this again?” thinking is creeping back in. Just buying licences and rolling out tools is easy. Changing the way people work and deliver value? That’s the real work.

    Meal planning… with a side of AI

    I’ve also been experimenting with creating an AI-powered food coach to help me figure out what to eat, plan ahead, and avoid living off chocolate Hobnobs and toasted pitta. Still early days – but the idea of using an assistant to help nudge me towards healthier, simpler food is growing on me.

    Reading: The Midnight Library

I don’t read much fiction – I’m more likely to be found trawling through a magazine or scrolling on my phone – but Matt Haig’s “The Midnight Library” really got me. OK, so technically, I didn’t read it – it was an impulse purchase to use some credits before cancelling my Audible account – but it was a great listen. Beautifully read by Carey Mulligan, it’s one of those rare books that manages to be both dark and uplifting. Some reviews suggest that not everyone feels the same way – and my reading it at a time of grief and loss may have had an impact – but I found it to be one of my best reads in a long time.

    Without spoiling anything, the idea of a liminal space between life and death – where you can explore the infinite versions of yourself – is quietly brilliant. Highly recommended. So much so that I bought another copy (dead tree edition) for my wife.

    On LinkedIn this month…

    It’s been a lively month over on LinkedIn, with my posts ranging from AI hype to the quirks of Gen-Z slang (and a fair dose of Node4 promotion). These are just a few of the highlights – follow me to get the full experience:

    • Jony and Sam’s mysterious new venture
      I waded into the announcement from Jony Ive and Sam Altman with, let’s say, a healthy dose of scepticism. A $6.5bn “something” was teased with a bland video and a promo image that felt more 80s album cover than product launch. It may be big. But right now? Vapourware.
• Is the em dash trolling us?
  I chipped in on the debate about AI-written content and the apparent overuse of em dashes (—) – often flagged as an “AI tell”, usually by people who either a) don’t understand English grammar or b) don’t realise that’s where LLMs learned to write. (I am aware that I incorrectly use en dashes in these posts, because people seem to find them less “offensive”.) But what if the em dash is trolling us?
    • Skibidi-bibidi-what-now?
      One of the lighter moments came with a post about Gen-Z/Gen-Alpha slang. As a Gen-Xer with young adult kids, I found a “translator” of sorts – and it triggered a few conversations about how language evolves. No promises I’ll be dropping “rizz” into meetings just yet. Have a look.
    • Politeness and prompting
      Following a pub chat with Phil Kermeen, I shared a few thoughts on whether being polite to AI makes a difference. TL;DR: it does. Here’s the post.
    • Mid-market momentum
      Finally, there have been lots of posts around the Node4 2025 Mid-Market Report. It was a big effort from a lot of people, including me, and I’m really proud of what we’ve produced. It’s packed with insights, based on bespoke research of over 600 IT and Business leaders.

    Photos

    A few snaps from my Insta’ feed…

    https://www.instagram.com/markwilsonuk/p/DJr5Ui8N94u

    For more updates…

    That’s all for now. I probably missed a few things, but it’s a decent summary of what I’ve been up to at home and at work. I no longer use X, but follow me on LinkedIn (professional), Instagram (visual) and this blog for more updates – depending on which content you like best. Maybe even all three!

    Next month…

    A trip to Hamburg (to the world’s largest model railway); ramping up the work on Node4’s future vision; and hopefully I’ll fill in some of the gaps between January and May’s retrospectives!

    Featured image: created by ChatGPT

  • Who gets paid when the machines take over?

    Who gets paid when the machines take over?

    Yesterday evening, I was at the Bletchley AI User Group in Milton Keynes. One of the talks was from Stephanie Stasey (/in/missai) (aka Miss AI), titled “Gen AI vs. white collar workers and trad wives – building a robot to make my bed”.

    It was delivered as a monologue – which sounds negative, but really isn’t. In fact, it was engaging, sharp, and packed with food for thought. Stephanie brought a fresh perspective to a topic we’re all thinking about: how AI is reshaping both the world of work and the way we live at home.

    The labour that goes unnoticed (but not undone)

    One part of the talk touched on “trad wives” – not a term I was especially familiar with, but the theme resonated.

If you’d asked my wife and me in our 20s how we’d divide up household tasks, we might have offered up a fair and balanced plan. But real life doesn’t always match the theory.

    These days, we both work part-time – partly because unpaid labour (childcare, cooking, washing, cleaning, all the life admin) still needs doing. And there don’t seem to be enough hours when the laptop is closed.

    The system isn’t broken – it’s working exactly as designed

    The point I’ve been turning over for a while is this: it feels like we’re on the edge of something big.

    We could be on the brink of a fundamental shift in how we think about work – if those in power wanted to make radical changes. I’ll avoid a full political detour, though I’m disheartened by the rise of the right and how often “ordinary people” are reminded of their place. (My youngest son calls me a champagne socialist – perhaps not entirely unfairly.)

    Still, AI presents us with a rare opportunity to do things differently.

    But instead of rethinking how work and value are distributed, we’re told to brace for disruption. The current narrative is that AI is coming for our jobs. Or a variation on that theme: “Don’t worry,” we’re told, “it won’t take your job – but someone using AI might”. That line’s often repeated. It’s catchy. But it’s also glib – and not always true.

    I’m close enough to retirement that the disruption shouldn’t hit me too hard. But for my children’s generation? The impact could be massive.

    What if we taxed the agents?

    So here’s a thought: what if we taxed the AI agents?

    If a business creates an agent to do the work a person would normally do – or could reasonably do – then that agent is taxed, like a human worker would be. It’s still efficient, still scalable, but the benefits are shared.

    And, how would we live, if the jobs go away? That’s where Universal Basic Income (UBI) comes in (funded by taxes on agents, as well as on human effort).

    Put simply, UBI provides everyone with enough to cover their basic needs – no strings attached. People can still work (and many will). For extra income. For purpose. For contribution. It just doesn’t have to be 9-to-5, five days a week. It could be four. Or two. The work would evolve, but people wouldn’t be left behind. It also means that the current, complex, and often unjust benefits system could be removed (perhaps with some exceptions, but certainly for the majority).

    What could possibly go right?

    So yes, the conversation around AI is full of what could go wrong. But what if we focused on what could go right?

    We’ve got a window here – a rare one – to rethink work, contribution, and value. But we need imagination. And leadership. And a willingness to ask who benefits when the machines clock in.

    Further reading on UBI

    If you’re interested in UBI and how it might work in practice, here are some useful resources:

    Featured image: author’s own.

  • Postmortem: deploying my static website with Azure Static Web Apps (eventually)

    Postmortem: deploying my static website with Azure Static Web Apps (eventually)

    This all started out as a bit of vibe coding* in ChatGPT…

    Yesterday, whilst walking the dog, I was listening to the latest episode of WB-40. Something Julia Bellis said gave me an idea for a simple custom GPT to help people (well, mostly me) eat better. ChatGPT helped me to create a custom GPT – which we named The Real Food Coach.

With the GPT created, I asked ChatGPT for something else: help me build a one-page website to link to it. In minutes I had something presentable: HTML, styling, fonts, icons – all generated for me. Pretty slick.

    When it came to hosting, ChatGPT suggested something I hadn’t used previously: Azure Static Web Apps, rather than the Azure Storage Account route I’d used for hosting in the past. It sounded modern and neat – automatic GitHub integration, free SSL, global CDN. So I followed its advice.

    ChatGPT was great. Until it wasn’t.

    A quick win turns into a slow burn

    The proof of concept came together quickly – code committed to GitHub, site created in Azure, workflow generated. All looked good. But the deploys failed. Then failed again. And again.

    What should have taken 10 minutes quickly spiralled into a full evening of troubleshooting.

    The critical confusion

    The issue came down to two settings that look similar – but aren’t:

    • Deployment source – where your code lives (e.g. GitHub)
    • Deployment authorisation policy – how Azure authenticates deployments (either via GitHub OIDC or a manually managed deployment token)

    ChatGPT had told me to use GitHub for both. That was the mistake.

    Using GitHub as the authorisation method relies on Azure injecting a secret (AZURE_STATIC_WEB_APPS_API_TOKEN) into GitHub, but that never happened. I tried regenerating it, reauthorising GitHub, even manually wiring in deployment tokens – all of which conflicted with the setup Azure had created.

    The result? Deploys that failed with:

    “No matching Static Web App was found or the API key was invalid”

    Eventually, after several rounds of broken builds, missing secrets, and deleting and recreating the app, I questioned the advice ChatGPT had originally given. Sure enough, it confirmed that yes – the authorisation policy should have been set to Deployment Token, not GitHub.

    Thanks, ChatGPT. Bit late.

    The right setup

    Once I created the app with GitHub as the deployment source and Deployment Token as the authorisation policy, everything clicked.

    I copied the token from Azure, added it to GitHub secrets, updated the workflow to remove unnecessary build steps, and redeployed.
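    For anyone following along, the working workflow ended up looking roughly like this. This is a sketch rather than my exact file – the action versions and paths are assumptions for a plain HTML site with no build step:

    ```yaml
    # Sketch of a GitHub Actions workflow for Azure Static Web Apps,
    # assuming a prebuilt static site in the repository root.
    name: Deploy to Azure Static Web Apps

    on:
      push:
        branches: [main]

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: Azure/static-web-apps-deploy@v1
            with:
              # The deployment token copied from the Azure portal,
              # stored as a GitHub repository secret
              azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
              action: upload
              app_location: "/"     # plain HTML/CSS – nothing to compile
              skip_app_build: true  # removes the unnecessary build step
    ```

    The key point is that the token comes from a secret you create yourself – not one that Azure injects for you.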

    Success.

    Custom domain and tidy-up

    Pointing my subdomain to the Static Web App was easy enough. I added the TXT record for domain verification, removed it once verified, and then added the CNAME. SSL was provisioned automatically by Azure.
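    For reference, the DNS changes amount to just two records – shown here in zone-file form, with placeholder values (the verification code and target hostname come from the Azure portal):

    ```
    ; Hypothetical zone entries for a subdomain pointing at a Static Web App
    realfood  IN  TXT    "<verification-code-from-azure>"   ; temporary – removed once verified
    realfood  IN  CNAME  <your-app>.azurestaticapps.net.
    ```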

    I now have a clean, simple landing page pointing to my custom GPT – fast to load, easy to maintain, and served via CDN with HTTPS.

    Lessons learned

    • ChatGPT can take you far, fast – but it can also give you confidently wrong advice. Check the docs, and question your co-pilot.
    • Azure Static Web Apps is fantastic for a simple website – I’m even using the free tier for this.
    • Authorisation and deployment are not the same thing. Get them wrong, and everything breaks – even if the rest looks correct.
    • Start again sooner – sometimes it’s faster to delete and recreate than to debug a half-working config.
    • DNS setup was smooth, but your DNS provider might need you to delete the TXT record after verification before you can create a CNAME.

    Where is this website?

    You can check out The Real Food Coach at realfood.markwilson.co.uk – and chat with the GPT at chat.openai.com/g/g-682dea4039b08191ad13050d0df8882f-the-real-food-coach.

    *Joe Tomkinson told me that’s what it is. I’d heard of vibe coding but I thought it was something real developers do. Turns out it’s more likely to be numpties like me…

  • Monthly retrospective: January 2025

    Monthly retrospective: January 2025

    Last year I tried a thing – another attempt at weeknotes. Weeknotes became monthly retrospectives. Monthly retrospectives sometimes became every two months… and then they dried up completely last summer. I’m sorry. I was busy and, to be honest, this blog is not as important to me as it once was.

    But then, an anonymous commenter said that they miss them and asked me to fill the gap to the end of 2024. That might happen (or it might join the great list of unwritten blog posts in the sky), but let’s have another go at the present. So, 31 January, 2025. Monthly retrospective…

    At work

    Things have really stepped up a gear at work. Last year I started to work on a future vision around which the Office of the CTO could structure its “thought leadership” content. Some important people supported it and I found myself co-presenting to our executive board. The next steps will remain confidential, but it’s no bad thing for me. And, the follow-on work has given me a lot of exposure to some of the marketing activities – my last fortnight has been full of market analysis and ideal client profiles.

    But the last fortnight was not just those things. I had the harebrained idea that, as productivity is one of the outcomes we seek for our clients, maybe we should “do something for National Productivity Week”. After writing a series of blog posts (see below), and a fun day recording video content with our brand team, it feels like a one-man social media takeover. In fact, we had so much content that some of it will now have to go out next week. But that’s OK – productivity is not just for one week of the year. These are the posts that are live on the Node4 website today:

    And the last post, next week, will be about building sustainable productivity approaches.

    There are also a couple of videos up on LinkedIn:

    And, earlier in the month (actually, it sneaked out on YouTube before Christmas but I asked for it to be pulled for an edit), there was this one. Not my best work… but it did lead to the purchase of a teleprompter, which has made later videos so much easier!

    Learning

    Also on the work front, this month I completed my ILM Level 5 Award in Leadership and Management. Node4 runs this as part of a 7-month programme of workshops, with two coursework assignments that relate to four of the workshops. Over the last 7 months, I’ve covered:

    • Developing your personal leadership brand.
    • Inclusive leadership and motivation skills.
    • Managing and implementing strategic change.
    • Developing a high-performance team culture.
    • Manager as a coach.
    • Personal impact and emotional intelligence.
    • High impact presentations.

    At home

    Home Automation

    I bought myself a Shelly temperature and humidity monitor for the Man Cave. It’s Home Assistant compatible, of course, so it lets me use cheap overnight energy to stop the cabin from getting too cold/damp.
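    As a sketch of the sort of automation this enables – the entity names and cheap-rate window here are assumptions, not my actual configuration:

    ```yaml
    # Hypothetical Home Assistant automation: run a heater plug overnight
    # (cheap tariff) when the Shelly sensor reports a low temperature.
    automation:
      - alias: "Man Cave frost protection"
        trigger:
          - platform: numeric_state
            entity_id: sensor.man_cave_temperature   # assumed Shelly sensor entity
            below: 8
        condition:
          - condition: time
            after: "00:30"
            before: "04:30"                          # assumed cheap-rate window
        action:
          - service: switch.turn_on
            target:
              entity_id: switch.man_cave_heater      # assumed smart plug entity
    ```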

    Also on the home automation front, I picked up some cheap Tapo P100 smart plugs. Like my no-name Chinese ESP32-based plugs, they have a better form factor than my older Kasa HS100/110 plugs, so they don’t take space from the adjacent socket. But they lack any kind of energy usage reporting, so I should have bought a pack of the slightly more expensive P110 models instead. I also struggled to add them to Home Assistant: they were recognised but wouldn’t authenticate until I reset my TP-Link password – even resetting it to the same password seemed to be the workaround!

    Getting away from it all

    Aside from the tech, Mrs Wilson and I got away to London for a weekend, to celebrate a friend’s birthday. We were almost blown away by the tail of Storm Éowyn at Primrose Hill viewpoint but had fun (I’d never been before, but it’s in so many films!).

    Tomorrow, I’m off to France for the UCI Cyclocross World Championships. Not competing of course (and disappointed that British Cycling is not sending a Women’s team or an U23 Men’s team). Just spectating. And probably consuming quite a lot of beer. And frites.

    Writing

    There have been some personal blog posts this month too:

    In pictures

    Some snaps from my Instagram:

  • A few thoughts on the UK Government’s AI announcement

    A few thoughts on the UK Government’s AI announcement

    Most of the text in this post previously appeared on my LinkedIn feed. I thought it should have been here…

    Sometimes, I read something on LinkedIn and repost or comment, before realising I’ve pretty much written an entire blog post. On my phone. Twice, because I navigated away and lost the first attempt. Maybe I should have put it here, but it probably gets seen by more people on LinkedIn. Still, I own this platform, so I’m putting it up for posterity.

    The post in question was one from the BBC’s Technology Editor, Zoe Kleinman. Zoe had posted insights about the UK Prime Minister’s “bold and ambitious plans to support the UK’s AI sector”.

    Zoe’s post and articles are well worth a read, but I wanted to add some more:

    “[…] I can see why the UK wants to position itself as an innovative place for growth, without being (quite as) reliant on US tech behemoths, but most of us have yet to establish what we want to use AI for.

    Sure, “AI” is the perceived answer to everything at the moment – and there are some very large companies with very deep pockets pouring billions into “AI” – but it’s an arms race. “Big tech” hasn’t worked out how to make money from its AI investments yet. The tech giants just want to make sure they have a big slice of that pie when we do finally get there.

    Putting aside the significant environmental and social challenges presented by AI (as mentioned in Zoe’s post […]), “we” (our companies and our countries) haven’t got a solid business case. We just know we can’t afford to be left behind…

    We’ve used some AI technologies in a variety of forms for years (machine learning, for example) – and the recent advances in generative AI (genAI) have democratised access to AI assistants and opened up a huge opportunity. But genAI is just one type of AI, and we don’t fully understand the large language models that underpin it.

    One thing that sticks in my mind is something I heard on a recent podcast, when Johannes Kleske commented something along the lines of “when it’s in the future, it’s AI. Once we have worked out what to do with it, it’s just software.”

    More on the UK Prime Minister’s AI announcement

    Artificial Intelligence: Plan to ‘unleash AI’ across UK revealed [BBC News]

  • Microsoft Ignite 2024 on a page

    Microsoft Ignite 2024 on a page

    You probably noticed, but Microsoft held its Ignite conference in Chicago last week. As is normal now, there’s a “Book of News” for all the major announcements and the keynotes are available for online review. But there’s an awful lot to sort through. Luckily, CNET created a 15-minute summary of Satya Nadella’s keynote:

    Major announcements from Ignite 2024

    Last year, I wrote about how it was clear that Microsoft is all about Artificial Intelligence (AI) and this year is no different. The rest of this post focuses on the main announcements with a little bit of analysis from yours truly on what the implications might be.

    • Investing in security, particularly around Purview – Data governance is of central importance in the age of AI. Microsoft has announced updates to prevent oversharing, risky use of AI, and misuse of protected materials. With one of the major concerns being accidental access to badly-secured information, this will be an important development for those who make use of it. More: https://aka.ms/Ignite2024Security/

    • Zero Day Quest – A new hacking event with $4m in rewards. Bound to grab headlines! More: https://aka.ms/ZeroDayQuest

    • Copilot as the UI for AI – If there’s one thing to take away from Ignite, it’s that Microsoft sees Copilot as the UI for AI (it becomes the organising layer for work and how it gets done):

    1. Every employee will have a Copilot that knows them and their work – enhancing productivity and saving time.
    2. There will be agents to automate business processes.
    3. And the IT department has a control system to manage, secure and measure the impact of Copilot.

    • Copilot Actions – Intended to reduce the time spent on repetitive everyday tasks, these were described as “Outlook Rules for the age of AI” (but for the entire Microsoft 365 ecosystem). I’m sceptical about these but willing to be convinced. Let’s see how well they work in practice. More: https://aka.ms/CopilotActions

    • Copilot Agents – If 2023-4 were about generative AI, “agentic” computing is the term for 2025. There will be agents within the context of a team – teammates scoped to specific roles. For example: a facilitator to keep meeting focus in Teams and manage follow-up/action items; a Project Management Agent in Planner, to create a plan and oversee task assignments/content creation; self-service agents to provide information, augmenting HR and IT departments to answer questions and complete tasks; and a SharePoint Agent per site, providing instant access to real-time information. Organisations can create their own agents using Copilot Studio – and the aim is that it should be as easy to create an agent as it is to create a document. More: https://aka.ms/AgentsInM365

    • Copilot Analytics – Answering criticism about the cost of licensing Copilot, Microsoft is providing analytics to correlate usage with business metrics. Organisations will be able to tune their Copilot usage to business KPIs and show how Copilot usage is translating into business outcomes. More: https://aka.ms/CopilotAnalytics

    • Mobile Application Management on Windows 365 – Microsoft is clearly keen to push its “cloud PC” concept – Windows 365 – with new applications so that users can access a secure computing environment from iOS and Android devices. Having spent years working to bring clients away from expensive thin client infrastructure and back to properly managed “thick clients”, I’m not convinced about the “cloud PC”, but maybe I’m just an old man shouting at the clouds… More: https://aka.ms/WindowsAppAndroid

    • Windows 365 Link – A simple, secure, purpose-built access device (aka a thin PC). It’s admin-less and password-less, with security configurations enabled by default that cannot be turned off. The aim is that users can connect directly to their cloud PC with no data left locally (available from April 2025). If you’re going to invest in this approach, it could be a useful device – but it’s not a Microsoft version of a Mac Mini – it’s all about the cloud. More: https://aka.ms/Windows365Link

    • Windows Resiliency Initiative – Does anyone remember “Trustworthy Computing”? Well, the Windows Resiliency Initiative is the latest attempt to make Windows more secure and reliable. It includes new features like Windows Hotpatch to apply critical updates without a restart across an entire IT estate. More: https://aka.ms/WinWithSecurity

    • Azure Local – A rebranding and expansion of Azure Stack to bring Azure Arc to the edge. Organisations can run mission-critical workloads in distributed locations. More: https://aka.ms/AzureLocal

    • Azure Integrated HSM – Microsoft’s first in-house security chip hardens key management without impacting performance. It will be part of every new server deployed on Azure starting next year. More: https://aka.ms/AzureIntegratedHSM

    • Azure Boost – Microsoft’s first in-house data processing unit (DPU) is designed to accelerate data-centric workloads. It can run cloud storage workloads with 3x less power and 4x the performance. More: https://aka.ms/AzureBoostDPU

    • Preview NVIDIA Blackwell AI infrastructure on Azure – By this point, even I’m yawning, but this is a fantastically fast computing environment for optimised AI training workloads. It’s not really something that most of us will use. More: https://aka.ms/NDGB200v6

    • Azure HBv5 – Co-engineered with AMD, this was described as a new standard for high-performance computing and cited as being up to 8 times faster than any other cloud VM. More: https://aka.ms/AzureHBv5

    • Fabric – SQL Server is coming natively to Fabric in the form of Microsoft Fabric Databases. The aim here is to simplify operational databases as Fabric already did for analytical requirements, providing an enterprise data platform that serves all use cases and makes use of open source formats in the Fabric OneLake data lake. I have to admit, it does sound very interesting, but there will undoubtedly be some nuances that I’ll leave to my data-focused colleagues. More: https://aka.ms/Fabric

    • Azure AI Foundry – Described as a “first class application server for the AI age”, unifying all models, tooling, safety and monitoring into a single experience, integrated with development tools as a standalone SDK and a portal. There are 1800 models in the catalogue for model customisation and experimentation. More: https://aka.ms/MaaSExperimentation and https://aka.ms/CustomizationCollaborations

    • Azure AI Agent Service – Build, deploy and scale AI apps to automate business processes. Compared with Copilot Studio’s graphical approach, this provides a code-first approach for developers to create agents, grounded in data, wherever it is. More: https://ai.azure.com/

    • Other AI announcements – There will be AI reports and other management capabilities in Foundry, including evaluation of models. Safety is important too, with tools to build secure AI, including Prompt Shields to detect/block manipulation of outputs and risk/safety evaluations for image content.

    • Quantum Computing – This may be the buzzword that replaces AI in the coming years. Quantum is undoubtedly significant but it’s still highly experimental. Nevertheless, Microsoft is making progress in the quantum arms race, with the “world’s most powerful quantum computer” with 24 logical qubits, double the previous record. More: https://aka.ms/AQIgniteBlog

    Featured image: screenshots from the Microsoft Ignite keynote stream, under fair use for copyright purposes.