Tag: Generative AI

  • My 2026 anti-prediction: we won’t see an endless rise in generative AI

    It’s the start of the year and everyone is writing their predictions. I’ve written a few for Node4 that will make their way onto the company socials — and in industry press, no doubt — but here’s one I’m publishing for myself:

    I think 2026 will be the year when tech companies quietly start to scale back on generative AI.

    Mark Wilson, 6 January 2026

    Over Christmas I was talking to a family member who, like many people, is convinced that AI — by which they really mean chatbots, copilots and agents — will just keep becoming more dominant.

    I’m not so sure. And I’m comfortable putting that on record now. But I don’t mean it’s all going away. Let me explain…

    Where the money comes from

    The biggest tech firms in the world are still pouring tens of billions into AI infrastructure. GPUs, custom silicon, data centres, power contracts, talent. That money has to come from somewhere. The uncomfortable truth is that many of the high-profile layoffs we’ve seen over the last two years aren’t about “AI replacing people”. They’re about reducing operating costs to fund AI investment. Humans out. CapEx in.

    That works for a while. But shareholders don’t accept “trust us, it’ll pay off eventually” indefinitely. At some point, the question becomes very simple: where is the sustainable revenue that justifies this level of spend?

    A land-grab without a business model

    Every hyperscaler and major platform vendor has invested as if generative AI is a winner-takes-most market. Own the models. Own the data. Own the developer ecosystem. Own the distribution. The logic is clear: if a viable business model emerges, they want the biggest possible slice of the pie.

    The problem is that the pie still hasn’t really materialised. We have impressive demos, widespread experimentation, and plenty of productivity anecdotes — but not many clear, repeatable use cases that consistently deliver real returns. Right now, it feels less like a gold rush and more like a game of chicken. Everyone keeps spending because they’re terrified of being the first to blink.

    Eventually, someone will.

    Slowing progress in the models themselves

    Another reason I’m sceptical is the pace of improvement itself. A lot of early excitement was based on the idea that bigger models would always mean better models. But that assumption is starting to wobble. Increasing amounts of AI-generated content are now fed back into new training datasets. Models learning from the outputs of other models.

    There is growing evidence that this can actually make them worse over time — less diverse, less accurate, more prone to error. Researchers call this model collapse. Whatever the name, it’s a reminder that data quality is finite, and simply scaling doesn’t guarantee progress.
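
    If you want to see the underlying statistical effect in miniature, here is a toy Python sketch (my own illustration, not anything from the research papers): a Gaussian is repeatedly refitted to samples of its own previous output, standing in for a model trained on model-generated data.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(0.0, 1.0, size=50)   # generation 0: "real" data, std = 1

for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()     # "train" a model on the current data
    data = rng.normal(mu, sigma, size=50)   # the next dataset is purely synthetic
    if generation % 20 == 0:
        print(f"generation {generation:3d}: std = {sigma:.3f}")
```

    Run it a few times with different seeds and the standard deviation tends to drift downwards: diversity quietly leaks out of the system, generation by generation. A cartoon version of the problem, but it makes the point.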

    A noticeable shift in tone

    I also find it interesting how the tone has shifted. Not just from AI replacing humans to AI augmenting humans, but something broader. And I don’t mean the AI evangelists vs. the AI doomsayers back-and-forth that I see on LinkedIn every day either…

    A year or two ago, large language models were positioned as the future of AI. The centre of gravity. The thing everything else would orbit around. Listen carefully now, and the message from tech leaders is more cautious. LLMs are still important, but they’re increasingly framed as one tool among many.

    There’s more talk about smaller, domain-specific models. About optimisation rather than scale. About decision intelligence, automation, computer vision, edge AI, and good old-fashioned applied machine learning. In other words: AI that quietly does a job well, rather than AI that chats convincingly about it.

    That feels less like hype, and more like a course correction.

    A gradual change in direction

I don’t know whether this ends in a classic “bubble burst”. I’m a technologist, not an economist. What feels likely to me is a gradual change in direction. Investment doesn’t stop, but it becomes harder to justify. Projects get cut. Timelines stretch. Expectations reset. Some bets quietly fail.

    But there will be consequences. You can’t pour this much capital into something with limited realised outcomes and expect it to disappear without a trace.

    The natural resource crunch

    Then there’s the crucial element that should be worrying everyone: natural resources.

    Generative AI isn’t just expensive in financial terms. It’s expensive in physical ones. Energy, cooling, water, land, grid capacity. Even today’s hyperscale cloud providers are struggling in some regions. Power connections delayed. Capacity constrained. Grids already full.

    Water often gets mentioned — sometimes unfairly, because many data centres operate in closed-loop systems rather than constantly consuming new supply — but it still forms part of a broader environmental footprint that can’t be ignored.

    You can’t scale AI workloads indefinitely if the electricity and supporting infrastructure simply aren’t there. And while long-term solutions exist — nuclear, renewables, grid modernisation — they don’t move at the speed of venture capital or quarterly earnings calls.

    The problems AI can’t solve

    There are other headwinds too.

    Regulation is tightening, not loosening. Data quality remains a mess. Hallucinations are still a thing, however politely we rename them. The cost of inference hasn’t fallen as fast as many hoped. And for most organisations, the hardest problems are still boring ones: messy processes, poor data, unclear ownership, and a lack of change management.

    AI doesn’t fix those. It amplifies them.

    A more realistic 2026

    So no, I don’t think (generative) AI is “going away”. That would be daft. But I do think 2026 might be the year when generative AI stops being treated as the inevitable centrepiece of every tech strategy.

    Less breathless hype. Fewer moonshots. More realism.

    And perhaps, finally, a shift from how impressive is this model? to what problem does this actually solve, at a cost we can justify, in a world with finite resources?

    I’m happy to be wrong. But if I am, someone needs to explain where the money, power, and patience are all coming from — and for how long.

    Featured image: created by ChatGPT.

  • Andalucía remembered

    Two decades on, we came once more,
    To southern Spain, and sun-kissed shores.
    Nerja welcomed with skies so wide,
    Where sea and mountain gently collide.

    From terrace high, the blue expanse,
    Each morning caught us in a trance.
    Fresh coffee, then to the beach we’d roam,
    Before the heat would drive us home.

    Villages basked in golden light,
    The sea turned silver come the night.
    Warmth on skin, cool drinks in hand,
    We let the days unfold unplanned.

    Laughter echoed, glasses clinked,
    We paused, we smiled, we stopped to think.
    Days defined by time and place —
    Sun and family, gentle pace.

    One final day in Málaga’s hum,
    Before the holiday was done.
    Now back to clouds and colder climes,
    But held inside, those warmer times.

(A collaboration between me and ChatGPT… showing why I should stick to tech and leave the poetry to poets…)

    Featured image: author’s own.

  • Monthly retrospective: May 2025

I’ve been struggling to post retrospectives this year – they are pretty time-consuming to write. But you may have noticed the volume of content on the blog increasing lately. That’s because I finally have a workflow with ChatGPT prompts that help me draft content quickly, in my own style. (I even subscribe to ChatGPT now, and regular readers will know how I try to keep my subscription count down.) Don’t worry – it’s still human-edited (and there are parts of the web that ChatGPT can’t read – like my LinkedIn, Instagram and even parts of this blog) so it should still be authentic. It’s just less time-consuming to write – and hopefully better for you to read.

    On the blog…

    Home Assistant tinkering (again)

    I’ve been continuing to fiddle with my smart home setup. This month’s project was replacing the ageing (and now unsupported) Volvo On Call integration in Home Assistant with the much better maintained HA Volvo Cars HACS integration. It works brilliantly – once you’ve jumped through the hoops to register for an API key via Volvo’s developer portal.

    And no, that doesn’t mean I can now summon my car like KITT in Knight Rider – but I can check I locked it up and warm it up remotely. Which is almost as good. (As an aside, I saw KITT last month at the DTX conference in Manchester.)
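
    For the curious, checking the car doesn’t even need the Home Assistant app: once the integration has created its entities, it’s one call to Home Assistant’s standard REST API. A minimal Python sketch – the entity id below is hypothetical, and yours will depend on how the integration names your car:

```python
import requests

HA_URL = "http://homeassistant.local:8123"  # adjust to your instance
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created from your HA user profile
ENTITY = "lock.volvo_v60_lock"              # hypothetical entity id

resp = requests.get(
    f"{HA_URL}/api/states/{ENTITY}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["state"])  # e.g. "locked" or "unlocked"
```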

    Software-defined vehicles

On the subject of cars, I’ve been reflecting on how much modern cars depend on software – regardless of whether they’re petrol, diesel or electric. The EV vs. ICE debate often centres on simplicity and mechanics (fewer moving parts in an EV), but in my experience, the real pain points lie in the digital layer.

Take my own car (a Volvo V60, 2019 model year). Mechanically it’s fine and it’s an absolute luxury compared with the older cars that my wife and sons drive, but I’ve seen:

    • The digital dashboard reboot mid-drive
    • Apple CarPlay refusing to connect unless I “reboot” the vehicle
    • Road sign recognition systems confidently misreading speed limits

Right now, it’s back at the body shop (at their cost, thankfully) for corrosion issues on a supposedly premium marque. My next car will likely be electric – but it won’t be the drivetrain that convinces me. It’ll be the software experience. Or, more realistically, the lack of bad software. Though, based on Jonathan Phillips’ experience, some new car software ships with alarming typos in the UI – which suggests a lack of testing…

    Thinking about the impact of generative AI

    This update isn’t meant to be about AI – but it seems it is – because it’s become such a big part of my digital life now. And, increasingly, it’s something I spend more time discussing with my clients.

    AI isn’t new. We’ve had robotic process automation (RPA), machine learning, data science and advanced analytics for years. I even studied neural networks at Poly’ in the early 1990s. But it’s generative AI that’s caught everyone’s imagination – and their budgets.

    In Episode 239 of the WB-40 podcast (AI Leadership), I listened to Matt Cockbill talk about how it’s prompting a useful shift in how we think about technology. Less about “use cases” and more about “value cases” – how tech can improve outcomes, streamline services, and actually help achieve what the organisation set out to do.

    The rush to digitise during COVID saw huge amounts of spending – enabling remote working or entrenching what was already there (hello, VDI). But now it feels like the purse strings are tightening, and some of that “why are we doing this again?” thinking is creeping back in. Just buying licences and rolling out tools is easy. Changing the way people work and deliver value? That’s the real work.

    Meal planning… with a side of AI

    I’ve also been experimenting with creating an AI-powered food coach to help me figure out what to eat, plan ahead, and avoid living off chocolate Hobnobs and toasted pitta. Still early days – but the idea of using an assistant to help nudge me towards healthier, simpler food is growing on me.

    Reading: The Midnight Library

I don’t read much fiction – I’m more likely to be found trawling through a magazine or scrolling on my phone – but Matt Haig’s “The Midnight Library” really got me. OK, so technically, I didn’t read it – it was an impulse purchase to use some credits before cancelling my Audible account – but it was a great listen. Beautifully read by Carey Mulligan, it’s one of those rare books that manages to be both dark and uplifting. Some reviews suggest that not everyone feels the same way – and my reading it at a time of grief and loss may have had an impact – but I found it to be one of my best reads in a long time.

    Without spoiling anything, the idea of a liminal space between life and death – where you can explore the infinite versions of yourself – is quietly brilliant. Highly recommended. So much so that I bought another copy (dead tree edition) for my wife.

    On LinkedIn this month…

    It’s been a lively month over on LinkedIn, with my posts ranging from AI hype to the quirks of Gen-Z slang (and a fair dose of Node4 promotion). These are just a few of the highlights – follow me to get the full experience:

    • Jony and Sam’s mysterious new venture
      I waded into the announcement from Jony Ive and Sam Altman with, let’s say, a healthy dose of scepticism. A $6.5bn “something” was teased with a bland video and a promo image that felt more 80s album cover than product launch. It may be big. But right now? Vapourware.
    • Is the em dash trolling us?
I chipped in on the debate about AI-written content and the apparent overuse of em dashes (—) – often flagged as an “AI tell”, usually by people who either a) don’t understand English grammar or b) don’t know where LLMs learned to write. (I am aware that I incorrectly use en dashes in these posts, because people seem to find them less “offensive”.) But what if the em dash is trolling us?
    • Skibidi-bibidi-what-now?
      One of the lighter moments came with a post about Gen-Z/Gen-Alpha slang. As a Gen-Xer with young adult kids, I found a “translator” of sorts – and it triggered a few conversations about how language evolves. No promises I’ll be dropping “rizz” into meetings just yet. Have a look.
    • Politeness and prompting
      Following a pub chat with Phil Kermeen, I shared a few thoughts on whether being polite to AI makes a difference. TL;DR: it does. Here’s the post.
    • Mid-market momentum
Finally, there have been lots of posts around the Node4 2025 Mid-Market Report. It was a big effort from a lot of people, including me, and I’m really proud of what we’ve produced. It’s packed with insights, based on bespoke research with over 600 IT and business leaders.

    Photos

    A few snaps from my Insta’ feed…

    https://www.instagram.com/markwilsonuk/p/DJr5Ui8N94u

    For more updates…

    That’s all for now. I probably missed a few things, but it’s a decent summary of what I’ve been up to at home and at work. I no longer use X, but follow me on LinkedIn (professional), Instagram (visual) and this blog for more updates – depending on which content you like best. Maybe even all three!

    Next month…

    A trip to Hamburg (to the world’s largest model railway); ramping up the work on Node4’s future vision; and hopefully I’ll fill in some of the gaps between January and May’s retrospectives!

Featured image: created by ChatGPT.

  • Does vibe coding have a place in the world of professional development?

    I’ve been experimenting with generative AI lately – both in my day job and on personal projects – and I thought it was time to jot down some reflections. Not a deep think piece, just a few observations about how tools like Copilot and ChatGPT are starting to shape the way I work.

    In my professional life, I’ve used AI to draft meeting agendas, prepare documents, sketch out presentation outlines, and summarise lengthy reports. It’s a co-pilot in the truest sense – it doesn’t replace me, but it often gives me a head start. That said, the results are hit and miss, and I never post anything AI-generated without editing. Sometimes the AI gives me inspiration. Other times, it gives me American spelling and questionable grammar.

    But outside work is where things got interesting.

    I accidentally vibe coded

    It turns out there’s a name for what I’ve been doing in my spare time: vibe coding.

    First up, I wanted to connect a microcontroller to an OLED display and to control the display with a web form and a REST API. I didn’t know exactly how to do it, but I had a vague idea. I asked ChatGPT. It gave me code, wiring instructions, and step-by-step guidance to flash the firmware. It didn’t work out of the box – but with a few nudges to fix a compilation error and rework the wiring, I got it going.
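
    For flavour, here’s roughly what that kind of project looks like. This is a minimal sketch of my own in MicroPython, not the actual code ChatGPT gave me – the pins, the SSD1306 driver and the deliberately naive HTTP handling are all assumptions:

```python
# MicroPython on an ESP32 with an SSD1306 OLED over I2C.
# Assumes the ssd1306 driver is installed and Wi-Fi is already connected.
import socket
from machine import Pin, I2C
import ssd1306

i2c = I2C(0, scl=Pin(22), sda=Pin(21))    # common ESP32 I2C pins
oled = ssd1306.SSD1306_I2C(128, 64, i2c)

server = socket.socket()
server.bind(("0.0.0.0", 80))
server.listen(1)

while True:
    client, _ = server.accept()
    request = client.recv(1024).decode()
    # Very naive "REST" endpoint: GET /display?text=hello
    if "GET /display?text=" in request:
        text = request.split("text=")[1].split(" ")[0].replace("%20", " ")
        oled.fill(0)            # clear the display
        oled.text(text, 0, 0)   # draw the message at the top left
        oled.show()
    client.send(b"HTTP/1.1 200 OK\r\n\r\nOK")
    client.close()
```

    The web form part is then just an HTML page that submits to that endpoint.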

    Then, I wanted to create a single-page website to showcase a custom GPT I’d built. Again, ChatGPT gave me the starter template. I published it to Azure Static Web Apps, with GitHub for source control and a CI/CD pipeline. All of it AI-assisted.

    Both projects were up and running quickly – but finishing them took a lot more effort. You can get 80% of the way with vibes, but the last 20% still needs graft, knowledge, or at the very least, stubborn persistence. And the 80% is the quick part – the 20% takes the time.

    What is vibe coding?

    In short: it’s when you code without fully knowing what you’re doing. You rely on generative AI tools to generate snippets, help debug errors, or explain unfamiliar concepts. You follow the vibe, not the manual.

    And while that might sound irresponsible, it’s increasingly common – especially as generative AI becomes more capable. If you’re solving a one-off problem or building a quick prototype, it can be a great approach.

    I should add some context: I do have a Computer Studies degree, and I can code. But aside from batch scripts and a bit of PowerShell, I haven’t written anything professionally since my 1992/93 internship – and that was in COBOL.

    So, yes, I have some idea of what’s going on. But I’m still firmly in vibe territory when it comes to ESP32 firmware or HTML/CSS layout.

    The good, the bad, and the undocumented

    Vibe coding has clear advantages:

    • You can build things you wouldn’t otherwise attempt.
    • You learn by doing – with AI as your tutor.
    • You get to explore new tech without wading through outdated forum posts.

    But it also has its pitfalls:

    • The AI isn’t always right (and often makes things up).
    • Debugging generated code can be a nightmare.
    • If you don’t understand what the code does, maintaining it is difficult – if not impossible.
    • AI doesn’t always follow best practices – and those change over time.
    • It may generate code that’s based on copyrighted sources. Licensing isn’t always clear.

    That last pair is increasingly important. Large language models are trained on public code from the Internet – but not everything online is a good example. Some of it is outdated. Some of it is inefficient. Some of it may not be free to use. So unless you know what you’re looking at (and where it came from), you risk building on shaky ground.

    Where next?

    Generative AI is changing how we create, code, and communicate. But it’s not a magic wand. It’s a powerful assistant – especially for those of us who are happy to get stuck in without always knowing where things will end up.

    Whether I’ve saved any time is up for debate. But I’ve definitely done more. Built more. Learned more.

    And that feels like progress.

    A version of this post was originally published on the Node4 blog.

    Featured image by James Osborne from Pixabay.

  • Generative AI is just a small part of the picture

    This post previously appeared on my LinkedIn feed. I thought it should have been here…

    They say that, when all you have is a hammer, every problem that needs solving looks like a nail. Well, something like that anyway. Generative AI (GenAI) is getting a lot of airtime right now, but it’s not the answer to everything. Want a quick draft of some content? Sure, here it is – I’ve made up some words for you that sound like they could work. (That is literally how an LLM works.)
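
    To labour the point with a wildly simplified toy of my own (a real model has billions of learned weights, not a hand-written lookup table, but the generation loop is the same idea – pick a plausible next word, append it, repeat):

```python
import random

# Toy "language model": hand-written next-word probabilities.
model = {
    "<start>": {"our": 0.6, "the": 0.4},
    "our": {"cloud": 0.5, "data": 0.5},
    "the": {"platform": 1.0},
    "cloud": {"strategy": 0.7, "platform": 0.3},
    "data": {"strategy": 1.0},
    "strategy": {"delivers": 1.0},
    "platform": {"delivers": 1.0},
    "delivers": {"value": 1.0},
    "value": {"<end>": 1.0},
}

word, sentence = "<start>", []
while word != "<end>":
    options = model[word]
    word = random.choices(list(options), weights=list(options.values()))[0]
    if word != "<end>":
        sentence.append(word)

print(" ".join(sentence))  # e.g. "our cloud strategy delivers value"
```

    Every word is picked because it’s likely to follow the previous one – not because it’s true.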

On the other hand, I spent yesterday afternoon grappling with Microsoft Copilot as it gave me lots of credible-sounding information… with sources that just don’t exist, or don’t say the things it says they do. That’s quite frightening, because many people will just believe the made-up stuff, repeat it and say “I got it from Copilot/ChatGPT/insert tool of choice”.

Anyway, artificial intelligence (AI) is more than just GenAI – and last night I watched this video from Eric Siegel. Once all the hype about GenAI has died down, maybe we’ll find some better uses for other AI technologies like predictive AI. One thing is for sure: Artificial General Intelligence (AGI) is not coming any time soon…