It’s the start of the year and everyone is writing their predictions. I’ve written a few for Node4 that will make their way onto the company socials — and in industry press, no doubt — but here’s one I’m publishing for myself:
I think 2026 will be the year when tech companies quietly start to scale back on generative AI.
Mark Wilson, 6 January 2026
Over Christmas I was talking to a family member who, like many people, is convinced that AI — by which they really mean chatbots, copilots and agents — will just keep becoming more dominant.
I’m not so sure. And I’m comfortable putting that on record now. But I don’t mean it’s all going away. Let me explain…
Where the money comes from
The biggest tech firms in the world are still pouring tens of billions into AI infrastructure. GPUs, custom silicon, data centres, power contracts, talent. That money has to come from somewhere. The uncomfortable truth is that many of the high-profile layoffs we’ve seen over the last two years aren’t about “AI replacing people”. They’re about reducing operating costs to fund AI investment. Humans out. CapEx in.
That works for a while. But shareholders don’t accept “trust us, it’ll pay off eventually” indefinitely. At some point, the question becomes very simple: where is the sustainable revenue that justifies this level of spend?
A land-grab without a business model
Every hyperscaler and major platform vendor has invested as if generative AI is a winner-takes-most market. Own the models. Own the data. Own the developer ecosystem. Own the distribution. The logic is clear: if a viable business model emerges, they want the biggest possible slice of the pie.
The problem is that the pie still hasn’t really materialised. We have impressive demos, widespread experimentation, and plenty of productivity anecdotes — but not many clear, repeatable use cases that consistently deliver real returns. Right now, it feels less like a gold rush and more like a game of chicken. Everyone keeps spending because they’re terrified of being the first to blink.
Eventually, someone will.
Slowing progress in the models themselves
Another reason I’m sceptical is the slowing pace of improvement in the models themselves. A lot of early excitement was based on the idea that bigger models would always mean better models, but that assumption is starting to wobble. Increasing amounts of AI-generated content are now fed back into new training datasets: models learning from the outputs of other models.
There is growing evidence that this can actually make them worse over time — less diverse, less accurate, more prone to error. Researchers call this model collapse. Whatever the name, it’s a reminder that high-quality training data is finite, and simply scaling doesn’t guarantee progress.
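To make that feedback loop concrete, here’s a deliberately toy sketch in Python. It isn’t how any real training pipeline works: the “model” here is just an empirical distribution over a small vocabulary, and the vocabulary size, sample size and number of generations are numbers I’ve picked purely for illustration. But it shows the basic mechanism behind model collapse: when each generation trains only on a finite sample of the previous generation’s output, rare items eventually stop being sampled, and once their estimated probability hits zero they can never come back.

```python
# Toy sketch of the model-collapse feedback loop.
# The "model" is just an empirical distribution over a small vocabulary;
# each generation is "trained" only on samples produced by the previous one.
# All constants below are illustrative assumptions, not real-world figures.

import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 50        # distinct items the original data contains
SAMPLES_PER_GEN = 200  # finite training set drawn at each generation
GENERATIONS = 100

# Generation 0: the "real" data is uniform over the vocabulary.
probs = np.full(VOCAB_SIZE, 1.0 / VOCAB_SIZE)

for gen in range(GENERATIONS + 1):
    surviving = int(np.count_nonzero(probs))
    if gen % 20 == 0:
        print(f"generation {gen:3d}: {surviving} of {VOCAB_SIZE} items still produced")

    # "Train" the next model on a finite sample of the current model's output:
    # its probabilities are simply the observed frequencies in that sample.
    sample = rng.choice(VOCAB_SIZE, size=SAMPLES_PER_GEN, p=probs)
    counts = np.bincount(sample, minlength=VOCAB_SIZE)
    probs = counts / counts.sum()
```

Run it and the count of distinct items only ever falls: diversity, once lost, stays lost.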
A noticeable shift in tone
I also find it interesting how the tone has shifted. Not just from “AI replacing humans” to “AI augmenting humans”, but something broader. And I don’t mean the AI evangelists vs. the AI doomsayers back-and-forth that I see on LinkedIn every day either…
A year or two ago, large language models were positioned as the future of AI. The centre of gravity. The thing everything else would orbit around. Listen carefully now, and the message from tech leaders is more cautious. LLMs are still important, but they’re increasingly framed as one tool among many.
There’s more talk about smaller, domain-specific models. About optimisation rather than scale. About decision intelligence, automation, computer vision, edge AI, and good old-fashioned applied machine learning. In other words: AI that quietly does a job well, rather than AI that chats convincingly about it.
That feels less like hype, and more like a course correction.
A gradual change in direction
I don’t know whether this ends in a classic “bubble burst”. I’m a technologist, not an economist. What feels likely to me is a gradual change in direction. Investment doesn’t stop, but it becomes harder to justify. Projects get cut. Timelines stretch. Expectations reset. Some bets quietly fail.
But there will be consequences. You can’t pour this much capital into something with limited realised outcomes and expect it to disappear without a trace.
The natural resource crunch
Then there’s the crucial element that should be worrying everyone: natural resources.
Generative AI isn’t just expensive in financial terms. It’s expensive in physical ones. Energy, cooling, water, land, grid capacity. Even today’s hyperscale cloud providers are struggling in some regions. Power connections delayed. Capacity constrained. Grids already full.
Water often gets mentioned — sometimes unfairly, because many data centres operate in closed-loop systems rather than constantly consuming new supply — but it still forms part of a broader environmental footprint that can’t be ignored.
You can’t scale AI workloads indefinitely if the electricity and supporting infrastructure simply aren’t there. And while long-term solutions exist — nuclear, renewables, grid modernisation — they don’t move at the speed of venture capital or quarterly earnings calls.
The problems AI can’t solve
There are other headwinds too.
Regulation is tightening, not loosening. Data quality remains a mess. Hallucinations are still a thing, however politely we rename them. The cost of inference hasn’t fallen as fast as many hoped. And for most organisations, the hardest problems are still boring ones: messy processes, poor data, unclear ownership, and a lack of change management.
AI doesn’t fix those. It amplifies them.
A more realistic 2026
So no, I don’t think (generative) AI is “going away”. That would be daft. But I do think 2026 might be the year when generative AI stops being treated as the inevitable centrepiece of every tech strategy.
Less breathless hype. Fewer moonshots. More realism.
And perhaps, finally, a shift from “how impressive is this model?” to “what problem does this actually solve, at a cost we can justify, in a world with finite resources?”
I’m happy to be wrong. But if I am, someone needs to explain where the money, power, and patience are all coming from — and for how long.
Featured image: created by ChatGPT.