“AI Eats the World”

At the recent NashTech Connect event, Benedict Evans delivered a keynote titled “AI Eats the World”.

Benedict (Ben) regularly publishes large presentations exploring long-term trends in the technology industry. As he describes it on his website: “Twice a year, I produce a big presentation exploring macro and strategic trends in the tech industry.”

“AI Eats the World” is Ben’s current presentation.

What I liked about this talk is that it combines several familiar themes into a clear narrative, illustrated with historical reference points.

Platform shifts

The talk opened with the idea that the technology industry moves in platform shifts.

We know the pattern by now. Mainframes gave way to PCs, then the web, then smartphones. Each time, the old world didn’t disappear, but innovation, investment and company creation moved to the new platform. New gatekeepers emerged. Old gatekeepers lost their grip. New markets appeared, old ones were reshaped, and some things were simply destroyed.

That matters both inside and outside the technology industry. Inside tech, platform shifts change which companies matter. Outside tech, every business ends up asking the same questions: is this just a new tool, a new source of revenue, or an existential threat? How much attention does it deserve?

Microsoft is a good example. When the PC sat at the centre of the industry, Microsoft dominated. When the centre shifted to smartphones, Microsoft became much less relevant, because it was no longer central to the platform that set the agenda. Having missed the smartphone revolution, it rebuilt its position through cloud, but with far less dominance than it once had in the PC era.

[I personally witnessed Microsoft’s drive to push Azure as the company attempted to prevent Amazon Web Services (AWS) from gaining market dominance.]

Ben also pointed out that the first companies into a new market are rarely the ones that capture most of the long-term value. That was true for PCs, browsers, search, social media and smartphones. It may well prove true for generative AI too.

One consequence of this is that a lot of what surrounds a platform shift will turn out not to matter. There will be noise, hype, dead ends and ideas that look important for a while before fading away. That is normal.

Bubbles, hype and uncertainty

Platform shifts often bring bubbles with them.

People draw straight lines on charts, talk about exponential growth and insist that this time everything is different. In one sense they are right. Every major technology wave is different from the last one. But that doesn’t mean it can’t still be a bubble.

After the noise clears, the world has usually changed. The technology becomes woven into everyday life. It stops feeling novel and starts feeling normal.

That is why platform shifts matter even when the hype gets ahead of reality.

There is one important difference this time, though. With previous platform shifts we broadly understood the physical limits. We knew what phones couldn’t do. We knew what networks couldn’t support. With AI, we don’t really know why these models work as well as they do, which makes it harder to judge how far they might go.

That means the range of possible outcomes is unusually wide. This may turn out to be “only” as significant as the Internet, the PC or the smartphone. Or it may be something more fundamental. We simply don’t know yet.

Capital is flooding in

From there, the keynote moved inside the technology industry to look at capital, starting with two quotes from industry leaders:

“The risk of under-investing is significantly greater than the risk of over-investing.”
– Sundar Pichai

“The very worst case would be that we have just pre-built for a couple of years.”
– Mark Zuckerberg

That belief, amplified by a fear of missing out (FOMO), is driving extraordinary spending. The biggest platform companies are pouring vast sums into infrastructure. NVIDIA can’t keep up with demand. Infrastructure suppliers often benefit first in platform shifts – Ben compared the moment to earlier cycles where companies like Sun Microsystems rode the wave of new computing platforms.

The power industry is struggling to keep pace too. In some cases, access to electricity is becoming a bigger constraint than access to chips, raising questions about power grids, infrastructure planning and the wider political implications of AI’s energy demands.

“It’s been almost impossible to build capacity fast enough since ChatGPT launched.”
– Kevin Scott, Microsoft CTO

The scale is enormous. Hundreds of billions of dollars are being committed to data centres and associated infrastructure, with more to come. Some of that is funded from the enormous cash flows of highly profitable companies. Some increasingly depends on leasing, borrowing or other financial engineering as firms try to build faster than their balance sheets might naturally allow.

Ben illustrated how different this is from earlier eras of software economics. Microsoft’s historic model was effectively a one-dollar CD in a ten-dollar box sold for hundreds of dollars. Software used to be extraordinarily capital-light, and often benefited from powerful network effects that created dominant platforms.

AI infrastructure looks very different. The frontier keeps moving forward, but each step is more expensive than the last. That raises a difficult question: how long do you keep chasing the frontier?

If models continue to converge in capability, the risk is that they become commoditised.

More models, but not much separation between them

The investment boom has produced rapid progress. Models keep improving. New ones appear constantly. There are proprietary models, open-source models, sovereign models and a growing alphabet soup of acronyms.

But the leading models are increasingly clustered together in capability. On many benchmarks they sit within a relatively narrow range of one another. There may be a new leader every few weeks, but there is not much long-term separation.

Where there is separation is in distribution. OpenAI has broken through into public awareness in a way that others have not. Some competitors may perform just as well on benchmarks, but they don’t have the same reach.

The ambition for companies like OpenAI is to become the universal platform that others build on – the equivalent of Windows for the AI era. But the market may end up looking more like cloud computing, where organisations mix and match components from several providers.

That raises the possibility that AI models evolve more like the semiconductor industry than traditional software – capital intensive, expensive to build, and dominated by a small number of players rather than a single winner.

Models are improving quickly. What remains much less clear is where the long-term value will sit.

Most organisations are still in the “absorb” phase

Outside the technology industry, the pattern tends to be different. The difficulty is that no-one really knows what will work yet.

New technologies usually follow a sequence. First, organisations absorb them: they turn them into features and automate the obvious tasks they already understand. Only later do new products, new revenue streams and real disruption appear.

The absorption phase is where most generative AI activity still sits today.

The obvious early use cases are software development, marketing, customer support and point solutions inside large organisations. In many of these cases the change is incremental but still useful: software becomes faster to build, new capabilities appear, and there are tasks where AI simply performs better than previous tools.

Part of the challenge is that generative AI behaves very differently from traditional software. It is good at things computers have historically been bad at – language, summarisation and pattern recognition – and bad at things computers have traditionally done well, such as precise calculation.

Even usage numbers can be misleading. Many people who have access to these tools don’t use them constantly. They use them occasionally, perhaps weekly or monthly, and often only for a narrow range of tasks. Even among those using them, relatively few are paying customers, and usage is often light.

Ben compared this to early web analytics. Page “hits” once looked impressive until people realised that every image on a page counted as another hit. The numbers sounded large, but they didn’t necessarily mean very much.
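
To see why, it helps to make the arithmetic concrete. Here is a minimal sketch (the log format and figures are invented for illustration) of how counting “hits” overstates actual page views:

    # Toy request log: each page view also fetches every image on the page.
    # All figures are invented for illustration.
    requests = (
        ["/home.html"] + [f"/img/home-{i}.gif" for i in range(10)]
    ) * 100  # 100 visits to a page carrying 10 images

    hits = len(requests)                                  # every request counts as a "hit"
    page_views = sum(1 for r in requests if r.endswith(".html"))

    print(f"hits: {hits}, page views: {page_views}")      # hits: 1100, page views: 100
    print(f"inflation: {hits / page_views:.0f}x")         # 11x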

Many organisations are still running pilots, experimenting with use cases and trying to work out what this technology should actually do for them. The hard part is rarely writing the code. It is working out what the actual problem is, and who would use the solution.

“People don’t know what they want until you show it to them. You’ve got to start with the experience and work backwards to the technology.”
– Steve Jobs

Automation comes first. Change follows later.

Ben used spreadsheets as an analogy. Before VisiCalc, spreadsheets were literally sheets of paper. When VisiCalc appeared on the Apple II in 1979, a machine to run it cost the equivalent of around $10,000 in today’s money.

But it could turn a project that took a week on paper into something that could be recalculated in minutes. For accountants, that was transformative. For lawyers, who rarely build financial models, it was far less interesting.
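
The mechanism behind that speed-up was automatic recalculation: change one input, and every dependent figure updates. A minimal sketch of the idea in Python (a toy model, not VisiCalc’s actual design, and the figures are invented):

    # Cells are either raw values or formulas over other cells;
    # formulas are recomputed on every read, so results always
    # reflect the current inputs.
    cells = {
        "unit_price": 25.0,
        "units_sold": 400,
        "revenue": lambda: get("unit_price") * get("units_sold"),
        "costs":   lambda: 0.6 * get("revenue"),
        "profit":  lambda: get("revenue") - get("costs"),
    }

    def get(name):
        value = cells[name]
        return value() if callable(value) else value

    print(get("profit"))        # 4000.0

    cells["units_sold"] = 500   # change one input...
    print(get("profit"))        # ...everything downstream updates: 5000.0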

The first step was automation. The deeper changes came later.

Barcodes offer another example. Initially they were introduced to reduce the effort involved in stock-taking. But, once deployed across supermarkets, they enabled far more stock-keeping units (SKUs) to be managed and eventually reshaped the entire supply chain, with automated inventory systems and electronic reordering.
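
The second-order effect is easy to see in miniature. Once every sale is a scan, reordering can become a side-effect of the checkout. A hedged sketch (the barcodes, thresholds and quantities are all invented):

    # Each checkout scan decrements stock; falling below the reorder
    # point triggers an electronic reorder automatically.
    stock = {"5000112345": 12, "5000167890": 3}   # on-hand units per barcode
    REORDER_POINT, REORDER_QTY = 5, 24

    def scan(barcode):
        stock[barcode] -= 1
        if stock[barcode] < REORDER_POINT:
            # In a real system this would be an electronic order to the supplier.
            print(f"reorder {REORDER_QTY} units of {barcode}")

    scan("5000167890")   # stock drops to 2, which triggers a reorder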

That leads to the next question: once we have automated the obvious things, what comes next?

What changes when you have “infinite interns”?

Ben framed the next phase with a thought experiment: what happens if AI gives you the equivalent of “infinite interns”?

[At this point I was reminded of Chris Weston’s 100 Homer Simpsons analogy.]

Do you do the same work with fewer people? Or do you do far more work with the same number of people?

This is similar to what happened in earlier industrial revolutions. Steam engines effectively gave nineteenth-century Britain the equivalent of millions of additional labour units. The question now is what generative AI might do as a productivity multiplier for knowledge work.

The deeper question is whether the tasks we are automating were ever the real job in the first place. Spreadsheets automated calculations, but the goal was never the spreadsheet itself – it was better financial understanding and decision-making.

That leads to a broader question: were we really trying to automate tasks, or were we trying to achieve something else entirely?

Recommendations, discovery and agentic commerce

This question becomes especially interesting when we look at recommendation and discovery online.

Before the Internet, human editors decided much of what people saw – in shops, in newspapers and on broadcast media. Internet platforms scaled that process through algorithms.

Platforms like Amazon often operate through correlation rather than deep understanding. They know that people who bought one SKU often buy another, even if the system doesn’t really understand the products themselves.
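
That kind of correlation machinery is simple to sketch. A toy “people who bought this also bought” recommender (invented baskets, and certainly not Amazon’s actual system) just counts co-purchases, knowing nothing about the products themselves:

    from collections import Counter
    from itertools import permutations

    # Toy purchase baskets: only co-occurrence matters, not what the items are.
    baskets = [
        {"kettle", "teapot", "mugs"},
        {"kettle", "teapot"},
        {"kettle", "toaster"},
    ]

    co_bought = Counter()
    for basket in baskets:
        co_bought.update(permutations(basket, 2))   # count every ordered pair

    def recommend(sku, n=2):
        """People who bought `sku` also bought..."""
        pairs = ((other, c) for (a, other), c in co_bought.items() if a == sku)
        return [other for other, _ in sorted(pairs, key=lambda p: -p[1])][:n]

    print(recommend("kettle"))   # ['teapot', ...]: teapot co-occurs with kettle most often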

LLMs may create a different kind of recommendation layer. They may operate at a higher level of abstraction, interpreting intent rather than simply correlating behaviour.

There’s also a long-standing usability principle, often attributed to Bruce Tognazzini: a computer should never ask a user a question if it can work out the answer itself. Systems that understand intent more deeply may reduce the amount of searching, filtering and form-filling users currently have to do.
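
In code, that principle tends to look like inference with a fallback. A hypothetical checkout helper (the names and data are invented for illustration) that only asks when it genuinely cannot work out the answer:

    # Tognazzini's rule as a pattern: derive the answer from what the
    # system already knows; only fall back to asking the user.
    def shipping_country(user, ask_user):
        for source in (user.get("saved_address"), user.get("billing_address")):
            if source and source.get("country"):
                return source["country"]          # we already know: don't ask
        return ask_user("Which country should we ship to?")

    user = {"billing_address": {"country": "GB"}}
    print(shipping_country(user, input))          # prints GB without ever prompting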

That could lead to very different buying journeys – asking an assistant what to buy rather than browsing a catalogue or search engine.

The closing perspective

There’s an old line that history doesn’t repeat, but it often rhymes. Technology waves tend to follow familiar patterns.

All the technologies people were excited about before generative AI are still there. E-commerce continues to grow. Autonomous vehicles are becoming more real. Smart glasses, robotics and other long-running ideas haven’t gone away.

That broader perspective matters because technology change is cumulative. New waves arrive before previous ones have fully played out.

Looking further back reinforces the point:

  • An IBM advert from 1951 promised that an electronic calculator would deliver the equivalent of “150 extra engineers”.

[Image: IBM “150 Extra Engineers” advert, 1951]

  • A US government report from 1955 discussed the coming age of automation, including lifts (elevators). At the time many lifts required attendants to operate them. Then they were automated. Eventually people forgot that they had ever worked any other way. When automation works well, we stop noticing it.

Ben noted that image recognition once felt like something close to magic. Fifteen years ago it seemed almost impossible. Today it is a routine feature inside everyday software.

That brings us to Larry Tesler’s famous line:

“AI is whatever hasn’t been done yet.”
– Larry Tesler, 1970

Once it works, we stop calling it AI. It simply becomes software. Infrastructure. Just another ordinary part of how things work.