Author: Mark Wilson

  • Excuse the basic appearance whilst I try to improve site reliability

    Excuse the basic appearance whilst I try to improve site reliability

    I’ve been having a lot of problems with this website recently. It crashes multiple times daily, for several minutes at a time.

    I’m not a WordPress expert, but I turned on debugging – and no debug.log file is being created in my wp-content folder.

    The problems seemed to start when I followed the security advice to update the version of PHP in use. Now I’m going back to basics. I’ll start off with a basic theme – and if that doesn’t fix the issue then I’ll turn off all the plug-ins and re-introduce them one-by-one.

    This will break things. Some pages won’t display as intended, some features may be unavailable. Frankly, the site doesn’t get enough visitors for me to worry too much about 2500 old posts, but I would like it to be reliable for the ones I do write occasionally.

    Please bear with me. It may take some time to fix.

    Featured image: created by ChatGPT.

  • My 2026 anti-prediction: we won’t see an endless rise in generative AI

    My 2026 anti-prediction: we won’t see an endless rise in generative AI

    It’s the start of the year and everyone is writing their predictions. I’ve written a few for Node4 that will make their way onto the company socials — and into the industry press, no doubt — but here’s one I’m publishing for myself:

    I think 2026 will be the year when tech companies quietly start to scale back on generative AI.

    Mark Wilson, 6 January 2026

    Over Christmas I was talking to a family member who, like many people, is convinced that AI — by which they really mean chatbots, copilots and agents — will just keep becoming more dominant.

    I’m not so sure. And I’m comfortable putting that on record now. But I don’t mean it’s all going away. Let me explain…

    Where the money comes from

    The biggest tech firms in the world are still pouring tens of billions into AI infrastructure. GPUs, custom silicon, data centres, power contracts, talent. That money has to come from somewhere. The uncomfortable truth is that many of the high-profile layoffs we’ve seen over the last two years aren’t about “AI replacing people”. They’re about reducing operating costs to fund AI investment. Humans out. CapEx in.

    That works for a while. But shareholders don’t accept “trust us, it’ll pay off eventually” indefinitely. At some point, the question becomes very simple: where is the sustainable revenue that justifies this level of spend?

    A land-grab without a business model

    Every hyperscaler and major platform vendor has invested as if generative AI is a winner-takes-most market. Own the models. Own the data. Own the developer ecosystem. Own the distribution. The logic is clear: if a viable business model emerges, they want the biggest possible slice of the pie.

    The problem is that the pie still hasn’t really materialised. We have impressive demos, widespread experimentation, and plenty of productivity anecdotes — but not many clear, repeatable use cases that consistently deliver real returns. Right now, it feels less like a gold rush and more like a game of chicken. Everyone keeps spending because they’re terrified of being the first to blink.

    Eventually, someone will.

    Slowing progress in the models themselves

    Another reason I’m sceptical is the pace of improvement itself. A lot of early excitement was based on the idea that bigger models would always mean better models. But that assumption is starting to wobble. Increasing amounts of AI-generated content are now fed back into new training datasets. Models learning from the outputs of other models.

    There is growing evidence that this can actually make them worse over time — less diverse, less accurate, more prone to error. Researchers call this model collapse. Whatever the name, it’s a reminder that data quality is finite, and simply scaling doesn’t guarantee progress.

    A noticeable shift in tone

    I also find it interesting how the tone has shifted. Not just from AI replacing humans to AI augmenting humans, but something broader. And I don’t mean the AI evangelists vs. the AI doomsayers back-and-forth that I see on LinkedIn every day either…

    A year or two ago, large language models were positioned as the future of AI. The centre of gravity. The thing everything else would orbit around. Listen carefully now, and the message from tech leaders is more cautious. LLMs are still important, but they’re increasingly framed as one tool among many.

    There’s more talk about smaller, domain-specific models. About optimisation rather than scale. About decision intelligence, automation, computer vision, edge AI, and good old-fashioned applied machine learning. In other words: AI that quietly does a job well, rather than AI that chats convincingly about it.

    That feels less like hype, and more like a course correction.

    A gradual change in direction

    I don’t know whether this ends in a classic “bubble burst”. I’m a technologist, not an economist. What feels likely to me is a gradual change in direction. Investment doesn’t stop, but it becomes harder to justify. Projects get cut. Timelines stretch. Expectations reset. Some bets quietly fail.

    But there will be consequences. You can’t pour this much capital into something with limited realised outcomes and expect it to disappear without a trace.

    The natural resource crunch

    Then there’s the crucial element that should be worrying everyone: natural resources.

    Generative AI isn’t just expensive in financial terms. It’s expensive in physical ones. Energy, cooling, water, land, grid capacity. Even today’s hyperscale cloud providers are struggling in some regions. Power connections delayed. Capacity constrained. Grids already full.

    Water often gets mentioned — sometimes unfairly, because many data centres operate in closed-loop systems rather than constantly consuming new supply — but it still forms part of a broader environmental footprint that can’t be ignored.

    You can’t scale AI workloads indefinitely if the electricity and supporting infrastructure simply aren’t there. And while long-term solutions exist — nuclear, renewables, grid modernisation — they don’t move at the speed of venture capital or quarterly earnings calls.

    The problems AI can’t solve

    There are other headwinds too.

    Regulation is tightening, not loosening. Data quality remains a mess. Hallucinations are still a thing, however politely we rename them. The cost of inference hasn’t fallen as fast as many hoped. And for most organisations, the hardest problems are still boring ones: messy processes, poor data, unclear ownership, and a lack of change management.

    AI doesn’t fix those. It amplifies them.

    A more realistic 2026

    So no, I don’t think (generative) AI is “going away”. That would be daft. But I do think 2026 might be the year when generative AI stops being treated as the inevitable centrepiece of every tech strategy.

    Less breathless hype. Fewer moonshots. More realism.

    And perhaps, finally, a shift from “how impressive is this model?” to “what problem does this actually solve, at a cost we can justify, in a world with finite resources?”

    I’m happy to be wrong. But if I am, someone needs to explain where the money, power, and patience are all coming from — and for how long.

    Featured image: created by ChatGPT.

  • Rethinking thought leadership

    Rethinking thought leadership

    I’ve never liked the term “thought leadership”. In fact, I hate it. My first run-in with it was back in 2010, when I was working for David Smith and Mark Locke in Fujitsu UK and Ireland’s Office of the CTO. Even then, we were pretty clear: you don’t get to call yourself a thought leader. That label is earned. Other people decide it for you, usually long after you’ve stopped trying to chase it.

    Fast-forward to today, and “thought leadership” is still something marketing teams everywhere love to talk about. It’s also something I recognise as part of my job in the Node4 OCTO. But my unease with the term has never really gone away.

    So when I came across some LinkedIn Learning training on “becoming a better thought leader” (and yes, even typing that makes my stomach turn), I braced myself. And then something interesting happened.

    I was introduced to the idea of a thought reader.

    A different take

    The course explained it like this:

    A thought leader is an expert. The go-to person. The one with the depth, the scars, the experience, the opinions. All fine. We know that world.

    But a thought reader is different. A thought reader is someone who pays attention to the world around them.

    Someone who tracks what’s happening in the market, in politics, in technology, in society. Someone who can read the room, not just the textbook. Someone who can bring context rather than just content.

    Not an ivory-tower specialist. Not a voice shouting into the void. But someone grounded in what’s actually going on.

    It’s the person who joins the dots and says: “I see what’s happening here, and here’s what it might mean for you.”

    And that resonated

    Because unlike thought leadership, I think thought readership can be claimed. You can choose to be someone who stays curious, who pays attention, who reads widely and listens well.

    And if I’m honest, that feels a lot closer to where I sit.

    A definition worth noting

    Along the way, I also stumbled across a piece from the University of Exeter Business School that tries to rescue the term “thought leadership” by giving it a clearer, more grounded definition. They describe it as:

    “Knowledge from a trusted, eminent and authoritative source that is actionable and provides valuable solutions for stakeholders.”

    And to be fair, that feels right. It talks about trust, action and value. It suggests the label is something you earn, not something you declare.

    What I can claim

    What I can claim, though, is that I spend a lot of time trying to understand what’s going on out there. Reading widely. Noticing patterns. Making connections. Understanding context so I can explain things in a way that’s useful.

    Less “sage on a stage”, and more “person who’s done the research so you don’t have to”.

    And that feels much more like a thought reader than a thought leader.

    I still won’t claim to be a thought leader — that’s for others to decide.

    But, from today, I might, occasionally, claim to be a thought reader.

    And that feels much more honest.

    Featured image: created by ChatGPT.

  • A sense of community, on Remembrance Sunday

    A sense of community, on Remembrance Sunday

    Every year, on the second Sunday in November, in common with many others up and down the country, the town where I live comes together. In Olney, around a thousand people gather in the Market Place – some in uniform, some in suits, some just in coats and scarves against the November chill.

    We come to remember. Some are there for those who fell in conflicts past. Others come to support their children in youth organisations. Some stand beside friends or colleagues. Whatever the reason, we make the time to come together.

    As we approach the hour, the traffic stops and the Last Post sounds. Then, as the clock strikes eleven, silence falls. For two minutes, the town pauses.

    In that moment, it’s not about politics or religion or background. It’s about shared respect. About community. About remembering what was lost and valuing what we still have – the freedom, the friendship, the ability to stand side by side in peace.

    And that sense of community matters more than ever. We live in a time when society feels increasingly divided – when algorithms on so-called social media feed us outrage and misinformation; when newspapers twist headlines to fit an agenda; when too much of life pushes us into an us-versus-them mentality. Yet, this morning, I saw the opposite. I saw a huge cross-section of our community come together – young and old, rich and poor – all standing side by side.

    There was representation from the armed forces, police, fire service, youth groups, churches, schools, sports clubs, charities and the Women’s Institute. Each laying wreaths, but all sharing a common purpose.

    Even in a town of fewer than 10,000 people, there were faces I rarely see – and some I don’t always see eye-to-eye with. But today, none of that mattered. We were all there for the same reason, and there’s mutual respect in that.

    We may be fortunate here – a small market town in what was once Buckinghamshire, close to areas of high employment and relative comfort – but not everyone here has privilege. Some are struggling. Yet today, that didn’t matter either. For a short while, everyone stood together.

    This year’s remembrance feels especially poignant. Eighty years since the Second World War, conflict still rages in too many parts of the world. Nationalism is on the rise again – flags waved with more anger than pride – and it’s easy to forget what those who served fought so hard for.

    It would have been easy to stay at home this morning. But I’m glad I walked down to the Market Place. Because what I saw was the very best of our community – people united by remembrance, respect and gratitude.

    When the Reveille sounded and life resumed, there was a quiet pride. A reminder that community isn’t just something that happens online or when it’s convenient. It’s something we live, together, year after year. And today, it was lived in remembrance of those who served – and those who never came home.

    Featured images: author’s own.

  • The day I forgot my wallet – and it didn’t matter

    The day I forgot my wallet – and it didn’t matter

    Yesterday I left the house without my wallet.

    Once upon a time that would have been a disaster – but it didn’t matter in the slightest. I had my iPhone. My Apple Wallet held my train tickets and my virtual payment cards, and everything just worked.

    At some point, I realised I’ve quietly crossed the line into a world where my phone is my wallet. It doesn’t just hold my payment cards – it replaces the cards I’d need to withdraw cash too. Which raises a question: if physical cards disappear, will we need NFC-enabled ATMs to keep access to cash alive?

    The cashless tipping point

    According to UK Finance’s UK Payment Markets 2025 report, cards now account for around two-thirds of all payments in the UK. Cash, once king, has slipped below 10% – fewer than one in ten transactions. Over half of UK adults now use contactless payments, including both plastic cards and mobile wallets such as Apple Pay or Google Pay.

    The Bank of England says cash won’t die out any time soon, but it’s hard to ignore the direction of travel. Every tap of a card or phone accelerates the shift.

    Why cash still matters

    And yet, we’re not a cashless society – at least, not officially.

    The Financial Services and Markets Act 2023 gave the Financial Conduct Authority (FCA) powers to make sure people can still withdraw and deposit notes and coins. Its new Access to Cash Regime came into force last September.

    So even as digital payments dominate, the UK is deliberately keeping cash alive – not for nostalgia, but for resilience and inclusion.

    Because while most of us can pay with a tap, around 5% of adults still have no internet access, and many more are what Ofcom calls “digitally disadvantaged” – they’re online, but lack confidence or skills.

    Cash also serves as a fallback when the technology fails. Power cut, network outage, or card terminal on the blink – the humble £10 note still works.

    Emotional value

    Then there’s the emotional side.

    The Bank of England points out that many people prefer cash for budgeting – physically seeing money leave your hand is more tangible than a number on a screen.

    There’s also the matter of privacy. Every card transaction leaves a digital trail; cash doesn’t. For some, that’s reason enough.

    The cost question

    One argument that keeps popping up is the cost of card payments. Some businesses still cite high processing fees, especially for low-value sales. Others quietly admit that banking and securing cash costs money too.

    And it’s rare to find a truly “cash-only” business these days. In a 2020-21 survey by HMRC, only 1% of small businesses described themselves as cash-only.

    The “cash only” question

    That 1% does make me raise an eyebrow though.

    Whenever I see a “cash only” sign, I can’t help wondering whether every pound is being reported to His Majesty’s Revenue and Customs. It’s probably unfair – there are genuine reasons for preferring cash – but the association is hard to shake.

    HMRC’s own research shows some tradespeople see “cash jobs” as unlikely to be caught. Maybe that says more about culture than crime, but it lingers in the background.

    Should we mourn the loss of cash?

    Personally, I don’t think so.

    I like the convenience of digital payments, the security of not carrying notes, and the way my wallet has quietly become redundant. The only time I use cash regularly now is at the local market, where some of the traders are cash-only and others just prefer it. Ironically, I still have some euros in my wallet, but rarely any pounds!

    But I do think we need to protect choice. A fully digital economy can’t leave behind those who aren’t ready, able or willing to join it.

    The regulators seem to agree. Access to cash is now a legal right, even if accepting it isn’t.

    Reflection

    Forgetting my wallet was a small thing, but it made me stop and think about how quickly we’ve moved from contactless cards to contactless lives. And as much as I enjoy the convenience of paying with a phone, maybe we should all keep a few notes for emergencies.

    Featured image: created by ChatGPT.

  • OpenAI Atlas and the blurred line between search and synthesis

    OpenAI Atlas and the blurred line between search and synthesis

    OpenAI’s new Atlas browser has certainly got people talking.

    Some are excited — calling it a “Google killer” and a glimpse of how we’ll all navigate the web in future. Others are alarmed — pointing to privacy concerns, data collection prompts, and the idea of handing over browsing history and passwords to an AI company.

    Jason Grant described his experience as “a giant dark pattern.” Matthew Dunn was more balanced — impressed by the features, but quick to warn businesses off using it. He’s right: if you wouldn’t paste confidential data into ChatGPT, you probably shouldn’t browse the company intranet through Atlas either.

    Search vs. synthesis

    When people say Atlas will replace Google, they’re missing the point. It’s not search in the traditional sense.

    A search engine indexes existing content and returns links that might answer your question. Atlas — and systems like it — go a step further. They synthesise an answer, combining what’s on the web with what’s in your conversation and what they’ve “seen” before.

    As Data Science Dojo explains, search engines are designed to find information that already exists, while synthesis engines are designed to create new information.

    Or, as Vincent Hunt neatly puts it: “Search gives you links. Synthesis gives you insight.”

    That shift sounds subtle, but it changes everything: how we ask questions, how we evaluate truth, and how much we trust the output.

    As I said in my recent talk on AI Transformation at the Bletchley AI User Group, “Generative AI is not a search engine. It doesn’t retrieve facts. It generates language based on probabilities.” Google doesn’t know the truth either — it just gives you the most common answer to your question — but AI goes a step further. It merges, rewrites and repackages information. That can be powerful, but it’s also risky. It’s why I believe the AI-generated results that many search engines now return by default are inferior to traditional results based on actual information sources.

    Without strong governance, AI may be repurposing outdated content or drawing on biased data. Transparency matters — because trust is the real currency of AI adoption.

    Why Atlas matters

    In OpenAI’s announcement, Atlas is described as “bringing ChatGPT anywhere across the web — helping you in the window right where you are.”

    It’s not just a search bar. It can summarise pages, compare options, fill out forms, or even complete tasks within websites. That’s a very different paradigm — one where the browser becomes a workspace, and the assistant becomes a collaborator.

    A step towards agentic AI?

    So, is Atlas really agentic? In part, yes.

    Agentic AI describes systems that can act rather than just answer. They plan, execute and adapt — working on your behalf, not just waiting for your next prompt.

    OpenAI’s own notes mention an agent mode that can “help you book reservations or edit documents you’re working on,” as reported by The Verge.

    Others, like Practical Ecommerce, describe Atlas as “a push into agentic browsing — where the browser is now an AI agent too.”

    It’s not full autonomy yet — more like assisted agency — but it’s a clear step in that direction.

    Why it still needs caution

    As exciting as it sounds, Atlas isn’t designed for enterprise use. It raises valid concerns about data privacy, security, and trust. You wouldn’t give a work browser access to sensitive credentials, and the same logic applies here.

    As Matthew Dunn notes, ChatGPT “produces better output than Copilot, but with less security and privacy.” That’s a fair trade-off for some users, but not for organisations handling confidential information.

    So, by all means, explore it — but do so with your eyes open.

    And yes, I’ll still give it a try (update: I decided not to install it after all)

    For all the justified concerns about privacy and data handling, I’ll still give Atlas a try. Even though I have Copilot at work, I pay for ChatGPT Pro for activities that are not directly related to my confidential work.

    Atlas might extend that usefulness into how I browse, not just how I prompt. The key, as ever, is knowing what data you’re sharing — and making that a conscious choice, not an accidental one.

    [Updated 24/10/2025: After writing and publishing this post, I decided not to install Atlas. There are a lot of security concerns about the way the browser stores local data, which may easily be exploited. Nevertheless, both OpenAI Atlas and Perplexity Comet are interesting developments, and the narrative about the differences between an AI search (synthesis) and a traditional search is still valid.]

    Featured image: created by ChatGPT.

  • Tonight’s talk at the Bletchley AI User Group, and a new AI Resources page

    Tonight’s talk at the Bletchley AI User Group, and a new AI Resources page

    Tonight, I’ll be giving a talk on AI Transformation at the Bletchley AI User Group.

    Slides

    I gave up on bit.ly QR codes/links to OneDrive* and hosted the slides on my own website. They are also embedded below:

    20251021_Mark_Wilson_Bletchley_AI_UG_AI_Transformation

    Alternatively, you can save my bandwidth by picking them up from my OneDrive instead!

    Feedback

    If you were at the talk, some feedback would be much appreciated, please. There’s a Microsoft Form for that!

    Resources

    I also reached a point where I was seeing more and more new AI content every day and I just… had… to… stop… adding… more… into… the… presentation. A few minutes of vibe coding with ChatGPT gave me a static, single-page website with a search capability and a JSON-based data source. And ChatGPT even did the analysis, classification and tagging for me…
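
    For anyone curious what that looks like under the hood, here’s a minimal sketch of the kind of client-side search involved. It assumes a hypothetical resources.json with title, url and tags fields, plus a search box and results list with illustrative element IDs; it’s a sketch of the approach, not the code ChatGPT actually produced.

      // Minimal client-side search over a JSON data source (illustrative sketch;
      // resources.json, the field names and the element IDs are hypothetical).

      interface Resource {
        title: string;
        url: string;
        tags: string[];
      }

      async function loadResources(): Promise<Resource[]> {
        const response = await fetch("resources.json");
        return (await response.json()) as Resource[];
      }

      // A resource matches if the query appears in its title or any of its tags.
      function matches(resource: Resource, query: string): boolean {
        const q = query.trim().toLowerCase();
        if (q === "") return true; // an empty query shows everything
        return (
          resource.title.toLowerCase().includes(q) ||
          resource.tags.some((tag) => tag.toLowerCase().includes(q))
        );
      }

      // Re-render the results list as a set of plain links.
      function render(resources: Resource[], list: HTMLElement): void {
        list.innerHTML = "";
        for (const r of resources) {
          const item = document.createElement("li");
          const link = document.createElement("a");
          link.href = r.url;
          link.textContent = r.title;
          item.appendChild(link);
          list.appendChild(item);
        }
      }

      async function init(): Promise<void> {
        const resources = await loadResources();
        const searchBox = document.getElementById("search") as HTMLInputElement;
        const list = document.getElementById("results") as HTMLElement;

        render(resources, list); // show the full list until the user types
        searchBox.addEventListener("input", () => {
          render(resources.filter((r) => matches(r, searchBox.value)), list);
        });
      }

      init().catch(console.error);

    Everything runs in the browser, so the page stays static: the only “back end” is the JSON file itself, which makes it easy to append new entries as they turn up.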

    Anyway, my new AI Resources page is here and will be updated as and when I come across new artefacts.

    *What was wrong with bit.ly?

    Recent changes at bit.ly mean that they:

    • No longer support custom domain names on a free account (bye-bye mwil.it); and
    • Require a paid account to redirect short links after creation.

    The challenge I had was that I wanted to include a QR code for people to scan when I present the content, but that created a circular issue: I upload the slides, create a QR code that links to them, add the QR code to the slides, re-upload the slides, the link changes – so the QR code now points to the old version… etc., etc.

    (I wouldn’t mind paying for bit.ly, except that their plans are a bit expensive. This is a free website that creates a handful of short links each month and subscription fatigue is real…)

  • When software meets steel: agentic computing in the real world

    When software meets steel: agentic computing in the real world

    I flew to Dublin last week as part of the team representing Node4 at a Microsoft Sales and Partner summit. But the event itself is not really relevant here — what struck me was the amount of robot tech I interacted with on the trip.

    At Heathrow Terminal 5, I took one of the self-driving pods that connect the business car park with the terminal. Inside, Mitie’s robot cleaning machines were gliding quietly between travellers. And in Dublin Airport, our restaurant meal was brought out by a robot waitress called Bella.

    It was only later that I realised these weren’t isolated novelties. They’re part of a pattern: we’re used to talking about agentic computing in a software sense, but it also presents itself through hardware in the physical world.

    The journey begins: autonomous pods at Heathrow

    The Heathrow pods have been around for over a decade, but they still feel futuristic. You call one on demand, climb in, and it glides directly to your stop. There’s no driver, no timetable, and almost no wait. The system uses far less energy than a bus or car, and the whole thing is orchestrated by software that dispatches pods, avoids collisions and monitors usage.

    It’s a neat demonstration of automation in motion: you make a request, and a machine physically carries it out.

    Quiet efficiency: Mitie’s cleaning “cobots”

    Inside the terminal, Mitie’s autonomous cleaning robots were at work. These cobots use sensors and cameras to map the concourse, clean for hours, then return to charge before resuming their shifts. They handle repetitive tasks while human staff focus on the harder jobs.

    You could easily miss them — and that’s the point. They’re designed to blend in. The building, in a sense, is starting to help maintain itself.

    Meet Bella: the robot waitress

    In Dublin, things got more personal. The restaurant’s “BellaBot” rolled over with trays of food, blinking her animated eyes and purring polite phrases. The QR code was hard to scan (black text on a brass plate lacks contrast) and the ordering app didn’t work, so human staff had to step in — but the experience was still surreal.

    Bella’s design deliberately humanises the machine, using expressions and voice to make diners comfortable. For me, it was a bit too much. The technology was interesting; the personality, less so. I prefer my service robots less anthropomorphised.

    This tension — between automation and human comfort — is one of the trickiest design challenges of our time.

    A pattern emerges

    Taken together, the pods, cleaning cobots and BellaBot reveal different layers of the same trend:

    • Mobility agents like the Heathrow pods move people and goods.
    • Maintenance agents like Mitie’s cobots quietly maintain infrastructure.
    • Service agents like BellaBot interact directly with us.

    Each one extends software intelligence into the physical world. We’re no longer just automating data; we’re automating action.

    And none of them works completely alone. The pods are overseen by a control centre. The cobots have human supervisors. Bella needs a human backup when the tech fails. This is automation with a safety net — hybrid systems that rely on graceful human fallback.

    From airports to high streets

    You don’t have to go through Heathrow or Dublin to see the same shift happening.

    Closer to home, in Milton Keynes and Northampton (as well as in other towns and cities across the UK and more widely), small white Starship robots deliver groceries and takeaway food along pavements. They trundle quietly across zebra crossings, avoiding pedestrians and pets, using cameras and sensors to navigate. A smartphone app summons them; another unlocks the lid when your order arrives.

    Like the airport pods, they make autonomy feel normal. Children wave to them. People barely notice them anymore. The line between software, service and physical action is blurring fast.

    The thin end of the wedge

    These examples show how automation is creeping into daily life — not replacing humans outright, but augmenting us.

    The challenge now isn’t capability; it’s reliability. Systems like Bella’s ordering app work brilliantly until they don’t. What matters most is how smoothly they fail and how easily humans can step back in.

    For now, that balance still needs work. But it’s clear where things are heading. The real frontier of AI isn’t in chatbots or copilots — it’s in physical agents that move, clean, deliver and serve. It’s software made tangible.

    And while Bella’s blinking eyes may have been a step too far for me, it’s hard not to admire the direction of travel. The future isn’t just digital. It’s autonomous, electric, slightly quirky – and already waiting for you in the car park.

    Featured image: created by ChatGPT.

  • Delayed by the signs that are supposed to keep us moving

    Delayed by the signs that are supposed to keep us moving

    After a late flight back into Heathrow last night, I just wanted to get home. It should have taken about an hour. Instead, it took almost two and a half — a slow-motion crawl through the Home Counties, lit by flashing amber lights, unclear diversions and matrix signs that seemed to know nothing about what was actually happening on the ground.

    After I negotiated the first closure on the M25 (J18-20), National Highways was using the variable signs to warn of closures on the A1 — miles away and irrelevant to traffic heading north and about to turn onto the M1. What they didn’t mention was the full closure of the M1 (J9-11) just a few junctions ahead (I joined at J6A and saw nothing until after the J7/8 exit). When I finally reached the cones and flashing arrows, it was too late to do anything but follow the long, meandering diversion through half of Bedfordshire.

    The irony is that the technology is all there. We have live traffic feeds, sensors, cameras, and signs capable of displaying accurate, timely information. But it only works if the people behind the systems use it well. Otherwise, the signs are just expensive noise.

    And once you start seeing inconsistent or irrelevant messages, you stop trusting them. We’ve all driven under a gantry showing a sudden 40 mph limit for no apparent reason. Or a “Fog” warning on a perfectly clear morning. (I was once told by a former highways engineer that’s often down to spiders nesting in the sensor housing — which makes sense, but doesn’t exactly inspire confidence.)

    The result is predictable. When technology over-warns, people tune it out. It’s the same problem you see in many digital systems — from workplace dashboards to AI assistants. Data without context or accuracy doesn’t help anyone. Trust is built on relevance, timeliness and credibility. Without those, the message just becomes background noise.

    I’m not against the tech — quite the opposite. These systems can make our roads safer and our journeys smoother. But they only do that when they’re properly configured, maintained and used by people who understand what the data means. Otherwise, we end up ignoring the very systems designed to help us — and taking the scenic route home when all we really want is our own bed.

    Featured image: created by ChatGPT.

  • Transformation theatre: when digital isn’t enough

    Transformation theatre: when digital isn’t enough

    I’m not a frequent flyer. Indeed, I avoid flying if there’s an alternative (like high-speed rail), but I’ve had a British Airways account for years – probably over twenty. But when I tried to log in ahead of today’s flight to Dublin, the account seemed to have vanished. My PIN didn’t work, the password reset email never arrived, and WhatsApp customer support confirmed the bad news: my account had been closed.

    No problem, I thought – just reopen it. Except I couldn’t. The advisor explained that although the account didn’t exist anymore, my email address was still in their system. To open a new account, I’d need to use a different email address.

    I told them that I only have one address. Because, frankly, I shouldn’t need to create another just to fit around their IT quirks.

    Eventually, the advisor said they’d request my email be deleted so I could open a new account “after a few days”.

    In the meantime, I can still manage my booking using just my surname and booking reference – which always feels worryingly insecure. (Fun fact: behind almost every flight is the SABRE system that dates back to 1964).

    When transformation is skin-deep

    This is a classic example of where “digital transformation” falls short. The airline has done the visible stuff – shiny mobile apps, chatbots, WhatsApp support – but the underlying customer processes are unchanged.

    I can interact through modern digital channels, but I’m still dealing with the same rigid, legacy back-end that can’t handle a simple scenario like reopening a dormant account. The transformation has been cosmetic, not structural.

    It’s a reminder that customer experience isn’t about channels; it’s about outcomes. If a customer can’t achieve their goal, no amount of digital polish will make it a good experience.

    Joined-up journeys, not disconnected systems

    One theme that stood out at DTX London earlier this month was the importance of mapping and managing the customer journey – understanding what customers are trying to do, where friction exists, and how internal processes support (or hinder) that experience.

    It’s not enough to build another interface. True digital transformation requires breaking down silos, re-thinking workflows, and aligning systems around real customer needs. If the back-end can’t flex, the front-end experience will always be compromised.

    The lesson

    In the end, I’m sure my problem will sort itself out – BA will eventually delete my old email record, and I’ll open a new account. But the irony is clear: digital transformation done badly just creates new frustrations through modern channels.

    Transformation isn’t about adding apps and chatbots. It’s about re-engineering the processes that sit beneath them so customers don’t end up stuck in digital limbo.

    Featured image: created by ChatGPT.