TIL: About custom emojis in Microsoft Teams

I was in a Teams meeting recently, where someone added a version of our company logo as a reaction to a message. I’d never seen that before, and I was intrigued.

A little Googling later, and the AI Overview gave me my answer:

“To create custom emoji teams in Microsoft Teams, you need to upload images or GIFs as custom emojis, which can then be used by all members of your organization. This involves selecting the “Emoji, GIFs and Stickers” option in the message box, navigating to “Your org’s emoji,” and then choosing “Add emoji” to upload your custom content.”

This is what it looks like (though the screen grab has removed my cursor!)

And then…

You can read more about managing custom emoji in Teams in the Microsoft support article Use Custom Emoji in Microsoft Teams.

Same word, different world: cloud in context

Almost 15 years ago, the US National Institute of Standards and Technology (NIST) published its definition of cloud computing. It outlined the essential characteristics of cloud, the service models (Infrastructure as a Service, Platform as a Service, and Software as a Service), and the deployment models – public, private, and hybrid. For a while, it felt like the industry had a common understanding of what cloud meant.

Cloud, in this context, is a concept – a way of thinking about how IT services are delivered. We might implement it in different ways, but we’re broadly talking about on-demand computing resources, elastic scalability, and usage-based metering. At least, that was the theory.

Same trend, different lens

Lately, I’ve seen conversations where different groups of technology experts view cloud through very different lenses. One group talks about organisations repatriating workloads from the cloud, while another highlights how they’re helping businesses modernise to the cloud. Both are right – they’re just looking at the same thing from opposite ends.

One sees rising cloud costs and workload suitability questions. The other sees the opportunity to modernise legacy applications and deliver value faster. Neither is wrong. But without that shared context, the conversation quickly becomes disjointed.

It’s perfectly possible – and entirely logical – for a majority of organisations to be moving one or more workloads away from the cloud (e.g. IaaS workloads that are poorly suited, or were not transformed), while at the same time many others are embracing SaaS to modernise their business applications.

Security still matters – even in SaaS

In another recent discussion, a speaker gave a solid presentation on cloud security challenges – configuration management, data protection, identity controls, and the like. Then came a question from the audience: “But what about us in the world of SaaS?”

It was a fair point. But again, it revealed the disconnect. Security considerations don’t go away with SaaS – they just shift. You might not patch servers anymore, but you still need to manage identities, access, and data sharing.

Microsoft explained the shared responsibility model back in 2018. They made it clear that while your provider handles infrastructure and platform security, you’re still on the hook for things like information protection and user behaviour.

Stop the tribalism

This is where it all starts to fall apart. “Cloud” has become such a broad umbrella that it hides the diversity underneath. Infrastructure-, platform- and software-as-a-service are all cloud, but they’re not the same. It’s almost as though we need to begin every meeting with a clarification: which cloud are we talking about?

We need to move past this tribalism. It’s unhelpful, and often gets in the way of progress. When we default to our own perspective – infrastructure vs. applications, on-prem vs. SaaS – we risk talking past each other.

Speak the same language

As technologists, we have a responsibility to be clear. If we’re talking about cloud, let’s define the scope. If we’re making assumptions, let’s surface them. Whether your focus is on platforms, apps, infrastructure or security, the goal is the same: to deliver value through technology.

So next time someone starts a sentence with “cloud is…”, pause. Ask them which bit they mean. It might just save everyone a lot of confusion.

Featured image: created by ChatGPT

Is this really progress? The downsides of the app economy

A simple plan goes sideways

I should have just phoned the local taxi company.

But no. I listened to my son — who, to be fair, meant well — and took a modern, app-based approach. “Use Bolt,” he said. “It’s just like Uber. I use it all the time.” And that, it turns out, was the problem.

We’re on holiday in Andalucía, trying to get to a nearby village for dinner. I booked the ride through Bolt, selecting a pickup slot between 17:00 and 17:10. The app stressed the importance of punctuality: we had to be outside on time, or the driver might only wait five minutes before cancelling. Fair enough. So at 16:57 we were all standing by the road, ready to go.

By 17:10, nothing had happened.

Then the app admitted defeat. “No drivers available.” Ride cancelled. “WTF? I booked that hours ago”, thought I.

Platform to nowhere

OK, so Bolt has no drivers and our evening plans are in disarray. So I tried Uber. First, it quotes a price. Then it suggests paying extra to “increase the chance of getting a ride”. You know, because surge pricing and nudging people into premium options is apparently what counts as innovation now. Unsurprisingly, that didn’t work either. After taking my money, it spent several minutes repeatedly trying to get a driver to take the ride – feasting on my iPhone battery and drinking my mobile data allowance – before it finally dropped the cutesy “sorry about the wait” and admitted that there were no drivers available.

So here we are. No car. No ride. No confidence that we’d even get back if we somehow found our own way there.

I did everything right. I booked in advance. I turned up early. I trusted the system.

And the system is broken.

The slow death of a good idea

This is the very definition of enshittification — the gradual degradation of a service as it prioritises growth, then revenue, then cost-cutting. What starts out as a great user experience slowly turns to dust as the platform corners the market and lets quality slide. First they undercut local businesses, drawing customers in. Then they ration supply, push up prices, and shrug when things go wrong.

Talking with my son about his experience back home, it seems the traditional minicab firms are fewer and fussier – because they’ve been undercut too. Just like high street shops were when Amazon trained us to expect fast, cheap delivery with no human interaction. And what happens when the competition’s gone? Well, you can guess what happens to the prices and service levels.

App-based everything

It’s not just transport or retail. It’s food delivery, hotel bookings, eBooks, music, even how we consume the news. Everything’s an app. Everything’s global. Everything’s slick — until it isn’t.

And I’m clearly not alone in thinking this. I recently reshared a post on LinkedIn that echoed a lot of my frustrations. From self-checkouts to smart homes, booking travel to banking queries, we’ve been sold the idea that technology makes everything easier. But the pattern seems to be this: companies use technology to reduce their own costs — while shifting the work onto us.

We scan our own groceries. We manage our own bookings. We dig through online portals to find PDFs that digitise our paper records. We chat to bots that often can’t help, and fill in digital forms just to get the most basic of services.

Grumpy Old Man

So yes, I may joke about my Grumpy Old Man Syndrome, but the underlying point stands: just because we can do something digitally, doesn’t mean we should. Especially when the “innovation” really just amounts to taking humans out of the loop and putting consumers to work.

Of course, some people prefer the new ways. Not everyone wants to chat with a travel agent or ring up a cab office. For many, the ability to access data and services 24/7 is empowering.

Progress means keeping options open

But what matters most — and what so often gets forgotten — is that there should still be a choice. Because if everything goes digital with no fallback, we risk excluding those who can’t (or don’t want to) play the app economy game.

Maybe I am just an old man yelling at the cloud (service). But the app-based, gig-driven, digitally transformed economy isn’t always a step forwards. The frictionless experience often hides a lot of pain behind the scenes — for drivers, small businesses, and occasionally, for frustrated tourists standing by a road in southern Europe, hoping the app will work this time.

The world is increasingly run by tech bros in California. And we’re all a little poorer — culturally and economically — for it.

Featured image: created by ChatGPT

Post Script

[14/7/25] Yesterday morning, I walked to the local tourist office. I know, really old school! There, I spoke to a really helpful lady, who advised me on local buses and taxis. We called a central number for taxis in the town — no advanced bookings can be made but the first off the rank came to collect us. And we had our evening in the next village. It was lovely.

Everything is connected

“Blue Screens of Death” on public displays amuse me. There’s something oddly entertaining about a railway station departure board throwing a Windows fit or a high street advertising screen stuck waiting for a DHCP lease. It’s the digital equivalent of seeing behind the curtain — a reminder that there’s a PC in there somewhere, doing its best to run the show.

And that’s the thing, isn’t it? It is just a PC. These days, so many of the things we take for granted — the TV, the fridge, the car — are just another node on the network.

Take my car, for instance. I recently renewed the subscription for its connected services. For a modest fee, I can check if it’s locked, see its mileage, locate it on a map, get a journey history for my expense report, and browse a whole range of data that used to stay hidden behind a service technician’s diagnostic port. And it’s not just a luxury add-on — since 2018, EU regulations have required all new cars to include built-in connectivity for emergency calls (eCall), so the hardware is there whether I subscribe or not.

Later in the day, I stood at the station as a train rolled in. Its destination display had gone wrong and cycled through what looked like debug information — including an internal IP address and MAC address. Those probably belonged to the signage system, but I’d bet good money the whole train runs a local network, linking everything from passenger information to onboard diagnostics, and connecting it all back to control centres via mobile data or trackside Wi-Fi.

It made me think: everything really is connected now. Not in the “1990s vision of the Internet of Things” kind of way — more in a quietly omnipresent, of course it’s online sort of way. If it’s powered, it’s probably got a network interface. And if it’s got a network interface, it’s probably talking to something.

There’s real value in that — better data, faster diagnostics, new services, and a smoother user experience. But it comes with risk too: more complexity, more places where things can go wrong, and a bigger attack surface. When my coffee machine has its own app and occasionally needs a reboot, I know we’ve crossed a line.

The future is here. It’s connected. And every now and then, it forgets its IP address.

Featured image: created by ChatGPT

I got phished – and I should have known better

After decades working in IT, I really should have known better. I do the training. Every year, like everyone else, I click through the e-learning as quickly as I can, answer the quiz at the end, and move on to something else. I’ve even been that person who feels suitably smug when he spots the simulated phishing attempt and logs a support ticket, just in case. AKA a smart arse. A real joy to work with, I’m sure.

But this time, I slipped up. I was travelling back from Germany and needed to buy a CIV ticket for the UK leg of the journey – from St Pancras International to my local station. The ticket office at the station could only help with the outbound leg – and that was no use to me, because (to my eternal shame and against my environmental principles) I flew out. With Ryanair. For a short hop. I wanted to take the train both ways, but couldn’t justify the cost to my travelling companion. Even though the alternative was Ryanair.

With time running out, I tried Mark Smith, The Man in Seat 61, for advice. No luck. In a final attempt, I contacted Eurostar via X (formerly Twitter). I usually avoid X – because of Elon Musk – but I needed to get an answer quickly.

Spot the warning signs

I got a reply. From @EurostarUKcs – apparently the “Eurostar UK Support Line”. The odd capitalisation should have tipped me off. So should the fact they only had two followers and had been set up in May 2025. But I missed that. I followed them back. I sent them a Direct Message. They replied. This looked like help. And then they insisted on speaking by phone.

I was on a moving train so I explained that a call wasn’t ideal. But they called anyway, on WhatsApp, from a +254 number (Kenya). And that’s when it clicked. This was a scam.

Damage control

I hung up. Deleted the message history. Contacted Eurostar (the real one) just in case anyone tried to change my booking. Luckily, I hadn’t given away much more than the sort of information you might post in a public forum. But it was still more than I should have.

I got away lightly. There was no harm done, just a dented ego. But the whole episode was a timely reminder: it’s not just your mother-in-law who gets phished. It can happen to any of us – even the smug ones who think they know better.

Lessons learned

  • Don’t assume every official-looking account on social media is legit – especially if it’s brand new and barely followed.
  • Be wary of unsolicited calls – especially via WhatsApp or from unusual international numbers.
  • Trust your gut – if something feels off, it probably is.

I share this not because I want sympathy, but because it’s important. If it can happen to me, it can happen to anyone. Stay alert.

Featured image: created by ChatGPT

How I dodged Microsoft’s Copilot price hike

I’ve been reading for a while about Microsoft rolling out Copilot to Microsoft 365 Personal and Family subscriptions. It’s been hyped as the future of productivity – but it comes at a price.

In my case, the price was set to rise from £79.99 to £104.99 a year. That’s quite a jump, especially when the main new feature (Microsoft 365 Copilot) is something I already have access to at work. For personal use, I rely on ChatGPT – I think it gives me better results, at least when I don’t need to connect my AI assistant to corporate data.

Don’t get me wrong – Copilot can be useful. But I don’t need it at home and at work, and I certainly don’t want to pay extra for something I don’t use.

Over the last few months, I’ve seen various posts suggesting a workaround – downgrading to “Classic” Microsoft 365. This essentially gives you the same subscription you had before all the AI bells and whistles were added, without the price hike.

Today, after a timely nudge from Jamie Thomson, I went ahead.

How I did it

There’s a helpful Money Saving Expert article that lays out the steps. But in short:

  • I went to the Microsoft Account dashboard.
  • I hit “cancel” on my existing Microsoft 365 Family subscription. That sounds scary, but it doesn’t actually cancel right away.
  • Microsoft then offered me the chance to switch to a different plan – one that wasn’t visible previously.
  • I chose the “Classic” subscription. Same renewal date, same benefits (for me), but back to £79.99.

Worth it?

Time will tell, but for now, I’ve kept the services I use and avoided an unwanted price jump. If you don’t need Copilot at home – and already have it at work – this might be worth a look.

Featured image: author’s own screenshot from the Microsoft website.

Monthly retrospective: May 2025

I’ve been struggling to post retrospectives this year – they are pretty time consuming to write. But, you may have noticed the volume of content on the blog increasing lately. That’s because I finally have a workflow with ChatGPT prompts that help me draft content quickly, in my own style. (I even subscribe to ChatGPT now, and regular readers will know how I try to keep my subscription count down.) Don’t worry – it’s still human-edited (and there are parts of the web that ChatGPT can’t read – like my LinkedIn, Instagram and even parts of this blog) so it should still be authentic. It’s just less time-consuming to write – and hopefully better for you to read.

On the blog…

Home Assistant tinkering (again)

I’ve been continuing to fiddle with my smart home setup. This month’s project was replacing the ageing (and now unsupported) Volvo On Call integration in Home Assistant with the much better maintained HA Volvo Cars HACS integration. It works brilliantly – once you’ve jumped through the hoops to register for an API key via Volvo’s developer portal.

And no, that doesn’t mean I can now summon my car like KITT in Knight Rider – but I can check I locked it up and warm it up remotely. Which is almost as good. (As an aside, I saw KITT last month at the DTX conference in Manchester.)

Software-defined vehicles

On the subject of cars, I’ve been reflecting on how much modern cars depend on software – regardless of whether they’re petrol, diesel or electric. The EV vs. ICE debate often centres on simplicity and mechanics (fewer moving parts in an EV), but from my experience, the real pain points lie in the digital layer.

Take my own car (a Volvo V60, 2019 model year). Mechanically it’s fine and it’s an absolute luxury compared with the older cars that my wife and sons drive, but I’ve seen:

  • The digital dashboard reboot mid-drive
  • Apple CarPlay refusing to connect unless I “reboot” the vehicle
  • Road sign recognition systems confidently misreading speed limits

Right now, it’s back at the body shop (at their cost, thankfully) for corrosion issues on a supposedly premium marque. My next car will likely be electric – but it won’t be the drivetrain that convinces me. It’ll be the software experience. Or, more realistically, the lack of bad software. Though, based on Jonathan Phillips’ experience, new car software contains alarming typos in the UI, which indicates a lack of testing…

Thinking about the impact of generative AI

This update isn’t meant to be about AI – but it seems it is – because it’s become such a big part of my digital life now. And, increasingly, it’s something I spend more time discussing with my clients.

AI isn’t new. We’ve had robotic process automation (RPA), machine learning, data science and advanced analytics for years. I even studied neural networks at Poly’ in the early 1990s. But it’s generative AI that’s caught everyone’s imagination – and their budgets.

In Episode 239 of the WB-40 podcast (AI Leadership), I listened to Matt Cockbill talk about how it’s prompting a useful shift in how we think about technology. Less about “use cases” and more about “value cases” – how tech can improve outcomes, streamline services, and actually help achieve what the organisation set out to do.

The rush to digitise during COVID saw huge amounts of spending – enabling remote working or entrenching what was already there (hello, VDI). But now it feels like the purse strings are tightening, and some of that “why are we doing this again?” thinking is creeping back in. Just buying licences and rolling out tools is easy. Changing the way people work and deliver value? That’s the real work.

Meal planning… with a side of AI

I’ve also been experimenting with creating an AI-powered food coach to help me figure out what to eat, plan ahead, and avoid living off chocolate Hobnobs and toasted pitta. Still early days – but the idea of using an assistant to help nudge me towards healthier, simpler food is growing on me.

Reading: The Midnight Library

I don’t read much fiction – I’m more likely to be found trawling through a magazine or scrolling on my phone – but Matt Haig’s “The Midnight Library” really got me. OK, so technically, I didn’t read it – it was an impulse purchase to use some credits before cancelling my Audible account – but it was a great listen. Beautifully read by Carey Mulligan, it’s one of those rare books that manages to be both dark and uplifting. Some reviews suggest that not everyone feels the same way – and my reading it at a time of grief and loss may have had an impact – but I found it to be one of my best reads in a long time.

Without spoiling anything, the idea of a liminal space between life and death – where you can explore the infinite versions of yourself – is quietly brilliant. Highly recommended. So much so that I bought another copy (dead tree edition) for my wife.

On LinkedIn this month…

It’s been a lively month over on LinkedIn, with my posts ranging from AI hype to the quirks of Gen-Z slang (and a fair dose of Node4 promotion). These are just a few of the highlights – follow me to get the full experience:

  • Jony and Sam’s mysterious new venture
    I waded into the announcement from Jony Ive and Sam Altman with, let’s say, a healthy dose of scepticism. A $6.5bn “something” was teased with a bland video and a promo image that felt more 80s album cover than product launch. It may be big. But right now? Vapourware.
  • Is the em dash trolling us?
    I chipped in on the debate about AI-written content and the apparent overuse of em dashes (—) – often flagged as an “AI tell” – usually by people who a) don’t understand English grammar or b) don’t know where LLMs learned to write. (I am aware that I incorrectly use en dashes in these posts, because people seem to find them less “offensive”.) But what if the em dash is trolling us?
  • Skibidi-bibidi-what-now?
    One of the lighter moments came with a post about Gen-Z/Gen-Alpha slang. As a Gen-Xer with young adult kids, I found a “translator” of sorts – and it triggered a few conversations about how language evolves. No promises I’ll be dropping “rizz” into meetings just yet. Have a look.
  • Politeness and prompting
    Following a pub chat with Phil Kermeen, I shared a few thoughts on whether being polite to AI makes a difference. TL;DR: it does. Here’s the post.
  • Mid-market momentum
    Finally, there have been lots of posts around the Node4 2025 Mid-Market Report. It was a big effort from a lot of people, including me, and I’m really proud of what we’ve produced. It’s packed with insights, based on bespoke research of over 600 IT and Business leaders.

Photos

A few snaps from my Insta’ feed…

https://www.instagram.com/markwilsonuk/p/DJr5Ui8N94u

For more updates…

That’s all for now. I probably missed a few things, but it’s a decent summary of what I’ve been up to at home and at work. I no longer use X, but follow me on LinkedIn (professional), Instagram (visual) and this blog for more updates – depending on which content you like best. Maybe even all three!

Next month…

A trip to Hamburg (to the world’s largest model railway); ramping up the work on Node4’s future vision; and hopefully I’ll fill in some of the gaps between January and May’s retrospectives!

Featured image: created by ChatGPT

Who gets paid when the machines take over?

Yesterday evening, I was at the Bletchley AI User Group in Milton Keynes. One of the talks was from Stephanie Stasey (/in/missai) (aka Miss AI), titled “Gen AI vs. white collar workers and trad wives – building a robot to make my bed”.

It was delivered as a monologue – which sounds negative, but really isn’t. In fact, it was engaging, sharp, and packed with food for thought. Stephanie brought a fresh perspective to a topic we’re all thinking about: how AI is reshaping both the world of work and the way we live at home.

The labour that goes unnoticed (but not undone)

One part of the talk touched on “trad wives” – not a term I was especially familiar with, but the theme resonated.

If you’d asked my wife and me in our 20s how we’d divide up household tasks, we might have offered up a fair and balanced plan. But real life doesn’t always match the theory.

These days, we both work part-time – partly because unpaid labour (childcare, cooking, washing, cleaning, all the life admin) still needs doing. And there don’t seem to be enough hours when the laptop is closed.

The system isn’t broken – it’s working exactly as designed

The point I’ve been turning over for a while is this: it feels like we’re on the edge of something big.

We could be on the brink of a fundamental shift in how we think about work – if those in power wanted to make radical changes. I’ll avoid a full political detour, though I’m disheartened by the rise of the right and how often “ordinary people” are reminded of their place. (My youngest son calls me a champagne socialist – perhaps not entirely unfairly.)

Still, AI presents us with a rare opportunity to do things differently.

But instead of rethinking how work and value are distributed, we’re told to brace for disruption. The current narrative is that AI is coming for our jobs. Or a variation on that theme: “Don’t worry,” we’re told, “it won’t take your job – but someone using AI might”. That line’s often repeated. It’s catchy. But it’s also glib – and not always true.

I’m close enough to retirement that the disruption shouldn’t hit me too hard. But for my children’s generation? The impact could be massive.

What if we taxed the agents?

So here’s a thought: what if we taxed the AI agents?

If a business creates an agent to do the work a person would normally do – or could reasonably do – then that agent is taxed, like a human worker would be. It’s still efficient, still scalable, but the benefits are shared.

And how would we live if the jobs go away? That’s where Universal Basic Income (UBI) comes in, funded by taxes on agents as well as on human effort.

Put simply, UBI provides everyone with enough to cover their basic needs – no strings attached. People can still work (and many will). For extra income. For purpose. For contribution. It just doesn’t have to be 9-to-5, five days a week. It could be four. Or two. The work would evolve, but people wouldn’t be left behind. It also means that the current, complex, and often unjust benefits system could be removed (perhaps with some exceptions, but certainly for the majority).

What could possibly go right?

So yes, the conversation around AI is full of what could go wrong. But what if we focused on what could go right?

We’ve got a window here – a rare one – to rethink work, contribution, and value. But we need imagination. And leadership. And a willingness to ask who benefits when the machines clock in.

Further reading on UBI

If you’re interested in UBI and how it might work in practice, here are some useful resources:

Featured image: author’s own.

Does vibe coding have a place in the world of professional development?

I’ve been experimenting with generative AI lately – both in my day job and on personal projects – and I thought it was time to jot down some reflections. Not a deep think piece, just a few observations about how tools like Copilot and ChatGPT are starting to shape the way I work.

In my professional life, I’ve used AI to draft meeting agendas, prepare documents, sketch out presentation outlines, and summarise lengthy reports. It’s a co-pilot in the truest sense – it doesn’t replace me, but it often gives me a head start. That said, the results are hit and miss, and I never post anything AI-generated without editing. Sometimes the AI gives me inspiration. Other times, it gives me American spelling and questionable grammar.

But outside work is where things got interesting.

I accidentally vibe coded

It turns out there’s a name for what I’ve been doing in my spare time: vibe coding.

First up, I wanted to connect a microcontroller to an OLED display and to control the display with a web form and a REST API. I didn’t know exactly how to do it, but I had a vague idea. I asked ChatGPT. It gave me code, wiring instructions, and step-by-step guidance to flash the firmware. It didn’t work out of the box – but with a few nudges to fix a compilation error and rework the wiring, I got it going.
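To give a flavour of the kind of logic involved, here’s a hedged sketch of the message-handling piece of that project – the part that takes text from the web form or REST call and fits it onto a small OLED. The names, panel dimensions and JSON shape are my illustrative assumptions, not the actual code ChatGPT generated:

```python
# Illustrative sketch only: parse a JSON POST body like {"text": "..."} and
# wrap the text to fit a small OLED panel. Sizes are assumed, not measured.

import json
import textwrap

DISPLAY_COLS = 16   # assumed characters per line (128px-wide panel, 8px font)
DISPLAY_ROWS = 4    # assumed visible lines

def handle_post(body: str) -> list[str]:
    """Return the lines to draw on the display, wrapped and truncated
    to the panel size."""
    payload = json.loads(body)
    text = str(payload.get("text", ""))
    # textwrap also breaks words longer than one line, so nothing overflows
    lines = textwrap.wrap(text, width=DISPLAY_COLS) or [""]
    return lines[:DISPLAY_ROWS]
```

On the real device, the firmware would pass each returned line to the display driver; the point of the sketch is just that the “vibe coded” part still needed deliberate decisions about wrapping and truncation.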

Then, I wanted to create a single-page website to showcase a custom GPT I’d built. Again, ChatGPT gave me the starter template. I published it to Azure Static Web Apps, with GitHub for source control and a CI/CD pipeline. All of it AI-assisted.
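For anyone curious what that pipeline looks like: when you link a GitHub repo, Azure Static Web Apps generates a workflow file along these lines. The action name is the real one Microsoft publishes; the paths and secret name here are placeholders for my setup rather than the exact file it produced:

```yaml
# Sketch of an Azure Static Web Apps deployment workflow - values illustrative
name: Deploy static site
on:
  push:
    branches: [main]
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          action: "upload"
          app_location: "/"     # site source at the repo root
          output_location: ""   # no build step for a single static page
```

Every push to `main` then rebuilds and redeploys the site – which, for a single-page showcase, is honestly more CI/CD than it needs, but it was free to set up.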

Both projects were up and running quickly – but finishing them took a lot more effort. You can get 80% of the way with vibes, but the last 20% still needs graft, knowledge, or at the very least, stubborn persistence. And the 80% is the quick part – the 20% takes the time.

What is vibe coding?

In short: it’s when you code without fully knowing what you’re doing. You rely on generative AI tools to generate snippets, help debug errors, or explain unfamiliar concepts. You follow the vibe, not the manual.

And while that might sound irresponsible, it’s increasingly common – especially as generative AI becomes more capable. If you’re solving a one-off problem or building a quick prototype, it can be a great approach.

I should add some context: I do have a Computer Studies degree, and I can code. But aside from batch scripts and a bit of PowerShell, I haven’t written anything professionally since my 1992/93 internship – and that was in COBOL.

So, yes, I have some idea of what’s going on. But I’m still firmly in vibe territory when it comes to ESP32 firmware or HTML/CSS layout.

The good, the bad, and the undocumented

Vibe coding has clear advantages:

  • You can build things you wouldn’t otherwise attempt.
  • You learn by doing – with AI as your tutor.
  • You get to explore new tech without wading through outdated forum posts.

But it also has its pitfalls:

  • The AI isn’t always right (and often makes things up).
  • Debugging generated code can be a nightmare.
  • If you don’t understand what the code does, maintaining it is difficult – if not impossible.
  • AI doesn’t always follow best practices – and those change over time.
  • It may generate code that’s based on copyrighted sources. Licensing isn’t always clear.

That last pair is increasingly important. Large language models are trained on public code from the Internet – but not everything online is a good example. Some of it is outdated. Some of it is inefficient. Some of it may not be free to use. So unless you know what you’re looking at (and where it came from), you risk building on shaky ground.

Where next?

Generative AI is changing how we create, code, and communicate. But it’s not a magic wand. It’s a powerful assistant – especially for those of us who are happy to get stuck in without always knowing where things will end up.

Whether I’ve saved any time is up for debate. But I’ve definitely done more. Built more. Learned more.

And that feels like progress.

A version of this post was originally published on the Node4 blog.

Featured image by James Osborne from Pixabay.

Building my own train departure board (because why not?)

In the UK, we’re lucky to have access to a rich supply of transport data. From bus timetables to cycle hire stats, there’s a load of open data just waiting to be used in clever ways. Some of the more interesting data – at least for geeks like me – is contained in the National Rail data feeds. These provide real-time information about trains moving around the country. Every late-running service, every platform change, every cancelled train… it’s all there, in near real-time.

There are already some excellent tools built on top of this data. You may have come across some of these sites:

  • RealTimeTrains: essential for anyone who wants to go beyond the standard passenger information displayed at the station.
  • Live Departures Info: perfect for an always-on browser display – e.g. on digital signage.
  • LED Departure Board: provides a view of upcoming departures or arrivals at a chosen station.

There are even physical displays, like those from UK Departure Boards. These look to be beautifully made and ideal for an office or hallway wall. But they’re not cheap. And that got me thinking…

Why not build my own?

Armed with a Raspberry Pi Zero W and an inexpensive OLED display, I decided to have a go at making my own.

A quick bit of Googling turned up an excellent website by Jonathan Foot, whose DIY departure board guidance gave me exactly what I needed. It walks through how to connect everything up, pull real-time train data, and output it to a screen. There’s even a GitHub repo for the code. Perfect.

Well, almost.

A slightly different display

Jonathan recommends a particular OLED display but I thought it was a bit on the pricey side. In the spirit of experimentation (and budget-conscious tinkering), I opted for a 3.12″ OLED display (256×64, SSD1322, SPI) from AliExpress. I think it’s the same – just from another channel.

This wasn’t entirely straightforward.

The display I received was described as SPI-compatible, but it wasn’t actually configured for SPI out of the box. I sent the first one back. Then I realised they’re all like that – you have to modify the board yourself.

Breaking out the soldering iron

There were no jumpers to move. No handy DIP switch to flick. Instead, I had to convert it from 80xx (parallel) mode to 4SPI (four-wire serial) mode. This involved removing a resistor (R6), then soldering a link between two pads (R5). Not the hardest job in the world, but definitely not plug-and-play either.

This wasn’t ideal. I’m terrible at soldering, and I’d deliberately bought versions of the Raspberry Pi and the display with pluggable headers. But hey, I’d got this far. The worst thing that could happen is that I blew up a £12 display, right?

The modifications that I made to the display. (The information I needed is printed as a table on the back of the board.)

Once that was done, though – magic! The display came to life with data from my local station, and a rolling list of upcoming arrivals and departures. It’s a surprisingly satisfying thing to see working for the first time, especially knowing what went into making it happen.
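The display driver takes care of the pixels, but laying out each departure as a neat fixed-width row is plain string handling. Here’s a minimal sketch of that layout step – the column widths and the 42-characters-per-line figure are my own assumptions for a 256×64 panel with a narrow font, not Jonathan Foot’s code:

```python
# Illustrative sketch: render one departure row for a fixed-width board.
# Width and field choices are assumptions, not the published project's values.

def format_departure(std: str, destination: str, etd: str, width: int = 42) -> str:
    """Render 'HH:MM  Destination          Status', padded to a fixed width."""
    left = f"{std}  {destination}"
    right = etd  # e.g. "On time", "Exp 18:04", "Cancelled"
    gap = width - len(left) - len(right)
    if gap < 1:
        # Truncate long destination names so the status column stays aligned
        left = left[: width - len(right) - 1]
        gap = 1
    return left + " " * gap + right

board = [
    format_departure("17:32", "London St Pancras Intl", "On time"),
    format_departure("17:45", "Bedford", "Exp 17:51"),
]
```

Feeding rows like these to the panel, one per line, is what gives the familiar station-board look – right-aligned status, everything lined up regardless of destination length.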

What’s next?

All that’s left now is to print a case (or find someone with a 3D printer who owes me a favour). For around £50 in total, I’ve got a configurable, real-time train departure board that wouldn’t look out of place in a study, hallway or even the living room (subject to domestic approval, of course).

It’s been a fun little side project. A mix of software tinkering, a bit of hardware hacking, and that moment of joy when it all works together. And if you’ve ever looked at those expensive display boards and thought, I bet I could make one of those – well, now you know… you probably can.

Featured image: author’s own.