Andalucía remembered

Two decades on, we came once more,
To southern Spain, and sun-kissed shores.
Nerja welcomed with skies so wide,
Where sea and mountain gently collide.

From terrace high, the blue expanse,
Each morning caught us in a trance.
Fresh coffee, then to the beach we’d roam,
Before the heat would drive us home.

Villages basked in golden light,
The sea turned silver come the night.
Warmth on skin, cool drinks in hand,
We let the days unfold unplanned.

Laughter echoed, glasses clinked,
We paused, we smiled, we stopped to think.
Days defined by time and place —
Sun and family, gentle pace.

One final day in Málaga’s hum,
Before the holiday was done.
Now back to clouds and colder climes,
But held inside, those warmer times.

(A collaboration between me and ChatGPT… showing why I should stick to tech and leave the poetry to poets…)

Featured image: author’s own.

Monthly retrospective: May 2025

I’ve been struggling to post retrospectives this year – they are pretty time-consuming to write. But you may have noticed the volume of content on the blog increasing lately. That’s because I finally have a workflow with ChatGPT prompts that helps me draft content quickly, in my own style. (I even subscribe to ChatGPT now, and regular readers will know how I try to keep my subscription count down.) Don’t worry – it’s still human-edited (and there are parts of the web that ChatGPT can’t read – like my LinkedIn, Instagram and even parts of this blog), so it should still be authentic. It’s just less time-consuming to write – and hopefully better for you to read.

On the blog…

Home Assistant tinkering (again)

I’ve been continuing to fiddle with my smart home setup. This month’s project was replacing the ageing (and now unsupported) Volvo On Call integration in Home Assistant with the much better maintained HA Volvo Cars HACS integration. It works brilliantly – once you’ve jumped through the hoops to register for an API key via Volvo’s developer portal.

And no, that doesn’t mean I can now summon my car like KITT in Knight Rider – but I can check that it’s locked and warm it up remotely. Which is almost as good. (As an aside, I saw KITT last month at the DTX conference in Manchester.)
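To give a flavour of what the integration enables, here’s a minimal Home Assistant automation sketch. The entity and service names are illustrative – the actual entity IDs depend on the integration version and your vehicle – but the shape is standard Home Assistant YAML:

```yaml
# Nag me if the car is still unlocked late at night.
# Entity IDs below are examples only – check Developer Tools → States
# for the names the Volvo Cars integration actually creates.
automation:
  - alias: "Warn if the Volvo is unlocked at night"
    trigger:
      - platform: time
        at: "22:00:00"
    condition:
      - condition: state
        entity_id: lock.volvo_v60_door_lock   # hypothetical entity ID
        state: unlocked
    action:
      - service: notify.mobile_app_phone      # hypothetical notify target
        data:
          message: "The Volvo is still unlocked!"
```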

Software-defined vehicles

On the subject of cars, I’ve been reflecting on how much modern cars depend on software – regardless of whether they’re petrol, diesel or electric. The EV vs. ICE debate often centres on simplicity and mechanics (fewer moving parts in an EV), but from my experience, the real pain points lie in the digital layer.

Take my own car (a Volvo V60, 2019 model year). Mechanically it’s fine, and it’s an absolute luxury compared with the older cars that my wife and sons drive, but I’ve seen:

  • The digital dashboard reboot mid-drive
  • Apple CarPlay refusing to connect unless I “reboot” the vehicle
  • Road sign recognition systems confidently misreading speed limits

Right now, it’s back at the body shop (at their cost, thankfully) for corrosion issues on a supposedly premium marque. My next car will likely be electric – but it won’t be the drivetrain that convinces me. It’ll be the software experience. Or, more realistically, the lack of bad software. Though, based on Jonathan Phillips’ experience, some new car software ships with alarming typos in the UI, which suggests a lack of testing…

Thinking about the impact of generative AI

This update isn’t meant to be about AI – but it seems it is – because it’s become such a big part of my digital life now. And, increasingly, it’s something I spend more time discussing with my clients.

AI isn’t new. We’ve had robotic process automation (RPA), machine learning, data science and advanced analytics for years. I even studied neural networks at Poly’ in the early 1990s. But it’s generative AI that’s caught everyone’s imagination – and their budgets.

In Episode 239 of the WB-40 podcast (AI Leadership), I listened to Matt Cockbill talk about how it’s prompting a useful shift in how we think about technology. Less about “use cases” and more about “value cases” – how tech can improve outcomes, streamline services, and actually help achieve what the organisation set out to do.

The rush to digitise during COVID saw huge amounts of spending – enabling remote working or entrenching what was already there (hello, VDI). But now it feels like the purse strings are tightening, and some of that “why are we doing this again?” thinking is creeping back in. Just buying licences and rolling out tools is easy. Changing the way people work and deliver value? That’s the real work.

Meal planning… with a side of AI

I’ve also been experimenting with creating an AI-powered food coach to help me figure out what to eat, plan ahead, and avoid living off chocolate Hobnobs and toasted pitta. Still early days – but the idea of using an assistant to help nudge me towards healthier, simpler food is growing on me.

Reading: The Midnight Library

I don’t read much fiction – I’m more likely to be found trawling through a magazine or scrolling on my phone – but Matt Haig’s “The Midnight Library” really got me. OK, so technically, I didn’t read it – it was an impulse purchase to use some credits before cancelling my Audible account – but it was a great listen. Beautifully read by Carey Mulligan, it’s one of those rare books that manages to be both dark and uplifting. Some reviews suggest that not everyone feels the same way – and my reading it at a time of grief and loss may have had an impact – but I found it to be one of my best reads in a long time.

Without spoiling anything, the idea of a liminal space between life and death – where you can explore the infinite versions of yourself – is quietly brilliant. Highly recommended. So much so that I bought another copy (dead tree edition) for my wife.

On LinkedIn this month…

It’s been a lively month over on LinkedIn, with my posts ranging from AI hype to the quirks of Gen-Z slang (and a fair dose of Node4 promotion). These are just a few of the highlights – follow me to get the full experience:

  • Jony and Sam’s mysterious new venture
    I waded into the announcement from Jony Ive and Sam Altman with, let’s say, a healthy dose of scepticism. A $6.5bn “something” was teased with a bland video and a promo image that felt more 80s album cover than product launch. It may be big. But right now? Vapourware.
  • Is the em dash trolling us?
    I chipped in on the debate about AI-written content and the apparent overuse of em dashes (—) – often flagged as an “AI tell” – usually by people who either a) don’t understand English grammar or b) don’t realise where LLMs learned to write. (I am aware that I incorrectly use en dashes in these posts, because people seem to find them less “offensive”.) But what if the em dash is trolling us?
  • Skibidi-bibidi-what-now?
    One of the lighter moments came with a post about Gen-Z/Gen-Alpha slang. As a Gen-Xer with young adult kids, I found a “translator” of sorts – and it triggered a few conversations about how language evolves. No promises I’ll be dropping “rizz” into meetings just yet. Have a look.
  • Politeness and prompting
    Following a pub chat with Phil Kermeen, I shared a few thoughts on whether being polite to AI makes a difference. TL;DR: it does. Here’s the post.
  • Mid-market momentum
    Finally, there have been lots of posts around the Node4 2025 Mid-Market Report. It was a big effort from a lot of people, including me, and I’m really proud of what we’ve produced. It’s packed with insights, based on bespoke research of over 600 IT and business leaders.

Photos

A few snaps from my Insta’ feed…

https://www.instagram.com/markwilsonuk/p/DJr5Ui8N94u

For more updates…

That’s all for now. I probably missed a few things, but it’s a decent summary of what I’ve been up to at home and at work. I no longer use X, but follow me on LinkedIn (professional), Instagram (visual) and this blog for more updates – depending on which content you like best. Maybe even all three!

Next month…

A trip to Hamburg (to the world’s largest model railway); ramping up the work on Node4’s future vision; and hopefully I’ll fill in some of the gaps between January and May’s retrospectives!

Featured image: created by ChatGPT

Does vibe coding have a place in the world of professional development?

I’ve been experimenting with generative AI lately – both in my day job and on personal projects – and I thought it was time to jot down some reflections. Not a deep think piece, just a few observations about how tools like Copilot and ChatGPT are starting to shape the way I work.

In my professional life, I’ve used AI to draft meeting agendas, prepare documents, sketch out presentation outlines, and summarise lengthy reports. It’s a co-pilot in the truest sense – it doesn’t replace me, but it often gives me a head start. That said, the results are hit and miss, and I never post anything AI-generated without editing. Sometimes the AI gives me inspiration. Other times, it gives me American spelling and questionable grammar.

But outside work is where things got interesting.

I accidentally vibe coded

It turns out there’s a name for what I’ve been doing in my spare time: vibe coding.

First up, I wanted to connect a microcontroller to an OLED display and control the display with a web form and a REST API. I didn’t know exactly how to do it, but I had a vague idea. I asked ChatGPT. It gave me code, wiring instructions, and step-by-step guidance to flash the firmware. It didn’t work out of the box – but with a few nudges to fix a compilation error and rework the wiring, I got it going.
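For the curious, the shape of the API was roughly this. It’s a minimal sketch in plain Python (using the standard library’s http.server) rather than the actual ESP32 firmware, and the endpoint path and JSON payload are illustrative, not what ChatGPT generated:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In the real project this string would be pushed to the OLED driver;
# here it just lives in memory so the flow can be followed.
display_text = ""


def parse_display_request(body: bytes) -> str:
    """Extract the text to show on the display from a JSON POST body."""
    payload = json.loads(body)
    return str(payload.get("text", ""))


class DisplayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        global display_text
        if self.path != "/api/display":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        display_text = parse_display_request(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # The web form polls this endpoint to show the current message.
        if self.path == "/api/display":
            body = json.dumps({"text": display_text}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


def run(port: int = 8080) -> None:
    """Start the server (not called here so the sketch stays inert)."""
    HTTPServer(("0.0.0.0", port), DisplayHandler).serve_forever()
```

On the microcontroller itself, the equivalent logic sits in the firmware’s request handler and writes to the display driver instead of a variable – but the request/response shape is the same.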

Then, I wanted to create a single-page website to showcase a custom GPT I’d built. Again, ChatGPT gave me the starter template. I published it to Azure Static Web Apps, with GitHub for source control and a CI/CD pipeline. All of it AI-assisted.
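The CI/CD part is mostly boilerplate that Azure generates for you when you link the repository. A workflow along these lines is what ends up in the repo (the secret name and paths are illustrative – Azure writes its own when you create the Static Web App):

```yaml
name: Deploy to Azure Static Web Apps
on:
  push:
    branches: [main]

jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Azure/static-web-apps-deploy@v1
        with:
          # Secret name is an example – Azure creates one per app
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          action: "upload"
          app_location: "/"     # root of the single-page site
          output_location: ""   # no build step for plain HTML/CSS
```

Every push to main then builds and deploys automatically – which, for a single-page site, feels gloriously over-engineered and entirely free.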

Both projects were up and running quickly – but finishing them took a lot more effort. You can get 80% of the way with vibes, but the last 20% still needs graft, knowledge, or at the very least, stubborn persistence. And the 80% is the quick part – the 20% takes the time.

What is vibe coding?

In short: it’s when you code without fully knowing what you’re doing. You rely on generative AI tools to generate snippets, help debug errors, or explain unfamiliar concepts. You follow the vibe, not the manual.

And while that might sound irresponsible, it’s increasingly common – especially as generative AI becomes more capable. If you’re solving a one-off problem or building a quick prototype, it can be a great approach.

I should add some context: I do have a Computer Studies degree, and I can code. But aside from batch scripts and a bit of PowerShell, I haven’t written anything professionally since my 1992/93 internship – and that was in COBOL.

So, yes, I have some idea of what’s going on. But I’m still firmly in vibe territory when it comes to ESP32 firmware or HTML/CSS layout.

The good, the bad, and the undocumented

Vibe coding has clear advantages:

  • You can build things you wouldn’t otherwise attempt.
  • You learn by doing – with AI as your tutor.
  • You get to explore new tech without wading through outdated forum posts.

But it also has its pitfalls:

  • The AI isn’t always right (and often makes things up).
  • Debugging generated code can be a nightmare.
  • If you don’t understand what the code does, maintaining it is difficult – if not impossible.
  • AI doesn’t always follow best practices – and those change over time.
  • It may generate code that’s based on copyrighted sources. Licensing isn’t always clear.

That last pair is increasingly important. Large language models are trained on public code from the Internet – but not everything online is a good example. Some of it is outdated. Some of it is inefficient. Some of it may not be free to use. So unless you know what you’re looking at (and where it came from), you risk building on shaky ground.

Where next?

Generative AI is changing how we create, code, and communicate. But it’s not a magic wand. It’s a powerful assistant – especially for those of us who are happy to get stuck in without always knowing where things will end up.

Whether I’ve saved any time is up for debate. But I’ve definitely done more. Built more. Learned more.

And that feels like progress.

A version of this post was originally published on the Node4 blog.

Featured image by James Osborne from Pixabay.

Generative AI is just a small part of the picture

This content is 1 year old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

This post previously appeared on my LinkedIn feed. I thought it should have been here…

They say that, when all you have is a hammer, every problem that needs solving looks like a nail. Well, something like that anyway. Generative AI (GenAI) is getting a lot of airtime right now, but it’s not the answer to everything. Want a quick draft of some content? Sure, here it is – I’ve made up some words for you that sound like they could work. (That is literally how an LLM works.)

On the other hand, I spent yesterday afternoon grappling with Microsoft Copilot as it gave me lots of credible-sounding information… with sources that just don’t exist, or don’t say the things it says they do. That’s quite frightening, because many people will just believe the made-up stuff, repeat it and say “I got it from Copilot/ChatGPT/insert tool of choice”.

Anyway, artificial intelligence (AI) is more than just GenAI – and last night I watched this video from Eric Siegel. Once all the hype about GenAI has died down, maybe we’ll find some better uses for other AI technologies like predictive AI. One thing is for sure, Artificial General Intelligence (AGI) is not coming any time soon…