Are you in the UK and looking at using Apple AirPods Pro 2 as hearing aids? Read this first!

I’m sorry for the clickbait headline, but the urgency is real, because I’m seeing people making purchasing decisions based on a technical feature that’s not available in the UK yet.

If you’re a middle-aged man or woman, you may have noticed that it’s difficult to hear people in some social situations. I certainly have, and so have some of my friends. Generally in pubs and bars with hard surfaces and lots of background noise.

I tell myself that I need to get a professional hearing test. I keep trying at Specsavers when I have my eyes tested but have struggled with appointment availability. And anyway, it’s not that bad. Plus I don’t have a couple of thousand pounds ready for buying hearing aids.

Apple is bringing Hearing Health capabilities to the masses

When I heard that Apple AirPods Pro 2 have hearing aid capabilities, I was very interested. A consumer tech device that might help me in those limited circumstances when I need to wear a hearing aid, without the financial outlay.

It’s been possible to create an audiogram and use it with your AirPods (or other headphones) for a while, but there’s lots of excitement as Apple is bringing Hearing Health capabilities natively to the iPhone with iOS 18 and with AirPods Pro 2. But, if you’re in the UK, you might want to hold off…

Here’s the problem: AirPods Pro 2 do not yet have regulatory approval as hearing aids in the UK.

They do in many other countries, but not here. Not at the time of researching this post in late-November 2024. But there is a global website, and a global ad campaign. Apple even says in the notes for this ad that:

“The Hearing Test and Hearing Aid features are regulated health features that require approval and will be offered after authorization is received. Feature availability varies by region”

Unfortunately, I’ve seen people (including those with profound hearing loss) saying they will ask Santa for some AirPods Pro for Christmas, based on advertising this feature.

So, what can you do?

  1. Firstly, and I rarely give this advice to anyone, turn off automatic updates. Do not let your iPhone update to iOS 18.x. Manually apply updates for 17.x. Of course, that means you won’t get other iOS 18 goodness, but Apple Intelligence isn’t available in the UK yet either (like the Hearing Aid feature, it’s “coming soon”).
  2. Then, download the Mimi app, find a quiet space and carry out a hearing test. Follow these instructions to save the audiogram to Apple Health and set up the Headphone Accommodations for your AirPods. Basically, you can get some of what Apple will bring to the UK, but only with older operating systems that don’t have the Apple capabilities built in (and turned on for other regions).
  3. Finally, keep an eye on the Apple website. This is the page that has the details on regional availability for the Apple Hearing Health features.
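
If you’ve not seen one before, an audiogram is just a table of test frequencies against the quietest level you could hear at each one. A minimal sketch in Python, with purely illustrative numbers (not real measurements), shows the idea:

```python
# An audiogram maps each test frequency (Hz) to the quietest sound
# you could hear at that frequency, in decibels Hearing Level (dB HL).
# Lower numbers mean better hearing. These values are made up for
# illustration only - they are not real clinical data.
audiogram_left_ear = {
    250: 10,   # low frequencies: near-normal threshold
    500: 15,
    1000: 15,
    2000: 25,
    4000: 40,  # raised threshold in the consonant range of speech
    8000: 45,  # a typical age-related high-frequency pattern
}

# Features like Headphone Accommodations use this shape to boost only
# the frequencies with raised thresholds, not the whole signal.
worst = max(audiogram_left_ear, key=audiogram_left_ear.get)
print(f"Most amplification needed around {worst} Hz")
```

That selective, per-frequency boost is why an audiogram-driven adjustment beats simply turning the volume up.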

One more thing

The new Hearing Health features are for Apple AirPods Pro 2. I checked mine: they are listed on my receipt as “AirPods Pro (2nd generation)”. Is that the same thing? The short answer is “yes”, but it took me a while to get that information.

I had an infuriating online chat with Apple Support, who seemed incapable of understanding my question, despite me providing serial numbers and product codes. Thankfully, I also found an Apple support article, which gave me the answer (yes). Mine are model number A3048 which is now called “AirPods Pro 2 with MagSafe Charging Case (USB-C)”. Why can’t they just say “the marketing folks changed the name”?

Featured image by Miguel Angel Avila on Unsplash.

Microsoft Ignite 2024 on a page

You probably noticed, but Microsoft held its Ignite conference in Chicago last week. As is normal now, there’s a “Book of News” for all the major announcements and the keynotes are available for online review. But there’s an awful lot to sort through. Luckily, CNET created a 15-minute summary of Satya Nadella’s keynote:

Major announcements from Ignite 2024

Last year, I wrote about how it was clear that Microsoft is all about Artificial Intelligence (AI) and this year is no different. The rest of this post focuses on the main announcements with a little bit of analysis from yours truly on what the implications might be.

Investing in security, particularly around Purview
Data governance is of central importance in the age of AI. Microsoft has announced updates to prevent oversharing, risky use of AI, and misuse of protected materials. With one of the major concerns being accidental access to badly-secured information, this will be an important development, for those that make use of it.
Find out more: https://aka.ms/Ignite2024Security/

Zero Day Quest
A new hacking event with $4m in rewards. Bound to grab headlines!
Find out more: https://aka.ms/ZeroDayQuest

Copilot as the UI for AI
If there’s one thing to take away from Ignite it’s that Microsoft sees Copilot as the UI for AI (it becomes the organising layer for work and how it gets done).

1. Every employee will have a Copilot that knows them and their work – enhancing productivity and saving time.
2. There will be agents to automate business processes.
3. And the IT department has a control system to manage, secure, and measure the impact of Copilot.

Copilot Actions
Copilot Actions are intended to reduce the time spent on repetitive everyday tasks – they were described as “Outlook Rules for the age of AI” (but for the entire Microsoft 365 ecosystem). I’m sceptical on these but willing to be convinced. Let’s see how well they work in practice.
Find out more: https://aka.ms/CopilotActions

Copilot Agents
If 2023-4 were about generative AI, “agentic” computing is the term for 2025.

There will be agents within the context of a team – teammates scoped to specific roles – e.g. a facilitator to keep meeting focus in Teams and manage follow-up/action items; a Project Management Agent in Planner – to create a plan and oversee task assignments/content creation; self-service agents to provide information – augmenting HR and IT departments to answer questions and complete tasks; and a SharePoint Agent per site – providing instant access to real-time information.

Organisations can create their own agents using Copilot Studio – and the aim is that it should be as easy to create an agent as it is to create a document.
Find out more: https://aka.ms/AgentsInM365

Copilot Analytics
Answering criticism about the cost of licensing Copilot, Microsoft is providing analytics to correlate usage to a business metric. Organisations will be able to tune their Copilot usage to business KPIs and show how Copilot usage is translating into business outcomes.
Find out more: https://aka.ms/CopilotAnalytics

Mobile Application Management on Windows 365
Microsoft is clearly keen to push its “cloud PC” concept – Windows 365 – with new applications so that users can access a secure computing environment from iOS and Android devices. Having spent years working to bring clients away from expensive thin client infrastructure and back to properly managed “thick clients”, I’m not convinced about the “Cloud PC”, but maybe I’m just an old man shouting at the clouds…
Find out more: https://aka.ms/WindowsAppAndroid

Windows 365 Link
Windows 365 Link is a simple, secure, purpose-built access device (aka a thin PC). It’s admin-less and password-less, with security configurations enabled by default that cannot be turned off. The aim is that users can connect directly to their cloud PC with no data left locally (available from April 2025). If you’re going to invest in this approach, then it could be a useful device – but it’s not a Microsoft version of a Mac Mini – it’s all about the cloud.
Find out more: https://aka.ms/Windows365Link

Windows Resiliency Initiative
Does anyone remember “Trustworthy Computing”? Well, the Windows Resiliency Initiative is the latest attempt to make Windows more secure and reliable. It includes new features like Windows Hotpatch to apply critical updates without a restart across an entire IT estate.
Find out more: https://aka.ms/WinWithSecurity

Azure Local
A rebranding and expansion of Azure Stack to bring Azure Arc to the edge. Organisations can run mission-critical workloads in distributed locations.
Find out more: https://aka.ms/AzureLocal

Azure Integrated HSM
Microsoft’s first in-house security chip hardens key management without impacting performance. This will be part of every new server deployed on Azure starting next year.
Find out more: https://aka.ms/AzureIntegratedHSM

Azure Boost
Microsoft’s first in-house data processing unit (DPU) is designed to accelerate data-centric workloads. It can run cloud storage workloads with 3x less power and 4x the performance.
Find out more: https://aka.ms/AzureBoostDPU

Preview of NVIDIA Blackwell AI infrastructure on Azure
By this point, even I’m yawning, but this is a fantastically fast computing environment for optimised AI training workloads. It’s not really something that most of us will use.
Find out more: https://aka.ms/NDGB200v6

Azure HBv5
Co-engineered with AMD, this was described as a new standard for high performance computing and cited as being up to 8 times faster than any other cloud VM.
Find out more: https://aka.ms/AzureHBv5

Fabric
SQL Server is coming natively to Fabric in the form of Microsoft Fabric Databases. The aim here is to simplify operational databases as Fabric already did for analytical requirements. It provides an enterprise data platform that serves all use cases, making use of open-source formats in the Fabric OneLake data lake. I have to admit, it does sound very interesting, but there will undoubtedly be some nuances that I’ll leave to my data-focused colleagues.
Find out more: https://aka.ms/Fabric

Azure AI Foundry
Described as a “first class application server for the AI age” – unifying all models, tooling, safety and monitoring into a single experience, integrated with development tools as a standalone SDK and a portal. There are 1,800 models in the catalogue for model customisation and experimentation.
Find out more: https://aka.ms/MaaSExperimentation and https://aka.ms/CustomizationCollaborations

Azure AI Agent Service
Build, deploy and scale AI apps to automate business processes. Compared with Copilot Studio’s graphical approach, this provides a code-first approach for developers to create agents, grounded in data, wherever it is.
Find out more: https://ai.azure.com/

Other AI announcements
There will be AI reports and other management capabilities in Foundry, including evaluation of models.

Safety is important – with tools to build secure AI, including Prompt Shields to detect/block manipulation of outputs, and risk/safety evaluations for image content.

Quantum Computing
This will be the buzzword that replaces AI in the coming years. Quantum is undoubtedly significant but it’s still highly experimental. Nevertheless, Microsoft is making progress in the quantum arms race, with the “world’s most powerful quantum computer” with 24 logical qubits, double the previous record.
Find out more: https://aka.ms/AQIgniteBlog

Featured image: screenshots from the Microsoft Ignite keynote stream, under fair use for copyright purposes.

Putting AI to work: making content more accessible

I’m really struggling with AI right now. On the one hand, it’s infuriating when it doesn’t help with the task or activity I’ve prompted it to perform, instead hallucinating total garbage. I’m also concerned for humanity, as the blind faith that some have in AI seems to me to be pretty dangerous. But, on another level, if we strip away the overenthusiastic promises, and look at the little things that are now embedded in our productivity tools, it can be quite impressive.

Multiple takes

Today, I recorded a short video for an internal SharePoint site that supports some work I’m doing around Node4’s 2030 vision and strategy. It took about 8 takes and a lot of wrangling with technology for a simple talking head in front of a background. But I got there and, after an hour or so, I had the minute or so of content I wanted. I didn’t use professional video tools – just the things I have in Microsoft 365 and Windows 11 – I recorded myself in Teams against a custom background I’d created in PowerPoint, used Clipchamp to trim the video, and uploaded to Stream.

Copilot has entered the room

Then the cool stuff started. Microsoft Stream generated captions from the video. They weren’t quite right – but the edits were minimal and it let me make them. That means that I now have a working transcript and closed captions – assistive technologies to aid viewers.

But the impressive part was when I asked Copilot to summarise the content. This is what it came up with.

Key Topics:

  • Introduction to Vision2030: Mark Wilson, one of the Technology and Innovation Directors in Node4’s Office of the CTO, introduced Vision2030, a forward-looking strategy that outlines key trends and technologies expected to define the future of mid-market businesses. 0:00
  • Focus of Vision2030: Mark Wilson explained that Vision2030 focuses on integrating emerging innovations across all technology verticals in which Node4 operates, aiming to empower clients to anticipate and adapt to future technology trends, navigate complexity, and increase competitiveness and resilience. 0:21
  • Creating Lasting Value: Mark Wilson emphasized that Vision2030 is not just about adopting new technologies but also about helping clients create lasting value. 0:39
  • Future Content and Sales Plays: Mark Wilson mentioned that over the coming weeks and months, Node4 will create more content that builds on the core theme of enabling productivity and directly links to their sales plays, ensuring a clear connection from Vision through go-to-market to delivery. 0:45
  • Commitment to Mid-Market Businesses: Mark Wilson reiterated Node4’s commitment to guiding mid-market businesses through the transformation, ensuring they are well-positioned to succeed in the future digital economy. 0:57

Spot on. Sure, it has an edited transcription to work from, but now my colleagues don’t even need to watch the video. (Which raises the question of why I recorded it in the first place – to which the answer is choice.)

Changing the medium

So now, let’s take this a bit further… out of Copilot and Stream and into the real implications of this technology. Starting with a couple of observations:

  • When I’m driving, Apple CarPlay reads my messages to me. Or, I ask Siri to send a message, or to take a note.
  • When I’m in a group messaging situation, some people seem to have a propensity to create short form audio.

I used to think that WhatsApp voice messages were the spawn of the devil. Why should I have to listen to someone drone on for 30 seconds when I could read a text message much more quickly? Is it because they couldn’t be bothered to type? Then someone suggested it might be because they struggle with writing. That puts a different lens on things.

Create and consume according to our individual preferences

Now, with technologies like this we can create content in audio/video or written form – and that same content can be consumed in audio/video or written form. We can each use our preferred methods to create a message, and the recipient can use their preferred method to consume it.

This is the stuff that really makes a difference – the little things that make someone’s life easier – all adding up to a bigger boost in our individual productivity, or just getting things done.

Featured image – author’s own screenshot

The enshittification of technical support information

I apologise to anyone inadvertently offended by the title of this post, but if the BBC’s Technology Editor can use the word “enshittification” as an example of retrograde progress, then so will I. It was also the American Dialect Society’s “Digital Word of the Year” in 2023, so that seems fair.

You see, I’m writing this on my phone, as I wait for my PC to be rebuilt onto a new corporate setup (and as the US Election results come in, which have not lightened my mood).

One of the issues I had whilst preparing to wipe all my data was a OneNote export process which said:

“OneNote cannot pack these sections because some sections have not yet been fully synced and could not be downloaded. These pages will not be saved. Continue anyway? Yes | No”

Great. It’s probably some embedded media that no longer exists… but that message is not very helpful for identifying a problem. There are hundreds of pages of notes across about a hundred sections. Which pages won’t be saved?

Still, it could be worse. There are plenty of cutesy messages these days that say something along the lines of:

“Oops, something went wrong”.

Nice. What went wrong? Can you give me a clue please so I might be able to find a resolution to the issue?

A brief history of technical support resources

When I started working in IT, we didn’t have the World Wide Web. Admittedly, we didn’t have as much information at our fingertips either but IT support often required a subscription to a database of knowledge articles. These were created by the software vendor and were reliable. At ICL we had something called PC-PARIS (PARIS was the support knowledge base). Microsoft had TechNet subscriptions with support information and updated software. Both of these were distributed to subscribers in CD-ROM format. This was a time when a typical Internet connection ran at around 33Kbps.

Errors had codes. We could look up what they meant and identify the cause. Not now. What does “[your application] ran into a problem and had to stop” mean?
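
That discipline survives in places. POSIX error numbers, for instance, can still be looked up by code. A minimal sketch in Python, using only the standard library:

```python
import errno
import os

# Classic errors have a number, a symbolic name, and a standard
# human-readable description - all three can be looked up directly.
code = errno.ENOENT                   # errno 2: the "file not found" error
name = errno.errorcode[code]          # 'ENOENT'
message = os.strerror(code)           # 'No such file or directory'
print(f"{code} ({name}): {message}")  # 2 (ENOENT): No such file or directory
```

A support engineer given “error 2” can get from number to cause in seconds; a support engineer given “something went wrong” has nothing to look up.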

As the web took off, so did blogs. People like me posted answers to problems we found. Then big websites started SEOing their way to the top and user-submitted content was the way forwards. Some of that content was good. Some of it less so. And we’ve all seen that geek on an online forum who doesn’t answer the question but instead provides a terse comment as to why the thing being asked is totally wrong because why would anyone do it that way anyway?

AI FTW?

Now we have AI answering the questions. Google or Bing will insert an AI-generated response at the top of the search results and sometimes it’s helpful. But it’s an LLM – it literally makes stuff up based on probability. Some of that stuff is just wrong.

I may just be a grumpy old man, but I really just want detailed problem reports, accurate analysis, and reliable search results. Because not all progress is good progress.

Featured image from Pixabay.

Think about the end-user experience

I recently marked 30 years working full-time in the IT industry. That’s a long time. When I started, we didn’t all have laptops (I shared a PC in the office), we had phones on desks, administrators to help with our… administration, and work was a place where we went as well as a thing that we did.

Over time, I’ve seen a lot of change: new systems, processes, ways of working. But right now is the biggest of them all. For the last nine-and-a-half years, all of my work has been stored in one Office 365 tenant. Now it’s being migrated to another, as part of some cleanup from a merger/acquisition that took place a while ago.

I’m just a normal end-user

I’m just a user. Albeit one with a technical background. And maybe that’s why I’m concerned. During the Covid-19 pandemic, I was issued a new laptop and everything was rebuilt using self-service. It went very well, but this is different. There is no going back. Once my laptop has been wiped and rebuilt into the new organisation, there is no “old machine” to go back to if I incorrectly synced my data.

Sure, I’ve done this before – but only when I’ve left one employer to go somewhere else. Never when I’ve been trying to continue work in the same place.

People change management

To be clear, the migration team has been great. This is not your typical internal IT project – this is being run properly. There are end-user communications, assigned tasks to complete to help make sure everything goes smoothly, FAQs, migration guides. ADKAR is in full flow. It all looks like it should go swimmingly. But 30 years of working in tech tells me to expect the unexpected (plus a tendency to be over-anxious and to catastrophise). And modern security practices mean that, if I were to make a copy of all my data on an external drive, “just in case”, I’d set all sorts of alarm bells ringing in the SOC.

I’ll have to roll with it.

The schedule

So, the technical issues are resolved – or at least put to one side. Next is the migration window. It runs for 2 weeks. But the second of those weeks is the school half term holidays in a sizeable chunk of the UK. I, for one, will be away from work. I also have an assignment to complete by the end of the month, all the usual pre-holiday preparations squaring work away, and this is whilst I have two days taking part in an AI hackathon event and two days when I’m on call for questions in relation to our Microsoft Azure Expert MSP audit. “I’m sorry, I can’t find that information right now because I’m rebuilding my PC and migrating between Microsoft 365 tenants” isn’t going to go down well.

In short, there is no good time for my migration. And this is what it’s like for “real” end-users in our clients’ organisations. When they don’t want to clear down their email or delete old data it’s (generally) not because they are awkward (well, not always). They have a job to do, and we (IT) are doing something with one of the tools that they use to do that job. There’s uncertainty about how things will work after the migration and they need to allocate time. Time that they may not have.

Walking in someone else’s shoes

All too often, us IT folks just say “it’ll be fine”, without understanding the uncertainty that we impose on our customers – the users of the systems that we manage. Maybe it’s good for me to stand in their shoes, to be a typical business end-user, to understand what it’s like to be on the end of an IT project. Maybe we should all do it more often, and then we can run better projects.

Featured image by Kosta from Pixabay.

Monthly retrospective: August 2024

I know it’s nearly the end of September now, but here’s my look back at what I got up to in August…

The two most significant events were:

Right. Rant over. What else did I get up to in August?

  • Well, for starters, I actually wrote a blog post (not just the one about my holidays). Inspired by one of the challenges my team faces, I wrote about choosing between chats and channels in Microsoft Teams (and the types of channels that should be used). Spoiler: if you want to collaborate, or to communicate with more than a few users, choose a channel. Though, as one colleague pointed out, if the decision needs a flow chart, it’s probably too complicated…
  • I needed to visit the Apple Store in Milton Keynes and found that it’s moved, expanded, and has a huge new screen across the entire back wall. My pictures don’t do it justice but it’s a big improvement on the old store…
  • I found myself commenting on a former colleague’s LinkedIn post about Klarna replacing human employees with AI. There’s enough in there for a whole blog post… but I think it’s worth a look at Marcus’ post and the various replies…
  • I also weighed in on my colleague Glenn Akester’s post about the Crowdstrike outage (not a “Microsoft outage”!) looking at what happened, and critically, some of the lessons we can learn to reduce the impact of similar problems in future.
  • I also got involved in a discussion about Gartner Hype Cycles, after it seemed the whole world picked up on one article by the Economist that seemed (prematurely, in my opinion), to be suggesting that AI is through the hype now (and that not all technologies go through the hype cycle).
  • Following on from the AI post, we move to the underlying data:
  • I’ve had a Bookings page on Microsoft 365 for a while but it doesn’t get much use. That was until I added it to my LinkedIn profile and posted about Node4’s partnership with Elastio for ransomware protection. The first contact came soon after. Admittedly, it wasn’t a customer but it was good to connect, understand someone else’s proposition and bear it in mind for the future!
  • A colleague flagged a CIO post by Brian Solis about “the end of digital transformation and the rise of business model innovation”. He was asking if the digital transformation term is tired and if AI will make business leaders rethink their business models to make good use of it. Again, I think, given time, I could write a whole post on this topic but here’s the top line:
    • It’s an interesting article but predicated on the author’s view that much digital transformation missed the transformation part and digitised existing processes. I think that’s the key there – abuse of a term for marketing purposes… 
    • Many people skipped the business transformation part (what we used to call business process re-engineering). The author seems to be saying that we now need to do that, but using AI… and dreaming up a new term “business model innovation”.
    • I don’t think we need new terms to confuse the issue. What we need is to double down on Digital Transformation. AI is part of the toolset, but it’s also the latest shiny thing. I still think this post that I wrote five years ago has legs. 
  • On that note, I’ve started to write a book… on digital transformation (to be co-authored with my colleague Bjoern Hirtenjohann). More on that as it starts to take shape.
  • Diving into tech for a moment, this post explains why I’ve been receiving emails from Microsoft about a deleted Azure subscription. They were concerning but everything seemed to work, so I put them to the back of my mind. It turns out that it’s just really bad communication and, retiring outdated “Access to Azure Active Directory” subscriptions is removing a legacy method for integrating Azure AD (now Entra ID) with Office 365 (now Microsoft 365). There’s more on the Access to Azure Active Directory subscriptions in this post by Jason Fritts.
  • Before I wrap up this month’s retrospective, here’s a couple more articles that piqued my interest:
    • Sonos’ CEO was forced to admit that S2 isn’t coming back, because the technical architecture has moved on and it can no longer work. Time will tell whether this becomes a really damaging point in the company’s history. They’ve ridden out controversy in the past (over technical obsolescence) but it really is important to listen to customer feedback on your app. Personally, I find it unusable and mostly rely on Spotify’s integration with my Sonos speakers…
    • High-end racing bikes are now vulnerable to hacking. Yep. You read that correctly. Wireless gears can be hacked… potentially to impede other riders’ progress.

Watching and listening

It seems I haven’t been posting any photos recently (not since my holiday), so in place of the normal section on photography, a couple of things I’ve been enjoying recently:

  • Watching series 1 of Sherwood on BBC iPlayer. We haven’t got further than episode 1 of the second series yet (and a third series has been commissioned), but I found series 1 fascinating from the perspective of someone who was only a child at the time of The Miners’ Strike but has seen the long-term implications of the UK’s move from manufacturing to a services economy.
  • Now that you can only pause an Audible subscription once a year, I’m trying to find things to spend my surplus credits on (and then cancel my subscription). Non-fiction can be a bit heavy sometimes (and I listen to a lot of podcasts) so I tried some fiction… KL Slater’s “Message Deleted” was quickly consumed via my AirPods as I went about my weekend activities!

That’s all for now – watch out for September’s retrospective in a week or two!

Featured image: author’s own

The rise of the local cloud

The discount retailer Lidl recently found its way into the news as its parent company began offering cloud services to outside organisations. That got us thinking about Node4’s UK-based cloud services, and how, sometimes, a local cloud is exactly the solution that’s required to meet clients’ needs.

The rise of self-service cloud services

It’s well-known that Amazon Web Services (AWS) began when Amazon realised the value of its own IT systems and the investment it had made in shared services. Amazon had created a shared IT platform so its engineering organisations, which were previously spending a high proportion of their time on IT infrastructure problems, could instead focus on customer-facing innovation. As the shared services provided value for amazon.com, Amazon saw the potential in providing web services to other organisations and AWS was born. Through AWS, Amazon began providing external customers access to an on-demand cloud computing platform on a pay-as-you-go basis.

Since then, we’ve seen Microsoft (Azure) and Google Cloud Platform (GCP) launch and compete with AWS. In the US, Europe and elsewhere, these three “hyperscale platforms” dominate cloud computing services.

Unexpected item in the bagging area

With the rise of the hyperscalers, the major systems integrators suddenly had new competition. Companies didn’t have to invest in their own infrastructure – they could consume services according to demand. And if they weren’t running their own infrastructure, then they didn’t need someone to host and manage it for them.

Some of the major SIs launched their own clouds, but they have never had the scale or the traction of the hyperscalers. So, the SIs partnered with one or more of the hyperscalers, albeit reluctantly, to respond to customer demand.

Germany is particularly well-known for its stringent privacy and data protection laws but it’s not unique. In response to this, the hyperscalers began placing datacentres in specific regions to address data sovereignty concerns. Here in the UK, Microsoft opened the UK South and UK West regions in Azure. AWS and GCP made similar investments. And let’s not forget that markets like China are so large that they have their own hyperscale providers!

In the middle at Lidl

Recently, the German discount retailer Lidl made the news as its parent company Schwarz Group started to offer cloud computing and cybersecurity services to corporate customers.

In a remarkable parallel with AWS’ history, the Irish Times reported that Schwarz started with a system built for internal use in 2021. Schwarz Digits, an IT unit that initially served only the group’s internal needs, branched out to serve external clients in 2022. Now there’s an independent company within the Schwarz Group, StackIT, existing alongside XM Cyber and the group’s minority investment in a German AI startup called Aleph Alpha.

Marketed as “the German business cloud”, StackIT’s main selling point is that all client data is processed and stored exclusively in Germany and Austria. It offers a suite of cloud services, focused on the German retail (unsurprisingly), health and public sectors.

The case for local cloud

So why am I writing on the Node4 blog about a German retailer’s entry into cloud services? Surely they are a competitor? Well, yes and no. StackIT is all about the German market – it’s their USP. Node4 is expert in the UK mid-market. We don’t compete, but there are some parallels. Whilst Node4 hasn’t grown out of a retailer (we were born into datacentre hosting and connectivity), we do have a local cloud for UK-based clients.

So why might you want a local cloud?

  • The most obvious reason is data sovereignty. Depending on the service that your company provides to its customers, you might be required to keep data in-country. This may be for regulatory purposes or simply to meet a contractual requirement.
  • Whilst public cloud can be made secure, some organisations may have heightened security requirements that necessitate keeping data within national borders to mitigate risks associated with transferring data outside the local jurisdiction.
  • If performance is a concern, running services locally can reduce latency and improve performance, especially for applications that require real-time processing or have high bandwidth needs. And sometimes it’s just about keeping application layers close to one another.
  • From a corporate social responsibility perspective, it’s good to support the local economy. And other economic and political factors may mean that avoiding the potential geopolitical risks associated with relying on foreign cloud providers is prudent. If the war in Ukraine has made us question our energy and food sources, for some organisations the source of their IT services is critical too.
  • And sometimes, particularly if your application workload is to be “lifted and shifted”, with limited or no transformation, then a local cloud can be more cost-effective than the hyperscale platforms.

Node4’s Virtual Datacentre (VDC) is designed to address the needs of clients who need a local, UK-based cloud. We can offer a geo-resilient UK-based self-service cloud platform, with full automation, predictable consumption billing and 24×7 support. As a bonus, if you’re currently struggling with Broadcom price increases, it’s simple to migrate into our cloud too!

The Pragmatic Cloud

At Node4, we talk about the “Pragmatic Cloud”. We are open and transparent about the reasons for the recommendations that we make to our clients about their cloud investments. We can do this because Node4 is in a fairly unusual position among mid-market managed services providers. We partner with Microsoft to provide hyperscale capabilities in the public cloud, but we also have our own hosting and connectivity options, including hybrid and edge capabilities. We can offer something for everyone.

So, whether it’s hosting/co-location services for the servers that you run (whether as a private cloud or simply using traditional management methods), self-service infrastructure as a service (IaaS) on our VDC platform, or expert advice on adopting Microsoft Azure to modernise your applications, we can help.

Feel free to get in touch using the contact details below – and let’s have a conversation.

This post was originally published on the Node4 blog.

Featured image by Michael Reichelt from Pixabay.

Self-scan stress in Sainsbury’s. And why don’t UK supermarkets use electronic shelf labels?

Almost every Thursday morning, before I start work, I visit the town market to buy food. After that, I do the weekly supermarket shop. Most people can understand me shopping locally and supporting the market. The question I’m sometimes asked is why I don’t do the supermarket shop online? It’s partly because I’ve learned that the store is well-stocked on a Thursday morning and I can do the weekly shop in 20-30 minutes. There’s also an element of dissatisfaction with previous online supermarket shopping experiences.

I mostly shop at Sainsbury’s. There are some items that we get from Lidl in the next town (though there’s an Aldi locally now, so that may change) and I have to go to Tesco or Waitrose for some other items because the local Sainsbury’s is too small. I also use Costco. Basically, I know what I can get where, at what price/quality.

“SmartShop”

In Sainsbury’s, I use the SmartShop self-scanning technology. According to the Sainsbury’s website:

“SmartShop is the new way to shop at Sainsbury’s. Just scan, bag and go, it couldn’t be easier!”

I started to use this a few years ago, when Sainsbury’s ran a campaign to encourage its use. Then, just a few weeks ago, some tills were removed in our local store to enlarge the self-checkout area. I’ve also switched from using the app on my phone to an in-store handset as I found the barcode scanning to be more efficient.

Random checks

A few months ago, almost every shop was being selected for a “random” check. Sainsbury’s explains that:

“Sometimes customers can double scan an item by mistake, or an item might end up in your trolley that hasn’t been scanned properly. So from time to time we might ask you to have your shopping re-scanned by one of our colleagues.

These rescans are random and they’ll only happen at checkout.”

These were annoying (as it was a regular occurrence), but understandable, until one time the entire shop had to be re-scanned. One of the advantages of self-scanning is that you can carefully place your items in bags so they are not damaged. I watched as my items were re-scanned and roughly repacked for me. I took a deep breath and walked away.

I understand why stores do this. Shoplifting is a huge issue nationally, though more of a problem in some stores than others. But this policy on self-scans is effectively saying “we think you might be stealing from us”. There’s no apology when no theft is found.

There is an argument that self-scan is also a cost saving measure for supermarkets. That needs to be weighed up against the shrinkage and the customer experience. Some stores simply won’t install self-scan in certain areas, because of the risk.

The “random” checks stopped for a while but today, I was selected again. It’s fair to say that I did not respond well. In fact, I was enraged. 12 September is not a great day for me (it would have been my late Father’s birthday) but I honestly don’t know if that was a factor in my anger when a full re-scan was required. I insisted on speaking to a manager – I don’t believe the scans are random and I’m sure there’s some pattern recognition on my shopping habits via my Nectar card. The last couple of weeks’ shopping was small (with one teenager away from home). This week I spent more, including alcohol, and it felt like I was being singled out.

Pricing errors

The irony is that, after the store re-scanned all my shopping, I found mistakes in their pricing! Far from me adding to Sainsbury’s shrinkage bill, they were not passing on advertised savings to customers.

Readers will probably be familiar with the concept of discounts for loyalty card holders. Tesco has Clubcard, Sainsbury’s has Nectar, other retailers have their own schemes too. These are controversial for some, but I’m comfortable accepting that I trade my data for cash. After all, I give data about my habits away all the time on the Internet, using “free” services (if you’re not paying for the product, you are the product).

I found that Sainsbury’s had not passed on a Nectar discount on one of my items. Furthermore, because the ePOS system was not configured with the correct price, it would presumably have been overcharging every customer who bought that item and used their Nectar card.

Then, later in the day, I spotted that some of the personalised Nectar offers from a SmartShop scan were not passed to me when I’d had the full shop scanned through a normal till. Those offers were actually a reason for me to buy multiple items, rather than just one. They had increased the volume of the sale, but I’d ended up paying the full price.

Both of these mistakes were corrected by staff but they shouldn’t have happened.

In summary, when Sainsbury’s systems suspected I might have been shoplifting, it actually turned out that they owed me money.

Teenage kicks

I started my working life in retail. As a teenager, I worked for Bejam (now Iceland), and then a few years at Safeway. It was mostly stacking shelves but also warehouse work and checkouts when the store was busy. I saw the change from manual pricing to ePOS with barcodes, and I worked on a number of store openings and refits. After I decided to go to Polytechnic instead of joining Safeway’s management programme, I came back in my holidays and worked night shifts. That period of my life taught me quite a lot about supermarket retail and, fundamentally, not much has changed since. Of course, there have been some developments – like just-in-time deliveries replacing in-store warehouse space and the creation of digital services such as online shopping and self-scan.

One thing that does seem to have changed though is the checks on price labels. At Safeway in the late ’80s and early ’90s, it was a full-time job to check every price in store and manage changes/promotions. If the shelf edge labels didn’t match the computer then the customers were charged the wrong price. That was taken seriously back then.

This attention to detail seems to be gone. I imagine it was a cost-cutting efficiency (as is self-service). Nowadays, I regularly spot pricing errors in Sainsbury’s and it usually leads to store staff removing errant shelf edge tags. And Sainsbury’s are not alone – the local Co-op and OneStop stores seem to have similar issues.

Electronic shelf labelling

So, why don’t UK supermarkets use electronic shelf labels (ESLs), like those seen in continental Europe? I did some basic analysis and it seems that early trials were inconclusive, with concerns around cost, technology and operational challenges. So, just like any IT system really.

On the other hand, the benefits include efficiency, dynamic pricing, customer information and sustainability. The Grocer reported in 2021 that ESLs were making a comeback but I’ve not seen much evidence to suggest it’s happening quickly.

So what might ESLs cost for a store like the one where I shop, which was only built 5 years ago?

My local Sainsbury’s store cost £3.3m to build and is 1610 square metres in size. A few prompts to an AI assistant told me that:

  • A store this size can be expected to stock 20-25,000 product lines.
  • The cost of ESLs can vary depending on the brand and features but an investment for 10,000 lines would be around £50-80,000.

So, about £125-200,000 for a store this size (between 3.7 and 6% of the £3.3m budget) to have accurate pricing in store.
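That back-of-the-envelope calculation can be sketched in a few lines of code. This is a rough model using only the figures quoted above – the function name and the assumption that costs scale linearly with the number of product lines are mine:

```python
# Rough cost model for electronic shelf labels (ESLs) in a mid-size store,
# using the post's estimates: £50-80,000 per 10,000 product lines.

def esl_cost_estimate(product_lines, low_per_10k=50_000, high_per_10k=80_000):
    """Scale the quoted per-10,000-line cost range to a given line count."""
    factor = product_lines / 10_000
    return factor * low_per_10k, factor * high_per_10k

low, high = esl_cost_estimate(25_000)  # upper end of the 20-25,000 line estimate
build_cost = 3_300_000                 # quoted store build cost

print(f"ESL outlay: £{low:,.0f} to £{high:,.0f}")
print(f"Share of build cost: {low / build_cost:.1%} to {high / build_cost:.1%}")
```

For 25,000 lines this gives the £125-200,000 range quoted above, at roughly 3.8-6.1% of the £3.3m build cost.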

No business case?

The thing is that, in addition to my teenage shelf-stacking, I have some IT experience of working in retail. When I was at Polo Ralph Lauren in the early 2000s it was a lot easier to justify application spend than infrastructure. If IT spend doesn’t add to the bottom line, then the business case is unlikely to be approved. And if stores make more money from advertising offers that are not applied, why would they invest in a system to display accurate pricing?

Call me a cynic, but could that be the real reason why UK stores haven’t invested in electronic ticketing?

Featured image: author’s own.

How to communicate in Teams: channels, or chats; standard, private or shared?

Love it or loathe it, over the last few years Microsoft Teams has emerged as the dominant collaboration tool for companies, large and small. But, let’s face it, how much has the typical organisation invested into getting the most out of Teams? The same can be said for most of Microsoft 365 – we buy the licences and expect people to just get on and use it.

I could, and probably should, write a post about technology adoption, but this one is about Teams. Specifically, it’s about when to create a chat and when to use a channel – and what sorts of channels to use when. And if you want a post about technology adoption, here’s one I wrote several years ago.

You see, a few weeks ago, I was fortunate enough to attend the Commsverse conference. And one of the talks I found insightful was Robert Mulsow (in/robert-mulsow)’s “How to select the right communication in Teams: Feedback from the field”.

What’s the problem?

Teams has two forms of text-based chat:

  1. Chats, created from a conversation. Informal. Ad-hoc. No real management.
  2. Channels, created as a shared space within a Team. Designed to scale, supported by SharePoint.

My (organisational) team has both! And no-one knows where to post:

  • We have a “chat”, but that’s been problematic since one of the team (who probably created the chat) left the organisation. And there’s not much that an admin can do to support a chat – it’s all down to users.
  • We also have a Team, with some channels. But some people in the team prefer to use the chat (something to do with notifications). Oh yes, and we currently work across four Microsoft 365 tenants. That will change, but for now it’s not easy (that’s mergers and acquisitions for you…)

Which to use when – chat or channel?

It’s actually quite simple. Use chats for 1:1 or 1:few conversations. And then leave them behind, until you chat with that person or those people again. Chats are not intended to scale. They are for ad-hoc comms with small groups of people. They are absolutely not intended for groups of many people.

When you want to create a collaboration space, create a team. That team will have a “General” channel, and you can create more to suit the collaboration requirements of the team. 

Simple. All I need to do now is convince my colleagues to stop using the team chat. Hmm. Maybe not so simple.

But there are many different types of channel!

Channels inside teams can be standard, private or shared.

To understand these, it helps to understand the relationship(s) between Entra ID (ex-Azure Active Directory) and Teams. And that comes back to the concept of a tenant. I wrote about tenants a few years ago.

“[…] Each organisation has a Microsoft Online Services tenant […] each tenant has a single Azure Active Directory (Azure AD). There is a 1:1 relationship between the Azure AD and the tenant. […]”

Microsoft Online Services: tenants, subscriptions and domain names, markwilson.it, January 2020

Think of your tenant as an office block. When business partners visit from another office block, they come through the entrance area. If they have the wrong badge, they won’t get in. They are signed in as a guest and escorted to a meeting space. That’s how it works in Teams when you are invited to a meeting.

Now, let’s go a bit further with this analogy. Regular guests may get their own badge, and they can go directly to the meeting without being escorted. They are not just authenticated but are also authorised for access. Channels are the Teams equivalent of the rooms that guests can go in and out of.

Once upon a time, to switch tenants, we had to exit Teams and go back in with some different credentials. In our office block analogy, that’s leaving the building and going into another. But now, it’s more like a bridge between office blocks. You can access another tenant with your own “badge” – you don’t need guest access.

This is how shared channels work. They require a trust to be established in Microsoft Entra ID and, in technical terms, they use external identities and cross-tenant access settings. This is more difficult to control as there are no specific badges to revoke – it relies on a trusted partner organisation to provide access. The key word here is “trust”.

In our analogy, shared channels really just show the corridor and the room, everything else in the office block is hidden from view (behind locked doors).

Confused?

All of this flexibility is potentially confusing, so let’s bring it back a level – think about what you want to share. It’s not as clear-cut with external access and shared channels as it was when you had to leave and re-enter as a guest.

It gets even more complicated with sensitivity labels – they are inherited. You can’t drop down to a lower level of security later. Let’s just leave that to one side for a moment.

Which to use when?

Which to use depends on whether you are collaborating or communicating.

  • How many people are involved? You can add people to a chat, but they don’t inherit the permissions on files. When you remove members, they can still access existing information but can’t add more.
  • Chats can get out of hand quickly so in most cases you’ll want to consider a team.
  • If you don’t already have a team, think about future requirements and scope. You might just create a channel in an existing team!
  • If there is already a team in place, check whether there is a channel for the topic and post there, or create one as applicable. The kind of channel will be driven by its intended use – and you must have policies in place to allow it.

Here’s a flow chart to help the decision!

As you can see, it’s too complicated really. People just want to send a message. But, if the necessary Teams and channels are in place, with guidance, then it should be straightforward.
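The guidance above can be sketched as a simple decision function. This is only an illustration of the post’s rules of thumb – the function name, parameters and the five-person threshold for “a few people” are my own assumptions, not anything built into Teams:

```python
# A sketch of the chat-vs-channel decision described in this post.
# The thresholds and rules are illustrative, not official Microsoft guidance.

def where_to_post(audience_size, ad_hoc, crosses_tenants=False, confidential=False):
    """Return a suggested Teams communication type for a message."""
    if ad_hoc and audience_size <= 5:
        return "chat"            # 1:1 or 1:few, informal, no real management
    if crosses_tenants:
        return "shared channel"  # needs cross-tenant trust in Entra ID
    if confidential:
        return "private channel"  # restrict to a subset of the team
    return "standard channel"    # the default collaboration space
```

For example, a quick question to one colleague (`where_to_post(2, ad_hoc=True)`) suggests a chat, while an ongoing topic for a twelve-person team suggests a standard channel.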

I’ve added a little more of Robert’s guidance to consider in the table below:

When creating a…        Remember to…
Group chat              Rename the chat so it makes sense in future; pin the chat
Team                    Use a clear name; provide a description; create tags
Channel (any type)      Use a clear name; provide a description
Post in a channel       Use the subject line; use tags (if applicable)

And, in addition to all of that, issue some clear collaboration guidelines for people to follow.

Conclusion

Chat or channel is simple – if you want to collaborate, or to communicate with more than a few users, choose a channel. The type of channel is less straightforward – but in most cases you’ll want a standard channel in your Team. Only when you start sharing information across organisational boundaries (between tenants) will you need to think about guest access, shared channels and private channels.

Featured image: Microsoft Image Gallery

Unveiling the real treasure: moving beyond ERP to unleash (and protect) the power of data

In our rapidly changing world, sophisticated ERP solutions are becoming more crucial than ever for organisations looking to streamline operations and strive for operational excellence. But whilst the ERP plays an instrumental role in that, it’s vitally important to recognise that it’s not just the ERP system that holds value – the real treasure is the data within these systems.

For years now, IT leaders have been talking about a “data-centric world” and how to “democratise” access to the hidden treasure that an organisation’s data contains. And it’s still the case that the monumental value and potential of business data should not be understated. But since 2023, we’ve seen a shift. No longer is it just about the data – the rise of generative Artificial Intelligence (AI) tools has moved the conversation on to how AI can be used to help us better understand and use the data that we hold.

When I’ve spoken to our clients, I’ve seen their thirst for AI-driven insights. AI, coupled with robust data, holds the potential to shift paradigms, drive smart decisions and boost operational efficiency. However, before we can unlock this vast potential, the data needs to be structured and labelled correctly.

The value of structured data and correct labelling

When correctly structured and labelled, data empowers organisations to accurately understand their operations, pinpoint potential bottlenecks and unleash the value nestled in their ERP solutions.

In refining this data, we allow AI models to consume it effectively, driving insights that empower organisations to predict trends, anticipate customer needs, enhance supply chain processes, reduce overhead and foster innovation. Essentially, we convert the commonly inaccessible gold beneath the ERP systems into a well-drilled, fully accessible mine that yields precious insights — the real treasure.

But there’s another source of data that exists outside the core business systems. The ERP holds a trove of operational data that we can index, query and search, but our unstructured data has potential, too. We can ingest it to a modern data platform where it can be cleansed and interrogated. The raw information, once difficult to analyse and gain insights from, becomes useful, and the rise of AI assistants like Microsoft Copilot means we need to think about how these new tools can and should be used.

Getting ready for Copilot

As one of the largest Microsoft partners in the UK, Node4 hears a lot of buzz about Microsoft’s Copilot AI Assistants (of which there are several).

Just as Delve taught organisations to think about what could be indexed and searched in SharePoint and elsewhere, there are some preparatory steps needed for Copilot too. Copilot can only see what an individual can see, but often that’s more than was intended.

Node4 can work with clients to assess readiness including privacy, policies, monitoring, integration and training. And, of course, we’re constantly monitoring developments so we can advise on new announcements, like Copilot+.

The spectre of ransomware

Another trend that we’re seeing is the rise in ransomware attacks. Cyber-insurance data from September 2023 showed not only a 12% increase in ransomware claims frequency, but also that 36% of claimants paid ransoms (often after negotiating down the amount).

In 2024, ransomware attacks are expected to evolve with more sophisticated techniques. And, in the same way that more extreme weather events have demonstrated we really can’t ignore climate change, the rise in ransomware means we can’t say “they won’t go after us” anymore. Ransomware is not just an issue for large multinational organisations – it’s increasingly a problem for small and medium enterprises too.

What this means is that all that data that we’re talking about unlocking value from needs to be protected. Our clients are talking to us about resilience and immutability – the ability to protect their data from attack.

Data resilience and immutability: the cornerstones of a data-centric approach

Data resilience and immutability (the ability of data to endure and remain unchanged, respectively) are vital. They ensure businesses can withstand unexpected events and safeguard their strategic intelligence. This is particularly true in the light of today’s escalating threats, such as ransomware attacks, which have jeopardised many businesses’ operations globally.

Every data-generated insight within an ERP environment is a precious source of intelligence. This intelligence can provide an invaluable competitive edge, and so it’s not surprising that it’s become an enticing target for malicious attacks. Ransomware, in particular, can hold this treasure hostage, dramatically disrupting a business’ operations and possibly leading to insurmountable financial or reputational damage.

In this context, it’s crucial to secure data with impregnable defences against such threats. At Node4, we prioritise data protection, resilience and immutability as cornerstones of our service offerings. The prosperous mine of business intelligence in ERP systems demands top-notch security protocols and safeguards.

The critical necessity of securing data

Just as an ERP system is valueless without correctly structured and labelled data, the impact and efficiency of this data can be catastrophically neutralised without adequate backup and security measures. Recovering data after a devastating event like a ransomware attack can be a painful, costly and time-consuming undertaking. However, these hurdles can be significantly mitigated, if not entirely avoided, through effective backup strategies and stringent data security policies.

At Node4, we believe that securing and regularly backing up data is not just an option – it’s a critical necessity. That’s why we’ve invested in state-of-the-art data centres with high-performance interconnects and immutable storage options. The fact that we run our own data centres, as well as being one of Microsoft’s strongest UK partners, means we’re able to work with clients across private, hybrid or public cloud. Our business continuity and recovery solutions cover all aspects of data resilience, and this comprehensive protection is imperative to maintain the integrity and availability of data essential to ERP processes.

The journey ahead: from ERP to data-driven intelligence

ERP solutions are undoubtedly substantive components of a robust business structure. They serve as the backbone of operational processes and hold potential to streamline business workflows. Yet it’s crucial to shift our focus beyond the ERP system and unlock the real source of value residing within – structured, organised, and secured data.

As we continue to take organisations on a digital transformation journey, the true winners will be those who understand and master the mechanism that turns raw data into decisive insights. These forward-thinking organisations will harness the power of AI effectively, backed by well-structured and secure data, to drive efficiency, innovation and competitive advantage, while diligently protecting their treasure from emerging threats. This is what we call AI transformation.

The journey from ERP to data-driven intelligence is underway. Embarking on it will not only shape the future of organisations across the world, but also dictate their survival in an increasingly interconnected and data-driven world. In the orchestration of this transformation, the ERP may conduct the score, but it’s the data that composes the symphony and, ultimately, steals the show.

This post was originally published on the Node4 blog.

Featured image by yatsusimnetcojp from Pixabay.