Weeknote 2024/09: radio; podcasts; AI and more

This week’s weeknote is short. It’s also a little earlier than usual because today’s my wedding anniversary and I was busy trying to get everything wrapped up before flying away for the weekend…

…so, in chronological order – but all mixed up between work and play:

  • Two weeks ago I said One Day (on Netflix) was a rom-com. Well… maybe not a comedy. A romantic drama? Regardless, we finished the series last weekend. There were tears. Mostly mine. And I highly recommend it for anyone who left uni’ in the UK in the 90s…
  • After passing my amateur radio foundation exam a couple of weeks ago, I have my callsign from Ofcom. I’m now M7OLN…
    • Last weekend, I met up with Christian Payne/Documentally (G5DOC) and talked radios among other things over an enjoyable cafe lunch…
    • I’m having trouble getting into local repeaters on a handheld radio from my place but we worked out my config issues so I know the radio is set up properly.
    • I can hear the local repeater but I need to put a better antenna up at home. That could be tricky. If only I could safely get closer to this chimney stack…
    • I’ve also ordered an antenna and window mount for the car. And discovered that there is a radio shop close enough to click and collect (Moonraker).
  • As a slight tangent from amateur radio (I can’t bring myself to call it HAM), I’ve discovered LoRa and some Meshtastic nodes are on their way. More on what that means when I have them set up…
  • I now have an identity on the Node4 Microsoft 365 tenant (don’t get me started on how difficult it is to bring multiple organisations into one but I have huge respect for my colleague who is managing this). Judging by the emails I’m receiving, I’m not the first person to have used this alias. I can deal with the emails for trainers and other fashion items… but it seems they were a Manchester United fan too, which is harder to take.
  • On Tuesday, I recorded a podcast with my colleague Bjoern (in/bjeorn-hirtenjohann). It was great fun and I was very chuffed when the producer, Beth, told me I could have a new career as a radio host. It may have been a joke but I would like to do more of this.
  • Then, I headed to Bletchley for the Bletchley Park Microsoft AI User Group event. I was AI-jaded when I arrived. I was AI-buzzing when I left.
    • Will Rowe (@MSFTRecruit) made us laugh, a lot, about recruiters.
    • I made some great connections.
    • I learned some cool things about AI prompting from Lydia Carroll (in/lydiacarroll) and about digital ecosystems from Chris Huntingford (@ThatPlatformGuy).
    • I also did some improv’ – volunteering for an unscripted, 1-2 minute talk on AI that children would understand. Thanks to Stephanie Stasey (in/missai) for giving me the chance to get out of my comfort zone whilst practising something I want to do more of – presenting.
  • I’ve also started to kick some thoughts around about what it means to be a technical leader… and how I can encourage others.
  • And, in a discussion about recognition, someone who will remain anonymous shared this comment with me… I feel seen:

“I’m also an introvert that overcompensates BTW. People confuse my enthusiasm, facilitation, and contribution as me being extrovert. Secretly I’m like a Duracell Bunny using a bad battery – it wears down quite quickly!”

  • (I was exhausted on Wednesday, after Tuesday’s exploits.)
  • Thursday ended with an example of when AI chatbots go wrong:
  • There were some blog posts not written this week that need to be:
    • My journey into amateur radio
    • Writing better AI prompts
  • Next week is looking even busier (with only 3 days at work) but I’ll try and write them soon.

Right, time to go, I have a plane to catch.

This week in photos

Not that many… I’m sure there will be more next week.

Featured image: author’s own, from the last time I flew with Wizz Air

Weeknote 2024/08: re-organisation and recovery

Last week’s weeknote was huge.

This week’s weeknote is more… focused.

(I may finally be finding the right balance…)

At work

  • There have been some changes. A minor re-organisation that brings the Office of the CTO closer to the delivery end of the business – with a renewed focus on innovation and technology leadership. This makes me much happier.
  • I brokered a successful introduction between a data science contact I made at the recent AWS event and my OCTO colleague who looks after data and analytics.
  • I did some script-writing as preparation for some podcasts we’re recording next week.
  • And I published a blog post about the supposed demise of cloud, where apparently lots of people are moving back on-premises because it’s “too expensive”. Hmmm:
  • Also, because nobody engages with AI blog posts, I made a little observation on LinkedIn:
  • I spent quite a bit of time working on the ransomware offering that I’ve mentioned a few times now. Once we finalise the cost model I’ll start to shout some more.
  • And someone actually booked some time with me using my Microsoft Bookings page!

At home

  • Mrs W did, as predicted, read last week’s weeknote :-)
    • I’m pleased to report that she had an enjoyable birthday and my cake baking was successful.
  • Matt is happy in Spain (for a few weeks), riding his bike in the sunshine and mixing with professionals and amateurs alike.
    • Two new cyclocross frames arrived last week too, so his bedroom back home looks like a workshop as he prepares for gravel/cyclocross later in the year.
    • Unfortunately, his groupset is wearing out (the interior components on Shimano 105-spec shifters are fine for leisure riders like me, but not for people who ride more miles on their bike than many people drive). Alpkit were selling off some surplus 105 Di2 groupsets and one is now in our house. The theory being that there’s less to wear out with an electronic groupset. I’m not convinced!
  • Ben had a great half term holiday with friends in Devon. He’s back home safely now. The Young Person’s Railcard is a wonderful scheme.
  • And I’m bouncing from day to day, ticking things off lists and generally trying to balance being a good Dad, a good husband, and to get myself back in shape, mentally and physically. Once I’d finished work for the week:
    • I took myself along to a talk about using multimeters, at one of the local clubs and societies in Olney, which filled a few gaps in my geek knowledge before I caught up with my friend James for a couple of pints.
    • And I took a ride on a local railway line that’s recently reopened after a year or so with no service. For a few weeks it’s £1 each way between Bedford and Bletchley so I decided to get a different view of the various developments along the Marston Vale. Old brickworks are now energy recovery facilities and country parks, but there’s lots more to see too.

In tech

  • OpenAI launched a text-to-video model called Sora:
  • The BBC looked back on child futurologists from 50 years ago:
  • I found Timo Elliott’s cartoons – including this one on AI:
  • And BT sold its London tower, which has long since lost its use for radio communications:
  • Whilst I feel for Kate (@katebevan), I’m pleased to see someone else finds these UI features as frustrating as me. See also country dropdowns where I scroll and scroll to get to United Kingdom but someone thought the USA was important enough to put at the top of the list:

Next week

Don’t be surprised if I skip a week on the weeknotes… I’m going to be very busy at the end of next week… but I’ll be back soon.

Featured image: author’s own

Weeknote 2024/05: using ChatGPT to overcome writer’s block; and why do UK supermarkets use technology so badly?

This week’s weeknote is shorter. Just a few nuggets… but I did actually write some real, standalone blog posts:

I hope you enjoy them. There was another one I planned about anti-social media. I thought I had it in note form but I can’t find the notes now. Maybe that will follow soon. But there’s also a possibility it will go to the great list of unwritten or part-written blog posts…

Some artificial assistance from ChatGPT

For the last few weeks, I’ve been trying to write some data sheets to describe some of the Node4 services that I’m responsible for. I’ve really struggled – not so much to understand what the service entails – but to generate lists of features and benefits.

One of my colleagues is a big fan of ChatGPT. I’m not – because I value quality writing – and I’m concerned it just churns out very formulaic text filled with buzzwords. (Sadly, in this case, that might be exactly what I need!) In addition, I’ve probably mentioned previously that my wife is a copywriter, so I am a little biased. Even so, ChatGPT 4’s content has at least allowed me to move past my writer’s block – it gave me a draft that I could refine.
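
For anyone who’d rather script this step than paste into the web UI, here’s a minimal sketch using the OpenAI Python SDK – the service description and prompt wording are purely illustrative (not what I actually used), and it assumes an API key in the OPENAI_API_KEY environment variable:

```python
# Minimal sketch: ask the OpenAI chat API for a first-draft features/benefits list.
# The service summary and prompt are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

service_summary = (
    "A managed backup service covering Microsoft 365 workloads, "
    "with UK-based support and monthly reporting."  # hypothetical example text
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a technical marketing copywriter."},
        {"role": "user", "content": (
            "Draft a bulleted list of features and a separate list of business "
            "benefits for this service, suitable for a data sheet:\n\n" + service_summary
        )},
    ],
)

# Treat the output as a first draft to edit, not as finished copy.
print(response.choices[0].message.content)
```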

Retail pricing inefficiencies

I started my career working in supermarkets (Bejam/Iceland, and then Safeway). It was the time when we saw the end of individual price ticketing and the start of barcode scanning. Back in those days (the late 1980s), it was someone’s job to make sure that the shelf edge tickets matched the store computer.

I’ve just got back from a trip to a major UK supermarket. I’m not going to name the chain, because I’ve had similar issues in others, but it was interesting to see, yet again, an advertised offer that didn’t match the scanned price. And the store’s reaction is almost always to remove the shelf edge ticket (not to correct the computer).

But we have technology that can keep these things aligned. e-ink displays are used on shelf edges in some other countries – it mystifies me that we don’t use them in the UK.

Retailers will argue that they work on small margins and that investment in systems is secondary to reducing prices. Except that right now they are doing it badly – and inefficiently too!

Not only would the use of e-ink displays allow a guaranteed match between the shelf edge and the point of sale systems, but they would remove an admin task of replacing tickets (something which is clearly not done well). They could also allow for demand-based pricing, though I’m less keen on that idea…

Plus “random” checks for self-scanning

Then, to add insult to injury, the store systems selected me for a “random” check. For a dozen items, totalling £12.69. And it seems to happen quite frequently (hence the quotes around the word random). Not long ago they were encouraging us to use an app and self-scan. Now they seem to be seeing self-scanners as potential criminals. Either innovate, use the technology, and take action when someone is abusing the system, or pay for more staff to run checkouts. The choice is yours Tesco, Sainsbury’s, Co-op, et al. But stop treating the customers that help you reduce your costs as potential shoplifters.

More “coffees”

Last week’s weeknote featured the concept of “coffees”: meeting people without an agenda, to catch up and to learn. No sooner had I hit publish than I met up with another old colleague and friend, David Saxon (@DMSaxon). David and I worked together for many years and he’s now at Microsoft so we still have a lot in common. He was staying near me last weekend, so it was a great opportunity for dinner and a chat.

I didn’t line anything up during the work week but as we roll into a new month there will be another pairing in the WB-40 podcast coffee club, plus I’ve got a couple of former team members that I really must check in with. And, in a few weeks, I’m due to catch up with my former colleague, later my manager, and long-time mentor, Mark Locke.

Things that caught my eye this week

At home

I’m at the stage of life where frequently at least one of my sons is away from home. Last weekend my wife was too – so there was just me, my youngest son, and the dog. Since Sunday evening, we’ve been a complete family again – which has been good. Matt’s back from two weeks skiing (which he referred to as altitude training) and is quite pleased (and surprised) to have been taking Strava segments on skis (he’s used to it on his bike). I need to make the most of it though before he goes back to Greece for a training camp. He’s racing next weekend, so I have one more trip away to support him before he disappears for a couple of months.

I’ve also booked the exam for my RSGB Foundation Licence (as promised at the last Milton Keynes Geek Night), so I have some revision to do.

Finally, I’m giving myself a gold star, because today, I restrained my “inner chimp”. I received a text message from my son’s school, advising me that he will soon be held back for a detention. That’s fine. He needs to learn. But it niggled me that the message contained a glaring grammatical error. This is a school which is very proud of its history and standards for students but doesn’t always follow through with its own communications. The pedantic side of me was desperate to reply and point out the mistake but I managed to restrain myself!

That’s all for now

No tech projects, no new TV, no podcasts of note, no photos. I’ll be back next week with another weeknote – and hopefully soon I’ll be able to shout about a cool new service I’ve been working on for Node4.

Featured image created using the Clippy Meme Generator at Imgflip.

Weeknote 1/2024: A new beginning

Wow, that was a bump. New Year celebrations over, a day off for the public holiday, and straight back to work.

After a lot of uncertainty in December, I’ve been keen to get stuck in to something valuable, and I’m not breaking any confidentiality by saying that my focus right now is on refreshing the collateral behind Node4’s Public Cloud offerings. I need to work across the business – my Office of the CTO (OCTO) role is about strategy, innovation and offering development – but the work also needs to include specialist sales colleagues, our marketing teams, and of course the experts that actually deliver the engagements.

So that’s the day job. Alongside that, I’ve been:

  • Avoiding stating any grand new year resolutions. I’ll only break them. It was literally hours before I broke my goal of not posting on Twitter/X this year. Though I did step away from a 453-day streak on Duolingo to focus my spare time on other, hopefully less gamified, pursuits:
  • Doing far too little exercise. A recurring health condition is impacting my ability to walk, run, cycle and to get back to Caveman Conditioning. It’s getting a bit better but it may be another week before I can have my new year fitness kick-start.
  • Eating badly. Logging everything in the Zoe app is helping me to see what I should avoid (spoiler: I need to eat more plants and less sweet stuff) but my willpower is still shockingly bad. I was also alarmed to see Prof. Tim Spector launching what appeared to be an ultra-processed food (UPF) product. More on that after I’ve got to M&S and actually seen the ingredients list for the Zoe Gut Shot, but others are telling me it’s not a UPF.
  • Redesigning the disaster recovery strategy for my photos. I learned the hard way several years ago that RAID is not a backup, and nothing exists unless it’s in three places. For me that’s the original, a copy on my Synology NAS, and a copy in the cloud. My cloud (Azure) backups were in a proprietary format from the Synology Hyper Backup program, so I’ve started to synchronise the native files by following a very useful article from Charbel Nemnom, MVP. Unfortunately the timestamps get re-written on synchronisation, but the metadata is still inside the files and these are the disaster copies – hopefully I’ll never need to rely on them.
  • Watching the third season of Slow Horses. No spoilers please. I still have 4 episodes to watch… but it’s great TV.
  • Watching Mr Bates vs. The Post Office. The more I learn about the Post Office Scandal, the more I’m genuinely shocked. I worked for Fujitsu (and, previously, ICL) for just over 15 years. I was nothing to do with Horizon, and knew nothing of the scandal, but it’s really made me think about the values of the company where I spent around half my career to date.
  • Spreading some of my late Father-in-law’s ashes by his tree in the Olney Community Orchard.
  • Meeting up with old friends from my “youth”, as one returns to England from his home in California, for a Christmas visit.

Other things

Other things I found noteworthy this week:

  • Which came first: the chicken or the egg… sorry, the scissors or the blister-pack?

Press coverage

This week, I was quoted in this article:

Coming up

This weekend will see:

  • A return to Team MK Youth Cycle Coaching. Our local cyclo-cross league is finished for the 2023/4 season so we’re switching back to road cycling as we move into the new year.
  • Some home IT projects (more on them next week).
  • General adulting and administration.

Next week, I’ll be continuing the work I mentioned at the head of this post, but also joining an online Group Coaching session from Professor John Amaechi OBE. I have no idea what to expect but I’m a huge fan of his wise commentary. I’m also listening to The Promises of Giants on Audible. (I was reading on Kindle, but switched to the audiobook.)

This week in photos

Featured image: Author’s own
(this week’s flooding of the River Great Ouse at Olney)

Celebrating ChatGPT’s first birthday…

I was asked to comment on a few GPT-related questions by Node4’s PR agency. I haven’t seen where those comments are featured yet, but I decided to string them together into my own post…

How has ChatGPT prompted innovation over the past year, transforming industries for the better?

ChatGPT’s main impact in 2023 has been to drive a conversation around generative AI and to demonstrate the potential this technology has to make truly meaningful change. It has certainly stoked a great deal of interest across many sectors and is clearly right at the top of the hype curve right now.

It’s sparked conversations around driving innovation and transforming a variety of industries in remarkable ways, including:

  • Revolutionising Customer Experiences: AI-powered chatbots, bolstered by sophisticated language models like ChatGPT, can engage with customers in natural language, offering real-time assistance, answering queries, and even providing personalized recommendations. This level of interaction not only improves customer satisfaction but also opens new avenues for businesses to understand their customers on a deeper level.
  • Enhancing Decision-Making and Strategic Planning: By harnessing the power of AI, leaders can make informed decisions that are driven by data rather than intuition alone. This has impacted decision-making processes and strategic planning across industries.
  • Transforming the Economy: ChatGPT, with its human-like writing abilities, holds the promise of automating all sorts of tasks that were previously thought to be solely in the realm of human creativity and reasoning, from writing to creating graphics to summarising and analysing data. This has left economists unsure how jobs and overall productivity might be affected [as we will discuss in a moment].
  • Developer Tools: OpenAI and Microsoft have unveiled a series of artificial intelligence tool updates, including the ability for developers to create custom versions of ChatGPT. These innovations are designed to make it easier for developers to incorporate AI into their projects, whether they’re building chatbots, integrating AI into existing systems, or creating entirely new applications.

These advancements signal a new direction in tackling complex challenges. The impact of ChatGPT on workers and the economy as a whole is far-reaching and continues to evolve. 2023 was just the start – and the announcements from companies like Microsoft on how it will use ChatGPT and other AI technologies at the heart of its strategy show that we are still only getting started on an exciting journey.

Elon Musk has recently claimed that AI will end all jobs. Is this actually a reality, or is it scaremongering?

Mr Musk was one of the original backers of OpenAI, the company that created ChatGPT; however, he resigned from its board in 2018 over conflicts of interest with Tesla’s AI work, and he now has another AI startup called xAI. He’s well-known for his controversial opinions, and his comment about AI ending all jobs is intended to fuel controversy.

Computing has been a disruptor throughout the last few generations. Generative AI is the latest iteration of that cycle.

Will we see jobs replaced by AI? Almost certainly! But new jobs will be created in their place.

To give an example, when I entered the workplace in the 1990s, we didn’t have social media and all the job roles that are built around it today; PR involved phoning journalists and taking clippings from newspapers to show where coverage had been gained for clients; and advertising was limited to print, TV, radio, and billboards. That world has changed immensely in the last 30 years and that’s just one sector.

For another example, look at retail, where a huge number of transactions are now online and even those in-store may be self-service. Or the rise in logistics opportunities (warehousing, transportation) as a result of all that online commerce, which fuels ever more variety in our purchases.

The jobs that my sons are taking on as they enter the workplace are vastly different to the ones I had. And the jobs their children take a generation later will be different again.

Just as robots have become commonplace on production lines and impacted “blue collar” roles, AI is the productivity enhancement that will impact “white collar” jobs.

So will AI end work? Maybe one day, but not for a while yet!

When and if it does, way out in the future, we will need new social constructs – perhaps a Universal Basic Income – to replace wages and salaries from employment, but right now that’s a distant dream.

How far is there still to go to overcome the ethical nightmares surrounding the technology e.g. data privacy, algorithm bias?

There’s a lot of work that’s been done to overcome ethical issues with artificial intelligence but there’s still a lot to do.

The major source of bias is the training data used by models. We’re looking at AI in the context of ChatGPT and my understanding is that ChatGPT was trained, in part, on information from the world-wide web. That data varies tremendously in quality and scope.

As it becomes easier for organisations to generate their own generative AI models, tuned with their own data, we can expect to see improved quality in the outcomes. Whether that improved quality translates into ethical responses depends on the decisions made by the humans that decide how to use the AI results. OpenAI and its partners will be keen to demonstrate how they are improving the models that they create and reducing bias, but this is a broader social issue, and we can’t simply rely on technology.

ChatGPT has developed so much in just a year, what does the next year look like for its capabilities? How will the workplace look different this time next year, thanks to ChatGPT?

We’ve been using forms of AI in our work for years – for example predictive text, design suggestions, grammar correction – but the generative AI that ChatGPT has made us all familiar with over the last year is a huge step forwards.

Microsoft is a significant investor in OpenAI (the creators of ChatGPT) and the licensing of the OpenAI models is fundamental to Microsoft’s strategy. So much so that, when Sam Altman and Greg Brockman (OpenAI’s CEO and President respectively) abruptly left the company, Microsoft moved quickly to offer them the opportunity to set up an advanced AI Research function inside Microsoft. Even though Altman and Brockman were soon reinstated at OpenAI, it shows how important this investment is to Microsoft.

In mid-November 2023, the main theme of the Microsoft Ignite conference was AI, including new computing capabilities, the inclusion of a set of Copilots in Microsoft products – and the ability for Microsoft’s customers to create their own Copilots. Indeed, in his keynote, Microsoft CEO Satya Nadella repeatedly referred to the “age of the Copilot”.

These Copilots are assistive technologies – agents to help us all be more productive in our work. The Copilots use ChatGPT and other models to generate text and visual content.

Even today, I regularly use ChatGPT to help me with a first draft of a document, or to help me write an Excel formula. The results are far from perfect, but the output is “good enough” for a first draft that I can then edit. That’s the inspiration to get me going, or the push to see something on the page, which I can then take to the next level.

This is what’s going to influence our workplaces over the next year or so. Whereas now we’re talking about the potential of Copilots, next year we’ll be using them for real.

What about 5 or 10 years time?

5 or 10 years is a long time in technology. And the pace of change seems to be increasing (or maybe that’s just a sign of my age).

I asked Microsoft Copilot (which uses ChatGPT) what the big innovations of 2013 were and it said: Google Glass (now confined to the history books); Oculus Rift (now part of Meta’s plans for augmented reality) and Bitcoin (another controversial technology whose fans say it will be world-changing and critics say has no place in society). For 2018 it was duelling neural networks (AI); babel-fish earbuds (AI) and zero-carbon natural gas.

The fact that none of these are commonplace today tells me that predicting the future is hard!

If pushed, and talking specifically about innovations where AI plays a part, I’d say that we’ll be looking at a world where:

  • Our interactions with technology will have changed – we will have more (and better) spoken interaction with our devices (think Apple Siri or Amazon Alexa, but much improved). Intelligent assistants will be everywhere. And the results they produce will be integrated with Augmented Reality (AR) to create new computing experiences that improve productivity in the workplace – be that an industrial setting or an office.
  • The computing services that are provided by companies like Node4 but also by the hyperscalers (Microsoft, Amazon, et al.) will have evolved. AI will change the demands of our clients and that will be reflected in the ways that we provide compute, storage and connectivity services. We’ll need new processors (moving beyond CPUs, GPUs, TPUs, to the next AI-driven approach), new approaches to power and cooling, and new approaches to data connectivity (hollow-core fibre, satellite communications, etc.).
  • People will still struggle to come to grips with new computing paradigms. We’ll need to invest more and more effort into helping people make good use of the technologies that are available to them. Crucially, we’ll also need to help current and future generations develop critical thinking skills to consider whether the content they see is real or computer-generated.

Anything else to add?

Well, a lot has been made of ChatGPT and its use in an educational context. One question is “should it be banned?”

In the past, calculators (and even slide rules – though they were before my time!) were banned before they became accepted tools, even in the exam hall. No-one would say today “you can’t use Google to search for information”. That’s the viewpoint we need to get to with generative AI. ChatGPT is just a tool and, at some point, instead of banning these tools, educational establishments (and broader society) will issue guidelines for their acceptable use.

Postscript

Bonus points for working out which part of this was original content by yours truly, and which had some AI assistance…

Featured image: generated via Microsoft Copilot, powered by DALL-E 3.

What did we learn at Microsoft Ignite 2023?

Right now, there’s a whole load of journalists and influencers writing about what Microsoft announced at Ignite. I’m not a journalist, and Microsoft has long since stopped considering me as an influencer. Even so, I’m going to take a look at the key messages. Not the technology announcements – for those there’s the Microsoft Ignite 2023 Book of News – but the real things IT Leaders need to take away from this event.

Microsoft’s investment in OpenAI

It’s all about AI. I know, you’re tired of the hype, suffering from AI fatigue, but for Microsoft, it really is about AI. And if you were unconvinced just how important AI is to Microsoft’s strategy, their action to snap up key members of staff from an imploding OpenAI a week later should be all you need to see:

Tortoise Media‘s Barney Macintyre (@barneymac) summed it up brilliantly when he said that

“Satya Nadella, chief executive of Microsoft, has played a blinder. Altman’s firing raised the risk that he would lose a key ally at a company into which Microsoft has invested $13 billion. After it became clear the board wouldn’t accept his reinstatement, Nadella offered jobs to Altman, Brockman and other loyalist researchers thinking about leaving.

The upshot: a new AI lab, filled with talent and wholly owned by Microsoft – without the bossy board. An $86 billion subsidiary for a $13 billion investment.”

But the soap opera continued and, by the middle of the week, Altman was back at OpenAI, apparently with the blessing of Microsoft!

If nothing else, this whole saga should reinforce just how important OpenAI is to Microsoft.

The age of the Copilot

Copilot is Microsoft’s brand for a set of assistive technologies that will sit alongside applications and provide an agent experience, built on ChatGPT, DALL-E and other models. Copilots are going to be everywhere. So much so that there is a “stack” for Copilot and Satya described Microsoft as “a Copilot company”.

That stack consists of:

  • The AI infrastructure in Azure – all Copilots are built on AzureAI.
  • Foundation models from OpenAI, including the Azure OpenAI Service to provide access in a protected manner but also new OpenAI models, fine-tuning, hosted APIs, and an open source model catalogue – including Models as a Service in Azure.
  • Your data – and Microsoft is pushing Fabric as all the data management tools in one SaaS experience, with onwards flow to Microsoft 365 for improved decision-making, Purview for data governance, and Copilots to assist. One place to unify, prepare and model data (for AI to act upon).
  • Applications, with tools like Microsoft Teams becoming more than just communication and collaboration but a “multi-player canvas for business processes”.
  • A new Copilot Studio to extend and customise Microsoft Copilot, with 1100 prebuilt plugins and connectors for every Azure data service and many common enterprise data platforms.
  • All wrapped with a set of AI safety and security measures – both in the platform (model and safety system) and in application (metaprompts, grounding and user experience).

In addition to this, Bing Chat is now re-branded as Copilot – with an enterprise version at no additional cost to eligible Entra ID users. On LinkedIn this week, one Microsoft exec posted that “Copilot is going to be the new UI for work”.

In short, Copilots will be everywhere.

Azure as the world’s computer

Of course, other cloud platforms exist, but I’m writing about Microsoft here. So what did they announce that makes Azure even more powerful and suited to running these new AI workloads?

  • Re-affirming the commitment to zero carbon power sources and then becoming carbon negative.
  • Manufacturing their own hollow-core fibre to drive up speeds.
  • Azure Boost (offloading server virtualisation processes from the hypervisor to hardware).
  • Taking the innovation from Intel and AMD but also introducing new Microsoft silicon: Azure Cobalt (ARM-based CPU series) and Azure Maia (AI accelerator in the form of an LLM training and inference chip).
  • More AI models and APIs. New tooling (Azure AI Studio).
  • Improvements in the data layer with enhancements to Microsoft Fabric. The “Microsoft Intelligent Data Platform” now has 4 tenets: databases; analytics; AI; and governance.
  • Extending Copilot across every role and function (as I briefly discussed in the previous section).

In summary, and looking forward

Microsoft is powering ahead on the back of its AI investments. And, as tired of the hype as we may all be, it would be foolish to ignore it. Copilots look to be the next generation of assistive technology that will help drive productivity. Just as robots have become commonplace on production lines and impacted “blue collar” roles, AI is the productivity enhancement that will impact “white collar” jobs.

In time we’ll see AI and mixed reality coming together to make sense of our intent and the world around us. Voice, gestures, and where we look become new inputs – the world becomes our prompt and interface.

Featured images: screenshots from the Microsoft Ignite keynote stream, under fair use for copyright purposes.

Learning to be intelligent about artificial intelligence

This week promises to be a huge one in the world of Artificial Intelligence (AI). I should caveat that: almost every week includes a barrage of news about AI. And, depending on which articles you read, AI is either going to:

  • Take away all our jobs or create exciting new jobs.
  • Solve global issues like climate change or hasten climate change through massive data centre power and water requirements.
  • Lead to the demise of society as we know it or create a new utopia.

A week of high profile AI events

So, why is this week so special?

  1. First of all, the G7 nations have agreed a set of Guiding Principles and a Code of Conduct on AI. This has been lauded by the European Commission as complementing the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act.
  2. Then, starting on Wednesday, the UK is hosting an AI Safety Summit at “the home of computing”, Bletchley Park. The summit is already controversial, with some – including Dr Sue Black, who famously championed saving Bletchley Park from redevelopment – questioning the diversity of the attendees.
  3. The same day, Microsoft’s AI Copilots will become generally available to Enterprise users, and there’s a huge buzz around how the $30/user/month Copilot plays against other offers like Bing Chat Enterprise ($5/user/month), or even using public AI models.

All just another week in AI news. Or not, depending on how you view these things!

Is AI the big deal that it seems to be?

It’s only natural to ask questions about the potential that AI offers (specifically generative AI – gAI). It’s a topic that I covered in a recent technology advice note that I wrote.

In summary, I said that:

“gAI tools should be considered as assistive technologies that can help with researching, summarising and basic drafting but they are not a replacement for human expertise.

We need to train people on the limitations of gAI. We should learn lessons from social media, where nuanced narratives get reduced to polarised soundbites. Newspaper headlines do the same, but social media industrialised things. AI has the potential to be transformative. But we need to make sure that’s done in the right way.

Getting good results out of LLMs will be a skill – a new area of subject matter expertise (known as “prompt engineering”). Similarly, questioning the outputs of GANs to recognise fake imagery will require new awareness and critical thinking.”

Node4 Technology Advice Note on Artificial Intelligence, September 2023.

Even as I’m writing this post, I can see a BBC headline that asks “Can Rishi Sunak’s big summit save us from AI nightmare?”. My response? Betteridge’s law probably applies here.

Could AI have saved a failed business?

Last weekend, The Sunday Times ran an article about the failed Babylon Healthcare organisation, titled “The app that promised an NHS ‘revolution’ then went down in flames”. The article is behind a paywall, but I’ve seen some extracts.

Two things appear to have caused Babylon’s downfall (at least in part). Not only did Babylon attract young and generally healthy patients to its telehealth services, but it also offered frictionless access.

So, it caused problems for traditional service providers, leaving them with an older, more frequently ill, and therefore more expensive sector of the population. And it caused problems for itself: who would have thought that if you offer people unlimited healthcare, they will use it?!

(In some cases, creating friction in provision of a service is a deliberate policy. I’m sure this is why my local GP doesn’t allow me to book appointments online. By making me queue up in person for one of a limited number of same-day appointments, or face a lengthy wait in a telephone queue, I’m less likely to make an appointment unless I really need it.)

The article talks about the pressures on Babylon to increase its use of artificial intelligence. It also seems to come to the conclusion that, had today’s generative AI tools been around when Babylon was launched, it would have been more successful. That’s a big jump, written by a consumer journalist, who seems to be asserting that generative AI is better at predicting health outcomes than expert system decision trees.

We need to be intelligent about how we use Artificial Intelligence

Let me be clear, generative AI makes stuff up. Literally. gAIs like ChatGPT work by predicting and generating the next word based on previous words – basically, on probability. And sometimes they get it wrong.
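
As a toy illustration of “next word by probability” – real models work over huge vocabularies with billions of learned parameters, whereas the numbers here are entirely made up:

```python
import random

# Toy "language model": made-up probabilities for the word that follows a given word.
# Real LLMs learn these distributions from enormous training sets - they don't look
# facts up in a database, they sample likely continuations.
next_word_probs = {
    "meeting": {"took": 0.7, "was": 0.25, "happened": 0.05},
    "took": {"place": 0.9, "the": 0.1},
}

def next_word(word: str) -> str:
    candidates = next_word_probs.get(word, {"[end]": 1.0})
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

sentence = ["A", "meeting"]
while sentence[-1] in next_word_probs:
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # usually "A meeting took place" - but not always
```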

Last week, I asked ChatGPT to summarise some meeting notes. The summary it produced included a typo – a made-up word:

“A meeting took tanke place between Node4 & the Infrastructure team at <client name redacted> to discuss future technology integration, project workloads, cost control measures, and hybrid cloud strategy.”

Or, as one of my friends found when he asked ChatGPT to confirm a simple percentage calculation, it initially said one thing and then “changed its mind”!

Don’t get me wrong – these tools can be fantastic for creating drafts, but they do need to be checked. Many people seem to think that an AI generates a response from a database of facts and therefore must be correct.

In conclusion

As we traverse the future landscape painted by artificial intelligence, it’s vital that we arm ourselves with a sound understanding of its potential and limitations. AI has often been regarded as a silver bullet for many of our modern challenges, a shortcut to progress and optimised efficiency. But as we’ve explored in this blog post – whether it’s the G7 nations’ principles, Microsoft’s AI Copilot, or a fallen Babylon Healthcare – AI is not a one-size-fits-all solution. It’s a tool, often brilliant but fallible, offering us both unprecedented opportunities and new forms of challenges.

The promises brought by AI are enormous. This week’s events underscore the urgency to familiarise ourselves with AI, acknowledge its potential, and intelligently navigate its limitations. From a set of AI guiding principles on a global scale, to raising awareness on gAI, and analysing the role of AI in business successes and failures – it’s clear that being informed about AI is no longer an option but a necessity.

gAI tools, while they are transformative, need to be used as assistive technologies and not as replacements for human intellect and expertise. Embracing AI should not mean renouncing critical thinking and caution. So, as we interact with AI, let’s do it intelligently, asking the right questions and understanding its strengths and limitations. We need to be smart about using AI, recognizing both its potential and its constraints. This will enable us to harness its power effectively, while avoiding over-reliance or the creation of new, unforeseen problems.

It’s time we stop viewing AI through a lens of absolute salvation or doom, and start understanding it as a dynamic field that requires thoughtful and knowledgeable engagement. Evolution in human tech culture will not be judged by the power of our tools, but by our ability to skillfully and ethically wield them. So, let’s learn to be intelligent about how we use artificial intelligence.

Postscript

That conclusion was written by an AI, and edited by a human.

Featured image: screenshot from the BBC website, under fair use for copyright purposes.

This is what happened when I asked ChatGPT to write a blog post…

In recent weeks, I’ve been watching the buzz of activity about large language models and generative AI. So I decided to see what would happen if I asked OpenAI’s ChatGPT to write a blog post for me. I gave it the following prompt:

Write a blog post of up to 800 words that discusses the use of generative AI tools like ChatGPT to automate human activities Use writing a blog post as an example. Include some commentary about the potential social impacts of using AI to generate content. Also discuss whether there are any concerns that need to be considered - such as whether AI-generated content is accurate.

“Generative AI tools like ChatGPT have been making waves in the field of automation and artificial intelligence. These tools have the ability to generate human-like text, helping to automate certain human activities such as writing, customer service, and more. While the potential benefits of these tools are clear, there are also concerns about their impact on society and the accuracy of the content they produce.

One of the most significant benefits of generative AI tools like ChatGPT is their ability to automate repetitive or time-consuming tasks. For example, a blogger who regularly posts new content may use ChatGPT to generate article outlines, saving time and effort. Customer service teams can use generative AI tools to provide quick and accurate responses to frequently asked questions, improving the overall customer experience.

Another benefit of generative AI tools is their ability to create content in multiple languages, allowing businesses to expand their reach into new markets. These tools can also be used to generate personalized content for individuals, such as email newsletters or social media posts, which can improve engagement and lead to higher conversions.

However, the use of generative AI tools also raises concerns about the potential social impacts of using AI to generate content. One concern is that these tools could lead to a loss of jobs, particularly in industries where writing and content creation are essential. While automation has always been a part of technological progress, there is a risk that AI could replace more jobs than it creates.

Another concern is that AI-generated content could be used to spread misinformation and propaganda. With the ability to create convincing, human-like text, there is a risk that malicious actors could use these tools to create fake news stories or manipulate public opinion. This could have serious consequences for democracy and social stability.

Accuracy is another concern when it comes to generative AI tools. While these tools are designed to create human-like text, they may not always produce accurate or reliable information. This could be particularly problematic in fields like journalism or science, where accuracy is essential. As AI tools become more sophisticated, it will be important to ensure that the content they produce is factually correct and trustworthy.

Despite these concerns, generative AI tools like ChatGPT have the potential to revolutionize the way we create and consume content. As with any new technology, it is important to approach these tools with caution and to carefully consider their potential impact on society. However, if used responsibly, these tools could improve efficiency, accuracy, and creativity in a variety of industries.

In the case of writing blog posts, for example, using ChatGPT to generate outlines or ideas could be a useful starting point, but it is important to remember that these tools are not a substitute for human creativity and expertise. While AI-generated content may be faster and more efficient, it may lack the nuance and personality that comes from a human writer.

To ensure that AI-generated content is accurate and reliable, it is important to use reliable sources of information and to fact-check any content that is produced. This is particularly important when it comes to sensitive or controversial topics, where misinformation can have serious consequences.

Overall, the use of generative AI tools like ChatGPT to automate human activities is a promising development that has the potential to improve efficiency and creativity in a variety of fields. However, it is important to approach these tools with caution and to carefully consider their potential impact on society. With responsible use and careful oversight, generative AI tools could become an important part of our technological landscape, helping us to achieve more than we ever thought possible.”

Is it good enough?

As it happens, the resulting text is quite believable, but it’s not actually a very good post. A different AI, the one from Yoast SEO that helps with the readability of my content, highlighted that there aren’t enough subheadings. Also, more than half the sentences are a bit long (more than 20 words). And it’s using too much passive voice (I do that too, so I guess it could be considered authentic!).
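
The long-sentence check, at least, is easy to approximate. Here’s a rough sketch – a crude split on sentence-ending punctuation, nothing like Yoast’s actual implementation – fed with a couple of sentences from the AI-generated draft above:

```python
import re

def long_sentence_report(text: str, word_limit: int = 20) -> str:
    """Rough readability check: how many sentences exceed the word limit?"""
    # Crude sentence split on ., ! and ? - good enough for a quick sanity check.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    long_ones = sum(1 for s in sentences if len(s.split()) > word_limit)
    return f"{long_ones} of {len(sentences)} sentences exceed {word_limit} words"

draft = (
    "Generative AI tools like ChatGPT have been making waves in the field of "
    "automation and artificial intelligence. These tools have the ability to "
    "generate human-like text, helping to automate certain human activities "
    "such as writing, customer service, and more."
)
print(long_sentence_report(draft))
```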

Garbage In Garbage Out (GIGO)

I tried asking ChatGPT to improve its writing, based on the points above, but I didn’t get far:

Re-write the above with shorter sentences to make it easier to read, and use sub-headings.

And then:

Try again, and this time use try to use active voice instead of passive voice... and make the subheadings a bit snappier!

It made some changes, but really the way I see tools like this working is as a very quick first draft for review and update by a human writer. We just have to be careful not to let those who don’t understand or who don’t value writing say “just get an AI to write it in a few seconds”.

Featured image by Alexandra_Koch from Pixabay.

Weeknote 18/2020: Microsoft 365, the rise of the humans and some data platform discovery


Some highlights from the last week of “lockdown” lunacy*…

Office 365 rebranding to Microsoft 365

For the last couple of years, Microsoft has had a subscription bundle called Microsoft 365, which includes Office 365 Enterprise, Enterprise Mobility and Security and Windows 10 Enterprise. Now some bright spark has decided to rebrand some Office 365 products as Microsoft 365. Except for the ones that they haven’t (Office 365 Enterprise E1/3/5). And Office 365 ProPlus (the subscription-based version of the Office applications) is now “Microsoft 365 Apps for Enterprise”. Confused? Join the club…

Read more on the Microsoft website.

The Rise of the Humans

A few years ago, I met Dave Coplin (@DCoplin). At the time he was working for Microsoft, with the assumed title of “Chief Envisioning Officer” (which was mildly amusing when he was called upon to interview the real Microsoft CEO, Satya Nadella at Future Decoded). Dave’s a really smart guy and a great communicator with a lot of thoughts about how technology might shape our futures so I’m very interested in his latest project: a YouTube Channel called The Rise of the Humans.

Episode 1 streamed on Wednesday evening and featured a discussion on Algorithmic Bias (and why it’s so important to understand who wrote an algorithm that might be judging you) along with some discussion about some of the tech news of the week and “the new normal” for skills development, education and technology. There’s also a workshop to accompany the podcast, which I intend to try out with my family…

Data Platform Discovery Day

I spent Thursday in back-to-back webcasts, but that was a good thing. I’d stumbled across Data Platform Discovery Day and joined the European event to learn about all sorts of topics, with talks delivered by MVPs from around the world.

The good thing for me was that the event was advertised as “level 100” and, whilst some of the presenters struggled with that concept, I was able to grasp just enough knowledge on a variety of topics including:

  • Azure Data Factory.
  • Implementing Power BI in the enterprise.
  • An introduction to data science.
  • SQL Server and containers.
  • The importance of DevOps (particularly apt as I finished reading The Phoenix Project this week).
  • Azure SQL Database Managed Instances.
  • Data analysis strategy with Power BI.

All in all, it was a worthwhile investment of time – and there’s a lot there for me to try and put into practice over the coming weeks.

2×2

I like my 2x2s, and found this one that may turn out to be very useful over the coming weeks and months…

Blogging

I wrote part 2 of my experiences getting started with Azure Sphere, this time getting things working with a variety of Azure Services including IoT Hub, Time Series Insights and IoT Central.

Decorating

I spent some time “rediscovering” my desk under the piles of assorted “stuff” this week. I also, finally, put my holographic Windows 2000 CD into a frame and it looks pretty good on the wall!

* I’m just trying to alliterate. I don’t really think social distancing is lunacy. It’s not lockdown either.

Amazon Web Services (AWS) Summit: London Recap


I’ve written previously about the couple of days I spent at ExCeL in February, learning about Microsoft’s latest developments at the Ignite Tour. A few weeks later, I found myself back at the same venue, this time focusing on Amazon Web Services (AWS) at the London AWS Summit (four years since my last visit).

Even with a predominantly Microsoft-focused client base, there are situations where a multi-cloud solution is required and so it makes sense for me to expand my knowledge to include Amazon’s cloud offerings. I may not have the level of detail and experience that I have with Microsoft Azure, but I have certainly enough to make an informed choice within my Architect role.

One of the first things I noticed is that, for Amazon, it’s all about the numbers. The AWS Summit had a lot of attendees – 12000+ were claimed, for more than 60 technical sessions supported by 98 sponsoring partners. Frankly, it felt to me that there were a few too many people there at times…

AWS is clearly growing – citing 41% growth comparing Q1 2019 with Q1 2018. And, whilst the comparisons with the industrial revolution and the LSE research showing that 95% of today’s startups would find traditional IT models limiting were all good and valid, the keynote soon switched to focus on AWS claims of “more”. More services. More depth. More breadth.

There were some good customer slots in the keynote: Sainsbury’s Group CIO Phil Jordan and Group Digital Officer Clodagh Moriarty spoke about improving online experiences, integrating brands such as Nectar and Sainsbury’s, and using machine learning to re-plan retail space and to plan online deliveries. Ministry of Justice CDIO Tom Read talked about how the MOJ is moving to a microservice-based application architecture.

After the keynote, I immersed myself in technical sessions. In fact, I avoided the vendor booths completely because the room was absolutely packed when I tried to get near. My afternoon consisted of:

  • Driving digital transformation using artificial intelligence by Steven Bryen (@Steven_Bryen) and Bjoern Reinke.
  • AWS networking fundamentals by Perry Wald and Tom Adamski.
  • Creating resilience through destruction by Adrian Hornsby (@adhorn).
  • How to build an Alexa Skill in 30 minutes by Andrew Muttoni (@muttonia).

All of these were great technical sessions – and probably too much for a single blog post but, here goes anyway…

Driving digital transformation using artificial intelligence

Amazon thinks that driving better customer experience requires Artificial Intelligence (AI), specifically Machine Learning (ML). Using an old picture of London Underground workers sorting through used tickets in the 1950s to identify the most popular journeys, Steven Bryen suggested that more data leads to better analytics and better outcomes that can be applied in more ways (in a cyclical manner).

The term “artificial intelligence” has been used since John McCarthy coined it in 1955. The AWS view is that AI is taking off because of:

  • Algorithms.
  • Data (specifically the ability to capture and store it at scale).
  • GPUs and acceleration.
  • Cloud computing.

Citing research from PwC [which I can’t find on the Internet], AWS claim that world GDP was $80Tn in 2018 and is expected to be $112Tn in 2030  ($15.7Tn of which can be attributed to AI).

Data science, artificial intelligence, machine learning and deep learning can be thought of as a series of concentric rings.

Machine learning can be supervised learning (getting better at finding targets); unsupervised (assume nothing and question everything); or reinforcement learning (rewarding high performing behaviour).

Amazon claims extensive AI expertise through its own ML experience:

  • Recommendations Engine
  • Prime Air
  • Alexa
  • Go (checkoutless stores)
  • Robotic warehouses – taking trolleys to packers to scan and pack (using an IoT wristband to make sure robots avoid maintenance engineers).

Every day Amazon applies new AI/ML-based improvements to its business, at a global scale through AWS.

Challenges for organisations are that:

  • ML is rare
  • plus: Building and scaling ML technology is hard
  • plus: Deploying and operating models in production is time-consuming and expensive
  • equals: a lack of cost-effective easy-to-use and scalable ML services

Most time is spent getting data ready to get intelligence from it. Customers need a complete end-to-end ML stack and AWS provides that with edge technologies such as Greengrass for offline inference and modelling in SageMaker. The AWS view is that ML prediction becomes a RESTful API call.
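
To illustrate that last point, here’s a minimal sketch of calling an already-deployed SageMaker endpoint with boto3. The endpoint name, region and payload format are hypothetical – they depend entirely on the model you train and deploy:

```python
import json

import boto3

# Invoke a model that has already been trained and deployed as a SageMaker endpoint.
# "demo-prediction-endpoint", the region and the payload shape are placeholders.
runtime = boto3.client("sagemaker-runtime", region_name="eu-west-1")

payload = {"features": [0.42, 0.45, 0.44, 1.97]}

response = runtime.invoke_endpoint(
    EndpointName="demo-prediction-endpoint",
    ContentType="application/json",
    Body=json.dumps(payload),
)

prediction = json.loads(response["Body"].read())
print(prediction)  # the model's prediction, returned from a simple API call
```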

With the scene set, Steven Bryen handed over to Bjoern Reinke, Drax Retail’s Director of Smart Metering.

Drax has converted former coal-fired power stations to use biomass: capturing carbon into biomass pellets, which are burned to create steam that drives turbines – representing 15% of the UK’s renewable energy.

Drax uses a systems thinking approach with systems of record, intelligence and engagement.

Systems of intelligence need:

  • Trusted data.
  • Insight everywhere.
  • Enterprise automation.

Customers expect tailoring: efficiency; security; safety; and competitive advantage.

Systems of intelligence can be applied to team leaders, front line agents (so they already know that a customer has just been online looking for a new tariff), leaders (for reliable data sources), and assistant-enabled recommendations (which are no longer futuristic).

Fragmented/conflicting data is pumped into a data lake from where ETL and data warehousing technologies are used for reporting and visualisation. But Drax also pull from the data lake to run analytics for data science (using Inawisdom technology).

The data science applications can monitor usage and see base load, holidays, etc. Then, they can look for anomalies – a deviation from an established time series. This might help to detect changes in tenants, etc. and the information can be surfaced to operations teams.
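
A “deviation from an established time series” check can be sketched very simply – here with a rolling mean and standard deviation over made-up meter readings (Drax’s Inawisdom-based setup will, of course, be far more sophisticated):

```python
import pandas as pd

# Made-up meter readings (kWh) with one obvious spike.
readings = pd.Series([2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 9.8, 2.1, 2.0, 2.2])

# Establish the "expected" level from the preceding readings only (shifted by one
# so a spike doesn't hide inside its own baseline), then flag large deviations.
baseline = readings.shift(1).rolling(window=5, min_periods=3)
z_scores = (readings - baseline.mean()) / baseline.std()

anomalies = readings[z_scores.abs() > 3]
print(anomalies)  # the 9.8 reading stands out against the established series
```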

AWS networking fundamentals

After hearing how AWS can be used to drive insight into customer activities, the next session was back to pure tech. Not just tech but infrastructure (albeit as a service). The following notes cover off some AWS IaaS concepts and fundamentals.

Customers deploy into virtual private cloud (VPC) environments within AWS:

  • For demonstration purposes, a private address range (CIDR) was used – 172.31.0.0/16 (a private IP range from RFC 1918). Importantly, AWS ranges should be selected to avoid potential conflicts with on-premises infrastructure. Amazon recommends using /16 (65536 addresses) but network teams may suggest something smaller.
  • AWS is dual-stack (IPv4 and IPv6) so even if an IPv6 CIDR is used, infrastructure will have both IPv4 and IPv6 addresses.
  • Each VPC should be broken into availability zones (AZs), which are risk domains on different power grids/flood profiles, with a subnet placed in each (e.g. 172.31.0.0/24, 172.31.1.0/24, 172.31.2.0/24).
  • Each VPC has a default routing table but an administrator can create and assign different routing tables to different subnets.
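
By way of illustration, those building blocks map onto a handful of boto3 calls – a sketch only, with an example region and example AZ names:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")   # example region (London)

    # Create the VPC with the private /16 range used in the demonstration
    vpc = ec2.create_vpc(CidrBlock="172.31.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # One /24 subnet per availability zone (AZ names are examples)
    for cidr, az in [("172.31.0.0/24", "eu-west-2a"),
                     ("172.31.1.0/24", "eu-west-2b"),
                     ("172.31.2.0/24", "eu-west-2c")]:
        ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)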

To connect to the Internet you will need a connection, a route and a public address:

  • Create a public subnet (one with public and private IP addresses).
  • Then, create an Internet Gateway (IGW).
  • Finally, create a route so that the default route points to the IGW (172.31.0.0/16 local and 0.0.0.0/0 igw_id).
  • Alternatively, create a private subnet and use a NAT gateway for outbound-only traffic and direct responses (172.31.0.0/16 local and 0.0.0.0/0 nat_gw_id).
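
Continuing the sketch, giving a subnet a path to the Internet looks something like this (the VPC ID is a hypothetical placeholder for the one created earlier):

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")
    vpc_id = "vpc-0123456789abcdef0"   # hypothetical ID from the earlier step

    # Create and attach an Internet Gateway, then route 0.0.0.0/0 through it
    igw = ec2.create_internet_gateway()
    igw_id = igw["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # A route table for the public subnet, with a default route via the IGW
    rtb = ec2.create_route_table(VpcId=vpc_id)
    ec2.create_route(RouteTableId=rtb["RouteTable"]["RouteTableId"],
                     DestinationCidrBlock="0.0.0.0/0",
                     GatewayId=igw_id)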

Moving on to network security:

  • Security groups provide a stateful, distributed firewall: a request in one direction automatically sets up permission for the response in the other (avoiding the need to create separate rules for inbound and outbound traffic).
    • Using an example VPC with 4 web servers and 3 back-end servers (see the sketch after this list):
      • Group into 2 security groups
      • Allow web traffic from anywhere to web servers (port 80 and source 0.0.0.0/0)
      • Only allow web servers to talk to back end servers (port 2345 and source security group ID)
  • Network Access Control Lists (NACLs) are stateless – they are just lists and need explicit rules to allow traffic in both directions.
  • Flow logs work at instance, subnet or VPC level and write output to S3 buckets or CloudWatch logs. They can be used for:
    • Visibility
    • Troubleshooting
    • Analysing traffic flow (no payload, just metadata)
      • Network interface
      • Source IP and port
      • Destination IP and port
      • Bytes
      • Action (accept/reject)
  • DNS in a VPC is switched on by default for resolution and assigning hostnames (rather than just using IP addresses).
    • AWS also has the Route 53 service for customers who would like to manage their own DNS.
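
A minimal boto3 sketch of the web/back-end security group example from the list above (the VPC ID is hypothetical; port 2345 is carried over from the example):

    import boto3

    ec2 = boto3.client("ec2")
    vpc_id = "vpc-0123456789abcdef0"   # hypothetical

    web_sg = ec2.create_security_group(
        GroupName="web-servers", Description="Front-end web servers", VpcId=vpc_id)
    backend_sg = ec2.create_security_group(
        GroupName="back-end", Description="Back-end servers", VpcId=vpc_id)

    # Allow web traffic from anywhere to the web servers (port 80, 0.0.0.0/0)
    ec2.authorize_security_group_ingress(
        GroupId=web_sg["GroupId"],
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

    # Only allow members of the web servers' group to reach the back end on 2345
    ec2.authorize_security_group_ingress(
        GroupId=backend_sg["GroupId"],
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 2345, "ToPort": 2345,
                        "UserIdGroupPairs": [{"GroupId": web_sg["GroupId"]}]}])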

Finally, connectivity options include:

  • Peering for private communication between VPCs
    • Peering is 1:1 and can be between different regions, but the CIDR ranges must not overlap.
    • One VPC owner sends a request, which is accepted by the owner on the other side; then the routing tables on both sides are updated (a boto3 sketch follows this list).
    • Peering can get complex if there are many VPCs. There is also a limit of 125 peerings per VPC, so a Transit Gateway can be used as a central point, although there are some limitations around regions.
    • Each Transit Gateway can support up to 5000 connections.
  • AWS can be connected to on-premises infrastructure using a VPN or with AWS Direct Connect
    • A VPN is established with a customer gateway and a virtual private gateway is created on the VPC side of the connection.
      • Each connection has 2 tunnels (2 endpoints in different AZs).
      • Update the routing table to define how to reach on-premises networks.
    • Direct Connect
      • AWS services on public address space are outside the VPC.
      • Direct Connect locations have a customer or partner cage and an AWS cage.
      • Create a private virtual interface (VLAN) and a public virtual interface (VLAN) for access to VPC and to other AWS services.
      • A Direct Connect Gateway is used to connect to each VPC
    • Before Transit Gateway customers needed a VPN per VPC.
      • Now they can consolidate on-premises connectivity
      • For Direct Connect it’s possible to have a single tunnel with a Transit Gateway between the customer gateway and AWS.
  • Route 53 Resolver service can be used for DNS forwarding on-premises to AWS and vice versa.
  • VPC Sharing provides separation of resources with:
    • An Owner account to set up infrastructure/networking.
    • Subnets shared with other AWS accounts so they can deploy into the subnet.
  • Interface endpoints make an API look as if it’s part of an organisation’s VPC.
    • They override the public domain name for the service.
    • Using AWS PrivateLink, only a specific service port is exposed and the direction of communications is controlled – there’s no longer any need to care about IP addresses.
  • AWS Global Accelerator brings traffic onto the AWS backbone close to end users and then uses that backbone to provide access to services.
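
As a sketch of the peering flow described in the list (request, accept, then update routes), with hypothetical IDs, regions and CIDR ranges:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")

    # Request a peering connection to a VPC with a non-overlapping CIDR range
    peering = ec2.create_vpc_peering_connection(
        VpcId="vpc-0123456789abcdef0",        # hypothetical requester VPC
        PeerVpcId="vpc-0fedcba9876543210",    # hypothetical accepter VPC
        PeerRegion="eu-west-1")               # peering can cross regions
    pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # The owner of the other VPC accepts the request from their own account:
    # ec2_peer.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Both sides then add a route to the other VPC's CIDR via the peering connection
    ec2.create_route(RouteTableId="rtb-0123456789abcdef0",   # hypothetical
                     DestinationCidrBlock="10.1.0.0/16",     # the peer's CIDR
                     VpcPeeringConnectionId=pcx_id)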

Creating resilience through destruction

Adrian Horn presenting at AWS Summit London

One of the most interesting sessions I saw at the AWS Summit was Adrian Horn’s session that talked about deliberately breaking things to create resilience – which is effectively the infrastructure version of test-driven development (TDD), I guess…

Actually, Adrian made the point that it’s not so much the issues caused by bringing things down as the complexity of bringing them back up.

“Failures are a given and everything will eventually fail over time”

Werner Vogels, CTO, Amazon.com

We may break a system into microservices to scale but we also need to think about resilience: the ability for a system to handle and eventually recover from unexpected conditions.

This needs to consider a stack that includes:

  • People
  • Application
  • Network and Data
  • Infrastructure

And building confidence through testing only takes us so far. Adrian referred to another presentation, by Jesse Robbins, where he talks about creating resilience through destruction.

Firefighters train to build intuition – so they know what to do in the event of a real emergency. In IT, we have the concept of chaos engineering – deliberately injecting failures into an environment:

  • Start small and build confidence:
    • Application level
    • Host failure
    • Resource attacks (CPU, latency…)
    • Network attacks (dependencies, latency…)
    • Region attack
    • Human attack (remove a key resource)
  • Then, build resilient systems:
    • Steady state
    • Hypothesis
    • Design and run an experiment
    • Verify and learn
    • Fix
    • (maybe go back to experiment or to start)
  • And use bulkheads to isolate parts of the system (as in shipping).

Think about:

  • Software:
    • Certificate Expiry
    • Memory leaks
    • Licences
    • Versioning
  • Infrastructure:
    • Redundancy (multi-AZ)
    • Use of managed services
    • Bulkheads
    • Infrastructure as code
  • Application:
    • Timeouts
    • Retries with back-offs, not infinite retries (see the sketch after this list)
    • Circuit breakers
    • Load shedding
    • Exception handling
  • Operations:
    • Monitoring and observability
    • Incident response
    • Measure, measure, measure
    • You build it, you run it
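
To illustrate the "retries with back-offs, not infinite retries" point, a minimal sketch of capped exponential back-off with jitter around an arbitrary call (call_dependency is a hypothetical placeholder for whatever might fail):

    import random
    import time

    def call_with_backoff(call, max_attempts=5):
        """Retry a flaky call with exponential back-off and jitter - never forever."""
        for attempt in range(max_attempts):
            try:
                return call()
            except Exception:
                if attempt == max_attempts - 1:
                    raise                                      # give up: no infinite retries
                time.sleep((2 ** attempt) + random.random())   # 1s, 2s, 4s... plus jitter

    # result = call_with_backoff(call_dependency)   # call_dependency() is hypothetical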

The AWS Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient and efficient infrastructure for their applications, based on some of these principles.

Adrian then moved on to consider what a steady state looks like:

  • The normal behaviour of the system
  • A business metric (e.g. the “pulse” of Netflix – multiple clicks on the play button if it’s not working)
    • Amazon: an extra 100ms of load time led to a 1% drop in sales (Greg Linden)
    • Google: an extra 500ms of load time led to 20% fewer searches (Marissa Mayer)
    • Yahoo: an extra 400ms of load time caused a 5-9% increase in “back” clicks (Nicole Sullivan)

He suggests asking questions about “what if?” and following some rules of thumb:

  • Start very small
  • As close as possible to production
  • Minimise the blast radius
  • Have an emergency stop
    • Be careful with state that can’t be rolled back (corrupt or incorrect data)

Use canary deployments with A/B testing via DNS or similar, sending a small slice of traffic (e.g. 1%) to the chaos experiment and the rest (99%) down the normal path.
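
One way to implement that split is with weighted DNS records – for example, Route 53 weighted routing via boto3, sending 1% of traffic to the chaos experiment and 99% to the normal path (the hosted zone ID, record name and targets are all hypothetical):

    import boto3

    route53 = boto3.client("route53")

    def weighted_record(identifier, target, weight):
        return {"Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",     # hypothetical record name
                    "Type": "CNAME",
                    "SetIdentifier": identifier,
                    "Weight": weight,              # proportion of matching queries
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target}]}}

    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",         # hypothetical zone ID
        ChangeBatch={"Changes": [
            weighted_record("normal", "normal.example.com", 99),
            weighted_record("chaos-experiment", "chaos.example.com", 1)]})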

Adrian then went on to demonstrate his approach to chaos engineering, including:

  • Fault injection queries for Amazon Aurora (can revert immediately)
    • Crash a master instance
    • Fail a replica
    • Disk failure
    • Disk congestion
  • DDoS yourself
  • Add latency to network
    • tc qdisc add dev eth0 root netem delay 200ms
  • https://github.com/Netflix/SimianArmy
    • Shut down services randomly
    • Slow down performance
    • Check conformity
    • Break an entire region
    • etc.
  • The Chaos Toolkit
  • Gremlin
    • Destruction as a service!
  • ToxiProxy
    • Sit between components and add “toxics” to test impact of issues
  • kube-monkey (for Kubernetes)
  • Pumba (for Docker)
  • Thundra (for Lambda)

Use post-mortems for correction of errors – the “5 whys”. Also, understand that there is no single, isolated “cause” of an accident.

My notes don’t do Adrian’s talk justice – there’s so much more that I could pick up from re-watching his presentation. Adrian tweeted a link to his slides and code – if you’d like to know more, check them out:

How to build an Alexa Skill in 30 minutes

Spoiler: I didn’t have a working Alexa skill at the end of my 30 minutes… nevertheless, here’s some info to get you started!

Amazon’s view is that technology tries to constrain us. Things got better with mobile and voice is the next step forward. With voice, we can express ourselves without having to understand a user interface [except we do, because we have to know how to issue commands in a format that’s understood – that’s the voice UI!].

I get the point being made – to add an item to a to-do list involves several steps:

  • Find phone
  • Unlock phone
  • Find app
  • Add item
  • etc.

Or, you could just say (for example) “Alexa, ask Ocado to add tuna to my trolley”.

Alexa is a service in the AWS cloud that understands requests and acts upon them. There are two components:

  • Alexa voice service – how a device manufacturer adds Alexa to its products.
  • Alexa Skills Kit – to create skills that make something happen (and there are currently more than 80,000 skills available).

An Alexa-enabled device only needs to know when to wake up, then stream some “mumbo jumbo” to the cloud, at which point:

  • Automatic speech recognition will translate speech to text
  • Natural language understanding will infer intent (not just text, but understanding…)

Creating a skill requires two parts: the interaction model (the voice user interface – invocation name, intents and sample utterances) and a back-end service that fulfils the requests.

Alexa-hosted skills use Lambda under the hood and creating one involves the following steps (a minimal handler sketch follows the list):

  1. Give the skill a name.
  2. Choose the development model.
  3. Choose a hosting method.
  4. Create the skill.
  5. Test in a simulation environment.
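
For context, a minimal Lambda-backed handler using the ASK SDK for Python might look something like this (the greeting text is made up; a real skill also needs its interaction model defined in the developer console):

    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_request_type

    class LaunchRequestHandler(AbstractRequestHandler):
        """Responds when the user opens the skill ("Alexa, open ...")."""
        def can_handle(self, handler_input):
            return is_request_type("LaunchRequest")(handler_input)

        def handle(self, handler_input):
            speech = "Hello! Ask me to add something to your list."   # made-up response
            return handler_input.response_builder.speak(speech).response

    sb = SkillBuilder()
    sb.add_request_handler(LaunchRequestHandler())
    lambda_handler = sb.lambda_handler()   # the entry point that Lambda invokes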

Finally, some more links that may be useful:

In summary

Looking back, the technical sessions made my visit to the AWS Summit worthwhile but overall, I was a little disappointed, as this tweet suggests:

Would I recommend the AWS Summit to others? Maybe. Would I watch the keynote from home? No. Would I try to watch some more technical sessions? Absolutely, if they were of the quality I saw on the day. Would I bother to go to ExCeL with 12000 other delegates herded like cattle? Probably not…