Learning to be intelligent about artificial intelligence

This week promises to be a huge one in the world of Artificial Intelligence (AI). I should caveat that statement: almost every week brings a barrage of news about AI. And, depending on which articles you read, AI is either going to:

  • Take away all our jobs or create exciting new jobs.
  • Solve global issues like climate change or hasten climate change through massive data centre power and water requirements.
  • Lead to the demise of society as we know it or create a new utopia.

A week of high profile AI events

So, why is this week so special?

  1. First of all, the G7 nations have agreed a set of Guiding Principles and a Code of Conduct on AI. This has been lauded by the European Commission as complementing the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act.
  2. Then, starting on Wednesday, the UK is hosting an AI Safety Summit at “the home of computing”, Bletchley Park. And this summit is already controversial with some questioning the diversity of the attendees, including Dr Sue Black, who famously championed saving Bletchley Park from redevelopment.
  3. The same day, Microsoft’s AI Copilots will become generally available to Enterprise users, and there’s a huge buzz around how the $30/user/month Copilot plays against other offers like Bing Chat Enterprise ($5/user/month), or even using public AI models.

All just another week in AI news. Or not, depending on how you view these things!

Is AI the big deal that it seems to be?

It’s only natural to ask questions about the potential that AI offers (specifically generative AI – gAI). It’s a topic that I covered in a recent technology advice note that I wrote.

In summary, I said that:

“gAI tools should be considered as assistive technologies that can help with researching, summarising and basic drafting but they are not a replacement for human expertise.

We need to train people on the limitations of gAI. We should learn lessons from social media, where nuanced narratives get reduced to polarised soundbites. Newspaper headlines do the same, but social media industrialised things. AI has the potential to be transformative. But we need to make sure that’s done in the right way.

Getting good results out of LLMs will be a skill – a new area of subject matter expertise (known as “prompt engineering”). Similarly, questioning the outputs of GANs to recognise fake imagery will require new awareness and critical thinking.”

Node 4 Technology Advice Note on Artificial Intelligence, September 2023.

Even as I’m writing this post, I can see a BBC headline that asks “Can Rishi Sunak’s big summit save us from AI nightmare?”. My response? Betteridge’s law probably applies here.

Could AI have saved a failed business?

Last weekend, The Sunday Times ran an article about the failed Babylon Healthcare organisation, titled “The app that promised an NHS ‘revolution’ then went down in flames”. The article is behind a paywall, but I’ve seen some extracts.

Two things appear to have caused Babylon’s downfall (at least in part). Not only did Babylon attract young and generally healthy patients to its telehealth services, but it also offered frictionless access.

So, it caused problems for traditional service providers, leaving them with an older, more frequently ill, and therefore more expensive sector of the population. And it caused problems for itself: who would have thought that if you offer people unlimited healthcare, they will use it?!

(In some cases, creating friction in provision of a service is a deliberate policy. I’m sure this is why my local GP doesn’t allow me to book appointments online. By making me queue up in person for one of a limited number of same-day appointments, or face a lengthy wait in a telephone queue, I’m less likely to make an appointment unless I really need it.)

The article talks about the pressures on Babylon to increase its use of artificial intelligence. It also seems to come to the conclusion that, had today’s generative AI tools been around when Babylon was launched, it would have been more successful. That’s a big jump, written by a consumer journalist, who seems to be asserting that generative AI is better at predicting health outcomes than expert system decision trees.

We need to be intelligent about how we use Artificial Intelligence

Let me be clear: generative AI makes stuff up. Literally. gAIs like ChatGPT work by predicting and generating the next word based on previous words – basically, on probability. And sometimes they get it wrong.
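To make that concrete, here’s a toy next-word sampler. Real LLMs use neural networks conditioned on long contexts, not a hand-written bigram table, so this is purely illustrative (the words and probabilities are invented):

```python
import random

# Toy next-word model: probabilities of the next word given the previous one.
# Purely illustrative -- real LLMs learn these distributions at vast scale.
bigram_probs = {
    "the": {"meeting": 0.6, "summary": 0.4},
    "meeting": {"took": 0.7, "notes": 0.3},
    "took": {"place": 0.9, "tanke": 0.1},  # even low-probability nonsense gets sampled occasionally
}

def next_word(previous, rng):
    candidates = bigram_probs[previous]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
sentence = ["the"]
while sentence[-1] in bigram_probs and len(sentence) < 6:
    sentence.append(next_word(sentence[-1], rng))
print(" ".join(sentence))
```

The point is that the model has no notion of “correct” – only of “probable” – so occasionally it emits a “tanke” instead of a “took place”.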

Last week, I asked ChatGPT to summarise some meeting notes. The summary it produced included a typo – a made-up word:

“A meeting took tanke place between Node4 & the Infrastructure team at <client name redacted> to discuss future technology integration, project workloads, cost control measures, and hybrid cloud strategy.”

Or, as one of my friends found when he asked ChatGPT to confirm a simple percentage calculation, it initially said one thing and then “changed its mind”!

Don’t get me wrong – these tools can be fantastic for creating drafts, but they do need to be checked. Many people seem to think that an AI generates a response from a database of facts and therefore must be correct.

In conclusion

As we traverse the future landscape painted by artificial intelligence, it’s vital that we arm ourselves with a sound understanding of its potential and limitations. AI has often been regarded as a silver bullet for many of our modern challenges, a shortcut to progress and optimised efficiency. But as we’ve explored in this blog post – whether it’s the G7 nations’ principles, Microsoft’s AI Copilot, or a fallen Babylon Healthcare – AI is not a one-size-fits-all solution. It’s a tool, often brilliant but fallible, offering us both unprecedented opportunities and new forms of challenges.

The promises brought by AI are enormous. This week’s events underscore the urgency to familiarise ourselves with AI, acknowledge its potential, and intelligently navigate its limitations. From a set of AI guiding principles on a global scale, to raising awareness on gAI, and analysing the role of AI in business successes and failures – it’s clear that being informed about AI is no longer an option but a necessity.

gAI tools, while they are transformative, need to be used as assistive technologies and not as replacements for human intellect and expertise. Embracing AI should not mean renouncing critical thinking and caution. So, as we interact with AI, let’s do it intelligently, asking the right questions and understanding its strengths and limitations. We need to be smart about using AI, recognising both its potential and its constraints. This will enable us to harness its power effectively, while avoiding over-reliance or the creation of new, unforeseen problems.

It’s time we stop viewing AI through a lens of absolute salvation or doom, and start understanding it as a dynamic field that requires thoughtful and knowledgeable engagement. Evolution in human tech culture will not be judged by the power of our tools, but by our ability to skillfully and ethically wield them. So, let’s learn to be intelligent about how we use artificial intelligence.

Postscript

That conclusion was written by an AI, and edited by a human.

Featured image: screenshot from the BBC website, under fair use for copyright purposes.

This is what happened when I asked ChatGPT to write a blog post…

This content is 2 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In recent weeks, I’ve been watching the buzz of activity about large language models and generative AI. So I decided to see what would happen if I asked Open AI’s ChatGPT to write a blog post for me. I gave it the following prompt:

Write a blog post of up to 800 words that discusses the use of generative AI tools like ChatGPT to automate human activities Use writing a blog post as an example. Include some commentary about the potential social impacts of using AI to generate content. Also discuss whether there are any concerns that need to be considered - such as whether AI-generated content is accurate.

“Generative AI tools like ChatGPT have been making waves in the field of automation and artificial intelligence. These tools have the ability to generate human-like text, helping to automate certain human activities such as writing, customer service, and more. While the potential benefits of these tools are clear, there are also concerns about their impact on society and the accuracy of the content they produce.

One of the most significant benefits of generative AI tools like ChatGPT is their ability to automate repetitive or time-consuming tasks. For example, a blogger who regularly posts new content may use ChatGPT to generate article outlines, saving time and effort. Customer service teams can use generative AI tools to provide quick and accurate responses to frequently asked questions, improving the overall customer experience.

Another benefit of generative AI tools is their ability to create content in multiple languages, allowing businesses to expand their reach into new markets. These tools can also be used to generate personalized content for individuals, such as email newsletters or social media posts, which can improve engagement and lead to higher conversions.

However, the use of generative AI tools also raises concerns about the potential social impacts of using AI to generate content. One concern is that these tools could lead to a loss of jobs, particularly in industries where writing and content creation are essential. While automation has always been a part of technological progress, there is a risk that AI could replace more jobs than it creates.

Another concern is that AI-generated content could be used to spread misinformation and propaganda. With the ability to create convincing, human-like text, there is a risk that malicious actors could use these tools to create fake news stories or manipulate public opinion. This could have serious consequences for democracy and social stability.

Accuracy is another concern when it comes to generative AI tools. While these tools are designed to create human-like text, they may not always produce accurate or reliable information. This could be particularly problematic in fields like journalism or science, where accuracy is essential. As AI tools become more sophisticated, it will be important to ensure that the content they produce is factually correct and trustworthy.

Despite these concerns, generative AI tools like ChatGPT have the potential to revolutionize the way we create and consume content. As with any new technology, it is important to approach these tools with caution and to carefully consider their potential impact on society. However, if used responsibly, these tools could improve efficiency, accuracy, and creativity in a variety of industries.

In the case of writing blog posts, for example, using ChatGPT to generate outlines or ideas could be a useful starting point, but it is important to remember that these tools are not a substitute for human creativity and expertise. While AI-generated content may be faster and more efficient, it may lack the nuance and personality that comes from a human writer.

To ensure that AI-generated content is accurate and reliable, it is important to use reliable sources of information and to fact-check any content that is produced. This is particularly important when it comes to sensitive or controversial topics, where misinformation can have serious consequences.

Overall, the use of generative AI tools like ChatGPT to automate human activities is a promising development that has the potential to improve efficiency and creativity in a variety of fields. However, it is important to approach these tools with caution and to carefully consider their potential impact on society. With responsible use and careful oversight, generative AI tools could become an important part of our technological landscape, helping us to achieve more than we ever thought possible.”

Is it good enough?

As it happens, the resulting text is quite believable, but it’s not actually a very good post. A different AI, the one from Yoast SEO that helps with the readability of my content, highlighted that there aren’t enough subheadings. Also, more than half the sentences are a bit long (more than 20 words). And it’s using too much passive voice (I do that too, so I guess it could be considered authentic!).

Garbage In, Garbage Out (GIGO)

I tried asking ChatGPT to improve its writing, based on the points above, but I didn’t get far:

Re-write the above with shorter sentences to make it easier to read, and use sub-headings.

And then:

Try again, and this time use try to use active voice instead of passive voice... and make the subheadings a bit snappier!

It made some changes, but really the way I see tools like this working is as a very quick first draft for review and update by a human writer. We just have to be careful not to let those who don’t understand or who don’t value writing say “just get an AI to write it in a few seconds”.

Featured image by Alexandra_Koch from Pixabay.

Weeknote 18/2020: Microsoft 365, the rise of the humans and some data platform discovery

This content is 4 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Some highlights from the last week of “lockdown” lunacy*…

Office 365 rebranding to Microsoft 365

For the last couple of years, Microsoft has had a subscription bundle called Microsoft 365, which includes Office 365 Enterprise, Enterprise Mobility and Security and Windows 10 Enterprise. Now some bright spark has decided to rebrand some Office 365 products as Microsoft 365. Except for the ones that they haven’t (Office 365 Enterprise E1/3/5). And Office 365 ProPlus (the subscription-based version of the Office applications) is now “Microsoft 365 Apps for Enterprise”. Confused? Join the club…

Read more on the Microsoft website.

The Rise of the Humans

A few years ago, I met Dave Coplin (@DCoplin). At the time he was working for Microsoft, with the assumed title of “Chief Envisioning Officer” (which was mildly amusing when he was called upon to interview the real Microsoft CEO, Satya Nadella at Future Decoded). Dave’s a really smart guy and a great communicator with a lot of thoughts about how technology might shape our futures so I’m very interested in his latest project: a YouTube Channel called The Rise of the Humans.

Episode 1 streamed on Wednesday evening and featured a discussion on Algorithmic Bias (and why it’s so important to understand who wrote an algorithm that might be judging you) along with some discussion about some of the tech news of the week and “the new normal” for skills development, education and technology. There’s also a workshop to accompany the podcast, which I intend to try out with my family…

Data Platform Discovery Day

I spent Thursday in back-to-back webcasts, but that was a good thing. I’d stumbled across the presence of Data Platform Discovery Day and I joined the European event to learn about all sorts of topics, with talks delivered by MVPs from around the world.

The good thing for me was that the event was advertised as “level 100” and, whilst some of the presenters struggled with that concept, I was able to grasp just enough knowledge on a variety of topics including:

  • Azure Data Factory.
  • Implementing Power BI in the enterprise.
  • An introduction to data science.
  • SQL Server and containers.
  • The importance of DevOps (particularly apt as I finished reading The Phoenix Project this week).
  • Azure SQL Database Managed Instances.
  • Data analysis strategy with Power BI.

All in all, it was a worthwhile investment of time – and there’s a lot there for me to try and put into practice over the coming weeks.

2×2

I like my 2x2s, and found this one that may turn out to be very useful over the coming weeks and months…

Blogging

I wrote part 2 of my experiences getting started with Azure Sphere, this time getting things working with a variety of Azure Services including IoT Hub, Time Series Insights and IoT Central.

Decorating

I spent some time “rediscovering” my desk under the piles of assorted “stuff” this week. I also, finally, put my holographic Windows 2000 CD into a frame and it looks pretty good on the wall!

* I’m just trying to alliterate. I don’t really think social distancing is lunacy. It’s not lockdown either.

Amazon Web Services (AWS) Summit: London Recap

This content is 5 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ve written previously about the couple of days I spent at ExCeL in February, learning about Microsoft’s latest developments at the Ignite Tour and, a few weeks later I found myself back at the same venue, this time focusing on Amazon Web Services (AWS) at the London AWS Summit (four years since my last visit).

Even with a predominantly Microsoft-focused client base, there are situations where a multi-cloud solution is required and so, it makes sense for me to expand my knowledge to include Amazon’s cloud offerings. I may not have the detail and experience that I have with Microsoft Azure, but certainly enough to make an informed choice within my Architect role.

One of the first things I noticed is that, for Amazon, it’s all about the numbers. The AWS Summit had a lot of attendees – 12000+ were claimed, for more than 60 technical sessions supported by 98 sponsoring partners. Frankly, it felt to me that there were a few too many people there at times…

AWS is clearly growing – citing 41% growth comparing Q1 2019 with Q1 2018. And, whilst the comparisons with the industrial revolution and the LSE research showing that 95% of today’s startups would find traditional IT models limiting were all good and valid, the keynote soon switched to focus on AWS claims of “more”. More services. More depth. More breadth.

There were some good customer slots in the keynote: Sainsbury’s Group CIO Phil Jordan and Group Digital Officer Clodagh Moriaty spoke about improving online experiences, integrating brands such as Nectar and Sainsbury’s, and using machine learning to re-plan retail space and to plan online deliveries. Ministry of Justice CDIO Tom Read talked about how the MOJ is moving to a microservice-based application architecture.

After the keynote, I immersed myself in technical sessions. In fact, I avoided the vendor booths completely because the room was absolutely packed when I tried to get near. My afternoon consisted of:

  • Driving digital transformation using artificial intelligence by Steven Bryen (@Steven_Bryen) and Bjoern Reinke.
  • AWS networking fundamentals by Perry Wald and Tom Adamski.
  • Creating resilience through destruction by Adrian Hornsby (@adhorn).
  • How to build an Alexa Skill in 30 minutes by Andrew Muttoni (@muttonia).

All of these were great technical sessions – and probably too much for a single blog post but, here goes anyway…

Driving digital transformation using artificial intelligence

Amazon thinks that driving better customer experience requires Artificial Intelligence (AI), specifically Machine Learning (ML). Using an old picture of London Underground workers sorting through used tickets in the 1950s to identify the most popular journeys, Steven Bryen suggested that more data leads to better analytics and better outcomes that can be applied in more ways (in a cyclical manner).

The term “artificial intelligence” has been used since John McCarthy coined it in 1955. The AWS view is that AI is taking off because of:

  • Algorithms.
  • Data (specifically the ability to capture and store it at scale).
  • GPUs and acceleration.
  • Cloud computing.

Citing research from PwC [which I can’t find on the Internet], AWS claims that world GDP was $80Tn in 2018 and is expected to be $112Tn in 2030 ($15.7Tn of which can be attributed to AI).

Data science, artificial intelligence, machine learning and deep learning can be thought of as a series of concentric rings.

Machine learning can be supervised learning (getting better at finding targets); unsupervised (assume nothing and question everything); or reinforcement learning (rewarding high-performing behaviour).

Amazon claims extensive AI experience through its own ML experience:

  • Recommendations Engine
  • Prime Air
  • Alexa
  • Go (checkoutless stores)
  • Robotic warehouses – taking trolleys to packers to scan and pack (using an IoT wristband to make sure robots avoid maintenance engineers).

Every day Amazon applies new AI/ML-based improvements to its business, at a global scale through AWS.

Challenges for organisations are that:

  • ML is rare
  • plus: Building and scaling ML technology is hard
  • plus: Deploying and operating models in production is time-consuming and expensive
  • equals: a lack of cost-effective easy-to-use and scalable ML services

Most time is spent getting data ready to get intelligence from it. Customers need a complete end-to-end ML stack and AWS provides that with edge technologies such as Greengrass for offline inference and modelling in SageMaker. The AWS view is that ML prediction becomes a RESTful API call.

With the scene set, Steven Bryen handed over to Bjoern Reinke, Drax Retail’s Director of Smart Metering.

Drax has converted former coal-fired power stations to use biomass: capturing carbon into biomass pellets, which are burned to create steam that drives turbines – representing 15% of the UK’s renewable energy.

Drax uses a systems thinking approach with systems of record, intelligence and engagement.

Systems of intelligence need:

  • Trusted data.
  • Insight everywhere.
  • Enterprise automation.

Customers expect tailoring: efficiency; security; safety; and competitive advantage.

Systems of intelligence can be applied to team leaders, front line agents (so they already know that a customer has just been online looking for a new tariff), leaders (for reliable data sources), and assistant-enabled recommendations (which are no longer futuristic).

Fragmented/conflicting data is pumped into a data lake, from where ETL and data warehousing technologies are used for reporting and visualisation. But Drax also pulls from the data lake to run analytics for data science (using Inawisdom technology).

The data science applications can monitor usage and see base load, holidays, etc. Then, they can look for anomalies – a deviation from an established time series. This might help to detect changes in tenants, etc. and the information can be surfaced to operations teams.

AWS networking fundamentals

After hearing how AWS can be used to drive insight into customer activities, the next session was back to pure tech. Not just tech, but infrastructure (albeit as a service). The following notes cover off some AWS IaaS concepts and fundamentals.

Customers deploy into virtual private cloud (VPC) environments within AWS:

  • For demonstration purposes, a private address range (CIDR) was used – 172.31.0.0/16 (a private IP range from RFC 1918). Importantly, AWS ranges should be selected to avoid potential conflicts with on-premises infrastructure. Amazon recommends using /16 (65536 addresses) but network teams may suggest something smaller.
  • AWS is dual-stack (IPv4 and IPv6) so even if an IPv6 CIDR is used, infrastructure will have both IPv4 and IPv6 addresses.
  • Each VPC should be broken into availability zones (AZs), which are risk domains on different power grids/flood profiles and a subnet placed in each (e.g. 172.31.0.0/24, 172.31.1.0/24, 172.31.2.0/24).
  • Each VPC has a default routing table but an administrator can create and assign different routing tables to different subnets.
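The CIDR arithmetic above is easy to sanity-check with Python’s standard ipaddress module (the availability-zone labels here are just placeholders):

```python
import ipaddress

# The demo VPC range from above (RFC 1918 private space).
vpc_cidr = ipaddress.ip_network("172.31.0.0/16")
print(vpc_cidr.num_addresses)  # 65536, as the /16 recommendation implies

# One /24 subnet per availability zone, as described above.
az_subnets = list(vpc_cidr.subnets(new_prefix=24))[:3]
for az, subnet in zip(["az-a", "az-b", "az-c"], az_subnets):
    print(az, subnet)  # 172.31.0.0/24, 172.31.1.0/24, 172.31.2.0/24
```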

To connect to the Internet you will need a connection, a route and a public address:

  • Create a public subnet (one with public and private IP addresses).
  • Then, create an Internet Gateway (IGW).
  • Finally, Create a route so that the default gateway is the IGW (172.31.0.0/16 local and 0.0.0.0/0 igw_id).
  • Alternatively, create a private subnet and use a NAT gateway for outbound-only traffic and direct responses (172.31.0.0/16 local and 0.0.0.0/0 nat_gw_id).
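In spirit, route evaluation is a longest-prefix match over the destination address. A minimal sketch of the public-subnet route table above (the gateway ID is a made-up example):

```python
import ipaddress

# A VPC route table as (destination, target) pairs.
# The most specific (longest-prefix) matching route wins.
route_table = [
    (ipaddress.ip_network("172.31.0.0/16"), "local"),
    (ipaddress.ip_network("0.0.0.0/0"), "igw-0123456789abcdef0"),  # hypothetical IGW id
]

def resolve(destination_ip):
    dest = ipaddress.ip_address(destination_ip)
    matches = [(net, target) for net, target in route_table if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(resolve("172.31.1.10"))   # another instance in the VPC -> local
print(resolve("93.184.216.34")) # the Internet -> the Internet Gateway
```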

Moving on to network security:

  • Security groups provide a stateful distributed firewall, so a request from one direction automatically sets up permissions for a response from the other (avoiding the need to set up separate rules for inbound and outbound traffic).
    • Using an example VPC with 4 web servers and 3 back end servers:
      • Group into 2 security groups
      • Allow web traffic from anywhere to web servers (port 80 and source 0.0.0.0/0)
      • Only allow web servers to talk to back end servers (port 2345 and source security group ID)
  • Network Access Control Lists (NACLs) are stateless – they are just lists and need explicit rules for both directions.
  • Flow logs work at instance, subnet or VPC level and write output to S3 buckets or CloudWatch logs. They can be used for:
    • Visibility
    • Troubleshooting
    • Analysing traffic flow (no payload, just metadata)
      • Network interface
      • Source IP and port
      • Destination IP and port
      • Bytes
      • Condition (accept/reject)
  • DNS in a VPC is switched on by default for resolution and assigning hostnames (rather than just using IP addresses).
    • AWS also has the Route 53 service for customers who would like to manage their own DNS.

Finally, connectivity options include:

  • Peering for private communication between VPCs
    • Peering is 1:1 and can be in different regions but the CIDR must not overlap
    • One VPC owner sends a peering request, which is accepted by the owner of the other VPC. Then, the routing tables are updated on both sides.
    • Peering can get complex if there are many VPCs. There is also a limit of 125 peerings so a Transit Gateway can be used to act as a central point but there are some limitations around regions.
    • Each Transit Gateway can support up to 5000 connections.
  • AWS can be connected to on-premises infrastructure using a VPN or with AWS Direct Connect
    • A VPN is established with a customer gateway and a virtual private gateway is created on the VPC side of the connection.
      • Each connection has 2 tunnels (2 endpoints in different AZs).
      • Update the routing table to define how to reach on-premises networks.
    • Direct Connect
      • AWS services on public address space are outside the VPC.
      • Direct Connect locations have a customer or partner cage and an AWS cage.
      • Create a private virtual interface (VLAN) and a public virtual interface (VLAN) for access to VPC and to other AWS services.
      • A Direct Connect Gateway is used to connect to each VPC
    • Before Transit Gateway, customers needed a VPN per VPC.
      • Now they can consolidate on-premises connectivity
      • For Direct Connect it’s possible to have a single tunnel with a Transit Gateway between the customer gateway and AWS.
  • Route 53 Resolver service can be used for DNS forwarding on-premises to AWS and vice versa.
  • VPC Sharing provides separation of resources with:
    • An Owner account to set up infrastructure/networking.
    • Subnets shared with other AWS accounts so they can deploy into the subnet.
  • Interface endpoints make an API look as if it’s part of an organisation’s VPC.
    • They override the public domain name for service.
    • Using a private link, you can expose only a specific service port, control the direction of communications, and no longer need to care about IP addresses.
  • Amazon Global Accelerator brings traffic onto the AWS backbone close to end users and then uses that backbone to provide access to services.

Creating resilience through destruction

Adrian Hornsby presenting at AWS Summit London

One of the most interesting sessions I saw at the AWS Summit was Adrian Hornsby’s session about deliberately breaking things to create resilience – which is effectively the infrastructure version of test-driven development (TDD), I guess…

Actually, Adrian made the point that it’s not so much the issues that bringing things down causes as the complexity of bringing them back up.

“Failures are a given and everything will eventually fail over time”

Werner Vogels, CTO, Amazon.com

We may break a system into microservices to scale but we also need to think about resilience: the ability for a system to handle and eventually recover from unexpected conditions.

This needs to consider a stack that includes:

  • People
  • Application
  • Network and Data
  • Infrastructure

And building confidence through testing only takes us so far. Adrian referred to another presentation, by Jesse Robbins, where he talks about creating resilience through destruction.

Firefighters train to build intuition – so they know what to do in the event of a real emergency. In IT, we have the concept of chaos engineering – deliberately injecting failures into an environment:

  • Start small and build confidence:
    • Application level
    • Host failure
    • Resource attacks (CPU, latency…)
    • Network attacks (dependencies, latency…)
    • Region attack
    • Human attack (remove a key resource)
  • Then, build resilient systems:
    • Steady state
    • Hypothesis
    • Design and run an experiment
    • Verify and learn
    • Fix
    • (maybe go back to experiment or to start)
  • And use bulkheads to isolate parts of the system (as in shipping).

Think about:

  • Software:
    • Certificate Expiry
    • Memory leaks
    • Licences
    • Versioning
  • Infrastructure:
    • Redundancy (multi-AZ)
    • Use of managed services
    • Bulkheads
    • Infrastructure as code
  • Application:
    • Timeouts
    • Retries with back-offs (not infinite retries)
    • Circuit breakers
    • Load shedding
    • Exception handing
  • Operations:
    • Monitoring and observability
    • Incident response
    • Measure, measure, measure
    • You build it, you run it
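On the application side, “retries with back-offs (not infinite retries)” deserves a sketch. This is the generic pattern rather than any particular AWS SDK’s implementation (the SDKs have their own built-in retry logic):

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.1, max_delay=2.0):
    """Retry with capped exponential back-off and jitter -- never retry forever."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up: retries are bounded
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))  # jitter avoids synchronised retry storms

# Demo: a hypothetical flaky dependency that fails twice, then succeeds.
attempts = {"count": 0}

def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retries(flaky, base_delay=0.01)
print(result, "after", attempts["count"], "attempts")
```

The cap and the jitter matter: unbounded, synchronised retries are exactly the kind of thundering herd that turns a blip into an outage.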

AWS’ Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications, based on some of these principles.

Adrian then moved on to consider what a steady state looks like:

  • Normal behaviour of system
  • Business metric (e.g. pulse of Netflix – multiple clicks on play button if not working)
    • Amazon extra 100ms load time led to 1% drop in sales (Greg Linden)
    • Google extra 500ms of load time led to 20% fewer searches (Marissa Mayer)
    • Yahoo extra 400ms of load time caused 5-9% increase in back clicks (Nicole Sullivan)

He suggests asking questions about “what if?” and following some rules of thumb:

  • Start very small
  • As close as possible to production
  • Minimise the blast radius
  • Have an emergency stop
    • Be careful with state that can’t be rolled back (corrupt or incorrect data)

Use canary deployments with A/B testing via DNS or similar, routing traffic either to the chaos experiment (1%) or to normal service (99%).
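As a rough sketch of that 1%/99% split (in practice you’d do this with weighted DNS records or a load balancer, not in application code):

```python
import random

def choose_route(rng, chaos_fraction=0.01):
    """Send ~1% of requests to the chaos experiment, the rest to normal service."""
    return "chaos" if rng.random() < chaos_fraction else "normal"

rng = random.Random(0)
routes = [choose_route(rng) for _ in range(10_000)]
print("chaos requests:", routes.count("chaos"))  # roughly 100 of 10,000
```

Keeping the experimental fraction tiny is one way of minimising the blast radius mentioned above.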

Adrian then went on to demonstrate his approach to chaos engineering, including:

  • Fault injection queries for Amazon Aurora (can revert immediately)
    • Crash a master instance
    • Fail a replica
    • Disk failure
    • Disk congestion
  • DDoS yourself
  • Add latency to network
    • tc qdisc add dev eth0 root netem delay 200ms
  • https://github.com/Netflix/SimianArmy
    • Shut down services randomly
    • Slow down performance
    • Check conformity
    • Break an entire region
    • etc.
  • The chaos toolkit
  • Gremlin
    • Destruction as a service!
  • ToxiProxy
    • Sit between components and add “toxics” to test impact of issues
  • Kube-monkey project (for Kubernetes)
  • Pumba (for Docker)
  • Thundra (for Lambda)
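Tools like ToxiProxy sit between real components at the TCP level; the same idea can be illustrated in-process with a Python decorator that injects a latency “toxic” into calls (a toy sketch of the concept, not how ToxiProxy itself works):

```python
import functools
import time

def latency_toxic(delay_seconds):
    """Decorator that injects fixed latency before each call, in the
    spirit of ToxiProxy's latency toxic: wrap a dependency with it and
    observe how the rest of the system copes with a slow component."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            time.sleep(delay_seconds)
            return fn(*args, **kwargs)
        return inner
    return wrap
```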

Use post-mortems for correction of errors – the “5 whys”. Also, understand that there is no single, isolated “cause” of an accident.

My notes don’t do Adrian’s talk justice – there’s so much more that I could pick up from re-watching his presentation. Adrian tweeted a link to his slides and code – if you’d like to know more, check them out:

How to build an Alexa Skill in 30 minutes

Spoiler: I didn’t have a working Alexa skill at the end of my 30 minutes… nevertheless, here’s some info to get you started!

Amazon’s view is that technology tries to constrain us. Things got better with mobile and voice is the next step forward. With voice, we can express ourselves without having to understand a user interface [except we do, because we have to know how to issue commands in a format that’s understood – that’s the voice UI!].

I get the point being made – to add an item to a to-do list involves several steps:

  • Find phone
  • Unlock phone
  • Find app
  • Add item
  • etc.

Or, you could just say (for example) “Alexa, ask Ocado to add tuna to my trolley”.

Alexa is a service in the AWS cloud that understands requests and acts upon them. There are two components:

  • Alexa voice service – how a device manufacturer adds Alexa to its products.
  • Alexa Skills Kit – to create skills that make something happen (and there are currently more than 80,000 skills available).

An Alexa-enabled device only needs to know when to wake up, then stream some “mumbo jumbo” to the cloud, at which point:

  • Automatic speech recognition will translate speech to text
  • Natural language understanding will infer intent (not just text, but understanding…)

Creating a skill requires two parts:

Alexa-hosted skills use Lambda under the hood and creating the skill involves:

  1. Give the skill a name.
  2. Choose the development model.
  3. Choose a hosting method.
  4. Create a skill.
  5. Test in a simulation environment.
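Under the hood, the Lambda function behind an Alexa-hosted skill receives a JSON request and returns a JSON response. A bare-bones Python handler (written without the ASK SDK, and with made-up speech text) looks something like this:

```python
def lambda_handler(event, context):
    """Minimal Alexa skill handler: greet on launch, echo the intent
    name otherwise. A real skill would dispatch per intent and manage
    session state; this sketch just shows the request/response shape."""
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        speech = "Hello! Ask me to do something."
    else:
        speech = "You invoked " + request.get("intent", {}).get("name", "an unknown intent")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```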

Finally, some more links that may be useful:

In summary

Looking back, the technical sessions made my visit to the AWS Summit worthwhile but overall, I was a little disappointed, as this tweet suggests:

Would I recommend the AWS Summit to others? Maybe. Would I watch the keynote from home? No. Would I try to watch some more technical sessions? Absolutely, if they were of the quality I saw on the day. Would I bother to go to ExCeL with 12,000 other delegates herded like cattle? Probably not…

Weeknote 4: music; teenagers; creating a chatbot; tech, more tech and tech TV; 7 day photo challenge; and cycling (Week 46, 2017)

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Another week, another weeknote…

There’s not much to say about work this week – I’ve mostly been writing documentation. I did spend a good chunk of Monday booking hotels and travel, only to find 12 days of consulting drop out of my diary again on Friday (cue hotel cancellations, etc.) but I guess that’s just life!

Family life: grime, rap and teens!

Outside work, it’s been good to be close to home and get involved in family life again.

I had the amusement of my 11 year-old and his friends rapping to their grime music in my car on the way to/from football training this week (we’re at the age where it’s “Dad, can we have my music on please?”) but there’s only so much Big Shaq I can take so I played some Eminem on the way back. It was quite endearing to hear my son say “I didn’t know you knew about Eminem!” after I dropped his mates off. I should make the most of these moments as the adulation is dropping off now he approaches his teens!

Talking of teens, my eldest turned 13 this week, which was a big day in the Wilson household:

 

I’m not sure how this little fella grew into this strong chap (or where the time in between has gone) but we introduced him to the Harry Enfield “Kevin the teenager” videos a few months ago. I thought they were funny when I was younger but couldn’t believe how accurate they are now I’m a parent. Our boys clearly understood the message too and looked a bit sheepish!

Tech

I did play with some tech this week – and I managed to create my very own chatbot without writing any code:

Virtual Mark (MarkBot1) uses the Microsoft QnA Maker and runs in Microsoft Azure. The process is described in James Marshall’s blog post and it’s very straightforward. I’m using Azure Functions and so far this serverless solution has cost me absolutely nothing to run!

It’s also interesting reading some of the queries that the bot has been asked, which have led to me extending its knowledge base a few times now. A question-and-answer chatbot is probably better suited to a set of tightly bounded questions on a topic (the range of things people can ask about me is pretty broad) but it’s a nice demo…

I also upgraded my work PC to the latest Windows 10 and Office builds (1709 and 1710 respectively), which gave me the ability to use a digital pen as a presentation clicker, which is nice, in a geek-novelty kind of way:

Tech TV

I have an Amazon Prime membership, which includes access to Amazon Prime Instant Video – including several TV shows that would otherwise only be available in the US. One I enjoy is Mr Robot – which although completely weird at times is also strangely addictive – and this week’s episode was particularly good (scoring 9.9 on IMDb). Whilst I was waiting for the next episode to come around, I found that I’d missed a whole season of Halt and Catch Fire too (I binge-watched the first three seasons after they were recommended to me by Howard van Rooijen/@HowardvRooijen). Series 4 is the final one and that’s what’s presently keeping me from my sleep… but it’s really good!

I don’t have Netflix, but Silicon Cowboys has been recommended to me by Derek Goodridge (@workerthread). Just like the first series of Halt and Catch Fire, it’s the story of the original IBM PC clone manufacturers – Compaq – but in documentary format, rather than as a drama series.

iPhone images

Regular readers may recall that a few weeks ago I found myself needing to buy a new iPhone after I fell into the sea with my iPhone in my pocket, twisting my ankle in the process…

People have been telling me for ages that “the latest iPhone has a great camera” and, in daylight, I’m really impressed by the clarity and also the bokeh effect. It’s still a mobile phone camera with a tiny sensor though and that means it’s still really poor at night. If a full-frame DSLR struggles at times, an iPhone will be challenged I guess – but I’m still finding that I’m inspired to use the camera more.

7 Days 7 Photos

Last week, I mentioned the 7 days, 7 photos challenge. I’ve completed mine now and they are supposed to be without explanation but, now I have a set of 7 photos, I thought I would explain what and why I used these ones. I get the feeling that some people are just posting 7 pictures, one a day, but these really do relate to what I was doing each day – and I tried to nominate people for the challenge each day based on their relevance to the subject…

Day 1

7 Days 7 Photos Day 1

I spotted this pub as I walked to Farringdon station. I wondered if “the clerk and well” was the origin of the name for “Clerkenwell” and it turns out that it is. Anyway, I liked the view of the traditional London pub (I was on my way home from another one!) and challenged my brother, who’s a publican…

Day 2

7 Days 7 Photos Day 2

I liked the form in this photograph of my son’s CX bike on the roof of my car. It didn’t look so clean when we got back from cyclocross training though! I challenged my friend Andy, whose 40th birthday was the reason for my ride from London to Paris a few years ago…

Day 3

7 Days 7 Photos Day 3

Not technically a single photo – let’s call it a triptych. I used the Diptic app (as recommended by Ben Seymour/@bseymour) to create this collage. I felt it was a little too personal to nominate my friend Kieran, whose medals are in the lower-left image, so I nominated my friend James, who was leading the Scouts in our local Remembrance Day parade.

Day 4

7 Days 7 Photos Day 4

I found some failed backups on my Synology NAS this week. For some reason, Hyper Backup complained it didn’t have enough storage (I’m pretty sure it wasn’t Azure that ran out of space!) so I ran several backups, each one adding another folder until I had all of my new photos in the backup set. I felt the need to challenge a friend who works in IT – so I challenged my friend Stuart.

Day 5

7 Days 7 Photos Day 5

My son was cake-baking, for Children in Need, I think – or maybe it was my other son, baking his birthday cake. I can’t really remember. I challenged a friend who runs a local cafe and regularly bakes muffins…

Day 6

7 Days 7 Photos Day 6

Self-explanatory. My son’s own creation for his birthday. I challenged my wife for this one.

Day 7

7 Days 7 Photos Day 7

The last image is following an evening helping out at Scouts. Images of attempts to purify water through distillation were not that great, so I took a picture of the Scout Badge, and nominated my friend Phil, who’s another one of the local Scout leaders.

(All seven of these pictures were taken on an iPhone 8 Plus using the native camera app, then edited in Snapseed and uploaded to Flickr)

Other stuff

I like this:

And I remember shelves of tapes like these (though mine were all very neatly written, or computer-generated, even back in the 1980s):

On the topic of music, look up Master Boot Record on Spotify:

And this “Soundtrack for Coding” is pretty good for writing documentation too…

I added second-factor authentication to my WordPress blog this week. I couldn’t find anything that uses the Microsoft Authenticator, but this 2FA WordPress plugin from miniOrange uses Google Authenticator and was very easy to set up.
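For the curious, the codes that Google Authenticator displays are standard RFC 6238 time-based one-time passwords, which can be computed in a few lines of Python (an illustrative sketch of the algorithm, not the miniOrange plugin’s actual implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter,
    dynamically truncated to a short decimal code. Both the server
    and the authenticator app derive the same code from the shared
    secret and the current time."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```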

Some UK libraries have started loaning BBC micro:bits but unfortunately not yet in my manor:

Being at home all week meant I went to see my GP about my twisted ankle (from the falling-into-the-sea incident). One referral later and I was able to see a physio… who’s already working wonders on helping to repair my damaged ligaments. And he says I can ride my bike too… so I’ll be back on Zwift even if cyclocross racing is out for the rest of the season.

Cycling

On the subject of Zwift, they announced a price rise this week. I understand that these things happen but it’s gone up 50% in the US (and slightly more than that here in the UK). All that really does is drive me to use Zwift in the winter and to cancel my membership in the summer. A more reasonable monthly fee might make me more inclined to sign up for 12 months at a time and create a recurring revenue for Zwift. Very strange business model, IMHO.

I particularly liked the last line of this article:

“Five minutes after the race
That was sooo fun! When can I do it again?!”

I may not have been riding cyclocross this weekend, but my son was, and Sunday was the popular Central Cyclocross League race at RAF Halton. With mud, sand, gravel and steep banks, long woodland sections and more, it looked epic. Maybe I’ll get to ride next year!

I did get to play with one of the RAF’s cranes (attached to a flatbed truck) though – amazing how much control there is – and had a go on the road safety rig too.

And of course, what else to eat at a cyclocross event but Belgian fries, mayo and waffles!

Finally, my friends at Kids Racing (@kidsracing) have some new kit in. Check out the video they filmed at the MK Bowl a couple of weeks back – and if you have kids in need of new cycling kit, maybe head over to HUP CC.

Wrap-up

That’s it for this week. Next week I have a bit more variation in my work (including another Microsoft event – Azure Ready in the UK) and I’m hoping to actually get some blog posts written… see you on the other side!

Seven technology trends to watch 2017-2020

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Just over a week ago, risual held its bi-annual summit at the risual HQ in Stafford – the whole company back in the office for a day of learning with a new format: a mini-conference called risual:NXT.

I was given the task of running the technical track – with 6 speakers presenting on a variety of topics covering all of our technical practices: Cloud Infrastructure; Dynamics; Data Platform; Unified Intelligent Communications and Messaging; Business Productivity; and DevOps – but I was also privileged to be asked to present a keynote session on technology trends. Unfortunately, my 35-40 minutes of content had to be squeezed into 22 minutes… so this blog post summarises some of the points I wanted to get across but really didn’t have the time.

1. The cloud was the future once

For all but a very small number of organisations, not using the cloud means falling behind. Customers may argue that they can’t use cloud service because of regulatory or other reasons but that’s rarely the case – even the UK Police have recently been given the green light (the blue light?) to store information in Microsoft’s UK data centres.

Don’t get me wrong – hybrid cloud is more than tactical. It will remain part of the landscape for a while to come… that’s why Microsoft now has Azure Stack to provide a means for customers to run a true private cloud that looks and works like Azure in their own datacentres.

Thankfully, there are fewer and fewer CIOs who don’t see the cloud forming part of their landscape – even if it’s just commodity services like email in Office 365. But we need to think beyond lifting and shifting virtual machines to IaaS and running email in Office 365.

Organisations need to transform their cloud operations because that’s where the benefits are – embrace the productivity tools in Office 365 (no longer just cloud versions of Exchange/Lync/SharePoint but a full collaboration stack) and look to build new solutions around advanced workloads in Azure. Microsoft is way ahead in the PaaS space – machine learning (ML), advanced analytics, the Internet of Things (IoT) – there are so many scenarios for exploiting cloud services that simply wouldn’t be possible on-premises without massive investment.

And for those who still think they can compete with the scale that Microsoft (Amazon and Google) operate at, this video might provide some food for thought…

(and for a similar video from a security perspective…)

2. Data: the fuel of the future

I hate referring to data as “the new oil”. Oil is a finite resource. Data is anything but finite! It is a fuel though…

Data is what provides an economic advantage – there are businesses without data and those with. Data is the business currency of the future. Think about it: Facebook and Google are entirely based on data that’s freely given up by users (remember, if you’re not paying for a service – you are the service). Amazon wouldn’t be where it is without data.

So, thinking about what we do with that data: the 1st wave of the Internet was about connecting computers, the 2nd was about connecting people, and the 3rd is about connecting devices.

Despite what you might read, IoT is not about connected kettles/fridges. It’s not even really about home automation with smart lightbulbs, thermostats and door locks. It’s about gathering information from billions of sensors out there. Then, we take that data and use it to make intelligent decisions and apply them in the real world. Artificial intelligence and machine learning feed on data – they are yin and yang to each other. We use data to train algorithms, then we use the algorithms to process more data.

The Microsoft Data Platform is about analytics and data driving a new wave of insights and opening up possibilities for new ways of working.

James Watt’s 18th Century steam engine led to an industrial revolution. The intelligent cloud is today’s version – moving us to the intelligence revolution.

3 Blockchain

Bitcoin is just one implementation of something known as blockchain – in this case, as a digital currency.

But Blockchain is not just for monetary transactions – it’s more than that. It can be used for anything transactional. Blockchain is about a distributed ledger. Effectively, it allows parties to trust one another without knowing each other. The ledger is a record of every transaction, signed and tamper-proof.

The magic of blockchain is that, as the chain gets longer, tampering with any earlier entry becomes ever harder – effectively, the more the chain is used, the more secure its history becomes.
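The tamper-proof ledger idea can be illustrated with a toy hash-chain in Python – each block’s hash covers the previous block’s hash, so altering any historical transaction invalidates everything after it (real blockchains add consensus and proof-of-work on top of this; the function names here are mine, for illustration):

```python
import hashlib
import json

def add_block(chain, transaction):
    """Append a block whose hash covers both the transaction and the
    previous block's hash, linking the ledger together."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"transaction": transaction, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        body = {"transaction": block["transaction"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True
```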

(Read more in Jamie Skella’s “A blockchain explanation your parents could understand”.)

Blockchain is seen as strategic by Microsoft and by the UK government. It’s early days, but expect it to appear wherever integrity and data resilience matter – databases, or anything transactional, can be signed with blockchain.

A group of livestock farmers in Arkansas is using blockchain technology so customers can tell where their dinner comes from. They are applying blockchain technology to trace products from ‘farm to fork’ aiming to provide consumers with information about the origin and quality of the meat they buy.

Blockchain is finding new applications in the enterprise and Microsoft has announced the CoCo Framework to improve performance, confidentiality and governance characteristics of enterprise blockchain networks (read more in Simon Bisson’s article for InfoWorld). There’s also Blockchain as a service (in Azure) – and you can find more about Microsoft’s plans by reading up on “Project Bletchley”.

(BTW, Bletchley is a town in Buckinghamshire that’s now absorbed into Milton Keynes. Bletchley Park was the primary location of the UK Government’s wartime code-cracking efforts that are said to have shortened WW2 by around 2 years. Not a bad name for a cryptographic technology, hey?)

4 Into the third dimension

So we’ve had the ability to “print” in three dimensions for a while, but now 3D is going further: we’re taking the physical world into the virtual world and augmenting it with information.

Microsoft doesn’t like the term augmented reality (because it’s being used for silly faces on photos) and they have coined the term mixed reality to describe taking untethered computing devices and creating a seamless overlap between physical and virtual worlds.

To make use of this we need to be able to scan and render 3D images, then move them into a virtual world. 3D is built into the next Windows 10 release (the Fall Creators Update, due on 17 October 2017). This will bring Paint 3D, a 3D gallery and View 3D for our phones – so we can scan any object and import it into a virtual world. With the adoption rates of new Windows 10 releases, that puts 3D on millions of PCs.

This Christmas will see lots of consumer headsets in the market, and mixed reality will really take off after that. Microsoft is way ahead in the plumbing – all whilst we didn’t notice. They held their HoloLens product back so it could be big in business (so that it wasn’t a solution without a problem). Now it can be applied to field-worker scenarios, visualising things before they are built.

To give an example: recently, I had a builder quote for a loft extension at home. He described how the stairs would work and sketched a room layout – but what if I could have visualised it in a headset? Then imagine picking the paint, sofas, furniture, wallpaper, etc.

The video below shows how Ford and Microsoft have worked together to use mixed reality to shorten and improve product development:

5 The new dawn of artificial intelligence

All of the legends of AI are set by sci-fi (Metropolis, 2001: A Space Odyssey, Terminator). But AI is not about killing us all! Humans vs. machines? Deep Blue beating people at chess, Watson winning at Jeopardy!, then Google taking on Go. Heading into the economy and displacing jobs. Automation of business processes/economic activity. Mass unemployment?

Let’s take a more optimistic view! It’s not about sentient/thinking machines or giving human rights to machines. That stuff is interesting but we don’t know where consciousness comes from!

AI is a toolbox of high-value tools and techniques. We can apply these to problems and appreciate the fundamental shift from programming machines to machines that learn.

AI is not about programming logical steps – we can’t do that when we’re recognising images, speech, etc. Instead, our inspiration is biology, neural networks, etc. – using maths to train complex layers of neural networks led to deep learning.

Image recognition was “magic” a few years ago but now it’s part of everyday life. Nvidia’s shares are growing massively due to GPU requirements for deep learning and autonomous vehicles. And Microsoft is democratising AI (in its own applications – with an intelligent cloud, intelligent agents and bots).

NVIDIA Corporation stock price growth fuelled by demand for GPUs

So, about those bots…

A bot is a web app with a conversational user interface. We use them because natural language processing (NLP) and AI are here today – and because messaging apps rule the world. With bots, we can use human language as a new user interface; bots are the new apps – our digital assistants.

We can employ bots in several scenarios today – including customer service and productivity – and this video is just one example, with Microsoft Cortana built into a consumer product:

The device is similar to Amazon’s popular Echo smart speaker and a skills kit is used to teach Cortana about an app: ask “skillname to do something”. The beauty of Cortana is that it’s cross-platform, so the skill can show up wherever Cortana does. More recently, Amazon and Microsoft have announced Cortana-Alexa integration (meanwhile, Siri continues to frustrate…).

AI is about augmentation, not replacement. It’s true that bots may replace humans for many jobs – but new jobs will emerge. And it’s already here. It’s mainstream. We use recommendations for playlists, music, etc. We’re recognising people, emotions, etc. in images. We already use AI every day…

6 From silicon to cells

Every cell has a “programme” – DNA. And researchers have found that they can write code in DNA and control proteins/chemical processes. They can compile code to DNA and execute, creating molecular circuits. Literally programming biology.

This is absolutely amazing. Back when I was an MVP, I got the chance to see Microsoft Research talk about this in Cambridge. It blew my mind. That was in 2010. Now it’s getting closer to reality and Microsoft and the University of Washington have successfully used DNA for storage:

The benefits of DNA are that it’s very dense and it lasts for thousands of years so can always be read. And we’re just storing 0s and 1s – that’s much simpler than what DNA stores in nature.
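The “just 0s and 1s” point can be illustrated with a naive encoding that packs two bits into each of DNA’s four bases (purely illustrative – the real Microsoft/University of Washington scheme also has to handle synthesis constraints and error correction):

```python
BASES = "ACGT"  # two bits per nucleotide: 00->A, 01->C, 10->G, 11->T

def encode(data: bytes) -> str:
    """Map each byte to four DNA bases, two bits at a time."""
    return "".join(
        BASES[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)
    )

def decode(strand: str) -> bytes:
    """Reverse the mapping: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)
```

At two bits per base, even this naive scheme stores a byte in four molecules’ worth of sequence – a hint at why DNA is so dense a medium.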

7 Quantum computing

With massive data storage… the next step is faster computing – that’s where Quantum computing comes in.

I’m a geek and this one is tough to understand… so here’s another video:

https://youtu.be/doNNClTTYwE

Quantum computing is starting to gain momentum. Dominated by maths (quantum mechanics), it requires thinking in equations, not translating into physical things in your head. It has concepts like superposition (multiple states at the same time) and entanglement. Instead of gates being turned on/off it’s about controlling particles with nanotechnology.

A classical 2-bit on/off operation takes two clock cycles; one quantum bit (a qubit) has multiple states at the same time. It can be used to solve difficult problems (the RSA-2048 challenge problem would take a billion years on a supercomputer, but just 100 seconds on a 250-bit quantum computer). This can be applied to encryption and security, health and pharma, energy, biotech, the environment, materials and engineering, AI and ML.
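Superposition can be illustrated with a few lines of state-vector simulation in Python: a qubit is just a pair of complex amplitudes, and a Hadamard gate puts |0⟩ into an equal superposition of both states (a toy sketch, not how real quantum hardware – or Microsoft’s simulator – works internally):

```python
from math import sqrt

def hadamard(state):
    """Apply a Hadamard gate to a one-qubit state [a0, a1]."""
    a0, a1 = state
    return [(a0 + a1) / sqrt(2), (a0 - a1) / sqrt(2)]

def probabilities(state):
    """Measurement probabilities are the squared amplitude magnitudes."""
    return [abs(a) ** 2 for a in state]

ket0 = [1.0, 0.0]            # the classical-looking |0> state
superposed = hadamard(ket0)  # equal superposition of |0> and |1>
```

Measuring `superposed` gives 0 or 1 with equal probability – and applying the Hadamard gate a second time interferes the amplitudes back into a definite |0⟩.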

There’s a race for quantum computing hardware taking place and China sees this as a massively strategic direction. Meanwhile, the UK is already an academic centre of excellence – now looking to bring quantum computing to market. We’ll have usable devices in 2-3 years (where “usable” means that they won’t be cracking encryption, but will have initial applications in chemistry and biology).

Microsoft Research is leading a consortium called Station Q and, later this year, Microsoft will release a new quantum computing programming language, along with a quantum computing simulator. With these, developers will be able to both develop and debug quantum programs implementing quantum algorithms.

Predicting the future?

Amazon, Google and Microsoft each invest over $12bn p.a. on R&D. As demonstrated in the video above, their datacentres are not something that many organisations can afford to build but they will drive down the cost of computing. That drives down the cost for the rest of us to rent cloud services, which means more data, more AI – and the cycle continues.

I’ve shared 7 “technology bets” – and there are others that I haven’t covered, like the use of graphene – with a list that’s very much influenced by my work with Microsoft technologies and services. We can’t always predict the future, but all of these are real… the only bet is how big they will be. Some are mainstream, some are up-and-coming – and some will literally change the world.

Credit: Thanks to Rob Fraser at Microsoft for the initial inspiration – and to Alun Rogers (@AlunRogers) for helping place some of these themes into context.

Playing around with Azure Cognitive Services

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ve been spending quite a bit of time recently getting more familiar with some of the advanced workloads in Microsoft Azure. After all, infrastructure as a service is commodity, so I’m looking at services that can be used to drive real value for my customers (more on that in another post…).

Yesterday was our team meeting – with all but one of the risual Architects getting together, including some coaching from Microsoft around data and intelligence services. I was particularly taken with some of the demonstrations of Cognitive Services, so I set about getting some sample code to work for me…

Building the Intelligent Kiosk sample application

First up, I needed to install Visual Studio 2015 (Community Edition is fine) – it took a while, and needed admin credentials (so a visit to our support team) but eventually it installed on my PC.

Then, I downloaded the sample code for the “Intelligent Kiosk” from GitHub. Pressing F5 to build the solution told me that:

A project with an Output Type of Class Library cannot be started directly.

In order to debug this project, add an executable project to this solution which references the library project. Set the executable project as the startup project.

The Intelligent Kiosk sample code is a Universal Windows Platform (UWP) app, so I ignored that message, continued with the build, and tracked down the resulting IntelligentKioskSample.exe file. Trying to run that told me:

This application can only run in the context of an app container.

And StackOverflow told me that I need to sideload the app onto my PC, by creating a package to use locally.

Installing the Intelligent Kiosk sample application

The application package comes with a PowerShell script to install it (Add-AppDevPackage.ps1), but I found I needed to follow these steps:

  1. Enable developer mode in Windows 10 Settings
  2. Restart the PC
  3. Open a PowerShell session as an Administrator and run:

Show-WindowsDeveloperLicenseRegistration

Get-WindowsDeveloperLicense

Set-ExecutionPolicy unrestricted

.\Add-AppDevPackage.ps1

Now the app is ready and available via the Start Menu…

Running the Intelligent Kiosk sample application

  1. Get some API keys (for free) from the Microsoft Cognitive Services site.
  2. Run the Intelligent Kiosk app.
  3. Go to settings and paste in your API keys.
  4. Have some fun with the demos!

Demos in the Azure Cognitive Services Sample app
Intelligent Kiosk Demos

Azure Cognitive Services Emotion Detection
Emotion detection (web image)

Azure Cognitive Services Emotion Detection
Emotion detection (live image)

Azure Cognitive Services Face Detection
Face Detection

Azure Cognitive Services Mall Kiosk
Detect age and gender, recommend a product!

Further Reading