Weeknote 22/2020: holidaying on the Costa del Great Ouse (plus password resets, cycling performance, video-conferencing equipment and status lights)

In the last few hours of 2019, my family planned our holiday. We thought we had it all sorted – fly to Barcelona, spend the weekend sight-seeing (including taking my football-mad son to Camp Nou) and then head up the coast for a few more days in the Costa Brava. Flights were booked, accommodation was sorted, trips were starting to get booked up.

We hadn’t counted on a global pandemic.

To be clear, I’m thankful that my family, my friends and I, and those around us, are (so far) safe and well. By April, I didn’t much like the prospect of getting into a metal tube with 160+ strangers and flying for 3 hours in each direction. We’re also incredibly lucky to be able to access open countryside within a couple of hundred metres of our house, so daily exercise is still possible and enjoyable, with very few people around, most of the time.

I still took the week off work though. After cancelling my Easter break, it’s been a while since I took annual leave and even my Furlough period was not exactly relaxing, so I could do with a rest.

The weather has been glorious in the UK this week too, making me extra-glad we re-landscaped the garden last year and I’ve spent more than a few hours just chilling on our deck.

Unfortunately, we also got a taste of what it must be like to live in a tourist hotspot, as hundreds of visitors descended on our local river each day this weekend. It seems the Great Ouse at Olney appears in a list of top places to swim in Britain, recently featured in The Times. It may sound NIMBYish, but please can they stay away until this crisis is over?

As for the holiday, hopefully, we’ll get the money refunded for the cancelled flights (if the airlines don’t fold first – I’m sure that if they refunded everyone they would be insolvent, which is my theory for why they are not increasing staff levels to process refunds more quickly); FC Barcelona contacted me weeks ago to extend my ticket and offer a refund if we can’t use it; and AirBnB had the money back in our account within days of us being forced to pull out due to cancelled flights.

(I did spend a few weeks effectively “playing chicken” with easyJet to see if they would cancel first, or if it would be us. An airline-cancelled flight can be refunded, but a consumer-cancelled flight would be lost, unless we managed to claim on travel insurance).

Even though I’ve had a week off, I’ve still been playing with tech. Some of my “projects” should soon have their own blog post (an Intel NUC for a new Zwift PC; migrating my wife’s personal email out of my Office 365 subscription to save me a licence; and taking a look at Veeam Backup for Office 365), whilst others get a brief mention below…

Please stop resetting user passwords every x days!

Regularly resetting passwords (unless a compromise is suspected) is an old way of thinking. Unfortunately, many organisations still make users change their password every few weeks. Mine came up for renewal this week and I struggled to come up with an acceptable, yet memorable passphrase. So, guess what? I wrote it down!

I use a password manager for most of my credentials but that doesn’t help with my Windows logon (before I’ve got to my browser). Biometric security like Windows Hello helps too (meaning I rarely use the password, but am even less likely to remember it when needed).

Here’s the National Cyber Security Centre (@NCSC)’s password guidance infographic (used with permission) and the associated password guidance:

This list of 100,000 commonly used passwords that will get blocked by some systems may also be useful – from Troy Hunt (@TroyHunt) but provided to me by my colleague Gavin Ashton (@gvnshtn).
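
If you want to check a candidate passphrase against that list, a quick and dirty approach from the Windows command line is an exact, case-insensitive match (this is just a sketch – the filename below is whatever you saved the downloaded list as):

findstr /i /x "summer2020" PwnedPasswordsTop100k.txt

If findstr prints the line, the password is on the blocklist – pick another.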

Performance analysis for cyclists, by cyclists

I’ve been watching with interest as my occasional cycling buddy (and now Azure MVP) James Randall (@AzureTrenches) has been teasing development on his new cycling performance platform side project. This week he opened it up for early access and I’ve started to road test it… it looks really promising and I’m super impressed that James created this. Check it out at For Cyclists By Cyclists.

Podcasting/video conferencing upgrades in my home office

With video conferencing switching from something-I-use-for-internal-calls to something-I-use-to-deliver-consulting-engagements, I decided to upgrade the microphone and lighting in my home office. After seeking some advice from those who know about such things (thanks Matt Ballantine/@ballantine70 and the WB-40 Podcast WhatsApp group), I purchased a Marantz MPM-1000U microphone, boom arm, shock mount, and a cheap rechargeable LED photography light with tripod.

It’s early days yet but initial testing suggests that the microphone is excellent (although the supplied USB A-B cable is too short for practical use). I had also considered the Blue Yeti/Raspberry but it seems to have been discontinued.

As for the photo lighting, it should be just enough to illuminate my face as the north-facing window to my left often leaves me silhouetted on calls.

Smart lighting to match my Microsoft Teams presence

I haven’t watched the Microsoft Build conference presentations yet, but I heard that Scott Hanselman (@shanselman) featured Isaac Levin (@isaacrlevin)’s PresenceLight app to change the lighting according to his Windows Theme. The app can also be used to change Hue or LIFX lighting along with Teams presence status, so that’s in place now outside my home office.

It’s not the first time I’ve tried something like this:

One particularly useful feature is that I can be logged in to one tenant with the PresenceLight app and another in Microsoft Teams on the same PC – that means that I can control my status with my personal persona so I may be available to family but not to colleagues (or vice versa).

One more thing…

It may not be tech-related, but I also learned the differences between wheat and barley this week. After posting this image on Instagram, Twitter was quick to correct me:

As we’re at the end of May, that’s almost certainly not wheat…

Weeknote 21/2020: work, study (repeat)

Another week in the socially-distanced economy. Not so much to write about this week as I spent most of it working or studying… and avoiding idiots who ignore the one-way system in the local supermarket…

Some more observations on remote working

It’s not often my tweets get as much engagement as this one did. So I’m putting it on the blog too (along with my wife’s response):

My “Build Box”

Unfortunately, I didn’t get to watch the Microsoft Build virtual event this week. I’m sure I’ll catch up later but it was great to receive this gift from Microsoft – it seems I was one of the first few thousand to register for the event:

Annual review

This week was my fifth anniversary of joining risual. Over that time I’ve watched the company grow and adapt, whilst trying to retain the culture that made it so strong in the early days. I don’t know if it’s possible to retain a particular culture as a business grows beyond a certain size but I admire the attempts that are made and one of those core tenets is an annual review with at least one if not both of the founding Directors.

For some, that’s a nerve-wracking experience but I generally enjoy my chat with Rich (Proud) and Al (Rogers), looking back on some of the key achievements of the last year and plans for the future. Three years ago, we discussed “career peak”. Two years ago it was my request to move to part-time working. Last year, it was my promotion to Principal Architect. This year… well, that should probably remain confidential.

One thing I found particularly useful in my preparation was charting the highs and lows of my year. It was a good way to take stock – which left me feeling a lot better about what I’d achieved over the last 12 months. For obvious reasons, the image below has had the details removed, but it should give some idea of what I mean:

Another exam ticked off the expanding list

I wrapped up the work week with another exam pass (after last week’s disappointment) – AZ-301 is finally ticked off the list… taking me halfway to being formally recognised as an Azure Solutions Architect Expert.

I’ll be re-taking AZ-300 soon. And then it looks like two more “Microsoft fundamentals” exams have been released (currently in Beta):

  • Azure AI Fundamentals (AI-900).
  • Azure Data Fundamentals (DP-900).

Both of these fit nicely alongside some of the topics I’ve been covering in my current client engagement so I should be in a position to attempt them soon.

Weeknote 20/2020: back to work

Looking back on another week of tech exploits during the COVID-19 coronavirus chaos…

The end of my furlough

The week started off with exam study, working towards Microsoft exam AZ-300 (as mentioned last week). That was somewhat derailed when I was asked to return to work from Wednesday, ending my Furlough Leave at very short notice. With 2.5 days lost from my study plan, it shouldn’t have been a surprise that I ended my working week with a late-night exam failure (though it was still a disappointment).

Returning to work is positive though – whilst being paid to stay at home may seem ideal to some, it didn’t work so well for me. I wanted to make sure I made good use of my time, catching up on personal development activities that I’d normally struggle to fit in. But I was also acutely aware that there were things I could be doing to support colleagues but which I wasn’t allowed to. And, ultimately, I’m really glad to be employed during this period of economic uncertainty.

Smart cities

It looks like one of my main activities for the next few weeks will be working on a Data Strategy for a combined authority, so I spent Tuesday afternoon trying to think about some of the challenges that an organisation with responsibility for transportation and economic growth across a region might face. That led me to some great resources on smart cities including these:

  • There are some inspirational initiatives featured in this video from The Economist:
  • Finally (and if you only have a few minutes to spare), this short video from Vinci Energies provides an overview of what smart cities are really about:

Remote workshop delivery

I also had my first experience of taking part in a series of workshops delivered using Microsoft Teams. Teams is a tool that I use extensively, but normally for internal meetings and ad-hoc calls with clients, not for delivering consulting engagements.

Whilst they would undoubtedly have been easier performed face-to-face, that’s just not possible in the current climate, so the adaptation was necessary.

The rules are the same, whatever the format – preparation is key. Understand what you’re looking to get out of the session and be ready with content to drive the conversation if it’s not quite headed where you need it to go.

Editing/deleting posts in Microsoft Teams private channels

On the subject of Microsoft Teams, I was confused earlier this week when I couldn’t edit one of my own posts in a private channel. Thanks to some advice from Steve Goodman (@SteveGoodman), I found that the ability to delete and/or edit messages is set separately on a private channel (normal channels inherit from the team).

The Microsoft Office app

Thanks to Alun Rogers (@AlunRogers), I discovered the Microsoft Office app this week. It’s a great companion to Office 365 (or Microsoft 365), searching across all apps – similar to Delve, but in an app rather than in-browser. The Microsoft Office app is available for download from the Microsoft Store.

Azure Network Watcher

And, whilst on the subject of nuggets of usefulness in the Microsoft stable…
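
If you haven’t looked at it, Network Watcher is a handy set of diagnostic tools for Azure virtual networks. As a minimal sketch (the resource group, region, VM and IP addresses below are just examples), the Azure CLI can enable Network Watcher in a region and then check whether the network security rules would allow a particular flow to a VM:

az network watcher configure --resource-group NetworkWatcherRG --locations uksouth --enabled true
az network watcher test-ip-flow --resource-group myRG --vm myVM --direction Inbound --protocol TCP --local 10.0.0.4:3389 --remote 203.0.113.5:60000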

A little piece of history

I found an old map book on my shelf this week: a Halford’s Pocket Touring Atlas of Great Britain and Ireland, priced at sixpence. I love poring over maps – they provide a fascinating insight into the development of the landscape and the built environment.

That’s all for now

Those are just a few highlights (and a lowlight) from the week – there’s much more on my Twitter feed.

Weeknote 19/2020: Azure exam study, remote working, and novelty video conference backgrounds

Another week in the furloughed fun house…

Studying

I still have a couple of exams I’d like to complete this month. I’ve been procrastinating about whether to take the Microsoft Azure Architect exams in their current form (AZ-300/301) or to wait for the replacements (AZ-303/304). As those replacements have been postponed from late April until the end of June (at least), I’ve booked AZ-300/301 and am cramming in lots of learning, based on free training from the Microsoft Learn website.

I’m sure it’s deeper (technically) than I need for an Architect exam, but it’s good knowledge… I just hope I can get through it all before the first exam appointment next Thursday evening…

Thoughts on remote working during the current crisis

I’ve seen this doing the rounds a couple of times on Twitter and I don’t know the original source, but it’s spot on. Words to live by in these times:

  1. You’re not “working from home”. You’re “At your home, during a crisis, trying to work” [whilst parenting, schooling, helping vulnerable people, etc.].
  2. Your personal physical, mental and emotional health is far more important than anything else right now.
  3. You should not try to compensate for lost productivity by working longer hours.
  4. You will be kind to yourself and not judge how you are coping based on how you see others coping.
  5. You will be kind to others and not judge how they are coping based on how you are coping.
  6. Your success will not [should not] be measured in the same way it was when things were normal.

This animation may also help…

Also, forget the 9-5:

As for returning to the office?

Video conference backgrounds

Novelty backgrounds for video conferences are a big thing right now. Here are a couple of collections that came to my attention this week:

Upgrades to the Zwift bike

My old road bike has been “retired” for a year now, living out its life connected to an indoor trainer and used for Zwifting. It’s needed some upgrades recently though…

I also realised why I struggled to do 90km on the road today… that was my fifth ride this week, on top of another 100km which was mostly off-road!

“Disaster Recovery” and related thoughts…

Backup, Archive, High Availability, Disaster Recovery, Business Continuity. All related. Yet all different.

One of my colleagues was recently faced with needing to run “a DR [disaster recovery] workshop” for a client. My initial impression was:

  • What disasters are they planning for?
  • I’ll bet they are thinking about Coronavirus and working remotely. That’s not really DR.
  • Or are they really thinking about a backup strategy?

So I decided to turn some of my rambling thoughts into a blog post. Each of these topics could be a post in its own right – I’m just scraping the surface here…

Let’s start with backup (and recovery)

Backups (of data) are a fairly simple concept. Anything that would create a problem if it was lost should be backed up. For example, my digital photos are considered to not exist at all unless they are synchronised (or backed up) to at least two other places (some network-attached storage, and the cloud).

In a business context, we run backups in order to be able to recover (restore) our content (configuration or data) within a given window. We may have weekly full backups and daily incremental or differential backups (perhaps with more regular snapshots), then retain parent, grandparent and great-grandparent copies of the full backups (four weeks) and keep each of these as (lunar) monthly backups for a year. That’s just an example – each organisation will have its own backup/retention policies and those backups may be stored on or off-site, on tape or disk.

In summary, backups are about making sure we have an up to date copy of our important configuration information and data, so we can recover it if the primary copy is lost or damaged.

And for bonus content, some services we might consider in a modern infrastructure context include Azure Backup or AWS Backup.
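
As a minimal sketch of what that looks like in practice (all of the resource names below are placeholders), protecting an Azure VM with Azure Backup from the Azure CLI involves creating a Recovery Services vault and then enabling protection against a backup policy:

az backup vault create --resource-group myRG --name myRecoveryVault --location uksouth
az backup protection enable-for-vm --resource-group myRG --vault-name myRecoveryVault --vm myVM --policy-name DefaultPolicy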

Backups must be verified and periodically tested in order to have any use.

Archiving information

When I wrote about backups above, I mentioned keeping multiple copies covering various points in time. Whilst some may consider this adequate for archival, archival is really the storage of data for long-term preservation with read-only access – for example, documents that must be retained for an extended period (7, 10, 25, even 99 years). Once, that would have been paper documents in boxes. Now it might be digital files (or database contents) on tape or disk (potentially cloud storage).

Archival might still use backup software and associated retention policies, but we’ll think carefully about the medium we store the archive on. For very long-term physical storage we might need to consider the media formats (paper is bulky, so it’s transferred to microfiche; old magnetic media degrades, so it’s moved to optical storage; then the hardware becomes obsolete, so it’s migrated to yet another format). If storing on disk (on-premises or in the cloud), we can use slower (cheaper) disks and accept that restoration from the archive may take additional time.

In summary, archival is about long-term data storage, generally measured in many years and archives might be stored off-line, or near-line.

Technologies we might use for archival are similar to backups, but we could consider lower-cost storage – e.g. Azure Storage‘s Cool or Archive tiers or Amazon S3 Glacier.
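
For example (a minimal sketch – the account, container and blob names are placeholders), moving a blob to the Archive tier with the Azure CLI is a one-liner, with the trade-off that the blob must be rehydrated to a warmer tier before it can be read again:

az storage blob set-tier --account-name mystorageaccount --container-name archive --name 2019-backups.tar --tier Archive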

Keeping systems highly available

High Availability (HA) is about making sure that our systems are available for as much time as possible – or certainly within a given service level agreement (SLA).

Traditionally, we used technologies like redundant arrays of inexpensive disks (RAID), error-checking memory, or redundant power supplies. We might also have created server clusters or farms. All of these methods are intended to remove single points of failure (SPOFs).

In the cloud, we leave a lot of the infrastructure considerations to the cloud service provider and we design for failure in other ways.

  • We assume that virtual machines will fail and create availability sets (see the sketch after this list).
  • We plan to scale out across multiple hosts for applications that can take advantage of that architecture.
  • We store data in multiple regions.
  • We may even consider multiple clouds.
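
As a minimal sketch of that first point (the names below are placeholders), an availability set is created first and each VM is then placed into it, so the platform spreads the VMs across fault and update domains:

az vm availability-set create --resource-group myRG --name myAvailabilitySet --platform-fault-domain-count 2 --platform-update-domain-count 5
az vm create --resource-group myRG --name myVM1 --image UbuntuLTS --availability-set myAvailabilitySet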

Again, the level of redundancy built into the app and its supporting infrastructure must be designed according to requirements – as defined by the SLA. There may be no point in providing an expensive four nines uptime for an application that’s used once a month by one person, who works normal office hours. But, then again, what if that application is business critical – like payroll? Again, refer to the SLA – and maybe think about business continuity too… more on that in a moment.
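
To put “four nines” into context, the arithmetic is sobering: 99.99% availability allows (1 − 0.9999) × 365.25 days × 24 hours × 60 minutes ≈ 52.6 minutes of downtime per year – roughly 4.4 minutes a month – which is very hard to achieve with manual recovery processes.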

Some of my clients have tried to implement Windows Server clusters in Azure. I’ve yet to be convinced and still consider that it’s old-world thinking applied in a contemporary scenario. There are better ways to design a highly available file service in 2020.

In summary, high availability is about ensuring that an application or service is available within the requirements of the associated service level agreement.

Technologies might include some of the hardware considerations I listed earlier but, these days, we’re probably thinking more about the cloud platform approaches listed above.

Remember to also consider other applications/systems upon which an application relies.

Also, quoting from some of Microsoft’s training materials:

“To achieve four 9’s (99.99%), you probably can’t rely on manual intervention to recover from failures. The application must be self-diagnosing and self-healing.

Beyond four 9’s, it is challenging to detect outages quickly enough to meet the SLA.

Think about the time window that your SLA is measured against. The smaller the window, the tighter the tolerances. It probably doesn’t make sense to define your SLA in terms of hourly or daily uptime.”

Microsoft Learn: Design for recoverability and availability in Azure: High Availability

Disaster recovery

As the name suggests, Disaster Recovery (DR) is about recovering from a disaster, whatever that might be.

It could be physical damage to a piece of hardware (a switch, a server) that requires replacement or recovery from backup. It could be a whole server room or datacentre that’s been damaged or destroyed. It could be data loss as a result of malicious or accidental actions by an employee.

This is where DR plans come into play – firstly analysing the risks that might lead to disaster (including possible data loss and major downtime scenarios) and then looking at recovery objectives – the application’s recovery point objective (RPO) and recovery time objective (RTO).

Quoting Microsoft’s training materials again:

An illustration showing the duration, in hours, of the recovery point objective and recovery time objective from the time of the disaster.

“Recovery Point Objective (RPO): The maximum duration of acceptable data loss. RPO is measured in units of time, not volume: “30 minutes of data”, “four hours of data”, and so on. RPO is about limiting and recovering from data loss, not data theft.

Recovery Time Objective (RTO): The maximum duration of acceptable downtime, where “downtime” needs to be defined by your specification. For example, if the acceptable downtime duration is eight hours in the event of a disaster, then your RTO is eight hours.”

Microsoft Learn: Design for recoverability and availability in Azure: Disaster Recovery

For example, I may have a database that needs to be able to withstand no more than 15 minutes’ data loss and an associated SLA that dictates no more than 4 hours’ downtime in a given period. For that, my RPO is 15 minutes and the RTO is 4 hours. I need to make sure that I take snapshots (e.g. of transaction logs for replay) at least every 15 minutes and that my restoration process to get from offline to fully recovered takes no more than 4 hours (which will, of course, determine the technologies used).
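
As a minimal sketch of the recovery side (the server, database and timestamp below are placeholders), a point-in-time restore of an Azure SQL Database – which relies on the platform’s automated backups – looks something like this, and timing a test restore is one way to validate that 4-hour RTO:

az sql db restore --resource-group myRG --server myserver --name MyDatabase --dest-name MyDatabase-restored --time "2020-05-31T13:15:00Z"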

Considerations when creating a DR plan might include:

  • What are the requirements for each application/service?
  • How are systems linked – what are the dependencies between applications/services?
  • How will you recover within the required RPO and RTO constraints?
  • How can replicated data be switched over?
  • Are there multiple environments (e.g. dev, test and production)?
  • How will you recover from logical errors in a database that might impact several generations of backup, or that may have spread through multiple data replicas?
  • What about cloud services – do you need to back up SaaS data (e.g. Office 365)? (Possibly not, if you’re happy with a retention period-based restoration from a “recycle bin” or similar – but what if an administrator deletes some data?)

As can be seen, there are many factors here – more than I can go into in this blog post, but a disaster recovery strategy needs to consider backup/recovery, archive, availability (high or otherwise), technology and service (it may help to think about some of the ITIL service design processes).

In summary, disaster recovery is about having a plan to be able to recover from an event that results in downtime and data loss.

Technologies that might help include Azure Site Recovery. Applications can also be designed with data replication and recovery in mind, for example, using geo-replication capabilities in Azure Storage/Amazon S3 or Azure SQL Database/Amazon RDS, or using a globally-distributed database such as Azure Cosmos DB. And DR plans must be periodically tested.
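
For instance (a minimal sketch – the account name is a placeholder), geo-redundant storage is simply a SKU choice when a storage account is created, after which data is asynchronously replicated to the paired region:

az storage account create --resource-group myRG --name mygrsstorage123 --location uksouth --sku Standard_GRS --kind StorageV2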

Business continuity

Finally, Business Continuity (BC). This is something that many organisations will have had to contend with over the last few weeks and months.

BC is often confused with DR but they are different. Business continuity is about continuing to conduct business when something goes wrong. That may mean carrying on working whilst recovering from a disaster. Or it may mean adapting processes to allow a workforce to continue functioning in compliance with social distancing regulations.

Again, BC needs a plan. But many of those plans will be reconsidered now – if your BC arrangement is that, in the event of an office closure, people go to a hosted DR site with some spare equipment made available within an agreed timescale, that might not help in a global pandemic, when everyone else wants to use that facility too. Instead, how will your workforce continue to work at home? Which systems are important? How will you provide secure remote access to those systems? (How will you serve customers whilst employees are also looking after children?) The list goes on.

Technology may help with BC, but technology alone will not provide a solution. The use of modern approaches to End User Computing will certainly make secure remote and mobile working a possibility (indeed, organisations that have taken a modern approach will probably already be familiar with those practices) but a lot of the issues will relate to people and process.

In summary, Business Continuity plans may be invoked if there is a disaster but they are about adapting business processes to maintain service in times of disruption.

Wrapping up

As I was writing this post, I thought about many tangents that I could go off and cover. I’m pretty sure the topic could be a book and this post scrapes the surface. Nevertheless, I hope my thoughts are useful and show that disaster recovery cannot be considered in isolation.

Weeknote 18/2020: Microsoft 365, the rise of the humans and some data platform discovery

Some highlights from the last week of “lockdown” lunacy*…

Office 365 rebranding to Microsoft 365

For the last couple of years, Microsoft has had a subscription bundle called Microsoft 365, which includes Office 365 Enterprise, Enterprise Mobility and Security and Windows 10 Enterprise. Now some bright spark has decided to rebrand some Office 365 products as Microsoft 365. Except for the ones that they haven’t (Office 365 Enterprise E1/3/5). And Office 365 ProPlus (the subscription-based version of the Office applications) is now “Microsoft 365 Apps for Enterprise”. Confused? Join the club…

Read more on the Microsoft website.

The Rise of the Humans

A few years ago, I met Dave Coplin (@DCoplin). At the time he was working for Microsoft, with the assumed title of “Chief Envisioning Officer” (which was mildly amusing when he was called upon to interview the real Microsoft CEO, Satya Nadella at Future Decoded). Dave’s a really smart guy and a great communicator with a lot of thoughts about how technology might shape our futures so I’m very interested in his latest project: a YouTube Channel called The Rise of the Humans.

Episode 1 streamed on Wednesday evening and featured a discussion on Algorithmic Bias (and why it’s so important to understand who wrote an algorithm that might be judging you) along with some discussion about some of the tech news of the week and “the new normal” for skills development, education and technology. There’s also a workshop to accompany the podcast, which I intend to try out with my family…

Data Platform Discovery Day

I spent Thursday in back-to-back webcasts, but that was a good thing. I’d stumbled across the presence of Data Platform Discovery Day and I joined the European event to learn about all sorts of topics, with talks delivered by MVPs from around the world.

The good thing for me was that the event was advertised as “level 100” and, whilst some of the presenters struggled with that concept, I was able to grasp just enough knowledge on a variety of topics including:

  • Azure Data Factory.
  • Implementing Power BI in the enterprise.
  • An introduction to data science.
  • SQL Server and containers.
  • The importance of DevOps (particularly apt as I finished reading The Phoenix Project this week).
  • Azure SQL Database Managed Instances.
  • Data analysis strategy with Power BI.

All in all, it was a worthwhile investment of time – and there’s a lot there for me to try and put into practice over the coming weeks.

2×2

I like my 2x2s, and found this one that may turn out to be very useful over the coming weeks and months…

Blogging

I wrote part 2 of my experiences getting started with Azure Sphere, this time getting things working with a variety of Azure Services including IoT Hub, Time Series Insights and IoT Central.

Decorating

I spent some time “rediscovering” my desk under the piles of assorted “stuff” this week. I also, finally, put my holographic Windows 2000 CD into a frame and it looks pretty good on the wall!

* I’m just trying to alliterate. I don’t really think social distancing is lunacy. It’s not lockdown either.

Getting started with Azure Sphere: Part 2 (integration with Azure services)

Last week, I wrote about my experiences getting some sample code running on an Avnet Azure Sphere Starter Kit. That first post walked through installing the SDK, setting up my development environment (I chose to use Visual Studio Code), configuring the device (including creating a tenant, claiming the device, connecting the device to Wi-Fi, and updating the OS), and downloading and deploying a sample app.

Since then, I’ve managed to make some steps forward with the Element 14 out of the box demo by Brian Willess (part 1, part 2 and part 3). Rather than repeat Brian’s posts, I’ll focus on what I did to work around a few challenges along the way.

Working around compiler errors in Visual Studio Code using the command line

My first issue was that the Element 14 blogs are based on Visual Studio – not Visual Studio Code – and I was experiencing issues where Code would complain that it couldn’t find a compiler.

Thanks to my colleague Andrew Hawker, who was also experimenting with his Starter Kit (but using a Linux VM), I had a workaround. That workaround was to run CMake and Ninja from the command line, then to sideload the resulting app package onto the device from the Azure Sphere Developer Command Prompt:

cmake ^
-G "Ninja" ^
-DCMAKE_TOOLCHAIN_FILE="C:\Program Files (x86)\Microsoft Azure Sphere SDK\CMakeFiles\AzureSphereToolchain.cmake" ^
-DAZURE_SPHERE_TARGET_API_SET="4" ^
-DAZURE_SPHERE_TARGET_HARDWARE_DEFINITION_DIRECTORY="C:\Users\%username%\AzureSphereHacksterTTC\Hardware\avnet_mt3620_sk" ^
-DAZURE_SPHERE_TARGET_HARDWARE_DEFINITION="avnet_mt3620_sk.json" ^
--no-warn-unused-cli ^
-DCMAKE_BUILD_TYPE="Debug" ^
-DCMAKE_MAKE_PROGRAM="ninja.exe" ^
"C:\Users\%username%\AzureSphereHacksterTTC\AvnetStarterKitReferenceDesign"
ninja
azsphere device sideload deploy --imagepackage AvnetStarterKitReferenceDesign.imagepackage

I wasn’t able to view the debug output (despite my efforts to use PuTTY to read 192.168.35.2:2342) but I was confident that the app was working on the device so moved on to integrating with cloud services.

Brian Willess has since updated the repo so it should now work with Visual Studio Code (at least for the high level application) and I have successfully tested the non-connected scenario (part 1) with the changes.

Integration with Azure IoT Hub, device twins and Azure Time Series Insights

Part 2 of the series of posts I was working through is where the integration starts. The basic steps (refer to Brian Willess’ post for full details) were:

  1. Create an Azure IoT hub, which is a cloud-hosted back-end for secure communication with Internet of Things (IoT) devices, of which the Azure Sphere is just one of many options.
  2. Create and configure the IoT Hub Device Provisioning Service (DPS), including:
    • Downloading a certificate from the Azure Sphere tenant (using azsphere tenant download-CA-certificate --output CAcertificate.cer at the Azure Sphere Developer Command Prompt) and using this to authenticate with the DPS, including validation with the verification code generated by the Azure portal (azsphere tenant download-validation-certificate --output validation.cer --verificationcode verificationcode) and uploading the resulting certificate to the portal.
    • Creating an Enrollment Group, to enrol any newly-claimed device whose certificate is signed by my tenant. This stage also includes the creation of an initial device twin state, editing the JSON to include some extra lines (a fuller sketch of the twin document follows the list below):
      "userLedRed": false,
      "userLedGreen": false,
      "userLedBlue": true
    • The initial blue illumination of the LED means that we can see when the Azure Sphere has successfully connected to the IoT Hub.
  3. Edit the application source code (I used Visual Studio Code but any editor will do) to:
    • Uncomment #define IOT_HUB_APPLICATION in build_options.h.
    • Update the CmdArgs line in app_manifest.json with the ID Scope from the DPS Overview in the Azure portal.
    • Update the AllowedConnections line in app_manifest.json with the FQDNs from the DPS Overview (Global Device Endpoint) and the IoT Hub (Hostname) in the Azure portal.
    • Update the DeviceAuthentication line in app_manifest.json with the Azure Sphere tenant ID (which may be obtained using azsphere tenant show-selected at the Azure Sphere Developer Command Prompt).
  4. Build and run the app. I used the CLI as detailed above, but this should now be possible within Visual Studio Code.
  5. Use the device twin capabilities to manipulate the device, for example turning LEDs on/off (though clearly there are more complex scenarios that could be used in real deployments!).
  6. Create a Time Series Insights resource in Azure, which is an analytics solution to turn IoT data into actionable insights.
    • Create the Time Series Insights environment using the existing IoT Hub with an access policy of iothubowner and consumer group of $Default.
  7. Add events inside the Time Series Insights to view the sensor readings from the Azure Sphere device.
Time Series Insights showing sensor data from an Azure Sphere device.
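
For reference, those “extra lines” from step 2 sit inside the desired properties of the initial device twin state – a minimal sketch of the whole twin document looks something like this:

{
  "tags": {},
  "properties": {
    "desired": {
      "userLedRed": false,
      "userLedGreen": false,
      "userLedBlue": true
    }
  }
}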

Time Series Insights can get expensive for a simple test project without any real value. I could quickly have used my entire month’s Azure credits, so I deleted the resource group used to contain my Azure Sphere resources before moving on to the next section…

Integration with Azure IoT Central

Azure IoT Central is a hosted IoT platform. It is intended to take away much of the underlying complexity and let organisations quickly build IoT solutions using just a web interface.

Following part 3 in Brian Willess’ Azure Sphere series, I was able to get my device working with IoT Central, both using the web interface to control the LEDs on the board and also pushing sensor data to a dashboard. As before, these are just the basic steps – refer to Brian Willess’ post for full details:

  1. Create a new IoT Central application.
  2. Select or create a template:
    • Use the IoT device custom template.
    • Either import an existing capability model (this was mine) or create one, adding interfaces (sensors, buttons, information, etc.) and capabilities.
    • Create custom views – e.g. for LED device control or for device metrics.
  3. Publish the template.
  4. Configure DPS:
    • Download a certificate from the Azure Sphere tenant using azsphere tenant download-CA-certificate --output CAcertificate.cer at the Azure Sphere Developer Command Prompt. (This is the same certificate already generated for the IoT Hub example.)
    • Upload the certificate to IoT Central and generate a validation code, then use azsphere tenant download-validation-certificate --output validation.cer --verificationcode verificationcode to apply this.
    • Upload the new validation certificate.
  5. Create a non-simulated device in IoT Central.
  6. Run ShowIoTCentralConfig.exe, providing the ID Scope and a shared access signature key for the device (both obtained from the Device Connection details in IoT Central) and the Device ID (from the device created in the previous step). Make a note of the details provided by the tool.
  7. Configure the application source code to connect to IoT Central:
    • Uncomment #define IOT_CENTRAL_APPLICATION in build_options.h.
    • Update the CmdArgs line in app_manifest.json with the ID Scope obtained from the Device Connection details in IoT Central.
    • Update the AllowedConnections line in app_manifest.json with the FQDNs obtained by running ShowIoTCentralConfig.exe.
    • Update the DeviceAuthentication line in app_manifest.json with the Azure Sphere tenant ID (which may be obtained using azsphere tenant show-selected at the Azure Sphere Developer Command Prompt).
  8. Build and run the application.
  9. Associate the Azure Sphere device with IoT Central (the device created previously was just a “dummy” to get some configuration details). IoT Central should have found the real device but it will need to be “migrated” to the appropriate device group to pick up the template created earlier.
  10. Open the device and enjoy the data!

I hadn’t expected IoT Central to cost much (if anything, because the first two devices are free) but I think the app I’m using is pretty chatty so I’m being charged for extra messages (30,000 a month sounds like a lot until you realise it’s only around 40 an hour on a device that’s sending frequent updates to/from the service). It seems to be costing just under £1/day (from a pool of credits) so I won’t be worrying too much!

What’s next for my Azure Sphere device?

Having used Brian Willess’ posts at Element 14 to get an idea of how this should work, I think my next step is to buy some external sensors and write some real code to monitor something real… unfortunately the sensors I want are on back order until the summer but watch this space!

Weeknote 17/2020: Geeking out and taking advantage of the sunshine

Another week of socially-distanced, furloughed fun: here are some of the highlights…

“Playing” with tech: Azure Sphere

I took a break from exam study this week, partly because I had some internal meetings that made a big hole in the calendar and diverted my attention. Instead, I finally got my Azure Sphere Starter Kit IoT device working, with both Microsoft samples and with some more practical advice from Brian Willess at Element 14.

I’m blogging my progress (slightly behind the actual learning) but over the course of a few days, supported by Brian’s blog posts, I managed to get the sensor readings from my device working locally, with Azure IoT Hub and Time Series Insights, and then finally in Azure IoT Central.

The next stop is to try and write some code of my own rather than using other people’s – it’s been a while since I wrote any C/C++!

Blogging

I also wrote some blog posts:

Other geek stuff

I finished watching “Devs“. No spoilers here, but the ending did leave me a little flat…

I didn’t spot any SpaceX Starlink satellites, despite a few attempts and some very clear evenings. This website seemed particularly helpful, although the developer (@modeless) had to remove the Google Street View content when the site got popular.

Being “too British”

Thursday meant my usual trip to the local market, followed by the supermarket, buying provisions for my family and others. Because product availability in the supermarket is a bit “hit and miss” (and because I prioritise supporting local businesses over the big retailers, where I can), I bought some peppers (capsicums) from the market greengrocer. There was no price displayed but, as he bagged them, he said they were expensive… and he was not wrong: £3/lb, I think! But I was too embarrassed to say “no thanks at that price” so bought them anyway. Lesson learned…

To add insult to injury, when I got to Sainsbury’s they had plenty, at a much more reasonable price…

“On holiday”, in the garden

The week wrapped up with sunshine, low wind and reasonably high temperatures (19°C is not bad for April in England!). After a decent bike ride with my son (permissible under the current social distancing advice), I made the time to just relax a bit…

What a great way to end the week!

Getting started with Azure Sphere: Part 1 (setup and running a sample app)

Late in 2019, I got my hands on an Azure Sphere Starter Kit, which I’ve been intending to use for an IoT project, using some of the on-board sensors for temperature and potentially an external one for humidity…

For those who aren’t familiar with Azure Sphere, it’s Microsoft’s Secure Internet of Things (IoT) solution using certified chips, a custom operating system and a security service. My device is an Avnet Azure Sphere MT3620 Starter Kit and this blog post focuses on getting it up and running with one of the sample applications that Microsoft provides, using Windows 10 (other options include Linux).

Installing Visual Studio Code and the Azure Sphere SDK

Having obtained the kit, the next stop was Microsoft’s Getting Started with Azure Sphere page. I downloaded and installed Visual Studio Code (I don’t really need the whole Visual Studio 2019 application – though I later found that a lot of the advice on the Internet assumes that’s what you’re using…) and then immediately found that there are two versions of the Azure Sphere Software Development Kit (SDK). According to the Microsoft docs, either can be used with Visual Studio Code but I found that setup for the Azure Sphere SDK for Visual Studio failed when it couldn’t find Visual Studio (not really surprising), so I used the Azure Sphere SDK for Windows.

Connecting the hardware

I plugged in the Avnet Azure Sphere Starter Kit, using the supplied USB cable, and watched as Windows installed drivers after which a virtual network interface was present and three COM ports appeared in Device Manager.

Setting up my dev environment

Installing Visual Studio Code and the Azure Sphere SDK was only the first part of getting ready to create code for the device. I needed to install the Azure Sphere extension (easily found in the Extensions Marketplace):

The Azure Sphere extension also installs two dependencies:

  • C/C++
  • CMake Tools

I also needed to install CMake (in my case it was version 3.17.1). Not really knowing what I was doing, I followed the defaults but on reflection, I probably should have let CMake add its directory to the system %PATH% variable (I later uninstalled and reinstalled CMake to do this, but could just have added C:\Program Files\CMake\bin to the Path in the user environment variables).

The final installation was Ninja. Windows Defender SmartScreen blocked this app, but I was later able to work around that, by unblocking in the properties for ninja.exe:

I missed the point in the Microsoft documentation that said I needed to manually add Ninja to the %PATH% environment variable but I later went back and added the folder that I copied ninja.exe to (which, for me, was C:\Users\%username%\Tools).

(The above steps were my second attempt – the first time I installed MinGW-W64 to work around issues when Visual Studio Code couldn’t find a compiler, together with several changes in settings.json. I later removed all of that and managed to compile and deploy a sample application using just the settings above…)

Configuring the Azure Sphere device for use

There are a few steps required to configure the device for use. These are all completed using the Azure Sphere Developer Command Prompt, which was installed earlier, with the SDK.

Creating an Azure Sphere tenant and claiming the device

Each Azure Sphere device must be “claimed” and associated with a “tenant”. I followed the Microsoft documentation to do this…

azsphere login --newuser user@domain.tld

After completing Multi-Factor Authentication (MFA) and confirming I wanted to allow Azure Sphere to use my account, I was logged in but with a warning that I don’t have access to any Azure Sphere tenants, so I created one:

azsphere tenant create --name "Mark Wilson"

Warning – more research required: I used a Microsoft Account, as per the Microsoft instructions, but am now concerned I should have used an Azure Active Directory (Organisational/Work or School) account (especially as Role Based Access Control is supported from Azure Sphere 19.10 onwards). As a device can only be claimed once and, once claimed, the device is permanently associated with the Azure Sphere tenant, I’m stuck with these settings now…

I then went ahead and claimed the device:

azsphere device claim

Connecting to Wi-Fi and updating the device operating system

I checked the current OS version on the device:

azsphere device show-deployment-status

As can be seen, not only is the OS out of date, but the device is not connected to a network, so I connected to Wi-Fi:

azsphere device wifi show-status
azsphere device wifi add --ssid "SSID" --psk password
azsphere device wifi show-status

Now, with network connectivity in place, the device had a fighting chance of an OS update and according to the Microsoft documentation:

The Azure Sphere device checks for Azure Sphere OS and application updates each time it boots, when it initially connects to the internet, and at 24-hour intervals thereafter. If updates are available, download and installation could take as much as 15-20 minutes and might cause the device to restart.

Configure networking and update the device OS

I tried several restarts using azsphere device restart with no success. In the end, I left the device connected overnight and, by the morning, it had updated to 20.03.

Finally, I enabled application development on the device, ready to download some code and deploy an application:

azsphere device enable-development

Downloading a sample app

My initial attempts to use the app that I wanted to didn’t work so I decided to test my setup with one of the Microsoft Quick Starts.

I needed to use git to clone the Azure Sphere Samples Repo, so that meant installing git. Then, from the Terminal in Visual Studio Code, I ran git clone https://github.com/Azure/azure-sphere-samples.git.

I then opened the Samples\HelloWorld\HelloWorld_HighLevelApp folder in Visual Studio Code, ready to build and deploy the app.

Building and deploying the app

Having set up my dev environment, set up the device and downloaded some sample code, I followed the instructions in the Visual Studio Code Azure Sphere Extension to run the following in the Command Palette: Azure Sphere: Configure Settings (selecting High-Level Application) and CMake: Build.

I was then able to build and deploy the sample app to my Azure Sphere device by starting a debug session (F5) – and was rewarded with a blinking LED on the board!

Azure Sphere Starter Kit with blinking LED

I can also view the application status with azsphere device app show-status.

Next steps

The next step is to get the app I really wanted to use working on the device, making use of some of the on-board sensors and then integrating this with some of the Azure services. I’m having trouble compiling that code at the moment, so that blog post may be a while longer…

Further reading

When is agile not Agile? And why Waterfall is not always wrong!

In 2004 (when I started writing this blog), I was working for a company called conchango*. The developers talked a strange language – about Scrum and XP – and it was nothing to do with Rugby Union or Windows but it did have something to do with sprints…

That was my first encounter with Agile software development methodologies. Not being a coder, I haven’t done a huge amount of agile development, with the infrastructure projects I’ve been involved in generally being run using a traditional “waterfall” approach.

These days, things are different. There’s a huge push for Agile projects and the UK Government Digital Service’s Service Manual even says:

“You must use the agile approach to project management to build and run government digital services.

Agile methods encourage teams to build quickly, test what they’ve built and iterate their work based on regular feedback.”

Agile and government services: an introduction

There’s also a lot of confusion in the marketplace. Colleagues and clients alike are using the word “agile” in different ways. And there’s an undertone that agile is the one true way and waterfall is bad.

No!

Agile/agile/agility

Let’s start off by comparing uses of the word “agile” (in IT) and what they mean:

  1. Agile (big A) often relates to a methodology – for example APMG International’s AgilePM project management methodology or the AgileBA approach to business analysis – but really they have their roots in Agile software development, with the Agile Manifesto, written in 2001.
  2. When we talk about being agile (small a), it’s a mindset: the approach taken. Literally, being able to adapt to change and to move quickly. We might use Agile (big A) approaches to help increase our agility (small a).
  3. Agility is about reaction to change. Many businesses want to be agile. That doesn’t mean they only run projects with Agile approaches. It means they want the ability to flex and change in line with business requirements.
  4. And then there’s the UK public sector. Specifically Police, who for some reason refer to what the rest of us consider to be remote/mobile working as agile working (as shown in this Agile Working Policy from West Yorkshire Police). That’s just an anomaly.

So that’s Agile/agile/agility sorted then. There are Agile frameworks/methodologies/approaches to delivering outcomes in a more agile manner, to increase organisational agility.

Agile=good, waterfall=bad?

Now, waterfall. If Agile is the one true way, waterfall must be old hat and avoided at all costs, right?

Not at all.

Agile projects work well for quickly creating a minimum viable product (MVP) and iterating development – for example as a series of sprints. They are great when there is a known problem but the requirements are less clear. The solution can evolve in line with the definition of the requirements. The requirements may change as the solution develops: respond to market changes; adapt to new requirements; fail fast.

But some projects are less defined. In a 2018 blog post, Matt Ballantine (@ballantine70) referred to unknown problems with unknown solutions as tinkering. That seems fair – if you don’t know what the issue is, then you can’t have a solution!

Similarly, unknown problems with a known solution. That’s nonsense. Or “WTF?” as Matt so succinctly puts it in his 2×2 diagram:

Matt Ballantine’s 2×2 for which path to take, including Agile and Waterfall approaches (used with kind permission).

You’ll see though, that there is a place for waterfall project management. Waterfall works when there is a known problem and a known solution. Instead of constantly iterating towards an end, work out the steps to go straight there. It will almost certainly be more efficient. Waterfall projects are based on the golden triangle of time/cost/quality (which together define scope). A known deliverable (scope) bounded by how fast/cheap/good you want it to be – and there’s always a trade-off.

So there we have it. Agile is not a silver bullet and there is still a place for waterfall projects.

What to use, when?

In my line of work, Cloud Transformation might appear to use a combination of Agile and Waterfall approaches. We might create a virtual datacentre in Azure or AWS and take an iterative approach to migrating workloads but that’s still really just Waterfall with incremental delivery – even if a Kanban approach is used to inject some urgency! Similarly migrating batches of mailboxes to the cloud is just iteration, as is a programme that’s adopting Office 365 workloads one by one. An Agile approach comes into its own when we think about Business Transformation, or Digital Transformation, where we can define an MVP and then use sprints to iterate development of a set of new business processes or the digital tools to deliver those processes in a new way.

Further reading

For a clear definition of Cloud, Business and Digital Transformation, see my blog post from last year: “Digital Transformation – it’s not about the Technology“.

* The small c is not a typo – that was the branding!