Tweaking audio and (webcam) video quality in Windows 10

Back in the spring (whilst I was on Furlough Leave and had time for weeknotes), I wrote about some upgrades to my home office. The LED lights didn’t work out (battery life was too short – I need to find something that works from mains power) so they went back to Amazon but the Marantz MPM-1000U microphone has been excellent.

I’ve seen a few tweets and videos recently about using software to use a smartphone camera as a webcam. Why might you do that? Well, because many laptop webcams are a bit rubbish (like the one in my Apple MacBook) or poorly placed, giving an unflattering view from below.

I had a play with the Iriun webcam software recommended in this video from Kevin Stratvert and it worked really well, with the phone on a tripod, giving a better angle of view.

Ultimately though, the Microsoft Surface Pro 6 that I use for work has a pretty decent webcam, and my Nokia 7 Plus was no better quality – all I was really gaining was a better camera position.

I do still have a challenge with lighting. My desk position means that I’m generally back-lit with a north-facing window to my left. Some fill-in light in front might help but I also wanted to adjust the settings on my webcam.

Microsoft Teams doesn’t let me do that – but the Camera app in Windows 10 does… as described at Ceofix, there is a “Pro mode” in the Windows 10 Camera app that allows the brightness to be adjusted. There are more options for still images (timer, zoom, white balance, sensitivity, shutter speed and brightness) but the brightness option for video let me tweak my settings a little.

The next challenge I had was with audio. Despite using the volume controls on the Surface Pro to knock the volume up to 100% whilst I was presenting over Teams earlier, everyone else on the call sounded very quiet. It turned out that 100% was not 100% – there is a Realtek Audio Console app on my PC which lets me adjust the speaker and microphone settings, including volume, balance, Dolby audio, sample rate and depth. Finding this revealed that my volume was actually nowhere near 100% and I was quickly able to increase it to a level where I could hear my client and co-presenters!

Bulk removing passwords from PDF documents

My payslip and related documents are sent to me in PDF format. To provide some rudimentary protection from interception, they are password protected, though the password is easily obtained by anyone who knows what the system is.

Because these are important documents, I store a copy in my personal filing system, but I don’t want to have to enter the password each time I open a file. I know I can open each file individually and then resave without a password (Preview on the Mac should do this) but I wanted a way to do it in bulk, for 10s of files, without access to Adobe Acrobat Pro.

Twitter came to my aid with various suggestions including Automator on the Mac. In the end, the approach I used employed an open source tool called QPDF, recommended to me by Scott Cross (@ScottCross79). Scott also signposted a Stack Overflow post with a PowerShell script to run against a set of files but it didn’t work (leading to a rant about how Stack Overflow’s arcane rules and culture prevented me from making a single character edit) and turned out to be over-engineered. It did get me thinking though…

Those of us old enough to have written MS-DOS batch files will probably remember setting environment variables. Combined with a good old FOR loop, I got this:

FOR %G IN (*.pdf) DO qpdf --decrypt --password=mypassword "%G" --replace-input

Obviously, replace mypassword with something more appropriate. The --replace-input switch avoids the need to specify output filenames, and the FOR command simply cycles through an entire folder, removing the encryption from each file. (Note that %G works at the command prompt; inside a batch file the variable needs doubled percent signs, as %%G.)
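If you'd rather script it than run the loop by hand, the same idea takes only a few lines of Python. This is a sketch, assuming qpdf is installed and on the PATH; the password shown is illustrative:

```python
import subprocess
from pathlib import Path

def build_qpdf_cmd(pdf: Path, password: str) -> list[str]:
    """The qpdf arguments to remove a known password in place."""
    return ["qpdf", "--decrypt", f"--password={password}", str(pdf), "--replace-input"]

def strip_passwords(folder: Path, password: str) -> None:
    """Run qpdf over every PDF in the folder, decrypting each file in place."""
    for pdf in folder.glob("*.pdf"):
        subprocess.run(build_qpdf_cmd(pdf, password), check=True)

# strip_passwords(Path("."), "mypassword")  # uncomment and supply your own password
```

The check=True means the script stops with an error if qpdf fails on any file (wrong password, corrupt PDF), rather than silently moving on.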

The 5 or 6 Rs of cloud transformation

A few years ago, a couple of colleagues showed me something they had been working on – a “5 Rs” approach to classifying applications for cloud transformation. It was adopted for use in client engagements but I decided it needed to be extended – there was no “do nothing” option, so I added “Remain” as a 6th R.

I later discovered that my colleagues were not the first to come up with this model. When challenged, they maintained that it was an original idea (and I was convinced someone had stolen our IP when I saw it used by another IT services organisation!). Research suggests Gartner defined 5Rs in 2010 and both Microsoft and Amazon Web Services have since created their own variations (5Rs in the Microsoft Cloud Adoption Framework and 6Rs in Amazon Web Services’ Application Migration Strategies). I’m sure there are other variations too, but these are the main ones I come across.

For reference, this is the description of the 6Rs that we use where I work, at risual:

  • Replace (or repurchase) – with an equivalent software as a service (SaaS) application.
  • Rehost – move to IaaS (lift and shift). This is relatively fast, with minimal modification but won’t take advantage of cloud characteristics like auto-scaling.
  • Refactor (or replatform/revise) – decouple and move to PaaS. This may provide lower hosting and operational costs together with auto-scaling and high availability by default.
  • Redesign (or rebuild/rearchitect) – redevelop into a cloud-aware solution. For example, if a legacy application is providing good value but cannot be easily migrated, the application may be modernised by rebuilding it in the cloud. This is the most complicated approach and will involve creating a new architecture to add business value to the core application through the incorporation of additional cloud services.
  • Remain (or retain/revisit) – for those cases where the “do nothing” approach is appropriate although, even then, there may be optimisations that can be made to the way that the application service is provided.
  • Retire – for applications that have reached the end of their lifecycle and are no longer required.

Right now, I’m doing some work with a client who is looking at how to transform their IT estate and the 5/6Rs have come into play. To help my client, who is also working with both Microsoft and AWS, I needed to compare our version with Gartner’s, Microsoft’s and AWS’… and this is what I came up with:

| risual | Gartner | Microsoft | AWS | Notes |
|--------|---------|-----------|-----|-------|
| Replace | Replace | Replace | Repurchase | Whilst AWS uses a different term, the approach is broadly similar – look to replace/repurchase existing solutions with a SaaS alternative: e.g. Office 365, Dynamics 365, Salesforce, WorkDay, etc. |
| Rehost | Rehost | Rehost | Rehost | All are closely aligned in thinking – rehost is the “lift and shift” option, based on infrastructure as a service (IaaS), which is generally straightforward from a technical perspective but may not deliver the same long-term benefits as other cloud transformation methods. |
| Refactor | Refactor | Refactor | Replatform | Refactoring generally involves the adoption of PaaS – for example, making use of particular cloud frameworks, application hosting or database services; however, this may be at the expense of portability between clouds. The exception is AWS, which uses refactor in a slightly different context and replatform for what it refers to as “lift, tinker and shift”. |
| – | Revise | – | – | Gartner’s revise relates to modifying existing code before refactoring or rehosting. risual, Microsoft and AWS would all consider this as part of refactoring/replatforming. |
| Redesign | Rebuild | Rebuild | Refactor/re-architect | Gartner defines rebuilding as moving to PaaS, rebuilding the solution and rearchitecting the application. AWS groups its definitions of refactoring and rearchitecting, although its definition of refactor is closer to Microsoft/Gartner’s rebuild – adding features, scale or performance that would otherwise be difficult to achieve in the application’s existing environment. |
| – | – | Rearchitect | – | Microsoft makes the distinction between rebuilding (creating a new cloud-native codebase) and rearchitecting (looking for cost and operational efficiencies in applications that are cloud-capable but not cloud-native) – for example, migrating from a monolithic architecture to a serverless architecture. |
| Remain | – | – | Retain/revisit | Perhaps because their application transformation strategies assume that there is always some transformation to be done, Gartner and Microsoft do not have a remain/retain option. This can be seen as the “do nothing” approach but, as AWS highlights, it’s really a revisit as “do nothing” is a holding state. Maybe the application will be deprecated soon – or was recently purchased/upgraded and so is not a priority for further investment. It is likely to be addressed by one of the other approaches at some point in future. |
| Retire | – | – | Retire | Sometimes, an application has outlived its usefulness – or just costs more to run than it delivers in value – and should be retired. Neither Gartner nor Microsoft recognise this within their 5Rs. |

Whichever 5 or 6Rs approach you take, it can be a useful way of categorising potential transformation opportunities and I’m often surprised how the exercise exposes services that are consuming resources long after their usefulness has ended.

Intel NUC makes a fantastic Zwift computer (and Samsung DeX is pretty cool for homework)

With my tech background, my family is more fortunate than many when it comes to finding suitable equipment for the kids to use whilst school is closed. Even so, we’ve struggled with both my teenagers sharing one laptop – they really do both need to use it at the same time.

We thought that one of them using a tablet would be OK, but that wasn’t really working out either. Then, a few weeks ago, I thought I’d found a great solution to the problem. My youngest has a Samsung Galaxy S10 smartphone, which supports Samsung DeX. We tried it out with the Apple USB-C to HDMI/USB-A power adapter and it worked a treat:

The only problem was the keyboard. I tried some Bluetooth keyboards for Android but they all had small keys. And we tried a normal PC keyboard, which worked well but lacked a trackpad and didn’t have a USB port for a mouse. Using the phone as a trackpad was awkward, so I was going to have to buy another keyboard and either a trackpad or a mouse – or find a way of splitting the USB-A socket to run two devices. It was all a bit Heath Robinson so I started looking for another approach…

I had been using an old laptop for Zwifting but, after seeing Brian Jones (@brianjonesdj) tweet about an Intel NUC, I realised that I could get one for not too much money, hook it up to the TV in the Man Cave and release the laptop for general family use.

It took a while to decide which model to go for but, in the end, I settled for the Intel Dual Core 8th Gen i3 Short NUC Barebone Mini PC Kit, with 120GB SSD and 8GB RAM (all from Scan Computers) – and it is a fantastic little thing:

I did spend far too much time re-downloading the latest version of Windows 10 because, not having read the error message properly, I thought the download was corrupted. Actually, it was a problem with the USB thumb drive I was using, fixed with a full format (instead of a quick one).

Anyway, here are Microsoft’s instructions for creating Windows 10 boot media. F10 is the magic key to make the NUC boot from an alternative device, but I found USB boot only worked from the ports at the rear of the machine – not the ones on the front. Finally, here’s a location for downloading Windows 10 ISOs (it doesn’t really matter where you get the media, as long as it’s an official source, so if you download from a Volume Licence or Visual Studio subscription, that should be fine too).

With the NUC in the cave, the laptop has been released for general family computing. My Microsoft 365 Family subscription (formerly Office 365 Home) gives access to 6 copies of the Office apps, so that more than covers the Windows and macOS PCs used by myself, my wife and the boys. (The Microsoft 365 subscription also includes Office mobile apps for iOS/Android and 1TB of cloud storage in OneDrive, as well as other benefits.)

Weeknote 22/2020: holidaying on the Costa del Great Ouse (plus password resets, cycling performance, video-conferencing equipment and status lights)

In the last few hours of 2019, my family planned our holiday. We thought we had it all sorted – fly to Barcelona, spend the weekend sight-seeing (including taking my football-mad son to Camp Nou) and then head up the coast for a few more days in the Costa Brava. Flights were booked, accommodation was sorted, trips were starting to get booked up.

We hadn’t counted on a global pandemic.

To be clear, I’m thankful that myself, my family and friends, and those around us are (so far) safe and well. By April, I didn’t much like the prospect of getting into a metal tube with 160+ strangers and flying for 3 hours in each direction. We’re also incredibly lucky to be able to access open countryside within a couple of hundred metres of our house, so daily exercise is still possible and enjoyable, with very few people around, most of the time.

I still took the week off work though. After cancelling my Easter break, it’s been a while since I took annual leave and even my Furlough period was not exactly relaxing, so I could do with a rest.

The weather has been glorious in the UK this week too, making me extra-glad we re-landscaped the garden last year and I’ve spent more than a few hours just chilling on our deck.

Unfortunately, we also got a taste of what it must be like to live in a tourist hotspot, as hundreds of visitors descended on our local river each day this weekend. It seems the Great Ouse at Olney has appeared in a list of top places to swim in Britain, recently featured in The Times. It may sound NIMBYish, but please can they stay away until this crisis is over?

As for the holiday, hopefully, we’ll get the money refunded for the cancelled flights (if the airlines don’t fold first – I’m sure that if they refunded everyone they would be insolvent, which is my theory for why they are not increasing staff levels to process refunds more quickly); FC Barcelona contacted me weeks ago to extend my ticket and offer a refund if we can’t use it; and AirBnB had the money back in our account within days of us being forced to pull out due to cancelled flights.

(I did spend a few weeks effectively “playing chicken” with easyJet to see if they would cancel first, or if it would be us. An airline-cancelled flight can be refunded, but a consumer-cancelled flight would be lost, unless we managed to claim on travel insurance).

Even though I’ve had a week off, I’ve still been playing with tech. Some of my “projects” should soon have their own blog post (an Intel NUC for a new Zwift PC; migrating my wife’s personal email out of my Office 365 subscription to save me a licence; and taking a look at Veeam Backup for Office 365), whilst others get a brief mention below…

Please stop resetting user passwords every x days!

Regularly resetting passwords (unless a compromise is suspected) is an old way of thinking. Unfortunately, many organisations still make users change their password every few weeks. Mine came up for renewal this week and I struggled to come up with an acceptable, yet memorable passphrase. So, guess what? I wrote it down!

I use a password manager for most of my credentials but that doesn’t help with my Windows logon (before I’ve got to my browser). Biometric security like Windows Hello helps too (meaning I rarely use the password, but am even less likely to remember it when needed).

Here’s the National Cyber Security Centre (@NCSC)’s password guidance infographic (used with permission) and the associated password guidance:

This list of 100,000 commonly used passwords that will get blocked by some systems may also be useful – from Troy Hunt (@TroyHunt) but provided to me by my colleague Gavin Ashton (@gvnshtn).
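For illustration, here's a minimal sketch of how a system might screen new passwords against such a blocklist (the list is inlined here for demonstration; in practice you'd load Troy Hunt's file, whose name below is just an example):

```python
def load_blocklist(lines) -> set[str]:
    """Normalise one-password-per-line input into a lookup set."""
    return {line.strip().lower() for line in lines if line.strip()}

def is_banned(password: str, banned: set[str]) -> bool:
    """Case-insensitive check against a blocklist of known-bad passwords."""
    return password.lower() in banned

# In practice: banned = load_blocklist(open("10-million-password-list-top-100000.txt"))
banned = load_blocklist(["password", "qwerty", "letmein"])
print(is_banned("LetMeIn", banned))  # True - matches regardless of case
```

A set lookup is O(1), so even a 100,000-entry list costs next to nothing at password-change time.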

Performance analysis for cyclists, by cyclists

I’ve been watching with interest as my occasional cycling buddy (and now Azure MVP) James Randall (@AzureTrenches) has been teasing development on his new cycling performance platform side project. This week he opened it up for early access and I’ve started to road test it… it looks really promising and I’m super impressed that James created this. Check it out at For Cyclists By Cyclists.

Podcasting/video conferencing upgrades in my home office

With video conferencing switching from something-I-use-for-internal-calls to something-I-use-to-deliver-consulting-engagements, I decided to upgrade the microphone and lighting in my home office. After seeking some advice from those who know about such things (thanks Matt Ballantine/@ballantine70 and the WB-40 Podcast WhatsApp group), I purchased a Marantz MPM-1000U microphone, boom arm, shock mount, and a cheap rechargeable LED photography light with tripod.

It’s early days yet but initial testing suggests that the microphone is excellent (although the supplied USB A-B cable is too short for practical use). I had also considered the Blue Yeti/Raspberry but it seems to have been discontinued.

As for the photo lighting, it should be just enough to illuminate my face as the north-facing window to my left often leaves me silhouetted on calls.

Smart lighting to match my Microsoft Teams presence

I haven’t watched the Microsoft Build conference presentations yet, but I heard that Scott Hanselman (@shanselman) featured Isaac Levin (@isaacrlevin)’s PresenceLight app to change the lighting according to his Windows Theme. The app can also be used to change Hue or LIFX lighting along with Teams presence status, so that’s in place now outside my home office.

It’s not the first time I’ve tried something like this:

One particularly useful feature is that I can be logged in to one tenant with the PresenceLight app and another in Microsoft Teams on the same PC – that means that I can control my status with my personal persona so I may be available to family but not to colleagues (or vice versa).

One more thing…

It may not be tech-related, but I also learned the differences between wheat and barley this week. After posting this image on Instagram, Twitter was quick to correct me:

As we’re at the end of May, that’s almost certainly not wheat…

Weeknote 21/2020: work, study (repeat)

Another week in the socially-distanced economy. Not so much to write about this week as I spent most of it working or studying… and avoiding idiots who ignore the one-way system in the local supermarket…

Some more observations on remote working

It’s not often my tweets get as much engagement as this one did. So I’m putting it on the blog too (along with my wife’s response):

My “Build Box”

Unfortunately, I didn’t get to watch the Microsoft Build virtual event this week. I’m sure I’ll catch up later but it was great to receive this gift from Microsoft – it seems I was one of the first few thousand to register for the event:

Annual review

This week was my fifth anniversary of joining risual. Over that time I’ve watched the company grow and adapt, whilst trying to retain the culture that made it so strong in the early days. I don’t know if it’s possible to retain a particular culture as a business grows beyond a certain size, but I admire the attempts that are made. One of those core tenets is an annual review with at least one, if not both, of the founding Directors.

For some, that’s a nerve-wracking experience but I generally enjoy my chat with Rich (Proud) and Al (Rogers), looking back on some of the key achievements of the last year and plans for the future. Three years ago, we discussed “career peak”. Two years ago it was my request to move to part-time working. Last year, it was my promotion to Principal Architect. This year… well, that should probably remain confidential.

One thing I found particularly useful in my preparation was charting the highs and lows of my year. It was a good way to take stock – which left me feeling a lot better about what I’d achieved over the last 12 months. For obvious reasons, the image below has had the details removed, but it should give some idea of what I mean:

Another exam ticked off the expanding list

I wrapped up the work week with another exam pass (after last week’s disappointment) – AZ-301 is finally ticked off the list… taking me halfway to being formally recognised as an Azure Solutions Architect Expert.

I’ll be re-taking AZ-300 soon. And then it looks like two more “Microsoft fundamentals” exams have been released (currently in Beta):

  • Azure AI Fundamentals (AI-900).
  • Azure Data Fundamentals (DP-900).

Both of these fit nicely alongside some of the topics I’ve been covering in my current client engagement so I should be in a position to attempt them soon.

Weeknote 20/2020: back to work

Looking back on another week of tech exploits during the COVID-19 coronavirus chaos…

The end of my furlough

The week started off with exam study, working towards Microsoft exam AZ-300 (as mentioned last week). That was somewhat derailed when I was asked to return to work from Wednesday, ending my Furlough Leave at very short notice. With 2.5 days lost from my study plan, it shouldn’t have been a surprise that I ended my working week with a late-night exam failure (though it was still a disappointment).

Returning to work is positive though – whilst being paid to stay at home may seem ideal to some, it didn’t work so well for me. I wanted to make sure I made good use of my time, catching up on personal development activities that I’d normally struggle to fit in. But I was also acutely aware that there were things I could be doing to support colleagues but which I wasn’t allowed to. And, ultimately, I’m really glad to be employed during this period of economic uncertainty.

Smart cities

It looks like one of my main activities for the next few weeks will be working on a Data Strategy for a combined authority, so I spent Tuesday afternoon trying to think about some of the challenges that an organisation with responsibility for transportation and economic growth across a region might face. That led me to some great resources on smart cities including these:

  • There are some inspirational initiatives featured in this video from The Economist:
  • Finally (and if you only have a few minutes to spare), this short video from Vinci Energies provides an overview of what smart cities are really about:

Remote workshop delivery

I also had my first experience of taking part in a series of workshops delivered using Microsoft Teams. Teams is a tool that I use extensively, but normally for internal meetings and ad-hoc calls with clients, not for delivering consulting engagements.

Whilst they would undoubtedly have been easier to run face-to-face, that’s just not possible in the current climate, so the adaptation was necessary.

The rules are the same, whatever the format – preparation is key. Understand what you’re looking to get out of the session and be ready with content to drive the conversation if it’s not quite headed where you need it to go.

Editing/deleting posts in Microsoft Teams private channels

On the subject of Microsoft Teams, I was confused earlier this week when I couldn’t edit one of my own posts in a private channel. Thanks to some advice from Steve Goodman (@SteveGoodman), I found that the ability to delete and/or edit messages is set separately on a private channel (normal channels inherit from the team).

The Microsoft Office app

Thanks to Alun Rogers (@AlunRogers), I discovered the Microsoft Office app this week. It’s a great companion to Office 365, searching across all apps – similar to Delve, but in an app rather than in-browser. The Microsoft Office app is available for download from the Microsoft Store.

Azure Network Watcher

And, whilst on the subject of nuggets of usefulness in the Microsoft stable…

A little piece of history

I found an old map book on my shelf this week: a Halford’s Pocket Touring Atlas of Great Britain and Ireland, priced at sixpence. I love poring over maps – they provide a fascinating insight into the development of the landscape and the built environment.

That’s all for now

Those are just a few highlights (and a lowlight) from the week – there’s much more on my Twitter feed.

Weeknote 19/2020: Azure exam study, remote working, and novelty video conference backgrounds

Another week in the furloughed fun house…

Studying

I still have a couple of exams I’d like to complete this month. I’ve been procrastinating about whether to take the Microsoft Azure Architect exams in their current form (AZ-300/301) or to wait for the replacements (AZ-303/304). As those replacements have been postponed from late April until the end of June (at least), I’ve booked AZ-300/301 and am cramming in lots of learning, based on free training from the Microsoft Learn website.

I’m sure it’s deeper (technically) than I need for an Architect exam, but it’s good knowledge… I just hope I can get through it all before the first exam appointment next Thursday evening…

Thoughts on remote working during the current crisis

I’ve seen this doing the rounds a couple of times on Twitter and I don’t know the original source, but it’s spot on. Words to live by in these times:

  1. You’re not “working from home”. You’re “At your home, during a crisis, trying to work” [whilst parenting, schooling, helping vulnerable people, etc.].
  2. Your personal physical, mental and emotional health is far more important than anything else right now.
  3. You should not try to compensate for lost productivity by working longer hours.
  4. You will be kind to yourself and not judge how you are coping based on how you see others coping.
  5. You will be kind to others and not judge how they are coping based on how you are coping.
  6. Your success will not [should not] be measured in the same way it was when things were normal.

This animation may also help…

Also, forget the 9-5:

As for returning to the office?

Video conference backgrounds

Novelty backgrounds for video conferences are a big thing right now. Here are a couple of collections that came to my attention this week:

Upgrades to the Zwift bike

My old road bike has been “retired” for a year now, living out its life connected to an indoor trainer and used for Zwifting. It’s needed some upgrades recently though…

I also realised why I struggled to do 90km on the road today… that was my fifth ride this week, on top of another 100km which was mostly off-road!

“Disaster Recovery” and related thoughts…

Backup, Archive, High Availability, Disaster Recovery, Business Continuity. All related. Yet all different.

One of my colleagues was recently faced with needing to run “a DR [disaster recovery] workshop” for a client. My initial impression was:

  • What disasters are they planning for?
  • I’ll bet they are thinking about Coronavirus and working remotely. That’s not really DR.
  • Or are they really thinking about a backup strategy?

So I decided to turn some of my rambling thoughts into a blog post. Each of these topics could be a post in its own right – I’m just scraping the surface here…

Let’s start with backup (and recovery)

Backups (of data) are a fairly simple concept. Anything that would create a problem if it was lost should be backed up. For example, my digital photos are considered to not exist at all unless they are synchronised (or backed up) to at least two other places (some network-attached storage, and the cloud).

In a business context, we run backups in order to be able to recover (restore) our content (configuration or data) within a given window. We may have weekly full backups and daily incremental or differential backups (perhaps with more regular snapshots), then retain parent, grandparent and great-grandparent copies of the full backups (four weeks) and keep each of these as (lunar) monthly backups for a year. That’s just an example – each organisation will have its own backup/retention policies and those backups may be stored on or off-site, on tape or disk.
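That grandfather-father-son style of rotation can be expressed in a few lines of code. The following is a sketch with illustrative retention counts, not a recommendation for any particular policy:

```python
from datetime import date, timedelta

def backups_to_keep(today: date, daily: int = 7, weekly: int = 4, monthly: int = 12) -> set[date]:
    """Return the set of backup dates a GFS-style retention policy would keep."""
    keep = set()
    # Sons: the last `daily` days of incremental/full backups
    for d in range(daily):
        keep.add(today - timedelta(days=d))
    # Fathers: the last `weekly` weekly fulls (same weekday as today)
    for w in range(weekly):
        keep.add(today - timedelta(weeks=w))
    # Grandfathers: first-of-month fulls for the last `monthly` months
    y, m = today.year, today.month
    for _ in range(monthly):
        keep.add(date(y, m, 1))
        m -= 1
        if m == 0:
            y, m = y - 1, 12
    return keep
```

Everything not in the returned set is eligible for expiry – which is essentially what commercial backup software does with its retention policies.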

In summary, backups are about making sure we have an up to date copy of our important configuration information and data, so we can recover it if the primary copy is lost or damaged.

And for bonus content, some services we might consider in a modern infrastructure context include Azure Backup or AWS Backup.

Backups must be verified and periodically tested in order to have any use.

Archiving information

When I wrote about backups above, I mentioned keeping multiple copies covering various points in time. Whilst some may consider this adequate for archival, archival is really the long-term storage of data for read-only access – for example, documents that must be kept for an extended period (perhaps 7, 10, 25 or 99 years). Once that would have been paper documents, in boxes. Now it might be digital files (or database contents) on tape or disk (potentially cloud storage).

Archival might still use backup software and associated retention policies, but we’ll think carefully about the medium we store it on. For very long-term physical storage, we might need to consider the media formats (paper is bulky, so it’s transferred to microfiche; old magnetic media degrades, so it’s moved to optical storage; then the hardware becomes obsolete, so it’s moved to another format again). If storing on disk (on-premises or in the cloud), we can use slower (cheaper) disks and accept that restoration from the archive may take additional time.

In summary, archival is about long-term data storage, generally measured in many years and archives might be stored off-line, or near-line.

Technologies we might use for archival are similar to backups, but we could consider lower-cost storage – e.g. Azure Storage’s Cool or Archive tiers or Amazon S3 Glacier.

Keeping systems highly available

High Availability (HA) is about making sure that our systems are available for as much time as possible – or certainly within a given service level agreement (SLA).

Traditionally, we used technologies like a redundant array of inexpensive disks (RAID), error-correcting memory, or redundant power supplies. We might also have created server clusters or farms. All of these methods have the intention of removing single points of failure (SPOFs).

In the cloud, we leave a lot of the infrastructure considerations to the cloud service provider and we design for failure in other ways.

  • We assume that virtual machines will fail and create availability sets.
  • We plan to scale out across multiple hosts for applications that can take advantage of that architecture.
  • We store data in multiple regions.
  • We may even consider multiple clouds.

Again, the level of redundancy built into the app and its supporting infrastructure must be designed according to requirements – as defined by the SLA. There may be no point in providing an expensive four nines uptime for an application that’s used once a month by one person, who works normal office hours. But, then again, what if that application is business critical – like payroll? Again, refer to the SLA – and maybe think about business continuity too… more on that in a moment.
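As a back-of-an-envelope check on what an SLA actually promises, availability percentages translate directly into permitted downtime. A simple sketch:

```python
def allowed_downtime_minutes(availability_pct: float, period_days: int = 365) -> float:
    """Minutes of downtime permitted per period at a given availability percentage."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

# "Four nines" (99.99%) allows less than an hour of downtime per year
for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% -> {allowed_downtime_minutes(pct):.1f} minutes/year")
```

Seeing that 99.99% means under 53 minutes a year helps frame whether an application really justifies the cost of that target.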

Some of my clients have tried to implement Windows Server clusters in Azure. I’ve yet to be convinced and still consider that it’s old-world thinking applied in a contemporary scenario. There are better ways to design a highly available file service in 2020.

In summary, high availability is about ensuring that an application or service is available within the requirements of the associated service level agreement.

Technologies might include some of the hardware considerations I listed earlier, but these days we’re probably thinking more about:

Remember to also consider other applications/systems upon which an application relies.
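The dependency point matters because availability compounds: if an application relies on other services in series, the composite SLA is the product of the individual ones. A sketch (the example figures are illustrative):

```python
from math import prod

def composite_availability(*slas: float) -> float:
    """Composite availability (as a percentage) of services that must all be up."""
    return prod(s / 100 for s in slas) * 100

# e.g. an app front-end (99.95%) depending on a database (99.99%) and a queue (99.9%)
print(f"{composite_availability(99.95, 99.99, 99.9):.3f}%")
```

Note that the composite figure is always lower than the weakest individual SLA – chaining dependencies quietly erodes the availability you can promise.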

Also, quoting from some of Microsoft’s training materials:

“To achieve four 9’s (99.99%), you probably can’t rely on manual intervention to recover from failures. The application must be self-diagnosing and self-healing.

Beyond four 9’s, it is challenging to detect outages quickly enough to meet the SLA.

Think about the time window that your SLA is measured against. The smaller the window, the tighter the tolerances. It probably doesn’t make sense to define your SLA in terms of hourly or daily uptime.”

Microsoft Learn: Design for recoverability and availability in Azure: High Availability

Disaster recovery

As the name suggests, Disaster Recovery (DR) is about recovering from a disaster, whatever that might be.

It could be physical damage to a piece of hardware (a switch, a server) that requires replacement or recovery from backup. It could be a whole server room or datacentre that’s been damaged or destroyed. It could be data loss as a result of malicious or accidental actions by an employee.

This is where DR plans come into play – first analysing the risks that might lead to disaster (including possible data loss and major downtime scenarios) and then looking at recovery objectives – the application’s recovery point objective (RPO) and recovery time objective (RTO).

Quoting Microsoft’s training materials again:

An illustration showing the duration, in hours, of the recovery point objective and recovery time objective from the time of the disaster.

“Recovery Point Objective (RPO): The maximum duration of acceptable data loss. RPO is measured in units of time, not volume: “30 minutes of data”, “four hours of data”, and so on. RPO is about limiting and recovering from data loss, not data theft.

Recovery Time Objective (RTO): The maximum duration of acceptable downtime, where “downtime” needs to be defined by your specification. For example, if the acceptable downtime duration is eight hours in the event of a disaster, then your RTO is eight hours.”

Microsoft Learn: Design for recoverability and availability in Azure: Disaster Recovery

For example, I may have a database that needs to be able to withstand no more than 15 minutes’ data loss and an associated SLA that dictates no more than 4 hours’ downtime in a given period. For that, my RPO is 15 minutes and the RTO is 4 hours. I need to make sure that I take snapshots (e.g. of transaction logs for replay) at least every 15 minutes and that my restoration process to get from offline to fully recovered takes no more than 4 hours (which will, of course, determine the technologies used).
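That example can be expressed as a simple check – worst-case data loss is one full snapshot interval, so the interval must fit within the RPO, and the end-to-end restore must fit within the RTO. A minimal sketch (hypothetical function names, figures matching the example above):

```python
# Sketch: checking a backup/restore design against RPO and RTO targets.
from datetime import timedelta

def meets_rpo(snapshot_interval: timedelta, rpo: timedelta) -> bool:
    """Worst-case data loss is one full snapshot interval."""
    return snapshot_interval <= rpo

def meets_rto(restore_duration: timedelta, rto: timedelta) -> bool:
    """The whole offline-to-recovered process must fit in the RTO."""
    return restore_duration <= rto

rpo = timedelta(minutes=15)
rto = timedelta(hours=4)

meets_rpo(timedelta(minutes=15), rpo)  # True -- just within target
meets_rto(timedelta(hours=5), rto)     # False -- restore takes too long
```

In practice the restore duration is only known by testing the recovery process – another reason DR plans must be exercised, not just written.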

Considerations when creating a DR plan might include:

  • What are the requirements for each application/service?
  • How are systems linked – what are the dependencies between applications/services?
  • How will you recover within the required RPO and RTO constraints?
  • How can replicated data be switched over?
  • Are there multiple environments (e.g. dev, test and production)?
  • How will you recover from logical errors in a database that might impact several generations of backup, or that may have spread through multiple data replicas?
  • What about cloud services – do you need to backup SaaS data (e.g. Office 365)? (Possibly not, if you’re happy with a retention-period based restoration from a “recycle bin” or similar but what if an administrator deletes some data?)

As can be seen, there are many factors here – more than I can go into in this blog post, but a disaster recovery strategy needs to consider backup/recovery, archive, availability (high or otherwise), technology and service (it may help to think about some of the ITIL service design processes).

In summary, disaster recovery is about having a plan to be able to recover from an event that results in downtime and data loss.

Technologies that might help include Azure Site Recovery. Applications can also be designed with data replication and recovery in mind, for example, using geo-replication capabilities in Azure Storage/Amazon S3 or Azure SQL Database/Amazon RDS, or using a globally-distributed database such as Azure Cosmos DB. And DR plans must be periodically tested.

Business continuity

Finally, Business Continuity (BC). This is something that many organisations will have had to contend with over the last few weeks and months.

BC is often confused with DR but they are different. Business continuity is about continuing to conduct business when something goes wrong. That may mean carrying on working whilst recovering from a disaster. Or it may be how to adapt processes to allow a workforce to continue functioning in compliance with social distancing regulations.

Again, BC needs a plan. But many of those plans will be reconsidered now – if your BC arrangements are that, in the event of an office closure, people go to a hosted DR site with some spare equipment that will be made available within an agreed timescale, that might not help in the event of a global pandemic, when everyone else wants to use that facility. Instead, how will your workforce continue to work at home? Which systems are important? How will you provide secure remote access to those systems? (How will you serve customers whilst employees are also looking after children?) The list goes on.

Technology may help with BC, but technology alone will not provide a solution. The use of modern approaches to End User Computing will certainly make secure remote and mobile working a possibility (indeed, organisations that have taken a modern approach will probably already be familiar with those practices) but a lot of the issues will relate to people and process.

In summary, Business Continuity plans may be invoked if there is a disaster but they are about adapting business processes to maintain service in times of disruption.

Wrapping up

As I was writing this post, I thought about many tangents that I could go off and cover. I’m pretty sure the topic could be a book and this post only scratches the surface. Nevertheless, I hope my thoughts are useful and show that disaster recovery cannot be considered in isolation.

Weeknote 18/2020: Microsoft 365, the rise of the humans and some data platform discovery

Some highlights from the last week of “lockdown” lunacy*…

Office 365 rebranding to Microsoft 365

For the last couple of years, Microsoft has had a subscription bundle called Microsoft 365, which includes Office 365 Enterprise, Enterprise Mobility and Security and Windows 10 Enterprise. Now some bright spark has decided to rebrand some Office 365 products as Microsoft 365. Except for the ones that they haven’t (Office 365 Enterprise E1/3/5). And Office 365 ProPlus (the subscription-based version of the Office applications) is now “Microsoft 365 Apps for Enterprise”. Confused? Join the club…

Read more on the Microsoft website.

The Rise of the Humans

A few years ago, I met Dave Coplin (@DCoplin). At the time he was working for Microsoft, with the assumed title of “Chief Envisioning Officer” (which was mildly amusing when he was called upon to interview the real Microsoft CEO, Satya Nadella at Future Decoded). Dave’s a really smart guy and a great communicator with a lot of thoughts about how technology might shape our futures so I’m very interested in his latest project: a YouTube Channel called The Rise of the Humans.

Episode 1 streamed on Wednesday evening and featured a discussion on Algorithmic Bias (and why it’s so important to understand who wrote an algorithm that might be judging you) along with some discussion about some of the tech news of the week and “the new normal” for skills development, education and technology. There’s also a workshop to accompany the podcast, which I intend to try out with my family…

Data Platform Discovery Day

I spent Thursday in back-to-back webcasts, but that was a good thing. I’d stumbled across the presence of Data Platform Discovery Day and I joined the European event to learn about all sorts of topics, with talks delivered by MVPs from around the world.

The good thing for me was that the event was advertised as “level 100” and, whilst some of the presenters struggled with that concept, I was able to grasp just enough knowledge on a variety of topics including:

  • Azure Data Factory.
  • Implementing Power BI in the enterprise.
  • An introduction to data science.
  • SQL Server and containers.
  • The importance of DevOps (particularly apt as I finished reading The Phoenix Project this week).
  • Azure SQL Database Managed Instances.
  • Data analysis strategy with Power BI.

All in all, it was a worthwhile investment of time – and there’s a lot there for me to try and put into practice over the coming weeks.

2×2

I like my 2x2s, and found this one that may turn out to be very useful over the coming weeks and months…

Blogging

I wrote part 2 of my experiences getting started with Azure Sphere, this time getting things working with a variety of Azure Services including IoT Hub, Time Series Insights and IoT Central.

Decorating

I spent some time “rediscovering” my desk under the piles of assorted “stuff” this week. I also, finally, put my holographic Windows 2000 CD into a frame and it looks pretty good on the wall!

* I’m just trying to alliterate. I don’t really think social distancing is lunacy. It’s not lockdown either.