Weeknote 2024/07: pancakes; cycle races; amateur radio; flooding; and love stories

The feedback I receive on these weeknotes is generally something like “I’m enjoying your weekly posts Mark – no idea how you find the time?”. The answer is that 1) I work a 4-day week; and 2) I stay up far too late at night. I also write them in bits, as the week progresses. This week has been a bit of a rollercoaster though, with a few unexpected changes of direction, and consequently quite a few re-writes.

This week at work

I had planned to take an extra day off this week which looked like it was going to squeeze things a bit. That all changed mid-week, which gave me a bit more time to move things forward. These were the highlights:

This week away from work

Last weekend

I was cycle coaching on Saturday, then dashed home as my youngest son, Ben, said he would be watching the rugby at home instead of with his mates. England vs. Wales is the most important Six Nations fixture in my family. My Dad was Welsh. He wasn’t big into sport, but, nevertheless I remember watching 15 men in red shirts running around with an oval ball with him. Nikki’s Dad was Welsh too. Even though we were both born in England, that makes our sons two-quarters Welsh. Cymru am byth! Sadly, the result didn’t quite go our way this year – though it was closer than I’d dared dream.

On Sunday, our eldest son, Matt, was racing the Portsdown Classic. It’s the first road race of the season and there were some big names in there. Unfortunately, he didn’t get the result he wanted – he’s finding he has the power but is still learning to race – but he did finish just ahead of Ed Clancy OBE, so that’s something to remember.

I’m just glad he avoided this (look carefully and Matt can be seen in white/blue on a grey bike with white decals on the wheels, very close to the verge on the left, just ahead of the crash).

The rest of the week

Our town, Olney, has celebrated Shrove Tuesday with a pancake race since 1445. It even features on the signs as you drive into town.

I didn’t see this year’s race as I was working in Derby, then driving back along the motorway in torrential rain, in time for a family meal. We were supposed to be getting together before Matt flew out to Greece for 10 weeks, but those plans fell apart just 2 days before his outbound flight. Thankfully he’s sorted a plan B, but I’m not writing about it until it actually happens!

For a couple of years I’ve struggled to ride with Matt without him finding it too easy (and actually getting cold). I miss my riding buddy, but it was good to hear him say he’d like to ride with me again if I can get back into shape. Right. That’s my chance. Whilst he is away it’s time to get back on Zwift and prepare for a summer on the real bike. I need to lose at least 20kgs too, but that’s going to take a while…

…which reminds me. I must find a way to pull all my information from the Zoe app before my subscription expires.

As last Sunday’s bike race was “only” around 75km, I didn’t have any roadside bottle-passing duties so I took “the big camera” (my Nikon D700 DSLR). Then, I got home and realised my digital photography workflow has stopped flowing. My Mac Mini has run out of disk space. My youngest son, Ben, now uses my MacBook for school. And my Windows PC didn’t want to talk to the D700 (until I swapped cables – so that must have been the issue). It took me a while, but I eventually managed to pull a few half-decent images out of the selection. You can see them below, under “this week in photos”. I love using the DSLR, but do wish it had the connectivity that makes a smartphone so much more convenient.

The Portsdown Classic was my first opportunity to take a hand-held radio to a race. I’d seen spectators using them at other National Races last year but I didn’t have the equipment. I’d asked someone what they used and considered getting a Baofeng UV-5R but didn’t actually get around to clicking “buy now”. Then Christian Payne (Documentally) gifted me a Quansheng UV-K5(8) at Milton Keynes Geek Night. A chat with a friendly NEG rider and a little bit of homework told me which frequencies British Cycling uses. It was fascinating to be able to listen to the race convoy radio, both when driving behind the convoy at the start of the race and then when spectating (at least when the race was within radio range).

Listening in on the action gave me a whole new perspective on the race. So much so that I’m considering completing the ConvoyCraft training to be able to drive an official event car.

I mentioned that Christian had gifted me a radio last December. That was on condition that I promised to take the exam for my RSGB Foundation Licence. Well, I took it this morning and passed. The results are provisional but, assuming all goes well and I get my licence from Ofcom, I’ll write another post about that journey into the world of RF and antennae…

Finally, I wrapped up the week by meeting up with my former colleague, manager, and long-time mentor, Mark Locke. I learned a lot from Mark in my days at ICL and Fujitsu (most notably when I was a wet-behind-the-ears Graduate Trainee in the “Workgroup Systems” consultancy unit we were a part of in the early days of Microsoft Exchange, Novell GroupWise and Lotus Notes; and later working for Mark on a major HMRC infrastructure project); he was the one who sponsored me into my first Office of the CITO role for David Smith, back in 2010; and we’ve remained friends for many years. It was lovely to catch up on each other’s news over a pint and a spot of lunch.

This week in TV/video

My wife and I started watching two new TV series this last week. Both are shaping up well, even if one is a rom-com (not normally my favourite genre):

This week in photos

Elsewhere on the Internet

In tech

At least one good thing came out of the VMware-Broadcom situation:

The NCSC appears to have rebranded 2FA/MFA as 2SV:

But this. This is a level of geekiness that I can totally get behind:

Even I have to accept that playing Snake on network switches is a little too niche though:

Close to home

The river Great Ouse in Olney saw the biggest floods I can remember (for the second time this winter). The official figures suggest otherwise but they measure at the sluice – once the river bursts its banks (as it now does) the sluice is bypassed through the country park and across fields. The drone shots are pretty incredible.

This is a fantastic project. The pedant in me can almost forgive the errant apostrophe in the final frames of the video because the concept is so worthwhile:

Underground-Overground

Transport for London decided to rename six formerly “Overground” lines. This is one of the more educational stories about it:

It’s not the first time naming these lines has been proposed:

But British Twitter stepped up to the mark and delivered its own commentary:

Or at least some of British Twitter. Those outside the gravitational pull of London were less bothered:

St Valentine’s Day

Every now and again, the social networks surface something really wholesome. This week I’ve picked three St Valentine’s Day posts. Firstly, from “the Poet Laureate of Twitter”, Brian Bilston:

And then this lovely story (pun entirely intended) from Heather Self (click through for the whole thread of three posts):

This one just made me giggle:

Coming up

The coming weekend will be a busy one. Ben is heading off to the West Country for a few days away with his friends. It’s also Nikki’s birthday… but I won’t spill the beans here about any plans because she has been known to read these posts. And then, hopefully, on Monday, Matt will finally get away to train in a sunnier climate for a while.

Next week is half term but with both the “boys” away it will be quiet. When they are at home, we have the normal chaos of a busy family with two sporty teenagers. When they are away it’s nice to enjoy some peace (and a slightly less messy house), but it sometimes feels just a little odd.

Right, time to hit publish. I have a birthday cake to bake…

Featured image by -Rita- from Pixabay.

Removing password protection from PDF files

Important note: this post won’t help you if you have a PDF file and don’t know the password. This is for removing passwords from PDFs that you have legal access to, but don’t want to be password-protected any more.

A while ago, one of my employers started emailing payslips in PDF format. Now, I know there are many issues around accessibility with PDFs, but it works for me – I get a digital version of a document that looks exactly as the printed one would have. Except that someone decided email (even to a company-secured account) was not secure enough, and they password-protected the files. In theory, this stops another employee from opening my payslip. In practice, they used a known piece of personally identifiable information (PII).

Anyway, I wanted to keep a copy of the files on my own file storage. I can do this because, technically, they are not company data and they are (or at least should be) private to me. Indeed the company in question has since moved to a system that emails a link to a personal email account, inviting the employee to download their payslip from a portal.

I didn’t want the copies of the payslips that I held to be password protected. That meant I needed to remove those passwords.

QPDF

QPDF is a computer program, and associated library, for structural, content-preserving transformations on PDF files. It’s not for creating, viewing or converting PDF files.

One of the things it can do, is remove the password protection on a file. Remember, this is a file that I have legal access to, so removing the password protection is not a crime. I’m not hacking the file – in fact I need to know the password in order to remove it.

QPDF can do much more than remove passwords (for example I think I could use it to create new versions of a PDF file with just a subset of the pages), but this was what I needed to do.

A little side-note

This was the second time I performed this exercise. I first did it a few years ago, but only on the payslips I’d received up until that date. Later ones were still password-protected. I didn’t document my method the first time around though… so I had to work it all out again. This time I decided to write it down…

A little PowerShell Script

It looks like, the first time I ran this, I downloaded a Windows executable version of QPDF and either wrote, or more likely found, a PowerShell script to adapt. The script is called payslips.ps1 and looks like this:

$children = Get-ChildItem # Save files in a variable. Piping the rest of the script from Get-ChildItem in a single line was a bad idea
$children | ForEach-Object {
Write-Debug "Working on $($_.Name)"; # Doesn't actually display a lot
$fileName = [System.IO.Path]::GetFileNameWithoutExtension($_.Name); # Strip the extension; we will append "_tmp"
$ext = [System.IO.Path]::GetExtension($_.Name);
$tempFile = $fileName + "_tmp" + $ext; # Append "_tmp"
Move-Item -Path $_.Name -Destination $tempFile; # Move the file to a temporary location
..\qpdf.exe --password=AB123456C --decrypt $tempFile $_.Name; # Use qpdf to decrypt it, saving in the original location
#Remove-Item $tempFile # Remove the temporary file
}

AB123456C should be replaced with the actual password. Actually, it shouldn’t, because including credentials in code is sloppy security practice. There are better ways to pass the password, but I’m just converting 50 files as a one-off exercise, not building a repeatable business process. If you go on to use this in a business environment, please don’t do it this way!

Release notes

The script makes a temporary copy of each file, suffixed with _tmp but preserving the file extension.

If you run the script against the current folder, it will run against all files, not just PDFs. That means it will rename itself and all the QPDF files with _tmp. This will cause it to fail.

It looks like, when I ran this a few years ago, I used a files.txt file to control this behaviour. files.txt was just a list of filenames and is easily generated using the following command:

dir /b /a-d > files.txt

But, this time, I couldn’t see how to provide that as a parameter to QPDF, so I had to:

  1. Place all the files to be converted in a subfolder of the folder containing QPDF and my PowerShell script.
  2. Edit the payslips.ps1 script to refer to ..\qpdf.exe (i.e. qpdf.exe in the folder above the current one).
  3. Change directory into the subfolder.
  4. Run payslips.ps1 from the subfolder – i.e.:
..\payslips.ps1

This means it will only run against the files in the subfolder, and not against QPDF, the script, or anything else.

It doesn’t remove the temporary files (the Remove-Item line is commented out). I didn’t try to fix that – it had already created what I needed by then.
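
For what it’s worth, the same rename-then-decrypt loop can be sketched in Python, filtering to *.pdf files so it can safely sit alongside QPDF and the script itself. This is my own illustration, not part of the original workflow: decrypt_command and decrypt_all are hypothetical helper names, and it assumes qpdf is on the PATH and that the password is known.

```python
import subprocess
from pathlib import Path

def decrypt_command(pdf: Path, password: str) -> list[str]:
    # Build the qpdf invocation; the "_tmp" copy keeps qpdf's input and output distinct
    tmp = pdf.with_name(pdf.stem + "_tmp" + pdf.suffix)
    return ["qpdf", f"--password={password}", "--decrypt", str(tmp), str(pdf)]

def decrypt_all(folder: Path, password: str) -> None:
    for pdf in folder.glob("*.pdf"):  # only PDFs, so the script and qpdf.exe are left alone
        tmp = pdf.with_name(pdf.stem + "_tmp" + pdf.suffix)
        pdf.rename(tmp)               # move the original aside, as in the PowerShell version
        subprocess.run(decrypt_command(pdf, password), check=True)
        tmp.unlink()                  # this version does remove the temporary file
```

Because it only globs for PDFs, there’s no need for the subfolder dance described above.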

Featured image: author’s own

Some thoughts on Microsoft Windows Extended Security Updates…

Technology moves quickly. And we’re all used to keeping operating systems on current (or n-1) releases, with known support lifecycles and planned upgrades. We are, aren’t we? And every business application, whether COTS or bespoke, has an owner, who maintains a road map and makes sure that it’s not going to become the next item of technical debt. Surely?

Unfortunately, these things are not always as common as they should be. A lot comes down to the perception of IT – is it a cost centre or does it add value to the business?

Software Assurance and Azure Hybrid Benefit

Microsoft has a scheme for volume licensing customers called Software Assurance. One of the benefits of this scheme is the ability to keep running on the latest versions of software. Other vendors have similar offers. But they all come at a cost.

When planning a move to the cloud, Software Assurance is the key to unlocking other benefits too. Azure Hybrid Benefit is a licensing offer for Windows Server and SQL Server that provides a degree of portability between cloud and on-premises environments. Effectively, the cloud costs are reduced because the on-prem licenses are released and allocated to new cloud resources.

But what if you don’t have Software Assurance? As a Windows operating system comes to the end of its support lifecycle, how are you going to remain compliant when there are no longer any updates available?

End of support for Windows Server 2012/2012 R2

In case you missed it, Windows Server 2012 and Windows Server 2012 R2 reached the end of extended support on October 10, 2023. (Mainstream support ended five years previously.) That means that these products will no longer receive security updates, non-security updates, bug fixes, technical support, or online technical content updates.

Microsoft’s advice is:

“If you cannot upgrade to the next version, you will need to use Extended Security Updates (ESUs) for up to three years. ESUs are available for free in Azure or need to be purchased for on-premises deployments.”

Extended Security Updates

Extended Security Updates are a safety net – even Microsoft describes the ESU programme as:

“a last resort option for customers who need to run certain legacy Microsoft products past the end of support”.

The ESU scheme:

“includes Critical and/or Important security updates for a maximum of three years after the product’s End of Extended Support date. Extended Security Updates will be distributed if and when available.

ESUs do not include new features, customer-requested non-security updates, or design change requests.”

They’re just a way to maintain support whilst you make plans to get off that legacy operating system – which by now will be at least 10 years old.

If your organisation is considering ESUs, the real question to answer is: what are the sticking points that are keeping you from moving away from the legacy operating system? For example:

  • Is it because there are applications that won’t run on a later operating system? Maybe moving to Azure (or to a hybrid arrangement with Azure Arc) will provide some flexibility to benefit from ESUs at no extra cost whilst the app is modernised? (Windows Server and SQL Server ESUs are automatically delivered to Azure VMs if they’re configured to receive updates).
  • Is it a budget concern? In this case, ESUs are unlikely to be a cost-efficient approach. Maybe there’s an alternative – again through cloud transformation, software financing, or perhaps a cloud-to-edge platform.
  • Is it a cash-flow concern? Leasing may be an answer.

There may be other reasons, but doing nothing and automatically accepting the risk is an option that a lot of companies choose… the art (of consulting) is to help them to see that there are risks in doing nothing too.

Featured image by 51581 from Pixabay

Password complexity in the 1940s

Over the last couple of weeks I’ve been fortunate enough to have two demonstrations of Enigma machines. For those who are not familiar with these marvellous mechanical computers, they were used to encrypt communications – most notably by German forces during World War 2.

The first of the demonstrations was at Milton Keynes Geek Night, where PJ Evans (@MrPJEvans) gave an entertaining talk on the original Milton Keynes Geeks.

Then, earlier this week, I was at Bletchley Park for Node4’s Policing First event, which wrapped up with an Enigma demonstration from Phil Simons.

The two sessions were very different in their delivery. PJ’s used Raspberry Pi and web-based emulators, along with slides and a demonstration with a ball of wool. Phil was able to show us an actual Enigma machine. What struck me, though, was the weakness that ultimately led to Bletchley Park cracking wartime German encryption codes. It wasn’t the encryption itself, but the way human operators used it.

Downfall

The Enigma machine was originally invented for encrypted communications in the financial services sector. By the time the German military was using it in World War 2, the encryption was very strong.

Despite having just 26 characters, each one was encoded as an electrical signal which passed through three rotors (selected from a set of five and changed daily), each with a different start position and incrementing on each key press, plus a plugboard of ten electrical circuits that further increased the complexity.

There’s a good description of how the Enigma machine works on Brilliant. To cut a long story short, an Enigma machine can be set up in 158,962,555,217,826,360,000 ways. Brute force attacks are just not credible. Especially when the setup changes every day and each military network has a different encryption setup.
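
That headline number can be sanity-checked with a few lines of arithmetic. This is my own back-of-the-envelope sketch, using the usual breakdown of three rotors chosen from five, 26³ start positions and ten plugboard leads:

```python
from math import factorial

rotor_orders = 5 * 4 * 3   # three rotors chosen from a set of five, order matters
start_positions = 26 ** 3  # each rotor can start at any of 26 letters
# ten plugboard leads pair up 20 of the 26 letters (6 left unplugged)
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

total = rotor_orders * start_positions * plugboard
print(total)  # 158962555217826360000
```

Most of the keyspace comes from the plugboard, which is one reason brute force was never the way in.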

But there were humans involved:

  • Code books were needed so that the sending and receiving stations set their machines up identically each day.
  • Young soldiers on the front line took short-cuts, like re-using rotor start positions: they would spell out things like BER or PAR (their home city, where they were stationed, a girlfriend’s name, etc.).
  • Some networks issued guidance that all 26 letters needed to be used for rotor start positions over each 26-day period. This had the unintended consequence that the letter in use became predictable: it actually reduced the combinations, as it couldn’t be one of those already used in the cycle.
  • Then there was the flaw that an Enigma machine’s algorithm was designed to take one letter and output another. Input of A would never result in output of A, for example.
  • And there were common phrases to look for in the messages to test possible encryption combinations – like WETTERBERICHT (weather report).
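
The self-encryption flaw and the crib idea combine neatly. Sliding a suspected phrase along an intercepted message, any alignment where a crib letter coincides with the ciphertext letter is impossible and can be thrown away. A toy sketch of that test (the intercept below is entirely made up for illustration):

```python
ciphertext = "QFZWRWIVTYRESXBFOGKUHQBAISE"  # hypothetical intercept
crib = "WETTERBERICHT"                       # suspected plaintext fragment

# Keep only the alignments where no plaintext letter lines up with itself
valid = [
    i for i in range(len(ciphertext) - len(crib) + 1)
    if all(p != c for p, c in zip(crib, ciphertext[i:]))
]
```

Each discarded alignment is one fewer position the Bombe machines had to test.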

All of these clues helped the code-breakers at Bletchley Park narrow down the combinations. That gave them the head start they needed to try to brute-force the encryption on a message.

Why is this relevant today?

By now, you’re probably thinking “that’s a great history lesson Mark, but why is it relevant today?”

Well, we have the same issues in modern IT security. We rely on people following policies and processes. And people look for shortcuts.

Take password complexity as an example. The UK National Cyber Security Centre (NCSC) specifically advises against enforcing password complexity requirements. Users will work around the requirements with predictable outcomes, and that actually reduces security. Just like with the “use all 26 letters in 26 days” guidance I cited in my Enigma history lesson above.
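
To put a rough number on that effect, compare a free choice of rotor start letter each day with the “use every letter once per 26 days” policy, under which the remaining choices shrink as the cycle progresses. This is my own illustration, not a figure from the NCSC or the Enigma literature:

```python
from math import factorial, log2

days = 26
free_choice = 26 ** days       # any start letter on any day: 26 options every time
use_each_once = factorial(26)  # policy: 26 options, then 25, ... then just 1

print(round(log2(free_choice)))   # uncertainty without the policy, in bits
print(round(log2(use_each_once))) # uncertainty with the policy - much less
```

The well-meaning rule hands the attacker tens of bits of predictability, exactly the shape of problem that complexity rules create for passwords.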

And yet, only last month, I was advising a client whose CIO peers maintain that password complexity should be part of the approach.

One more thing… the Germans tried to crack Allied encryption too. They gave up after a while because it was difficult – they assumed if they couldn’t crack ours then we couldn’t crack theirs. But, whilst German command was distributed, the Allies set up what we would now call a “centre of excellence” in Bletchley Park. And that helped to bring together some of our greatest minds, along with several thousand support staff!

Postscript

After I started to write this post, I was multitasking on a Teams call. I should have concentrated on just one thing. Instead, I went to open a DocuSign link from the company HR department and fell foul of a phishing simulation exercise. I’m normally pretty good at spotting these things but this time I was distracted. As a result, I clicked the (potentially credible) link without checking it. If you want an illustration of how fallible humans are, that’s one right there!

Featured image: author’s own.

Weeknotes 18-19/2021: Doubling up

This content is 3 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last week didn’t have a weeknote. I just didn’t get around to it! To be perfectly honest, my weekends are packed with cycling-related activities at the moment and work has been pretty busy too… so here’s a bumper fortnight-note. Even this is delayed because I locked myself out of WordPress with too many incorrect login attempts… but the very fact I managed to post this indicates that I got in again!

Working

There’s much I can write about my work at the moment but we are approaching my annual review. That means I’ve spent a lot of time reflecting on the last 12 months and looking forward to where I need things to head in the coming weeks and months. It’s not been a wonderful year: although my family has been fortunate to avoid Covid-19 we’re still living in strange times and I really could do with leaving my home office for the odd day here and there. Procrastination levels are certainly up, followed by evening catch-up sessions. That could be another reason there was no week note last week…

Learning

I did manage to squeeze in another exam. It’s one of the Microsoft Fundamentals series: Microsoft Azure Data Fundamentals (DP-900) and I used Microsoft Learn to prepare, passing with a good score (944).

I’m also really interested in building a body of knowledge around sustainable IT and I worked my way through the Sustainable IT MOOC from the Institut du Numérique Responsable’s ISIT Academy. Not surprisingly, some of the statistics are French-specific but, in general I found the content interesting and enlightening. Definitely worth a few hours for anyone with an interest in the topic.

Watching

I’m a heavy social media user and I’m under no illusions about what that means in terms of my privacy. I often say that, if you’re not paying for the product, you are the product. Even so, my wife and I watched The Social Dilemma on Netflix a couple of nights ago. Highly recommended for anyone who uses… well… the Internet. So, pretty much everyone then.

Cycling

After riding England Coast to Coast (C2C) on The Way of the Roses a couple of years ago, I’ve been planning my next big cycling trip.

My eldest son and I were planning to head to the French Alps after his GCSEs this summer but, well, that was before a global pandemic messed up our plans. So we’ve been looking for something a little closer to home. We’re planning on riding the length of Wales – from Cardiff to Holyhead on Lôn Las Cymru.

After booking all the hotels, and the train travel to return from Holyhead (5.5 hours, via England, with a change mid-way at Shrewsbury), the biggest challenge was booking 2 spaces for bikes on the train. I had similar issues with the C2C and I’m just hoping that I manage to make the cycle reservations nearer the time. I certainly can’t allow myself to stress about it for the whole 4-day ride up!

Something that will almost certainly come in useful on that trip is the pair of waterproof socks I bought from Sealskinz… they are fantastic:

Still on the subject of cycling, the Trek X-Caliber 9 mountain bike that I bought last autumn is back in the workshop. It’s 6 months old, with just 300km on the clock and the forks have gone back for warranty repairs (and that’s after the headset bearings already had to be replaced because they were not fitted correctly in the factory). More generally, there’s a big problem with bike part availability in the UK right now – partly Brexit-related (inability to buy from some EU-based vendors) but some general supply issues with some parts on back order until 2023.

Meanwhile, I’m finding more and more of my weekends involve supporting my eldest son with his racing (either cross-country or cyclo-cross, with the occasional road circuit). One bonus was that the usual Saturday Youth Coaching session was replaced by a pleasurable gravel ride (and pub garden visit) this week due to non-availability of our usual venue.

Random techie stuff

The last few weeks in pictures

Weeknote 12/2021: IT architecture, design thinking and hybrid work

This content is 3 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ve tried writing weeknotes a few times over the years and they have been pretty sporadic. So, let’s give it another go… this should probably be weeknote 28 (or something like that) but it seems last year I named them after the week number in the year… so let’s try that again.

Because I haven’t done this for a while, let’s add some bonus notes for last week too…

Last week:

This week:

  • I published my long-form blog post on developing IT architecture skills, spun out from conversations with Matt Ballantine (@ballantine70) but also part of the work I’m doing to develop my team at risual.
  • My technical training was interrupted to complete the Microsoft Catalyst pre-sales training. It started off as what I may have described as a “buzzword-filled gamified virtual learning experience”. Then, I started to learn some consulting skills as Rudy Dillenseger brought Design-Led Thinking (aka Design Thinking) to life.
  • It was interesting to see Microsoft recommending the use of Klaxoon with Teams when facilitating remote workshops, which made me speculate about the future of Microsoft Whiteboard.
  • It was a week of virtual calls – even in the evenings. I had Zoom calls with British Cycling and for some financial advice but also a really pleasurable couple of hours on Signal chatting with an old mate I haven’t seen or spoken to in a while, who now lives overseas. It was definitely one of those moments when I appreciated a good friendship and it made me think “we should do this more often”.
  • Just when I thought I’d handed off some project management duties to a real PM, they bounced back at me like a boomerang…
  • The UK Government’s comments on returning to work (ahem, we have been working, just not in the office) reminded me of a post I wrote at the start of the year. Hybrid working is the future folks – we ain’t going back to 2019

The last couple of weeks’ photos

Bulk removing passwords from PDF documents

This content is 4 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

My payslip and related documents are sent to me in PDF format. To provide some rudimentary protection from interception, they are password protected, though the password is easily obtained by anyone who knows what the system is.

Because these are important documents, I store a copy in my personal filing system, but I don’t want to have to enter the password each time I open a file. I know I can open each file individually and then resave without a password (Preview on the Mac should do this) but I wanted a way to do it in bulk, for 10s of files, without access to Adobe Acrobat Pro.

Twitter came to my aid with various suggestions including Automator on the Mac. In the end, the approach I used employed an open source tool called QPDF, recommended to me by Scott Cross (@ScottCross79). Scott also signposted a Stack Overflow post with a PowerShell script to run against a set of files but it didn’t work (leading to a rant about how Stack Overflow’s arcane rules and culture prevented me from making a single character edit) and turned out to be over-engineered. It did get me thinking though…

Those of us old enough to remember writing MS-DOS batch files will probably remember setting environment variables. Combined with a good old FOR loop, I got this:

FOR %G IN (*.pdf) DO qpdf --decrypt --password=mypassword "%G" --replace-input

Obviously, replace mypassword with something more appropriate. The --replace-input switch avoids the need to specify output filenames, and the use of the FOR command simply cycles through an entire folder and removes the encryption. (If you put this in a batch file rather than typing it at the prompt, remember to double the percent signs: %%G.)

Weeknote 22/2020: holidaying on the Costa del Great Ouse (plus password resets, cycling performance, video-conferencing equipment and status lights)

This content is 4 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In the last few hours of 2019, my family planned our holiday. We thought we had it all sorted – fly to Barcelona, spend the weekend sight-seeing (including taking my football-mad son to Camp Nou) and then head up the coast for a few more days in the Costa Brava. Flights were booked, accommodation was sorted, trips were starting to get booked up.

We hadn’t counted on a global pandemic.

To be clear, I’m thankful that I, my family and friends, and those around us are (so far) safe and well. By April, I didn’t much like the prospect of getting into a metal tube with 160+ strangers and flying for 3 hours in each direction. We’re also incredibly lucky to be able to access open countryside within a couple of hundred metres of our house, so daily exercise is still possible and enjoyable, with very few people around, most of the time.

I still took the week off work though. After cancelling my Easter break, it’s been a while since I took annual leave and even my Furlough period was not exactly relaxing, so I could do with a rest.

The weather has been glorious in the UK this week too, making me extra-glad we re-landscaped the garden last year and I’ve spent more than a few hours just chilling on our deck.

Unfortunately, we also got a taste of what it must be like to live in a tourist hotspot, as hundreds of visitors descended on our local river each day this weekend. It seems the Great Ouse at Olney has appeared in a list of top places to swim in Britain, recently published in The Times. It may sound NIMBYish, but please can they stay away until this crisis is over?

As for the holiday, hopefully, we’ll get the money refunded for the cancelled flights (if the airlines don’t fold first – I’m sure that if they refunded everyone they would be insolvent, which is my theory for why they are not increasing staff levels to process refunds more quickly); FC Barcelona contacted me weeks ago to extend my ticket and offer a refund if we can’t use it; and AirBnB had the money back in our account within days of us being forced to pull out due to cancelled flights.

(I did spend a few weeks effectively “playing chicken” with easyJet to see if they would cancel first, or if it would be us. An airline-cancelled flight can be refunded, but a consumer-cancelled flight would be lost, unless we managed to claim on travel insurance).

Even though I’ve had a week off, I’ve still been playing with tech. Some of my “projects” should soon have their own blog post (an Intel NUC for a new Zwift PC; migrating my wife’s personal email out of my Office 365 subscription to save me a licence; and taking a look at Veeam Backup for Office 365), whilst others get a brief mention below…

Please stop resetting user passwords every x days!

Regularly resetting passwords (unless a compromise is suspected) is an old way of thinking. Unfortunately, many organisations still make users change their password every few weeks. Mine came up for renewal this week and I struggled to come up with an acceptable, yet memorable passphrase. So, guess what? I wrote it down!

I use a password manager for most of my credentials but that doesn’t help with my Windows logon (before I’ve got to my browser). Biometric security like Windows Hello helps too (meaning I rarely use the password, but am even less likely to remember it when needed).

Here’s the National Cyber Security Centre (@NCSC)’s password guidance infographic (used with permission) and the associated password guidance:

This list of 100,000 commonly used passwords that will get blocked by some systems may also be useful – from Troy Hunt (@TroyHunt) but provided to me by my colleague Gavin Ashton (@gvnshtn).
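For the curious, here’s a minimal sketch of how a system might use a list like that, plus the k-anonymity trick that Troy Hunt’s Pwned Passwords range API uses (only the first five characters of the SHA-1 hash ever leave your machine). The tiny in-memory blocklist here is a hypothetical stand-in for the real 100,000-entry file.

```python
import hashlib

# Hypothetical stand-in for the 100,000-entry blocklist (the real file
# is simply one password per line).
COMMON_PASSWORDS = {"password", "qwerty", "123456", "letmein"}

def is_blocked(candidate: str) -> bool:
    """Reject any password that appears in the common-password list."""
    return candidate.lower() in COMMON_PASSWORDS

def hibp_range_prefix(candidate: str) -> str:
    """First 5 hex characters of the SHA-1 hash - the k-anonymity prefix
    the Pwned Passwords range API expects; only this prefix is sent."""
    return hashlib.sha1(candidate.encode()).hexdigest().upper()[:5]

print(is_blocked("Password"))                       # True (case-insensitive)
print(is_blocked("correct horse battery staple"))   # False
print(hibp_range_prefix("password"))                # 5BAA6
```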

Performance analysis for cyclists, by cyclists

I’ve been watching with interest as my occasional cycling buddy (and now Azure MVP) James Randall (@AzureTrenches) has been teasing development on his new cycling performance platform side project. This week he opened it up for early access and I’ve started to road test it… it looks really promising and I’m super impressed that James created this. Check it out at For Cyclists By Cyclists.

Podcasting/video conferencing upgrades in my home office

With video conferencing switching from something-I-use-for-internal-calls to something-I-use-to-deliver-consulting-engagements, I decided to upgrade the microphone and lighting in my home office. After seeking some advice from those who know about such things (thanks Matt Ballantine/@ballantine70 and the WB-40 Podcast WhatsApp group), I purchased a Marantz MPM-1000U microphone, boom arm, shock mount, and a cheap rechargeable LED photography light with tripod.

It’s early days yet but initial testing suggests that the microphone is excellent (although the supplied USB A-B cable is too short for practical use). I had also considered the Blue Yeti/Raspberry but it seems to have been discontinued.

As for the photo lighting, it should be just enough to illuminate my face as the north-facing window to my left often leaves me silhouetted on calls.

Smart lighting to match my Microsoft Teams presence

I haven’t watched the Microsoft Build conference presentations yet, but I heard that Scott Hanselman (@shanselman) featured Isaac Levin (@isaacrlevin)’s PresenceLight app to change the lighting according to his Windows Theme. The app can also be used to change Hue or LIFX lighting along with Teams presence status, so that’s in place now outside my home office.

It’s not the first time I’ve tried something like this:

One particularly useful feature is that I can be logged in to one tenant with the PresenceLight app and another in Microsoft Teams on the same PC – that means that I can control my status with my personal persona so I may be available to family but not to colleagues (or vice versa).

One more thing…

It may not be tech-related, but I also learned the differences between wheat and barley this week. After posting this image on Instagram, Twitter was quick to correct me:

As we’re at the end of May, that’s almost certainly not wheat…

Microsoft Ignite | The Tour: London Recap

This content is 5 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

One of the most valuable personal development activities in my early career was a trip to the Microsoft TechEd conference in Amsterdam. I learned a lot – not just technically but about making the most of events to gather information, make new industry contacts, and generally top up my knowledge. Indeed, even as a relatively junior consultant, I found that dipping into multiple topics for an hour or so gave me a really good grounding to discover more (or just enough to know something about the topic) – far more so than an instructor-led training course.

Over the years, I attended further “TechEd”s in Amsterdam, Barcelona and Berlin. I fought off the “oh Mark’s on another jolly” comments by sharing information – incidentally, conference attendance is no “jolly” – there may be drinks and even parties but those are after long days of serious mental cramming, often on top of broken sleep in a cheap hotel miles from the conference centre.

Microsoft TechEd is no more. Over the years, as the budgets were cut, the standard of the conference dropped and in the UK we had a local event called Future Decoded. I attended several of these – and it was at Future Decoded that I discovered risual – where I’ve been working for almost four years now.

Now, Future Decoded has also fallen by the wayside, and Microsoft has focused on taking its principal technical conference – Microsoft Ignite – on tour, delivering global content locally.

So, a few weeks ago, I found myself at the ExCeL conference centre in London’s Docklands, looking forward to a couple of days at “Microsoft Ignite | The Tour: London”.

Conference format

Just like TechEd, and at Future Decoded (in the days before I had to use my time between keynotes on stand duty!), the event was broken up into tracks with sessions lasting around an hour. Because that was an hour of content (and Microsoft event talks are often scheduled as an hour, plus 15 minutes Q&A), it was pretty intense, and opportunities to ask questions were generally limited to trying to grab the speaker after their talk, or at the “Ask the Experts” stands in the main hall.

One difference to Microsoft conferences I’ve previously attended was the lack of “level 400” sessions: every session I saw was level 100-300 (mostly 200/300). That’s fine – that’s the level of content I would expect but there may be some who are looking for more detail. If it’s detail you’re after then Ignite doesn’t seem to be the place.

Also, I noticed that Day 2 had fewer delegates and lacked some of the “hype” from Day 1: whereas the Day 1 welcome talk was over-subscribed, the Day 2 equivalent was almost empty and light on content (not even giving airtime to the conference sponsors). Nevertheless, it was easy to get around the venue (apart from a couple of pinch points).

Personal highlights

I managed to cover 11 topics over two days (plus a fair amount of networking). The track format of the event was intended to let a delegate follow a complete learning path but, as someone who’s a generalist (that’s what Architects have to be), I spread myself around to cover:

  • Dealing with a massive onset of data ingestion (Jeramiah Dooley/@jdooley_clt).
  • Enterprise network connectivity in a cloud-first world (Paul Collinge/@pcollingemsft).
  • Building a world without passwords.
  • Discovering Azure Tooling and Utilities (Simona Cotin/@simona_cotin).
  • Selecting the right data storage strategy for your cloud application (Jeramiah Dooley/@jdooley_clt).
  • Governance in Azure (Sam Cogan/@samcogan).
  • Planning and implementing hybrid network connectivity (Thomas Maurer/@ThomasMaurer).
  • Transform device management with Windows Autopilot, Intune and OneDrive (Michael Niehaus/@mniehaus and Mizanur Rahman).
  • Maintaining your hybrid environment (Niel Peterson/@nepeters).
  • Windows Server 2019 Deep Dive (Jeff Woolsey/@wsv_guy).
  • Consolidating infrastructure with the Azure Kubernetes Service (Erik St Martin/@erikstmartin).

In the past, I’d have written a blog post for each topic. I was going to say that I simply don’t have the time to do that these days but by the time I’d finished writing this post, I thought maybe I could have split it up a bit more! Regardless, here are some snippets of information from my time at Microsoft Ignite | The Tour: London. There’s more information in the slide decks – which are available for download, along with the content for the many sessions I didn’t attend.

Data ingestion

Ingesting data can be broken into:

  • Real-time ingestion.
  • Real-time analysis (see trends as they happen – and make changes to create a competitive differentiator).
  • Producing actions as patterns emerge.
  • Automating reactions in external services.
  • Making data consumable (in whatever form people need to use it).

Azure has many services to assist with this – take a look at IoT Hub, Azure Event Hubs, Azure Databricks and more.
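The stages above can be sketched as a toy pipeline: ingest events, analyse them as they arrive, and trigger an action when a pattern emerges. Everything here is a hypothetical stand-in – in practice services like IoT Hub, Event Hubs and Stream Analytics would play these roles.

```python
from collections import deque

class RollingIngest:
    """Toy real-time ingestion: keep a sliding window of readings and
    trigger an action when the rolling average crosses a threshold
    (a hypothetical stand-in for Event Hubs plus stream processing)."""
    def __init__(self, window: int = 5, threshold: float = 50.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold
        self.alerts = []

    def ingest(self, value: float) -> None:
        self.window.append(value)
        avg = sum(self.window) / len(self.window)
        if avg > self.threshold:
            # "Automating reactions in external services" would go here,
            # e.g. calling an Azure Function or Logic App.
            self.alerts.append(avg)

pipe = RollingIngest(window=3, threshold=10.0)
for reading in [2, 4, 9, 30, 5]:
    pipe.ingest(reading)
print(pipe.alerts)  # averages that crossed the threshold
```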

Enterprise network connectivity for the cloud

Cloud traffic is increasing whilst traffic that remains internal to the corporate network is in decline. Traditional management approaches are no longer fit for purpose.

Office applications use multiple persistent connections – this causes challenges for proxy servers which generally degrade the Office 365 user experience. Remediation is possible, with:

  • Differentiated traffic – follow Microsoft advice to manage known endpoints, including the Office 365 IP address and URL web service.
  • Let Microsoft route traffic (data is in a region, not a place). Use DNS resolution to egress connections close to the user (a list of all Microsoft peering locations is available). Optimise the route length and avoid hairpins.
  • Assess network security using application-level security, reducing IP ranges and ports and evaluating the service to see if some activities can be performed in Office 365, rather than at the network edge (e.g. DLP, AV scanning).

For Azure:

  • Azure ExpressRoute is a connection to the edge of the Microsoft global backbone (not to a datacentre). It offers 2 lines for resilience and two peering types at the gateway – private and public (Microsoft) peering.
  • Azure Virtual WAN can be used to build a hub for a region and to connect sites.
  • Replace branch office routers with software-defined (SDWAN) devices and break out where appropriate.
Microsoft global network

Passwordless authentication

Basically, there are three options:

  • Windows Hello.
  • Microsoft Authenticator.
  • FIDO2 Keys.

Azure tooling and utilities

Useful resources include:

Selecting data storage for a cloud application

What to use? It depends! Classify data by:

  • Type of data:
    • Structured (fits into a table)
    • Semi-structured (may fit in a table but may also use outside metadata, external tables, etc.)
    • Unstructured (documents, images, videos, etc.)
  • Properties of the data:
    • Volume (how much)
    • Velocity (change rate)
    • Variety (sources, types, etc.)

  Item               Type             Volume  Velocity  Variety
  Product catalogue  Semi-structured  High    Low       Low
  Product photos     Unstructured     High    Low       Low
  Sales data         Semi-structured  Medium  High      High

How to match data to storage:

  • Storage-driven: build apps on what you have.
  • Cloud-driven: deploy to the storage that makes sense.
  • Function-driven: build what you need; storage comes with it.
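To make the classification concrete, here’s a rough, entirely illustrative decision function mapping data type and velocity to a storage service. The mappings are my own simplification for the examples in the table above, not official Microsoft guidance – real choices need far more context.

```python
def suggest_storage(kind: str, velocity: str) -> str:
    """Very rough illustrative mapping from data classification to an
    Azure storage service. Not official guidance."""
    if kind == "structured":
        return "Azure SQL Database"          # fits in a table
    if kind == "semi-structured":
        # High change rates favour a throughput-tuned NoSQL store.
        return "Azure Cosmos DB" if velocity == "high" else "Azure Table Storage"
    return "Azure Blob Storage"              # unstructured: docs, images, video

print(suggest_storage("semi-structured", "high"))  # e.g. sales data
print(suggest_storage("unstructured", "low"))      # e.g. product photos
```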

Governance in Azure

It’s important to understand what’s running in an Azure subscription – consider cost, security and compliance:

  • Review (and set a baseline):
    • Tools include: Resource Graph; Cost Management; Security Center; Secure Score.
  • Organise (housekeeping to create a subscription hierarchy, classify subscriptions and resources, and apply access rights consistently):
    • Tools include: Management Groups; Tags; RBAC;
  • Audit:
    • Make changes to implement governance without impacting people/work. Develop policies, apply budgets and audit the impact of the policies.
    • Tools include: Cost Management; Azure Policy.
  • Enforce
    • Change policies to enforcement, add resolution actions and enforce budgets.
    • Consider what will happen in the event of non-compliance.
    • Tools include: Azure Policy; Cost Management; Azure Blueprints.
  • (Loop back to review)
    • Have we achieved what we wanted to?
    • Understand what is being spent and why.
    • Know that only approved resources are deployed.
    • Be sure of adhering to security practices.
    • Opportunities for further improvement.
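To illustrate the audit step: Azure Policy’s audit effect flags resources that drift from the rules you set, such as missing mandatory tags. A toy version of that check, with hypothetical resource records:

```python
REQUIRED_TAGS = {"costCentre", "owner", "environment"}

def audit_tags(resources):
    """Return the names of resources missing any required tag - the sort
    of check an Azure Policy 'audit' effect performs across a subscription."""
    return [r["name"] for r in resources
            if not REQUIRED_TAGS <= set(r.get("tags", {}))]

# Hypothetical resources for illustration.
resources = [
    {"name": "vm-web-01", "tags": {"costCentre": "1234", "owner": "mark",
                                   "environment": "prod"}},
    {"name": "vm-test-07", "tags": {"owner": "mark"}},
]
print(audit_tags(resources))  # only the under-tagged resource is flagged
```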

Planning and implementing hybrid network connectivity

Moving to the cloud allows for fast deployment but planning is just as important as it ever was. Meanwhile, startups can be cloud-only but most established organisations have some legacy and need to keep some workloads on-premises, with secure and reliable hybrid communication.

Considerations include:

  • Extension of the internal protected network:
    • Should workloads in Azure only be accessible from the Internal network?
    • Are Azure-hosted workloads restricted from accessing the Internet?
    • Should Azure have a single entry and egress point?
    • Can the connection traverse the public Internet (compliance/regulation)?
  • IP addressing:
    • Existing addresses on-premises; public IP addresses.
    • Namespaces and name resolution.
  • Multiple regions:
    • Where are the users (multiple on-premises sites); where are the workloads (multiple Azure regions); how will connectivity work (should each site have its own connectivity)?
  • Azure virtual networks:
    • Form an isolated boundary with secure communications.
    • Azure-assigned IP addresses (no need for a DHCP server).
    • Segmented with subnets.
    • Network Security Groups (NSGs) create boundaries around subnets.
  • Connectivity:
    • Site to site (S2S) VPNs at up to 1Gbps
      • Encrypted traffic over the public Internet to the GatewaySubnet in Azure, which hosts VPN Gateway VMs.
      • 99.9% SLA on the Gateway in Azure (not the connection).
      • Don’t deploy production workloads on the GatewaySubnet; /26, /27 or /28 subnets recommended; don’t apply NSGs to the GatewaySubnet – i.e. let Azure manage it.
    • Dedicated connections (Azure ExpressRoute): private connection at up to 10Gbps to Azure with:
      • Private peering (to access Azure).
      • Microsoft peering (for Office 365, Dynamics 365 and Azure public IPs).
      • 99.9% SLA on the entire connection.
    • Other connectivity services:
      • Azure ExpressRoute Direct: a 100Gbps direct connection to Azure.
      • Azure ExpressRoute Global Reach: using the Microsoft network to connect multiple local on-premises locations.
      • Azure Virtual WAN: branch to branch and branch to Azure connectivity with software-defined networks.
  • Hybrid networking technologies:
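The GatewaySubnet sizing advice (a /26, /27 or /28) is easy to sanity-check with Python’s ipaddress module. Here’s a sketch carving a /27 out of a hypothetical VNet address space:

```python
import ipaddress

# Hypothetical VNet address space; reserve the last /27 for the
# GatewaySubnet, within the /26-/28 guidance.
vnet = ipaddress.ip_network("10.1.0.0/24")
gateway_subnet = list(vnet.subnets(new_prefix=27))[-1]

print(gateway_subnet)                # the reserved GatewaySubnet range
print(gateway_subnet.num_addresses)  # 32 raw addresses in a /27
```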

Modern Device Management (Autopilot, Intune and OneDrive)

The old way of managing PC builds:

  1. Build an image with customisations and drivers
  2. Deploy to a new computer, overwriting what was on it
  3. This is expensive and time-consuming – and the device already has a perfectly good OS

Instead, how about:

  1. Unbox PC
  2. Transform with minimal user interaction
  3. Device is ready for productive use

The transformation is:

  • Take OEM-optimised Windows 10:
    • Windows 10 Pro and drivers.
    • Clean OS.
  • Plus software, settings, updates, features, user data (with OneDrive for Business).
  • Ready for productive use.

The goal is to reduce the overall cost of deploying devices. Ship to a user with half a page of instructions…

Windows Autopilot overview

Autopilot deployment is cloud driven and will eventually be centralised through Intune:

  1. Register device:
    • From OEM or Channel (manufacturer, model and serial number).
    • Automatically (existing Intune-managed devices).
    • Manually using a PowerShell script to generate a CSV file with serial number and hardware hash, which is then uploaded to the Intune portal.
  2. Assign Autopilot profile:
    • Use Azure AD Groups to assign/target.
    • The profile includes settings such as deployment mode, BitLocker encryption, device naming, out of box experience (OOBE).
    • An Azure AD device object is created for each imported Autopilot device.
  3. Deploy:
    • Needs Azure AD Premium P1/P2
    • Scenarios include:
      • User-driven with Azure AD:
        • Boot to OOBE, choose language, locale, keyboard and provide credentials.
        • The device is joined to Azure AD, enrolled to Intune and policies are applied.
        • User signs on and user-assigned items from Intune policy are applied.
        • Once the desktop loads, everything is present (including file links in OneDrive) – the time taken depends on the software being pushed.
      • Self-deploying (e.g. kiosk, digital signage):
        • No credentials required; device authenticates with Azure AD using TPM 2.0.
      • User-driven with hybrid Azure AD join:
        • Requires Offline Domain Join Connector to create AD DS computer account.
        • Device connected to the corporate network (in order to access AD DS), registered with Autopilot, then as before.
        • Sign on to Azure AD and then to AD DS during deployment. If they use the same UPN then it makes things simple for users!
      • Autopilot for existing devices (Windows 7 to 10 upgrades):
        • Backup data in advance (e.g. with OneDrive)
        • Deploy generic Windows 10.
        • Run Autopilot user-driven mode. Hardware hashes can’t be harvested in Windows 7, so use a JSON config file in the image – the offline equivalent of a profile. Intune will ignore the unknown device and Autopilot will use the file instead; after Windows 10 is deployed, Intune will notice a PC in the group and apply the profile, so everything still works if the PC is reset in future.
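The manual registration path in step 1 boils down to producing a CSV of serial numbers and hardware hashes for upload to the Intune portal. A sketch of assembling such a file, with entirely made-up device values – in practice the Get-WindowsAutoPilotInfo PowerShell script harvests these from the device, and the exact column headings should be checked against the current Microsoft documentation:

```python
import csv
import io

# Hypothetical device data for illustration only - real values come from
# the device itself via the Get-WindowsAutoPilotInfo PowerShell script.
devices = [
    {"serial": "5CD8XYZ123",
     "product_id": "00330-80000-00000-AA123",   # made-up product ID
     "hash": "T0FBQUFBQkhhcmR3YXJlSGFzaA=="},   # real hashes are far longer
]

buf = io.StringIO()
writer = csv.writer(buf)
# Header row as commonly documented for Intune import - verify against
# current Microsoft docs before relying on it.
writer.writerow(["Device Serial Number", "Windows Product ID", "Hardware Hash"])
for d in devices:
    writer.writerow([d["serial"], d["product_id"], d["hash"]])

print(buf.getvalue().splitlines()[0])  # the header row
```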

Autopilot roadmap (1903) includes:

  • “White glove” pre-provisioning for end users: QR code to track, print welcome letter and shipping label!
  • Enrolment status page (ESP) improvements.
  • Cortana voiceover disabled on OOBE.
  • Self-updating Autopilot (update Autopilot without waiting to update Windows).

Maintaining your hybrid environment

Common requirements in an IaaS environment include wanting to use a policy-based configuration with a single management and monitoring solution and auto-remediation.

Azure Automation allows configuration and inventory; monitoring and insights; and response and automation. The Azure Portal provides a single pane of glass for hybrid management (Windows or Linux; any cloud or on-premises).

For configuration and state management, use Azure Automation State Configuration (built on PowerShell Desired State Configuration).

Inventory can be managed with Log Analytics extensions for Windows or Linux. An Azure Monitoring Agent is available for on-premises or other clouds. Inventory is not instant though – it can take 3-10 minutes for Log Analytics to ingest the data. Changes can be visualised (for state tracking purposes) in the Azure Portal.

Azure Monitor and Log Analytics can be used for data-driven insights, unified monitoring and workflow integration.

Responding to alerts can be achieved with Azure Automation Runbooks, which store scripts in Azure and run them in Azure. Scripts can use PowerShell or Python, so both Windows and Linux are supported. A webhook can be triggered with an HTTP POST request. A Hybrid Runbook Worker can be used to run on-premises or in another cloud.

It’s possible to use the Azure VM agent to run a command on a VM from Azure portal without logging in!
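At its heart, the state configuration model described above is a compare-and-remediate loop: define a desired state, detect drift, and (optionally) correct it. A toy drift check with hypothetical settings – Azure Automation State Configuration (built on PowerShell DSC) automates this across a whole estate:

```python
# Hypothetical desired configuration for a node.
desired = {"ntp_server": "time.windows.com", "rdp_enabled": False}

def drift(actual: dict) -> dict:
    """Return the settings that differ from the desired state - the
    comparison that state configuration tooling automates, optionally
    followed by auto-remediation."""
    return {k: actual.get(k) for k, v in desired.items() if actual.get(k) != v}

node = {"ntp_server": "pool.ntp.org", "rdp_enabled": False}
print(drift(node))  # only the non-compliant setting is reported
```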

Windows Server 2019

Windows Server strategy starts with Azure. Windows Server 2019 is focused on:

  • Hybrid:
    • Backup/connect/replicate VMs.
    • Storage Migration Service to migrate unstructured data into Azure IaaS or another on-premises location (from 2003+ to 2016/19).
      1. Inventory (interrogate storage, network security, SMB shares and data).
      2. Transfer (pairings of source and destination), including ACLs, users and groups. Details are logged in a CSV file.
      3. Cutover (make the new server look like the old one – same name and IP address). Validate before cutover – ensure everything will be OK. Read-only process (except change of name and IP at the end for the old server).
    • Azure File Sync: centralise file storage in Azure and transform existing file servers into hot caches of data.
    • Azure Network Adapter to connect servers directly to Azure networks (see above).
  • Hyper-converged infrastructure (HCI):
    • The server market is still growing and is increasingly SSD-based.
    • Traditional rack looked like SAN, storage fabric, hypervisors, appliances (e.g. load balancer) and top of rack Ethernet switches.
    • Now we use standard x86 servers with local drives and software-defined everything. Manage with Admin Center in Windows Server (see below).
    • Windows Server now has support for persistent memory: DIMM-based; still there after a power-cycle.
    • The Windows Server Software Defined (WSSD) programme is the Microsoft approach to software-defined infrastructure.
  • Security: shielded VMs for Linux (VM as a black box, even for an administrator); integrated Windows Defender ATP; Exploit Guard; System Guard Runtime.
  • Application innovation: semi-annual updates are designed for containers. Windows Server 2019 is the latest LTSC channel so it has the 1709/1803 additions:
    • Enable developers and IT Pros to create cloud-native apps and modernise traditional apps using containers and microservices.
    • Linux containers on Windows host.
    • Service Fabric and Kubernetes for container orchestration.
    • Windows subsystem for Linux.
    • Optimised images for server core and nano server.

Windows Admin Center is core to the future of Windows Server management and, because it’s based on remote management, servers can be core or full installations – even containers (logs and console). Download from http://aka.ms/WACDownload

  • 50MB download, no need for a server. Runs in a browser and is included in Windows/Windows Server licence
  • Runs on a layer of PowerShell. Use the >_ icon to see the raw PowerShell used by Admin Center (copy and paste to use elsewhere).
  • Extensible platform.

What’s next?

  • More cloud integration
  • Update cadence is:
    • Insider builds every 2 weeks.
    • Semi-annual channel every 6 months (specifically for containers):
      • 1709/1803/1809/19xx.
    • Long-term servicing channel
      • Every 2-3 years.
      • 2016, 2019 (in September 2018), etc.

Windows Server 2008 and 2008 R2 reach the end of support in January 2020 but customers can move Windows Server 2008/2008 R2 servers to Azure and get 3 years of security updates for free (on-premises support is chargeable).

Further reading: What’s New in Windows Server 2019.

Containers/Azure Kubernetes Service

Containers:

  • Are fully-packaged applications that use a standard image format for better resource isolation and utilisation.
  • Are ready to deploy via an API call.
  • Are not Virtual machines (for Linux).
  • Do not use hardware virtualisation.
  • Offer no hard security boundary (for Linux).
  • Can be more cost effective/reliable.
  • Have no GUI.

Kubernetes is:

  • An open source system for auto-deployment, scaling and management of containerized apps.
  • Container Orchestrator to manage scheduling; affinity/anti-affinity; health monitoring; failover; scaling; networking; service discovery.
  • Modular and pluggable.
  • Self-healing.
  • Designed by Google based on a system they use to run billions of containers per week.
  • Described in “Phippy goes to the zoo”.

Azure container offers include:

  • Azure Container Instances (ACI): containers on demand (Linux or Windows) with no need to provision VMs or clusters; per-second billing; integration with other Azure services; a public IP; persistent storage.
  • Azure App Service for Linux: a fully-managed PaaS for containers including workflows and advanced features for web applications.
  • Azure Kubernetes Service (AKS): a managed Kubernetes offering.

Wrap-up

So, there you have it. An extremely long blog post with some highlights from my attendance at Microsoft Ignite | The Tour: London. It’s taken a while to write up so I hope the notes are useful to someone else!

UK Government Protective Marking and the Microsoft Cloud

This content is 6 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I recently heard a Consultant from another Microsoft partner talking about storing “IL3” information in Azure. That rang alarm bells with me, because Impact Levels (ILs) haven’t been a “thing” for UK Government data since April 2014. For the record, here’s the official guidance on the UK Government data security classifications and this video explains why the system was changed:

Meanwhile, this one is a good example of what it means in practice:

So, what does that mean for storing data in Azure, Dynamics 365 and Office 365? Basically, information classified OFFICIAL can be stored in the Microsoft Cloud – for more information, refer to the Microsoft Trust Center. And, because OFFICIAL-SENSITIVE is not another classification (it’s merely highlighting information where additional care may be needed), that’s fine too.

I’ve worked with many UK Government organisations (local/regional, and central) and most are looking to the cloud as a means to reduce costs and improve services. The fact that more than 90% of public data is classified OFFICIAL (indeed, that’s the default for anything in Government) is no reason to avoid using the cloud.