Serverless and the death of DevOps

A couple of weeks back, I took a trip to London after work to attend the latest CloudCamp meet-up. It’s been a while since I last went to CloudCamp but I was intrigued by the title of the event: “Serverless and the death of DevOps?”. The death of DevOps? Surely not. Most organisations I’m working with are only just getting their heads around what DevOps is. Some are still confusing a cultural change with some tools (hey, we’ll adopt some new tools and rebrand our AppDev function as DevOps). If anything, DevOps is at the top of the hype curve; it can’t possibly be dead!

Well, five minutes into the event, after the introduction from Simon Wardley (@SWardley), I could see where he was coming from. Mix the following up with some “Wardley Mapping” and you can see that what’s being discussed is not really the death of DevOps (as a concept where development and operations teams work in a more integrated fashion) but the rise of a new cloud computing paradigm, in the form of “serverless” computing (AWS Lambda, Azure Functions, etc.):

  • Back in the beginning of computing, systems were hard-wired (e.g. Colossus).
  • Then, they evolved and we had custom-built computing (e.g. Leo) with the concept of applications and an operating system.
  • This evolved and new products (like the IBM 650) were born with novel architectural practices, based around the concept of compute as a product.
  • These systems had a high mean time to recover (MTTR) so the architecture of the day was designed around N+1, DR tests, scaling up.
  • Evolution continued and those novel architectural practices became emerging practice, then good practice. Computing became more resilient.
  • Next came frameworks. We had applications and an emerging coding practice based around these frameworks, running on an operating system using good architectural practice, all built around the concept of compute as a product (a server).
  • All was happy.
  • Then along came the cloud. Compute was no longer a product but a utility. It brought new benefits of efficiency, pooling resources, agility. Computing had new sources of worth.
  • And organisations said “make my legacy cloudy” [actually, this is as far as many have got to…].
  • Some people asked “but shouldn’t architecture evolve too?” And, after the initial cries of “burn him, heretic”, a new novel architectural practice emerged, built around a low MTTR. It took seconds to get a new virtual machine, distributed systems were designed for failure, and chaos monkeys were even let loose in the environment to inject failure and ensure resilience. We applied co-evolution (which has been practised in other fields throughout history) and we called it DevOps.
  • This evolved until it became good architectural practice for the utility world and the old practices for a product world became legacy.
  • The legacy world was held back by inertia but the cloud was about user needs, measurement, automation, collaboration and fast feedback.
  • Then a new tribe began to rise up, using commodity operating systems and functions as a framework. That framework is becoming a utility, and it will move from emerging to good practice, then best practice – and “serverless” will be the future.
  • The old world will become legacy. Even the wonderful world of “DevOps”.
  • But, for now, if we say that “DevOps” is legacy, the response will be “burn him, heretic”.

So that’s the rise of serverless and the “death of DevOps”.

[Simon Wardley does a much better job of this… hopefully, there’s a video out there of him explaining the above somewhere…]

Installing youtube-dl on a Mac to watch YouTube videos when working offline

My work pattern at the moment means that I’m spending a lot of time travelling on trains up and down the country (in fact, as this post is published, I’ll be somewhere between Bedford and Sheffield). A combination of fatigue and motion sickness means that this isn’t always a good opportunity to work on the train but it is potentially an opportunity to listen to podcasts or, unlike when I’m driving, to watch some videos. Unfortunately, travelling at 125 miles an hour with a varying quality of 4G data signal doesn’t always lend itself well to streaming from YouTube, etc.

That’s where youtube-dl comes in – I can download videos to my MacBook before I leave home, and watch at my leisure on the train. So, how do you get started?

Well, the Mac App Store website helped me. Following the advice there, I issued two commands in Terminal – one to install the Homebrew package manager for MacOS, and one to install youtube-dl:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null

brew install youtube-dl
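Since YouTube changes its site fairly often, youtube-dl also needs to be kept reasonably current. As it was installed via Homebrew, an occasional:

brew upgrade youtube-dl

should keep it up to date.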

So, with youtube-dl installed, how do I use it? The youtube-dl readme.md file has lots of information but it’s not exactly easy to digest.

I found that:

youtube-dl -F youtubeurl

would give me a list of the available video formats for a given URL. Reading about YouTube media types led me to the all-important format 22: MP4 video at 720p, with H.264 encoding and AAC audio. That should play on a wide variety of devices (including QuickTime on my Mac).

Next, to download:

youtube-dl -f 22 https://www.youtube.com/channel/UCz7bkEsygaEKpim0wu_JaUQ

(this URL is the CloudTechTV channel).

That command brought down all of the videos in the channel but I can also download individual episodes, for example:

youtube-dl -f 22 https://www.youtube.com/watch?v=ymKSGTR55LQ
I can do something similar for other YouTube videos/channels (and even for some other video services) and build a library of videos to watch on my journeys, without needing to worry about an Internet connection.
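Pulling all of that together, a pre-travel download routine could be as simple as a single command (a rough sketch – watchlist.txt is just a hypothetical text file with one video or channel URL per line):

youtube-dl -f 22 -a watchlist.txt -o "%(title)s.%(ext)s"

The -a switch reads the URLs from the batch file and the -o output template names each download after the video title.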

“dotnet: command not found” after installing the Microsoft .NET Core SDK on a Mac

Whilst installing the Microsoft .NET Core SDK on my MacBook last week, I found that the instructions on the Microsoft website were not quite complete.

Microsoft tells us to run a few commands to install OpenSSL:

  1. Install Homebrew (it was already on my system)
  2. Then run:

brew update
brew install openssl
mkdir -p /usr/local/lib
ln -s /usr/local/opt/openssl/lib/libcrypto.1.0.0.dylib /usr/local/lib/
ln -s /usr/local/opt/openssl/lib/libssl.1.0.0.dylib /usr/local/lib/

After this, you should be able to download and install the .NET Core SDK package (version 1.0.4 seems to be the latest version of the SDK at the time of writing, which includes .NET Core 1.0 and 1.1).

Then, in theory, running the dotnet command should be all that’s required but, for me, it resulted in an error:

-bash: dotnet: command not found

The fix, it seems, is to create another symbolic link:

ln -s /usr/local/share/dotnet/dotnet /usr/local/bin/

After that, dotnet ran as expected.
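A quick way to confirm that everything is in place is to check the version:

dotnet --version

which should report the installed SDK version (1.0.4, at the time of writing).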

For reference, my MacBook is running MacOS Sierra version 10.12.5 (16F73).

Introduction to Microsoft .NET Standard

A few months ago, I was (along with about 70 other rMVPs) privileged to be on a Skype call with Scott Hanselman (@shanselman) as he gave an overview into Microsoft .NET Core. Some of what was discussed was confidential but the general overview of how .NET Core fits with the full .NET Framework and with Mono/Xamarin was a great education for a non-developer like me and it seems fine to reproduce it here for the benefit of others.

This is part 2 of a 2-part series. In part 1, I explained .NET versions and the lightweight, cross-platform Microsoft .NET Core SDK. This post moves on to look at standardising Microsoft .NET.

Where does .NET Standard fit in?

Hopefully, in part 1, I showed that .NET Core is a fantastically quick set of development tools that runs across platforms – but you might also have heard of .NET Standard.

Over the years, the .NET landscape has become pretty complicated (the discussion on versions barely scratches the surface) and we should really think of “.NET” as a big umbrella for marketing purposes, with three main instances:

  1. Microsoft .NET Framework is really the .NET (Full) Framework – it runs on Windows clients and servers.
  2. Microsoft .NET Core is for server applications with no UI and is ideally suited to microservices, Docker containers, web apps and more. It runs on Windows, MacOS and multiple Linux distributions.
  3. Mono (led by Xamarin) is an open source clean-room implementation of .NET – written without access to the original code. Its developers looked at the interfaces/shape of the framework and built their own versions, which means it has the benefit of running anywhere (e.g. “exotic places” like a Sony PlayStation, or on tiny devices).

That means .NET code can, theoretically, run anywhere. Except that the versioning gets in the way… and that’s where .NET Standard fits in.

.NET Standard is a target. Not a platform. Not a runtime. It’s an “agreement”.

If that’s difficult to understand, here’s the analogy that Scott Hanselman (@shanselman) used in a recent webcast: Android developers don’t target an Android OS version (e.g. v4.3), they target an API level (e.g. 15). If a new API level is released that provides new capabilities, a developer can move to it, but their app might not work on older devices.

Microsoft has something like this in Portable Class Libraries, except they are the lowest common denominator – the centre of a Venn diagram. .NET Standard is about running anywhere so that if a developer targets their application to a standard, they can be sure it will run wherever that standard is supported.

The .NET Platform Support table demonstrates this and indicates the minimum version of the platform that’s needed to implement that .NET Standard.

.NET Platform Support table, as at July 2017

For example, if you want to run on .NET Framework 4.5 and .NET Core 1.0, the highest .NET Standard version you can use is .NET Standard 1.1; if you want to run on Windows 8.1, you can’t go above .NET Standard 1.2; and so on. The higher the version, the more APIs are available; the lower the version, the more platforms implement the standard. In short: target a framework for access to specific Windows APIs; target a .NET Standard version for portability and flexibility.
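To make that concrete, here’s a minimal sketch (assuming the .NET Core SDK from part 1 of this series is installed): creating a library that targets .NET Standard, rather than a specific framework, is just

dotnet new classlib

and the <TargetFramework> element in the generated .csproj should contain a netstandardx.y moniker rather than something like netcoreapp1.1 – meaning the library can be consumed by any platform that implements that version of the standard.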


Introduction to Microsoft .NET Core

A few months ago, I was (along with about 70 other rMVPs) privileged to be on a Skype call with Scott Hanselman (@shanselman) as he gave an overview into Microsoft .NET Core. Some of what was discussed was confidential but the general overview of how .NET Core fits with the full .NET Framework and with Mono/Xamarin was a great education for a non-developer like me and it seems fine to reproduce it here for the benefit of others.

This is part 1 of a 2-part series, looking at .NET versions and the Microsoft .NET Core SDK. Tomorrow’s post moves on to look at Microsoft .NET Standard.

.NET versions

The first thing to understand is that .NET versioning is a mess [my words, not Scott’s]:

On a Windows machine with Visual Studio installed, open a command prompt and type clrver -all. Chances are you’ll see versions 2.0 and 4.0. Then cd C:\Windows\Microsoft.NET\Framework and take a look: on my Windows 10/Office 2016 machine, I can see versions 1.0.3705, 1.1.4322, 2.0.50727 and 4.0.30319.

So where is .NET 3.0? Well, 3.0 and 3.5 were the Windows Presentation Foundation (WPF) and Windows Communication Foundation (WCF), running on v2.0 of the Common Language Runtime (CLR). They are actually part of Windows!

All of this makes developing for the .NET framework tricky. Side by side running for .NET 4.x is at the CLR level – if your app is developed for 4.6 and others are for 4.0, you may have a hard time convincing IT operations people to update and chances are you’ll need to drop down to 4.0.
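As an aside, if you need to check exactly which 4.x release is installed on a given machine (before that conversation with the operations team), one approach is to query the documented registry value – for example:

reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" /v Release

and then compare the Release number returned with Microsoft’s published table of .NET Framework release keys.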

That (together with cross-platform capabilities) is where .NET Core comes in.

Creating simple applications with the .NET Core SDK

.NET Core is in C:\Program Files\DotNet and is implemented as a driver (dotnet.dll). With a few simple commands we can create and run a console application:

dotnet new console

dotnet restore

dotnet run

That’s three steps to Hello World!

It’s just as simple for a web application:

dotnet new web

dotnet restore

dotnet run

Then browse to http://localhost:5000.

One difference in the web app is the presence of the

<Project Sdk="Microsoft.NET.Sdk.Web">

line in the .csproj file. This signifies a “meta package” – a package of packages that avoids explicitly listing multiple package references (for cleaner code).

Unlike in the (full) .NET Framework, we can specify the version of .NET Core to use in the .csproj file, meaning that multiple versions of the SDK can be used with whatever variety of libraries are needed:

<TargetFramework>netcoreapp1.1</TargetFramework>

We can also add references to libraries with dotnet add reference libraryname. Adding a package with dotnet add package packagename not only adds it but also restores, downloads and checks compatibility. Meanwhile, dotnet new sln creates a solution file – something that would be complex to do manually.
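As a rough sketch of how those commands fit together (the project names here are just examples):

dotnet new sln
dotnet new console -o HelloApp
dotnet new classlib -o HelloLib
dotnet sln add HelloApp/HelloApp.csproj
dotnet sln add HelloLib/HelloLib.csproj
dotnet add HelloApp/HelloApp.csproj reference HelloLib/HelloLib.csproj

That gives a solution containing a console app that references a class library – all without opening Visual Studio.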

Why revert to a CLI?

So we’ve seen that you can be a successful .NET programmer without a visual editor and that we can build an entire project system from the command line. .NET Core is not just about involving the Linux community with a cross-platform version of .NET; it’s also about speed. Scott used an analogy of thinking of the CLI as creating a “2D” version of a project to validate it, before working on the full “3D” GUI version.

Under the covers, the .NET Core SDK uses the NuGet package manager (dotnet add package), MSBuild (dotnet build) and VSTest (dotnet new xunit and dotnet test).
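So, for example, creating and running a test project (another sketch – the folder name is arbitrary) is only a few commands:

dotnet new xunit -o Tests
cd Tests
dotnet restore
dotnet test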

Visual Studio gives a better user interface but we can develop the basics very quickly using a command line and then get the best of both worlds – both CLI and GUI. What .NET Core does is lower the barrier to entry – getting up and running writing Microsoft.NET code in just a few minutes, for Windows, MacOS or Linux.


Bose SoundLink Mini II speakers turn off at low volume levels

Listening to music a couple of nights ago (streamed from Spotify on my MacBook, though I’m not sure how relevant that is), I found that my Bose SoundLink Mini II speakers kept turning off after 5 minutes (running on battery power, connected with a cable). Spotify kept playing but the sound stopped until I turned the speakers off/on again.

I hadn’t seen this issue before – and I was using the same 3.5mm AUX cable setup that I often use with our small TV (to improve its sound quality), so I hit the interwebs to see what I could find…

Some hunting around suggests that the issue may have been the low volume level on my MacBook (I was in the room directly under my youngest son’s bedroom, after bedtime).

“The Speaker does have a power-save mode, but it will generally only enter this when no audio is detected. The most likely explanation here is that the speaker is getting a very weak signal […] and boosting it enormously with its internal amp.

[…]
If you are using a headphone jack or similar […], try increasing the […] output level while turning the speaker’s volume down. This should provide a stronger signal on the AUX port which would prevent the speaker from sleeping automatically.”

Sure enough, increasing the volume on the MacBook to around level 4-5 and decreasing the volume on the speakers seems to stop the power-down. Indeed, to make sure this was the case, I turned the MacBook’s volume back down to 1 and waited for the music to cut out… then, when it did, I just increased the volume to around level 4-5 again and the speakers came alive!

On a related note… I stumbled across these Spotify tips and tricks that might be useful…

Reducing the time taken for a Garmin Edge 25 to find a satellite signal

Regular readers will know that cycling is one of my hobbies (with my eldest son looking like he may follow in the same direction…).

I have a Garmin Edge 810 cycle computer to use on my big rides but for commutes (e.g. on the Brompton) I use a smaller unit – an Edge 25. The Edge 25 is a cracking little unit, with all the basic functionality I’d expect and Bluetooth connectivity, but one of the issues I’ve found is that it can be slow to pick up a GPS signal.

I think I may have made a breakthrough though, thanks to a comment in this review from Average Joe Cyclist:

“Satellite Acquisition on the Garmin Edge 25
The Garmin Edge 25 can connect to both GPS and GLONASS satellites. As it has more satellites to choose from, it can lock in faster. I know that Garmin Edge bike computers with only GPS can be frustratingly slow to lock in, so this is important. It was a very happy surprise to find GLONASS on such a relatively cheap bike computer as the Garmin Edge 25. This is obviously a huge selling point for this tiny bike computer.

Note: GPS and GLONASS are different kinds of satellite systems – the GPS was developed by the USA, and the GLONASS is Russian.”

Sure enough, I checked my settings and GLONASS was off. So I turned it on and limited testing suggests that it may now be faster to pick up a satellite. Time will tell, as will experience with the second Edge 25 that’s in the post for my son to use…

Some more reading suggests that using GLONASS and GPS together may affect battery life but could also improve accuracy. If satellite lock-in is still slow, then a master reset may be required. To reset the Edge 25:

  1. Power on the device whilst holding the two right-side buttons down.
  2. Release the top button when you hear the first beep.
  3. Release the bottom button when you hear the second beep.

Garmin Edge software version, as viewed in Garmin Connect

I also upgraded the firmware (unfortunately breaking the rule of only changing one thing at a time when troubleshooting tech…), which got me thinking “what firmware did I have before?”. It seems the way to tell is to view an activity in Garmin Connect, where the details of the device used to upload the data are shown on the right-hand side.

Short takes: SSH, custom ports, root and Synology NASs

This blog has been much neglected of late… I’d like to get more time to write and I have literally hundreds of part-written posts, some of which are now just a collection of links for me to unpick…

In the meantime, a couple of snippets that may be useless, or may help someone one day…

Using SSH with a custom port number

My Synology NAS complains about poor security if I leave SSH enabled on port 22. It’s fine if I change it to another port though (security by obscurity!). Connecting then needs a bit more work, as it becomes ssh user@ipaddress -p portnumber (found via the askubuntu forums).

Logging on to a Synology NAS from SSH as root

On a related topic, I recently needed to SSH to my NAS as root (not admin). ssh root@ipaddress -p portnumber wasn’t authenticating correctly and then I found Synology’s advice on how to login to DSM with root permission via SSH/Telnet. It seems I have to first log on as admin, then sudo -i to elevate to root.
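In other words (with placeholder values for the address and port):

ssh admin@ipaddress -p portnumber
sudo -i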

Synology Hyper Backup and DSM update failures

I have a Synology DS916+ NAS and, for the last 9 months or so, I’ve been using it to back up my photos to Microsoft Azure. I’ve realised that they are being backed up in a format that’s unique to Synology’s Hyper Backup program (so I should probably see if there’s an alternative that backs up the files in their native format) but, more worryingly, this afternoon I noticed that the backups had been failing for a few days. The logs weren’t much help (no detailed information) and a search on the ‘net didn’t turn much up either. For reference, this was the (very high-level) information in the logs when viewed in the Hyper Backup GUI:

Information 2017/07/08 03:00:02 SYSTEM [Azure Blob] [Backup Photos to Microsoft Azure] Backup task started.
Error 2017/07/08 03:00:33 SYSTEM [Azure Blob] [Backup Photos to Microsoft Azure] Exception occured while backing up data.
Error 2017/07/08 03:00:36 SYSTEM [Azure Blob] [Backup Photos to Microsoft Azure] Failed to backup data.
Error 2017/07/08 03:00:36 SYSTEM [Azure Blob] [Backup Photos to Microsoft Azure] Failed to run backup task.

(Since then, I’ve found how to view detailed backup logs on a Synology NAS, thanks to a blog post by Jonathan Mumm, though in this case, the logs didn’t shine much of a light on the problem for me.)

I wondered if there were any DSM updates available that might fix things but, when I checked for updates, I got a message to say “Insufficient capacity for update. The system partition requires at least 400MB”. Googling suggested lots of manual file deletion and I was sure this was just a buildup of temp files (maybe to do with the failed backup), so I decided to reboot. After all, what do you do when a computer isn’t working as expected? Turn it off and on again!

After rebooting, attempts to update no longer produced an error (simply confirming that I’m up-to-date with DSM 6.1.2-15132) and the backup is now running nicely (it will take a few hours to complete as I added a few months’ worth of iPhone photos to the NAS earlier in the week, around about the time the backups started failing…)

VPN, DirectAccess or Windows 10 auto-trigger VPN profile?

On a recent consulting gig, I found myself advising a customer who was keen to deploy Microsoft DirectAccess (DA) in place of their legacy virtual private network (VPN) solution. As a DirectAccess user (who used Cisco AnyConnect VPN at my last place of work), I have to say the convenience of being always connected to the company network without any interaction on my part is awesome. I’m sure the IT guys like that they can always access my PC for management purposes too…

The trouble with DirectAccess is that it doesn’t seem to have a published roadmap. So, should I really be advising my customers to use a technology that no longer appears to be under active development? First of all, I should add that it hasn’t been deprecated: DirectAccess is still a supported feature in Windows Server 2016 (it’s part of the Remote Access server role) – so it still has a future. Annoyingly, it’s not a supported workload on Azure (leading to on-premises deployments), but we can’t have everything…

Now for the question of whether to use DA or a traditional VPN. Well, Microsoft MVP Richard Hicks (@RichardHicks) has written a fantastic blog post that goes through this in detail. Rather than paraphrasing, I’ll suggest that you go and read Richard’s post on DirectAccess vs. VPN.

But that’s not the whole picture… you see Windows 10 has a new auto-triggered VPN profile capability that I’m sure will, in time, replace DirectAccess. So, where does that fit in?

Richard’s response was a helpful one, and then my colleague Steve Harwood (@steveeh) joined in, advising that auto-triggered VPN still requires a VPN profile and infrastructure, but is initiated when a Universal Windows Platform (UWP) or desktop app starts or stops; DirectAccess, meanwhile, has the additional benefit of being always-on, avoiding the need to expose management/compliance systems publicly.

Actually, it gets a bit better with the Windows 10 Anniversary Update (Redstone 1/1607), which adds the Always On VPN profile option, but we’re still Windows-only at this point. Richard has also recommended a DirectAccess alternative that covers Windows, MacOS, iOS and Android.

So if the question is “should you deploy DirectAccess?”, the answer is “maybe”. It’s a Windows Enterprise-only solution but, if you have other clients in your enterprise, you might want to consider alternatives instead of or alongside DA.