Serverless and the death of DevOps

A couple of weeks back, I took a trip to London after work to attend the latest CloudCamp meet-up. It’s been a while since I last went to CloudCamp but I was intrigued by the title of the event: “Serverless and the death of DevOps?”. The death of DevOps? Surely not. Most organisations I’m working with are only just getting their heads around what DevOps is. Some are still confusing a cultural change with some tools (hey, we’ll adopt some new tools and rebrand our AppDev function as DevOps). If anything, DevOps is at the top of the hype curve; it can’t possibly be dead!

Well, five minutes into the event, after Simon Wardley’s (@SWardley) introduction, I could see where he was coming from. Mix the following up with some “Wardley Mapping” and you can see that what’s being discussed is not really the death of DevOps (as a concept where development and operations teams work in a more integrated fashion) but it may well be a new cloud computing paradigm, in the form of “serverless” computing (AWS Lambda, Azure Functions, etc.):

  • Back in the beginning of computing, systems were hard-wired (e.g. Colossus).
  • Then, they evolved and we had custom-built computing (e.g. Leo) with the concept of applications and an operating system.
  • This evolved and new products (like the IBM 650) were born with novel architectural practices, based around the concept of compute as a product.
  • These systems had a high mean time to recover (MTTR), so the architecture of the day was designed around N+1 redundancy, DR tests and scaling up.
  • Evolution continued and those novel architectural practices moved from emerging to good. Computing became more resilient.
  • Next came frameworks. We had applications and an emerging coding practice based around these frameworks, running on an operating system using good architectural practice, all built around the concept of compute as a product (a server).
  • All was happy.
  • Then along came the cloud. Compute was no longer a product but a utility. It brought new benefits of efficiency, pooling resources, agility. Computing had new sources of worth.
  • And organisations said “make my legacy cloudy” [actually, this is as far as many have got to…].
  • Some people asked “but shouldn’t architecture evolve too?” And, after the initial cries of “burn him, heretic”, a new novel architectural practice emerged, built around a low MTTR. It took seconds to get a new virtual machine, distributed systems were designed for failure, and chaos monkeys were let loose in the environment to inject failure and ensure resilience. We introduced co-evolution (which has been practiced in other fields throughout history) and we called it DevOps.
  • This evolved until it became good architectural practice for the utility world and the old practices for a product world became legacy.
  • The legacy world was held back by inertia but the cloud was about user needs, measurement, automation, collaboration and fast feedback.
  • Then a new tribe began to rise up, using commodity operating systems and functions as a framework. That framework is becoming a utility, and it will move from emerging to good practice, then best practice – and “serverless” will be the future.
  • The old world will become legacy. Even the wonderful world of “DevOps”.
  • But, for now, if we say that “DevOps” is legacy, the response will be “burn him, heretic”.

So that’s the rise of serverless and the “death of DevOps”.
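
To make the “functions as a utility” idea concrete, here’s a minimal sketch (in Python) of the kind of handler that serverless platforms such as AWS Lambda run – the event shape, handler name and response format are illustrative assumptions, not anything from the talk:

    import json

    def handler(event, context):
        """Runs only when invoked; there is no server to provision, patch or scale."""
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

The platform looks after provisioning, scaling and patching; you pay only when the function runs.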

[Simon Wardley does a much better job of this… hopefully, there’s a video out there of him explaining the above somewhere…]

What is DevOps? And is your organisation ready?

Like cloud a few years ago and then big data, DevOps is one of the buzzwords of the moment. So what does it actually mean? And is there more to it than hype?

There are many definitions but most people will agree that, at its core, DevOps is about closer working between development and operations teams (and for infrastructure projects read engineering and operations teams). Part of DevOps is avoiding the “chuck it over the fence” mentality that can exist in some places (often accompanied by a “not invented here” response). But there is another side too – by increasing co-operation between those who develop or implement technology and those who need to operate it, there are opportunities for improved agility within the organisation.

DevOps is as much (probably more) about people and process as it is about technology, but the diagram below illustrates the interaction between teams at different stages in the lifecycle:

  • Businesses need change in order to drive improvements and respond to customer demands.
  • Development teams implement change.
  • Meanwhile, operations teams strive to maintain a stable environment.

Just as agile methodologies sit between the business and developers, driving out requirements that are followed by frequent releases of updated code with new functionality, DevOps is the bridge between the development and operations teams.

DevOps in context

This leads to concepts such as infrastructure as code (implementing virtual infrastructure in a repeatable manner using declarative templates), configuration automation (perhaps with desired state configuration) and automated testing. Indeed, DevOps is highly dependent on automation: automating code testing, automating workflows, automating infrastructure – automating everything!
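
As a hedged illustration of the desired state idea (a toy Python sketch, not any particular tool’s syntax – the package and service names are made up for the example), the principle is: declare the end state and let code work out how to converge on it:

    desired_state = {
        "packages": {"nginx", "git"},
        "services_running": {"nginx"},
    }

    actual_state = {
        "packages": {"git"},
        "services_running": set(),
    }

    def converge(desired, actual):
        """Work out (and print) the actions needed to reach the desired state."""
        for pkg in sorted(desired["packages"] - actual["packages"]):
            print(f"install package: {pkg}")    # e.g. apt-get install <package>
        for svc in sorted(desired["services_running"] - actual["services_running"]):
            print(f"start service: {svc}")      # e.g. systemctl start <service>

    if __name__ == "__main__":
        converge(desired_state, actual_state)

Tools like Chef, Puppet and PowerShell DSC apply the same principle at scale, with the added benefit that re-running the configuration is idempotent.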

Configuring, managing and deploying resources (for example into the cloud) is improved with DevOps processes such as continuous integration (CI). No doubt some will argue that CI has existed for a lot longer than the DevOps term and that is true – just as virtualisation pre-dates infrastructure-as-a-service!

The CI process is a cycle of integrating code check-ins with testing and feedback mechanisms to improve the quality of the code:

Continuous integration example

In the example above, each new check-in to the version control system results in an automated trigger to carry out build and unit tests. These will either pass or fail, with corresponding feedback. When the tests are successful, another trigger fires to start automated acceptance tests, again with feedback on the success or failure of those tests. Eventually, the code passes the automated tests and is approved for user acceptance testing, before ultimately being released.
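
To sketch the shape of that cycle in code (a toy Python example, not any particular CI server’s configuration – the make targets are placeholder assumptions), each check-in could trigger something like this:

    import subprocess

    def run_stage(name, command):
        """Run one pipeline stage and report pass/fail feedback."""
        result = subprocess.run(command, shell=True)
        passed = result.returncode == 0
        print(f"[{name}] {'passed' if passed else 'failed'}")
        return passed

    def on_checkin():
        """Triggered by a new check-in to the version control system."""
        if not run_stage("build + unit tests", "make test"):
            return  # feedback goes back to the developer; nothing is promoted
        if not run_stage("automated acceptance tests", "make acceptance"):
            return
        print("Candidate build approved for user acceptance testing")

    if __name__ == "__main__":
        on_checkin()

In practice a CI server such as Jenkins handles the triggering, runs the stages on build agents and publishes the feedback.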

Continuous integration works hand in hand with continuous delivery and continuous deployment to ensure that development teams are continuously delivering new code, in line with the release management processes that the operations teams require in order to maintain their service.

Continuous delivery allows new versions of software to be deployed to any environment (e.g. test, staging, production) on demand. It is similar to continuous integration but extends the automated checks to include business logic tests. Continuous deployment takes this further: every check-in that passes all tests ultimately ends up as a production release – the fastest route from code to release.
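
As a minimal sketch of the “deploy any passing build to any environment on demand” idea (the environment names and the print statement are illustrative assumptions standing in for a real deployment tool or cloud API):

    ENVIRONMENTS = {"test", "staging", "production"}

    def deploy(build_id, environment):
        """Push an already-tested build artefact to the requested environment."""
        if environment not in ENVIRONMENTS:
            raise ValueError(f"unknown environment: {environment}")
        # A real pipeline would call a deployment tool or cloud API here.
        print(f"deploying build {build_id} to {environment}")

    if __name__ == "__main__":
        deploy("1.2.3", "staging")

With continuous deployment, the same call would be made automatically for every build that passes all of the tests.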

No single tool implements DevOps – DevOps is more about a cultural shift than it is about technology – but there are many tools that can assist with implementing DevOps processes. Examples include Chef and Puppet (configuration management) and Jenkins (continuous integration). Integrated development environments (such as Visual Studio and Eclipse) also play a part, as do source control systems like Visual Studio Team Services and Git/GitHub.

The boundaries of DevOps are fuzzy too. Once we start to talk about software-defined infrastructure we start to look at orchestration tools (e.g. Mesosphere, Docker Swarm) and containerisation (e.g. Docker, Azure Container Service, Amazon ECS). And then there’s monitoring – either with tools built into the platform (e.g. Visual Studio Application Insights) or third-party tools (like those from New Relic and AppDynamics).

So DevOps is more than a buzzword. It’s a movement that brings with it a whole stack of processes and tools to help drive towards a more agile environment: IT that can support business change. But DevOps needs a change of mindset and, for me, the big question is “is your organisation ready for DevOps?”.

Microsoft #TechDays Online 2015

Last week was Microsoft UK’s TechDays Online conference, held over three days with thousands of virtual attendees watching/listening to sessions on a variety of topics. It started off in the IT Pro arena with a keynote on Windows 10 from journalist and author Mary Jo Foley (@MaryJoFoley), moved through Windows Server, Intune and Office 365, progressed to a variety of Azure topics, containerisation and DevOps with a keynote from Microsoft Distinguished Engineer Jeffrey Snover (@JSnover), and eventually went into full developer mode with a keynote from Scott Hanselman (@SHanselman).

This is the fourth year that Microsoft has run these events and I was fortunate to be invited to watch the sessions being recorded. I attended the first afternoon/evening and the second day – driving my Twitter followers mad with a Microsoft overload. For those who missed it, here’s a recap (unfortunately I couldn’t commit the time to cover the developer day).

Sadly, I missed Mary Jo Foley’s keynote (although I did manage to get over to Microsoft’s London offices on the second evening for a live recording of the Windows Weekly podcast and caught up with Mary Jo after the event).

Sessions were recorded and I’ll update this post with video links when I have them.