What is DevOps? And is your organisation ready?

Like cloud a few years ago and then big data, DevOps is one of the buzzwords of the moment. So what does it actually mean? And is there more to it than hype?

There are many definitions but most people will agree that, at its core, DevOps is about closer working between development and operations teams (and for infrastructure projects read engineering and operations teams). Part of DevOps is avoiding the “chuck it over the fence” mentality that can exist in some places (often accompanied by a “not invented here” response). But there is another side too – by increasing co-operation between those who develop or implement technology and those who need to operate it, there are opportunities for improved agility within the organisation.

DevOps is as much about people and process as it is about technology (probably more so), but the diagram below illustrates the interaction between teams at different stages in the lifecycle:

  • Businesses need change in order to drive improvements and respond to customer demands.
  • Development teams implement change.
  • Meanwhile, operations teams strive to maintain a stable environment.

Just as agile methodologies sit between the business and developers, driving out requirements that are followed by frequent releases of updated code with new functionality, DevOps is the bridge between the development and operations teams.

DevOps in context

This leads to concepts such as infrastructure as code (implementing virtual infrastructure in a repeatable manner using declarative templates), configuration automation (perhaps with desired state configuration) and automated testing. Indeed, DevOps is highly dependent on automation: automating code testing, automating workflows, automating infrastructure – automating everything!
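The desired state idea can be sketched in a few lines of Python. This is a toy illustration, not any particular tool: the template is a declarative description of what should exist, and a reconciliation step works out the actions needed to get there (the resource names are invented for the example):

```python
# Toy "desired state" configuration: declare what should exist,
# then compute the actions needed to reconcile reality towards it.

desired_state = {
    "web-vm": {"size": "small", "running": True},
    "db-vm": {"size": "large", "running": True},
}

def reconcile(current: dict, desired: dict) -> list:
    """Return the actions needed to move the current state to the desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(f"create {name} ({spec['size']})")
        elif current[name] != spec:
            actions.append(f"update {name} to {spec}")
    for name in current:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# Only web-vm exists and matches, so reconciliation creates db-vm
print(reconcile({"web-vm": {"size": "small", "running": True}}, desired_state))
# → ['create db-vm (large)']
```

The key point is that the template is repeatable: running the reconciliation again against a matching environment produces no actions at all, which is exactly what makes declarative infrastructure definitions safe to re-apply.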

Configuring, managing and deploying resources (for example into the cloud) is improved with DevOps processes such as continuous integration (CI). No doubt some will argue that CI has existed for a lot longer than the DevOps term and that is true – just as virtualisation pre-dates infrastructure-as-a-service!

The CI process is a cycle of integrating code check-ins with testing and feedback mechanisms to improve the quality of the code:

Continuous integration example

In the example above, each new check-in to the version control system results in an automated trigger to carry out build and unit tests. These will either pass or fail, with corresponding feedback. When the tests are successful, another trigger fires to start automated acceptance tests, again with feedback on the success or failure of those tests. Eventually, the code passes the automated tests and is approved for user acceptance testing, before ultimately being released.
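That flow can be sketched as a simple pipeline in Python. The stage functions here are stand-ins for real build and test tooling, and the feedback strings are invented for illustration:

```python
# A minimal sketch of the CI cycle: each check-in triggers build and unit
# tests; only if those pass do the automated acceptance tests run, and only
# a fully passing check-in is approved for user acceptance testing.

def run_pipeline(check_in, build_and_unit_tests, acceptance_tests):
    """Run the CI stages for one check-in; return the stage reached and feedback."""
    if not build_and_unit_tests(check_in):
        return ("build/unit", "failed - feedback to developer")
    if not acceptance_tests(check_in):
        return ("acceptance", "failed - feedback to developer")
    return ("approved", "ready for user acceptance testing")

# Example: a check-in that builds cleanly but fails an acceptance test
result = run_pipeline("change-123", lambda c: True, lambda c: False)
print(result)  # → ('acceptance', 'failed - feedback to developer')
```

The value of the cycle is in the automated triggers and the fast feedback at each stage, not in any particular tool.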

Continuous integration works hand in hand with continuous delivery and continuous deployment to ensure that development teams can continuously deliver new code, in line with the release management processes that the operations teams require in order to maintain their service.

Continuous delivery allows new versions of software to be deployed to any environment (e.g. test, staging, production) on demand. It builds on continuous integration, extending the automated tests to cover business logic. Continuous deployment takes this further: every check-in that passes all tests ultimately ends up in a production release – the fastest route from code to release.
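The distinction is easy to show in a toy sketch (the build and environment names here are invented for the example): continuous delivery makes every passing build deployable to any environment on demand, while continuous deployment promotes passing builds to production automatically.

```python
# Toy illustration of continuous delivery vs continuous deployment.

ENVIRONMENTS = ["test", "staging", "production"]

def deliver(build, passed_tests, target):
    """Continuous delivery: deploy a passing build to a chosen environment."""
    if target not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {target}")
    if not passed_tests:
        raise ValueError("only builds that pass all tests are deployable")
    return f"deployed {build} to {target}"

def deploy_continuously(build, passed_tests):
    """Continuous deployment: every passing build goes straight to production."""
    return deliver(build, passed_tests, "production")

print(deliver("build-42", True, "staging"))   # → deployed build-42 to staging
print(deploy_continuously("build-42", True))  # → deployed build-42 to production
```

In both models the gate is the same (all tests must pass); the difference is simply who, or what, pulls the trigger on the production release.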

No one tool is used to implement DevOps – DevOps is more about a cultural shift than it is about technology – but there are many tools that can assist with implementing DevOps processes. Examples include Chef, Puppet (configuration management) and Jenkins (continuous integration). Integrated development environments (such as Visual Studio and Eclipse) also play a part, as do source control systems like Visual Studio Team Services and Git/GitHub.

DevOps is fuzzy too. Once we start to talk about software-defined infrastructure we start to look at orchestration tools (e.g. Mesosphere, Docker Swarm) and containerisation (e.g. Docker, Azure Container Service, Amazon ECS). And then there’s monitoring – either with tools built into the platform (e.g. Visual Studio Application Insights) or third party tools (like those from NewRelic and AppDynamics).

So DevOps is more than a buzzword. It’s a movement that brings with it a whole stack of processes and tools to help drive towards a more agile environment: IT that can support business change. But DevOps needs a change of mindset, and for me the big question is “is your organisation ready for DevOps?”.

Code dojo for test-driven development

Every now and again, I think it would be great to do some coding, to give up this infrastructure stuff (or at least to give up the management stuff) and solve problems programmatically for a living.  Unfortunately, I also have a mortgage to pay, and certain expectations on living standards, so rewinding my career 20 years and starting again is probably not an option right now…

Even so, I took a C# course on PluralSight and, last month, I attended the Code Dojo that my colleague Steve Morgan (@smorgo) was running for some of the developers in Fujitsu’s UK Microsoft Practice.

Dojo is a Japanese word meaning “a place of the way”, variously explained as a place where a group of people gather to discipline themselves. So it follows that a Code Dojo is a place where a group of software developers come together to be enlightened.

Our Code Dojo focused on a Kata, which is another Japanese term that literally means “form” – i.e. describing patterns of movements practiced solo or in pairs.  In this case, the pattern that we followed was one of Test Driven Development (TDD).  We used TDD to implement a software solution to a given set of requirements but, like all projects, the requirements start off incomplete and there are new requirements received at each stage of the project.

We each took it in turns to write code (even me), with the rest of the group observing and offering help where necessary. Following the principles of TDD, we wrote unit tests as machine-executable expressions of the requirements.

First, we wrote a test for a single requirement and then attempted to run it. It failed, because the requirement wasn’t yet implemented, so we wrote just enough code to satisfy the requirement (and no more). Next, we ran all the tests and, if any failed, fixed the tests or the implementation until everything passed. Finally, we refactored the code to make it more maintainable and supportable.

Very quickly, we had grasped the TDD mantra of “red, green, refactor”: start with at least one failing test, write just enough code to make all the tests pass, then improve the code while keeping the tests green.
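The cycle is easy to sketch in code. Our dojo used C#, but the pattern is language-agnostic; here is a minimal Python version (the `add` requirement is invented for illustration):

```python
import unittest

# Red: the test below is written first, as a machine-executable expression
# of the requirement. Run before `add` is implemented, it fails.

def add(a, b):
    # Green: just enough code to satisfy the requirement, and no more.
    # (Refactoring would come next, with the test kept passing.)
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

The discipline is in the ordering: the test exists (and fails) before the implementation does, so every line of production code is there to satisfy a stated requirement.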

The event was over all too quickly and we ran out of time, but it was certainly worthwhile, and a great education for me. We used C# and Visual Studio, but you could apply the principles in any language, and I really should give it another go at home.

Steve’s next Code Dojo is today but I can’t be there as I’ll be cycling to Paris as you read this (and, even if I wasn’t, I’d need to be at a management meeting!). Hopefully there will be more soon and I can continue my education…

Journey through the Amazon Web Services cloud

Working for a large system integrator, I tend to find myself focused on our own offerings and somewhat isolated from what’s going on in the outside world. It’s always good to understand the competitive landscape though and I’ve spent some time recently brushing up my knowledge of Amazon Web Services (AWS), which may come in useful as I’m thinking of moving some of my computing workloads to AWS.  Amazon’s EMEA team are running a series of “Journey to the Cloud” webcasts at the moment and the first two sessions covered:

The next webcast in the series is focused on Storage and Archiving and it takes place next week (23 October). Based on the content of the first two, it should be worth an hour of my time, and maybe yours too?