What we learned from PuppetLabs State of DevOps Report 2015
The combined focus on throughput AND operability is the key to high IT performance via DevOps.
Key points from PuppetLabs State of DevOps report 2015
At Skelton Thatcher Consulting we have long valued the clarity and insights produced by PuppetLabs in their annual State of DevOps report. Here are the key aspects of the 2015 PuppetLabs State of DevOps report as we see them – we’ve read the report so you don’t need to!
High-performing IT organizations deploy 30x more frequently with 200x shorter lead times; they have 60x fewer failures and recover 168x faster.
These metrics are pretty extraordinary: a 200x shorter lead time means that if a high-performing organisation has a lead time (the time taken from accepting a new feature request in Dev to that feature being live) of just 1 day, some low-performing organisations take 200 days – over 6 months – to get a new feature live. DevOps practices can therefore be hugely business-enabling, delivering a faster return on technology investments as well as improved responsiveness to changing requirements.
DevOps also improves operational capabilities:
“Failures are unavoidable, but how quickly you detect and recover from failure can mean the difference between leading the market and struggling to catch up with the competition”
This ability to recover from unexpected problems is crucial for modern software systems, which tend to be distributed and composed of multiple separate components, many of which we do not control ourselves and which are therefore unpredictable.
High performance is achievable whether your apps are greenfield, brownfield or legacy.
Many organisations make the mistake of thinking they need to ‘rebuild from the ground up’ when their systems become a few years old. While a rebuild may sometimes work, what is often better is to evolve the existing (working) system bit-by-bit, retaining the knowledge of the current teams, while demonstrating regular, measurable improvements.
“Continuous delivery can be applied to any system, provided it is architected correctly”
At Skelton Thatcher, we’re working with some organisations that build and sell highly successful software that is deployed ‘on-premise’ at customer sites. We have found that even on-premise software – quite far from most web-scale software – can benefit hugely from the practices of Continuous Delivery and DevOps, especially with a focus on software operability. The PuppetLabs 2015 report certainly matched our experience with clients.
“Even packaged software and systems of record can be evolved and operated using DevOps principles and practices”
The female:male ratio in IT teams in the UK and USA has been embarrassingly imbalanced for many years, and so it’s heartening to see PuppetLabs addressing this problem so directly.
“Teams with more women members have higher collective intelligence and achieve better business outcomes … We recommend that teams wanting to achieve high performance do their best to recruit and retain more women”
This relationship between greater diversity and improved engineering performance makes sense: the kinds of problems we need to solve in modern IT are multi-faceted and non-trivial, so having a greater diversity of approaches to call on ought to lead to more innovative and ingenious solutions.
Deployment pain can tell you a lot about your IT performance.
No team likes painful or unpredictable deployments, but many organisations have consistently neglected deployability as a feature of their software systems. We’ve seen repeatedly that poor deployability is a huge drain on team morale and can lead to a downward spiral of poor quality and quick fixes that go wrong.
“[High-performing teams have a] mean time to recover (MTTR) that’s 168 times faster” and “stability was significantly better”
Mean time to recover (MTTR) is a useful engineering metric that captures the average time taken to restore live service following an unexpected failure. In 10% of the low-performing organisations, MTTR was over 6 months, meaning that operational problems in these organisations take months to diagnose and fix; that is a business-killing length of time. Compare this to the high-performing DevOps-driven organisations, where 25% of teams had an MTTR of less than 1 minute – a business-saving duration!
“Our definition of IT performance includes two throughput metrics — deployment frequency and deployment lead time — and one stability metric, mean time to recover (MTTR)”
We like this definition of IT performance because it combines Development-focused metrics (throughput, via deployment frequency and lead time) with an Operations-focused metric (MTTR) – both sets of metrics need to be optimised for the organisation to perform well. High throughput with a long MTTR would suggest poor system operability, leading to outages and potential data loss, whereas a short MTTR with low throughput would leave the organisation unable to innovate or deliver changes rapidly.
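All three metrics are straightforward to compute once deployments and incidents are timestamped. As a minimal sketch in Python (the sample data and variable names below are hypothetical illustrations, not taken from the report):

```python
from datetime import datetime, timedelta

# Hypothetical records: (work_accepted, deployed_live) per deployment,
# and (failure_detected, service_restored) per incident.
deployments = [
    (datetime(2015, 6, 1, 9), datetime(2015, 6, 1, 15)),
    (datetime(2015, 6, 2, 10), datetime(2015, 6, 2, 12)),
    (datetime(2015, 6, 3, 9), datetime(2015, 6, 3, 11)),
]
incidents = [
    (datetime(2015, 6, 2, 14), datetime(2015, 6, 2, 14, 30)),
]
period_days = 30  # measurement window

# Throughput metrics: deployment frequency and mean lead time.
deploy_frequency = len(deployments) / period_days  # deploys per day
lead_times = [live - accepted for accepted, live in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Stability metric: mean time to recover (MTTR).
recovery_times = [restored - detected for detected, restored in incidents]
mttr = sum(recovery_times, timedelta()) / len(recovery_times)

print(f"Deployment frequency: {deploy_frequency:.2f} per day")
print(f"Mean lead time:       {mean_lead_time}")
print(f"MTTR:                 {mttr}")
```

Tracking all three together is the point: improving throughput while MTTR worsens (or vice versa) is not high IT performance by this definition.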
“Once you’ve reached a certain level of throughput (including more frequent releases) you’re going to get more economic benefit from investing in improved stability”
The combined focus on throughput AND operability is the key to high IT performance via DevOps. One way to achieve this combined focus is through the practices of Continuous Delivery:
“the practices that make up continuous delivery — deployment automation and automated testing, continuous integration, and version control for all production artifacts — have a significant predictive relationship to deployment pain, IT performance and change fail rate”
Continuous Delivery is a high-discipline approach, and needs backing and investment from key stakeholders inside and outside IT. For instance, a significant investment is needed in testing and testability in order to be able to test software components in isolation:
“[the] ability to test without an integrated environment [correlates] with high IT performance”
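One concrete way to test without an integrated environment is to inject a component’s external dependencies so they can be replaced with test doubles. A minimal sketch in Python – the `OrderService` and payment-gateway API here are hypothetical examples, not drawn from the report:

```python
from unittest.mock import Mock

class OrderService:
    """Hypothetical component that depends on an external payment gateway,
    which would normally be reachable only in an integrated environment."""

    def __init__(self, gateway):
        self.gateway = gateway  # injected, so it can be faked in tests

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return "confirmed" if result["ok"] else "rejected"

# In a test, the real gateway is replaced with a test double, so the
# component can be exercised in isolation with no integrated environment.
fake_gateway = Mock()
fake_gateway.charge.return_value = {"ok": True}

service = OrderService(fake_gateway)
assert service.place_order(42) == "confirmed"
fake_gateway.charge.assert_called_once_with(42)
```

The design choice that enables this is dependency injection: because the gateway is passed in rather than constructed internally, the component’s behaviour can be verified quickly and repeatably on any developer machine or CI agent.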
This focus on testability is (and always has been) a huge enabler for IT performance:
“Don’t focus on the type of system you have: Instead, focus on re-architecting for testability and deployability”
Build quality into the entire process
It was good to see that the PuppetLabs report itself used high quality methods of data collection and analysis – see p.35 for details of the models and techniques used.
Arguably the most important phrase in the 2015 report is this:
“Quality isn’t just the responsibility of one team; it’s the shared responsibility of everyone involved in the software delivery lifecycle. High-performing organizations know this and build quality into the entire process.”
- Build Quality In: Continuous Delivery and DevOps experience reports, edited by Steve Smith and Matthew Skelton – http://buildqualityin.com/
- Continuous Delivery by Jez Humble and David Farley