Earlier this year, I wrote about a new approach my team is pursuing to inform our Container Adoption Program. We are using software delivery metrics to help keep organizations aligned and focused, even when those organizations are engaging in multiple workstreams spanning infrastructure, release management, and application onboarding. I talked about starting with a set of four core metrics identified in Accelerate: Building and Scaling High Performing Technology Organizations (by Nicole Forsgren, Jez Humble, and Gene Kim) that act as drivers of both organizational and noncommercial performance.

Let’s look at how those metrics can inform an adoption program at the implementation team level. The four metrics are: Lead Time for Change, Deployment Frequency, Mean Time to Recovery, and Change Failure Rate. This post starts with Lead Time for Change and Deployment Frequency, suggesting activities each metric can guide in a container adoption initiative. Special thanks to Eric Sauer, Prakriti Verma, Simon Bailleux, and the rest of the Metrics-Driven Transformation working group at Red Hat.

Lead Time for Change

Accelerate describes Lead Time for Change as “the time it takes to go from code committed to code successfully running in production.” Organizations and teams should strive to reduce Lead Time for Change.

Providing automated, immutable infrastructure to development teams. Use OCI container images to define a baseline set of development environments and allow development teams to provision new projects on a self-service basis.
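
As a minimal sketch of what that might look like, the manifest below defines a namespace and a development workspace Deployment built from a shared baseline image. The namespace, registry, and image names are hypothetical placeholders, not a prescribed layout:

```yaml
# Sketch of a self-service developer environment: a new namespace plus
# a workspace Deployment running the team's shared baseline image.
# All names and the registry below are hypothetical placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-workspace
  namespace: team-a-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-workspace
  template:
    metadata:
      labels:
        app: dev-workspace
    spec:
      containers:
        - name: workspace
          # Baseline development image, rebuilt centrally so every team
          # starts from the same immutable toolchain.
          image: registry.example.com/platform/dev-baseline:1.0
          ports:
            - containerPort: 8080
```

Because the baseline is an immutable image rather than a hand-configured machine, a new project environment is one `kubectl apply` away instead of a ticket in a provisioning queue.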

Building automated deployment pipelines. Create deployment pipelines using build servers, source code management, image registries, and Kubernetes to automate previously manual deployment and promotion processes. 
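
One common way to wire those pieces together on Kubernetes is a Tekton pipeline. The sketch below assumes the Tekton catalog's git-clone and buildah tasks are installed on the cluster; the repository URL parameter, image name, and target deployment are placeholders, and the deploy step assumes appropriate RBAC:

```yaml
# Sketch of a Tekton pipeline: clone source, build and push an image,
# then roll out the result. Assumes the catalog tasks git-clone and
# buildah are installed; names and registries are placeholders.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: shared
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared
    - name: build-image
      runAfter: ["fetch-source"]
      taskRef:
        name: buildah
      params:
        - name: IMAGE
          value: registry.example.com/team/app:latest  # placeholder
      workspaces:
        - name: source
          workspace: shared
    - name: deploy
      runAfter: ["build-image"]
      taskSpec:
        steps:
          - name: rollout
            image: bitnami/kubectl:latest
            # Restarts the target Deployment ("app" is a placeholder)
            # so Kubernetes rolls out the freshly built image.
            script: kubectl rollout restart deployment/app
```

The same shape works with any build server; the point is that commit, build, image push, and promotion run as one automated chain rather than a series of manual handoffs.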

Building unit tests. Unit tests are frequently discussed but still too often left out of development activities, and they are as relevant and important in cloud or Kubernetes environments as they are in traditional deployments. Every piece of code with faulty logic that a manual test team sends back for rework introduces unnecessary delay into a deployment pipeline. Building unit tests into an automated build process keeps those errors close to the developers who introduced them, where they can be fixed faster.
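
For example, a pipeline task like the following runs the unit suite on every commit, before any image is built. This sketch assumes a Python project using pytest; substitute the image and script for the team's own language and test runner:

```yaml
# Sketch of a unit-test stage: fails the pipeline before an image is
# ever built if the tests fail. Assumes a Python project with pytest;
# swap the image and script for your own stack.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: unit-tests
spec:
  workspaces:
    - name: source  # populated by the git-clone task upstream
  steps:
    - name: pytest
      image: registry.access.redhat.com/ubi9/python-311
      workingDir: $(workspaces.source.path)
      script: |
        pip install -r requirements.txt
        python -m pytest --junitxml=test-results.xml
```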

Automating and streamlining functional testing. Just as unit tests avoid time-consuming manual testing, so does automating functional acceptance tests. These tests evaluate correctness against business use cases and are more complex than code-level unit tests, which makes them all the more important to automate in order to drive down deployment lead times. The contribution of Kubernetes here is the ability to spin up, and just as easily destroy, sophisticated container-based test architectures to improve overall throughput.
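
A sketch of what that can look like: a throwaway namespace holds a copy of the application deployed by an earlier pipeline stage, and a Kubernetes Job runs the acceptance suite against it before the whole namespace is deleted. The namespace, image, and target service named here are hypothetical:

```yaml
# Sketch: run the functional suite as a Job inside an ephemeral test
# namespace created for this build, then delete the namespace when
# the run completes. Names and images are hypothetical placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: acceptance-tests
  namespace: app-test-build-42
spec:
  backoffLimit: 0  # a failed suite fails the pipeline immediately
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: suite
          image: registry.example.com/team/acceptance-tests:latest
          env:
            - name: TARGET_URL  # app instance deployed for this run
              value: http://app.app-test-build-42.svc.cluster.local:8080
```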

Container-native application architectures. As the size and number of dependencies in an application deployment increase, the chances of deployment delays due to errors or other readiness issues in those dependencies likewise increase. Decomposing monoliths into smaller containerized, API-centric microservices can shorten deployment time by decoupling each service from the deployment lifecycle of the rest of the application.
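
As an illustration, an extracted service ships as its own small set of manifests, with its own image tag and its own rollout, independent of the rest of the application. The service name and image below are hypothetical:

```yaml
# Sketch of one extracted microservice: it versions, deploys, and
# scales on its own schedule, decoupled from the monolith's releases.
# Service name and image are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/team/orders:1.4.2
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 8080
      targetPort: 8080
```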

Deployment Frequency

Accelerate uses the term “deployment” in the sense of “a software deployment to production or to an app store.” The authors use deployment frequency as a proxy for batch size: a higher deployment frequency indicates smaller batches, and smaller batches are associated with higher performance, per Lean thinking. Organizations and teams should strive to increase Deployment Frequency.

For implementation teams, deployment frequency is as much about development processes as it is about technical architecture.

Iterative planning. Deployment frequency is in part a function of the way project management approaches the planning process. Each release represents an aggregation of functionality that is considered significant or meaningful to some stakeholder. Rethinking the concept of a release at an organizational level can help improve deployment frequency. As project managers (and stakeholders) start to reimagine delivery as an ongoing stream of functionality rather than the culmination of an extended planning process, iterative planning takes hold. Teams plan just enough to get to the next demo and use the learning from that sprint as input for the next increment. 

User story mapping. User story mapping is a release planning pattern identified by Jeff Patton to get around the shortcomings of the traditional Agile backlog. If Agile sprint planning and backlog refinement are causing teams to deliberately throttle back the number of software releases, it may be time to revisit how Agile is implemented in the development process itself, modifying by-the-book techniques with other approaches that may feel more natural to the team.

Container-native microservices architecture. Large, complex deployments are hard to execute cleanly. It is difficult to automate the configuration of deployments with many library and infrastructure dependencies, and without that automation, manual configuration mistakes are bound to happen. Knowing those deployments are painful and error-prone, teams shift toward fewer, larger deployments to reduce the number of outages and late-night phone calls. Breaking a large monolithic deployment into smaller, simpler, more independent processes and artifacts makes each deployment less painful, which gives teams the confidence to increase deployment frequency to keep pace with customer demand.
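
One concrete way Kubernetes lowers the cost of each individual deployment is a rolling update gated by a readiness probe, so a bad build is caught before it ever takes traffic. A minimal sketch, with the service name, image, and probe path as placeholders:

```yaml
# Sketch: a rolling update that never takes healthy capacity offline.
# New pods must pass the readiness probe before old ones are removed.
# Names, image, and probe path are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # keep full capacity during the rollout
      maxSurge: 1        # add one new pod at a time
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: registry.example.com/team/payments:2.0.1
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
```

When a routine deployment costs minutes instead of a weekend, deploying often stops feeling like a risk.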

These are just a few team-level techniques organizations can pursue to improve Lead Time for Change and Deployment Frequency, the software delivery metrics associated with market agility. In upcoming posts, I’ll outline techniques teams can pursue to improve the measures of reliability: Mean Time to Recovery and Change Failure Rate.


About the author

Trevor Quinn is the Application Modernization Practice Director in Red Hat Consulting’s Solutions and Technology practice. Quinn’s team is responsible for repeatable solution definition and delivery for application layer solutions built around container platforms. Quinn has spent more than 15 years in IT as an analyst, software engineer, architect, and professional services manager. He has written and spoken about the intersection of DevOps and containers in articles and conference presentations. Quinn lives in Austin, Texas, where he enjoys the outdoors, except in July and August.
