
I often get asked why Red Hat chose to standardize on Kubernetes for OpenShift instead of going with a competing container orchestration solution. OpenShift is Red Hat’s enterprise Kubernetes distribution, available as both a commercial software solution (OpenShift Container Platform, available to run on OpenStack, VMware, AWS, GCP, Azure and any platform that delivers RHEL 7) and a public cloud service (OpenShift Dedicated and OpenShift Online). There are a number of open source container orchestration solutions available today. The OpenShift product team explored many of them, and we even explored building our own. In this blog, I will explain why we chose Kubernetes and why, more than two years later, we couldn’t be happier with our decision!

Background

OpenShift was first launched in 2011 and has always relied on Linux containers to deploy and run user applications. In OpenShift v1 and v2, we used our own platform-specific container runtime and container orchestration engine, like many PaaS solutions that relied on containers. In mid-2013, Red Hat made an early decision to support the docker open source project and help drive industry standards for the container runtime and packaging format. We brought full docker support to Red Hat Enterprise Linux 7 (as well as Fedora and CentOS), and this became the foundation for OpenShift 3.

However, we always knew that OpenShift would need more than a container runtime. As I often like to say, real applications don’t run in a single container, and real production applications can’t be deployed on a single host. The ability to orchestrate multiple containers across multiple hosts was a critical requirement for OpenShift. In late 2013 and early 2014 we explored a number of options, including nascent orchestration efforts in the docker community and a thorough evaluation of Mesos, and we even experimented with building our own OpenShift-specific orchestration solution. We also heard calls from the Twitterati that Red Hat should embrace other projects instead of OpenShift.

Ultimately, our investigation led us to Kubernetes, and looking back on it now, there are three main reasons why: great technology, a great partner and a truly great community.

Great Technology

What we first saw and still love about Kubernetes was its set of powerful primitives for container orchestration and management. These included the following (see the brief sketch after the list):

  • Kubernetes pods that allowed developers to deploy one or multiple containers as a single atomic unit.
  • Services to access a group of pods at a fixed address and integrated IP and DNS-based service discovery to link those services together.  
  • Replication controllers to ensure that the desired number of pods is always running and labels to identify pods and other Kubernetes objects.
  • A powerful networking model to manage containers across multiple hosts.
  • The ability to orchestrate storage, allowing you to run both stateless and stateful services in containers.
  • Simplified orchestration models that allowed applications to get up and running quickly, without the need for complex two-tier schedulers.
  • An architecture that understood that the needs of developers and operators are different, took both sets of requirements into consideration, and eliminated the need to compromise on either of these important functions.
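
To make these primitives concrete, here is a minimal sketch, not taken from the product, using the official Kubernetes Python client against a running cluster with a local kubeconfig; the names, image and replica count are hypothetical. It creates a replication controller that keeps three web pods running and a service, matched by labels, that gives them a stable, discoverable address.

    from kubernetes import client, config

    config.load_kube_config()      # read cluster credentials from ~/.kube/config
    core = client.CoreV1Api()

    labels = {"app": "hello-web"}  # labels tie the pods, controller and service together

    # ReplicationController: keep three copies of the pod template running at all times.
    rc = {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": "hello-web"},
        "spec": {
            "replicas": 3,
            "selector": labels,
            "template": {  # the pod: one or more containers deployed as a single atomic unit
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": "web",
                                         "image": "nginx",
                                         "ports": [{"containerPort": 80}]}]},
            },
        },
    }

    # Service: a fixed virtual IP and DNS name (hello-web.default.svc) in front of the pods.
    svc = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "hello-web"},
        "spec": {"selector": labels,
                 "ports": [{"port": 80, "targetPort": 80}]},
    }

    core.create_namespaced_replication_controller(namespace="default", body=rc)
    core.create_namespaced_service(namespace="default", body=svc)

The same objects are more commonly written as YAML and applied with kubectl (or oc on OpenShift); the API resources are identical either way.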

We also liked that Kubernetes was written from the ground up in the Go language and was specifically designed for container orchestration. Nothing we saw in the docker community or in our own homegrown efforts even came close. While Mesos already had an established track record in big data, it was focused more on cluster management and relied on plugins like Marathon or Aurora to orchestrate containerized application services. These plugins were less capable than what we saw in Kubernetes, and Mesos itself was a more complex C++ code base that we felt would be more difficult to extend and maintain. Even today we see solutions like Docker Swarm and Mesos copying many of the container orchestration primitives that Kubernetes pioneered. But the Kubernetes community has not stood still, and innovation in Kubernetes continues marching forward at an amazing pace with each release.

A Great Partner

Looking back through my old emails, I discovered that our first meeting with Google on the containers topic was in early March 2014. That meeting included folks like Craig McLuckie, Joe Beda, Brendan Burns, Martin Buhr, Ville Aikas and other pioneers of the Kubernetes project at Google. Of course at that time, we didn’t know that Kubernetes existed. Our goal was just to discuss the work that both companies had been doing in the docker community and explore areas of alignment.

Both Google and Red Hat have a long history of contributions to Linux and container technology. It was Google’s work on Linux control groups (cgroups) back in 2006 that formed the foundation for containers in Linux and enabled projects like docker to exist. Red Hat has also contributed extensively to container technology in Linux and, along with Google, quickly became one of the top contributors to the docker project.

At that meeting, we learned that not only was Google committed to docker and bringing it to the Google Cloud Platform, but that they also had a new project in the works for container orchestration. This project, code-named “Seven of Nine” at the time, was subsequently demonstrated to Red Hat engineering leads Matt Hicks, Clayton Coleman, Daniel Riek and others, and suffice it to say, it was love at first sight! ;)  Many of the orchestration capabilities were drawn from Google’s own experience running legendary systems like Borg and orchestrating containers at scale over many years. It’s an overused line at this point, but at Google everything runs in a container, and when you deploy billions of containers every single week, you quickly learn what works and what doesn’t.

Project “Seven of Nine” ultimately became Kubernetes, which launched in June of that year and reached 1.0 status just over a year later in July 2015. I’ve often heard it said that when evaluating a company, you should focus on the people, not the products. In that regard, the Google Cloud team was hard to beat, and their knowledge and experience with containers and managing them at scale was second to none.

A Truly Great Community

A mistake people make when evaluating open source software is to spend too much time looking at the technology and not enough time evaluating the community around it. This is one of the first lessons I learned as a Product Manager at Red Hat. The current state of any open source technology is just a snapshot in time. The state of the community around it is what determines the pace of innovation and how quickly that technology will evolve over time. In just over a year, the Kubernetes community has grown to nearly 4x as many contributors as any other container orchestration project and has become one of the fastest growing open source projects on GitHub. And now under the governance of the Cloud Native Computing Foundation (CNCF), we’ve seen outstanding collaboration and innovation from a broad set of companies and individual contributors.

From our earliest discussions with Google, we saw their commitment to building a completely open community based on meritocracy. They invited Red Hat to join that community and asked for our input on how to shape it. When Kubernetes was finally released as open source in June 2014, Red Hat joined in the launch, and today we are second only to Google in contributions. With Google’s support, we’ve worked to help enable Kubernetes for enterprise customer use cases, and we deliver that through OpenShift.

The vibrancy of the Kubernetes community is reflected in the growth of the project and new features being introduced with every release.  Today we see Kubernetes expanding to run at new levels of scale, enabling new workloads, driving cluster federation and addressing enterprise requirements around security and manageability, while enhancing ease of use.

That vibrancy is also reflected in events like KubeCon 2016, happening this week in Seattle. The event is sold out and is expecting twice as many attendees as last year’s event, with hundreds more on the waiting list.

And once again, I and the entire Red Hat team are excited to be part of it.