OpenShift & Kubernetes: Where We’ve Been and Where We’re Going Part 1

As we approach the end of another year for Red Hat OpenShift and Kubernetes, and another KubeCon, which I believe will be even bigger than the last, it’s a great time to reflect on both where we’ve been and where we’re going. In this blog I will look back over the 4+ years since Red Hat first got involved in the Kubernetes project, at where we have focused our contributions and the key decisions that got us to this point. Then in Part II, I will look ahead at some of the areas we’re focusing on now and into the future.

In the Beginning

OpenShift is a leading enterprise Kubernetes container platform. With hundreds of customers, spanning industry verticals, and deployed in enterprise data centers and across major public clouds, OpenShift is enabling customers to evolve their applications, infrastructure and ultimately their business.  

While OpenShift itself has been around since 2011, it was the decision to standardize on Kubernetes in 2014 that changed the game for OpenShift and for Red Hat as a whole. OpenShift 3.0 launched a year later, in June 2015, and it was built on 3 key pillars:

  • Linux, building on Red Hat’s heritage and deep expertise in Linux and the reliability of Red Hat Enterprise Linux, which served as the foundation for OpenShift 3
  • Containers, designed to provide efficient, immutable and standardized application packaging that enables application portability across a hybrid cloud environment
  • Kubernetes, providing powerful container orchestration and management capabilities and becoming one of the fastest growing open source projects of the last decade

These 3 pillars still form the foundation for OpenShift today, but bringing Kubernetes to the enterprise required more. Enterprises typically have complex requirements around security, interoperability, compatibility and more, and they need a platform for both building new cloud-native applications and modernizing existing applications and services. In the last 4+ years Red Hat has invested in building new capabilities, both in Kubernetes and around Kubernetes, driven by the needs of our customers and the community. In the past I’ve written about why Red Hat chose Kubernetes, so here I’d like to highlight areas of Kubernetes where the OpenShift team has invested, explain why they’re important, and then look ahead to what comes next.

One Cluster, Many Users and Many Applications

When you use many Kubernetes cloud services today, you own the cluster and use it to run just your own applications. In OpenShift, we had a simple requirement: a single Kubernetes cluster needed to support many applications and multiple users. Multi-tenancy is a hotly debated topic in the Kubernetes community and is still evolving to this day. Our requirements were driven by the needs of our enterprise customers, who didn’t want to run separate clusters for every developer or for each application. They were also driven by the needs of our own OpenShift Operations team, who managed a free developer cloud, Red Hat OpenShift Online Starter, that needed to support potentially thousands of users and applications on each Kubernetes cluster.

To address this, Red Hat invested engineering resources in a number of key areas. We helped drive the development of Kubernetes namespaces and resource quotas, so that multiple users could share a single Kubernetes cluster and be limited to a fixed amount of resources on that cluster. We also drove the development of Kubernetes role-based access control (RBAC), so that those users could be assigned different roles, with different levels of permissions, rather than allowing everyone to be a cluster admin. While RBAC is effectively table stakes today, it didn’t actually appear in Kubernetes until version 1.6. OpenShift has provided RBAC since Kubernetes 1.0, and Red Hat was eventually able to get this capability accepted upstream, largely based on OpenShift’s original implementation. While features like namespaces, quotas and RBAC may not have been a priority for public cloud services, where each user owns their own Kubernetes cluster, this is often not the case in the enterprise, where administrators manage clusters that serve hundreds of developers and applications. Many of OpenShift’s enterprise customers would not have been able to adopt Kubernetes without these core features.
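To make this concrete, here is a hypothetical sketch (not from the original post) of how a cluster admin might combine these features: a ResourceQuota caps what a team’s namespace can consume, and an RBAC Role plus RoleBinding grants a user read-only access to pods. The namespace `team-a` and user `jane` are illustrative names.

```yaml
# Illustrative example: cap compute resources for the "team-a" namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
---
# Grant read-only access to pods in that namespace via RBAC
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane                       # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With this in place, `jane` can inspect pods in `team-a` but cannot modify them or touch other namespaces, and the team as a whole cannot exceed its quota.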

Making Application Deployments Simpler and More Secure

While Kubernetes has always provided powerful primitives like pods, services and controllers for running applications, deploying those applications and application updates in Kubernetes 1.0 was far from simple. This is another area where Red Hat invested. We developed Deployment Configurations in OpenShift 3.0 (Kubernetes 1.0) to provide a template for parameterizing deployment inputs, doing rolling deployments, enabling rollbacks to a previous deployment state and exposing triggers to drive automated deployments in response to events. Many of these features would eventually become part of the Kubernetes Deployments feature set, which is also fully supported in OpenShift today. We also maintained support for Deployment Configurations, because there are still key features, like lifecycle hooks, which allow behavior to be injected into the deployment process at predefined points, that are not yet supported in Deployments. This enables OpenShift customers who rely on those capabilities to keep using them, while also enabling customers who prefer Deployments to leverage this standard Kubernetes feature. Users can deploy their applications from the OpenShift Console, the OpenShift “oc” CLI (an extension of “kubectl”) or the Kubernetes “kubectl” CLI, or they can integrate their own deployment tools.
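As a hypothetical sketch of a lifecycle hook (the app name, image and hook command below are illustrative, not from the original post), a DeploymentConfig can run a “pre” hook pod before each rollout, for example to run a schema migration, and abort the rollout if it fails:

```yaml
# Illustrative DeploymentConfig with a "pre" lifecycle hook and a trigger
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    app: myapp
  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort       # abort the rollout if the hook fails
        execNewPod:
          containerName: myapp
          command: ["/bin/sh", "-c", "echo running pre-deploy migration"]
  triggers:
  - type: ConfigChange             # redeploy automatically on config changes
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest   # illustrative image
```

A standard Kubernetes Deployment would look much the same minus the `strategy.rollingParams.pre` hook and `triggers` sections, which is the gap described above.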

Enterprise customers also told us they needed more security tools for the applications they were deploying. The container ecosystem has come a long way in this regard, in terms of solutions for image scanning, signing and more. But too often developers are still finding and deploying images that lack any provenance and may be less secure. The Kubernetes Pod Security Policy feature is designed to provide another layer of security, by defining a set of conditions that a pod must satisfy in order to be accepted into the system. An example policy requires that containers always run as non-root, which sounds obvious until you realize how many container images exist in the world that unnecessarily run with escalated privileges. Pod Security Policy was a recent addition to Kubernetes, but it was inspired by an earlier feature we developed in OpenShift called Security Context Constraints that had the same objective. This isn’t the first feature developers think about when using Kubernetes, but it can be a valuable tool for enterprise administrators to provide greater security for their Kubernetes clusters, especially in production environments.
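To illustrate the non-root example above, here is a hypothetical Pod Security Policy sketch (the policy name and volume list are illustrative, not from the original post) that rejects privileged pods and any container attempting to run as root:

```yaml
# Illustrative Pod Security Policy: no privileged pods, no root containers
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false                # disallow privileged containers
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot         # containers must run as a non-root user
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                         # only these volume types are permitted
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```

A pod whose image runs as UID 0 would be rejected at admission time under this policy, which is the same behavior OpenShift’s Security Context Constraints have enforced by default since 3.0.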

This focus on security and standards has also driven our work at the Linux container runtime layer. Our work in the Open Container Initiative helped drive standards for the container runtime and image format that could not be controlled by a single vendor. From there we developed CRI-O, a lightweight, stable and more secure container runtime for Kubernetes, based on the OCI specification and integrated via the Kubernetes Container Runtime Interface.

Enabling More Application Workloads on Kubernetes

We also talked to enterprise customers who wanted to move beyond the simple, stateless, cloud-native apps that most PaaS platforms limited them to, to the more complex stateful services that are prevalent in the enterprise. It’s hard to run stateful services without access to durable storage. We created the OpenShift storage scrum team to focus on this area and contribute to Kubernetes storage volume plugins upstream, to enable different storage backends for those stateful services. Red Hat has since driven further enhancements like dynamic storage provisioning, introduced innovative solutions like OpenShift Container Storage, and is now further standardizing storage integration for storage providers through the introduction of the Kubernetes Container Storage Interface (CSI).
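As a hypothetical sketch of dynamic provisioning (the claim and storage class names are illustrative, not from the original post), a developer simply requests storage through a PersistentVolumeClaim, and the cluster provisions a matching volume on demand from whatever backend the named StorageClass points at:

```yaml
# Illustrative PVC: dynamic provisioning via a StorageClass named "fast"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
  - ReadWriteOnce                  # mountable read-write by a single node
  storageClassName: fast           # illustrative class; maps to a provisioner
  resources:
    requests:
      storage: 10Gi
```

The developer never has to know whether the volume comes from a cloud block store, Gluster, Ceph or a CSI driver; that detail lives in the StorageClass.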

StatefulSets came out of similar customer interactions, as we saw customers who wanted to run stateful services that needed more than a storage backend. These traditional stateful services required guarantees about the ordering and uniqueness of pod instances within a service, in contrast to cattle-like services where each pod instance is identical and can blindly be redeployed on failure. An example would be a SQL database cluster, where each instance in the database cluster may have a different identity and role, which can now be enabled by StatefulSets.
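A hypothetical sketch of the database example (the names, image and sizes are illustrative, not from the original post): a StatefulSet gives each replica a stable, ordered identity (`db-0`, `db-1`, …) and its own persistent volume that survives rescheduling, which is exactly what clustered databases need:

```yaml
# Illustrative StatefulSet: stable pod identities plus per-pod storage
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                  # headless service providing stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:10         # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # one PVC per pod, retained across restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Pods are created and scaled down in order, so a primary/replica topology can rely on `db-0` existing before `db-1` starts.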

Once you see what Kubernetes can do for your applications, you often want to extend those capabilities to other workloads. In OpenShift, we are also proud to have partnered directly with our customers to make this happen. One of our first OpenShift 3.0 customers contributed to the development of the Kubernetes Job and CronJob controllers. Like many innovations, these were born out of necessity, because many applications they needed to run on Kubernetes had backing batch services that needed to run once, or run to completion. These weren’t a fit for Replication Controllers, which were designed for long-running services, and so the Kubernetes Job controller was born.
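To illustrate run-to-completion semantics (a hypothetical sketch; the job name, image and command are illustrative, not from the original post), a Job runs a pod until it exits successfully rather than restarting it forever the way a Replication Controller would:

```yaml
# Illustrative Job: run a batch task once, retrying a bounded number of times
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-report
spec:
  completions: 1                   # done after one successful run
  backoffLimit: 3                  # retry up to 3 times on failure
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: report
        image: myregistry/report-generator:latest   # illustrative image
        command: ["/bin/sh", "-c", "echo generating report"]
```

Wrapping the same template in a CronJob with a `schedule` field turns it into a recurring batch task.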

Enabling new workloads not only drove some of the key contributions we worked on over the past 4 years, it is also inspiring a lot of the work we are doing today and into the future. In my next blog, I will talk about how we are enabling lifecycle management of stateful services with Kubernetes Operators and how we are bringing entirely new classes of workloads to Kubernetes.

Connecting Users to Their Applications

The reason you deploy applications in Kubernetes is to serve your users, but connecting users to those applications is also not trivial. In Kubernetes 1.0, there was no concept of Ingress, and therefore routing inbound traffic to Kubernetes pods and services was a very manual task if you were using upstream bits. In OpenShift 3.0, we developed the concept of Routes to provide automated load balancing of incoming requests to the applications they were intended for. Routes were a predecessor to Kubernetes Ingress controllers, which are also fully supported in OpenShift today.

Using Routes or Ingress you can expose HTTP and HTTPS routes from outside the cluster to services within the cluster. So why does OpenShift support both? Once again, there are capabilities in Routes that have yet to be added to Ingress, which we cover in more detail in this blog. But also, Ingress is still a beta feature in Kubernetes and may have key limitations depending on the ingress implementation. Giving customers timely access to key features, as well as long-term, stable implementations they can rely on in mission-critical environments, is core to OpenShift’s value proposition.
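As a hypothetical sketch of a Route (the hostname and service name are illustrative, not from the original post), exposing a service with TLS terminated at the router is a few lines:

```yaml
# Illustrative Route: expose the "myapp" service with edge TLS termination
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.apps.example.com     # illustrative external hostname
  to:
    kind: Service
    name: myapp
  tls:
    termination: edge              # TLS terminates at the OpenShift router
```

Edge, passthrough and re-encrypt TLS termination modes like this are among the Route capabilities that Ingress did not yet cover uniformly across implementations.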

In a shared cluster environment, you may also want to ensure that each application only sees the traffic that’s intended for it. While basic networking solutions like Flannel implement a flat network with no isolation between applications, OpenShift has included OpenShift SDN, a fully supported multi-tenant SDN, since our earliest releases. When deployed in multi-tenant mode, the OpenShift SDN, based on Open vSwitch, restricts applications to seeing only network traffic within their own namespace or from other select namespaces that have been enabled. We also helped drive the Kubernetes Container Networking Interface to enable a rich ecosystem of third-party SDN plugins for Kubernetes clusters. Since then, Red Hat engineers have worked with Google, Tigera and others to develop Network Policy, which is today fully supported in OpenShift and can take multi-tenant network isolation to another level, providing more advanced isolation capabilities between namespaces, specific services, ports and more.
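A hypothetical sketch of the finer-grained isolation Network Policy enables (pod labels and port are illustrative, not from the original post): this policy admits traffic to the `db` pods only from pods labeled `app=web` in the same namespace, and only on one port.

```yaml
# Illustrative NetworkPolicy: only "web" pods may reach "db" pods on port 5432
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web
spec:
  podSelector:                     # the pods this policy protects
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:                 # the only permitted traffic source
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 5432
```

Once a pod is selected by any policy, all other inbound traffic to it is denied, which is how per-service isolation is achieved on a shared cluster.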

More Work to Be Done

While these Kubernetes innovations have been exciting, there is a lot more work to be done.  Although OpenShift and Kubernetes have evolved over many releases since initial launch and can be relied on for production application deployments today, in many ways we are just scratching the surface in a number of key areas and there is a lot more innovation to come. These days, there are also many new vendors in the Kubernetes ecosystem and I am often asked how we will differentiate OpenShift and stand out from the crowd. The OpenShift team is planning to do what we’ve always done: continue listening to our enterprise customers, continue investing in the Kubernetes community and surrounding ecosystems and continue driving new innovations so customers can successfully deploy and manage applications across an open hybrid cloud environment. I will dive into more details on this in my next blog.
