Kubernetes on Metal with OpenShift

My first concert was in the mid-80s, when AC/DC came to the Providence Civic Center in Rhode Island, and it was glorious. Music fans who grew up in the 80s will fondly remember the birth of MTV, the emergence of the King of Pop and the heyday of rock-n-roll’s heavy metal gone mainstream era, when long hair and guitar riffs both flowed freely. So recently when Def Leppard joined Journey at Fenway Park in Boston for their 2018 joint tour, I knew I had to be there.

Metal also dominated the datacenter in the 80s and 90s, as mainframes and minicomputers gave way to bare-metal servers running enterprise applications on UNIX and, soon after, open source Linux operating systems powered by Red Hat. Just as heavy metal eventually gave way to the angst-filled grunge rock era of the 90s, so too did application provisioning on bare metal give way to the era of virtualization driven by VMware, with subsequent VM sprawl and costly ELAs creating much angst to this day for many IT organizations.

Metal makes a comeback

Recently, bare metal computing has also been making a big comeback. In the public cloud, AWS and other providers now offer bare metal instances with direct access to hardware, enabling applications that take advantage of low-level hardware features. A report by Grand View Research, Inc. projected that the bare metal cloud market would grow to $26.21 billion by 2025, with an estimated compound annual growth rate of 38.4 percent. A related MarketsandMarkets report identified that “the major forces driving the bare metal cloud market are the growing need for high-performance computing and reliable load balancing of data-intensive and latency-sensitive operations.” This report also cites the “necessity for non-locking compute & storage resources, advent of fabric virtualization, and the need for noiseless neighbors & hypervisor tax” as driving this trend.

In the datacenter, many customers I speak to are also choosing to run their OpenShift Kubernetes clusters on bare metal servers for the same reasons. One of the most common questions I get is “Do I need to run containers in VMs?” These customers see containers as an agile, efficient and portable way to package their applications, and Kubernetes as a great way to manage those applications at scale across a hybrid cloud environment. In this new world, many organizations view the hypervisor as redundant; they want to reduce the overhead and expense of their sprawling virtualization environments and make no compromises on their hybrid cloud journey.

Containers and Kubernetes on VMs

Of course, containers and Kubernetes have no dependency on the hypervisor, and will happily run on either physical or virtual server environments. However, just as containers primarily run on Linux today, Kubernetes-based platforms like OpenShift are most often deployed in Linux VM-based environments, in both the datacenter and public cloud.

While some customers still value the additional security boundary that a VM offers, advances in container security driven by Red Hat and others have made this less of a concern. Red Hat customers running containers on bare metal trust the security of Red Hat Enterprise Linux, and OpenShift extends that security to Kubernetes clusters.

In my opinion, the biggest reasons customers are running containers on VMs are related to automation and accessibility, with a bit of inertia thrown in for good measure. When you run your container environments on bare metal, you lose the automation provided by your virtualization platform. This includes not only automated provisioning of new VM instances to run those containers, but also automation of storage, networking and everything else provided by the underlying virtualization platform and integrated automation solutions like Ansible. Since not every application runs in containers, this automation can benefit both VM and container-based workloads.
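To make the automation point concrete, here is a minimal sketch of the kind of Ansible play that fills this gap on bare metal. The inventory group, package choice and module usage here are illustrative assumptions, not a prescribed OpenShift installation procedure:

```yaml
# Hypothetical Ansible play: the host-preparation automation a virtualization
# platform would otherwise provide, applied directly to bare metal servers.
# The "new_nodes" inventory group and package list are invented for illustration.
- name: Prepare bare metal hosts for container workloads
  hosts: new_nodes
  become: true
  tasks:
    - name: Ensure the container runtime is installed
      yum:
        name: cri-o
        state: present

    - name: Ensure the container runtime is enabled and running
      service:
        name: crio
        state: started
        enabled: true
```

The same playbook can target VMs or physical servers unchanged, which is one way organizations keep automation parity when moving off the hypervisor.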

This automation also makes VMs more accessible than physical servers for many users. While in some organizations it still may take days or even weeks to get new VMs, it likely takes significantly longer to get bare metal servers. Until recently, it’s also been difficult to get bare metal environments in the public cloud, and it’s still not as seamless as getting VMs. When you combine automation and accessibility with the inertia of “we’ve always done it this way,” it is easy to see why despite all the new paradigms introduced by containers and Kubernetes, many organizations choose to run them the same old way.

Containers and Kubernetes on metal

There are a growing number of reasons customers want to run containers and Kubernetes on bare metal environments. In addition to eliminating the potential overhead of the hypervisor and making more efficient use of compute, the expanded range of workloads that customers are bringing to Kubernetes make bare metal an attractive option.

We recently discussed how Red Hat OpenShift Container Platform 3.10 has paved the way for intelligent and performance-sensitive applications on Kubernetes, thanks to the work Red Hat and the community have been driving. This includes the Kubernetes Device Manager for advertising specialized hardware resources like GPU-enabled nodes, the CPU Manager for constraining workloads to specific CPUs, and Huge Pages for memory-intensive workloads.
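As a sketch of how these features come together, the pod spec below requests a GPU through the device plugin resource name, whole CPUs with equal requests and limits (the Guaranteed QoS class the CPU Manager requires for exclusive cores), and pre-allocated huge pages. It assumes a node with the NVIDIA device plugin deployed, the kubelet's CPU Manager static policy enabled, and 2Mi huge pages reserved; the image name is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: perf-sensitive-app
spec:
  containers:
  - name: app
    image: example.com/perf-app:latest   # placeholder image
    resources:
      requests:
        cpu: "4"                # whole CPUs, requests == limits => Guaranteed QoS,
        memory: 8Gi             # so the CPU Manager can pin the container to cores
        nvidia.com/gpu: 1       # resource advertised by the device plugin
        hugepages-2Mi: 1Gi      # pre-allocated huge pages on the node
      limits:
        cpu: "4"
        memory: 8Gi
        nvidia.com/gpu: 1
        hugepages-2Mi: 1Gi
    volumeMounts:
    - name: hugepage
      mountPath: /hugepages
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```

On bare metal, these requests map directly to physical cores, devices and memory pages, with no hypervisor scheduling layer in between.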

We also see an increasing number of customers running SQL and NoSQL databases, data streaming, analytics and AI/ML services on OpenShift and Kubernetes in general. These customers are moving beyond stateless, cloud-native applications and view Kubernetes as the best way to run a broad array of workloads.

At Red Hat, our philosophy since we launched OpenShift has always been “If it runs on Linux, it should run in containers!” and this has driven much of the work we have been doing upstream and directly with customers. We also recognize that not every workload will move to containers right away, and VMs will be around for some time. But with KVM, a VM is just another workload that runs on Linux, and these VMs can potentially benefit from the power of Kubernetes orchestration. Applying our philosophy, we launched the KubeVirt project to build a virtualization API for Kubernetes and enable unified management of both container and VM workloads.
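What "a virtualization API for Kubernetes" looks like in practice is a custom resource managed like any other Kubernetes object. A minimal sketch, assuming a cluster with the KubeVirt operator installed (API version and demo disk image reflect current upstream KubeVirt and may differ by release):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true               # KubeVirt starts the VM when the object is created
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: rootdisk
        containerDisk:        # VM disk image shipped as a container image
          image: quay.io/kubevirt/fedora-cloud-container-disk-demo
```

The VM is created with the same tooling as a container workload (`kubectl apply`, labels, RBAC), which is the unified management the project aims for.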

Simplifying OpenShift on metal deployments

In this growing universe of Kubernetes and container-native workloads, running on bare metal compute is often the best option. The question then becomes: how do we remove the obstacles that make running on bare metal hard? Fortunately, these are all solvable problems, and many customers running OpenShift Kubernetes clusters on bare metal today have done just that, bootstrapping their own bare metal OpenShift clusters and managing them at scale.

At Red Hat Summit, we showed how we are working to make this even easier going forward, demonstrating the deployment of a full OpenShift Kubernetes environment that started by bootstrapping a rack of bare metal servers live on stage, leveraging technologies like Director and Ironic from the OpenStack project, integrated with Ansible. Enabling bare metal cloud operations is a key focus for upcoming Red Hat OpenStack releases and for the work we are doing upstream.

At Red Hat Summit, we also unveiled our OpenShift and CoreOS integration roadmap. We demonstrated new capabilities around day-two management with Prometheus monitoring, Operator-driven automation and a fully immutable host infrastructure on Red Hat CoreOS. By combining the open source technologies found in Red Hat Enterprise Linux, OpenStack, KVM, Ansible and Kubernetes with the expanded automation and management capabilities of the coming Red Hat CoreOS-empowered OpenShift, we believe we can deliver more efficient and cost-effective Kubernetes Native Infrastructure on metal.

Looking ahead

Just as musical tastes vary widely, we know that bare metal computing is not going to make sense for every user or every application. But as technology evolves to address new customer demands, we often find new uses for old approaches. The rising interest in and adoption of bare metal across both the public cloud and the datacenter, driven by the evolution of containers and Kubernetes and by customer demand to run new classes of workloads across hybrid cloud environments, is a testament to that.

At Red Hat, we will continue to drive this evolution in the upstream community and enable enterprise customers to take advantage of it across all of their environments. And finally, the sight of 30,000 fans, young and old, braving the rain in Boston to watch rockers 30+ years past their prime shows that even 80s music could see a resurgence in the not-too-distant future. That’s something we can all look forward to!
