Kubernetes 1.15: Enabling the Workloads

 

The last mile for any enterprise IT system is the application. In order to enable those applications to function properly, an entire ecosystem of services, APIs, databases and edge servers must exist. As Carl Sagan once said, “If you wish to make an apple pie from scratch, you must first invent the universe.”

To create that IT universe, however, we must have control over its elements. In the Kubernetes universe, the individual solar systems and planets are now Operators, and the fundamental laws of that universe have solidified to the point where civilizations can grow and take root.

Discarding the metaphor, we can see this in the introduction of Object Count Quota Support For Custom Resources. In plain terms, this enables administrators to count and limit the number of custom resources, from across the broader ecosystem, in a given cluster. This means services like Knative and Istio, and even Operators like the CrunchyData PostgreSQL Operator, the MongoDB Operator, or the Redis Operator, can be controlled via quota using the same mechanisms that standard Kubernetes resources have enjoyed for many releases.

That’s great for administrators and developers alike, who now have clear expectations to work within. It would not benefit the cluster for a buggy bit of code to create 30 new PostgreSQL clusters because someone forgot to add a “;” at the end of a line. Think of quotas as “guardrails” that protect against unbounded object growth in your etcd database.
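As a sketch of what such a guardrail might look like in practice, an administrator could cap custom resources alongside built-in ones with the `count/<resource>.<group>` quota syntax. The namespace name and the specific limits below are illustrative:

```yaml
# Hypothetical ResourceQuota capping object counts in a namespace,
# covering both a built-in resource and an Operator-provided custom resource.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: team-apps
spec:
  hard:
    count/deployments.apps: "10"          # built-in resource
    count/pgclusters.crunchydata.com: "2" # custom resource from an Operator
```

Applied with `kubectl apply -f`, this would reject the 31st runaway PostgreSQL cluster (or, here, the 3rd) before it ever lands in etcd.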

And this is a general theme across Kubernetes at the moment: allowing the ecosystem around the Kubernetes core to take better care of itself, monitor itself, and provide the groundwork needed by those looking to extend the functionality of this container-native infrastructure platform. The CNCF is focusing on polishing the final remaining areas of the core, while preparing Kubernetes for the next million applications and services that will make the migration.

More and more applications and platforms are building on top of Kubernetes as a platform for extension. A key requirement for control in the enterprise is the ability to manage consumption of these resource APIs. Whether it’s an API resource introduced by Istio or Knative, or an end-user, application-oriented API provided via an Operator in OperatorHub, administrators now have a standard mechanism for controlling consumption of those resources everywhere: Kubernetes ResourceQuota.

Elsewhere in Kubernetes 1.15, there were some Node enhancements. The third-party device monitoring feature is now enabled by default, allowing device vendors to provide device-specific metrics (for hardware such as GPUs) without having to contribute to core Kubernetes. Device-specific Operators can now cover both the deployment and the production monitoring of a device associated with a pod. Similarly, CSI volume stats are now available as a kubelet metric source, enabling custom storage providers to support monitoring in production. The core kubelet has also improved its monitoring performance by collecting only the core metrics required by the platform itself, as part of a longer-term journey to separate the kubelet from cAdvisor. Finally, the ability to isolate PID resources for the node from scheduling has graduated to beta, completing the journey started in the prior release.
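As a rough sketch of the PID isolation piece, node-level PID reservations can be expressed in the kubelet’s configuration file. The values below are illustrative, and this assumes the relevant beta feature gate (`SupportNodePidsLimit`) is enabled on the node:

```yaml
# Hedged sketch of a KubeletConfiguration reserving PIDs for the node,
# so workloads cannot exhaust the process table out from under system daemons.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SupportNodePidsLimit: true
systemReserved:
  pid: "1000"   # PIDs reserved for OS system daemons
kubeReserved:
  pid: "1000"   # PIDs reserved for Kubernetes node components
```

The effect is analogous to the object-count quotas above: a fork bomb in a pod is contained before it can starve the node itself.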

In aggregate, these incremental changes, which enable control of and visibility into how the platform is consumed, allow the broader ecosystem to build upon Kubernetes as a reliable platform for extension that administrators can trust. We’ll be working to integrate Kubernetes 1.15 into future releases of Red Hat OpenShift.

Categories
Kubernetes, News