Getting started with Helm on OpenShift

Helm needs little introduction as a popular way of defining, installing, and upgrading applications on Kubernetes. But did you know that it’s just as easy to install and use Helm on OpenShift? No special magic is required, making it straightforward to use Helm Charts on OpenShift Online, OpenShift Dedicated, OpenShift Container Platform (version >= 3.6), or OpenShift Origin (version >= 3.6).

This post will walk you through getting both the Tiller server and Helm client up and running on OpenShift, and then installing your first Helm Chart. It assumes that you already have the OpenShift oc client installed locally and that you are logged into your OpenShift instance. If you don’t have access to an OpenShift instance right now, you can sign up for free access to the OpenShift Online Starter plan.

Note that this post is solely an illustration of how OpenShift and Helm can run together; Helm is not a technology supported by Red Hat. If you are looking for a Red Hat supported way to define and install applications, please see OpenShift Templates and Ansible.

Single-tenant Helm Architecture

We’ll install Tiller in its own dedicated project, then grant it permissions to one or more other projects where Helm Charts will be installed*. This provides a clear and beneficial separation between the Tiller server (and its data) and the application(s) that it manages.

Using the above model, you can install your own private Tiller server on OpenShift to manage one or more applications across one or more of your own projects. However, note that today a single Tiller instance can’t safely scale to multitenant use, not least because Tiller carries out all of its actions using a single service account (see Helm issue #1918).

* The OpenShift Online Starter plan currently allows one project only. If you’re using the Starter plan to follow this post, that’s fine; we’ll just skip creating a subsequent project when the time comes.

Warning: container provenance and security

Before we get down to business, a few words of warning. A feature of Helm is that it makes it very easy to download and install arbitrary containerised applications from the internet. However, think twice before using this power! Consider: Do you trust the container images you’re using? Could they have security issues that will cause you problems? Will they be updated quickly if a security problem is discovered later?

If you haven’t looked already, now would be a great time to explore the Red Hat Container Catalog, which provides certified, trusted, and secure Red Hat and third-party application container images. Every image in the Red Hat Container Catalog has a Container Health Index and clearly shows security advisories and available updates, helping you stay secure.

Speaking of security, one problem that unfortunately affects many of the applications in Helm’s stable repository is that they expect to be started with root privileges. This is, in fact, a security risk because containers don’t contain; a contained process running as root effectively has root privileges on your entire machine.
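
If you’re unsure whether a given image expects to run as root, one quick check is to inspect its default user. This is just a sketch: it assumes you have Docker installed locally, and uses the nginx image purely as an example:

$ docker pull docker.io/library/nginx
$ docker inspect --format '{{.Config.User}}' docker.io/library/nginx

An empty result (or "0" or "root") means the image defaults to running as root.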

To keep your cluster safe by default, OpenShift prevents containers from running as root (although cluster-admins can override this). Unfortunately, this means that many charts from Helm’s stable repository won’t run out of the box on OpenShift today. However, the good news is that none of this prevents you from installing and managing secure (non-root) containers on OpenShift using Helm. So let’s get underway!

Step 1: Create an OpenShift project for Tiller. We’ll be imaginative and call the project “tiller”, but you can call it anything you like. Note: if you’re using a shared OpenShift instance, you’ll probably have to call it something different.

$ oc new-project tiller
Now using project "tiller" on server "https://...".
...

If you already have an OpenShift project you want to use, select it as follows:

$ oc project tiller
Now using project "tiller" on server "https://...".

Later, when we install the Helm client, it will need to know the name of the namespace (project) where Tiller is installed. Indicate this by setting the TILLER_NAMESPACE environment variable locally:

$ export TILLER_NAMESPACE=tiller
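
On Windows, use set instead of export:

C:\> set TILLER_NAMESPACE=tiller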

Step 2: Install the Helm client locally. We’ll use Helm version 2.9.0, which can be downloaded via https://github.com/kubernetes/helm/releases/tag/v2.9.0.

  • Linux
$ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-linux-amd64.tar.gz | tar xz
$ cd linux-amd64
$ ./helm init --client-only
...
$HELM_HOME has been configured at /.../.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
  • macOS
$ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-darwin-amd64.tar.gz | tar xz
$ cd darwin-amd64
$ ./helm init --client-only
...
$HELM_HOME has been configured at /.../.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
  • Windows

Download and extract https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-windows-amd64.zip. Open a command prompt in the newly created windows-amd64 folder.

C:\...\windows-amd64>helm init --client-only
...
$HELM_HOME has been configured at \.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
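
Whichever platform you’re on, you can confirm that the client installed correctly by querying the client version alone (this works even before Tiller is installed):

$ ./helm version --client
Client: &version.Version{SemVer:"v2.9.0", ...}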

Step 3: Install the Tiller server. In principle this can be done using helm init, but currently the helm client doesn’t fully set up the service account rolebindings that OpenShift expects. To keep things simple, we’ll use a pre-prepared OpenShift template instead. The template sets up a dedicated service account for the Tiller server, gives it the necessary permissions, then deploys a Tiller pod that runs under the newly created service account.

Techie note: The permissions needed by the Tiller server in its namespace are as follows: “edit” on ConfigMap objects (this is where Tiller stores its state) and “read” on Namespaces.

$ oc process -f https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml -p TILLER_NAMESPACE="${TILLER_NAMESPACE}" -p HELM_VERSION=v2.9.0 | oc create -f -

At this point, you’ll need to wait for a moment until the Tiller server is up and running:

$ oc rollout status deployment tiller
Waiting for rollout to finish: 0 of 1 updated replicas are available...
deployment "tiller" successfully rolled out

Now that the Tiller server is installed, the Helm client can access it automagically by forwarding its gRPC API requests over a Kubernetes port-forward (this relies on the .kube/config file and the TILLER_NAMESPACE environment variable being correctly set up locally).
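
Normally you won’t need to set any of this up by hand, but if the automatic port-forward ever gives you trouble, you can create one manually and point the client at it via the HELM_HOST environment variable. In this sketch, the pod name is hypothetical (check oc get pods for yours) and 44134 is Tiller’s standard gRPC port:

$ oc get pods                            # note the tiller pod's full name
$ oc port-forward tiller-1-abcde 44134 &
$ HELM_HOST=localhost:44134 ./helm version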

We’ll check that the Helm client and Tiller server are able to communicate correctly by running helm version. The results should be as follows:

$ ./helm version
Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}

Step 4: Create a separate project where we’ll install a Helm Chart. If you’re using the OpenShift Online Starter plan, you should skip this step, because the Starter plan allows one project only.

$ oc new-project myapp
Now using project "myapp" on server "https://...".
...

Step 5: Grant the Tiller server edit access to the current project. The Tiller server will generally need at least “edit” access to each project where it will manage applications. If Tiller will be handling Charts that contain Role objects, it will need “admin” access instead (see below).

$ oc policy add-role-to-user edit "system:serviceaccount:${TILLER_NAMESPACE}:tiller"
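
If the Charts you plan to install contain Role objects, grant the broader “admin” role instead:

$ oc policy add-role-to-user admin "system:serviceaccount:${TILLER_NAMESPACE}:tiller"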

Step 6: Install a Helm Chart. As an example, we’ll install the trusty OpenShift nodejs-ex sample application:

$ ./helm install https://github.com/jim-minter/nodejs-ex/raw/helm/helm/nodejs-0.1.tgz -n nodejs-ex
NAME: nodejs-ex
LAST DEPLOYED: ...
NAMESPACE: myapp
STATUS: DEPLOYED
...
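
As with any Helm release, you can list it with the client, and remove it again when you’re finished (the UPDATED timestamp is elided here):

$ ./helm ls
NAME       REVISION  UPDATED  STATUS    CHART       NAMESPACE
nodejs-ex  1         ...      DEPLOYED  nodejs-0.1  myapp

$ ./helm delete nodejs-ex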

And that’s all there is to it. You’ve now deployed Helm, and installed a Helm Chart, on OpenShift!

Bonus Step 7 for Helm Chart authors: Explore a Helm Chart containing OpenShift objects.

OpenShift offers a number of additional object kinds beyond those available in upstream Kubernetes, for example BuildConfig, DeploymentConfig and Route.

Just as any kind of Kubernetes object can be included in a Helm Chart when running on Kubernetes, OpenShift objects (in addition to Kubernetes objects) can be included when running on OpenShift. Note that this functionality requires a version of OpenShift that supports Kubernetes API groups, i.e. version >= 3.6.
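
You can check which API groups your cluster exposes with oc api-versions:

$ oc api-versions | grep openshift.io
apps.openshift.io/v1
build.openshift.io/v1
...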

https://github.com/openshift/nodejs-ex/tree/master/helm/nodejs contains the unpackaged Helm Chart that we deployed in step 6. It includes a Kubernetes object (a Service), as well as a number of OpenShift objects (an ImageStream, a BuildConfig and a DeploymentConfig).

If you look closely at the ImageStream, BuildConfig and DeploymentConfig templates in the Helm Chart, they’ll probably look pretty familiar to you, except possibly for one detail. Whereas you may be used to reading and writing {"kind": "BuildConfig", "apiVersion": "v1"} for objects in OpenShift Templates, with Helm it is essential to specify the full API group in the apiVersion field, e.g. {"kind": "BuildConfig", "apiVersion": "build.openshift.io/v1"}.

Aside from that, there are no other changes required!

For reference, here’s a non-exhaustive table of some common OpenShift object kinds and their full API groups:

apiVersion Kind
apps.openshift.io/v1 DeploymentConfig
authorization.openshift.io/v1, rbac.authorization.k8s.io/v1beta1* ClusterRole
authorization.openshift.io/v1, rbac.authorization.k8s.io/v1beta1* ClusterRoleBinding
authorization.openshift.io/v1, rbac.authorization.k8s.io/v1beta1* Role
authorization.openshift.io/v1, rbac.authorization.k8s.io/v1beta1* RoleBinding
build.openshift.io/v1 Build
build.openshift.io/v1 BuildConfig
image.openshift.io/v1 Image
image.openshift.io/v1 ImageStream
project.openshift.io/v1 Project
route.openshift.io/v1 Route
template.openshift.io/v1 Template
user.openshift.io/v1 Group
user.openshift.io/v1 User

* objects in authorization.openshift.io/v1 are transitioning to rbac.authorization.k8s.io/v1beta1: prefer the latter where available.

More resources

For more on using Helm and OpenShift, check out Deploy Helm Charts on Minishift’s OpenShift for Local Development.
