Getting started with Helm on OpenShift

Helm needs little introduction as a popular way of defining, installing, and upgrading applications on Kubernetes. But did you know that it’s just as easy to install and use Helm on OpenShift? No special magic is required, making it straightforward to use Helm Charts on OpenShift Online, OpenShift Dedicated, OpenShift Container Platform (version >= 3.6) or OpenShift Origin (version >= 3.6).

This post will walk you through getting both the Tiller server and Helm client up and running on OpenShift, and then installing your first Helm Chart. It assumes that you already have the OpenShift oc client installed locally and that you are logged into your OpenShift instance. If you don’t have access to an OpenShift instance right now, you can sign up for free access to the OpenShift Online Starter plan.

Note that this post is solely an illustration of how OpenShift and Helm can run together; Helm is not a technology supported by Red Hat. If you are looking for a Red Hat supported way to define and install applications, please see OpenShift Templates and Ansible.

Single-tenant Helm Architecture

We’ll install Tiller in its own dedicated project, then grant it permissions to one or more other projects where Helm Charts will be installed*. This provides a clear and beneficial separation between the Tiller server (and its data) and the application(s) that it manages.

Using the above model, you can install your own private Tiller server on OpenShift to manage one or more applications across one or more of your own projects. However, note that today a single Tiller instance can’t safely scale to multitenant use, not least because Tiller carries out all of its actions using a single service account (see Helm issue #1918).

* The OpenShift Online Starter plan currently allows one project only. If you’re using the Starter plan to follow this post, that’s fine; we’ll just skip creating a subsequent project when the time comes.

Warning: container provenance and security

Before we get down to business, a few words of warning. A feature of Helm is that it makes it very easy to download and install arbitrary containerised applications from the internet. However, think twice before using this power! Consider: Do you trust the container images you’re using? Could they have security issues that will cause you problems? Will they be updated quickly if a security problem is discovered later?

If you haven’t looked already, now would be a great time to explore the Red Hat Container Catalog, which delivers you certified, trusted and secure Red Hat and third-party application container images. Every image from the Red Hat Container Catalog has a Container Health Index, and clearly shows security advisories and available updates, helping you to stay secure.

Speaking of security, one problem that unfortunately affects many of the applications in Helm’s stable repository is that they expect to be started with root privileges. This is, in fact, a security risk because containers don’t contain; a contained process running as root effectively has root privileges on your entire machine.

To keep your cluster safe by default, OpenShift prevents containers from running as root (although cluster-admins can override this). Unfortunately, this means that many charts from Helm’s stable repository won’t run out of the box on OpenShift today. However, the good news is that none of this prevents you from installing and managing secure (non-root) containers on OpenShift using Helm. So let’s get underway!

Step 1: Create an OpenShift project for Tiller. We’ll be imaginative and call the project “tiller”, but you can call it anything you like. Note: if you’re using a shared OpenShift instance, you’ll probably have to call it something different.

$ oc new-project tiller
Now using project "tiller" on server "https://...".

If you already have an OpenShift project you want to use, select it as follows:

$ oc project tiller
Now using project "tiller" on server "https://...".

Later, when we install the Helm client, it will need to know the name of the namespace (project) where Tiller is installed. This can be indicated by locally setting the TILLER_NAMESPACE environment variable as follows:

$ export TILLER_NAMESPACE=tiller
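If you want a quick sanity check that the variable is set in your current shell (assuming bash or another POSIX shell), something like this will do:

```shell
# Set the namespace where Tiller will live, then confirm it is exported.
export TILLER_NAMESPACE=tiller
echo "Tiller namespace: ${TILLER_NAMESPACE}"
```

Remember that environment variables are per-shell: if you open a new terminal later, you’ll need to export TILLER_NAMESPACE again.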

Step 2: Install the Helm client locally. We’ll use Helm version 2.6.1, which can be downloaded and initialised as follows:

  • Linux
$ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.6.1-linux-amd64.tar.gz | tar xz
$ cd linux-amd64
$ ./helm init --client-only
$HELM_HOME has been configured at /.../.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
  • OSX
$ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.6.1-darwin-amd64.tar.gz | tar xz
$ cd darwin-amd64
$ ./helm init --client-only
$HELM_HOME has been configured at /.../.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
  • Windows

Download and extract helm-v2.6.1-windows-amd64.zip from the same download location, then open a command prompt in the newly created windows-amd64 folder.

C:\...\windows-amd64>helm init --client-only
$HELM_HOME has been configured at \.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!

Step 3: Install the Tiller server. In principle this can be done using helm init, but currently the helm client doesn’t fully set up the service account rolebindings that OpenShift expects. To try to keep things simple, we’ll use a pre-prepared OpenShift template instead. The template sets up a dedicated service account for the Tiller server, gives it the necessary permissions, then deploys a Tiller pod that runs under the newly created SA.

Techie note: The permissions needed by the Tiller server in its namespace are as follows: “edit” on ConfigMap objects (this is where Tiller stores its state) and “read” on Namespaces.
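To make that concrete, the in-namespace permissions described above could be expressed with a Role along the following lines. This is an illustrative sketch only, not the contents of the actual template (whose resource names and verbs may differ):

```yaml
# Sketch of a Role granting Tiller the in-namespace permissions
# described above: edit on ConfigMaps (Tiller's state store) and
# read on Namespaces. The name "tiller" is illustrative.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: tiller
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
```

A matching RoleBinding then ties this Role to the tiller service account in the tiller namespace.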

$ oc process -f -p TILLER_NAMESPACE="${TILLER_NAMESPACE}" | oc create -f -

At this point, you’ll need to wait for a moment until the Tiller server is up and running:

$ oc rollout status deployment tiller
Waiting for rollout to finish: 0 of 1 updated replicas are available...
deployment "tiller" successfully rolled out

Now that the Tiller server is installed, the Helm client can access it automagically by forwarding its gRPC API requests over a Kubernetes port-forward (this relies on the .kube/config file and the TILLER_NAMESPACE environment variable being correctly set up locally).
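If the automatic connection ever fails, you can reproduce by hand what the client does under the hood. This is an illustrative sketch; the pod name shown is hypothetical and will differ in your cluster (Tiller’s gRPC API listens on port 44134):

$ oc get pods -n "${TILLER_NAMESPACE}"
$ oc port-forward tiller-1-abcde 44134:44134 &
$ HELM_HOST=localhost:44134 ./helm version

Setting HELM_HOST tells the Helm 2 client to talk to the forwarded local port instead of establishing its own port-forward.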

We’ll check that the Helm client and Tiller server are able to communicate correctly by running helm version. The results should be as follows:

$ ./helm version
Client: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}

Step 4: Create a separate project where we’ll install a Helm Chart. If you’re using the OpenShift Online Starter plan, you should skip this step, because the Starter plan allows one project only.

$ oc new-project myapp
Now using project "myapp" on server "https://...".

Step 5: Grant the Tiller server edit access to the current project. The Tiller server will probably need at least “edit” access to each project where it will manage applications. If Tiller will be handling Charts that contain Role objects, “admin” access is needed instead.

$ oc policy add-role-to-user edit "system:serviceaccount:${TILLER_NAMESPACE}:tiller"
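For the case mentioned above where Charts contain Role objects, the equivalent “admin” grant looks like this:

$ oc policy add-role-to-user admin "system:serviceaccount:${TILLER_NAMESPACE}:tiller"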

Step 6: Install a Helm Chart. As an example, we’ll install the trusty OpenShift nodejs-ex sample application:

$ ./helm install -n nodejs-ex
NAME:   nodejs-ex

And that’s all there is to it. You’ve now deployed Helm, and installed a Helm Chart, on OpenShift!

Bonus Step 7 for Helm Chart authors: Explore a Helm Chart containing OpenShift objects.

OpenShift offers a number of additional object kinds that aren’t all available in Kubernetes, including for example BuildConfig, DeploymentConfig and Route.

Just as any kind of Kubernetes object can be included in a Helm Chart when running on Kubernetes, exactly the same is true of OpenShift objects (in addition to Kubernetes objects) when running on OpenShift. Note that this functionality requires a version of OpenShift that supports Kubernetes API groups, i.e. version >= 3.6. The source repository for the nodejs-ex chart contains the unpackaged Helm Chart that we deployed in step 6. It includes a Kubernetes object (a Service), as well as a number of OpenShift objects (an ImageStream, a BuildConfig and a DeploymentConfig).

If you look closely at the ImageStream, BuildConfig and DeploymentConfig templates in the Helm Chart, they’ll probably look pretty familiar to you, except perhaps for one detail. Whereas previously you might have been used to reading/writing {"kind": "BuildConfig", "apiVersion": "v1"} for objects in OpenShift Templates, with Helm it is essential to specify the full API group in the apiVersion field, e.g. {"kind": "BuildConfig", "apiVersion": "build.openshift.io/v1"}.

Aside from that, there are no other changes required!
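As a concrete illustration, the top of a chart template for a DeploymentConfig might begin as follows. This is a minimal sketch: the metadata name and spec values are illustrative, not taken from the nodejs-ex chart:

```yaml
# Sketch of a Helm Chart template for an OpenShift DeploymentConfig.
# Note the full API group in apiVersion; names and values are illustrative.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: 1
```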

For reference, here’s a non-exhaustive table of some common OpenShift object kinds and their full API groups:

apiVersion                                                     Kind
apps.openshift.io/v1                                           DeploymentConfig
authorization.openshift.io/v1, rbac.authorization.k8s.io/v1beta1*  ClusterRole
authorization.openshift.io/v1, rbac.authorization.k8s.io/v1beta1*  ClusterRoleBinding
authorization.openshift.io/v1, rbac.authorization.k8s.io/v1beta1*  Role
authorization.openshift.io/v1, rbac.authorization.k8s.io/v1beta1*  RoleBinding
build.openshift.io/v1                                          Build
build.openshift.io/v1                                          BuildConfig
image.openshift.io/v1                                          Image
image.openshift.io/v1                                          ImageStream
project.openshift.io/v1                                        Project
route.openshift.io/v1                                          Route
template.openshift.io/v1                                       Template
user.openshift.io/v1                                           Group
user.openshift.io/v1                                           User

* Objects in authorization.openshift.io/v1 are transitioning to rbac.authorization.k8s.io/v1beta1: prefer the latter where available.

More resources

For more on using Helm and OpenShift, check out Deploy Helm Charts on Minishift’s OpenShift for Local Development.

  • Karim Boumedhel

    Great article, Jim!
    It’s worth noting that some charts might fail to deploy if running in a cluster without dynamic storage provisioning.
    For those cases, it’s generally enough to edit the deployment to use an existing PVC.

  • Mark Vinkx

    I don’t know Helm and am trying to figure out whether to use it to create API objects on my OpenShift platform.

    In this scenario new objects are created. Can you also modify existing objects? For instance, if I want to change something in the DeploymentConfig, can I see what will be changed before committing it?