Motivation

OpenShift is an application platform built around a Kubernetes core. Distilling decades of experience with container clusters, Kubernetes has quickly become the de facto system for orchestrating containers on Linux. OpenShift employs standard Kubernetes mechanisms to maintain compatibility with both interfaces and techniques, but as an enterprise distribution, ready for production deployment and day-zero support of critical applications, it extends and diverges from “vanilla” Kubernetes in some ways.

Where OpenShift differs, it is often because no equivalent feature existed in the upstream core at the time Red Hat needed to deliver it. OpenShift Routes, for example, predate the related Ingress resource that has since emerged in upstream Kubernetes. In fact, Routes and the OpenShift experience supporting them in production environments helped influence the later Ingress design, and that’s exactly what participation in a community like Kubernetes is all about.

I was curious about some of OpenShift’s extended features. In particular, I wanted to explore the differences between Kubernetes Deployments and OpenShift’s Deployment Configurations.

OpenShift Templates

Another concept unique to OpenShift is the Template object. An OpenShift Template bundles a set of functionally related objects.

Kubernetes is a vibrant laboratory, and there are similar community efforts to group an “application”’s objects and resources in a way convenient for deployment, management, and organization. One of the most popular systems in this space is Helm, from Microsoft/Deis. In this post, I’ll show how an OpenShift Template can be converted into a Helm Chart with just a few changes.

Helm

Previous posts have covered Helm and how to use it on OpenShift. This post gives some hints for testing our converted Chart in a minishift environment. While the Chart can be used in production, it requires cluster-admin permissions granted to the kube-system service account for the Tiller deployment, and those permissions are not granted in most production OpenShift environments. I will not go into detail on setting up and configuring Helm itself.

The experiment

I used Vic Iglesias’s MySQL Helm Chart as an example, and compared it with the MySQL software collection Template provided in an OpenShift installation. The two application bundles use different MySQL images: the Chart’s comes from Docker Hub, while OpenShift’s is pulled from the platform’s software collections. But the idea behind both is virtually identical: deploy a ready-to-use MySQL database server.

These are the primary steps to move from a Template to a Helm Chart:

1. Change the object definition from JSON to YAML. You can use an add-on for your favorite source editor, or an online converter.

From:

    {
      "kind": "Template",
      "apiVersion": "v1",
      "metadata": {
        "name": "mysql-persistent",
        "annotations": {
          "openshift.io/display-name": "MySQL",
    ...

To:

    ---
    kind: Template
    apiVersion: v1
    metadata:
      name: mysql-persistent
      annotations:
        openshift.io/display-name: MySQL
    ...
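This first step can also be scripted. Here is a minimal stdlib-only sketch of the conversion, good enough for the simple structures in this Template; a real conversion should use a proper YAML library such as PyYAML:

```python
import json

def to_yaml(node, indent=0):
    """Naively emit YAML for nested dicts/lists of scalars.

    Handles only the simple shapes in this Template; use a real
    YAML library for anything more complex.
    """
    pad = "  " * indent
    lines = []
    if isinstance(node, dict):
        for key, value in node.items():
            if isinstance(value, (dict, list)):
                lines.append(f"{pad}{key}:")
                lines.append(to_yaml(value, indent + 1))
            else:
                lines.append(f"{pad}{key}: {value}")
    elif isinstance(node, list):
        for item in node:
            if isinstance(item, dict):
                # Fold the first line's indentation into the "- " marker.
                first, _, rest = to_yaml(item, indent + 1).partition("\n")
                entry = f"{pad}- {first.lstrip()}"
                lines.append(entry + ("\n" + rest if rest else ""))
            else:
                lines.append(f"{pad}- {item}")
    return "\n".join(lines)

template = json.loads('{"kind": "Template", "apiVersion": "v1", '
                      '"metadata": {"name": "mysql-persistent"}}')
print(to_yaml(template))
# kind: Template
# apiVersion: v1
# metadata:
#   name: mysql-persistent
```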

2. Split Template objects into one file per object.

From mysql-persistent-template.yaml:

    objects:
    - kind: Secret
      apiVersion: v1
      stringData:
    ...
    - kind: Service
    ...

Into secret.yaml, svc.yaml, pvc.yaml, deployment.yaml. Example secret.yaml:

    apiVersion: v1
    kind: Secret
    stringData:
    ...
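The split itself is mechanical and can be scripted too. A stdlib-only sketch, working from the original JSON form of the Template; the kind-to-filename map mirrors the file names above and is just my own convention, not anything mandated by Helm:

```python
import json
from pathlib import Path

# One filename per object kind, matching secret.yaml, svc.yaml,
# pvc.yaml and deployment.yaml above.
FILENAMES = {
    "Secret": "secret.yaml",
    "Service": "svc.yaml",
    "PersistentVolumeClaim": "pvc.yaml",
    "DeploymentConfig": "deployment.yaml",
}

def split_objects(template: dict, outdir: Path) -> list:
    """Write each entry of the Template's 'objects' list to its own file."""
    written = []
    for obj in template.get("objects", []):
        name = FILENAMES.get(obj["kind"], obj["kind"].lower() + ".yaml")
        # Dumped as JSON for brevity; convert each file to YAML as in step 1.
        (outdir / name).write_text(json.dumps(obj, indent=2))
        written.append(name)
    return written
```

Run against the parsed mysql-persistent Template, this produces the four files listed above.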

3. Move the Template parameters to Helm values.

Example in mysql-persistent-template.yaml:

    ...
    parameters:
    - name: MYSQL_USER
      displayName: MySQL Connection Username
      description: Username for MySQL user that will be used for accessing the database.
      generate: expression
      from: user[A-Z0-9]{3}
      required: true
    - name: MYSQL_PASSWORD
      displayName: MySQL Connection Password
      description: Password for the MySQL connection user.
      generate: expression
      from: "[a-zA-Z0-9]{16}"
      required: true
    - name: MYSQL_ROOT_PASSWORD
      displayName: MySQL root user Password
      description: Password for the MySQL root user.
      generate: expression
      from: "[a-zA-Z0-9]{16}"
      required: true
    - name: MYSQL_DATABASE
      displayName: MySQL Database Name
      description: Name of the MySQL database accessed.
      value: sampledb
      required: true
    ...

Into values.yaml. Note: some default values will be created later through gotpl functions:

    ...
    ## Specify password for root user
    ##
    ## Default: random 10 character string
    # mysqlRootPassword: testing

    ## Create a database user and password
    ##
    ## Default: random 10 character string
    # mysqlUser:
    # mysqlPassword:

    ## Create a database
    ##
    ## Default: sampledb
    # mysqlDatabase:
    ...
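The mapping from parameter names to value names follows a simple convention (MYSQL_DATABASE becomes mysqlDatabase). A quick sketch of collecting the static defaults from the parameters list; note that parameters with generate: expression have no static default, since their fallbacks are produced at render time by gotpl functions:

```python
def parameters_to_values(parameters: list) -> dict:
    """Collect static defaults from a Template's parameters list.

    Parameters with 'generate: expression' map to None here and stay
    commented out in values.yaml; their fallbacks come later from
    gotpl functions such as randAlphaNum.
    """
    values = {}
    for param in parameters:
        # MYSQL_DATABASE -> mysqlDatabase, matching the Chart's naming.
        head, *rest = param["name"].lower().split("_")
        key = head + "".join(word.capitalize() for word in rest)
        values[key] = param.get("value")
    return values
```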

4. Wherever a parameter substitution (${}) appears in any object, change it to the equivalent gotpl construct based on values.yaml (.Values.).

For example, instead of:

    kind: Secret
    apiVersion: v1
    metadata:
      name: "${DATABASE_SERVICE_NAME}"
    stringData:
      database-user: "${MYSQL_USER}"
      database-password: "${MYSQL_PASSWORD}"
      database-root-password: "${MYSQL_ROOT_PASSWORD}"
      database-name: "${MYSQL_DATABASE}"

We use:

    kind: Secret
    apiVersion: v1
    metadata:
      name: {{ template "mysql.fullname" . }}
      labels:
        app: {{ template "mysql.fullname" . }}
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
        release: "{{ .Release.Name }}"
        heritage: "{{ .Release.Service }}"
    type: Opaque
    data:
      {{ if .Values.mysqlUser }}
      database-user: {{ .Values.mysqlUser | b64enc | quote }}
      {{ else }}
      database-user: {{ randAlphaNum 10 | b64enc | quote }}
      {{ end }}
      {{ if .Values.mysqlPassword }}
      database-password: {{ .Values.mysqlPassword | b64enc | quote }}
      {{ else }}
      database-password: {{ randAlphaNum 10 | b64enc | quote }}
      {{ end }}
      {{ if .Values.mysqlRootPassword }}
      database-root-password: {{ .Values.mysqlRootPassword | b64enc | quote }}
      {{ else }}
      database-root-password: {{ randAlphaNum 10 | b64enc | quote }}
      {{ end }}
      database-name: {{ default "sampledb" .Values.mysqlDatabase | b64enc | quote }}
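The mechanical part of this substitution can be scripted as well. A sketch, with a hypothetical parameter-to-value name map; the conditional fallbacks (the randAlphaNum defaults) and the b64enc encoding still have to be added by hand:

```python
import re

# Hypothetical map from Template parameter names to Chart value names;
# extend it with whatever parameters your Template defines.
PARAM_TO_VALUE = {
    "MYSQL_USER": "mysqlUser",
    "MYSQL_PASSWORD": "mysqlPassword",
    "MYSQL_ROOT_PASSWORD": "mysqlRootPassword",
    "MYSQL_DATABASE": "mysqlDatabase",
}

def substitute(line: str) -> str:
    """Rewrite ${PARAM} references into .Values lookups."""
    def repl(match):
        value_name = PARAM_TO_VALUE.get(match.group(1), match.group(1))
        return "{{ .Values.%s }}" % value_name
    return re.sub(r"\$\{([A-Z_0-9]+)\}", repl, line)

print(substitute('database-name: "${MYSQL_DATABASE}"'))
# database-name: "{{ .Values.mysqlDatabase }}"
```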

5. Remove the ImageStream/ImageStreamTag definitions (they do not exist in vanilla Kubernetes), changing image references to point at a container registry. In this case, the image can be CentOS-based or RHEL-based.

From:

    ...
    spec:
      containers:
      - name: mysql
        image: " "
        ports:
        - containerPort: 3306
    ...

To:

    ...
    spec:
      containers:
      - name: {{ template "mysql.fullname" . }}
        {{- if (eq "centos" .Values.image.base) }}
        image: "docker.io/centos/mysql-{{ .Values.image.version }}-centos7:latest"
        {{- else }}
        image: "registry.access.redhat.com/rhscl/mysql-{{ .Values.image.version }}-rhel7:latest"
        {{- end }}
    ...
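For a given set of values, the gotpl conditional above resolves to a single image reference. The same logic in a quick sketch, useful for checking what a values combination will produce:

```python
def image_reference(base: str, version: str) -> str:
    """Mirror the chart's image selection between CentOS and RHEL bases."""
    if base == "centos":
        return f"docker.io/centos/mysql-{version}-centos7:latest"
    return f"registry.access.redhat.com/rhscl/mysql-{version}-rhel7:latest"

print(image_reference("rhel", "57"))
# registry.access.redhat.com/rhscl/mysql-57-rhel7:latest
```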

6. Move from a Deployment Configuration to a Deployment. In this case it was pretty straightforward: we just delete the triggers, since configuration changes are now handled by Helm:

    ...
    spec:
      strategy:
        type: Recreate
      triggers:
      - type: ImageChange
        imageChangeParams:
          automatic: true
          containerNames:
          - mysql
          from:
            kind: ImageStreamTag
            name: mysql:${MYSQL_VERSION}
            namespace: "${NAMESPACE}"
      - type: ConfigChange
      replicas: 1
    ...
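As a sketch, the transformation amounts to dropping the triggers and switching the kind; a real conversion may also need selector and apiVersion adjustments depending on your target Kubernetes version:

```python
def to_deployment(dc: dict) -> dict:
    """Turn a DeploymentConfig into a plain Deployment.

    A minimal sketch: drops the OpenShift-only triggers and switches
    kind/apiVersion, leaving the rest of the spec untouched.
    """
    deployment = dict(dc)
    deployment["kind"] = "Deployment"
    # One of the Deployment API groups available on Kubernetes 1.6.
    deployment["apiVersion"] = "apps/v1beta1"
    spec = dict(deployment.get("spec", {}))
    spec.pop("triggers", None)  # ImageChange/ConfigChange are OpenShift-only
    deployment["spec"] = spec
    return deployment
```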

The resulting Chart can be downloaded from this GitHub repo.

The test

On OpenShift

I assume you have a minishift cluster with OpenShift 3.6 or above.

First, we will grant cluster-admin rights to the default service account in the kube-system namespace, where Tiller will be deployed. Otherwise, Tiller would need to be granted admin rights in every namespace where it tries to deploy. Remember to grant these rights as a cluster-admin user: on a stock minishift cluster, you can log in with oc login -u system:admin, or run minishift addons apply admin-user and log in as "admin" to become a cluster-admin user.

Then, grant cluster-admin rights to the default service account in the kube-system namespace:

    oc adm policy add-cluster-role-to-user cluster-admin -z default --namespace kube-system

Use a cluster-admin user for this test. Deploy Tiller with:

    helm init

And now install the Chart:

    git clone https://github.com/rgordill/helm-repo
    cd helm-repo
    git checkout blog
    helm install --name mysql --namespace helm-test --set image.base=rhel ./helm-repo/incubator/mysql

After that, you can use the persistent MySQL database server deployed on your OpenShift cluster:

    MYSQL_ROOT_PASSWORD=$(oc get secret --namespace helm-test mysql-mysql -o jsonpath="{.data.database-root-password}" | base64 --decode; echo)
    MYSQL_DATABASE=$(oc get secret --namespace helm-test mysql-mysql -o jsonpath="{.data.database-name}" | base64 --decode; echo)

    oc run -i --rm --tty centos --namespace helm-test --image=registry.access.redhat.com/rhscl/mysql-57-rhel7 --restart=Never -- mysql -h mysql-mysql.helm-test.svc.cluster.local -u root -p${MYSQL_ROOT_PASSWORD} ${MYSQL_DATABASE} -e status

This should return something like:

    mysql: [Warning] Using a password on the command line interface can be insecure.
    --------------
    mysql Ver 14.14 Distrib 5.7.20, for Linux (x86_64) using EditLine wrapper
    Connection id: 79
    Current database: sampledb
    Current user: root@172.17.0.11
    SSL: Cipher in use is DHE-RSA-AES128-GCM-SHA256
    Current pager: stdout
    Using outfile: ''
    Using delimiter: ;
    Server version: 5.7.20 MySQL Community Server (GPL)
    Protocol version: 10
    Connection: mysql-mysql.helm-test.svc.cluster.local via TCP/IP
    [...]

Vanilla Kubernetes

Can we use our converted Helm Chart to deploy an identical MySQL server on any vanilla Kubernetes cluster? Sure! All you need is a minikube cluster with Kubernetes 1.6 or above; just remember to use “kubectl” instead of “oc”.

Follow the same steps starting from helm init, except that we don’t need the cluster-admin grant:

    helm init
    git clone https://github.com/rgordill/helm-repo
    cd helm-repo
    git checkout blog
    helm install --name mysql --namespace helm-test ./helm-repo/incubator/mysql

You will end up with a persistent MySQL server deployed on your Kubernetes cluster. To check it:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace helm-test mysql-mysql -o jsonpath="{.data.database-root-password}" | base64 --decode; echo)
    MYSQL_DATABASE=$(kubectl get secret --namespace helm-test mysql-mysql -o jsonpath="{.data.database-name}" | base64 --decode; echo)

    kubectl run -i --rm --tty centos --namespace helm-test --image=centos/mysql-57-centos7 --restart=Never -- mysql -h mysql-mysql.helm-test.svc.cluster.local -u root -p${MYSQL_ROOT_PASSWORD} ${MYSQL_DATABASE} -e status

This should, again, return something like:

    mysql: [Warning] Using a password on the command line interface can be insecure.
    --------------
    mysql Ver 14.14 Distrib 5.7.20, for Linux (x86_64) using EditLine wrapper
    Connection id: 205
    Current database: sampledb
    Current user: root@172.17.0.8
    SSL: Cipher in use is DHE-RSA-AES128-GCM-SHA256
    Current pager: stdout
    Using outfile: ''
    Using delimiter: ;
    Server version: 5.7.20 MySQL Community Server (GPL)
    Protocol version: 10
    Connection: mysql-mysql.helm-test.svc.cluster.local via TCP/IP
    [...]

This MySQL server deployment came from a Helm Chart that began life as an OpenShift Template!

Final Thoughts

We have shown that it is fairly easy to move from OpenShift Templates to Helm Charts. This is a good sign for general interchangeability between OpenShift’s mechanisms and efforts in stock Kubernetes and the wider community.

If Helm is currently your solution for application bundling and distribution on Kubernetes clusters, you should know OpenShift 3.7 and above introduced the Service Broker concept. Service Brokers cover not only provisioning and deprovisioning application components, but also mechanisms for binding and unbinding them. The bind/unbind use case is not covered by Helm by default; some initial work was done on building a “Helm Broker”, but it seems this effort is no longer active.

Beyond that, there are some security concerns with the Helm implementation. As we saw above, Tiller must run with elevated permissions: essentially the rights of a cluster-admin, or else a project admin grant in every namespace where you expect to do Helm deployments.

Tiller is not protected by RBAC. If a user can contact Tiller, that user can deploy anything in any namespace Tiller can access, regardless of that user’s own cluster rights.

Another feature needed by most complex apps is deployment ordering. We don’t want to see our containers restarting like popcorn, failing their readiness probes because one or more dependencies are not ready yet.

That is one of the gaps the Ansible Service Broker aims to fill. Complex templating, provisioning, and binding, both in and out of OpenShift, are some of the extra capabilities most apps will need.

That leads us to a question about the path forward for distributing complex application deployments on OpenShift and Kubernetes. Will Helm 3 answer some of it? Maybe using Ansible Playbooks can help right now. That will be the topic of the second post in this series.

About the Author

Ramón Gordillo is a Senior Solution Architect at Red Hat, focused on middleware and containers. He has devoted his career to solution design, development, and implementation in application integration, API management, real-time processing, and cloud technology, particularly in the telecommunications and finance industries. In addition, he maintains a keen interest in the new challenges of microservices, serverless, and edge computing architectures, and agile development models.