Securing Kubernetes

Security is a complicated topic that has the effect of making many people’s eyes glaze over. There are of course a few people who spend the time to learn everything about it and become the security experts, but those people are rare. The great thing about a platform like OpenShift/Kubernetes is that it creates a finite space of what you have to learn. So unlike security in the wild west of computing that exists outside OpenShift/Kubernetes, if you take the time to learn the constructs provided by the platform, you can feel comfortable having a security conversation with most anyone. I’ll try and take you through the basics.


Roles

Roles are added to a user or group and provide a predefined set of policies. For example:

oadm policy add-role-to-group <role> <groupname>         # Per project
oadm policy add-cluster-role-to-group <role> <groupname> # Cluster wide

The roles everyone should probably know about are:

  • Project Level:
    • view
    • edit
    • admin
  • Cluster Level:
    • cluster-admin
    • cluster-reader

A more complete list of roles can be found in the OpenShift documentation. Note that project and cluster admins can assign project level roles. Only cluster admins can assign cluster level roles.
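To make those roles concrete, here is what granting them looks like in practice (the project names, usernames, and group names below are hypothetical):

```shell
# Grant view (read-only) on the "myproject" project to user "alice"
oadm policy add-role-to-user view alice -n myproject

# Grant admin (full control of the project) to user "bob"
oadm policy add-role-to-user admin bob -n myproject

# Grant cluster-reader (read-only across the whole cluster) to a group
oadm policy add-cluster-role-to-group cluster-reader ops-team
```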

Security Context Constraints

SCCs define what a given pod is allowed to do. If you want to restrict a pod so it can't run as root, can't mount host directories, or can't use host networking, you'll need to adjust its SCC.

Since OpenShift maintains a secure-by-default posture, you can't run containers as root. To override this setting for a particular user, you would grant them the anyuid SCC:

oadm policy add-scc-to-user anyuid <user>

Explicit access to host dirs and host network requires the privileged SCC:

oadm policy add-scc-to-user privileged <user>

Note that only cluster admins can create, modify, or grant the ability to use an SCC.
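To give you a feel for what an SCC actually contains, here is a sketch of a custom SCC resource; the name and settings are hypothetical, with field names following the OpenShift 3.x schema:

```yaml
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: restricted-but-anyuid     # hypothetical name
allowPrivilegedContainer: false   # no privileged containers
allowHostDirVolumePlugin: false   # no HostPath volumes
allowHostNetwork: false           # no host networking
runAsUser:
  type: RunAsAny                  # pods may pick their uid (including root)
seLinuxContext:
  type: MustRunAs                 # still confined by SELinux MCS labels
users: []                         # grants are added via oadm policy add-scc-to-user
groups: []
```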


For those who really need to tinker, it’s also possible for cluster admins to create an SCC with modified linux capabilities (CAP_*). Dan Walsh recently wrote a great blog post on the topic of Linux Capabilities if you want to dig a little deeper.
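As a sketch of what such tinkering looks like, an SCC that grants one extra capability and drops a couple of others might read like this (the name and capability choices are hypothetical):

```yaml
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: net-admin-scc        # hypothetical
allowedCapabilities:
- NET_ADMIN                  # e.g. for pods that manage networking on the node
requiredDropCapabilities:
- KILL
- MKNOD                      # drop capabilities the pods don't need
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
```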

Service Accounts

Service accounts are just a fancy name for a system user. If you need the system to do something for you, it’s best to separate those actions into a separate user for security and auditing purposes. Every project has some number of service accounts by default (builder, deployer, default). Service accounts should generally be limited in permissions to what they need to accomplish. Once a service account is created, it’s generally interchangeable with a user when running commands.
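A typical least-privilege flow, with a hypothetical name: create the service account, then grant it only the role it needs:

```shell
# Create a service account in the current project
oc create serviceaccount log-reader

# Grant it only the view role
# (-z means "service account in the current project")
oc policy add-role-to-user view -z log-reader
```

The `-z` shorthand saves you from typing the full `system:serviceaccount:<project>:<name>` username.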

Node Level Security

Node level security consists of 3 main pillars: network, storage, and compute.


Network

The first line of network defense comes from network namespaces, which provide isolation on the node itself. They give each pod its own IP and port range to bind to, and have the effect of isolating pod networks from each other on the node. In addition to network namespaces, the OpenShift SDN provides additional security by offering isolation between projects with the multi-tenant plugin. This means that, by default, packets from one project will not be visible to other projects. If you want to learn more about how OpenShift networking works, the OpenShift SDN documentation is fairly enlightening.
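With the multi-tenant plugin, a cluster admin can selectively relax that project isolation; the project names here are hypothetical:

```shell
# Merge the networks of two projects so their pods can talk to each other
oadm pod-network join-projects --to=frontend backend

# Make a project (e.g. one hosting shared services) reachable by all projects
oadm pod-network make-projects-global shared-services
```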


Storage

When it comes to OpenShift/Kubernetes, storage comes in multiple flavors:

  • In container
  • Secure host-based scratch space (EmptyDir)
  • Directories mounted from the host (HostPath)
  • External storage (NFS, Ceph, Gluster, iSCSI, Fibre Channel, EBS, etc.)

Of these, only HostPaths are considered privileged and potentially insecure in a multi-tenant environment. HostPaths are only safe if the privileged pods using them are 100% trusted, which generally means you should avoid them if possible. In-container storage and EmptyDirs are completely safe and fairly straightforward. The most complexity comes with securing external storage, but it's really less scary than it looks. Here are the basics:

  • Block Storage (EBS, GCE Persistent Disks, iSCSI, etc.)
    • For non privileged pods, the root of the mounted volume is chowned and chconed (it’s an SELinux thing) to match that of the fsGroup and MCS label (the SELinux things being changed by chcon) of the pod. This essentially makes the mounted volume owned by and only visible to the container it’s associated with.
  • Shared Storage (NFS, Ceph, Gluster, etc.)
    • Since chown and chcon aren’t valid options, there has to be another solution for shared storage. The trick is to have the shared storage persistent volume (PV) register its group id (gid) as an annotation on the PV resource. Then when the PV is claimed by the pod, the annotated gid will be added to the supplemental groups of the pod, and hence give that pod access to the contents of the shared storage.
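To sketch how the two cases look in practice (all names and gids below are hypothetical): the shared-storage case pairs a gid annotation on the PV with the pod's supplemental groups, while the block-storage case uses fsGroup on the pod:

```yaml
# Shared-storage PV registering its gid as an annotation
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
  annotations:
    pv.beta.kubernetes.io/gid: "5555"   # gid that owns the NFS export
spec:
  capacity:
    storage: 10Gi
  accessModes: [ReadWriteMany]
  nfs:
    server: nfs.example.com
    path: /exports/data
---
# Pod securityContext for the block-storage case
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo
spec:
  securityContext:
    fsGroup: 2000    # block volume's root is chowned to this group
  containers:
  - name: app
    image: registry.example.com/app:latest
```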

Compute (everything else)

There are multiple levels of security and a lot of history involved in securing containers on Linux. Linux namespaces, SELinux, and CGroups are three of the main players.

Linux namespaces provide the fundamentals of container isolation. All the namespaces combined are what give you the impression you are running on your own operating system when you are inside a container.

SELinux provides an additional layer of security to keep containers isolated from each other and from the host. The main thing to understand about SELinux integration with OpenShift is that, by default, OpenShift runs each container as a random uid and is isolated with SELinux MCS labels. The easiest way of thinking about MCS labels is they are a dynamic way of getting SELinux separation without having to create policy files and run restorecon.
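You can see those MCS labels for yourself on a node. On an SELinux-enabled host running containers, the process labels end in a pair of category values, something like system_u:system_r:svirt_lxc_net_t:s0:c123,c456 (the exact type and category numbers will vary by host):

```shell
# Show SELinux labels for container processes on the node
ps -eZ | grep svirt
```

Two containers with different category pairs cannot read each other's files or processes, even if they somehow end up sharing a uid.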



If you are wondering why we need SELinux and namespaces at the same time, the way I view it is that namespaces provide the nice abstraction but are not designed from a security-first perspective. SELinux is the brick wall that's going to stop you if you manage to break out of the namespace abstraction (accidentally or on purpose).

CGroups is the remaining piece of the puzzle. Its primary purpose isn't security, but I list it because it ensures that different containers stay within their allotted compute resources (CPU, memory, I/O). So without cgroups, you can't be confident your application won't be stomped on by another application on the same node.
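In pod terms, those cgroup limits are what you set via the resources block; the values here are purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bounded-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    resources:
      requests:
        cpu: 100m       # what the scheduler reserves for the container
        memory: 256Mi
      limits:
        cpu: 500m       # enforced as a cgroup cpu quota
        memory: 512Mi   # enforced as a cgroup memory limit
```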

Securing the Master

The OpenShift/Kubernetes masters are a central point of access and should receive the highest level of security scrutiny. OpenShift adds several components to Kubernetes to ensure a secure multi-tenant master:

  • All access to the master is over TLS
  • Access to the API Server is X.509 certificate or token based
    • Project quota to limit how much damage a rogue token could do
  • Etcd is not exposed directly to the cluster
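The quota bullet above translates to a ResourceQuota object set per project; the numbers here are purely illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
spec:
  hard:
    pods: "20"     # cap on the number of pods in the project
    cpu: "4"       # total cpu the project's pods may consume
    memory: 8Gi    # total memory the project's pods may consume
```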


That’s it. Now you know everything you need to know about security. Well, maybe not everything, but you should have the basics. And knowing the basics at least lets everyone take part in the conversation and have a chance of asking the right questions.



