In the Kubernetes namespace model, the high-level idea is that a development team is given access to a namespace. Within the confines of that sandbox, they have the freedom to perform any action they desire.

There are, however, a set of namespaced objects whose ownership is not so immediately clear.

For example, while it’s obvious that a dev team will need to create deployments, services, pods, and so on, it is not obvious who should own the quota or network policy objects.

In fact, I think it’s fair to say that when discussing this type of ownership with my customers, the decision to prohibit developers from accessing those objects almost always prevails. At that point, two problems naturally arise:

  1. Which team should own those objects?
  2. What should the process of provisioning those objects to the development team look like?

We can clearly see here the risk of sliding back to a model where, in order to be onboarded on Red Hat OpenShift, a dev team has to open a bunch of tickets to have OpenShift objects provisioned by a different team. This would defeat two of the objectives we are almost always trying to achieve when we deploy OpenShift: self-service and increased development speed.

Traditionally, the answer to this problem has been to create an automated onboarding process that takes care of creating the needed namespace-scoped objects when new projects are created. This can be implemented with an external agent that creates the objects based on a desired configuration. In other cases, it can be implemented by changing the default project template that OpenShift uses to create new projects.

These approaches share a limitation: the configuration is applied at provisioning time and then never modified again. The project template approach has a further limitation: there can be only one template cluster-wide, which rules out configuring namespaces differently from one another.
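For context, in OpenShift 3.x the project template approach is wired up through a single cluster-wide setting in the master configuration, which is why per-namespace variation is impossible with it (path shown for illustration):

```yaml
# /etc/origin/master/master-config.yaml (OpenShift 3.x)
projectConfig:
  # Only one template can be set for the whole cluster;
  # every new project receives the same set of objects.
  projectRequestTemplate: "default/project-request"
```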

I always felt that we needed a way to define namespace configurations in a composable way, i.e. it would be nice to be able to say that namespace A has configuration #1 and #4, but not #2 and #3.

Also, we need a way to know that configurations are enforced beyond the creation time.

The Namespace Configuration Controller

Continuously enforcing the presence of a certain set of objects within a namespace seems to be exactly what most of the Kubernetes controllers do. So why not build a controller to allow the cluster administrator to specify namespace configurations?
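The core of such a controller is the standard reconcile pattern: compare the desired objects against what actually exists and converge the cluster toward the desired state. Here is a minimal sketch in Python (illustrative only; the names and the in-memory `cluster` store are hypothetical stand-ins for the Kubernetes API):

```python
# Minimal reconcile sketch: make the cluster state match the desired state.
# `cluster` stands in for the Kubernetes API; keys are (namespace, kind, name).

def reconcile(cluster, namespace, desired_objects):
    """Create or update every desired object in the given namespace."""
    for obj in desired_objects:
        key = (namespace, obj["kind"], obj["metadata"]["name"])
        if cluster.get(key) != obj:
            # Missing or drifted: (re)apply the desired definition.
            cluster[key] = obj

# A quota that drifted from its desired definition is converged back
# on the next reconcile pass.
desired = [{"kind": "ResourceQuota",
            "metadata": {"name": "small-size"},
            "spec": {"hard": {"requests.cpu": "4"}}}]
cluster = {("team-a", "ResourceQuota", "small-size"):
           {"kind": "ResourceQuota",
            "metadata": {"name": "small-size"},
            "spec": {"hard": {"requests.cpu": "16"}}}}
reconcile(cluster, "team-a", desired)
```

Because the loop runs continuously, this is also what guarantees that configurations are enforced beyond creation time.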

The namespace configuration is a custom resource that defines a set of objects along with a label selector that determines the namespace or namespaces to which it must be applied.

This way, multiple configurations can be applied to the same namespace (achieving composability), and a single configuration can be applied to a subset of namespaces (supporting, for example, different SLAs).
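Selection is plain Kubernetes label matching: a namespace is selected when it carries every label in a configuration's `matchLabels`. A quick illustration in Python (not the controller's actual code):

```python
def selected(match_labels, namespace_labels):
    """True if every selector label is present on the namespace with the same value."""
    return all(namespace_labels.get(k) == v for k, v in match_labels.items())

# Namespace A opts into both the "size: small" and "multitenant" configurations
# via its labels, while the same "size: small" configuration also matches
# namespace B -- composability in both directions.
ns_a = {"size": "small", "multitenant": "true"}
ns_b = {"size": "small"}

assert selected({"size": "small"}, ns_a) and selected({"size": "small"}, ns_b)
assert selected({"multitenant": "true"}, ns_a)
assert not selected({"multitenant": "true"}, ns_b)
```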

The set of objects controlled by the namespace configuration controller currently includes:

  * ResourceQuotas
  * LimitRanges
  * RoleBindings
  * ClusterRoleBindings
  * ServiceAccounts
  * ConfigMaps
  * PodPresets
  * NetworkPolicies

This is an initial list that I arbitrarily chose because it makes sense to me. More objects will likely be added in the future.

Note: While one approach would have been to utilize PodPresets, they were deprecated as of OpenShift 3.7 and the feature was completely removed in 3.11.

So, assuming the correct namespace configurations are in place, the workflow to provision a new project to a dev team looks like the following:

  1. Create the project
  2. Label the project so it can be selected by the correct namespace configurations.
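Equivalently, the label can be set declaratively in the namespace manifest itself (the name and label here are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    # Opts this namespace into every NamespaceConfig whose selector matches.
    size: small
```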

An example of the NamespaceConfig custom resource looks as follows:

apiVersion: namespaceconfig.raffaelespazzoli.systems/v1alpha1
kind: NamespaceConfig
metadata:
  name: example-namespaceconfig
spec:
  selector:
    matchLabels:
      namespaceconfig: "true"
  networkpolicies:
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
  configmaps: []
  podpresets: []
  quotas: []
  limitranges: []
  rolebindings: []
  clusterrolebindings: []
  serviceaccounts: []

As we can see, the custom resource comprises a selector, which selects the namespaces to which this configuration will be applied, and arrays of the objects that need to be created in the selected namespaces. Objects should be specified without the namespace field; if a namespace field is provided, it will be overwritten with the name of the namespace to which the configuration is being applied.

The Namespace Configuration Controller repository contains instructions on how to install the controller along with the associated resources.

Configuration Examples

Here are a few use cases from my personal experience where the Namespace Configuration Controller would have proven useful. More examples can be found in the project repository.

T-Shirt Sized Quotas

An OpenShift onboarding process should take care of creating projects with proper quotas.

Often it is difficult to know what quota a project will really need, so it is a good compromise to start with one of a set of predefined quotas (also known as T-Shirt sized quotas).

We can define namespace configuration to represent the T-Shirt quotas, and then create new namespaces with a label that represents the desired quotas. Here is a sample configuration:

apiVersion: namespaceconfig.raffaelespazzoli.systems/v1alpha1
kind: NamespaceConfig
metadata:
  name: small-size
spec:
  selector:
    matchLabels:
      size: small
  quotas:
  - apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: small-size
    spec:
      hard:
        requests.cpu: "4"
        requests.memory: "2Gi"
---
apiVersion: namespaceconfig.raffaelespazzoli.systems/v1alpha1
kind: NamespaceConfig
metadata:
  name: large-size
spec:
  selector:
    matchLabels:
      size: large
  quotas:
  - apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: large-size
    spec:
      hard:
        requests.cpu: "8"
        requests.memory: "4Gi"

We now have two configurations with different quotas. To apply them, we can simply execute the following:

oc apply -f examples/tshirt-quotas.yaml
oc new-project large-project
oc label namespace large-project size=large
oc new-project small-project
oc label namespace small-project size=small

Default Network Policy

When a new namespace is created with the networkpolicy SDN plugin enabled, full inbound and outbound traffic is allowed by default.

We can use the Namespace Configuration Controller to enforce a custom initial set of default network policy rules.

In the example below, we create a namespace configuration that enforces the same rules as the multitenant SDN plugin:

apiVersion: namespaceconfig.raffaelespazzoli.systems/v1alpha1
kind: NamespaceConfig
metadata:
  name: multitenant
spec:
  selector:
    matchLabels:
      multitenant: "true"
  networkpolicies:
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-same-namespace
    spec:
      podSelector: {}
      ingress:
      - from:
        - podSelector: {}
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-default-namespace
    spec:
      podSelector: {}
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              name: default

To deploy and test this configuration, we can simply run:

oc apply -f examples/multitenant-networkpolicy.yaml
oc new-project multitenant-project
oc label namespace multitenant-project multitenant=true

Service Account with special permissions

Another way to use the Namespace Configuration Controller is to initialize a namespace with a service account that has a set of special permissions, without needing to grant those permissions to the users of that namespace.

Here is an example where permissions to pull from and push to the registry are granted to a service account.

apiVersion: namespaceconfig.raffaelespazzoli.systems/v1alpha1
kind: NamespaceConfig
metadata:
  name: special-sa
spec:
  selector:
    matchLabels:
      specialsa: "true"
  serviceaccounts:
  - apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: special-sa
  rolebindings:
  - apiVersion: authorization.openshift.io/v1
    kind: RoleBinding
    metadata:
      name: special-sa-rb
    roleRef:
      name: registry-editor
    subjects:
    - kind: ServiceAccount
      name: special-sa
  clusterrolebindings:
  - apiVersion: authorization.openshift.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: special-sa-crb
    roleRef:
      name: registry-viewer
    subjects:
    - kind: ServiceAccount
      name: special-sa

Notice that we have to grant the namespace configuration controller's service account those permissions in order for it to be able to grant them.

Here is how we can apply this configuration:

oc adm policy add-cluster-role-to-user registry-editor -n namespace-configuration-controller -z namespace-configuration-controller
oc adm policy add-cluster-role-to-user registry-viewer -n namespace-configuration-controller -z namespace-configuration-controller
oc apply -f examples/serviceaccount-permissions.yaml
oc new-project special-sa
oc label namespace special-sa specialsa=true

Conclusion

When we create a namespace in Kubernetes or OpenShift, a set of default configurations is applied. In some cases, these defaults do not meet our requirements. With the Namespace Configuration Controller, we can create more flexible configurations that are applied at namespace creation time, and thus create more flexible namespaces for developers. Because the operator-based approach continuously converges the current state to the desired state, the Namespace Configuration Controller can also be used to enforce new configurations on existing namespaces.

 


About the author

Raffaele is a full-stack enterprise architect with 20+ years of experience. Raffaele started his career in Italy as a Java Architect then gradually moved to Integration Architect and then Enterprise Architect. Later he moved to the United States to eventually become an OpenShift Architect for Red Hat consulting services, acquiring, in the process, knowledge of the infrastructure side of IT.
