OpenShift Router Sharding for Production and Development Traffic

As an OpenShift cluster grows to help developers deliver features and applications faster, efficient resource use becomes a central concern. Many OpenShift rollouts start with at least two physical clusters, one for development and one for production. While this has the perceived advantage of simplifying the mental model and some administrative aspects, OpenShift’s automated resilience, enhanced access controls, and advanced routing features can logically isolate dev and prod environments on a single cluster. The Kubernetes scheduler at OpenShift’s core maximizes the density of applications running on each node. With additional configuration, further density and compute savings are possible through features like Quality of Service (QoS) classes.

This post shows how to deploy isolated development and production versions of an application on a single cluster by redirecting requests to the appropriate environment. If engineers are benchmarking and load testing a development application, it should not impact production traffic or production performance. For this reason, we will deploy “sharded” routing in an OpenShift cluster, creating multiple routers for particular purposes.

Cluster and Router Architecture

This is our high level cluster and router architecture:

In this cluster, we do not dedicate nodes to the different production and dev workloads. We just split traffic by routing requests appropriately.

Global Load Balancer: HAProxy

The global load balancer, at the top of the architecture diagram, runs haproxy with the following key lines of configuration. The SNI substrings to match were not preserved in this version of the post; the `apps-dev` and `apps-prod` values below are placeholders, so substitute the subdomains of your own cluster:

acl host_router_dev req.ssl_sni -m sub -i apps-dev.example.com
acl host_router_prod req.ssl_sni -m sub -i apps-prod.example.com

use_backend atomic-openshift-router-dev if host_router_dev
use_backend atomic-openshift-router-prod if host_router_prod

(The complete haproxy configuration can be found here.)
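Those acl and use_backend lines only take effect inside a full SNI-based TCP frontend. The sketch below shows one way the surrounding configuration might look; the subdomains and backend IP addresses are placeholder assumptions, not values from the original post:

```haproxy
frontend main
    bind *:443
    mode tcp
    option tcplog
    # Inspect the TLS ClientHello so req.ssl_sni is populated,
    # without terminating TLS at this tier (passthrough).
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }

    acl host_router_dev  req.ssl_sni -m sub -i apps-dev.example.com
    acl host_router_prod req.ssl_sni -m sub -i apps-prod.example.com

    use_backend atomic-openshift-router-dev  if host_router_dev
    use_backend atomic-openshift-router-prod if host_router_prod

backend atomic-openshift-router-prod
    mode tcp
    balance source
    server router-prod-1 10.0.1.11:443 check
    server router-prod-2 10.0.1.12:443 check

backend atomic-openshift-router-dev
    mode tcp
    server router-dev-1 10.0.1.21:443 check
```

Because the frontend runs in TCP mode, TLS is terminated by the OpenShift routers themselves, not by the global load balancer.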

OpenShift Routers

As shown in the architecture chart, different subdomains are sent to different OpenShift Routers. We deploy 3 routers: 2 for Production, and 1 for Development. Routers are deployed with the OpenShift command-line API tool, oc. The oc adm router subcommands are used to manipulate router deployments:

oc adm router router-prod --replicas=2 --force-subdomain='${name}-${namespace}.apps-prod.example.com'
oc adm router router-dev --replicas=1 --force-subdomain='${name}-${namespace}.apps-dev.example.com'

We use --force-subdomain to force a separate subdomain for each router. (The domain suffixes were stripped from this version of the post; the example.com values above are placeholders for your own wildcard domains.)

Next, we will dedicate each router deployment to serve traffic to only a subset of namespaces:

oc set env dc/router-prod NAMESPACE_LABELS="router=prod"
oc set env dc/router-dev NAMESPACE_LABELS="router=dev"
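With these environment variables set, each router admits routes only from namespaces whose labels match its selector. A quick sanity check of the filter (a sketch; it requires a live cluster and the deployment names used above) might look like:

```shell
# List the environment of each router deployment and confirm
# the namespace-label filter is set as expected.
oc set env dc/router-prod --list | grep NAMESPACE_LABELS
oc set env dc/router-dev --list | grep NAMESPACE_LABELS
```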

We ensure that our routers are running on their dedicated nodes by labeling those nodes for the router deployments to select. (The node names were not preserved in this version of the post; the hostnames below are placeholders.)

oc label node infra-node-1.example.com router=prod
oc label node infra-node-2.example.com router=prod
oc label node infra-node-3.example.com router=dev

# Patch the router deployments to add a matching node selector:
oc patch dc router-dev -p '{"spec":{"template":{"spec":{"nodeSelector":{"router":"dev"}}}}}'

oc patch dc router-prod -p '{"spec":{"template":{"spec":{"nodeSelector":{"router":"prod"}}}}}'
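Once the deployments roll out, the router pods should be rescheduled onto their dedicated nodes. One way to verify placement (assuming the default router=&lt;name&gt; pod labels that oc adm router applies):

```shell
# Show which node each router pod landed on.
oc get pods -l router=router-prod -o wide
oc get pods -l router=router-dev -o wide

# Cross-check the router label on each node.
oc get nodes -L router
```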

OpenShift Namespace, Project, and Application Labels and Selectors

With router and node labels set, we create OpenShift projects for prod and dev, labeling them accordingly.

oc new-project prod
oc label namespace prod router=prod
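Before creating any applications, you can confirm that the label landed on the namespace and will match the router's NAMESPACE_LABELS selector:

```shell
# Display the labels on the prod namespace; expect router=prod among them.
oc get namespace prod --show-labels
```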

Now, let’s test our configuration with a new OpenShift application in the prod project we just created and labeled:

# oc new-app cakephp-mysql-example
# oc get route
NAME                    HOST/PORT                                                   PATH      SERVICES                PORT      TERMINATION   WILDCARD
cakephp-mysql-example            cakephp-mysql-example   web       edge          None

Notice that the route already uses the apps-prod subdomain, for production workloads.

Let’s test the same routing pattern for development environments.

# oc new-project dev
# oc label namespace dev router=dev 
# oc new-app cakephp-mysql-example
# oc get route
NAME                    HOST/PORT                                                 PATH      SERVICES                PORT      TERMINATION   WILDCARD
cakephp-mysql-example           cakephp-mysql-example   web       edge          None

Next, we test both URLs as a client, using curl:

# curl -sSLk -D - -o /dev/null
HTTP/1.1 200 OK
Date: Wed, 11 Apr 2018 13:27:54 GMT
Server: Apache/2.4.27 (Red Hat) OpenSSL/1.0.1e-fips
Content-Length: 64467
Content-Type: text/html; charset=UTF-8
Set-Cookie: 1b15022d32bdaf178e4bb662559c535f=9b517482994c000cd2b19fe8ca6174e2; path=/; HttpOnly; Secure
Cache-control: private

# curl -sSLk -D - -o /dev/null
HTTP/1.1 200 OK
Date: Wed, 11 Apr 2018 13:28:46 GMT
Server: Apache/2.4.27 (Red Hat) OpenSSL/1.0.1e-fips
Content-Length: 64484
Content-Type: text/html; charset=UTF-8
Set-Cookie: 2e2307f4645f03dde968155c002d6b44=8316f5abdc2526e8edcb1b110e430325; path=/; HttpOnly; Secure
Cache-control: private


By splitting traffic, we ensure that we can meet our application Service Level Agreements (SLAs) while reducing the number of clusters we have to manage and the resources consumed. By having everything in one cluster, we can save on hardware, people resources, and maintenance costs.

Before you move toward a “one cluster” architecture, ask yourself whether you and your organization are ready for it.


This post is a simplified version of the OpenShift router documentation.

Special thanks to @noeloc.
