
Some time ago we talked about how Federation V2 on Red Hat OpenShift 3.11 enables users to spread their applications and services across multiple locales or clusters. Federation V2 is a fast-moving project, and a lot has changed since our last blog post. Among those changes, Federation V2 has been renamed to KubeFed, and we have released OpenShift 4.

In today's blog post we are going to look at KubeFed from an OpenShift 4 perspective, as well as show you a stateful demo application deployed across multiple clusters connected with KubeFed.

There are still some unknowns around KubeFed, specifically in storage and networking. We are evaluating different solutions because we want to deliver a top-notch product for managing your clusters across multiple regions and clouds in a clear and user-friendly way. Stay tuned for more information to come!

KubeFed on OpenShift 4

In this video, Chandler Wilkerson covers the main components of KubeFed, the vocabulary related to KubeFed, and some of the use cases KubeFed enables.

Merging Kubeconfig Configurations

During the next demos, a flattened Kubeconfig file will be used.

In this video, Chandler Wilkerson covers how to create a flattened Kubeconfig file from multiple Kubeconfig files.
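
For reference, here is a minimal sketch of that technique using the standard KUBECONFIG merging behaviour. The API URLs and file names below are assumptions based on the cluster names used later in this post; adjust them to your environment.

    # Log in to each cluster, writing its credentials to a separate kubeconfig file
    oc login https://api.east1.sysdeseng.com:6443 --kubeconfig=east1-kubeconfig
    oc login https://api.east2.sysdeseng.com:6443 --kubeconfig=east2-kubeconfig
    oc login https://api.west2.sysdeseng.com:6443 --kubeconfig=west2-kubeconfig

    # Merge the three files into a single flattened kubeconfig
    KUBECONFIG=east1-kubeconfig:east2-kubeconfig:west2-kubeconfig \
      oc config view --flatten > merged-kubeconfig

    # Use the merged file and list the available contexts
    export KUBECONFIG=$PWD/merged-kubeconfig
    oc config get-contexts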

Stateful Demo Application

Our main application is built in Node.js. It consists of a Pacman game that stores high scores, along with some information about the cloud where the app is running, in a MongoDB database.

We will have 3 different OpenShift 4 clusters distributed across the US:

  • Cluster 1
    • Cloud / Region: AWS - east-1
    • Name: east1
  • Cluster 2
    • Cloud / Region: AWS - east-2
    • Name: east2
  • Cluster 3
    • Cloud / Region: AWS - west-2
    • Name: west2

We will have one Pacman and one MongoDB replica per cluster. The Pacman replicas will be independent, while the three MongoDB replicas will be configured as a MongoDB ReplicaSet, so the storage will be replicated using MongoDB primitives.

On the networking side, the three clusters are independent of each other; no network connection has been configured between them. The only way to reach one cluster from another is through OpenShift Routes.

MongoDB ReplicaSet Architecture


  • There is a MongoDB pod running on each OpenShift 4 cluster, and each pod has its own storage (provisioned by a block-type StorageClass).
  • The MongoDB pods are configured as a MongoDB ReplicaSet, and all communication between them uses TLS.
  • The OpenShift Routes are configured with passthrough termination, since the connections must remain plain TLS (no HTTP headers involved).
  • The MongoDB pods talk to each other through the OpenShift Routes (the OCP nodes where the MongoDB pods run must be able to reach the other clusters' OpenShift routers).

How the MongoDB ReplicaSet Is Configured

Each OpenShift cluster has a MongoDB pod running. There is an OpenShift service which routes the traffic received on port 27017/TCP to the MongoDB pod's port 27017/TCP.

An OpenShift route is created on each cluster with passthrough termination (the HAProxy router won't add HTTP headers to the connections). Each route listens for connections on port 443/TCP and proxies the traffic it receives to the MongoDB service on port 27017/TCP.
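
As an illustration, a minimal sketch of what the MongoDB Service and passthrough Route could look like on the east1 cluster follows; the object names and labels are assumptions, and the hostname matches the one used later in this post.

    apiVersion: v1
    kind: Service
    metadata:
      name: mongo
    spec:
      selector:
        name: mongo
      ports:
      - name: mongo
        port: 27017
        targetPort: 27017
    ---
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: mongo
    spec:
      host: mongo-east1.apps.east1.sysdeseng.com
      to:
        kind: Service
        name: mongo
      port:
        targetPort: mongo
      tls:
        termination: passthrough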

The MongoDB pods have been configured to use TLS, so all connections will be made over TLS (the MongoDB pods use a TLS certificate with the route hostnames, as well as the service hostnames, configured as SANs).
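
As a quick, hedged example, such a certificate could be generated with openssl as shown below. This produces a self-signed certificate for illustration only (the demo may well use a proper CA), and the service hostname assumes a "mongo" service in a "mongo" namespace.

    # Requires OpenSSL 1.1.1 or newer for the -addext option
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -keyout mongodb.key -out mongodb.crt -subj "/CN=mongo" \
      -addext "subjectAltName=DNS:mongo-east1.apps.east1.sysdeseng.com,DNS:mongo-east2.apps.east2.sysdeseng.com,DNS:mongo-west2.apps.west2.sysdeseng.com,DNS:mongo.mongo.svc.cluster.local"

    # MongoDB expects the key and certificate concatenated in a single PEM file
    cat mongodb.key mongodb.crt > mongodb.pem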

Once the three pods are up and running, the MongoDB ReplicaSet is configured using the OpenShift route hostnames as the MongoDB endpoints (a sketch of the initiation step follows the list below).

We will have something like this:

  • ReplicaSet Primary Member: mongo-east1.apps.east1.sysdeseng.com:443
  • ReplicaSet Secondary Members: mongo-east2.apps.east2.sysdeseng.com:443, mongo-west2.apps.west2.sysdeseng.com:443
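
The initiation step could look roughly like the sketch below, run from a mongo shell connected to the east1 member. The ReplicaSet name "rs0" is an assumption; the hosts are the routes listed above.

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongo-east1.apps.east1.sysdeseng.com:443" },
        { _id: 1, host: "mongo-east2.apps.east2.sysdeseng.com:443" },
        { _id: 2, host: "mongo-west2.apps.west2.sysdeseng.com:443" }
      ]
    })

    // Confirm that a primary has been elected and the other members are syncing
    rs.status()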

In the event that one of the MongoDB pods fails or stops, the MongoDB ReplicaSet will reconfigure itself, and once a new pod replaces the failed one, the ReplicaSet will reconfigure that pod and include it as a member again. In the meantime, MongoDB's majority-based election process determines whether the ReplicaSet stays read-write or becomes read-only (a primary can only be elected while a majority of members is reachable).

Deploying the MongoDB ReplicaSet

In this video, Ryan Cook covers the deployment of the federated MongoDB ReplicaSet using KubeFed primitives and tooling.
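
As a rough sketch of the tooling involved (not a substitute for the video), registering the three clusters with the KubeFed control plane could look like the commands below. This assumes kubeconfig contexts named east1/east2/west2, that east1 hosts the control plane, and that the CLI binary is kubefedctl (earlier releases shipped it as kubefed2).

    kubefedctl join east1 --cluster-context east1 --host-cluster-context east1 --v=2
    kubefedctl join east2 --cluster-context east2 --host-cluster-context east1 --v=2
    kubefedctl join west2 --cluster-context west2 --host-cluster-context east1 --v=2

    # Verify the clusters were registered (the namespace depends on how KubeFed was installed)
    oc get kubefedclusters -n kube-federation-system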

NOTE: The content of this video corresponds to the KubeFed v0.0.10 release, the current release at the time of this writing. Things are evolving rapidly, so changes are expected.

Demo Application Architecture

  • There is a Pacman pod running on each OpenShift 4 cluster, and each pod can connect to any MongoDB pod.
  • The MongoDB connection protocol determines which MongoDB pod Pacman connects to, based on which pod is currently acting as the MongoDB ReplicaSet primary member.
  • There is an external DNS name pacman.sysdeseng.com that points to a custom load balancer, HAProxy in this case.
  • The HAProxy load balancer is configured to distribute incoming traffic across the three different Pacman pods in a Round Robin fashion (a minimal haproxy.cfg sketch follows this list).
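
The sketch below shows only the relevant haproxy.cfg section; the Pacman route hostnames are assumptions that follow the naming pattern used elsewhere in this post, and a defaults section with mode http and timeouts is assumed to exist.

    frontend pacman
        bind *:80
        default_backend pacman_backend

    backend pacman_backend
        balance roundrobin
        server east1 pacman.apps.east1.sysdeseng.com:80 check
        server east2 pacman.apps.east2.sysdeseng.com:80 check
        server west2 pacman.apps.west2.sysdeseng.com:80 check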

Demo Application Workflow

  1. The user connects to pacman.sysdeseng.com and is presented with a Pacman game.
  2. The user can play and share their high score.
  3. If the user saves a high score, the Pacman application connects to the primary member of the MongoDB ReplicaSet and stores the score (an example connection string follows this list).
  4. The user can see the high score list, which will be gathered by the Pacman application from any MongoDB pod.
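
For illustration, the kind of MongoDB connection string that lets the driver find the current primary looks like the one below; the "pacman" database name, the credentials placeholder, and the ReplicaSet name are assumptions.

    # The driver connects to all three members, discovers which one is primary, and sends writes to it
    mongodb://pacman:<password>@mongo-east1.apps.east1.sysdeseng.com:443,mongo-east2.apps.east2.sysdeseng.com:443,mongo-west2.apps.west2.sysdeseng.com:443/pacman?replicaSet=rs0&ssl=true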

Deploying the Pacman Application

In this video, Ryan Cook covers the deployment of the federated Pacman application using KubeFed primitives and tooling.
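
As a very rough, hedged sketch of the KubeFed pattern involved: a federated resource wraps an ordinary template, plus a placement section that selects the member clusters. The exact apiVersion and placement field names have changed between KubeFed releases, so check the CRDs installed by your version; the image name is a placeholder.

    apiVersion: types.kubefed.k8s.io/v1beta1   # assumption: group/version may differ in your KubeFed release
    kind: FederatedDeployment
    metadata:
      name: pacman
      namespace: pacman
    spec:
      template:                  # an ordinary Deployment spec, stamped out on each selected cluster
        metadata:
          labels:
            app: pacman
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: pacman
          template:
            metadata:
              labels:
                app: pacman
            spec:
              containers:
              - name: pacman
                image: pacman-nodejs-app:latest   # placeholder image name
      placement:
        clusters:                # assumption: some releases use clusterNames instead
        - name: east1
        - name: east2
        - name: west2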

NOTE: The content of this video corresponds to the KubeFed v0.0.10 release, the current release at the time of this writing. Things are evolving rapidly, so changes are expected.

Next Steps

  • As mentioned earlier in the post, KubeFed is fast evolving, so make sure to check out the GitHub repository from time to time to stay updated.
  • If you want to learn more about KubeFed you can go through this Katacoda Scenario which will teach you the basics of KubeFed.
  • A rename of the Federation Operator is imminent.
  • The operator will be changed to deploy KubeFed in response to an API resource (similar to how the etcd and Prometheus operators work).
