This guest post was written by Andrey Belik, Dana Groce, and Jason Mimick of MongoDB
The newest release of Red Hat OpenShift, version 3.11, introduces Kubernetes Operator support. Operators, along with Custom Resource Definitions, allow OpenShift to be extended to support custom types and manage complex services. The Kubernetes MongoDB Enterprise Operator (Beta), in conjunction with MongoDB Ops Manager, supports provisioning and lifecycle management for multiple MongoDB Enterprise clusters. An OpenShift user can more easily deploy MongoDB replica sets or sharded clusters, perform upgrades to future versions, and change configurations directly from the standard Kubernetes APIs or from tooling (such as kubectl). The Kubernetes MongoDB Enterprise Operator is available in Dev Preview as an optional install in OpenShift 3.11.
MongoDB and Red Hat are collaborating to help our customers modernize applications and automate infrastructure management. MongoDB’s integration with OpenShift allows customers to run backend services in the same way as other stateless application services. We’re excited to announce that the next version of OpenShift ships with a new integration which makes provisioning and managing data services easier.
The MongoDB Enterprise Kubernetes Operator (MongoDB Operator) allows for easier deployments of different MongoDB Enterprise database configurations inside the OpenShift platform. MongoDB standalone, replica sets, or complex sharded clusters can be deployed for your applications in minutes. Management tools like MongoDB Ops Manager or MongoDB Cloud Manager work in conjunction with the Operator to provide management, monitoring, and backup capabilities.
The MongoDB Enterprise Operator can be configured in your environment and provides Kubernetes-native management capabilities. The Operator defines new Custom Resource Definitions (CRDs), handles lifecycle events (such as scaling), and manages MongoDB running in pods.
MongoDB Operator overview
The MongoDB Operator enables OpenShift to manage typical lifecycle events for MongoDB that have strict policies on data persistence and management. The Operator handles the creation of MongoDB enterprise pods, coordinates configuration of MongoDB deployments with Ops Manager, and orchestrates MongoDB configuration changes — accomplished through the Kubernetes API, declarative configuration in YAML, or other tooling.
The MongoDB Operator works together with MongoDB Ops Manager, which in turn applies final configurations to MongoDB clusters. When MongoDB is deployed and running in OpenShift, there are a number of tasks that do not relate to Kubernetes operations, such as monitoring, fine-tuning database performance, backups, and index management. You can manage these tasks in Ops Manager.
This division of responsibilities allows administrators to configure resource types and access permissions in OpenShift, developers to deploy the MongoDB database the same way they deploy the rest of their application services, and DBAs to work within the familiar Ops Manager interface to keep MongoDB running at optimal performance.
In this post, we’ll review how you can use this exciting new integration between MongoDB and Red Hat which allows you to scale out your enterprise data service needs with greater ease and confidence.
Let’s start with a high level representation of our architecture.
Figure 1: Overall Kubernetes Architecture
This diagram depicts the fundamental components:
- OpenShift and underlying Kubernetes cluster
- MongoDB Ops Manager
- MongoDB Enterprise Operator
- 3 Custom Resource Definitions which model different kinds of MongoDB deployments
- A ConfigMap and Secret which contain metadata for connection to MongoDB Ops Manager
- YAML file definition for a MongoDB replica set
- Resulting StatefulSet and Pods actually running MongoDB
Logically, one can think of the MongoDB Operator as a lightweight agent whose duty is to listen for events happening within the OpenShift cluster concerning the MongoDB resources. When something happens, such as an “oc create“, the operator is notified by the Kubernetes control plane and acts accordingly, calling the appropriate Kubernetes APIs and MongoDB Ops Manager APIs to “create” your MongoDB deployment. In this way, the operator acts as a proxy for these APIs and, most importantly, can handle the complex logic around deploying production-grade MongoDB clusters.
An instance of MongoDB Ops Manager is required in order to use the MongoDB Operator. Eventually, we plan to ship a containerized version of Ops Manager which is designed to allow a simple deployment directly into your OpenShift cluster. For now, you can get started through the documentation for installing a test MongoDB Ops Manager instance. When installing, make sure the containers running in your OpenShift cluster have network access to your instance of MongoDB Ops Manager.
MongoDB Ops Manager is used by the Operator to perform MongoDB configurations. Ops Manager itself provides a logical hierarchical structure for your MongoDB deployments. This hierarchy consists of Organizations and Projects. Each Organization can contain multiple Projects. Each project can contain multiple MongoDB deployments. However, the database-level security settings for each MongoDB deployment are defined at the Project level. This is an important detail and means that the MongoDB deployments within a given project will share the same security settings. This applies to database user authentication and authorization and SSL/TLS settings (it does not apply to MongoDB Enterprise encryption-at-rest settings).
To get started, install MongoDB Ops Manager and create an Organization. You’ll also need to configure access to the Ops Manager API by creating an API key and opening up the API Whitelist. Please refer to the links given for full details.
The next step is to create two objects in OpenShift which allow the MongoDB Operator to connect to Ops Manager. A Secret holds a set of credentials (an Ops Manager username and API key) and allows the operator to access the Ops Manager API. A ConfigMap is used to define the URL for your Ops Manager instance and to reference the name of the Ops Manager Project you wish to associate with a given MongoDB deployment running in OpenShift.
The Secret should be created as follows, with a user field containing a valid Ops Manager user ID and a publicApiKey created in Ops Manager.
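A minimal sketch of such a Secret, assuming the operator watches a namespace called mongodb; the object name, namespace, and credential values are placeholders you should substitute with your own:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-credentials    # placeholder; referenced by the MongoDB resources you deploy later
  namespace: mongodb      # placeholder; the namespace the operator watches
stringData:
  user: someuser@example.com        # a valid Ops Manager user ID
  publicApiKey: my-public-api-key   # the API key generated in Ops Manager
```

Using stringData lets you supply the values in plain text; OpenShift base64-encodes them into the data field when the object is created.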
Next, create a ConfigMap with a projectName and a baseUrl, and optionally the ID of an existing Ops Manager Organization (the object name, namespace, and URL here are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project
  namespace: mongodb
data:
  projectName: myProjectName
  baseUrl: https://my-ops-manager-host:8080
  orgId: <orgId> # Optional
```
Create both of these objects in OpenShift and you’re ready to start using the MongoDB Operator.
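The Secret and ConfigMap described above can be created from YAML files with the oc CLI; the filenames and object names here are illustrative, so substitute your own:

```shell
# Create the Secret and ConfigMap in the namespace the operator watches.
oc create -f my-credentials.yaml
oc create -f my-project.yaml

# Verify that both objects exist:
oc get secret my-credentials -n mongodb
oc get configmap my-project -n mongodb
```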
In the latest release, OpenShift 3.11, the MongoDB Operator is available in the Service Catalog of operators.
Installing the MongoDB Operator
Before installing an instance of the MongoDB Operator, here are some technical details about how it operates. The MongoDB Operator works at a namespace level and does not require cluster-admin role access. The CRDs (Custom Resource Definitions) are installed into the cluster separately from the Operator itself, allowing finer-grained control by OpenShift administrators. Therefore, the CRDs should be installed with cluster-admin permissions by cluster administrators before the operator is installed.
Note: It is possible to install the operator at a cluster level. However, OpenShift cluster administrators should limit operator scope to a single namespace in production clusters for better security configuration.
The MongoDB Operator defines three new custom resources: MongoDbStandalone, MongoDbReplicaSet, and MongoDbShardedCluster.
Let’s install the MongoDB Operator. Visit the Cluster Service Versions screen and navigate to “Certified Operators”, where the MongoDB Operator is available. Find the MongoDB Operator in the list and press “Create”, check the parameters in the Cluster Service Version definition, and create an instance of the operator inside a namespace of your choice.
Example replica set deployment
When OpenShift has the MongoDB Operator configured, we can deploy any of the three MongoDB cluster configurations. Let’s try deploying a MongoDB replica set. In this configuration, we expect to get a StatefulSet with 3 pods deployed (1 primary and 2 secondaries), providing HA for MongoDB. In the UI, we navigate to the Cluster Service Versions page and choose MongoDB Operator.
Next, we get a landing page with extra information, including configuration instructions. The YAML configuration can be inspected, and a list of existing deployed instances of MongoDB can be seen in the Instances tab.
Let’s go ahead and click the “Create New” button and choose “MongoDB Replica Set” (one of the three custom resources managed by the Operator). We get a screen with a YAML file that allows us to configure the new instance. At a minimum, we need to give a user-friendly name in the metadata.name field and press “Create”.
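As a sketch, a minimal replica set definition for the Beta operator looks roughly like the following. The resource name, namespace, and the project/credentials references are placeholders, and exact field names can differ between operator versions, so check the operator documentation for your release:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: my-replica-set
  namespace: mongodb          # the namespace the operator watches
spec:
  members: 3                  # one primary and two secondaries
  version: 4.0.0              # MongoDB server version to deploy
  project: my-project         # name of the ConfigMap pointing at Ops Manager
  credentials: my-credentials # name of the Secret holding the API credentials
```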
That’s it. From this point, OpenShift, the MongoDB Operator and Ops Manager will orchestrate the deployment of a MongoDB Replica Set. In just a few minutes the replica set will be available to use.
This was a quick post about the new OpenShift and MongoDB integration using the MongoDB Enterprise Operator. We have covered the MongoDB Operator installation and configuration in OpenShift and deployed a standard MongoDB replica set.
For more details about the MongoDB Kubernetes Operator, please visit our docs tutorial page.
Head over to the MongoDB Ops Manager download site and get started today.