OpenShift 4: Install Experience

In the previous post, we described our goal of making day-to-day software operations effortless for both operations and development teams. How have we changed our install experience in Red Hat OpenShift to reduce friction and achieve this goal? In this post, we will provide an overview of our new installation tool, its usage of the Kubernetes operator pattern, and how we manage the platform itself as a set of Kubernetes native applications.

Enabling Everyone from Novices to Experts

The OpenShift installer is designed to help users, ranging from novices to experts, create OpenShift clusters in a variety of environments.  By default, the installer acts as an installation wizard, prompting the user only for the values it cannot determine on its own and providing reasonable defaults for everything else.  By asking for the least amount of information required on day 0 for the target platform, the installer leaves the production cluster free to absorb the configuration changes that arise over its lifecycle on day 2 and beyond.

Cluster Lifecycle

To demonstrate, let’s create a highly-available (HA) cluster in a cloud platform via a single command.

$ ./openshift-install-linux-amd64 create cluster
? SSH Public Key /home/user_id/.ssh/libra.pub
? Platform aws
? Region us-west-2
? Base Domain openshiftcorp.com
? Cluster Name prod
? Pull Secret [? for help] *************************
INFO Creating cluster...            
INFO Waiting up to 30m0s for the Kubernetes API... 
INFO API v1.12.4+f391adc up            
INFO Waiting up to 30m0s for the bootstrap-complete event... 
INFO Destroying the bootstrap resources...    
INFO Waiting up to 30m0s for the cluster to initialize... 
INFO Waiting up to 10m0s for the openshift-console route to be created... 
INFO Install complete!              
INFO Run 'export KUBECONFIG=/home/user_id/install/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. 
INFO The cluster is ready when 'oc login -u kubeadmin -p xxxx' succeeds (wait a few minutes). 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.prod.openshiftcorp.com 
INFO Login to the console with user: kubeadmin, password: xxxx-xxxx-xxxx-xxxx

The result of this operation is an HA cluster with three control plane machines and three compute machines.  Both control plane and compute are spread across availability zones in the specified region. The number of compute machines can additionally grow and shrink over the lifetime of the cluster to meet workload demands via dynamic compute capabilities.
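
On supported platforms, compute capacity is represented by MachineSet resources in the openshift-machine-api namespace, so scaling is an ordinary Kubernetes interaction.  As a quick sketch (the MachineSet name below is illustrative and will differ per cluster):

$ oc get machinesets -n openshift-machine-api
$ oc scale machineset prod-worker-us-west-2a --replicas=3 -n openshift-machine-api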

Each machine runs a new, stripped-down operating system, Red Hat Enterprise Linux (RHEL) CoreOS, inspired by our team at CoreOS.  RHEL CoreOS is the immutable container host version of Red Hat Enterprise Linux and features a RHEL kernel with SELinux enabled by default.  It includes the kubelet, which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes.  It also includes a critical first-boot provisioning tool called CoreOS Ignition that enables the cluster to configure the machines.  Every control plane machine in an OpenShift 4 cluster must run RHEL CoreOS.  Operating system updates are delivered as an Atomic OSTree repo embedded within a container image that is rolled out across the cluster by an operator.  The actual operating system changes are made in-place on each machine as an atomic operation using rpm-ostree.  Together these technologies enable OpenShift to manage the operating system like any other application on the cluster, keeping the entire platform up to date via in-place upgrades and reducing the burden on operations teams.
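
If you want to peek at this on a running cluster, the operating system state of a node can be inspected directly; a quick sketch, assuming cluster-admin access and a node name taken from 'oc get nodes':

$ oc get nodes
$ oc debug node/<node-name>
# chroot /host rpm-ostree status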

OpenShift uses operators to manage every aspect of the cluster.  This includes operators that manage essential Kubernetes project components like the API server, scheduler, and controller manager.  Additional operators for components like the cluster-autoscaler, cluster-monitoring, web console, dns, ingress, networking, node-tuning, and authentication are included to provide management of the entire platform.  Each operator exposes a custom resource API to define the desired state, observe the status of its rollout, and diagnose any issues that may occur.  Using the operator pattern simplifies operations and avoids configuration drift by continually reconciling the observed and desired states.  Finally, the operator lifecycle manager component provides a framework for administrators to manage additional Kubernetes-native applications that are optionally integrated into the cluster as a day-2 task.

If the cluster is no longer needed, it is easy to destroy the cluster and release associated resources via a single command:

$ ./openshift-install-linux-amd64 destroy cluster

Automation Boundaries

OpenShift 4 aims to deliver the automation experience of a native public cloud container platform while retaining the flexibility of a multi-cloud, enterprise-class solution.

On supported platforms, the installer is capable of provisioning the underlying infrastructure for the cluster.  The installer programmatically creates all portions of the networking, machines, and operating systems required to support the cluster.  This is called the Installer Provisioned Infrastructure (IPI) pattern.  Think of it as best-practice reference architecture implemented in code.  It is recommended that most users make use of this functionality to avoid having to provision their own infrastructure.  The installer will create and destroy the infrastructure components it needs to be successful over the life of the cluster.
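
If you would like to review or adjust those choices before anything is provisioned, the installer can first emit its configuration as an editable asset; a sketch of that optional flow, assuming an asset directory named install:

$ ./openshift-install-linux-amd64 create install-config --dir=install
$ vi install/install-config.yaml
$ ./openshift-install-linux-amd64 create cluster --dir=install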

For other platforms, or in scenarios where installer-provisioned infrastructure would be incompatible, the installer can stop short of creating the infrastructure and allow the platform administrator to provision their own using the cluster assets generated by the install tool.  This pattern is called User Provisioned Infrastructure (UPI).  Once the infrastructure has been created, OpenShift 4 is installed on top of it, maintaining its ability to support automated operations and over-the-air platform updates.

Installation Process

Instead of treating installation as an isolated event, we handle it in the same manner as an automatic update: we think of installation as a specialization of automatic upgrades, where we upgrade from nothing to a running OpenShift cluster.  OpenShift is unique in that its management extends all the way down to the operating system itself.  Every machine boots with a configuration that references resources hosted by the cluster it is joining, which allows the cluster to manage itself and to migrate resources as updates are applied.  A downside to this approach is that new clusters have no way of starting without external help, because every machine in the to-be-created cluster is waiting on the to-be-created cluster.

OpenShift breaks this dependency loop using a temporary bootstrap machine booted with a concrete Ignition configuration, which describes how to create the cluster.  The machine acts as a temporary control plane whose job is to launch the production control plane.

The main assets generated by the installer are the Ignition configs for the bootstrap, control plane, and compute machines.  Given these configuration files (and properly configured infrastructure), it is possible to start an OpenShift cluster.
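
For user provisioned infrastructure, these assets can be generated explicitly; a sketch, assuming an asset directory named install:

$ ./openshift-install-linux-amd64 create ignition-configs --dir=install
$ ls install
auth  bootstrap.ign  master.ign  metadata.json  worker.ign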

The bootstrapping process for the cluster looks like the following:

  1. The bootstrap machine boots and starts hosting the remote resources required for control plane machines to boot.
  2. The control plane machines fetch the remote resources from the bootstrap machine and finish booting.
  3. The control plane machines form an etcd cluster.
  4. The bootstrap machine starts a temporary Kubernetes control plane using the newly-created etcd cluster.
  5. The temporary control plane schedules the production control plane to the control plane machines.
  6. The temporary control plane shuts down, yielding to the production control plane.
  7. The bootstrap machine injects OpenShift-specific components into the newly formed control plane.
  8. The installer then tears down the bootstrap resources.

The result of this bootstrapping process is a fully running OpenShift cluster.  The cluster will then download and configure the remaining components needed for day-to-day operation via the cluster version operator including the automated creation of compute machines on supported platforms.
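
When the infrastructure is provisioned by the user rather than by the installer, the same binary can wait on these milestones explicitly; a sketch, reusing an asset directory named install:

$ ./openshift-install-linux-amd64 wait-for bootstrap-complete --dir=install
$ ./openshift-install-linux-amd64 wait-for install-complete --dir=install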

Cluster Version Operator

The desired state for the cluster is described using the ClusterVersion custom resource definition.  It is acted upon by the cluster version operator, which is the root-level operator responsible for ensuring that the user’s desired version matches the cluster’s actual version.


An overview of the cluster version can be inspected as follows:

$ oc describe clusterversion
Name:         version
API Version:  config.openshift.io/v1
Kind:         ClusterVersion
Spec:
  Channel:     stable-4.0
  Cluster ID:  xxxxxx-xxxx-xxxx-xxxxxxxxx
  Upstream:    https://api.openshift.com/api/upgrades_info/v1/graph
Status:
  Available Updates:  <nil>
  Conditions:
    Last Transition Time:  2019-02-26T15:39:17Z
    Message:               Done applying 4.0.0-0.alpha-2019-02-26-133550
    Status:                True
    Type:                  Available
    Last Transition Time:  2019-02-27T05:29:02Z
    Status:                False
    Type:                  Failing
    Last Transition Time:  2019-02-26T15:39:17Z
    Message:               Cluster version is 4.0.0-0.alpha-2019-02-26-133550
    Status:                False
    Type:                  Progressing
  Desired:
    Image: registry.svc.ci.openshift.org/openshift/origin-release@sha256:xyz
    Version:  4.0.0-0.alpha-2019-02-26-133550
  History:
    Completion Time:    2019-02-26T15:39:17Z
    Image: registry.svc.ci.openshift.org/openshift/origin-release@sha256:xyz
    Started Time:       2019-02-26T15:18:14Z
    State:              Completed
    Version:            4.0.0-0.alpha-2019-02-26-133550
  Observed Generation:  1
  Version Hash:         NGEuVbQjnaQ=
Events:                 <none>

Each cluster is associated with an update channel which describes the set of available upgrade paths for the cluster from its current state.  Updates are delivered as a container image, known as the release image. This release image includes all the required components that make up the platform including core Kubernetes, add-on components, and operating system updates for RHEL CoreOS machines.
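
The current version, channel, and any available update edges can also be inspected and acted on from the CLI; a sketch (the target version is illustrative):

$ oc adm upgrade
$ oc adm upgrade --to=<version>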

At installation time, the installer seeds the ClusterVersion resource with a desired release image.  The installer is not actually responsible for the final installation of the production control plane; instead, the cluster version operator ensures the update is applied.  This is how we are able to treat installation as a specialization of upgrade.  Unlike many other installation patterns, the cluster version operator is always reconciling the observed state of the cluster with the desired state.

To understand which components and images make up a given release, execute the following command (output abridged), providing the desired release image as input:

$ oc adm release info registry.svc.ci.openshift.org/openshift/origin-release@sha256:xyz

Name:      4.0.0-0.alpha-2019-02-26-133550
Manifests: 251

Release Metadata:
  Version:  4.0.0-0.alpha-2019-02-26-133550
  Upgrades: <none>

Component Versions:
  Kubernetes 1.12.4

Images:
  NAME                                          DIGEST
  aws-machine-controllers                       sha256:xyz
  cli                                           sha256:xyz
  cloud-credential-operator                     sha256:xyz
  cluster-authentication-operator               sha256:xyz
  cluster-autoscaler                            sha256:xyz
  cluster-autoscaler-operator                   sha256:xyz
  cluster-bootstrap                             sha256:xyz
  cluster-config-operator                       sha256:xyz
  cluster-dns-operator                          sha256:xyz
  cluster-image-registry-operator               sha256:xyz
  cluster-ingress-operator                      sha256:xyz
  cluster-kube-apiserver-operator               sha256:xyz
  cluster-kube-controller-manager-operator      sha256:xyz
  cluster-kube-scheduler-operator               sha256:xyz
  cluster-machine-approver                      sha256:xyz
  cluster-monitoring-operator                   sha256:xyz
  cluster-network-operator                      sha256:xyz
  cluster-node-tuned                            sha256:xyz
  cluster-node-tuning-operator                  sha256:xyz
  cluster-openshift-apiserver-operator          sha256:xyz
  cluster-openshift-controller-manager-operator sha256:xyz
  cluster-samples-operator                      sha256:xyz
  cluster-storage-operator                      sha256:xyz
  cluster-version-operator                      sha256:xyz
  console                                       sha256:xyz
  console-operator                              sha256:xyz
  container-networking-plugins-supported        sha256:xyz
  container-networking-plugins-unsupported      sha256:xyz
  coredns                                       sha256:xyz
  etcd                                          sha256:xyz
  grafana                                       sha256:xyz
  haproxy-router                                sha256:xyz
  k8s-prometheus-adapter                        sha256:xyz
  kube-rbac-proxy                               sha256:xyz
  kube-state-metrics                            sha256:xyz
  machine-api-operator                          sha256:xyz
  machine-config-controller                     sha256:xyz
  machine-config-daemon                         sha256:xyz
  machine-config-operator                       sha256:xyz
  machine-config-server                         sha256:xyz
  machine-os-content                            sha256:xyz
  multus-cni                                    sha256:xyz
  oauth-proxy                                   sha256:xyz
  openstack-machine-controllers                 sha256:xyz
  operator-lifecycle-manager                    sha256:xyz
  operator-marketplace                          sha256:xyz
  operator-registry                             sha256:xyz
  prometheus                                    sha256:xyz
  service-ca                                    sha256:xyz
  telemeter                                     sha256:xyz

The cluster version operator reads the operator manifests included in the release image to roll out each operator in a controlled fashion; each operator, in turn, is responsible for upgrading its managed components.

By consolidating all of the content into a single payload, we intend to make it easy to mirror images to support disconnected environments.
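
As a sketch of what that can look like with the release tooling (the mirror registry below is a placeholder):

$ oc adm release mirror \
    --from=registry.svc.ci.openshift.org/openshift/origin-release@sha256:xyz \
    --to=mirror.example.com/openshift/release \
    --to-release-image=mirror.example.com/openshift/release:4.0.0-0.alpha-2019-02-26-133550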

Cluster Operators

Individual operators are just Kubernetes applications.  An operator executes in a namespace configured with a set of service accounts and roles that restrict the operator’s access to its managed set of resources.  The operator watches custom resources, secrets, or configmaps and drives the additional Kubernetes workload controllers, such as deployments or daemon sets, needed to provide its function in the cluster.  Operators are more than just controllers in that the functions they perform tend to be more complex, such as rotating certificates.

Each of the core operators in the cluster syncs its observed state to a ClusterOperator resource.  The cluster version operator uses this information to understand whether a particular release payload has fully propagated across the cluster.  If transient errors occur, the ClusterOperator resource includes conditions and messages that describe the nature of the problem and inform the platform administrator of any appropriate course of action.


The status of each operator can be inspected as follows:

$ oc get clusteroperators
NAME                                  VERSION AVAILABLE PROGRESSING FAILING   SINCE
cluster-autoscaler                              True False False 14h
cluster-storage-operator                        True False False 14h
console                                         True False False 152m
dns                                             True False False 14h
image-registry                                  True False False 14h
ingress                                         True False False 14h
kube-apiserver                                  True False False 4h29m
kube-controller-manager                         True False False 152m
kube-scheduler                                  True False False 152m
machine-api                                     True False False 14h
machine-config                                  True False False 6h44m
marketplace-operator                            True False False 14h
monitoring                                      True False False 29m
network                                         True False False 26m
node-tuning                                     True False False 14h
openshift-apiserver                             True False False 152m
openshift-authentication                        True False False 152m
openshift-cloud-credential-operator             True False False 14h
openshift-controller-manager                    True False False 152m
openshift-samples                               True False False 14h
operator-lifecycle-manager                      True False False 14h
service-ca                                      True False False 152m
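
To drill into a single entry and see the conditions and messages described above, describe the individual resource:

$ oc describe clusteroperator monitoring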

Global Cluster Configuration

Platform administrators are able to configure the cluster by editing a set of custom resources, grouped by function.  These resources are read by various operators in the cluster, which reconcile the desired state with the observed state.  API definitions are available to configure the API server, authentication, builds, console, networking, DNS, OAuth, and more.  Since configuring the cluster is no different from interacting with any other Kubernetes resource, this model works well for supporting GitOps patterns.
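
One way to discover this configuration surface is to list the resources in the config.openshift.io API group; each is a cluster-scoped singleton, typically named cluster:

$ oc api-resources --api-group=config.openshift.io
$ oc edit oauth cluster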

To illustrate this, let’s configure the OpenShift OAuth server to work with a desired identity provider.  Multiple identity providers are supported in OpenShift, but one of the simplest to configure is htpasswd because it requires no external prerequisites.

Let’s create a secret in the openshift-config namespace that contains our desired htpasswd file data for the user ‘test’ with password ‘test’.  

$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: htpass-secret
  namespace: openshift-config
data:
  htpasswd: dGVzdDokYXByMSRxa0Zvb203dCRSWFIuNHhTV0lhL3h6dkRRUUFFUG8w
EOF
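
The base64 data above is simply an htpasswd file.  As a sketch of how the same secret could be produced from scratch (the file name users.htpasswd is arbitrary):

$ htpasswd -c -b users.htpasswd test test
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config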

Next, let’s create the OAuth configuration we want the operator to realize by enabling the HTPasswd identity provider and referencing the htpass-secret created previously.

$ oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpassidp
    challenge: true
    login: true
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF

The operators running in the cluster observe the desired authentication changes and safely roll the change out across the cluster.  Moments later, when users log in to the cluster using the CLI or web console, or interact with any cluster service that integrates with the OAuth provider, such as Prometheus or Grafana, they authenticate with their distinct identity and see the content mapped to their role and permissions.
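
For example, the new identity works immediately from the CLI (the API URL follows the cluster name and base domain chosen at install time):

$ oc login -u test -p test https://api.prod.openshiftcorp.com:6443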

Visiting the web console, you now see two identity providers to choose from when logging in.  Choose the htpassidp provider and log in as user test with password test.

You now see a role-based view of the set of resources the new test user has access to by default.

Want to see it now?

The friction involved in installing an OpenShift 4 cluster has been reduced to a single command that takes you from nothing to provisioned infrastructure running a fully automated cluster.  Where installer-provisioned infrastructure is incompatible, we have described how RHEL CoreOS technologies enable a cluster to bootstrap itself dynamically.  By treating the install as a specialization of an automated upgrade, we have shown how our use of the Kubernetes operator pattern manages the lifecycle of the operating system, the applications, and the cluster itself.

If you are interested in trying this out yourself, we would love for you to explore a sneak peek of OpenShift 4.  It’s easy to get started via the following steps:

  1. Visit try.openshift.com and click on “Get Started”.
  2. Log in or create a Red Hat account and follow the instructions for setting up your first cluster.

Once you’ve successfully installed, check out the OpenShift Training walkthrough for a deep dive on the systems and concepts that make OpenShift 4 the easiest way to run Kubernetes.

We hope you are as excited about the future of OpenShift as we are. In a future post, we will describe our Kubernetes deployment topology managed by our operators, and how we are working together with upstream Kubernetes to support a kernel for platform extension.
