Setting up OpenShift 3 on OpenStack

OpenShift running on OpenStack is a popular use case, and we come across this setup quite often. This article explains the steps to set up OpenShift Enterprise 3 on OpenStack. The instructions below make up a quick setup that you can execute one command at a time, or copy into your own script file. For detailed installation instructions and an explanation of each command, refer to the OpenShift Administration Guide.

This article assumes that you have an understanding of OpenShift 3 architecture and that you know OpenShift 3, Docker and Kubernetes concepts such as master, node, pod, docker image, docker container, image streams, builder images, templates, and S2I. This article is not intended to teach OpenShift 3 concepts. It is meant for those who know OpenShift 3 but want the specifics of how to quickly set it up on OpenStack.

Prerequisites

  1. You will need OpenShift Enterprise subscriptions or evals on your Red Hat account. You can request them here.
  2. You will need an OpenStack environment where you can spin up your own VMs.
  3. You will need a RHEL 7 image in your OpenStack environment.

Steps

Spin up host VMs on OpenStack

  1. Let us start by setting up a new security group on OpenStack, opening the ports listed in the figure below. I named it osev3.

[Figure 1: Security group setup]
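If you prefer the CLI over Horizon, the same security group can be created with the openstack client. This is a sketch, not the authoritative port list: the TCP ports below are an assumption, so substitute the ports shown in the figure above.

```shell
# Sketch: create the osev3 security group from the CLI instead of Horizon.
# The port list is an ASSUMPTION -- use the ports shown in the figure above.
SEC_GROUP=osev3
TCP_PORTS="22 80 443 8443 10250"

openstack security group create "$SEC_GROUP" --description "OpenShift v3 hosts"
for port in $TCP_PORTS; do
  # one ingress rule per TCP port
  openstack security group rule create --protocol tcp --dst-port "$port" "$SEC_GROUP"
done
# SDN traffic between nodes (VXLAN) uses UDP 4789
openstack security group rule create --protocol udp --dst-port 4789 "$SEC_GROUP"
```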

  2. We will use three RHEL 7 VMs: one that will run as an OpenShift master and two that will run as OpenShift nodes. The master host will also do the job of a node. I chose hosts with 8GB RAM, 20GB disk space and 4 vCPUs for all three hosts. You can go for a little more disk space for the master. I chose the boot-from-image option and the RHEL 7 image.
  3. Next, go to the Access & Security tab and assign the osev3 security group that we added earlier. Repeat these steps for all three VMs.

I added three hosts: master, node1 and node2.

[Figure 2: Spinning up the VMs]

[Figure 3: Assign the security group]

  4. Next we will assign Floating IPs to these VMs. Navigate to “Access & Security” and select the “Floating IPs” tab. Allocate three floating IPs to your project and assign them to master, node1 and node2.

[Figure 4: Assign a Floating IP]

Make a note of the floating IPs assigned to all the hosts. Now your VMs are ready from OpenStack’s point of view.
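The floating IP steps above can also be scripted with the openstack client. This is a sketch: "public" is an assumed name for your external network, so substitute the name used in your environment.

```shell
# Sketch: allocate a floating IP per host and attach it to the instance.
# "public" is an ASSUMED external network name -- adjust for your cloud.
EXTERNAL_NET=public

for server in master node1 node2; do
  # allocate a floating IP and capture just the address
  FIP=$(openstack floating ip create "$EXTERNAL_NET" -f value -c floating_ip_address)
  # attach it to the instance
  openstack server add floating ip "$server" "$FIP"
  echo "$server -> $FIP"
done
```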

DNS Setup

  1. Choose a domain name for your OpenShift setup and make the entries for your domain in DNS. If your domain is example.com, you would add entries for:

master.example.com to resolve to the Floating IP address assigned to the master

node1.example.com to resolve to the Floating IP address assigned to node1

node2.example.com to resolve to the Floating IP address assigned to node2

Note: Replace example.com with the domain name of your choice everywhere in this document.

  2. Add a wildcard DNS entry that resolves to the IP address of the OpenShift router. In this setup, we will run the OpenShift router on the master, so create a wildcard DNS entry with a low TTL that points to the public IP of the master as shown below:
*.example.com 300 IN A <<master floating IP>>
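Once the records are in place, it is worth confirming they resolve before moving on. A quick check with dig (replace example.com with your own domain):

```shell
# Sanity check: every host record, plus an arbitrary name under the wildcard,
# should resolve before you continue.
DOMAIN=example.com   # replace with your domain

for host in master node1 node2; do
  dig +short "${host}.${DOMAIN}"
done
# any name under the wildcard should return the master's floating IP
dig +short "myapp.${DOMAIN}"
```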

Prepare all the hosts

Before we run the OpenShift installer, each of the three VMs needs to be prepared. This involves registering the hosts using subscription-manager so that the necessary channels can be enabled, then installing the packages needed for OpenShift.

  1. You will need internet access from these hosts. Specifically, you should be able to reach Red Hat Network from them. In my case, I had to add entries to /etc/hosts to be able to reach RHN. Depending on your OpenStack environment, you may NOT need this.
echo "209.132.183.44 xmlrpc.rhn.redhat.com" >> /etc/hosts
echo "23.204.148.218 content-xmlrpc.rhn.redhat.com" >> /etc/hosts
echo "209.132.183.49 subscription.rhn.redhat.com" >> /etc/hosts
echo "209.132.182.33 repository.jboss.org" >> /etc/hosts
echo "209.132.182.63 registry.access.redhat.com" >> /etc/hosts
  2. Next, register the hosts with Red Hat Network by running subscription-manager. You will have to supply your RHN username and password.
subscription-manager register
  3. Find the pool-id for your OpenShift subscriptions by running the following command. Browse through the list to find the pool-id corresponding to OpenShift v3 subscriptions.
subscription-manager list --available
  4. Attach the pool to the host VM, substituting your pool-id in the command below.
subscription-manager attach --pool <<your pool id>>
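If you would rather not eyeball the list, the two steps can be combined by filtering the output. This is a sketch and the pattern is an assumption: subscription names vary by account, so verify that the "OpenShift" match really picks up your entitlement.

```shell
# Sketch: grab the first Pool ID that appears after a subscription whose name
# mentions OpenShift. The "OpenShift" pattern is an ASSUMPTION -- verify the
# match against your own entitlements before attaching.
POOL_ID=$(subscription-manager list --available \
  | awk '/Subscription Name:.*OpenShift/ {found=1}
         found && /Pool ID:/ {print $3; exit}')
echo "Using pool: $POOL_ID"
subscription-manager attach --pool "$POOL_ID"
```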
  5. Enable the following list of channels and disable the ones we don’t need, then run yum update.
subscription-manager repos --disable="*"

subscription-manager repos \
--enable="rhel-7-server-rpms" \
--enable="rhel-7-server-extras-rpms" \
--enable="rhel-7-server-optional-rpms" \
--enable="rhel-server-7-ose-3.0-rpms"

yum update -y
  6. We will change each hostname to match the settings in DNS. For example, the hostname for the master will be changed to master.example.com. OpenStack VMs tend to lose hostname changes when the host restarts, so we will first disable hostname changes during restart. (I gave ‘sed’ scripts where possible instead of opening and editing files.)

Note: Change the HOSTNAME to relevant value for node1 and node2.

# Disable hostname changes when the machine restarts
sed -i.bak -e 's/^ - set_hostname/# - set_hostname/' \
-e 's/^ - update_hostname/# - update_hostname/' \
/etc/cloud/cloud.cfg
 
# environment variable for HOSTNAME
export HOSTNAME=master.example.com
 
# change hostname
hostnamectl set-hostname ${HOSTNAME}
  7. We will now install the packages needed for OpenShift and some tools that will help us as we move on. Mainly, we will install and enable docker.
# install necessary packages
yum -y remove NetworkManager*
yum install -y wget vim-enhanced net-tools bind-utils tmux git iptables-services bridge-utils
yum -y update
yum -y install docker
sed -i.bak -e 's/^OPTIONS/#OPTIONS/' -e "/^#OPTIONS/a OPTIONS='--insecure-registry 172.30.0.0/16'" /etc/sysconfig/docker

systemctl start docker
systemctl enable docker

Establish SSH access between hosts

We will be installing OpenShift from the master host. In order for the installation script to run from the master and be able to reach the nodes, we should ensure that the master host can ssh to itself and to the node hosts. Let us generate an SSH key and copy it to all the hosts.

On master host run the following:

#generate sshkey
ssh-keygen

#copy ssh key - this will allow master to ssh to itself as root.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Append the contents of /root/.ssh/id_rsa.pub from the master to /root/.ssh/authorized_keys on each of the nodes. You may not know the root password of the OpenStack nodes, so you may have to do this step manually.
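Before running the installer, it is worth checking that passwordless ssh actually works from the master to every host (replace example.com with your domain):

```shell
# Verify the master can ssh to itself and both nodes without a password.
DOMAIN=example.com   # replace with your domain

for host in master node1 node2; do
  # BatchMode makes ssh fail fast instead of prompting for a password
  ssh -o BatchMode=yes -o StrictHostKeyChecking=no \
      "root@${host}.${DOMAIN}" hostname \
    || echo "WARNING: passwordless ssh to ${host}.${DOMAIN} is not working"
done
```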

Installing OpenShift using Ansible

We will be installing OpenShift from the master host. Hence, all the commands in this section should be run from the master host.

  1. First we will install ansible from the EPEL repository:
yum -y install \
http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
yum -y --enablerepo=epel install ansible
  2. Next we will clone the openshift-ansible installer from GitHub.
cd ~
git clone https://github.com/openshift/openshift-ansible
cd ~/openshift-ansible
  3. Next we will create the /etc/ansible/hosts file, which provides the ansible playbook with the configuration of the OpenShift environment to be installed. I scripted the file creation with a couple of echo commands, so you can just copy and paste these commands; if you are copy-pasting, watch carefully. We will first set some environment variables, which the commands that follow use to prepare the /etc/ansible/hosts file.
##############################################################################
# DOMAIN = domain name for the openshift environment
# MASTER = master to be added. This script will append the domain name
# NODES_LIST = nodes to be added.
#############################################################################
export DOMAIN=example.com
export MASTER="master"
export NODES_LIST="master node1 node2"

##########################################
# create the /etc/ansible hosts file
###########################################

echo "# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_sudo must be set to true
#ansible_sudo=true

# To deploy origin, change deployment_type to origin
deployment_type=enterprise

# enable htpasswd authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true','kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/openshift-passwd'}]

# host group for masters
[masters]
${MASTER}.${DOMAIN}

# host group for nodes, includes region info
[nodes]" \
> /etc/ansible/hosts

for node in $NODES_LIST
do
echo "${node}.${DOMAIN} openshift_node_labels=\"{'region': 'infra', 'zone': 'default'}\" openshift_public_hostname=${node}.${DOMAIN} openshift_hostname=${node}.${DOMAIN} " >> /etc/ansible/hosts
done

Open the /etc/ansible/hosts file to review the contents.
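Before kicking off the playbook, a quick sanity check confirms that ansible can reach every host in the inventory:

```shell
# Optional sanity check: ansible should be able to ping all inventory hosts
# over ssh before the playbook runs.
INVENTORY=/etc/ansible/hosts
ansible -i "$INVENTORY" all -m ping
```

Every host should report "pong"; if one does not, fix its ssh access before continuing.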

  4. Now we are ready to install OpenShift by running ansible-playbook. Run the following command; it will take a little while.
ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml
  5. Let’s verify that the nodes are all running and ready by running:
oc get nodes

You should see master, node1 and node2 running and in “Ready” state. Also make sure that the region and zone labels are assigned to the nodes.

Note: At the time of writing this blog, there is a known defect where the ansible playbook does not label the nodes by default. You can run oc label commands, substituting labels of your choice, to make sure the nodes are labeled.

oc label node master.example.com region="infra" zone="default"
oc label node node1.example.com region="primary" zone="east"
oc label node node2.example.com region="primary" zone="west"

 

Your OpenShift 3 environment should be up and running now.

Install Docker Registry

We will next install a docker registry in our OpenShift environment. We will set up the registry on the master host, so run the following commands from the master.

  1. First we need to create a service account for the registry. We will name it “registry”:
echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"registry"}}' | oc create -f -
  2. We need to add this service account to the users list of the privileged scc:
oc get scc privileged -o json | sed -e '/"users": \[/a"system:serviceaccount:default:registry",'| oc update scc -f -
  3. We will create a directory for the registry so that the images you push to it are saved on disk.
mkdir -p /mnt/registry
  4. Now we will use the registry service account to create the docker registry in OSE by running the oadm command.
oadm registry --service-account=registry \
--config=/etc/openshift/master/admin.kubeconfig \
--credentials=/etc/openshift/master/openshift-registry.kubeconfig \
--mount-host=/mnt/registry \
--images='openshift3/ose-${component}:v3.0.0.0' \
--selector="region=infra"

Note that the above command uses a selector pointing to region=infra, which means that the registry is deployed on the master that is labeled with region=infra. In your case, if you used different labels, please adjust the above command accordingly. Labels assigned to your nodes can be seen by running oc get nodes.

In a few minutes, your registry should be up and running. If you run oc get pods you should see a pod for registry in running status as shown below:

oc get pods

NAME                      READY     REASON    RESTARTS   AGE
docker-registry-1-mpih1   1/1       Running   1          4h

Install Router

Next we will install a router that runs on the master for this OSE setup. The router is the ingress point for all traffic destined for OpenShift services; it enables you to access your applications from outside.

  1. First we will create a router default certificate and a pem file:
CA=/etc/openshift/master
oadm create-server-cert --signer-cert=$CA/ca.crt \
--signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
--hostnames="*.${DOMAIN}" \
--cert=routerdef.crt --key=routerdef.key

 

cat routerdef.crt routerdef.key $CA/ca.crt > routerdef.router.pem
  2. We will use the default certificate created above to install the router:
oadm router --default-cert=routerdef.router.pem \
--images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
--selector='region=infra' \
--credentials='/etc/openshift/master/openshift-router.kubeconfig'

In a couple of minutes you should see the router in running status. If you run oc get pods you should now see both the registry and router running on the master as shown below:

# oc get pods

NAME                      READY     REASON    RESTARTS   AGE
docker-registry-1-mpih1   1/1       Running   1          4h

router-1-b5wy5            1/1       Running   1          4h

 

Add Image Streams and Templates

The ansible script should have copied a set of image streams for the builder images to the /usr/share/openshift/examples folder on the master host. Run the commands below to add these image streams and templates to your OpenShift environment:

oc create -f \
/usr/share/openshift/examples/image-streams/image-streams-rhel7.json \
-n openshift

oc create -f \
/usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json \
-n openshift

oc create -f \
/usr/share/openshift/examples/db-templates -n openshift

oc create -f \
/usr/share/openshift/examples/quickstart-templates -n openshift

oc create -f \
/usr/share/openshift/examples/xpaas-templates -n openshift
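After running the commands above, you can confirm that the image streams and templates actually landed in the openshift namespace:

```shell
# List what the oc create commands above imported into the shared namespace.
NAMESPACE=openshift
oc get imagestreams -n "$NAMESPACE"
oc get templates -n "$NAMESPACE"
```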

 

Add users

We will now add a couple of users to your OpenShift environment. Run htpasswd once per user, substituting a username and password of your choice.

touch /etc/openshift/openshift-passwd
htpasswd -b /etc/openshift/openshift-passwd <username> <password>
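After adding a user, you can confirm the credentials work by logging in against the master API. This is a sketch: "alice" is a placeholder username, and the master URL assumes the example.com domain used throughout this post.

```shell
# Log in as a user you just added; "alice" is a PLACEHOLDER username and the
# URL assumes the example.com domain from this post.
MASTER_URL=https://master.example.com:8443
oc login "$MASTER_URL" -u alice

# switch back to the cluster admin afterwards
oc login -u system:admin
```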

 

Ready to Go

You are all set now. Access your master URL on port 8443 in a browser (e.g. https://master.example.com:8443). Supply the user credentials that you added in the last step and try creating applications.

Summary

In this article, we looked at the steps to set up a simple OpenShift Enterprise 3 environment with three nodes on OpenStack.


13 Responses to “Setting up OpenShift 3 on OpenStack”

  1. Ruslan Valiyev

    Thanks for the howto!

    Any idea how to fix the error below? In my case “oc get pods” is stuck in “Pending”.

    # oc get pods
    NAME READY REASON RESTARTS AGE
    docker-registry-1-deploy 0/1 Pending 0 44m

    # oc get nodes
    NAME LABELS STATUS
    openshiftmaster.example.com kubernetes.io/hostname=openshiftmaster.example.com Ready
    openshiftnode1.example.com kubernetes.io/hostname=openshiftnode1.example.com Ready
    openshiftnode2.example.com kubernetes.io/hostname=openshiftnode2.example.com Ready

    /var/log/messages:
    Jul 21 18:59:16 openshiftmaster openshift-master: } {OPENSHIFT_DEPLOYMENT_NAME docker-registry-1 } {OPENSHIFT_DEPLOYMENT_NAMESPACE default }] {map[] map[]} [{deployer-token-f8pw8 true /var/run/secrets/kubernetes.io/serviceaccount}] /dev/termination-log IfNotPresent 0xc20c7c1ae0}] Never 0xc20d7531a8 ClusterFirst map[region:infra] deployer false [{deployer-dockercfg-wh92l}]} {Pending [] []}}

    Jul 21 18:59:16 openshiftmaster openshift-master: E0721 18:59:16.943010 5104 factory.go:262] Error scheduling default docker-registry-1-deploy: failed to find fit for pod, Node openshiftnode1.example.com: RegionNode openshiftnode2.example.com: RegionNode openshiftmaster.example.com: Region; retrying

    Jul 21 18:59:16 openshiftmaster openshift-master: I0721 18:59:16.943037 5104 factory.go:361] Backing off 4s for pod docker-registry-1-deploy

    Jul 21 18:59:16 openshiftmaster openshift-master: I0721 18:59:16.943135 5104 request.go:694] POST https://openshiftmaster.example.com:8443/api/v1/namespaces/default/events

    Jul 21 18:59:16 openshiftmaster openshift-master: I0721 18:59:16.943152 5104 request.go:698] -d '{"kind":"Event","apiVersion":"v1","metadata":{"name":"docker-registry-1-deploy.13f304c690efc8b3","namespace":"default","creationTimestamp":null},"involvedObject":{"kind":"Pod","namespace":"default","name":"docker-registry-1-deploy","uid":"87362ebf-2fc9-11e5-8426-005056b7c72f","apiVersion":"v1","resourceVersion":"1820"},"reason":"failedScheduling","message":"Error scheduling: failed to find fit for pod, Node openshiftnode1.example.com: RegionNode openshiftnode2.example.com: RegionNode openshiftmaster.example.com: Region","source":{"component":"scheduler"},"firstTimestamp":"2015-07-21T16:59:16Z","lastTimestamp":"2015-07-21T16:59:16Z","count":1}'

    • Lukas Fryc

      I’m running into the same situation. Haven’t found a solution yet.

      • Lukas Fryc

        Ok I think I found the issue – “Also make sure that the regions and zones labels are assigned to the nodes.”

        First, I deleted registry records:

        oc delete pods/docker-registry-1-deploy
        oc delete dc/docker-registry
        oc delete service/docker-registry

        These are missing commands that the instruction above outlined:

        oc label node master.example.com region=infra zone=default
        oc label node node1.example.com region=primary zone=east
        oc label node node2.example.com region=primary zone=west

        Then, run oadm again:

        oadm registry --service-account=registry
        --config=/etc/openshift/master/admin.kubeconfig
        --credentials=/etc/openshift/master/openshift-registry.kubeconfig
        --mount-host=/mnt/registry
        --images='openshift3/ose-${component}:v3.0.0.0'
        --selector="region=infra"

        I am not sure whether it is 100% correct (especially the deletion step, but I got it working):

        docker-registry-1-70gpv 1/1 Running 0 6m

        • Veer Muchandi

          Thanks Lukas, you are right.

        • Veer Muchandi

          As of now, playbook is not applying the labels. You got to update them manually. I updated the blog.

    • Veer Muchandi

      Currently there is a known defect that the labeling is not working via ansible by default. I should have mentioned that in the blog. Please use
      oc label node openshift-master.example.com region="infra" zone="default"
      oc label node openshift-node1.example.com region="primary" zone="east"
      oc label node openshift-node2.example.com region="primary" zone="west"

      and verify using oc get nodes before setting up registry.

    • Veer Muchandi

      I updated the blog to apply labels manually as playbook is currently not handling it.

    • Nicholas Nachefski

      You are probably missing the --selector="region=infra" in your oadm registry deploy command

  2. AlMcCulloch

    Running a RedHat 7.1 cloud-init image downloaded from RedHat on OpenStack it doesn’t matter what I do I cannot get the image to ignore changing the hostname. I even erased the cloud* packages once booted, modified every parameter I could think of, and yet during the ansible install, dhcp must be called and neutron serves up a domain suffix to my image that I do not want and breaks the installation. It is true my image is on a dhcp subnet within OpenStack, but is there any other way to not allow the hostname to get changed, and ignore what neutron is sending down?

  3. AlMcCulloch

    I mean other than creating a non-dhcp network within OpenStack and creating the VMs by hand etc. I was really hoping to avoid that route

  4. Nicholas Nachefski

    Forcing the hostname changes is not very cloud-like. Instead, i recommend you keep the internal hostnames as they come and just use the public_hostname and public_ip variables in the ansible hosts file.

  5. RK

    From this article I could not understand the value OpenStack is bringing in deploying the OpenShift platform except for using the security and the virtualisation layer? OpenShift is being installed here all manually.

  6. Raymond Arias

    The line deployment_type=enterprise is incorrect for Openshift enterprise 3.5

    its need to be deployment_type=openshift-enterprise
