How to Install OpenShift Enterprise PaaS – Part 1

OpenShift Enterprise Changes Software Development

Platform as a Service is changing the way developers approach developing software. Developers typically use a local sandbox with their preferred application server and only deploy locally on that instance. Developers typically start JBoss locally using the startup.sh command and drop their .war or .ear file in the deployment directory and they are done. Developers have a hard time understanding why deploying to the production infrastructure is such a time consuming process.

System Administrators understand the complexity of not only deploying the code, but procuring, provisioning and maintaining a production level system. They need to stay up to date on the latest security patches and errata, ensure the firewall is properly configured, maintain a consistent and reliable backup and restore plan, monitor the application and servers for CPU load, disk IO, HTTP requests, etc.

OpenShift Enterprise provides developers and IT organizations an auto-scaling cloud application platform for quickly deploying new applications on secure and scalable resources with minimal configuration and management headaches. This means increased developer productivity and a faster pace in which IT can support innovation.

This blog series will walk you through the process of installing and configuring an OpenShift Enterprise environment.

Assumptions: This blog post series assumes that you are a developer or system administrator and are comfortable with installing Red Hat Enterprise Linux. I also assume that you have valid entitlements / subscriptions to Red Hat Enterprise Linux as well as OpenShift Enterprise.

Enterprise PaaS is Infrastructure Agnostic

The great thing about OpenShift Enterprise is that we are infrastructure agnostic. You can run OpenShift on bare metal, virtualized instances, or on public/private cloud instances. The only thing that is required is Red Hat Enterprise Linux as the underlying operating system. We require this in order to take advantage of SELinux and other enterprise features so that you can ensure your installation is rock solid and secure.

What does this mean? This means that in order to take advantage of OpenShift Enterprise, you can use any existing resources that you have in your hardware pool today. It doesn’t matter if your infrastructure is based on EC2, VMware, RHEV, Rackspace, OpenStack, CloudStack, or even bare metal as we run on top of any Red Hat Enterprise Linux operating system as long as the architecture is x86_64.

OpenShift Enterprise IaaS requirements

Looking for a shortcut to the Install Process?

In this blog series, we are going to go into the details of installing and configuring all of the components required for OpenShift Enterprise. We will be installing and configuring BIND, MongoDB, DHCP, ActiveMQ, MCollective, and other vital pieces to OpenShift. Doing this manually will give you a better understanding of how all of the components of OpenShift Enterprise work together to create a complete solution.

That being said, once you have a solid understanding of all of the moving pieces, you will probably want to take advantage of our kickstart script, which performs all of the functions in the administration portion of this series on your behalf. This script allows you to create complete OpenShift Enterprise environments in a matter of minutes. We will not, however, be using the kickstart script as part of this blog series.

The kickstart script is located at:
https://install.openshift.com/

When using the kickstart script, be sure to edit it to use the correct Red Hat subscriptions. Take a look at the script header for full instructions.

OpenShift Architecture Overview

OpenShift Enterprise enables you to create, deploy and manage applications within a private or public cloud. It provides disk space, CPU resources, memory, network connectivity, and application servers. Depending on the type of application being deployed, a template file system layout is provided (for example, PHP, Python, and Ruby/Rails).

The two basic functional units of the platform are the Broker, which provides the interface, and the Node, which provides application frameworks.

The Broker is the single point of contact for all application management activities. It is responsible for managing user logins, DNS, application state, and general orchestration of the application. Consumers don’t contact the broker directly; instead they use the Web console, CLI tools, RESTful web services, or an IDE to interact with the Broker.

[Figure: OpenShift Enterprise Broker with DNS (BIND), MongoDB, and LDAP]

Nodes provide the actual functionality necessary to run the user applications. We currently have many language cartridges to support JBoss, PHP, Ruby, etc., as well as many DB cartridges such as PostgreSQL and MySQL.

[Figure: OpenShift Enterprise Nodes and the number of gears per node]

System Resources and Application Containers

The system resources and security containers provided by the platform are gears and nodes.

  • Gear: Gears provide a resource-constrained container to run one or more cartridges. They limit the amount of RAM and disk space available to a cartridge.

  • Node: To enable us to share resources, multiple gears run on a single physical or virtual machine. We refer to this machine as a node. Gears are generally over-allocated on nodes since not all applications are active at the same time.

[Figure: OpenShift Enterprise uses SELinux for node segmentation]

[Figure: OpenShift Enterprise gears reside on a node]

Communication between the Broker and Node hosts

ActiveMQ is a fully open source messaging service that is available for use across many different programming languages and environments. OpenShift Enterprise makes use of this technology to handle communications between the broker host and the node host(s). OpenShift Enterprise also makes use of the popular MCollective messaging client to both publish and consume messages.

[Figure: OpenShift Enterprise Broker and Node communication]

Now that we have the background of the overall architecture, let’s start installing the pieces we need to make this all run in your environment.

Install RHEL Basic Server

OpenShift Enterprise is currently supported on Red Hat Enterprise Linux 6.3 x86_64. If you need to download this version of the operating system, head on over to the customer support portal and download the 6.3 x86_64 .iso image.

Once you have the .iso image downloaded, go ahead and start the installation on your hardware or virtual machine.

[Screenshot: Red Hat Enterprise Linux installation welcome screen]

Choose the storage system that you will be using for OpenShift Enterprise installation and click next.

[Screenshot: Red Hat Enterprise Linux storage selection screen]

You will then need to select the timezone that your system will be using.

[Screenshot: Red Hat Enterprise Linux timezone selection screen]

You will then be presented with a dialog box that will allow you to choose the installation type that you would like to perform. At the time of this writing, we suggest that you select and install a Basic Server configuration.

[Screenshot: Red Hat Enterprise Linux Basic Server installation type screen]

Register your installation with Red Hat

Note: This section assumes that you have your network configured and working properly to ensure communication with the Red Hat servers.

If you find that your default ethernet device is not started, run the following command (this assumes your device is eth0):

# ifup eth0

You may also want to ensure that eth0 is configured to start on boot by editing the /etc/sysconfig/network-scripts/ifcfg-eth0 file:

DEVICE=eth0
HWADDR=XX:XX:XX:XX:XX:XX
TYPE=Ethernet
UUID=xxxxxxxxxxxxxxxxxxxxxxx
ONBOOT=yes
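As a quick sanity check, you can grep the file for the ONBOOT setting. This is just a sketch and assumes the stock file path shown above:

```shell
# Quick check (sketch): confirm ONBOOT=yes is set so eth0 comes up at
# boot; grep's -s flag keeps it quiet if the file does not exist.
cfg=/etc/sysconfig/network-scripts/ifcfg-eth0
if grep -qs '^ONBOOT=yes' "$cfg"; then
  echo "eth0 is set to start on boot"
else
  echo "check $cfg: ONBOOT=yes not found"
fi
```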

Step 1: Register your system with Red Hat

In order to update to newer packages, and to download the OpenShift Enterprise software, your system needs to be registered with Red Hat so that it has access to the appropriate software channels. To register your system, issue the following command and register using your OpenShift Enterprise subscription.

# subscription-manager register

You will then be prompted for your authentication details.

You will also need to ensure that your system is configured and is subscribed to your Red Hat Enterprise Linux entitlement.

Step 2: Attach your OpenShift Enterprise Subscription

Once you have registered your system, you will need to attach the OpenShift Enterprise Subscription. In order to list all of your available subscriptions, issue the following:

# subscription-manager list --available

Find your OpenShift Enterprise Subscription and make a note of the pool id. You can then attach the correct subscription by typing:

# subscription-manager subscribe --pool [Pool ID from the previous command]

Note: In the above command I have used the command line subscription-manager tool to perform the registration process. If you are more comfortable using a graphical interface, run:

# subscription-manager-gui

[Screenshot: subscription-manager-gui confirmation screen]
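Once the subscription is attached, it is worth confirming that it shows up as consumed. The snippet below is a hedged sketch: it is guarded so it does nothing on machines without subscription-manager, and the exact output wording varies by version:

```shell
# Verify the attach worked by listing consumed subscriptions and searching
# for the OpenShift entitlement.
if command -v subscription-manager >/dev/null 2>&1; then
  subscription-manager list --consumed | grep -i openshift \
    || echo "OpenShift subscription not found; re-check the pool id"
fi
```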

Also, take note of the yum repositories that you are now able to install packages from:

# yum repolist

You should see the following repositories available for installing software. The repositories may differ on your system based upon the type of OpenShift Enterprise Subscription that you have purchased.

repo id                                repo name                                                            
jb-eap-5-for-rhel-6-server-rpms        JBoss Enterprise Application Platform 5 (RHEL 6 Server) (RPMs)
jb-eap-6-for-rhel-6-server-rpms        JBoss Enterprise Application Platform 6 (RHEL 6 Server) (RPMs)
jb-ews-1-for-rhel-6-server-rpms        JBoss Enterprise Web Server 1 (RHEL 6 Server) (RPMs)
jb-ews-2-for-rhel-6-server-rpms        JBoss Enterprise Web Server 2 (RHEL 6 Server) (RPMs) 
rhel-server-ose-infra-6-rpms           Red Hat OpenShift Enterprise Infrastructure (RPMs)
rhel-server-ose-jbosseap-6-rpms        Red Hat OpenShift Enterprise JBoss EAP add-on (RPMs)
rhel-server-ose-node-6-rpms            Red Hat OpenShift Enterprise Application Node (RPMs)
rhel-server-ose-rhc-6-rpms             Red Hat OpenShift Enterprise Client Tools (RPMs)

Update Operating System with latest packages and set clock

Step 1: Log in to the RHEL server and update packages

We need to update the operating system to have all of the latest packages that may be in the yum repository for RHEL Server. This is important to ensure that you have a recent update to the SELinux packages that OpenShift Enterprise relies on. In order to update your system, issue the following command:

# yum update

Note: Depending on your connection and the speed of your broker host, this update may take several minutes.

Step 2: Configuring the clock to avoid clock skew

OpenShift Enterprise requires NTP to synchronize the system and hardware clocks. This synchronization is necessary for communication between the broker and node hosts; if the clocks are too far out of synchronization, MCollective will drop messages. Every MCollective request (discussed in a later post) includes a time stamp, provided by the sending host's clock. If a sender's clock is substantially behind a recipient's clock, the recipient drops the message. This is often referred to as clock skew and is a common problem that users encounter when they fail to sync all of the system clocks. Please note that you can use any NTP server you want, as long as all of your machines are syncing to the same time source.

# ntpdate clock.redhat.com
# chkconfig ntpd on
# service ntpd start
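To confirm that ntpd is actually talking to its time sources, you can query the peer list. This is an optional check; ntpq ships with the ntp package used above, and the snippet is guarded so it is safe to paste on machines without it:

```shell
# List the NTP peers ntpd is using; a * prefix marks the currently
# selected time source.
if command -v ntpq >/dev/null 2>&1; then
  ntpq -p || true
fi
```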

Install and configure the OpenShift Enterprise broker

At this point, you should have a Red Hat Enterprise Linux 6 server installed and configured. This first host will be called the broker host.

The broker handles the creation and management of user applications, including using an authentication service to authenticate users, as well as communication with appropriate nodes. The nodes run the user applications in contained environments called gears. The broker queries and controls nodes using a messaging service.

Step 1: Installing the BIND DNS Server

In order for OpenShift Enterprise to work correctly, you will need to configure BIND so that you have a DNS server set up. In a production deployment, users will probably have an existing DNS infrastructure in place. However, for the purpose of this blog series, we need to install and configure our own DNS server so that name resolution works properly. Primarily, we will be using name resolution for communication between our broker and node hosts, as well as for dynamically updating our DNS server to resolve gear application names when we start creating application gears. These gear application names will be the canonical URLs that end users visit to bring up the developers' web applications.

# yum install bind bind-utils

The official OpenShift documentation suggests that you set an environment variable for the domain name that you will be using to facilitate faster configuration of BIND. Let’s follow the suggested route for this blog post by issuing the following command:

# domain=example.com

Step 2: Create DNSSEC key file

DNSSEC, which stands for DNS Security Extensions, is a method by which DNS servers can verify that DNS data is coming from the correct place. You create a private/public key pair to determine the authenticity of the source domain name server. In order to implement DNSSEC on our new PaaS, we need to create a key file, which we will store in /var/named. For convenience, set the $keyfile variable now to the location of this key file:

# keyfile=/var/named/${domain}.key
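As a quick sanity check, with domain set to example.com the keyfile variable expands as follows:

```shell
# Reproduce the two variable assignments from above and show the result.
domain=example.com
keyfile=/var/named/${domain}.key
echo "$keyfile"   # prints /var/named/example.com.key
```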

Create a DNSSEC key pair and store the private key in a variable named $KEY by using the following commands:

# cd /var/named
# dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom ${domain}
# KEY="$(grep Key: K${domain}*.private | cut -d ' ' -f 2)"
# cd -
# rndc-confgen -a -r /dev/urandom

Verify that the key was created properly by viewing the contents of the key variable:

# echo $KEY
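If $KEY comes back empty, it helps to know what the pipeline in the previous step is doing: the generated .private file contains a line of the form "Key: <secret>", and the grep/cut pair extracts the second field. A small illustration with a made-up value (not a real key):

```shell
# Simulate the "Key:" line found in the K<domain>*.private file
# (the secret shown here is fake, for illustration only).
line='Key: bm90LWEtcmVhbC1zZWNyZXQ='
secret=$(echo "$line" | cut -d ' ' -f 2)
echo "$secret"   # prints bm90LWEtcmVhbC1zZWNyZXQ=
```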

Configure the ownership, permissions, and SELinux context for the key we created:

# restorecon -v /etc/rndc.* /etc/named.*
# chown -v root:named /etc/rndc.key
# chmod -v 640 /etc/rndc.key

Step 3: Creating the forwarders.conf configuration file for host name resolution

The DNS forwarding facility of BIND can be used to create a large site-wide cache on a few servers, reducing traffic over links to external nameservers. It can also be used to allow queries by servers that do not have direct access to the Internet, but wish to look up exterior names anyway. Forwarding occurs only on those queries for which the server is not authoritative and does not have the answer in its cache.

Create a forwarders.conf file with the following commands:

The forwarders.conf file is also attached to the blog post under the attachment section.

# echo "forwarders { 8.8.8.8; 8.8.4.4; } ;" >> /var/named/forwarders.conf
# restorecon -v /var/named/forwarders.conf
# chmod -v 755 /var/named/forwarders.conf

Step 4: Configuring subdomain resolution and creating an initial DNS database

To ensure that we are starting with a clean /var/named/dynamic directory, let’s remove this directory if it exists:

# rm -rvf /var/named/dynamic
# mkdir -vp /var/named/dynamic

Issue the following command to create the ${domain}.db file (before running this command, verify that the domain variable you set earlier in this post is available to your current session):

The example.com.db file is also attached to the blog post under the attachment section.

cat <<EOF > /var/named/dynamic/${domain}.db
\$ORIGIN .
\$TTL 1 ; 1 seconds (for testing only)
${domain}       IN SOA  ns1.${domain}. hostmaster.${domain}. (
            2011112904 ; serial
            60         ; refresh (1 minute)
            15         ; retry (15 seconds)
            1800       ; expire (30 minutes)
            10         ; minimum (10 seconds)
            )
        NS  ns1.${domain}.
        MX  10 mail.${domain}.
\$ORIGIN ${domain}.
ns1         A   127.0.0.1
EOF

Once you have run the above command, cat the contents of the file to ensure that it was created successfully:

# cat /var/named/dynamic/${domain}.db

You should see the following output:

$ORIGIN .
$TTL 1 ; 1 seconds (for testing only)
example.com             IN SOA  ns1.example.com. hostmaster.example.com. (
                                2011112904 ; serial
                                60         ; refresh (1 minute)
                                15         ; retry (15 seconds)
                                1800       ; expire (30 minutes)
                                10         ; minimum (10 seconds)
                                )
                        NS      ns1.example.com.
                        MX      10 mail.example.com.
$ORIGIN example.com.
ns1                     A       127.0.0.1

Now we need to install the DNSSEC key for our domain:

cat <<EOF > /var/named/${domain}.key
key ${domain} {
  algorithm HMAC-MD5;
  secret "${KEY}";
};
EOF

Set the correct permissions and context:

# chown -Rv named:named /var/named
# restorecon -rv /var/named
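You can spot-check the result with ls -lZ; on a default SELinux policy, files under /var/named typically carry the named_zone_t type. The check is guarded so it is safe to paste anywhere:

```shell
# Spot-check ownership and SELinux contexts after the chown/restorecon.
if [ -d /var/named ]; then
  ls -lZ /var/named
fi
```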

Step 5: Creating the named configuration file

We also need to create our named.conf file. Before running the following command, verify that the domain variable you set earlier in this post is available to your current session.

The /etc/named.conf file is also attached to the blog post under the attachment section.

cat <<EOF > /etc/named.conf
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
    listen-on port 53 { any; };
    directory   "/var/named";
    dump-file   "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query     { any; };
    recursion yes;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";

    // set forwarding to the next nearest server (from DHCP response)
    forward only;
        include "forwarders.conf";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

// use the default rndc key
include "/etc/rndc.key";

controls {
    inet 127.0.0.1 port 953
    allow { 127.0.0.1; } keys { "rndc-key"; };
};

include "/etc/named.rfc1912.zones";

include "${domain}.key";

zone "${domain}" IN {
    type master;
    file "dynamic/${domain}.db";
    allow-update { key ${domain} ; } ;
};
EOF

And finally, set the permissions for the new configuration file that we just created:

# chown -v root:named /etc/named.conf
# restorecon /etc/named.conf
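Before starting the service in the next step, the bind package's named-checkconf and named-checkzone tools can catch syntax errors in what we just wrote. A guarded sketch (it does nothing if bind is not installed):

```shell
# Validate the named configuration and the zone file before starting the
# service; both tools ship with the bind package installed earlier.
if command -v named-checkconf >/dev/null 2>&1 && [ -f /etc/named.conf ]; then
  named-checkconf /etc/named.conf && echo "named.conf syntax OK"
  named-checkzone "${domain:-example.com}" \
    "/var/named/dynamic/${domain:-example.com}.db" || true
fi
```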

Step 6: Configuring host name resolution to use the new BIND server

We need to update our resolv.conf file to use our local named service that we just installed and configured. Open up your /etc/resolv.conf file and add the following entry as the first nameserver entry in the file:

nameserver 127.0.0.1

We also need to make sure that named starts on boot and that the firewall is configured to pass through DNS traffic:

# lokkit --service=dns
# chkconfig named on

Step 7: Start named service

We are finally ready to start up our new DNS server and add some updates.

# service named start

You should see a confirmation message that the service was started correctly. If you do not see an OK message, I would suggest running through the above steps again and ensuring that the output of each command matches the contents of this exercise. If you are still having trouble after trying the steps again, use the attached files as you may have a copy and paste error.

Step 8: Adding entries using nsupdate

Now that our BIND server is configured and started, we need to add a record for our broker node to BIND’s database. To accomplish this task, we will use the nsupdate command, which opens an interactive shell where we can perform commands:

# nsupdate -k ${keyfile}
> server 127.0.0.1
> update delete broker.example.com A
> update add broker.example.com 180 A <your broker IP address>
> send

Press control-D to exit from the interactive session.
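If you prefer not to type the commands interactively, nsupdate also accepts a command file. A sketch, with 10.0.0.1 standing in for your broker's IP address:

```shell
# Write the same update commands to a file; nsupdate reads them in order.
cat > /tmp/broker-update.txt <<'EOF'
server 127.0.0.1
update delete broker.example.com A
update add broker.example.com 180 A 10.0.0.1
send
EOF
# Then run it with the same key file as before:
# nsupdate -k ${keyfile} /tmp/broker-update.txt
```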

In order to verify that you have successfully added broker.example.com to your DNS server, you can ping the host:

# ping broker.example.com

It should resolve to the local machine that you are working on. You can also perform a dig request using the following command:

# dig @127.0.0.1 broker.example.com
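For use in scripts, dig's +short option prints just the record data. A guarded sketch; the +time and +tries options keep the query from hanging if named is not running:

```shell
# Print only the A record for the broker host, with a short timeout so
# the command fails fast if the local named is down.
if command -v dig >/dev/null 2>&1; then
  dig +short +time=1 +tries=1 @127.0.0.1 broker.example.com || true
fi
```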

Summary

In this blog post we covered the first steps to complete a successful OpenShift Enterprise installation:

  • Installed Red Hat Enterprise Linux
  • Registered our system and attached the correct subscriptions
  • Updated our operating system with the latest packages
  • Installed and configured BIND to use as our DNS server

In the next blog post in this series, we will cover installation and configuration of DHCP, MongoDB, ActiveMQ, and MCollective.

If you don’t want to muss and fuss with the details, try out the example kickstart script that creates an environment that is ready to go.
