As of this writing, OpenShift Origin (master branch) and OpenShift Enterprise 2.2 have introduced an additional feature to our external routing solutions for highly available applications. It’s called the OpenShift Routing Daemon and it works in conjunction with an installation of OpenShift Enterprise or OpenShift Origin.
The OpenShift Routing Daemon is a pluggable daemon that allows customers to use various routing solutions. In this post, I’m going to showcase the NGINX plug-in for the routing daemon and show you how to set it up.
Installing the Routing SPI
Important: Before we get started with the routing daemon set up, please follow the steps in the OpenShift Enterprise Deployment Guide to configure the routing SPI.
Once you have completed the steps in the above referenced document, select which host(s) you’re going to run the NGINX routing process on. You have a couple of options to install NGINX. Red Hat offers NGINX 1.6 via Red Hat Software Collections version 1.2; this version does not include the NGINX Plus commercial features. If you want the NGINX Plus commercial features, install NGINX Plus via the subscription model offered directly from http://nginx.com.
For this example I’ll be using Red Hat Enterprise Linux 6.6 as the base operating system. Later in this post, I’ll point out the configuration options that enable an NGINX Plus feature providing proactive health checks for the OpenShift Enterprise gears. This feature isn’t necessary unless you want to ensure a seamless transition when an upstream group member stops responding successfully. More on that later.
Once you have installed NGINX or NGINX Plus, it’s time to set up the OpenShift Routing Daemon. First, you’ll want to install the routing daemon on the same host(s) that are running NGINX because the routing daemon will be required to update the NGINX configuration files and reload the service. NGINX Plus offers features such as a REST API and clustering, but to keep it simple for this release, we’ve decided to run the routing-daemon on the NGINX server.
We need to modify the SELinux policy because it by default restricts NGINX from opening network connections, which NGINX will need to do when it forwards HTTP connections to OpenShift gears. Run the following command:
sudo setsebool -P httpd_can_network_connect=true
Now start NGINX with the following command:
If you’re using NGINX Plus:
sudo service nginx start
If you’re using the SCL 1.2 NGINX 1.6:
sudo service nginx16-nginx start
Routing Daemon Installation and Configuration
To verify that NGINX started up, you can check its log files under /var/log/nginx/. Now that you have verified NGINX is running, let’s configure the Routing Daemon. Since I’m describing the configuration of the routing daemon with OpenShift Enterprise, you’ll want to attach your NGINX host(s) to an OpenShift Enterprise Broker subscription to enable the use of the OpenShift Infra yum repository. Connect to your NGINX host and run the following command:
sudo yum install rubygem-openshift-origin-routing-daemon -y
If you’re setting this up against the OpenShift Origin yum mirrors, then yum will be able to resolve all the dependencies for rubygem-openshift-origin-routing-daemon.
Next, edit the routing daemon’s configuration file:
sudo vi /etc/openshift/routing-daemon.conf
Find the LOAD_BALANCER setting and verify that it is set to load the NGINX plug-in.
Find the CLOUD_DOMAIN setting and set it to the domain you are using.
Find the ACTIVEMQ_HOST setting and set it to the hostname(s) of your ActiveMQ brokers; you can specify one or more ActiveMQ brokers as the value of this setting.
Find the ACTIVEMQ_PASSWORD setting and set it to the password that you configured earlier in /etc/activemq/activemq.xml.
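Put together, the relevant settings might look like the following. This is a sketch with placeholder values: the domain, hostnames, and password are examples, and you should confirm the exact plug-in value against the comments in your installed routing-daemon.conf.

```
# /etc/openshift/routing-daemon.conf (excerpt; values are examples)
LOAD_BALANCER="nginx"                                        # load the NGINX plug-in
CLOUD_DOMAIN="example.com"                                   # your OpenShift cloud domain
ACTIVEMQ_HOST="activemq1.example.com,activemq2.example.com"  # one or more brokers
ACTIVEMQ_PASSWORD="changeme"                                 # must match /etc/activemq/activemq.xml
```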
If you need to change the port that NGINX listens on for HTTP or HTTPS do the following:
For NGINX 1.6:
modify /opt/rh/nginx16/root/etc/nginx/nginx.conf and change listen 80; to another port.
For NGINX Plus, you won’t have to modify the listen port, because the /etc/openshift/routing-daemon.conf settings will set this up for you. The defaults for SSL_PORT and HTTP_PORT in routing-daemon.conf are 443 and 80 respectively. In both cases, NGINX 1.6 and NGINX Plus, you’ll want to make sure these settings are set to the port you intend NGINX to listen on. Make sure your host firewall configuration allows ingress traffic on these ports.
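On RHEL 6, one way to allow that ingress traffic is to add rules to /etc/sysconfig/iptables. This is a sketch assuming the default ports of 80 and 443; adjust the ports if you changed HTTP_PORT or SSL_PORT, and place the rules before the final REJECT rule in your chain.

```
# /etc/sysconfig/iptables (excerpt; insert before the final REJECT rule)
-A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT
```

Then reload the firewall with sudo service iptables restart.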
After making the changes to /etc/openshift/routing-daemon.conf, start the daemon:
sudo service openshift-routing-daemon start
The routing daemon produces log files under /var/log/openshift/. Check these files to make sure that the daemon started successfully:
sudo less /var/log/openshift/routing-daemon.log
You should see output similar to the following:
I, [2014-10-13T10:45:15.769081 #21442] INFO -- : Initializing routing controller...
I, [2014-10-13T10:45:15.769193 #21442] INFO -- : Initializing controller...
I, [2014-10-13T10:45:15.769751 #21442] INFO -- : Initializing nginx model...
I, [2014-10-13T10:45:15.770348 #21442] INFO -- : Requesting list of pools from load balancer...
I, [2014-10-13T10:45:15.771822 #21442] INFO -- : Found 0 pools:
I, [2014-10-13T10:45:15.771954 #21442] INFO -- : Connecting to ActiveMQ...
I, [2014-10-13T10:45:15.845895 #21442] INFO -- : Subscribing to /topic/routinginfo...
I, [2014-10-13T10:45:15.846362 #21442] INFO -- : Listening...
Now that you have the routing daemon and NGINX set up to listen to application routing events, it’s time to set up permissions for users to create HA applications.
OpenShift Broker HA Application Configuration
Granting a User HA Permission — Before you can create an HA application, you must grant permission to create HA applications to your users. Do so with the following command on the OpenShift Broker host:
sudo oo-admin-ctl-user -l exampleuser --allowha true
You should see the following output:
Setting HA capability to true for user exampleuser... Done.
User exampleuser:
   plan:
   plan quantity: 1
   plan expiration date:
   consumed domains: 0
   max domains: 10
   consumed gears: 0
   max gears: 100
   max tracked storage per gear: 0
   max untracked storage per gear: 0
   max teams: 0
   viewing all global teams allowed: false
   gear sizes: small
   sub accounts allowed: false
   private SSL certificates allowed: false
   inherit gear sizes: false
   HA allowed: true
Granting HA Permission by Default — If you want to enable all new users by default to have permission to create HA applications, you can do so by modifying /etc/openshift/broker.conf on all of your OpenShift broker hosts:
sudo vi /etc/openshift/broker.conf
Find the DEFAULT_ALLOW_HA="false" setting and change it to DEFAULT_ALLOW_HA="true".
Then restart the OpenShift broker in order for your changes to take effect:
sudo service openshift-broker restart
Creating an HA Application
Now you can create an HA application and it only requires two steps. First, you must create the application, specifying that it should be scalable:
rhc app create myhaapp ruby-1.9 -s
You should see output similar to the following:
Application Options
-------------------
Domain:     exampleuser
Cartridges: ruby-1.9
Gear Size:  default
Scaling:    yes

Creating application 'myhaapp' ... done

Waiting for your DNS name to be available ... done

Your application 'myhaapp' is now available.

  URL:        http://myhaapp-exampleuser.example.com/
  SSH to:     firstname.lastname@example.org
  Git remote: ssh://email@example.com/~/git/myhaapp.git/

Run 'rhc show-app myhaapp' for more details about your app.

Note the UUID of your application. In the above output, the UUID is 543bfaf36e762d7966000017.
Once the application has been created, enable the application as an HA app:
rhc app enable-ha myhaapp
Because you enabled the MANAGE_HA_DNS setting earlier in /etc/openshift/broker.conf, the OpenShift broker will create the additional DNS record for your application when you make it HA, so your application should now be available via the load balancer at http://ha-myhaapp.example.com/. Try using the URL in your Web browser.
If you’re using more than one NGINX host, you’ll want to put an additional load balancer in front of the NGINX hosts or use round-robin DNS, because your HA application DNS entry will need to resolve to an NGINX host. If you plan to terminate SSL at the NGINX host and you’re using multiple NGINX hosts, you’ll need to make sure the load balancer in front of NGINX supports forwarding of the SSL traffic using SNI. The SSL configuration for termination will reside in the NGINX application configuration.
To configure SSL termination, first add an alias to your OpenShift application:
rhc alias add exampledomain.com myhaapp
Then add the SSL certificate and key for your application alias:
rhc alias update-cert myhaapp exampledomain.com --certificate ./certs/cert.cert --private-key ./certs/cert.key
The routing daemon will automatically create the SSL configuration for your application, and you’ll now be able to connect to your application using https://ha-myhaapp.example.com/. Try using the URL in your Web browser.
When you scale your application up to the point where additional haproxy gears are created, the NGINX routing daemon will receive a message and update the NGINX application configuration. If you were to look at the config files, you would see N server entries in an upstream group.
By default, the generated configuration uses the least_conn balancing algorithm; consult the NGINX documentation for details about additional algorithms and tunables. The open source NGINX uses passive health checking: a client makes a request, and NGINX attempts to send it to one of the servers in the upstream group. If that fails, NGINX takes the server out of the group (for how long depends on the tunables you set; the default is 10s).
This is good behavior, but not great, because the client still gets an error back and has to resubmit the request to have it routed to another server in the upstream group.
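To give a sense of what this looks like, here is a sketch of a generated upstream group. The upstream name, ports, and server addresses are illustrative placeholders (the routing daemon generates the real files), and max_fails and fail_timeout are the standard NGINX passive health-check tunables with the 10s default mentioned above.

```nginx
# Illustrative sketch of a generated upstream group (names/addresses are examples)
upstream pool_ose_myhaapp_exampleuser_80 {
    least_conn;                                   # default balancing algorithm
    server node1.example.com:35561 max_fails=1 fail_timeout=10s;
    server node2.example.com:35562 max_fails=1 fail_timeout=10s;
}

server {
    listen 80;
    server_name ha-myhaapp.example.com;
    location / {
        proxy_pass http://pool_ose_myhaapp_exampleuser_80;
    }
}
```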
Active Health Monitoring with NGINX Plus
Add the following configuration options to routing-daemon.conf to enable NGINX Plus health checking. Doing so enables active health checking, which takes servers out of the upstream pool without requiring a client request to trigger the check.
NGINX_PLUS=true
NGINX_PLUS_HEALTH_CHECK_INTERVAL=2s
NGINX_PLUS_HEALTH_CHECK_FAILS=1
NGINX_PLUS_HEALTH_CHECK_PASSES=5
NGINX_PLUS_HEALTH_CHECK_URI=/
NGINX_PLUS_HEALTH_CHECK_MATCH_STATUS=200
NGINX_PLUS_HEALTH_CHECK_SHARED_MEMORY=64k
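With those settings, the configuration the daemon generates would use NGINX Plus’s active health_check directive, roughly like the sketch below. The upstream name and server addresses are placeholders; the zone directive (sized by NGINX_PLUS_HEALTH_CHECK_SHARED_MEMORY) is required for active health checks.

```nginx
# Illustrative sketch of active health checking in the generated config
upstream pool_ose_myhaapp_exampleuser_80 {
    zone pool_ose_myhaapp_exampleuser_80 64k;     # shared memory for health state
    least_conn;
    server node1.example.com:35561;
    server node2.example.com:35562;
}

match ok {
    status 200;                                   # NGINX_PLUS_HEALTH_CHECK_MATCH_STATUS
}

server {
    listen 80;
    server_name ha-myhaapp.example.com;
    location / {
        proxy_pass http://pool_ose_myhaapp_exampleuser_80;
        health_check interval=2s fails=1 passes=5 uri=/ match=ok;
    }
}
```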