Announcing OpenShift Enterprise 2.1

OpenShift Enterprise 2.1 Overview

It seems like only yesterday that the release of OpenShift Enterprise 2.0 was announced. Now, just a few months later, Red Hat is releasing OpenShift Enterprise 2.1! In addition to fixing a number of bugs and implementing some key requests for enhancement, this release focuses on increasing functionality in eight important areas.

  • New Content
  • Central and Consolidated Logging
  • Metric Gathering
  • Zones and Regions
  • Placement Policy Extensibility
  • External Team Integration
  • Node Watchman
  • Offline Developer Productivity

Let’s dive deeper into these areas and find out if any of them solve the problems you have been facing inside your datacenter.

New Content

One of the truly amazing things about OpenShift is that it understands maintaining the PaaS platform is only half the problem. The other half is that someone needs to be responsible for all the application frameworks and runtimes running on the platform itself. With OpenShift Enterprise, we ensure that your support license delivers not only security fixes, bug fixes, and technical support for OpenShift, but also those same things for the content we ship. Using the patch and software life cycle management tools you have already deployed in your datacenter, you can easily see when we have released new fixes, determine which nodes are missing those patches, and even see which processes and Linux containers need to be restarted. This is a massive cost-of-ownership burden that we carry on our shoulders so you do not have to.

OpenShift Enterprise 2.1 adds cartridge support for MySQL 5.5 and PHP 5.4. In a few weeks we will release support for Python 3.3 and MongoDB 2.4. Yes! MongoDB has finally made it from OpenShift Online to OpenShift Enterprise. These four cartridges join the large catalog of content that already exists for OpenShift Enterprise. As you can see, we cover an impressive range of technologies, plus a significant amount of community-contributed content.

Central and Consolidated Logging

As you know from previous versions of OpenShift Enterprise, the product does an excellent job of producing log files for the platform and the application services it provides. Unfortunately, this meant many log file locations spread across many operating systems and gears. With OpenShift Enterprise 2.1, you can choose whether to keep local file locations or leverage rsyslog to consolidate these logs onto a central loghost for the environment.

This feature helps enable solutions like Splunk, Elasticsearch, and other popular log analytics tools to perform a richer analysis of errors. Standard error and standard output are useful on their own, but placing them all in a single location makes correlating issues far easier and multiplies the usefulness of the data. At the same time, this feature increases the auditability of the platform.

We have introduced ways to configure the error level and control the key strings within the log files through a central configuration file. You can also choose which metadata labels from the node and gear should be appended to each error or event that appears in the log file (a configuration sketch follows the list):

  • request_id
  • container_uuid
  • app_uuid
  • openshift_namespace
  • openshift_app_name
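
As a rough illustration of how this looks on a node, the settings below route platform logs to syslog and append contextual metadata to each entry. Treat the option names as assumptions from memory and confirm them against the OpenShift Enterprise 2.1 Administration Guide before applying them:

    # /etc/openshift/node.conf (excerpt) -- illustrative sketch only
    # Send platform and trace logs to syslog instead of local log files.
    PLATFORM_LOG_CLASS=SyslogLogger
    PLATFORM_TRACE_LOG_CLASS=SyslogLogger
    # Append contextual metadata (request, gear, and application identifiers)
    # to every platform log entry.
    PLATFORM_LOG_CONTEXT_ENABLED=1
    PLATFORM_LOG_CONTEXT_ATTRS=request_id,container_uuid,app_uuid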

Metric Gathering

As we finished the logging enhancement, we realized it would be nice to see the performance of the operating system and the applications in the same flat file format, with the appropriate metadata called out, so application owners and administrators can troubleshoot issues. We tackled this in two ways. First, we gave our watchman daemon an optional setting to record operating system metrics at a configurable interval. By default we will collect the following from cgroups:

  • CPU statistics
  • CPU accounting statistics
  • Memory usage information

These default metadata labels will appear on the collection line in the flat file as:

  • OpenShift_App_Name
  • OpenShift_Gear_UUID
  • OpenShift_App_UUID
  • OpenShift_Namespace

Second, we extended the cartridge specification to allow a cartridge author to supply an application-specific data acquisition script or method to gather KPI information about the application.
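
To make the idea concrete, a cartridge-supplied metrics script can be as simple as a program that prints key=value pairs for Watchman to collect and label. The sketch below is hypothetical; the actual hook name, location, and output format are defined by the cartridge specification:

    #!/bin/bash
    # Hypothetical cartridge metrics hook (sketch only).
    # Each line is a key=value pair that the platform can tag with the gear's
    # metadata labels before writing it to the metrics log.
    echo "app.uptime_seconds=$(awk '{print int($1)}' /proc/uptime)"
    echo "app.open_files=$(ls /proc/self/fd | wc -l)"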

Zones and Regions

This is an extremely exciting feature for OpenShift Enterprise 2.1. Private cloud owners have struggled to find a balance between keeping the PaaS platform completely abstracted away from the infrastructure and having it leverage existing specialization in their equipment and workload requirements. The user requirements in this space are endless.

Some departments might only allow specific personnel with specific nationalities to access specific equipment. Or maybe equipment was allocated for specific projects. To solve these requirements, people do not want to stand up isolated PaaS platforms. Instead, they want the option to work from a central platform that lets them label OpenShift nodes with zone tags, just as they do today with gear profile names, and then group those zones into regions. This way, when a user selects a gear profile name, e.g. “prod-large”, the workload is always placed on the corresponding hardware, inside the correct zone, within the assigned region.

We did not stop there. If a region contains more than one zone, our deployment and auto-scale logic will automatically span the application across all the zones within the region, to the extent the user's gear allocation allows. An example would be placing racks into zones and then grouping multiple racks into a region. Now when a user deploys a JBoss EAP application with a MongoDB backend, the gears making up the application are balanced across multiple racks, increasing the availability and resilience of the application service.

The setup of zones and regions is limited only by your imagination and underlying infrastructure. Some people may choose to place neighboring datacenters or buildings in zones and place those zones in larger regions. This feature, coupled with the automatic intelligence of our placement logic, provides an extremely easy way to bring more high-profile application services onto the PaaS platform.
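
As a rough sketch of the administrative flow, regions and zones are defined on the broker and then attached to nodes through districts. The command options below are assumptions from memory; the Administration Guide has the authoritative syntax:

    # Illustrative only -- verify the exact options before use.
    oo-admin-ctl-region -c create -r us-east                  # create a region
    oo-admin-ctl-region -c add-zone -r us-east -z rack-a      # add a zone to it
    oo-admin-ctl-district -c create -n prod-large-district -p prod-large
    oo-admin-ctl-district -c add-node -n prod-large-district \
        -i node1.example.com -r us-east -z rack-a             # node lands in that region/zone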

(Figure: OpenShift zones and regions)

Placement Policy Extensibility

Even after all the work we did on zones and regions, we still had users whose placement requirements called for more extensibility. They simply had operational requirements too specific to their business. Some customers might only use specific infrastructure at specific times of the day or business quarter. Others might have a CMDB and an ITIL change control process requiring them to perform more actions before the workload is deployed. To solve these requirements, we created a public interface on our broker tier: a documented Ruby plugin solution that allows the PaaS platform administrator to inject himself into the placement decision. OpenShift Enterprise still collects all the information it normally does about the submitted job, user, request, node utilization within the gear profile, zone, region, district, and so on, and hands it over to the plugin interface to make the selection.
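
Concretely, the broker hands everything it has collected to a single Ruby method and uses whatever node the plugin returns. The sketch below assumes the interface of the example gear placement plugin; the method signature and attribute names are assumptions to verify against the deployment documentation:

    # Sketch of a gear placement plugin (assumed interface, not a drop-in implementation).
    module OpenShift
      class GearPlacementPlugin
        # server_infos:  candidate nodes, already filtered by gear profile, zone, and region
        # app_props:     properties of the application being placed
        # current_gears: gears the application already owns
        # comp_list:     cartridge components being added
        # user_props:    properties of the requesting user
        # request_time:  time the placement request was made
        def self.select_best_fit_node_impl(server_infos, app_props, current_gears,
                                            comp_list, user_props, request_time)
          # Example policy: pick the candidate node reporting the lowest consumed
          # capacity (attribute name assumed from the example plugin).
          server_infos.min_by { |node| node.node_consumed_capacity.to_f }
        end
      end
    end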

This new interface allows datacenter operations to further integrate their existing processes into the PaaS platform without compromising the self-service, continuous delivery, developer IDE awareness, and code build chain integration found in our PaaS solution.

External Team Integration

In the last release of OpenShift Enterprise, we allowed developers to invite others into their application project (domain), and we enabled operations teams to establish a service account that controls the project and assigns responsibilities to its members. How you used the feature depended on where you were on the DevOps spectrum.

In this release of OpenShift, we furthered our teaming feature by introducing an interface that lets us sync with an existing group repository. By far the most popular user group naming service is LDAP, or Active Directory through LDAP. With this in mind, we are providing an example of how to sync with an external LDAP repository to gather this information, and the sync interval is configurable by the platform operator. We realize there are many ways to group people; Amazon Web Services, for example, has its own notion of grouping. In the end, we expose a data model of key-value pairs we are seeking, and you get full control over how you supply this information.

Not only that, but you also control the life cycle of users through that interface. If a user is no longer with the company, the next sync against your provided logic will discover that: the normal auth provider login facility will keep the user from logging in to OpenShift, and OpenShift will remove the privileges granted to that user through team membership. This gives the platform administrator control over how to reassign or remove the individual's work from the PaaS platform.

Only users with a special permission flag set will be able to add these teams to their projects (domains). You can add these teams to projects through the rhc command line or browser user interface in the same way you would add an individual user to a project. Once added, you have the ability to further customize the team that is created within OpenShift. However, the flow from the external repository is one way into OpenShift. We don’t want you using OpenShift to administer your LDAP ;)
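
For example, attaching one of these synced teams to a project looks much like adding an individual member. The exact rhc options shown below are assumptions; check rhc member add --help on your installation for the authoritative syntax:

    # Illustrative only -- option names may vary between rhc versions.
    rhc member add db-admins --type team --namespace mydomain --role edit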

Node Watchman

Watchman is a daemon Red Hat has been using on our public PaaS at openshift.com for a while. It has the built-in smarts to protect you from the most common issues we have found by running a PaaS for a massive population of users and a very high application count. Optional autonomous remediation is key to scaling individual administrators along with a large PaaS platform; you can't hire more people just because you have more applications, and Watchman helps fill that gap. Watchman has a fully extensible plugin interface, but it ships out of the box with the following logic that you can choose to leverage or turn off:

  • It will search the cgroup event flow through syslog to determine when a gear is destroyed. If the pattern does not match a clean gear removal, we will restart the gear.
  • It will watch over the application server logs for “out of memory” type messages and will restart the gear if needed.
  • Watchman compares the state the user believes the gear to be in with the actual status of the gear and fixes any discrepancy.
  • It searches out processes and makes sure they belong to the correct cgroup. It kills processes associated to a stopped gear that have been abandoned or restarts a gear that has no running processes at all.
  • Watchman monitors the rate of usage on CPU cycles and will throttle a gear’s CPU consumption if the rate of change is too aggressive.

As you can see, having Watchman on your side can be a great way to keep the platform healthier without eating up more of your administrators' time.
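
Watchman plugins themselves are small Ruby classes dropped into the plugin directory on each node. The sketch below shows the general shape only; the base class name, constructor arguments, and restart helper are assumptions to verify against the Administration Guide:

    # Minimal sketch of a Watchman plugin (assumed interface).
    # Real plugins live under /etc/openshift/watchman/plugins.d/ on the node.
    class StaleGearPlugin < OpenShift::Runtime::WatchmanPlugin
      # config, logger, gears, and operation are supplied by Watchman (assumed).
      def initialize(config, logger, gears, operation)
        super(config, logger, gears, operation)
      end

      # Called on every Watchman iteration.
      def apply(iteration)
        @gears.ids.each do |uuid|
          # Hypothetical health check; restart is assumed to come from the base class.
          restart(uuid) if gear_unhealthy?(uuid)
        end
      end

      private

      # Placeholder check used only for illustration.
      def gear_unhealthy?(uuid)
        false
      end
    end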

Offline Developer Productivity

Customers have asked us for a simple, no-touch virtual machine (VM) image of OpenShift Enterprise so they can put the image on a laptop and work on cartridges, code, and DevOps models while disconnected from their datacenter. As part of your OpenShift Enterprise subscription, we have added a VM image of OpenShift Enterprise under the Downloads tab for OpenShift Enterprise on access.redhat.com. This VM image will make it through our release process next week.

The image contains a desktop environment, RHEL 6.5, Software Collections, OpenShift Enterprise 2.1, and JBoss Developer Studio. When new versions of OpenShift Enterprise 2.1.x are released, we will regenerate and replace the VM image on the download site. If you want to remain up to date on RHEL or the other components, you will need to assign your subscription to the channels within the VM image.

In order to deliver a no-touch experience, we have made networking assumptions that allow the VM image to run its own DNS off the localhost interface of your laptop. You can communicate with the applications deployed on the VM image from your host laptop by changing some simple DNS configuration files, or you can work from the virtual desktop.
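
For example, one minimal approach on the host laptop is to point the application's hostname at the VM's address in /etc/hosts; the IP address and hostnames below are placeholders:

    # /etc/hosts on the host laptop -- placeholder values
    192.168.122.10   myapp-mydomain.example.com   broker.example.com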

This VM image offers a great way for someone to get started learning and experiencing OpenShift Enterprise for the first time, while also offering a way for transient workers to remain productive.

Conclusion

OpenShift Enterprise 2.1 may only be an x.x (minor) release, but it packs in a lot of features I wouldn't want to miss. Please explore the full documentation for the release here:

OpenShift Enterprise Documentation

OpenShift continues to help datacenters find the right balance between next generation cloud architectures and the current needs of the business. It places the focus back on the purpose for technology rather than catering to legacy cost models and boundaries. Unleash your developers while still controlling your datacenter.

Next Steps

Discover how to transform your business. Get an OpenShift Enterprise supported evaluation license now.
