As a Solution Architect at Red Hat, I had the opportunity to run a "JBoss AMQ on OpenShift" workshop some weeks ago at a customer site. Having worked with AMQ for years outside OpenShift and having just played with the containerized version, I was surprised that some features were already there but not documented, while others were simply missing.

This post is a walk-through of some enhancements I’ve made to the Red Hat JBoss AMQ container image in order to meet my customer's requirements. It covers topics such as adding a monitoring layer to AMQ, making configuration management easier across environments, and explaining the Source-to-Image (S2I) process and its use cases for AMQ. By the way, if you’re interested in monitoring Red Hat integration solutions, I recommend having a look at Bruno Meseguer's excellent blog post, which was an inspiration for reproducing on AMQ what was done on Fuse.

The sources and examples used in this post can be found in my GitHub repository.

Introducing JBoss AMQ

JBoss AMQ is Red Hat's broker for enterprise messaging applications. It provides fast, lightweight, and secure messaging for Internet-scale applications. AMQ components use industry-standard message protocols like AMQP and MQTT and support a wide range of programming languages and operating environments. It gives you the strong foundation you need to build modern distributed applications.

When deployed on OpenShift, AMQ provides simple configuration for setting up a broker mesh: a network of broker replicas that shards messages and provides an elegant distribution of message producers and consumers on a fault-tolerant architecture. Red Hat provides a container image that assembles the productized version of the upstream Apache ActiveMQ project, the Fabric8 accelerators for Java container image management, and the Jolokia exporter for monitoring metrics.

 

AMQ Image

However, when it comes to large-scale and dynamic deployments - where brokers can be spawned and disappear rapidly - the Jolokia monitoring does not scale easily. This is because it is exposed on a single container port that has to be discovered. Moreover, it does not provide storage and history of metrics across the broker mesh.

Adding Prometheus monitoring

In this first part, we are going to cover the addition of a monitoring layer to JBoss AMQ broker.

Prometheus is an open source monitoring and alerting toolkit which collects and stores time series data. Conveniently it understands Kubernetes’s API to discover services. The collected data can be intelligently organized and rendered using Grafana, an open analytics and monitoring platform that can plug into Prometheus.

Prometheus's primary strategy is to pull metrics from applications before storing them in its internal time-series data store. In a dynamic environment like Kubernetes or OpenShift, Prometheus is able to discover replicas of applications and can thus pull metrics from the different container instances. Conveniently, the Prometheus project provides different kinds of exporters that you can install or embed into your application. To minimise intrusion, Prometheus offers a JMX exporter that you just have to reference in the startup command line of your Java application.

Using the JMX exporter as a Java agent, you can expose all the metrics coming from the underlying JVM for Prometheus scraping. You can also export custom JMX metrics such as those collected by ActiveMQ. The interesting metrics available from AMQ encompass everything related to:

  • the AMQ broker itself, such as its status,
  • the persistence adapter, such as the size of the embedded message store,
  • the queue destinations, such as the number of messages, the number of producers / consumers and the enqueue / dequeue counts for each queue,
  • the topic destinations, such as the number of messages, the number of publishers / subscribers and the enqueue / dequeue counts for each topic.

AMQ Prometheus metrics

Now that we have covered the principles and goals, we’re ready to get our hands dirty. Let’s write some code!

Build a new custom-jboss-amq63 image

The first task in this walk-through is to write a custom Dockerfile in order to enhance and complete the container image provided by Red Hat. We simply start extending the default openshift/jboss-amq-63:1.3 image by downloading the Prometheus JMX exporter and adding it to a /opt/prometheus directory in our image. This exporter will expose an endpoint on port 9779.

FROM openshift/jboss-amq-63:1.3

[...]

# Prometheus JMX exporter agent
RUN mkdir -p /opt/prometheus/etc \
    && curl http://central.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.10/jmx_prometheus_javaagent-0.10.jar \
       -o /opt/prometheus/jmx_prometheus_javaagent.jar
ADD prometheus-config.yml /opt/prometheus/prometheus-config.yml
RUN chmod 444 /opt/prometheus/jmx_prometheus_javaagent.jar \
    && chmod 444 /opt/prometheus/prometheus-config.yml \
    && chmod 775 /opt/prometheus/etc \
    && chgrp root /opt/prometheus/etc

EXPOSE 9779

[...]

We can also see that this Dockerfile references a Prometheus configuration file called prometheus-config.yml. This file is used by the Java agent to know which metrics to expose for the Prometheus scraping process. You can just create this file locally in the same directory as the Dockerfile. Here is a snippet of the file I have used to illustrate the metrics declaration:

startDelaySeconds: 5
ssl: false
blacklistObjectNames: ["java.lang:*"]
rules:
- pattern: 'org.apache.activemq<type=Broker, brokerName=([^,]+), service=PersistenceAdapter, instanceName=([^,]+)><>Size'
  name: activemq_broker_persistence_adapter
  help: Broker Persistence Adapter size
  type: GAUGE
  labels:
    broker: $1
    persistentVolume: $2
- pattern: 'org.apache.activemq<type=Broker, brokerName=([^,]+), destinationType=Queue, destinationName=([^,]+)><>QueueSize'
  name: activemq_queue_size
  help: Number of messages on this destination
  type: GAUGE
  labels:
    broker: $1
    queue: $2
- pattern: 'org.apache.activemq<type=Broker, brokerName=([^,]+), destinationType=Queue, destinationName=([^,]+)><>EnqueueCount'
  name: activemq_queue_enqueue_count
  help: Number of messages that have been sent to the destination
  type: COUNTER
  labels:
    broker: $1
    queue: $2

Do you see the point about the ([^,]+) expressions? These are capture groups that are used later for assigning labels to metrics. That way you’ll be able to distinguish metrics coming from different brokers or from different queues or topics.

Put all these files into a Git repository somewhere, then create a custom image in OpenShift simply by using this Git repository URL and the docker build strategy:

$ oc new-build https://github.com/lbroudoux/openshift-cases.git --context-dir="/jboss-amq-custom/custom-amq" --name=custom-amq --strategy=docker --to=custom-jboss-amq-63
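If you want to follow the build progress, something like this should do the trick (assuming the build config name given in the previous command):

$ oc logs -f bc/custom-amq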

Deploy your custom image

Now that you have created a new image, there are basically two options for consuming it:

  • Deploy AMQ using the provided template and then adjust it later by editing the DeploymentConfig in OpenShift,
  • Write one or more new templates to ease the deployment of new brokers, all containing the Prometheus additions.

The changes we have to make here are the image that should be deployed and the declaration of a new 9779 port on our AMQ container. However, an ingredient is still missing in our recipe: we also need to activate the JMX exporter as a Java agent for the JVM, which means tweaking its startup command line. Here’s the first undocumented feature: a JAVA_OPTS_APPEND environment variable is present in the AMQ image precisely for that. So you’ll also have to add the following environment variable to your broker deployment:

JAVA_OPTS_APPEND=-javaagent:/opt/prometheus/jmx_prometheus_javaagent.jar=9779:/opt/prometheus/prometheus-config.yml
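To make this concrete, here is a minimal sketch of the container fragment of the DeploymentConfig once those changes are applied. The container name and the exact layout are assumptions based on my templates; the port name prometheus matters because the scrape configuration shown later relies on it:

spec:
  template:
    spec:
      containers:
      - name: custom-broker-amq
        image: custom-jboss-amq-63:1.3
        ports:
        # Named port used later by the Prometheus Kubernetes service discovery
        - containerPort: 9779
          name: prometheus
          protocol: TCP
        env:
        # Activates the JMX exporter as a Java agent
        - name: JAVA_OPTS_APPEND
          value: "-javaagent:/opt/prometheus/jmx_prometheus_javaagent.jar=9779:/opt/prometheus/prometheus-config.yml"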

Because there are at least three modifications to make, I have chosen to write and provide two different templates: a simple one without SSL support, and one supporting SSL. Creating a new template is as straightforward as exporting an existing one and modifying the YAML file. Optionally, you may want to reference only a tagged image from your template. For that, you’ll need to tag the previously produced latest image into something more stable:

$ oc tag amq-tests/custom-jboss-amq-63:latest amq-tests/custom-jboss-amq-63:1.3

Now record your template in OpenShift, or just use this single command line to deploy a broker using the non-SSL template:

$ oc process -f https://raw.githubusercontent.com/lbroudoux/openshift-cases/master/jboss-amq-custom/custom-amq63-persistent-nosecret.yml \
--param="APPLICATION_NAME=custom-broker" \
--param="IMAGE_STREAM_NAMESPACE=amq-tests" | oc create -f -

You'll find the templates I have written here.

Check and visualize

To check that the new configuration has been taken into account, you may want to connect to the pod running the AMQ container. Start by listing all the pods in your project:

$ oc get pods
NAME                            READY     STATUS      RESTARTS   AGE
custom-amq-1-build              0/1       Completed   0          1h
custom-broker-amq-1-xvn5p       1/1       Running     0          7m
custom-broker-drainer-1-7j2vr   1/1       Running     0          6m

Then open a remote shell into the container using this simple command and just call the localhost:9779 endpoint. You should get the list of metrics exposed by the Prometheus JMX exporter, like below. I've removed some metrics to highlight the specific ones coming from the JVM and the broker:

$ oc rsh custom-broker-amq-1-xvn5p
sh-4.2$ curl http://localhost:9779
# HELP activemq_queue_enqueue_count Number of messages that have been sent to the destination
# TYPE activemq_queue_enqueue_count counter
activemq_queue_enqueue_count{broker="custom-broker-amq-21-xvn5p",queue="incomingOrders",} 12.0
[...]
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap",} 3.8775976E7
jvm_memory_bytes_used{area="nonheap",} 3.668756E7

Now, we assume you have a Prometheus instance running somewhere on your OpenShift cluster (OpenShift has shipped Ansible installation scripts for Prometheus since version 3.7). On this running instance, you will have to adapt the configuration so that Prometheus tries to discover the Kubernetes services for AMQ brokers. This is done by declaring a new job_name in the scrape configuration of the prometheus.yml file (usually put into a ConfigMap for convenience). Here we ask Prometheus to look for the TCP service of the AMQ broker in order to discover new AMQ containers, and then to use their port called prometheus:

- job_name: 'amq-brokers'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name]
    action: keep
    regex: ((.+-)*broker-amq-tcp)
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    action: keep
    regex: prometheus

Finally, after updating the configuration and restarting the Prometheus pods, you should be able to see the AMQ brokers among the Prometheus targets. You are then able to start querying metrics and build some fancy graphs ;-)
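As an illustration, here are a couple of queries that should work with the metric names declared in our prometheus-config.yml (the incomingOrders queue is just the example destination used earlier):

# Current depth of the incomingOrders queue on each broker replica
activemq_queue_size{queue="incomingOrders"}

# Per-queue message inflow rate over the last 5 minutes
sum by (queue) (rate(activemq_queue_enqueue_count[5m]))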

Prometheus visualization

Building a base image with configuration injected through a Secret

Another requirement we discussed with the customer was the ability to easily inject configuration files depending on the environment. The AMQ container image provides a mechanism for dynamically customizing the broker configuration on startup, but it is purely based on environment variables. The drawback with environment variables is that their number is limited, and this restriction cannot cover all the configuration cases we want to achieve. For example, using the built-in configuration mechanism, you will not be able to declare many users or groups that are allowed to access your broker. Variable values are also too limited to reproduce complex configuration file structures.
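To illustrate, even something as simple as a users.properties file with several accounts (illustrative names and passwords below) is already awkward to express through environment variables:

# users.properties: one user per line, in the form username=password
admin=supersecret
orders-service=orders-pass
audit-reader=audit-pass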

So the idea that emerged was to use Kubernetes / OpenShift Secrets to carry those configuration files. Secrets being protected all the way from storage to container, they can contain sensitive data like user passwords or security configuration. The plan was then to identify a specific volume mount within our container to receive Secrets injected by OpenShift. Then, at AMQ startup time, we override some parts of the dynamic configuration done by the base image by simply copying the configuration files found in the Secret. Typically, this means overriding the default launch.sh script of the base image.

AMQ image with secret

Complete custom-jboss-amq63 image

To begin, we have to complete our custom image with a new launch.sh script. Unfortunately, the one coming with the base image is not extensible, so we have to extract it completely and add these extra copy directives just before the end of the script, like this:

[...]
echo "Evaluating $AMQ_SECRET_CONFIG_DIR for configuration files in secret volume..."
# Overwrite config with custom one in secret if provided.
if [ "$(ls $AMQ_SECRET_CONFIG_DIR)" ]; then
  echo "Found files into configuration secret, overriding $AMQ_HOME/conf/"
  cp -f "$AMQ_SECRET_CONFIG_DIR"/* "$AMQ_HOME/conf/"
fi
[...]

We will use the AMQ_SECRET_CONFIG_DIR environment variable to know where the Secret volume is mounted within the container. Then, we just have to add this new script to our custom image and make it the default command to execute when starting the container, this way:

FROM openshift/jboss-amq-63:1.3

[...]

COPY launch.sh /opt/amq-custom
RUN chmod -R 777 /opt/amq-custom

# Override default launch.
CMD [ "/opt/amq-custom/launch.sh" ]

[...]

We can now just start rebuilding our custom image with oc start-build custom-amq.

Complete custom image deployment

Before testing things out, we have to add some extra information to our deployment templates, or directly within the deployment configuration (a YAML sketch follows the list):

  1. Add a new volume definition for the Secret. We expect a secret called ${APPLICATION_NAME}-secret-config
  2. Add a new volumeMount to mount the Secret into /etc/amq-secret-config-volume
  3. Add a new env entry in our container so that AMQ_SECRET_CONFIG_DIR=/etc/amq-secret-config-volume
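Here is a minimal sketch of what these three additions could look like in the DeploymentConfig, assuming the custom-broker application name used earlier (the volume name secret-config-volume is just an illustrative choice):

spec:
  template:
    spec:
      containers:
      - name: custom-broker-amq
        env:
        # 3. Tells launch.sh where the Secret volume is mounted
        - name: AMQ_SECRET_CONFIG_DIR
          value: /etc/amq-secret-config-volume
        volumeMounts:
        # 2. Mounts the Secret into the container
        - name: secret-config-volume
          mountPath: /etc/amq-secret-config-volume
          readOnly: true
      volumes:
      # 1. Volume backed by the ${APPLICATION_NAME}-secret-config Secret
      - name: secret-config-volume
        secret:
          secretName: custom-broker-secret-config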

The result of these modifications can be seen in the provided custom-amq-63-persistent template on GitHub.

Test things out

Once your deployment has been updated, you just have to create a new Secret, for example by using local configuration files that will override those produced by the customization process. In the case of our former custom-broker application deployment, the Secret is called custom-broker-secret-config:

$ oc create secret generic custom-broker-secret-config --from-file=users.properties=users.properties

Redeploy your broker using the updated DeploymentConfig or template, and watch your Secret configuration files being injected into the container, replacing the default AMQ configuration files in /opt/amq/conf!
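A quick way to double-check is to open a remote shell into the freshly redeployed pod (the pod name below is just an example) and inspect the mounted Secret and the resulting configuration:

$ oc rsh custom-broker-amq-2-abcde
sh-4.2$ ls $AMQ_SECRET_CONFIG_DIR
users.properties
sh-4.2$ cat /opt/amq/conf/users.properties   # should now show the content coming from the Secret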

Building a base image with S2I capabilities

As said in the introduction, the base AMQ image provided by Red Hat has Source-to-Image (S2I) capabilities. What does that mean? It means that it is able to "merge" source resources (generally text files or binaries) on top of the base image to produce a new image. This resulting image is not started using the regular CMD command found in the base container definition, but using a custom run script that represents a dedicated S2I endpoint.

In the case of the Red Hat AMQ image, this process is only helpful for customizing a configuration file called openshift-activemq.xml. It is a template configuration file with placeholders for configuration dynamically injected at startup time from environment variables. This mechanism is very helpful because - as said before - environment variable configuration is not as comprehensive as editing and managing the file directly. There are numerous reasons why a customer would want to customize this template (an illustrative fragment follows the list):

  • Declaring destinations with their security configuration,
  • Defining custom and complex policies for destinations,
  • Adding new connectors that are not managed by the customization process (e.g. an NIO connector),
  • Embedding custom plugins…
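For instance, here is the kind of fragment you might add to the openshift-activemq.xml template. It is only an illustrative sketch covering the first and third bullets above; the destination name, port and connector options are assumptions, and the surrounding placeholders come from the template shipped in the base image:

<broker xmlns="http://activemq.apache.org/schema/core">
  <!-- Pre-declared destination -->
  <destinations>
    <queue physicalName="incomingOrders"/>
  </destinations>
  <transportConnectors>
    <!-- Extra NIO connector, not handled by the env-variable based customization -->
    <transportConnector name="nio" uri="nio://0.0.0.0:61618?maximumConnections=1000"/>
  </transportConnectors>
</broker>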

 

AMQ image with S2I

Complete custom-jboss-amq63 image once again ;-)

So we’d like our custom image to still provide S2I capabilities. This involves implementing two lifecycle hooks as shell scripts. The assemble hook / script is called during the image build phase; this script directly reproduces the base image behavior. The run hook / script is called at container deployment phase. In our case, this script should be adapted to call our custom launch.sh script written in the previous chapter. So simply:

#!/bin/sh

echo "Custom run script invoked"
exec /opt/amq-custom/launch.sh

You will find these two scripts in the GitHub repository under the /s2i directory. Next, we have to include these scripts in our container image by modifying the Dockerfile. We have to add a special label called io.openshift.s2i.scripts-url so that OpenShift will be able to invoke the specific S2I endpoints during the build and deployment phases.

FROM openshift/jboss-amq-63:1.3

[...]

# S2I customization and annotation
COPY ./s2i/ /opt/amq-custom/s2i
COPY launch.sh /opt/amq-custom
RUN chmod -R 777 /opt/amq-custom

LABEL io.openshift.s2i.scripts-url="image:///opt/amq-custom/s2i"

[...]

We can now just start rebuilding our custom image with oc start-build custom-amq.

Build a new custom-amq-s2i image

Once the build has finished successfully, you may want to start creating a new image from your custom-jboss-amq-63 image, merging a new configuration on top of it. In my environment, the custom-jboss-amq-63 image has been placed into an amq-tests OpenShift project and the Git repository holding the openshift-activemq.xml file is on GitHub. So I use this command line, which you should adapt to your own setup:

$ oc new-build amq-tests/custom-jboss-amq-63~https://github.com/lbroudoux/openshift-cases.git --context-dir="/jboss-amq-custom/custom-amq-s2i" --name="custom-amq-s2i"

Deploying the custom-amq-s2i image

This time, deploying the new image is really simple: you just have to modify a previous deployment configuration so that it now deploys the new custom-amq-s2i image. This is as simple as editing the configuration through the web console; the trigger will redeploy the new image for you. When your pod is deployed, you can just rsh into the container and check that your openshift-activemq.xml file has been used as a template for the new configuration.

Putting it all together

Below is a schema wrapping things up when everything is put together:

 

All together

 

With these different customizations, and following the application lifecycle:

  • We’ve created a new custom AMQ image, using S2I to inject a new configuration template from a Git repository,
  • This image is itself built from a new custom AMQ base image that includes the Prometheus exporter,
  • An administrator may create a new Secret containing environment-dependent configuration files (such as users and credentials),
  • At startup, the Secret is injected into our container and the provided configuration files override and complete the automagical work done by the dynamic configuration from environment variables,
  • Finally, the newly deployed broker is automatically discovered by Prometheus, which starts scraping, collecting and storing metrics about our broker and its destinations.

In the introduction, I also talked about some undocumented configuration variables. Here are a few that I’ve found helpful for customizing the AMQ broker behavior (a usage example follows the list):

  • JAVA_OPTS_APPEND => Java startup command line appended
  • AMQ_MAX_CONNECTIONS => maximumConnections for individual transports
  • AMQ_FRAME_SIZE => wireFormat.maxFrameSize for transports
  • AMQ_MESH_QUERY_INTERVAL => polling interval for discovering new replicas in a broker mesh
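They can be set in a template, or applied to an existing DeploymentConfig; for example, something like this should work (the DeploymentConfig name and values are purely illustrative):

$ oc set env dc/custom-broker-amq AMQ_MAX_CONNECTIONS=2000 AMQ_FRAME_SIZE=104857600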

Here we are at the end of this walk-through. I hope it has been helpful and has given you some ideas on how to organize a solid delivery chain for AMQ brokers across your different projects and environments.