Enhanced OpenShift Red Hat AMQ Broker container image for monitoring

Previously, I blogged about how to enhance your JBoss AMQ 6 container image for production: I explained how to externalise configuration and add Prometheus monitoring. While I already covered the topic well, I had to deal with this topic for version 7.2 of Red Hat AMQ Broker recently, and as things have slightly changed for this new release, I think it deserves an updated blog post!

This post is a walk-through on how to enhance the base Red Hat AMQ Broker container image to add monitoring. This time we’ll see how much easier it is to provide customizations, even without writing a new Dockerfile. We will even go a step further by providing a Grafana dashboard sample for visualising the broker metrics. The sources and examples used in this post can be found on my GitHub repository.

Introducing Red Hat AMQ Broker

Red Hat AMQ Broker is now part of the Red Hat AMQ product, which encompasses different solutions for message exchange and data streaming. AMQ Broker targets enterprise messaging applications that need fast, lightweight, and secure messaging using industry-standard protocols like AMQP and MQTT. The major enhancement compared to the previous 6.x releases is the adoption of Apache ActiveMQ Artemis as the broker core, which brings a non-blocking architecture for internet-scale performance and applications. This release also merges technologies, such as HornetQ, that were previously bundled separately with JBoss EAP.

Red Hat provides a container image that assembles the productized version of the upstream Apache ActiveMQ Artemis project, the Fabric8 Java accelerators for container image management, and the Jolokia exporter for monitoring metrics. Deployed on Red Hat OpenShift, this image makes it easier to configure clusters of brokers that meet mission-critical requirements.

However, when it comes to large-scale and dynamic deployments, where brokers can be spawned and disappear rapidly, Jolokia monitoring does not scale easily: it is exposed on a single container port that must be discovered, and it does not provide storage or history of metrics across the broker mesh.

Adding Prometheus monitoring

Now, let’s add Prometheus as the monitoring layer for our AMQ Broker.

Prometheus is an open source monitoring and alerting toolkit which collects and stores time series data. Conveniently, it understands Kubernetes’ API to discover services. The collected data can be intelligently organized and rendered using Grafana, an open analytics and monitoring platform that can plug into Prometheus.

Prometheus is able to discover replicas of applications and can therefore pull metrics from different container instances. To minimise intrusion, Prometheus offers a JMX exporter that you simply reference on the startup command line of your Java application. Running the JMX exporter as a Java agent exposes an HTTP endpoint that Prometheus can scrape for all the metrics coming from the underlying JVM. You can also export custom JMX metrics, such as those collected by the Artemis core. Interesting AMQ metrics include all those related to:

  • The pure AMQ Broker status.
  • The persistence adapter and the size of the embedded message store.
  • The address attributes, such as the number of messages, the number of senders/consumers, and the enqueue/dequeue counts for each address.
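
These Artemis metrics are driven by the rules in the JMX exporter configuration file. As an illustration, here is a minimal sketch of what such a file could contain; the pattern below is a generic example (not necessarily the exact rules from my repository) that exposes broker-level MBean attributes under an artemis_ prefix:

```shell
# Sketch of a minimal JMX exporter configuration (prometheus-config.yml).
# The rule below matches broker-level Artemis MBean attributes such as
# DiskScanPeriod and renames them to snake_case, e.g. artemis_disk_scan_period.
cat > prometheus-config.yml <<'EOF'
startDelaySeconds: 10
lowercaseOutputName: true
lowercaseOutputLabelNames: true
rules:
  # org.apache.activemq.artemis<broker="broker"><>DiskScanPeriod: 5000
  - pattern: "org.apache.activemq.artemis<broker=\"(\\w+)\"><>([^:]+):"
    name: artemis_$2
    attrNameSnakeCase: true
    labels:
      broker: $1
EOF
cat prometheus-config.yml
```

The attrNameSnakeCase option is what turns an attribute like DiskScanPeriod into the disk_scan_period suffix you will see in the scraped output later in this post.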

Now that we’ve got principles and goals, we’re ready to get our hands dirty. You’ll see that there’s almost no code to write; we’ll mainly execute CLI commands.

Build a new custom-amq-broker-72-openshift image

Throughout this article, I use version 7.2 of AMQ Broker; you have to ensure that the base images and templates are already present in your OpenShift cluster, as explained here. With this release, we can do everything using the Source-to-Image process.

As mentioned in the introduction and the previous post, the base AMQ image is provided by Red Hat with Source-to-Image (S2I) capabilities. What does that mean? It means the build process is able to merge source resources (generally text or binaries) on top of the base image to produce a new image. The S2I capabilities of AMQ have evolved: the build can now merge configuration files (located in /configuration) as well as binary files (located in /deployments). So all we need is a Git repository containing our resources; using S2I, those resources will end up in the configuration and lib directories of the new custom image.
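
Concretely, the S2I context directory of the Git repository only needs to mimic that layout. Here is a hypothetical minimal tree, sketched with the file names used later in this post:

```shell
# Sketch of the S2I context directory layout expected by the AMQ 7.2 image:
# files under configuration/ end up in the broker conf directory, and files
# under deployments/ end up in the lib directory of the new custom image.
mkdir -p custom-amq/configuration custom-amq/deployments
touch custom-amq/configuration/prometheus-config.yml
touch custom-amq/deployments/jmx_prometheus_javaagent-0.11.0.jar
find custom-amq -type f | sort
```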

Have a look at my GitHub repository and you’ll see that it contains just a Prometheus configuration file and a jmx_prometheus_javaagent-0.11.0.jar file for the JMX exporter library. All you need to do is launch a new build referencing this repository with the correct context directory and give it a name:

$ oc new-build openshift/amq-broker-72-openshift:1.2~https://github.com/lbroudoux/openshift-cases --name=custom-amq-broker-72-openshift --context-dir=jboss-amq7-custom/custom-amq

Wait for the build to complete and then deploy a new instance of AMQ Broker. You may want to look at the official documentation on how to do this using templates.

Deploy custom-amq-broker-72-openshift image

After deploying a broker using the Red Hat-provided templates, replace the official image with the custom one you just built. You can do this using the following command:

$ oc set triggers dc/broker-amq --containers=broker-amq --from-image=custom-amq-broker-72-openshift:latest

A redeployment should occur. To activate the Prometheus JMX exporter agent, you’ll have to tweak the Java startup command line by adding a new environment variable to the deployment configuration:

$ oc set env dc/broker-amq JAVA_OPTS="-Dcom.sun.management.jmxremote=true -Djava.rmi.server.hostname= -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.ssl=true -Dcom.sun.management.jmxremote.registry.ssl=true -Dcom.sun.management.jmxremote.ssl.need.client.auth=true -Dcom.sun.management.jmxremote.authenticate=false -javaagent:/opt/amq/lib/optional/jmx_prometheus_javaagent-0.11.0.jar=9779:/opt/amq/conf/prometheus-config.yml"

All the com.sun.management-related properties are necessary if you want all the Artemis metrics exposed; otherwise you’ll only get the JVM metrics.
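
Because that one-liner is hard to read and maintain, you can assemble the same JAVA_OPTS value piece by piece in a shell variable before passing it to oc set env. A small sketch (the jar and configuration paths are the ones baked into our custom image):

```shell
# Assemble the JAVA_OPTS string incrementally for readability; the resulting
# value is identical to the one-liner used above with `oc set env`.
EXPORTER_JAR=/opt/amq/lib/optional/jmx_prometheus_javaagent-0.11.0.jar
EXPORTER_CONF=/opt/amq/conf/prometheus-config.yml
EXPORTER_PORT=9779

JAVA_OPTS="-Dcom.sun.management.jmxremote=true"
JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname="
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.port=1099"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.ssl=true"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.registry.ssl=true"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.ssl.need.client.auth=true"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JAVA_OPTS="$JAVA_OPTS -javaagent:${EXPORTER_JAR}=${EXPORTER_PORT}:${EXPORTER_CONF}"

echo "$JAVA_OPTS"
# Then apply it with: oc set env dc/broker-amq JAVA_OPTS="$JAVA_OPTS"
```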

Check metrics

To check that the new configuration has been taken into account, you may want to connect to the pod running the AMQ container. Start by listing all the pods in your project:

$ oc get pods                                                                                                              
NAME                                     READY   STATUS      RESTARTS   AGE
broker-amq-9-t2hhc                       1/1     Running     0          2h
custom-amq-broker-72-openshift-1-build   0/1     Completed   0          7h

Then open a remote shell into the container with this simple command and call the localhost:9779 endpoint. You should get the list of metrics exposed by the Prometheus JMX exporter, like the output below. I’ve removed some metrics to highlight the specific ones coming from the JVM and the broker:

$ oc rsh broker-amq-9-t2hhc                                                                                               
sh-4.2$ curl http://localhost:9779
# HELP artemis_disk_scan_period How often to check for disk space usage, in milliseconds (org.apache.activemq.artemis<broker="broker"><>DiskScanPeriod)
# TYPE artemis_disk_scan_period counter
artemis_disk_scan_period{broker="broker",} 5000.0
# HELP artemis_journal_min_files Number of journal files to pre-create (org.apache.activemq.artemis<broker="broker"><>JournalMinFiles)
# TYPE artemis_journal_min_files counter
artemis_journal_min_files{broker="broker",} 2.0
jvm_memory_pool_bytes_init{pool="G1 Eden Space",} 7.9691776E7
jvm_memory_pool_bytes_init{pool="G1 Survivor Space",} 0.0
jvm_memory_pool_bytes_init{pool="G1 Old Gen",} 1.505755136E9

Now let’s assume you have a Prometheus instance running somewhere on your OpenShift cluster; OpenShift has shipped Ansible installation scripts for Prometheus since version 3.7. To get these metrics scraped by your Prometheus instance, you may need to add some exposed ports and/or annotations, depending on how your Prometheus scrape rules are configured. My personal instance only scrapes services that are annotated accordingly, looking for the port and path information in those annotations.

In my case, I also had to add a port to the container, expose it as a service, and then annotate that service so that it can be discovered. Here are the corresponding CLI commands:

$ oc patch dc/broker-amq --type=json -p '[{"op":"add", "path":"/spec/template/spec/containers/0/ports/-", "value": {"containerPort": 9779, "name": "prometheus", "protocol": "TCP"}}]'
$ oc expose dc/broker-amq --name=broker-amq-prometheus --port=9779 --target-port=9779 --protocol="TCP"
$ oc annotate svc/broker-amq-prometheus prometheus.io/scrape='true' prometheus.io/port='9779' prometheus.io/path='/'
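
For reference, here is the kind of Prometheus scrape job that honours those annotations. This is a sketch based on the standard Kubernetes endpoints service-discovery example; your actual Prometheus configuration may differ:

```shell
# Sketch of a Prometheus scrape job that discovers annotated services via the
# Kubernetes API: services with prometheus.io/scrape=true are kept, and the
# prometheus.io/port and prometheus.io/path annotations override the target.
cat > prometheus-scrape-job.yml <<'EOF'
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
EOF
cat prometheus-scrape-job.yml
```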

As a checkpoint before finalizing this post with a dashboard, here’s a summary of what happens at pod startup:

During the launch of the AMQ pod, the configuration is copied onto a new broker instance located in /home/jboss/broker. The Prometheus configuration file and the exporter library are referenced from the command line: they are passed as startup arguments so that the JVM exposes a Java agent listening on port 9779. Depending on your Prometheus configuration, you may have to declare, create, or expose extra Kubernetes objects.

Visualization and summary

Now that we are collecting metrics, the final step is to create visualizations through a dashboard, allowing operators to detect and understand operating conditions. This can easily be done using Grafana, and you can actually start from a community template. I rebranded this one for Red Hat AMQ and propose a bootstrap version within my GitHub repository.

Another important point is the definition of alerts, which can be based on Prometheus metrics; see here for much more information on that topic. I hope this walk-through has been helpful and gave you some ideas. Feel free to share your experience and best practices in the comments.
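
As a starting point, a Prometheus alerting rule on one of the exported broker metrics could look like the following sketch. The metric name and threshold are purely illustrative; adapt them to the metrics actually produced by your exporter configuration:

```shell
# Sketch of a Prometheus alerting rule file; the metric name (artemis_message_count)
# and the threshold are illustrative examples, not values from this post.
cat > amq-broker-alerts.yml <<'EOF'
groups:
  - name: amq-broker
    rules:
      - alert: AmqBrokerAddressBacklog
        expr: artemis_message_count > 10000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Message backlog on broker {{ $labels.broker }}"
          description: "An address has held more than 10000 pending messages for 5 minutes."
EOF
cat amq-broker-alerts.yml
```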
