Async Web Apps for the Masses: Running Vert.x on OpenShift

Hey Shifters! No fancy titles this time, just a straight-up, old-fashioned introduction to bringing up my new favorite App Server on the planet. If you haven’t heard of Vert.x then you are missing out, and that makes me sad. So today I will give you a short introduction to Vert.x and how to run an application on OpenShift. In a future post I will show you a more advanced application I built which uses the excellent FlightStats APIs to track all the United flights and move them around on a map using WebSockets.

Introduction to Vert.x

My colleague Marek likes to quote upstream project sites to introduce new technology, and since he is brilliant, I will do the same. From the Vert.x site:

Vert.x is a lightweight, high performance application platform for the JVM that's designed for modern mobile, web, and enterprise applications.

I think the only helpful piece of information in there is that it runs on the JVM, so you can run it anywhere you can run Java (which is just about anywhere). If I were to write the definition it would be something more like:

Vert.x is a polyglot, asynchronous as well as synchronous, small footprint, high performance application server built to make modern application development just simple enough.

This is still filled with jargon, but it gets more at the reasons why I love Vert.x. Let’s go into more detail on some of these points.

Polyglot

Vert.x comes out of the box with Java, JavaScript (JS), Ruby (via JRuby), Python (via Jython), CoffeeScript, Scala, Clojure, and beta support for PHP. I have only used JS, Java, and Python on Vert.x, so my comments mainly apply to those languages. At this time, until the Jython community gets to a 2.7 release, I think Python on Vert.x has limited utility for all but the simplest of programs.

Verticles

A verticle is the smallest bit of code that you can run, but it can also be an entire application. For now we will consider it an application. Many verticles can run in the same Vert.x instance. A verticle can be composed of many lines of code or just a few, written in any of the languages Vert.x supports. Figure 1 shows a basic schematic of verticles and some of their features.

Figure 1: Diagram of verticles in Vert.x on OpenShift

There are two types of verticles: the standard verticle and the worker verticle. Applications written as standard verticles should never block the event loop – they work like a Node.js application. But unlike a Node.js server, worker verticles give you the ability to smoothly handle more traditional blocking code. A worker verticle has a pool of threads that it pulls from to handle the tasks assigned to it. This way you can write more numerically intensive verticles, or ones that block waiting on a response, like a DB call.

Each verticle has its own event thread and its own class loader. This means that verticles inside a Vert.x instance are effectively isolated from each other.
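
To make the standard versus worker distinction concrete, here is a minimal Java sketch against the Vert.x 2 API used later in this post. The db_worker.js name is a hypothetical blocking verticle of your own; the point is that deployWorkerVerticle runs it on the worker thread pool instead of an event loop:

import org.vertx.java.platform.Verticle;

public class Deployer extends Verticle {

  public void start() {
    // Standard verticle: its handlers run on an event loop and must not block.
    container.deployVerticle("app.js");

    // Worker verticle: its handlers run on the worker thread pool,
    // so blocking calls (a DB query, heavy number crunching) are OK here.
    container.deployWorkerVerticle("db_worker.js");
  }
}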

Modules and EventBus

Verticles can be combined into modules, and within each module the verticles can be in different languages if needed. Modules tie together several verticles to make a more complete application. Once again, a Vert.x container can run multiple modules, verticles, or any combination of the two. Figure 2 shows two different modules running in the same Vert.x container.

Figure 2: Diagram of Vert.x modules and the EventBus on OpenShift
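
To give a feel for what ties a module together: a module is a directory (or zip) containing your verticles plus a small descriptor. Here is a rough sketch, assuming the Vert.x 2 module format, of a mod.json that names the main verticle; the other values are made up for illustration:

{
  "main": "app.js",
  "description": "A hypothetical module wrapping a couple of verticles",
  "worker": false
}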

You may be wondering, “If everything is isolated, how do you share data between the pieces?” The Vert.x answer to this is the EventBus. When Vert.x spins up it uses Hazelcast to discover all the entities it needs to talk to and then establishes EventBus connections. Your verticles can then publish or subscribe to channels on the EventBus. To publish or subscribe to messages on the EventBus you just use a plain text string to identify the channel and register a handler for any channel you want to receive from; Vert.x will create the channel on the fly. Here is an example in JavaScript that registers to receive messages on “MyChannel”:

vertx.eventBus.registerHandler('MyChannel', function(message, replier) {
   replier('pong!');
});

And this is how easy it is to publish events (this example is in Python):

from core.event_bus import EventBus
...
EventBus.publish('MyChannel', mydata)

The recommended data type to send over the EventBus is JSON.
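
For completeness, here is the same publish from a Java verticle with a JSON payload. This is a small sketch against the Vert.x 2 Java API; the channel name matches the examples above and the field is made up:

import org.vertx.java.core.json.JsonObject;
import org.vertx.java.platform.Verticle;

public class Publisher extends Verticle {

  public void start() {
    // Build a JSON payload and publish it to every handler
    // currently registered on "MyChannel".
    JsonObject data = new JsonObject().putString("greeting", "ping?");
    vertx.eventBus().publish("MyChannel", data);
  }
}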

One of the really great pieces of Vert.x is that the EventBus can actually extend to the client browser. If you enable bridging in the Vert.x container, you can choose to publish channels to browsers and other external clients. By default, this bridge uses SockJS, so you get WebSockets for free! Now, with the exact same JavaScript you would use on the server, you can get WebSockets on your client. Of course, if the browser does not support WebSockets, the connection will fall back to some other mechanism, like long polling. Here is some code I use in one of my examples:

var eb = new vertx.EventBus(window.location.protocol + '//' +
        window.location.hostname + ':' +
        8000 + '/eventbus');

eb.onopen = function() {
    eb.registerHandler('MyChannel', function(event) {
        // do something cool with the data
    });
};

That is all the code you need to get WebSockets going!
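
On the server side, enabling that bridge looks roughly like the following Java sketch with the Vert.x 2 API: attach a SockJS server to your HTTP server and tell the bridge which addresses may cross it. The permitted address and the OpenShift environment variables just mirror the other examples in this post; the cartridge’s actual setup may differ:

import org.vertx.java.core.http.HttpServer;
import org.vertx.java.core.json.JsonArray;
import org.vertx.java.core.json.JsonObject;
import org.vertx.java.platform.Verticle;

public class BridgeVerticle extends Verticle {

  public void start() {
    int port = Integer.parseInt(System.getenv("OPENSHIFT_VERTX_PORT"));
    String ip = System.getenv("OPENSHIFT_VERTX_IP");

    // A real app would also set a requestHandler here for normal HTTP traffic.
    HttpServer httpServer = vertx.createHttpServer();

    // Only let messages addressed to "MyChannel" cross the bridge,
    // in both directions (client -> server and server -> client).
    JsonArray permitted = new JsonArray();
    permitted.add(new JsonObject().putString("address", "MyChannel"));

    // Mount the SockJS bridge at /eventbus, matching the client code above.
    vertx.createSockJSServer(httpServer)
        .bridge(new JsonObject().putString("prefix", "/eventbus"),
                permitted, permitted);

    httpServer.listen(port, ip);
  }
}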

Clustering

The final piece I am going to talk about today is the clustering mode for Vert.x. Figure 3 shows the great capabilities you get right out of the box. With clustering enabled, any new Vert.x container that spins up will automatically share the EventBus. By default, this feature is enabled if you create your application on OpenShift as a scalable application. The one thing that will be different for veteran Vert.x users (and from my diagram) is that, because OpenShift automatically copies your code over to the new Vert.x instance, running heterogeneous verticles and modules in each container is not really feasible. On OpenShift, scaling up means running more copies of the same configuration as the first Vert.x instance.

Figure 3: Diagram of a Vert.x cluster on OpenShift
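
If you want that clustered behavior, create your application as a scalable app. As far as I recall the rhc flags, that is just the -s option on the same create command we use later in this post:

 rhc app create myvertapp vertx -s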

What this means

For me there are some great parts of running Vert.x:

  1. I get to do all the cool asynchronous, WebSocket, fun stuff in the language of my choosing – not just JavaScript.
  2. Vert.x as a container leads me to building microservices by default. Sure, I can build the same ole’ monolithic app I always used to build, but I am encouraged to write microservices here.
  3. There are a ton of conveniences built into the core platform. For example, there is a web client to make REST calls built right in (see the Java sketch after this list).
  4. It is small and lightweight – I think the download is only a little over 5 megs and the server starts up in a few seconds.
  5. It runs on OpenShift – WINNING!
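
As promised in item 3 above, here is a minimal sketch of making a REST call with the asynchronous HTTP client built into Vert.x 2. The host and path here are placeholders, not a real API:

import org.vertx.java.core.Handler;
import org.vertx.java.core.buffer.Buffer;
import org.vertx.java.core.http.HttpClient;
import org.vertx.java.core.http.HttpClientResponse;
import org.vertx.java.platform.Verticle;

public class RestClientVerticle extends Verticle {

  public void start() {
    // The built-in asynchronous HTTP client.
    HttpClient client = vertx.createHttpClient()
        .setHost("api.example.com")
        .setPort(80);

    // Fire a GET and handle the response without ever blocking the event loop.
    client.getNow("/v1/status", new Handler<HttpClientResponse>() {
      public void handle(HttpClientResponse resp) {
        resp.bodyHandler(new Handler<Buffer>() {
          public void handle(Buffer body) {
            System.out.println("Got response: " + body.toString());
          }
        });
      }
    });
  }
}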

Too much to cover

So really there is far too much to cover here about Vert.x. How about I just point you to some other places to read more?

The documentation for Vert.x is pretty good but I also love the examples in GitHub because they show some of the basic patterns of Vert.x in all the supported languages. There are at least two books out on Vert.x, one by Tero Parviainen and the other by Simone Scarduzio. Remember, this community is not a fan of StackOverflow (which I think is a mistake) and instead they encourage you to ask questions on the mailing list.

Using Vert.x on OpenShift

Today we will just take the default app and change the main verticle to be a Java verticle that says “hello world”. Creating the application is the same as for any other application. Even though the Vert.x cartridge is a community cartridge, we have added it to the default list of cartridges on OpenShift. Because OpenShift Online lists it this way, you just have to run this simple command to create a Vert.x application:

 rhc app create myvertapp vertx

This command will spin up Vert.x in an OpenShift gear based on this GitHub repository. Please note that Nick Scavelli currently maintains the cartridge and is actively looking for feedback in the form of GitHub issues. The command above also creates a git repository on your local machine with the same name as your application (myvertapp in this example).

This git repository is not like a typical OpenShift repository – there is no build and deploy on the gear. At this point, if you need to do a build you do it on your local machine and then copy the files into the git repo, commit, and push. With Vert.x there is very little to build unless you are assembling modules or whole applications. There is no need to precompile any of your files, as the server will take care of that.

The part that will be different for people already using Vert.x is that there is no command line option to start the server. Instead, go into your local git repo, myvertapp, and then into the configuration directory. Inside this directory you will see a text file named vertx.env. Edit this file to change the startup options for Vert.x and which verticle, module, or zip file should be started by default. The documentation in that file makes it quite clear what to edit. We will do that in our short code change later.

The mods directory is where you place any Vert.x module you want to deploy and have it be seen by OpenShift. Webroot is just a directory that Nick made to hold web content. You can make a new directory if you want and then just point to it in your code.
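
If you do point your code at webroot (or a directory you created), serving a file from it is a one-liner with sendFile(). This is just a sketch; the relative path assumes the server’s working directory on the gear sits above webroot:

import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;
import org.vertx.java.platform.Verticle;

public class StaticVerticle extends Verticle {

  public void start() {
    int port = Integer.parseInt(System.getenv("OPENSHIFT_VERTX_PORT"));
    String ip = System.getenv("OPENSHIFT_VERTX_IP");

    vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
      public void handle(HttpServerRequest req) {
        // Hand the file off to Vert.x; it streams it back to the client.
        req.response().sendFile("webroot/index.html");
      }
    }).listen(port, ip);
  }
}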

Simple change

For today, just to see how easy it is to do a code change, we are going to replace app.js with App.java or app.py and serve up new content.

Ready… go!

1) In the same place where app.js was located, make a file named App.java or app.py.

The files we are going to use are basically straight from the Vert.x examples GitHub repo, modified to run on OpenShift. Here are the contents of App.java:

import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;
import org.vertx.java.platform.Verticle;
 
import java.util.Map;
 
public class App extends Verticle {
 
  public void start() {
    int port = Integer.parseInt(System.getenv("OPENSHIFT_VERTX_PORT"));
    String ip = System.getenv("OPENSHIFT_VERTX_IP");
    vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
      public void handle(HttpServerRequest req) {
        System.out.println("Got request: " + req.uri());
        System.out.println("Headers are: ");
        for (Map.Entry<String, String> entry : req.headers()) {
          System.out.println(entry.getKey() + ":" + entry.getValue());
        }
        req.response().headers().set("Content-Type", "text/html; charset=UTF-8");
        req.response().end("<html><body><h1>Hello from Java on vert.x!</h1></body></html>");
      }
    }).listen(port, ip);
  }
}

And for Python:

import os
import vertx
 
server = vertx.create_http_server()
ip = os.environ['OPENSHIFT_VERTX_IP']
port = int(os.environ['OPENSHIFT_VERTX_PORT'])
 
@server.request_handler
def handle(req):
    req.response.end("Hello from Python vert.x!")
 
server.listen(port, ip)

You can see that both of these are simple web servers, though the Java one spits all the request headers to standard out. The Vert.x cartridge wraps System.out and System.err and sends all the content to console.log on the gear in ~/vertx/logs/.

Finally, edit the vertx.env file we looked at before and change the vertx_app line to:

export vertx_app=App.java

or

export vertx_app=app.py

depending on which file you created.

Now go ahead and, on your local machine, do:

git add .
git commit -am "your git message here"
git push

You may have to wait a while (30 seconds) for this first push while Vert.x downloads Jython or compiles the Java class. But then when you hit your app URL you should see your new message.

Congratulations, you are now a certified Vert.x polyglot programmer and deployer.

Final parting wisdom

Given my initial exploration, I think Vert.x is awesome. I am not sure there is enough there to convince Node.js developers to switch, but for all the other supported languages except Python I think it is great. It makes creating microservices and super fast backend web applications fun again.

Vert.x is still maturing, and one of the areas where I think it needs help is package management for the different languages. For example, right now JavaScript developers cannot use npm to manage JavaScript packages. For Python, you can’t use setup.py or requirements.txt. Hopefully a more language-friendly means of handling dependencies will be coming soon.

The other thing I think the community needs is some more real world examples and use cases. As a young community, there are not many examples out in the public right now that handle the more complicated dependencies and code of “real world” applications. Don’t get me wrong, Vert.x has some of the best documentation and examples I have seen for software. It just needs more people to chip in, write more full-featured apps, ask questions in the email forum, and then make those apps available to the public.

Luckily for you (and the Vert.x community), with the OpenShift cartridge and all the built-in datastores, it has just become much easier to make more real-world applications. I hope to get some time in the future to build and show off some more applications. Please go ahead and take it for a spin!
