OpenShift makes it easy to deploy your containers, but it can also slow down your development cycle. The core problem is that containers running in a Kubernetes cluster run in a different environment than your development environment, for example, your laptop. Your container may talk to other containers running in Kubernetes or rely on platform features like volumes or secrets, and those features are not available when you run your code locally.

So, how can you debug them? How can you get a quick code/test feedback loop during initial development?

In this blog post we'll demonstrate how you can have the best of both worlds, that is, the OpenShift runtime platform and the speed of local development. We will be using an open source tool called Telepresence.

Telepresence lets you proxy a local process running on your laptop to a Kubernetes cluster, whether that is a local Minishift/Minikube or a remote cluster. Your local process gets transparent access to the live environment: networking, environment variables, and volumes. Plus, network traffic from the cluster is routed to your local process.
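
Concretely, the two most common ways to use it look something like the sketch below; exact flags may vary with the Telepresence version you install, and <name> and <port> are placeholders:

$ telepresence --run-shell                               # open a local shell that is proxied into the cluster
$ telepresence --swap-deployment <name> --expose <port>  # route an existing deployment's traffic to a local process listening on <port>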

Preparation

Before you begin, you will need to:

  1. Install Telepresence.
  2. Make sure you have the oc command line tool installed.
  3. Have access to a Kubernetes or OpenShift cluster, for example, by using Minishift. (A quick way to verify these prerequisites is sketched below.)
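
If you'd like a quick sanity check of these prerequisites before continuing (assuming you are using Minishift for the cluster), something like the following should work:

$ telepresence --version   # verifies Telepresence is on your PATH
$ oc version               # verifies the oc client is installed
$ minishift start          # starts a local OpenShift cluster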

Running in OpenShift

Let's say you have an application running inside OpenShift; you can start one like so:

$ oc new-app --docker-image=datawire/hello-world --name=hello-world
$ oc expose service hello-world

You'll know it's running once the following shows a pod with Running status that doesn't have "deploy" in its name:

$ oc get pod | grep hello-world
hello-world-1-hljbs   1/1       Running   0          3m

To find the address of the resulting app, run the following command:

$ oc get route hello-world
NAME          HOST/PORT
hello-world   example.openshiftapps.com

In the above output the address is http://example.openshiftapps.com, but you will get a different value. It may take a few minutes before the route goes live; in the interim you will get an error page. If you do, wait a minute and try again. Once it's running, you can send a query and get back a response:

$ curl http://example.openshiftapps.com/
Hello, world!

Remember to substitute your own route's address in the command above for this to work.
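
If you'd rather not copy the host name by hand, you can read it from the route object and reuse it; this is just a small convenience, assuming the route is named hello-world as above:

$ HOST=$(oc get route hello-world -o jsonpath='{.spec.host}')   # extract the route's host name
$ curl http://$HOST/
Hello, world!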

Local development

The source code for the service above looks much like the code below, except that this version has been modified slightly so it returns a different response. Create a file called helloworld.py on your machine with the following code:

from http.server import BaseHTTPRequestHandler, HTTPServer

class RequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to every GET request with a plain-text greeting.
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        self.wfile.write(b"Hello, world, I am changing!\n")

# Listen on all interfaces on port 8000 and serve until interrupted.
httpd = HTTPServer(('', 8000), RequestHandler)
httpd.serve_forever()
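
Before involving the cluster at all, you can sanity-check the new version locally by running it in one shell and querying it from another:

$ python3 helloworld.py         # first shell: start the server on port 8000
$ curl http://localhost:8000/   # second shell: query it directly
Hello, world, I am changing!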

Typically, testing this new version of your code would require pushing the change upstream, rebuilding the image, redeploying it, and so on, a slow round trip for every change. With Telepresence, you can instead run a local process and route cluster traffic to it, giving you a quick develop/test cycle without going through slow deploys.

We'll swap out the hello-world deployment for a Telepresence proxy, and then run our updated server locally in the resulting shell:

$ telepresence --swap-deployment hello-world --expose 8000
@myproject/192-168-99-101:8443/developer|$ python3 helloworld.py
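
As an aside, Telepresence can also launch the command for you via a --run flag, collapsing the two steps above into one; this is a sketch, so check telepresence --help for the flags your installed version supports:

$ telepresence --swap-deployment hello-world --expose 8000 --run python3 helloworld.py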

In another shell session we can query the service we already started. This time requests will be routed to our local process, which is running the modified version of the code:

$ oc get route hello-world
NAME          HOST/PORT
hello-world   example.openshiftapps.com
$ curl http://example.openshiftapps.com/
Hello, world, I am changing!

The traffic is being routed to the local process on your machine. Also, note that your local process can now access other services, just as if it were running inside the cluster.
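
For example, the Telepresence shell inherits the environment variables of the pod it swapped out, and in-cluster DNS names resolve from your laptop, so something like the following should work (another-service is just a placeholder for whatever else runs in your project):

$ env | grep SERVICE_HOST            # service addresses copied from the swapped pod's environment
$ curl http://another-service:8080/  # placeholder: reach any other service in the project by name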

When you exit the Telepresence shell, the original deployment will be swapped back in.

Wrapping Up

Telepresence gives you the quick development cycle and the full control over your process that you are used to from non-distributed development: you can use a debugger, add print statements, or use live reload if your web server supports it. At the same time, your local process has full network access, both incoming and outgoing, as if it were running in your cluster, as well as access to environment variables and, admittedly a little less transparently, to volumes.
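
For example, because the server is just an ordinary local process, you can run it under Python's built-in debugger from within the Telepresence shell and step through request handling while real cluster traffic flows through it:

$ python3 -m pdb helloworld.py   # set breakpoints with 'b', continue with 'c'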

To get started, check out the OpenShift quick start or the tutorial on debugging a Kubernetes service locally. If you're interested in contributing, read how it works and the development guide, and join the Telepresence Gitter chat.