On a recent client project, our team was tasked with setting up local development environments for a new Node.js-based microservices system that would eventually be deployed on Red Hat’s OpenShift platform.
We’ve found a good approach using the Minishift project and we’ve put together a demo with some accompanying documentation about what we’ve learned.
You can jump straight into the code and docs, or you can stick around for more on the journey that led us to this point.
Why Target OpenShift?
OpenShift layers a number of additions on top of upstream Kubernetes. Some of these are built from other upstream projects, such as the integrated Jenkins instance for an out-of-the-box CI/CD pipeline and the integrated Docker registry for build artifacts. Others are completely custom but very sensible, such as the router that directs traffic between your services. Without it, you’d probably end up writing your own version of this component.
One thing we really like about OpenShift is the very clean user interface that ties all of these concepts together and provides good insight into how your system is working. You can view logs for all of your services, scale the number of instances, and even track the progress of builds in the integrated CI/CD pipeline.
Once you start using OpenShift, you might find yourself missing this functionality if you go back to plain Kubernetes.
There are also almost equally important non-technical reasons why OpenShift is an attractive platform.
When evaluating adoption of a platform, it’s important to consider the amount of risk it introduces into a project. Red Hat has a solid reputation for mitigating risk in the sometimes-chaotic development of the upstream projects it packages.
Red Hat’s ability to filter and reduce risk for critical infrastructure components is one of the reasons they have built strong relationships with many enterprise customers. More and more clients are requesting OpenShift from us because of this.
What Makes a Good Development Environment?
You can optimise for many aspects when building a development environment, but these may change through the life of even a single project. The following are a few of the things that were important to us when evaluating our Minishift-based solution.
Time to First Code
We wanted a solution that would allow us to start building our services as quickly as possible, and not require us to build too much custom tooling that would make it more time-consuming to onboard our developers.
The easier it is for a developer to bring up a completely fresh environment, the more that productivity gain compounds throughout the lifetime of your project.
Minishift’s use of Docker’s libmachine, in combination with VirtualBox or xhyve (depending on your platform), lets us wipe and reinstall the environment with minimal fuss.
Now that we’ve conquered the initial learning curve and documented our findings clearly, we feel confident in our ability to get new developers up and running. But this approach hasn’t been without its gotchas. For one, we have found that startup on Minishift is sometimes unreliable, although retrying the command usually does the trick.
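As a concrete sketch of that wipe-and-reinstall cycle (flags vary between Minishift versions, so treat these as illustrative), a fresh environment comes down to two commands:

```shell
# Destroy the VM and all cluster state, then provision a fresh one.
minishift delete --force

# Recreate the environment; the driver depends on your platform
# (e.g. virtualbox on Linux/Windows, xhyve on macOS).
minishift start --vm-driver virtualbox
```

If `minishift start` fails on the first attempt, rerunning the same command usually succeeds.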
Similarity to Production
The holy grail of DevOps is a development environment that mirrors the production environment as closely as possible. This should theoretically reduce the possible bugs that could occur from slight differences, which you would only catch once you’ve deployed your code into production.
There are, of course, situations where getting as close as possible to production yields diminishing returns. While we have something running now that feels fairly solid, we can imagine scaling issues in the future as an application grows to a larger number of more complex services.
Running a full Kubernetes/OpenShift stack in a single virtual machine on your own hardware introduces a lot of complexity and creates many opportunities for things to break. While we acknowledge this risk, our hope is that things break in development in the same way they would in production, allowing you to catch problems earlier.
We haven’t yet explored and documented the process of taking a system developed locally using Minishift to an OpenShift environment deployed in either the public or private clouds. It’s likely that we will need to make more compromises as we map out that path.
Quick Feedback Loops
It is critical in a local development environment to be able to execute your code changes as quickly as possible; a few minutes lost on every change adds up to many hours of waiting around each week.
This part of the process was the trickiest to figure out because, for the most part, Kubernetes and Minishift weren’t designed for this use case.
Because it takes entirely too long to wait for an entire Jenkins build and deployment cycle to happen for every code change, we decided to build the container and then mount the code into place from the host machine. This would make changes available immediately, where we could pick them up with nodemon and just refresh.
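As a hedged sketch of that wiring, assuming a DeploymentConfig named `myservice` (a placeholder) and host folder sharing already enabled in the Minishift VM, `oc set volume` can attach the host directory to the container:

```shell
# Mount the project source (shared from the host into the Minishift
# VM) over the app directory in the container. The service name and
# both paths below are placeholders for your own project layout.
oc set volume dc/myservice --add --name=src \
  --type=hostPath --path=/Users/me/project/src \
  --mount-path=/opt/app-root/src
```

With the source mounted this way, nodemon inside the container can see edits from the host without an image rebuild.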
We ran into some issues where we needed to change some security settings in Kubernetes to allow host mounts of folders, and even once they were mounted, there were incompatibilities with the default way nodemon monitors files. We had to drop back to “legacy mode”, which has its own limitations.
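The two tweaks look roughly like this (the SCC and service account names below are the common OpenShift defaults, but verify them for your cluster):

```shell
# As a cluster admin, allow pods running under the project's default
# service account to use hostPath volumes.
oc adm policy add-scc-to-user hostmount-anyuid -z default

# Inside the container, run nodemon with polling-based "legacy"
# watching, since filesystem events often fail to propagate across
# the host/VM mount boundary.
nodemon --legacy-watch server.js
```

Polling is slower and more CPU-hungry than native file watching, which is the main limitation of legacy mode.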
A further wrinkle was introduced when we realized that it’s not practical to mount the node_modules folder from the host, because native modules need to be compiled for the operating system where you are running the code. We had to get creative with Dockerfiles and mount points, but we found that it’s still really simple to kick off a Jenkins build job from the shell. You only need to do this when your package.json changes, so it doesn’t slow down day-to-day development that much.
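Triggering that rebuild from the shell is a one-liner, assuming a BuildConfig named `myservice` (a placeholder):

```shell
# Kick off a new build and stream its logs; only needed when
# package.json (and therefore node_modules) changes.
oc start-build myservice --follow
```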
Kubernetes will only start new pods (instances) up to the level your hardware allows. This decision is based on the memory limits set for each service and (in Minishift) the number of CPU cores and amount of memory handed to the virtual machine on startup. We were able to tweak these quite easily to get up to 50 pods of our little test server running. Interestingly, there is a recommended upper limit of 110 pods across all services when using OpenShift Origin (the upstream project for Minishift).
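The knobs we tweaked look roughly like this (values are illustrative, and the `--memory` unit syntax varies by Minishift version):

```shell
# Give the VM more resources; these flags only take effect when the
# VM is created, so apply them on a fresh minishift start.
minishift start --cpus 4 --memory 8192

# Shrink the per-pod memory limit so more replicas fit, then scale
# out ("myservice" is a placeholder for your DeploymentConfig).
oc set resources dc/myservice --limits=memory=64Mi
oc scale dc/myservice --replicas=50
```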
The last thing we needed to tweak was the deployment strategy, changing it to “Recreate” so that Kubernetes kills the old pods and deploys new ones instead of running both in parallel and slowly moving traffic over to the new version. The latter behavior is only preferable in production and is a hindrance in a development setup.
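In API terms the strategy type is “Recreate”; a patch like the following (the service name is a placeholder) switches an existing DeploymentConfig over:

```shell
# Swap the default Rolling strategy for Recreate, which kills the old
# pods before starting new ones -- faster feedback in development.
oc patch dc/myservice -p '{"spec":{"strategy":{"type":"Recreate"}}}'
```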
If you’re interested in more of the technical details of this implementation, please check out the extensive README we’ve created for the demo project.