There are a lot of important things happening all around the world these days. Some of them have people upset or angry, to the point where they feel the desire to march or boycott to raise awareness and demonstrate their frustrations. As a company that believes in open communities and open discussions, we appreciate the need to have many ideas heard and debated.
But sometimes you come across a “boycott” that makes you scratch your head. Take, for example, “Boycott Docker”.
If you’re new to Linux containers, this is a good place to start. Also, check out all the free tutorials and content from the Red Hat “Containers for the Enterprise” Virtual Event from last week.
Before we look at where the challenges and frustrations with containers might be coming from, it’s important to remember that the Linux container itself is just a piece of a much larger set of services and capabilities that are needed to run applications in a production environment. While containers provide a consistent mechanism to package a set of application bits and dependencies, there are many more “platform-level” pieces needed to make the containerized applications scalable, secure and manageable.
If we look at the areas of concern of the “boycott”, they tend to fall into a few common buckets. Let’s take a look at each of those.
Concern #1: Docker is insecure because Docker registries are not secured and “Docker does not know anything about either SELinux or AppArmor,” which help prevent security breaches on Linux systems.
While it’s true that many container formats and runtimes can run on any version of Linux, Red Hat Enterprise Linux (RHEL) comes with SELinux enabled by default. This means that containers running on RHEL or RHEL Atomic benefit from SELinux policies that help contain security breaches and limit the impact of vulnerabilities.
In addition, Red Hat OpenShift Container Platform comes with an embedded container registry that enforces granular role-based access control over user and file access. It also integrates with 3rd-party scanning tools to validate container images.
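As a quick illustration (a sketch, assuming a RHEL host with SELinux in enforcing mode and the docker CLI installed; the image name is just an example), you can verify that container processes run under a confining SELinux context:

```shell
# Confirm SELinux is enforcing on the host
getenforce

# Start a test container
docker run -d --name selinux-demo rhel7 sleep 600

# Container processes carry a confining SELinux type
# (svirt_lxc_net_t on older releases, container_t on newer ones),
# so even a process that escapes the container is still confined.
ps -eZ | grep -E 'svirt_lxc_net_t|container_t'
```

The key point is that this confinement comes from the host OS policy, not from anything the container image itself does.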
Concern #2: Building Docker applications requires writing scripts (or Dockerfiles, at least). Docker does not add value to the scripts; you could script application builds without Docker.
While many developers enjoy using containers to package and run applications locally on their laptops, not all of them want to learn the nuances of building and maintaining Dockerfiles. They just want to build their application and push it through the application lifecycle (QA, Staging, Production).
To address this challenge, Red Hat created “s2i” (source-to-image). s2i is an open source tool that builds application artifacts directly from source code and injects them into container images. In other words, by using s2i, the OpenShift platform can integrate directly into your application workflows and build the container images as they make their way into each stage of the lifecycle. The s2i process packages the application in a standards-based container format, which allows the resulting images to run natively on the Kubernetes container orchestration built into OpenShift.
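Here is a sketch of what that looks like in practice, using the sample application from the s2i project (requires the s2i binary and a local Docker daemon):

```shell
# Build a container image straight from a Git repository -- no Dockerfile
# required. The builder image (here a Python builder) knows how to fetch,
# assemble, and package the application source.
s2i build https://github.com/openshift/django-ex centos/python-35-centos7 hello-python

# The resulting image runs like any other container image
docker run -p 8080:8080 hello-python
```

The developer supplies only source code; the builder image owns the build logic.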
OpenShift can also use webhooks to integrate with your source repositories to simplify the workflow for developers, or natively run your CI/CD pipelines directly on the OpenShift platform to manage your scalability needs.
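For example, a BuildConfig can declare a GitHub webhook trigger so that every push rebuilds the image automatically. This is a minimal sketch — the repository URL, names, and secret are placeholders:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git   # hypothetical repo
  strategy:
    type: Source
    sourceStrategy:                # s2i build, driven by a builder image
      from:
        kind: ImageStreamTag
        name: python:3.5
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
  triggers:
  - type: GitHub                   # rebuild on every GitHub push event
    github:
      secret: my-webhook-secret    # placeholder shared secret
```

Point the GitHub repository's webhook at the URL OpenShift generates for this trigger, and the build pipeline starts on every commit.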
Concern #3: Docker requires some apps to be rewritten to run in containers.
There is a misperception that the only practical use for containers is applications that are written (or re-written) as microservices. While containers are a perfect deployment model for those applications, we’re also seeing tremendous traction in the market with companies that are moving existing Linux applications to containers. In some cases, they are able to reduce licensing costs for hypervisors or application middleware. In other cases, they are able to move to more modern versions of existing software (not re-written) and leverage elements of the underlying OpenShift platform to deliver capabilities around clustering, high availability and scalability.
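Containerizing an existing application often amounts to nothing more than layering the unmodified software onto a base image. A minimal sketch (paths and content are illustrative):

```dockerfile
# Package an existing, unmodified web workload on a standard base image --
# no application rewrite involved.
FROM registry.access.redhat.com/rhel7
RUN yum -y install httpd && yum clean all
# Drop the existing site content in as-is
COPY ./static-site/ /var/www/html/
EXPOSE 80
CMD ["httpd", "-D", "FOREGROUND"]
```

The application itself is untouched; only its packaging and delivery change.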
Still other companies will use containers as an impetus to enhance the skills, culture and processes within their organization to improve how software is written and delivered. A great example of this is how KeyBank was able to reduce its delivery time from three months to one week for existing applications. Another example is how Swiss Rail can now do 400 deployments a day to its mobile applications, which improves the end-user experience for the 1M+ people who ride its trains every day.
Concern #4: The complexity required to run applications inside containers creates more trouble than it is worth.
Getting a single container to run on a developer’s laptop is fairly simple. Getting a complex application, made up of dozens or hundreds of containers (microservices), to run can be very difficult with a DIY approach. This is why OpenShift is an enterprise-ready container platform based on Kubernetes.
The complexities of networking, storage, the container registry, security, logging, monitoring and many other factors are taken care of by the OpenShift platform. And because it’s built using open source software, with container standards and Kubernetes, it’s extensible to work with 100s of ecosystem technologies to fit into your specific environment.
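A small sketch shows the division of labor (names are placeholders): the application declares what it needs, and the platform handles replication, health checking and traffic routing:

```yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 3                 # the platform keeps three copies running
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest   # placeholder image
        readinessProbe:       # traffic is only routed to healthy pods
          httpGet:
            path: /healthz    # hypothetical health endpoint
            port: 8080
```

None of this clustering logic lives in the container image itself; with a DIY approach, each of these concerns would need to be solved by hand.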
Concern #5: Docker’s abstraction layers eat up CPU time, thereby undercutting the performance advantages that containers theoretically offer over virtual machines.
Performance testing is a black art – more black magic than exact science. Anybody can tell you that something new is faster than something old, but we find that it’s usually helpful to view test results in the context of how they were tested and the objectives of the testing. So here are a few different testing scenarios to dissect. Some of them are containers on bare metal and cloud servers, and some are 1000s of containers within 100s of virtual machines. Both scenarios could be considered “real life”, as we have OpenShift customers running all of these scenarios in production today: OpenShift on VMware, OpenShift on OpenStack, OpenShift on AWS, OpenShift on Azure, OpenShift on GCP.
Concern #6: Administering Docker containers requires you to learn new commands.
Try it for yourself. Set up a cluster in 30 minutes (or less). It’s as easy as “oc cluster up”…see, that’s not a hard command to learn.
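And the handful of new commands follow familiar patterns. A short sketch of a first session (requires the oc client and a local Docker daemon; the sample repository is from the OpenShift project):

```shell
# Stand up a local all-in-one cluster
oc cluster up

# Build and deploy an application straight from source
oc new-app https://github.com/openshift/django-ex

oc get pods                          # list running containers
oc logs -f bc/django-ex              # follow the build output
oc scale dc/django-ex --replicas=3   # scale out the application
```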