Ansible Container: Building a Bridge to OpenShift

We started the Ansible Container project earlier this year with two goals in mind. The first is to make it possible to build container images using an Ansible playbook. The second is to provide a simple, repeatable workflow for taking containers developed on a laptop and deploying them in the cloud.

The project has been active since May, and it’s come a long way toward reaching these goals. Today it’s possible to define a multi-container application, build the container images from a playbook, deploy containers to OpenShift using the built images, and generate and execute deployment artifacts, all on your laptop, and with just a couple simple commands.

The format for defining each of the containers or services that comprise a multi-container app is the Ansible role. A role is a construct for making Ansible content reusable and portable. A role contains a playbook, or set of tasks, and all of the supporting files and metadata needed by the playbook (templates, data files, defaults, variables, dependencies on other roles, and so on). A role is a complete unit of work that can be focused on executing the tasks for a single step in a larger automation workflow.
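As a minimal sketch (the role name, package, and template are hypothetical, not from the project), a role's tasks file is ordinary Ansible YAML:

```yaml
# roles/web/tasks/main.yml -- hypothetical role that installs and configures nginx
- name: Install nginx
  yum:
    name: nginx
    state: present

- name: Copy the site configuration from the role's templates directory
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
```

Because the tasks say nothing about *where* they run, the same file works against a VM, bare metal, or a container being built.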

Using an Ansible role to define a microservice offers the benefit of bringing existing Ansible content and community expertise to the new world of containers. A role written to perform configuration updates or package installs on a virtual machine, or even bare metal, can just as easily be executed to perform the same tasks inside a container. In fact, with very little change, if any, existing roles can be applied directly to the container image build process.
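To sketch what that reuse looks like in practice (service and role names here are hypothetical): in an Ansible Container project, the hosts in the project playbook correspond to the services defined in container.yml, so an existing role is applied to a container the same way it would be applied to a group of machines:

```yaml
# main.yml -- project playbook; the "web" host matches a service
# defined in container.yml (names are illustrative)
- hosts: web
  roles:
    - role: web
```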

Additionally, the tool makes executing roles a seamless process. It uses its own build container, with Ansible pre-installed, and orchestrates the build by standing up a container from a base image, connecting to it using docker exec (not SSH), executing the role, and committing the changes to a new image. All of that work happens automatically, allowing the user to focus solely on the application and the roles needed to package the application into container images.

In addition to using existing roles, Ansible Container offers two new role types that can be executed directly by the tool. They’re designed to make microservice definitions reusable and shareable. The community site for sharing roles is Ansible Galaxy, and there are already a couple dozen container roles that can be downloaded and dropped into a project using the init and install commands.

The first of the new role types is a Container App, which defines a working, multi-container app. With this type of role, a full-stack application or a development framework can be distributed. It’s downloaded by running ansible-container init in an empty project folder.

The following video provides a quick demonstration:

The second is a Container Enabled role that defines a single service. It includes the container configuration and the playbook tasks for building the container image. It can be added to an existing Ansible Container project by running ansible-container install. When installed, the container configuration is added to the project’s orchestration document, container.yml, and the project playbook, main.yml, is updated to include the new role. All of the knobs and dials for tweaking the container configuration and image build are exposed, making integration easy.
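For illustration, a service entry in container.yml looks roughly like a Docker Compose service definition; the exact shape and values below are a hypothetical sketch, not the output of a particular role:

```yaml
# container.yml -- orchestration document; one entry per service
# (service name, image, and ports are illustrative)
version: "1"
services:
  web:
    image: centos:7
    ports:
      - "8080:8080"
    command: ["nginx", "-g", "daemon off;"]
```

Installing a Container Enabled role merges an entry like this into the document and wires the matching role into main.yml, so the service's configuration stays editable alongside the rest of the project.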

By taking advantage of the community roles available on Galaxy, instantiating and running a multi-container app in development is remarkably easy. The next step, in keeping with our stated goals, is to provide a simple and repeatable workflow for deploying the containers to the cloud. And to achieve that end, we’re focused on building the tooling to deploy containers to OpenShift, and at the same time positioning the project to be a bridge for the Ansible community to use existing playbooks and roles to deploy applications on OpenShift.

The first iteration of deploying to the cloud is the ansible-container shipit command. It offers two engines, Kubernetes and OpenShift, and it works by transforming the project’s orchestration document, container.yml, into an Ansible role. The generated role interacts with the oc client to create deployments, services, routes, and persistent volume claims on the OpenShift cluster. The role maps each container to its own pod and connects pods through services. Named volumes are mapped to persistent volume claims, and ports are exposed using routes.
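As a rough sketch of the kinds of objects the generated role creates (these manifests are illustrative, not actual shipit output; names and ports are hypothetical), exposing a service through a route looks like:

```yaml
# A Service fronting the pods for the "web" container...
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 8080
      targetPort: 8080
---
# ...and a Route exposing that Service outside the cluster
apiVersion: v1
kind: Route
metadata:
  name: web
spec:
  to:
    kind: Service
    name: web
```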

In future iterations, we hope to increase the types of objects the engine can create, and we’re investigating the best way to remove the dependency on the oc client, and access the OpenShift API directly. We’re not done, but just like the image build process, the deployment process has come a long way, making it possible today to deploy a multi-container application from your laptop to OpenShift with minimal effort.

We recently published a demo that provides a walkthrough of managing an application through the full development lifecycle, starting from an empty project folder, and ending with an application deployed to a local OpenShift instance. It’s a great way to get hands-on with the tool and see for yourself how it works.

The following video is the final step in the demo, where we execute the deployment role and deploy the application. The video continues, taking a quick tour of the OpenShift console, and then logging into the running application:

Hopefully, this gives you a better idea of what the Ansible Container project is about, and how you may be able to use the tool to develop and manage your next application from inception through cloud deployment. In addition to the demo, if you’re interested in learning more, visit our documentation site. You can also reach out to us by joining the mailing list, by opening an issue, or by joining the #ansible-container channel on irc.freenode.net. And as always, pull requests are welcome!
