OpenShift V3 Deep Dive Tutorial | The Next Generation of PaaS

There have been a lot of announcements lately around Red Hat’s OpenShift v3 plans, specifically around Docker and Kubernetes.  OpenShift v3 is being built around the central idea of user applications running in Docker containers with scheduling/management support provided by the Kubernetes project, and augmented deployment, orchestration, and routing functionality built on top.

This means if you can run your application in a Docker container, you can run it in OpenShift v3. Let's dig in and see just how you can do that with code that's available today. I'm going to walk through setting up OpenShift and deploying a simple application. Along the way, I'll explain some details of the underlying components that make it all work.

Background

If you're not familiar with the basic concepts of Docker images/containers, or with Kubernetes Replication Controllers, Services, and Pods, it will be helpful to read up on those first.

However, if you want to skip that, the primary concepts you need to know are:

Docker image:  Defines a filesystem for running an isolated Linux process (typically an application)

Docker container: Running instance of a Docker image with its own isolated filesystem, network, and process spaces.

Pod: Kubernetes object that groups related Docker containers which need to share network, filesystem, or memory, so that they are placed together on a node.  Multiple instances of a Pod can run to provide scaling and redundancy.

Figure 1: A single pod with two containers, each exposing a port on the pod's IP address

Figure 2: Three different pods each running a set of related containers

Replication Controller: Kubernetes object that ensures N (as specified by the user) instances of a given Pod are running at all times.

Service: Kubernetes object that provides load balanced access to all instances of a Pod from another container in another Pod.

Figure 3: Multiple instances of a single pod are load balanced and accessed via a Service
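
To make these ideas concrete, here is a rough sketch of what a standalone Pod definition can look like in the v1beta1 API used throughout this post. It is illustrative only (the names are made up, and the shape simply mirrors the podTemplate we will see later), not one of the files in the sample archive:

{
  "id": "example-pod",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "labels": {"name": "example"},
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "containers": [{
        "name": "example-container",
        "image": "openshift/docker-registry",
        "ports": [{"containerPort": 5000, "protocol": "TCP"}]
      }]
    }
  }
}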

Get OpenShift v3

The first thing you’ll need is a host machine or virtual machine capable of running Docker.  Then download and extract a copy of the OpenShift binary onto that machine:

$ wget https://github.com/openshift/origin/releases/download/20141021-pre/openshift-origin-0.1-a4f33b1-linux-amd64.tar.gz

$ tar -zxf openshift-origin-0.1-a4f33b1-linux-amd64.tar.gz

Also download the archive of sample files and scripts we’ll be working with and extract it:

$ wget https://blog.openshift.com/wp-content/uploads/blog_part1_files.zip

$ unzip blog_part1_files.zip

Get pre-req images (optional)

This walkthrough uses a few Docker images.  The system will normally pull these images as needed, but to avoid having to wait for a docker pull to occur later when it may be unclear if things are succeeding or not, I recommend you go ahead and pull them explicitly now.  You can pull the images by running the pullimages.sh script:

$ ./pullimages.sh
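
If you would rather pull by hand, the following is roughly equivalent (an approximation of what the script does; pullimages.sh may also pull additional base images beyond the two used directly below):

$ docker pull openshift/docker-registry

$ docker pull openshift/kube-deploy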

From here on, all commands should be run within the directory where you extracted the sample files.

Start OpenShift

Starting up an OpenShift deployment is as easy as running the binary from the archive you downloaded earlier:

 $ ./openshift start
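
Note that openshift start runs in the foreground and logs to your console. One convenient pattern (my own habit, not something this walkthrough requires) is to run it in the background and capture the output so you can keep working in the same shell; depending on how your Docker daemon is configured, you may need to run it as root:

$ sudo ./openshift start &> openshift.log &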

This command starts up several important OpenShift components:

  • The Kubernetes master:  This component is responsible for managing the state of the system, ensuring that all containers that should be running are running, and that other requests (e.g. builds, deployments) are serviced.
  • A Kubelet:  Kubelets act as agents that control Kubernetes nodes.  They handle starting/stopping containers on a node, based on the desired state defined by the master.  A normal deployment would have multiple nodes, but here we just have a single one.  This also includes a Kube proxy, which allows applications running inside containers to access other containers deployed across the system.  I'll talk more about services later in this post.
  • The OpenShift master:  OpenShift provides a REST endpoint for interacting with the system.  New deployments/configurations are created via REST, and the state of the system can be interrogated through this endpoint as well.
  • An etcd server:  OpenShift uses etcd to store system configuration and state.
  • Controllers:  Controllers are components that run alongside the masters and make sure the running system matches the desired state stored in etcd.  For example, a DeploymentController watches for new Deployment objects and processes them.  Similarly, a BuildController watches for new Build objects and schedules the builds.

A normal OpenShift system would distribute these components (in particular the Kubelets and the Kubernetes master), but for simplicity in this case they all run in a single process on a single machine.

Figure 4: All components of a running OpenShift v3 deployment

Deploy a Docker Registry

OpenShift makes use of its own local Docker registry for storing Docker images that are built from application source.  This allows for fast access and (eventually) local authorization/authentication control.  The main purpose for now is that when an application is built, the new image will be pushed to this local registry and then later pulled from it during deployment.  This also touches on one of the central themes of OpenShift v3, which is to give administrators running OpenShift the same tools that developers have for creating apps on OpenShift.

In this case, the Docker registry will be deployed on OpenShift just like any other application would be.

Define a Docker Registry Configuration

We’re going to provide configuration information to OpenShift which describes how to deploy the Docker registry image, and OpenShift will use that Deployment to create a Kubernetes Pod with a Docker registry container running on the node.  Let’s take a look at the JSON file that is going to make that happen.  The file is docker-registry-config.json in the directory where you extracted the blog files.

"id": "docker-registry-config",
"kind": "Config",
"apiVersion": "v1beta1",
"creationTimestamp": "2014-09-18T18:28:38-04:00",
"name": "docker-registry-config",
"description": "Creates a private docker registry",
"items": [
  .......
]

The parent object is a Config object.  Config objects serve as a convenient wrapper for the different objects that can be created in OpenShift.  In this case, we see that there are two objects nested inside the Config.

The first object is a Service:

{
  "apiVersion": "v1beta1",
  "creationTimestamp": null,
  "id": "docker-registry",
  "kind": "Service",
  "port": 5001,
  "containerPort": 5000,
  "selector": {
    "name": "registryPod"
  }
}

This section defines a Kubernetes service which will provide access, via port 5001 on the node, to port 5000 of containers running in any Pod instance with the label “registryPod”.

The second object is a Deployment:

"id": "docker-registry",
"kind": "Deployment",
"apiVersion": "v1beta1",
"triggerPolicy": "manual",
"configId": "registry-config",
"strategy": {
  "type": "customPod",
  "customPod": {
    "image": "openshift/kube-deploy"
  }
}

First we define a triggerPolicy for when the Deployment should be run.  In this case it’s manual, but in the future this will enable scenarios such as “redeploy when a new version of my image is available.”

Next we define a strategy for executing the deployment.  Today only one strategy exists, in which we reference a specific Docker image that contains the logic for implementing the deployment.  During deployment, a container will start up using this image and talk to the OpenShift API to create the objects described in the deployment.  In the future this configurable strategy will enable other deployment flows such as rolling upgrades and atomic upgrades.
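
If you are curious about this mechanism, once a deployment has actually run you can usually spot the (now exited) deployment container on the node with plain Docker commands; this is just a way to peek under the covers, not a required step:

$ docker ps -a | grep kube-deploy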

Finally, we define a “controllerTemplate”:

"controllerTemplate": {
  "replicas": 1,
  "replicaSelector": {
    "name": "registrypod"
  }
  .......
}

This will be used during deployment to construct a Kubernetes ReplicationController definition which in turn will be responsible for ensuring the desired number of instances of the Pod are running.  Within the controllerTemplate we have indicated that we only want one instance of the pod, and the label of the pods we want to track is “registrypod”.

Within the controller template we define a podTemplate:

"podTemplate": {
  "desiredState": {
    "manifest": {
      "containers": [{
        "image": "openshift/docker-registry",
        "name": "registry-container",
        "ports": [{"containerPort": 5000, "protocol": "TCP"}]
      }],
      "version": "v1beta1",
      "volumes": null
    },
    "restartpolicy": {}
  },
  "labels": {"name": "registrypod"}
}

Like the controllerTemplate, this is used to define a Kubernetes Pod during deployment.  The pod will contain one container, based on the image “openshift/docker-registry” and it will expose one port, 5000.  Notice how this port ties back to the Service we defined earlier.  We also define the label for this Pod to be “registrypod” which corresponds to the label the ReplicationController will be looking for.

Apply the Docker Registry Pod Configuration

Because we have wrapped our Deployment and Service objects in a Config object, we can apply the entire configuration at once via the "apply" command.  This command causes the client to iterate over the list of objects in the Config and pass them to the OpenShift API, where the server will create them in etcd.  Once created in etcd, the various Controllers will recognize the new desired state and go about producing it by placing Pods on nodes, running containers, etc.

To apply the config, run this command:

$ ./openshift kube apply -c docker-registry-config.json

Creation succeeded for Service with 'id=registryservice'

Creation succeeded for Deployment with 'id=registry-deploy'

This will start up the registry pod.  It may take a moment while the deployment operation executes and creates the pod.  You can confirm that the new registryPod is running by executing:

$ ./openshift kube list pods

When complete, this will show your running pod:

ID                                     Image(s)                    Host                     Labels                                                                                                   Status
----------                             ----------                  ----------               ----------                                                                                               ----------
94679170-54dc-11e4-88cc-3c970e3bf0b7   openshift/docker-registry   localhost.localdomain/   deployment=registry-config,name=registryPod,replicationController=946583f6-54dc-11e4-88cc-3c970e3bf0b7   Running

You can also confirm the registry is accessible by running

$ curl localhost:5001

"docker-registry server (dev) (v0.9.0)"

Note: if this fails with a connection reset error, run the command a second time.

Notice that here we are using port 5001 which corresponds to the Service running on the OpenShift node.  This Service in turn proxies the request to the running registry container in the pod.  Normally Services are only intended to be used by other Pods, not for external requests, but it is a convenient way to access your containers until a traditional routing layer is available.

Create An Application Repository

Now that we've set up some OpenShift infrastructure, we can start to experience the actual developer workflow.

First, let’s create a new application repository on github.  Fork this repository:  https://github.com/bparees/openshift3-blog-part1  (If you want to skip this step, you can, but you’ll miss out on a few of the interesting flows).

Next, add this webhook to your new repository:  http://<your_openshift_host>:8080/osapi/v1beta1/buildConfigHooks/build100/secret101/github

This step assumes your OpenShift server is running on a publicly accessible IP address.  If it is not, you can safely skip this step and we’ll work around it later.

Create a Build

OpenShift relies on the concept of Builds to turn your application source into a runnable Docker image.  Build objects are just like other objects we've discussed, so we've got a JSON file to define one: application-buildconfig.json.

{
  "id": "build100",
  "kind": "BuildConfig",
  "apiVersion": "v1beta1",
  "desiredInput":
  {
    "type":      "docker",
    "sourceURI": "git://github.com/bparees/openshift3-blog-part1.git",
    "imageTag":  "openshift/origin-ruby-sample",
    "registry":  "127.0.0.1:5001"
  },
  "secret": "secret101"
}

There’s not too much going on here.  The build type is docker which means that essentially OpenShift will perform a docker build operation to construct your image.  There is also an sti type build which leverages the Source-To-Image component for layering applications onto existing images.  We’ll discuss this alternative build mechanism in a future blog post.

The sourceURI points to the source repository that will be used as the context for the build.  Since this is a docker build type, the source repository must contain a Dockerfile in addition to the application source.  If you forked the source repository earlier, then update this file to point to your fork instead.
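
As a rough illustration, a Dockerfile for a small Ruby web app might look something like the following; this is a hypothetical example, not necessarily the exact Dockerfile in the sample repository, and it assumes the app itself binds to port 8080:

# Hypothetical Dockerfile for a small Ruby web app listening on port 8080
FROM centos
RUN yum install -y ruby
ADD . /src
WORKDIR /src
EXPOSE 8080
CMD ["ruby", "app.rb"]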

The imageTag will be applied to the resulting image when it is pushed to the registry.  You will see this imageTag again when we define the Pod which runs the application.  The registry URL is the registry we started up earlier; notice once again how we use port 5001 because we are talking to the Service, not directly to the container.

Finally the secret is used by the webhook defined earlier so only you can trigger this build to run.

Since we only have a single object, there is no need to wrap it in a Config.  Instead we'll just create this object directly:

$ ./openshift kube create buildConfigs -c application-buildconfig.json

You should see the build definition returned as output:

ID                  Type                SourceURI
----------          ----------          ----------
build100            docker              git://github.com/bparees/openshift3-blog-part1.git

Run a Build

Now that you've got a build defined for your application, you can trigger the build so a new image is created for your application.  If you previously set up the forked repository and webhook, you can accomplish this by simply pushing a change to your repository.  If you did not set up the webhook, you can simulate the trigger by running:

$ curl -s -A "GitHub-Hookshot/github" -H "Content-Type:application/json" -H "X-Github-Event:push" -d @github-webhook-example.json http://localhost:8080/osapi/v1beta1/buildConfigHooks/build100/secret101/github

Once you’ve triggered a build, you should see OpenShift building your application:

 $ ./openshift kube list builds

It may take a few minutes for the build to transition from the new/pending/running states to the “complete” state, at which point the output will look like this:

ID                                     Status              Pod ID
----------                             ----------          ----------
20f54507-3dcd-11e4-984b-3c970e3bf0b7   complete            build-docker-20f54507-3dcd-11e4-984b-3c970e3bf0b7

OpenShift has now built a brand new Docker image containing your application, based on your Dockerfile.  It pushed this image to the local docker registry using the tag provided in the build definition.
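
Because this all-in-one setup builds on the same Docker daemon you are using, one optional sanity check (just an assumption about how you might verify it, not a required step) is to look for the freshly tagged image locally:

$ docker images | grep origin-ruby-sample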

Deploy the Application

Finally we’re ready to run your new application image.  Just like with the registry, we need to define a Config which will describe the Services and Deployments needed to run our application.  However, we’re going to introduce one more concept: Templates.  Although we could simply define a Config in the same way we did with the registry, templates allow you to parameterize an application definition.  What this means is you can define values which are referenced in various locations within the configuration.  These values can be explicitly defined, or they can be generated dynamically (such as for passwords).

The full template is application-template.json, in the directory where you extracted blog_part1_files.zip.

Most of this will look similar to the docker-registry-config.json, but the top level object is a Template instead of a Config.  In addition the first section is entirely new.  This is where parameters are defined.

"parameters": [
  {
    "name": "ADMIN_USERNAME",
    "description": "administrator username",
    "generate": "expression",
    "from": "admin[A-Z0-9]{3}"
  },
  {
    "name": "ADMIN_PASSWORD",
    "description": "administrator password",
    "generate": "expression",
    "from": "[a-zA-Z0-9]{8}"
  },
  {
    "name": "DB_PASSWORD",
    "description": "database password",
    "generate": "expression",
    "from": "[a-zA-Z0-9]{8}"
  }
]

Here we define a few parameters that use the “expression” generator which uses a regular expression-like syntax to define how the parameter’s value should be generated.  OpenShift will generate a value that’s guaranteed to match that syntax.
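
For example (these particular values are invented; each processing run generates new ones), the expressions above could expand along these lines:

admin[A-Z0-9]{3}   ->   adminB7X
[a-zA-Z0-9]{8}     ->   rT4wQk9m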

Moving down to the containers section, you can see how the values are referenced:

"containers": [{
  "name": "ruby-helloworld",
  "image": "127.0.0.1:5001/openshift/origin-ruby-sample",
  "env": [
    {
      "name": "ADMIN_USERNAME",
      "value": "${ADMIN_USERNAME}"
    },
    {
      "name": "ADMIN_PASSWORD",
      "value": "${ADMIN_PASSWORD}"
    },
    {
      "name": "DB_PASSWORD",
      "value": "${DB_PASSWORD}"
    }
  ],
  "ports": [{"containerPort": 8080}]
}]

For example the environment variable “ADMIN_USERNAME” is defined to take on the value of ${ADMIN_USERNAME} which will be whatever value is generated when we process the template.  The environment variable will then be available to the running container, and therefore to the application code.

To convert a template into a config that we can actually run, we use the “process” command.  If you want to see the result of processing a template, you can run:

$ ./openshift kube process -c application-template.json

This does not affect the state of what is running in OpenShift; it simply returns a standard Config JSON object suitable for applying.

To process the template and apply the result in one step, run:

$ ./openshift kube process -c application-template.json | ./openshift kube apply -c -

Creation succeeded for Service with 'id=frontend'

Creation succeeded for Deployment with 'id=frontend-deploy'

This will cause OpenShift to begin deploying a Pod for your new application.  You can confirm the new Pod is available by running:

$ ./openshift kube list pods

Sample output:

ID                                     Image(s)                       Host                Labels                                                   Status
----------                             ----------                     ----------          ----------                                               ----------
fc66bffd-3dcc-11e4-984b-3c970e3bf0b7   openshift/origin-ruby-sample   127.0.0.1/          name=frontend,replicationController=frontendController   Running

Once the Pod is running, you can access your application via the Service that was defined in the configuration.  The Service was defined to map port 5432 to all Pods with the label “frontend”.  To test the application, run:

 $ curl localhost:5432

Sample output:

Hello World!

User is adminM5K

Password is qgRpLNGO

DB password is dQfUlnTG

Note that the credentials seen here are referenced as environment variables in the container and reflect the values generated during template processing.
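
If you want to double-check that the generated values really did land in the container's environment, one way (assuming a reasonably recent Docker client with the --format flag, and substituting the container id from docker ps) is:

$ docker inspect --format '{{.Config.Env}}' [container id]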

Congratulations!  You’ve successfully started up an OpenShift v3 environment and then built and deployed a brand new application on it!

To clean up your environment, you can run the cleanup.sh script.  This will stop the openshift process, remove the etcd storage, and kill all Docker containers running on your host system.  (Use with caution!  Docker containers unrelated to OpenShift will also be killed by this script.)

$ ./cleanup.sh
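
For reference, the behavior described above is roughly equivalent to something like the following (a sketch of the described behavior, not the script's actual contents; the openshift.local.etcd path is an assumption based on the comments below):

$ killall openshift

$ docker kill $(docker ps -q)

$ rm -rf openshift.local.etcd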

Final Thoughts

Thank you for taking this tour of an early view of OpenShift v3.  Hopefully it’s given you some insight into the direction we’re headed with OpenShift as well as provided you with an understanding of some of the components and configuration involved.

In a follow-up blog, we'll look at how you can create a pod running a database and tie it to your application.  We'll also look at some additional capabilities like viewing the build logs, using deployment triggers to automatically redeploy an application when the image is updated, and using the source-to-image builder instead of the Dockerfile-based builder.

Going forward, we have lots of plans for enhancement (don’t worry, you won’t be hand-rolling JSON files forever!), and we’d love for you to get involved in the process either by contributing ideas or code.  Here are a few places to go if you’re interested in joining our community:

Additional References

Ruby Hello World application repository

The OpenShift v3 repository

The Kubernetes repository

Docker

  • Truelove

    When will it be possible to install any version of Node.js in cartridges?

    • Ben Parees

      Yes, that's the goal. If you can put the version of nodejs that you want to use in a container, you'll be able to run it on OpenShift. We will of course also supply a version ourselves, just like today.

  • docker replace VM o.o!

  • Christophe Furmaniak

    can't wait for the blog post where you'll talk about persistent data and docker volumes management

  • Arash Kaffamanesh

    Ben, many thanks for the nice guide, I’m trying to get this running on CentOS 7 VM (on top of OpenStack) and after starting openshift, I’m getting:

    I1026 08:11:14.947030 01112 controller.go:83] Synchronizing deployment id: docker-registry state: failed resourceVersion: 10

    any ideas?

    • Ben Parees

      What actual behavior are you seeing when you work through the steps of the blog? Two frequent issues are selinux getting in the way (“setenforce 0” to disable it temporarily) and firewalld blocking container->host communication (“systemctl stop firewalld” is the quickest fix there)

      • I ran into this as well on CentOS 7. Perhaps docker 1.2 there is not recent enough?

        http://fpaste.org/149919/41576557/

        Stopping firewalld and making SELinux permissive didn’t help.

        • Ben Parees

          Running the cleanup.sh script will get you back to a fresh state to start the steps. but the kube delete commands are a valid option too.

          It sounds like in the end stopping firewalld and setting SELinux to permissive did get you going (once you started fresh) right?

  • ykesh

    Building with docker using

    docker build -t vr/pres ./

    works fine, triggering the build with web-hook build is failing? How to find why its failing, where are the log files, /openshift.local.etcd/log contains all the status info, but no other logs. Where do I look for build logs?

    Thanks.

    • Ben Parees

      if you get the build id from "openshift kube list builds" you can view the build logs via "openshift kube buildLogs --id=[build id]"

      you can also do "docker ps -a" and find the docker-builder container and then run "docker logs [containerid]"

  • daneyon hansen

    I found this potential doc bug:

    ./openshift kube process -c application-template.json | openshift kube apply -c -

    For me, the 2nd openshift statement needed to be ./openshift:

    # ./openshift kube process -c application-template.json | ./openshift kube apply -c -
    Creation succeeded for Service with ‘id=frontend’
    Creation succeeded for Deployment with ‘id=frontend’

    • Ben Parees

      Ah, very good catch, thank you. blog has been updated.

      • Jim Minter

        FWIW (very minor) the command returns “Creation succeeded for Deployment with ‘id=frontend'” as Daneyon mentions, whereas the blog says “Creation succeeded for Deployment with ‘id=frontend-deploy'”.

  • Pingback: Deploying a PostgreSQL Pod in OpenShift V3 – OpenShift Blog()

  • Pingback: A Few Cool Technologies (2014-11-15) | How This Dummy Did It()

  • Pingback: How Builds, Deployments and Services Work in OpenShift V3()

  • Pingback: Continuous Integration and Deployment with OpenShift v3()

  • theute

    I have no pod listed after this command: “./openshift kube apply -c docker-registry-config.json” even though it executes with no error. See the commands below after a cleanup and starting openshift:

    [theute@desktop OpenShift]$ sudo ./openshift kube apply -c docker-registry-config.json
    Creation succeeded for Service with ‘id=docker-registry’
    Creation succeeded for Deployment with ‘id=docker-registry’
    [theute@desktop OpenShift]$ ./openshift kube list pods
    ID Image(s) Host Labels Status
    ———- ———- ———- ———- ———-

    [theute@desktop OpenShift]$ sudo ./openshift kube apply -c docker-registry-config.json
    Error: service “docker-registry” already exists
    Error: deployment “docker-registry” already exists

    Firewall and SELinux are disabled

    Any idea of what could be wrong ?

    • theute

      I have errors in the logs:

      I1212 14:31:05.127482 14066 rest.go:80] Creating deployment with namespace::ID: default::docker-registry
      E1212 14:31:06.329237 14066 etcd.go:185] Failed to get the key: registry/services/endpoints/default/docker-registry 100: Key not found (/registry/services/endpoints) [5]
      E1212 14:31:06.329254 14066 etcd.go:148] Couldn’t get endpoints for default docker-registry : 100: Key not found (/registry/services/endpoints) [5] skipping
      I1212 14:31:06.331352 14066 proxier.go:492] Opened iptables portal for service “docker-registry” on 172.121.17.1:5001
      E1212 14:31:06.331465 14066 etcd.go:185] Failed to get the key: registry/services/endpoints/default/docker-registry 100: Key not found (/registry/services/endpoints) [5]
      E1212 14:31:06.331474 14066 etcd.go:148] Couldn’t get endpoints for default docker-registry : 100: Key not found (/registry/services/endpoints) [5] skipping

      After trying with: export KUBERNETES_MASTER=http://127.0.0.1:8080

      The errors go away but the pod is still not listed

      • Ben Parees

        How long did you give it? initially the system needs to pull the images which can take some time. You can run docker pull openshift/docker-registry to ensure you already have the image which will avoid this wait.

        Also I’d recommend moving up to the latest openshift code and example from the openshift origin repository. You can follow the readme here which is the updated steps for this blog:
        https://github.com/openshift/origin/blob/master/examples/sample-app/README.md

        • theute

          I will try the latest, I had other issues with a different machine

          • Matt Hicks

            One thing that I tend to do is add my user to the 'docker' group. This allows me to run docker commands without sudo. This isn't required, but when I was using sudo, I would often accidentally run a step as my normal user and get a permission mismatch, which can be pretty tricky to track down.

  • Marcello De Sales

    Hi There,

    Are you guys going to solve the problem of Resource Management with Apache Mesos???

    Docker announced a partnership with Mesosphere to provide native support to Docker Machine and Docker Swarm to manage clusters in ANY Data Center, Host, Cloud Provider…

    https://www.youtube.com/watch?v=sGWQ8WiGN8Y&list=TLuv_lBNk222U

    Can you guys share insights about the directions of the Docker integration?

  • Giovanni Candido da Silva

    This is available in rhcloud?

  • Vinothini Raju

    Is it possible to use a docker file instead of a sourceURI in the buildconfig JSON file ?

  • Pingback: OpenShift 3 from zero | Teh Tech Blogz0r by Luke Meyer()

  • Jim Minter

    Great blog post! FWIW, all this works fine in Fedora 21 – but please note that before starting you should:

    1) yum -y install wget tar unzip docker-io psmisc
    2) disable SELinux ( sed -i -e ‘s/^SELINUX=.*/SELINUX=disabled/’ /etc/selinux/config ; setenforce 0 )
    3) disable firewalld ( systemctl disable firewalld.service ; systemctl stop firewalld.service )
    4) after stopping firewalld, (re)start the Docker daemon ( systemctl restart docker.service )

  • priyanka Gupta

    Hey, I am not able to run the application as described, hitting with the error:

    E0113 09:14:27.238616 01992 kubelet.go:550] Failed to pull image 127.0.0.1:5001/openshift/origin-ruby-sample: Error: image openshift/origin-ruby-sample not found skipping pod 90ad7c1c-9b04-11e4-8fad-0800279696e1.etcd container ruby-helloworld.

    any pointers on this?

    • Matt Hicks

      I just ran through on Fedora 21 today, using Jim’s comments below
      about setting SELinux to Permissive and turning off firewalld. One
      thing I noticed that you might be hitting. At one point, I was messing
      with my networking settings and the docker registry was still running
      but became inaccessible. I ran the cleanup script and started over and
      everything worked fine. Might be worth a shot.

  • Pingback: JBoss on Docker At a Glance | Red Hat Developer Blog()

  • Pingback: Key Concepts of Kubernetes - Miles to go 2.0 ...()

  • Josh West

    Hi Ben, This is a great post. I just wanted to let you know that the 0.1 release is no longer available. If you try to update to the latest 0.3, there are some changes to the kube commands mentioned in the post, such as `./openshift kube apply`, that make parts of the post obsolete. I have pointed some customers at these blogs, so it would be great to get an updated version of this.

  • Falcon Taylor-Carter

    Any chance of updating this article? Some of the links no longer work, like the openshift tar link, and the openshift docker registries no longer exist.