Red Hat's public Platform-as-a-Service (PaaS) offering, OpenShift Online, has now been switched over to our latest version based around Kubernetes, developed as part of the Cloud Native Computing Foundation (CNCF), and container runtime standards from the Open Container Initiative (OCI). The prior version of OpenShift Online (V2) and all your applications are still running, but you will need to migrate them to the latest version (V3).

In a prior blog post we provided guidelines for migrating a Ruby application from V2 to V3. In this post we are going to look at migrating a Python web application based on the Django web framework to the V3 version of OpenShift Online.

V2 Django Starter Application

Migrating an application between V2 and V3 requires a number of different steps. In this blog post we will concentrate on key changes required to the existing application source code of a Python web application used with V2.

The web application we will be using is the starter Django application provided for V2 found at:

Running an Initial Build

To demonstrate the progressive changes which are required, we are going to start off by attempting to deploy the original V2 application source code to V3. To avoid needing to migrate to a hosted Git repository while working out the required changes, we will use a binary input source build. This type of build allows us to push the source code directly from our local computer.

From within the root directory of our workarea containing the application source code we are going to run:

oc new-build --binary --strategy=source --image-stream=python:2.7 --name django

Running:

oc get all -o name
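
The output will look something like the following, although the exact resource name prefixes can vary between OpenShift versions:

imagestreams/django
buildconfigs/django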

we can see that all that has been created so far is an imagestream and a buildconfig definition. No actual attempt to build and deploy the source code has occurred yet.

To trigger a build we run:

oc start-build django --from-dir=.

To monitor the build process we can run:

oc logs bc/django --follow

If you view the log you will see that the local source code was uploaded. A build is then run using the Python Source-to-Image (S2I) builder provided with V3. The Python packages required by the application are listed in either setup.py or requirements.txt and will be installed automatically.

The build does in the end complete successfully, but there is a slightly ominous warning in the output.

---> Collecting Django static files ...
WARNING: seems that you're using Django, but we could not find a 'manage.py' file.
'manage.py collectstatic' ignored.

Ignoring this for now, let's attempt to deploy the image which has been built and see what happens.

oc new-app django

We will also expose the deployed web application, using a route to provide it with a URL which can then be used to access it from the public Internet.

oc expose svc/django

Running oc get pods to check the status of the deployed application we get:

NAME             READY     STATUS             RESTARTS   AGE
django-1-build   0/1       Completed          0          10m
django-1-jfknz   0/1       CrashLoopBackOff   3          1m

Unfortunately, the status of CrashLoopBackOff indicates that it is repeatedly failing to start up.

To see any errors in the logs from the web application when it is being started, we can run oc logs, passing as an argument the name of the pod. This yields:

WARNING: seems that you're using Django, but we could not find a 'manage.py' file.
Skipped 'python manage.py migrate'.
ERROR: don't know how to run your application.
Please set either APP_MODULE, APP_FILE or APP_SCRIPT environment variables, or create a file 'app.py' to launch your application.

So out of the box, the existing application source code for V2 will not run. Time to start working through the changes required.

Hosting the Web Application

When deploying a web application with the Python cartridge in V2, two ways of hosting the web application were supported.

The first was that if the source code contained a wsgi/application or wsgi.py file, it was assumed to be a WSGI script file providing a WSGI application. Apache/mod_wsgi would then be automatically configured to host the web application for you.

If instead an app.py file existed, it was assumed to be a self-contained web application that would start up a web server of its own. The Python script in this case would be run as python app.py.

The Django application we are migrating uses a wsgi/application file. This file isn't recognised by the Python S2I builder of V3 as being special. Therefore, we need to specify ourselves how to start a WSGI server to host the web application, using that WSGI script file as the entry point.

To do this we are going to use mod_wsgi-express, a much simplified way of installing and using Apache/mod_wsgi which works well with containerized deployments.

The first step to using mod_wsgi-express is to install it. To do that, add the mod_wsgi package to the requirements.txt file which pip will use when installing any required Python packages.
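
For example, if Django were the only package previously listed, the requirements.txt file might now contain something like the following (the exact entries and any version pins will depend on your application):

Django
mod_wsgi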

After having added mod_wsgi to the requirements.txt file, add a new file to the top level directory of the application source code called app.sh. Initially add to this:

#!/bin/bash

ARGS=""

ARGS="$ARGS --log-to-terminal"
ARGS="$ARGS --port 8080"


exec mod_wsgi-express start-server $ARGS wsgi/application

Ensure that the app.sh file is executable by running chmod +x app.sh.

We now trigger a new build using these changes by again running:

oc start-build django --from-dir=.

Once the build has completed, a new deployment will be automatically triggered.

Using oc get pods once the deployment has completed, this time we get:

django-2-0d047   1/1       Running     0          1m

This time the deployment is succeeding, but attempting to visit the application using its URL from the web browser yields an Internal Server Error.

Checking the logs for the pod, the error we find is:

   File "/opt/app-root/src/wsgi/application", line 9, in <module>
sys.path.append(os.path.join(os.environ['OPENSHIFT_REPO_DIR'], 'wsgi', 'myproject'))
File "/opt/app-root/lib64/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'OPENSHIFT_REPO_DIR'

The problem in this case is that the source code is expecting to find an environment variable OPENSHIFT_REPO_DIR.

Default Environment Variables

The OPENSHIFT_REPO_DIR environment variable was one of a number which were set automatically when using V2. These environment variables would provide various details about the filesystem layout under which the web application was running. These environment variables are no longer being set under V3.

We have two choices here to rectify this issue. We can either hunt through all the code looking for any references to these environment variables and change them to the appropriate path, or we can set the environment variables as part of the image so they are defined when the application is deployed.

To do the latter and set this environment variable, create a directory .s2i and in the directory create a file called environment containing:

OPENSHIFT_REPO_DIR=/opt/app-root/src

The directory /opt/app-root/src is where the Python S2I builder places the application source code inside the image, making it the V3 equivalent of the V2 repository directory.

Trigger a new build with the change and once deployed try refreshing the browser page for the URL of the web application.

This time we get back the error Server Error (500).

Enabling Django Error Logging

In this case the error is coming from Django and there is nothing in the error logs. This is because DEBUG is currently disabled in the Django settings file, and no mechanism has been set up to log Django errors to the logs.

We could enable DEBUG in the Django settings file, but that's not the best option for production systems. The better way to ensure we get logs for any errors raised by Django is to enable logging of errors to the error log. This way they will be included in the log messages captured from the container the application is running in and available through OpenShift.

To enable logging of Django errors, add to the Django settings file the following:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': os.getenv('DJANGO_LOG_LEVEL', 'INFO'),
        },
    },
}

With this change deployed, the error which we find has been logged is:

    if domain and validate_host(domain, settings.ALLOWED_HOSTS):
  File "/opt/app-root/lib/python2.7/site-packages/Django-1.8.4-py2.7.egg/django/http/request.py", line 570, in validate_host
    pattern = pattern.lower()
AttributeError: 'NoneType' object has no attribute 'lower'

This points to a problem with the ALLOWED_HOSTS setting in the Django settings module. The code for this setting is:

from socket import gethostname
ALLOWED_HOSTS = [
    gethostname(),  # For internal OpenShift load balancer security purposes.
    os.environ.get('OPENSHIFT_APP_DNS'),  # Dynamically map to the OpenShift gear name.
    #'example.com',  # First DNS alias (set up in the app)
    #'www.example.com',  # Second DNS alias (set up in the app)
]

It failed because the environment variable OPENSHIFT_APP_DNS was not set and yielded a None value when queried.

Setting the Allowed Hosts

The ALLOWED_HOSTS setting in Django is a list of strings representing the host/domain names that this Django site can serve. This is a security measure to prevent HTTP Host header attacks, which are possible even under many seemingly-safe web server configurations.

At this point in time we do not know what final host or domain name the site will be made available under. When we exposed the service we relied on OpenShift providing us with one, and what host name was assigned is not readily available from within the running container. To work around this issue for now, until we know what final host name we might use for the site, let's accept requests against any host name by adding to the .s2i/environment file:

OPENSHIFT_APP_DNS=*

In practice this could actually be left as the value "*". This is because OpenShift uses a front end load balancer and will only direct requests through to the web application which match the advertised host name for the site. If a different Host header value were provided, the request would be rejected at the load balancer as it wouldn't know where to direct it.
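
If you do want to know what host name OpenShift assigned when the service was exposed, it can be queried from outside the container by looking at the route:

oc get route/django

The HOST/PORT column of the output shows the public host name assigned to the route.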

Trying again after deploying the change, we get a different error, this time Not Found. This is because the sample starter application only has a URL handler registered for the /admin URL. Using that URL instead, we now successfully get back a response and are shown the login page for the Django admin interface.

One problem with the login page presented though is that it has no styling. A check of the error log shows:

Not Found: /static/admin/css/base.css
Not Found: /static/admin/css/login.css

Missing Static File Assets

The reason that we are missing any styling for the login page to the Django admin interface is that when Django is in production mode it doesn't itself host static files. What we need to do is configure mod_wsgi-express to host the static files. Before we do that though, we need to collate all the static files for the Django web application together.

The Django management command to collate together all the static file assets is python manage.py collectstatic. You may remember, though, the warning in the initial build logs about collectstatic not being able to be run.

The reason this message came out is that if the Python S2I builder detects Django is being used, it will attempt to run collectstatic for you automatically. For this original V2 version of the application, the directory structure is different from what V3 expects, so the builder didn't find the Django manage.py file where it expected it to be.

To get around this, we will need to run the collectstatic command ourselves as part of the build. This is where things get a bit complicated: although V2 had an action hooks feature whereby you could specify extra steps to be run as part of the build and deployment, V3 does not have an equivalent feature.

Replicating Action Hooks

Because of the quite varied ways in which Python web applications can be deployed, and the large number of Python web frameworks, each potentially with their own special build and deployment steps, being able to replicate the action hooks feature of V2 is essential to making it easier to deploy Python web applications.

To fill this need, I've created a Python package to help out.

To install and use this Python package, first create the directory .s2i/bin in the workarea for your source code. In this directory create a file called assemble and in the file add:

#!/bin/bash

pip install powershift-cli[image]


exec powershift image assemble

Ensure the file is executable by running chmod +x .s2i/bin/assemble.

Also create a file run in the same directory, and in it add:

#!/bin/bash

exec powershift image run

Make this executable as well by running chmod +x .s2i/bin/run.

With the addition of these two files, we restore an equivalent feature to what action hooks provided us under V2. The location where these action hook scripts need to be placed does differ though, so we need to create them and migrate any existing action hook scripts as necessary.

Hosting Static File Assets

We now have the capability to define action hooks once more, so we can specify additional actions, such as collectstatic, to run during a build.

First though, just to ensure it doesn't cause any problems, disable any attempt by the Python S2I builder to also run collectstatic.

To do this, add to the .s2i/environment file:

DISABLE_COLLECTSTATIC=1

We now want to add a build action hook script to run collectstatic ourselves. This needs to be placed in the .s2i/action_hooks directory, so create that directory and in it create a file called build. To the build script add:

#!/bin/bash

python $OPENSHIFT_REPO_DIR/wsgi/myproject/manage.py collectstatic --noinput

Make the build script executable by running chmod +x .s2i/action_hooks/build.

With this in place, when the build occurs to create the image for our web application, the collectstatic command will be run. This will end up copying all static files required by the Django web application into the directory defined by the STATIC_ROOT setting of the Django settings module. In this case that was defined as the directory wsgi/static.

Under V2, when Apache/mod_wsgi was used, any files in the wsgi/static directory were automatically hosted at the sub URL /static. When using mod_wsgi-express as we are now, we will need to tell it to do the same.

Modify the app.sh script and change it to:

#!/bin/bash

ARGS=""

ARGS="$ARGS --log-to-terminal"
ARGS="$ARGS --port 8080"
ARGS="$ARGS --url-alias /static wsgi/static"


exec mod_wsgi-express start-server $ARGS wsgi/application

In V2 you were restricted to static files being hosted under the sub URL /static, and thus STATIC_URL in the Django settings module had to be that value. When using mod_wsgi-express you can now change the sub URL used for static assets, as well as the directory into which they are copied by collectstatic. Just modify the settings in the Django settings module and update app.sh to match.
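
As an illustrative sketch only, and assuming the usual Django setting names (the exact values in the starter application's settings module may differ), serving the assets under /assets instead would mean having settings such as:

STATIC_URL = '/assets/'
STATIC_ROOT = os.path.join(WSGI_DIR, 'assets')

with the matching line in app.sh then being:

ARGS="$ARGS --url-alias /assets wsgi/assets"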

Now trigger a new build and deployment. When accessing the Django admin interface, you'll see the styling of the page is now correct. The files for the stylesheets are now able to be loaded by the browser.

What we can't do is login to the admin interface.

This is because we need to initialize the database and also set an initial super user account we can use to login with.

Using a SQLite Database

In the sample starter application from V2 we are using, a SQLite database was used to store data associated with the Django application. This actually presents us with two problems. We need to initialize the database and create an initial super user, but we also need to store that database in a file system directory which is preserved across restarts of the application.

In V2, a persistent data directory was made available to all applications and they could store a file based database, or any other data in that directory. If the application was restarted, the data would still be there when the new instance of the application came up.

In V3, there is no persistent data directory automatically provisioned for applications. If the application is restarted, or more correctly, if the container the application runs in is shut down and the application started up again in a new container, any changes to the filesystem are lost.

In order to preserve data stored in the file system across a restart of the application when using V3, it is necessary to make a claim for a persistent volume and mount it against the application container at a directory the application expects.

Before we look at the problem of trying to initialize the database, let's look at the issue of setting up the persistent volume.

Claiming a Persistent Volume

When running V2, the location of the persistent data directory was provided by the OPENSHIFT_DATA_DIR environment variable. In the Django settings module for our application, the configuration for working out the location of the data directory is as follows:

import os
DJ_PROJECT_DIR = os.path.dirname(__file__)
BASE_DIR = os.path.dirname(DJ_PROJECT_DIR)
WSGI_DIR = os.path.dirname(BASE_DIR)
REPO_DIR = os.path.dirname(WSGI_DIR)
DATA_DIR = os.environ.get('OPENSHIFT_DATA_DIR', BASE_DIR)

The code therefore does consult the OPENSHIFT_DATA_DIR environment variable, but if it isn't set, as will be the case in V3, it will fall back to using the base directory for the project, which is /opt/app-root/src/wsgi/myproject.

This fallback directory isn't suitable as a place to mount a persistent volume, as mounting a volume at that location will hide the source code for our application.

What we want to do instead is use the directory /opt/app-root/src/data, which already exists as part of the application source code, but is empty.

To override the directory used, we can add a further environment variable setting to the .s2i/environment file of:

OPENSHIFT_DATA_DIR=/opt/app-root/src/data

Once again, start a new build and deployment of the application. When it's complete we now need to claim a persistent volume and mount it at that path. To do that we will run the command:

oc set volume dc/django --add --name=data --claim-name=django-data --type pvc --claim-size=1G --mount-path /opt/app-root/src/data
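
To check that the persistent volume claim was created and bound to a volume, you can run:

oc get pvc/django-data

The STATUS column of the output should show Bound once storage has been allocated for the claim.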

This data directory actually gets used for two purposes. The first is to hold the SQLite database.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        # GETTING-STARTED: change 'db.sqlite3' to your sqlite3 database:
        'NAME': os.path.join(DATA_DIR, 'db.sqlite3'),
    }
}

The second is to hold a generated file which contains secrets used in the Django settings module.

import sys
sys.path.append(os.path.join(REPO_DIR, 'libs'))
import secrets
SECRETS = secrets.getter(os.path.join(DATA_DIR, 'secrets.json'))

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = SECRETS['secret_key']

If this secrets file doesn't exist when the application is started, a default value will be used. That default value should not be relied upon for a production system as knowing it would allow session cookies used by Django to be decoded if they are intercepted. So not only do we need to initialize the database, we also need to create the secrets file.

Initializing Database and Secrets

When the persistent volume was claimed and associated with the application using oc set volume, the application was automatically redeployed. We therefore just need to initialize the database, create the initial super user account, and create the secrets file.

To do this we need to access the running container and run a few commands.

To get a list of the running pods for the container, run oc get pods.

NAME             READY     STATUS    RESTARTS   AGE
django-8-zlz58   1/1       Running   0          5m

Now use oc rsh to start an interactive shell within the container for the application. This will leave us in the top level directory of our application source code.

$ oc rsh django-8-zlz58
(app-root)sh-4.2$ ls -las
total 60
4 drwxrwxr-x 12 default root 4096 Jul 3 05:18 .
4 drwxrwxr-x 20 default root 4096 Jul 3 05:18 ..
4 -rwxrwxr-x 1 default root 182 Jul 3 04:10 app.sh
4 drwxrwx--- 2 root root 4096 Jul 3 05:18 data
4 -rw-rw-r-- 1 default root 57 Jul 3 04:10 .gitignore
4 drwxrwxr-x 2 default root 4096 Jul 3 05:20 libs
4 drwxrwxr-x 5 default root 4096 Jul 3 04:10 .openshift
4 drwxrwxr-x 4 default root 4096 May 26 12:32 .pki
4 -rw-rw-r-- 1 default root 2579 Jul 3 04:10 README.md
4 -rw-rw-r-- 1 default root 9 Jul 3 04:10 requirements.txt
4 -rwxrwx--- 1 default root 1024 Jun 22 16:12 .rnd
4 drwxrwxr-x 3 default root 4096 Jul 3 04:10 .s2i
4 -rw-rw-r-- 1 default root 735 Jul 3 04:10 setup.py
4 drwxrwxr-x 5 default root 4096 Jul 3 05:18 wsgi
4 drwxrwxr-x 2 default root 4096 Jul 3 04:10 YourAppName.egg-info

To initialize the database, we run:

python wsgi/myproject/manage.py migrate

When run, this will display all the steps performed to initialize the database.

Operations to perform:
  Synchronize unmigrated apps: staticfiles, messages
  Apply all migrations: admin, contenttypes, auth, sessions
Synchronizing apps without migrations:
  Creating tables...
    Running deferred SQL...
    Installing custom SQL...
Running migrations:
  Rendering model states... DONE
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying sessions.0001_initial... OK

Next we need to create the initial super user account by running:

python wsgi/myproject/manage.py createsuperuser

This will prompt us for the user name, email and password for the account.

To create the secrets file, we can run:

python libs/secrets.py > data/secrets.json

The end result is that the data directory contains:

total 48
4 drwxrwx--- 2 root root 4096 Jul 3 06:02 .
4 drwxrwxr-x 10 default root 4096 Jul 3 05:54 ..
36 -rw-r--r-- 1 1000050000 root 36864 Jul 3 06:00 db.sqlite3
4 -rw-r--r-- 1 1000050000 root 69 Jul 3 06:02 secrets.json

Returning to the login page for the Django admin interface, it is now possible to login with the user account just created.

Scaling and Persistent Volumes

We now have our persistent volume set up and our database working. We aren't quite done yet though.

The issue at this point is what type of persistent volume we have access to. Depending on what type of persistent volume is available, there can be restrictions on whether you can scale up the web application, and also on whether you can use a rolling deployment.

Currently in OpenShift Online (V3), the only storage type which is available has an access mode of "RWO", meaning "ReadWriteOnce."

What this actually means in practice is that the volume can only be mounted as read-write by a single node on the OpenShift cluster at once.

The implication of this is that, because a persistent volume is being used by the application, the application cannot be scaled up beyond a single instance, or replica. This is because when scaling up the application, the additional instances could be deployed to a different node in the cluster.

A further issue is that the default deployment strategy in OpenShift is what is called a rolling deployment. This means that when restarting an application, the new instance of the application will be started first. When the new instance is running, traffic will be re-routed to it, and only then is the old instance shut down.

Because the new instance might be started on a different node to the existing instance, it may block because it is unable to mount the persistent volume, given the single node restriction for a persistent volume with access mode "RWO".

To avoid this problem, we need to switch to the recreate deployment strategy. When this strategy is used, the existing instance will be shut down first and only then will the new instance be started. This means there is a momentary downtime for the application, but that cannot be avoided given the restriction of the persistent volume type.

To switch to the recreate deployment strategy, we first need to scale down the application so it isn't running.

oc scale dc/django --replicas=0

Next we need to patch the deployment configuration to override the deployment strategy used.

oc patch dc/django --patch '{"spec":{"strategy":{"type":"Recreate"}}}'
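
To confirm that the patch was applied, the strategy type can be queried back from the deployment configuration:

oc get dc/django -o jsonpath='{.spec.strategy.type}'

This should now output Recreate.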

We can then safely set the number of instances of the application back to 1.

oc scale dc/django --replicas=1

Whenever we now trigger a new build or deployment, we are guaranteed that the new instance will not block due to not being able to mount the persistent volume on a new node.

The alternative to this would have been to move to using a separate database service such as PostgreSQL, as well as handling the SECRET_KEY for Django differently. In the case of the SECRET_KEY, instead of using a secrets file stored in the persistent volume, we could have had OpenShift manage the secrets file instead. This would have removed the reliance on a persistent volume for the web application, allowing it to be scaled and to keep using the rolling deployment strategy.
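
As a rough sketch of how OpenShift could manage the secrets file (the names used here are only examples, and this assumes you have a copy of secrets.json on your local computer), you could load it into OpenShift as a secret and mount it into the container:

oc create secret generic django-secrets --from-file=secrets.json=secrets.json

oc set volume dc/django --add --name=django-secrets --type=secret --secret-name=django-secrets --mount-path=/opt/app-root/secrets

The Django settings module would then need to be changed to read the secrets file from that mount path rather than from the data directory.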

Handling Database Migrations

The web application is now working under V3, but we are not quite done yet. The next thing to look at is how we handle database migrations.

Using Django, when you make code changes that change the database model, it is necessary to perform a database migration at the same time as deploying the new code changes. This will make any appropriate changes to the database tables, populate new fields, etc.

The Django management command to run the database migration is python manage.py migrate.

Under V2 of OpenShift Online, for this application the command was being automatically run from a deploy action hook. Under V3 there are two options as to how you could have it run automatically.

The preferred way would be to define a mid lifecycle hook as part of the deployment configuration for the application. This approach is possible when using the recreate deployment strategy. It allows you to define a command to be run within a container between the time the existing application instance is stopped and the new instance is started. By running it at this time, when the web application is not running, you ensure that you are not running it while the web application is trying to access or modify the database.

The simpler approach, although one which should be avoided if running with a separate database service and using a rolling deployment or a scaled up application, is to use the deploy action hook as done with V2.

Before we add that, we first need to ensure we disable any attempt by the Python S2I builder to run the database migration for us. This is because, like with collectstatic, it will run migrate for us if it can. But as indicated by the warning displayed in the logs on startup of the container, it couldn't do so successfully because the application source code layout didn't match what it expected.

To prevent the Python S2I builder running the database migration, add to the .s2i/environment file:

DISABLE_MIGRATE=1

Now create the file .s2i/action_hooks/deploy and add to it:

#!/bin/bash

python $OPENSHIFT_REPO_DIR/wsgi/myproject/manage.py migrate --noinput

Ensure that the deploy action hook script is executable by running chmod +x .s2i/action_hooks/deploy.

When adding the deploy action hook, we can also take the opportunity to have it create the secrets file automatically the first time if it doesn't already exist. This means we can skip needing to create that manually. So also add to the deploy action hook script:

if [ ! -f $OPENSHIFT_DATA_DIR/secrets.json ]; then
    python $OPENSHIFT_REPO_DIR/libs/secrets.py > $OPENSHIFT_DATA_DIR/secrets.json
fi

The application source code is now set up to automatically perform database migrations on every new deployment of the application.

Command Execution Environment

When we ran the Django management commands to initialize the database and create the initial super user account, we did so from an interactive shell running within the container. When the shell is initialized, it performs special steps to set up the execution environment so commands will work properly. This includes enabling the use of the correct Python runtime and virtual environment.

If we were to run the Django management commands, not from within the interactive shell, but by supplying the command to oc rsh or oc exec, the commands would not work. That is, if you were to run:

oc rsh <pod> python wsgi/myproject/manage.py migrate

it will fail with an error:

ImportError: /opt/app-root/lib64/python2.7/lib-dynload/_io.so: undefined symbol: _PyErr_ReplaceException

This is because the required Python runtime will not have been setup correctly.

To work around this, if you need to run any Python script with oc rsh or oc exec, the execution of it should be wrapped in a shell script. This works because executing the shell script has the side effect of enabling the correct Python runtime and virtual environment.
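
As a minimal sketch of such a wrapper (the file name is arbitrary), you could add a migrate.sh script to the top level of the application source code:

#!/bin/bash

# Being executed via bash, this script has the side effect of enabling
# the correct Python runtime and virtual environment before the command runs.
exec python wsgi/myproject/manage.py migrate "$@"

Make it executable with chmod +x migrate.sh, after which it can be run with oc rsh <pod> /opt/app-root/src/migrate.sh.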

This becomes an issue if you wish to run the database migration command from a mid lifecycle hook, or if you want to use a distinct Python script to implement a command based liveness or readiness probe.

An alternative option to creating a shell script wrapper is to use a feature of the package we installed to implement the action hook mechanism. It provides a command wrapper you can use to execute other commands, and which will ensure that the Python runtime and virtual environment are set up correctly. Using this you can instead run:

oc rsh <pod> powershift image exec python wsgi/myproject/manage.py migrate

An added benefit is that the package's implementation of action hooks also allows you to specify a deploy_env script. This can set additional environment variables for the deployment of the application dynamically, rather than them needing to be static as when using the .s2i/environment file, or environment variables passed in the deployment configuration.

The powershift image exec command, as well as ensuring the Python runtime and virtual environment are set up correctly, will also run the deploy_env script to set up the additional environment variables set by that script. This ensures any command you run has the same environment variables as your deployed application.
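
As a hypothetical example only (check the package documentation for the exact conventions expected), a .s2i/action_hooks/deploy_env script might look like:

#!/bin/bash

# Assuming the script is sourced by the run wrapper, plain variable
# assignments are enough to add to the environment of the application.
DJANGO_LOG_LEVEL=${DJANGO_LOG_LEVEL:-INFO}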

If you need an interactive shell and want to be sure those same environment variables are set, you can also run:

oc rsh <pod> powershift image shell

and an interactive shell will be started. This will again run deploy_env and incorporate any environment variables it sets into the environment of the interactive shell session.

The package also extends V2 style action hooks even further, allowing special scripts to be provided for a range of other purposes.

  • verify - Commands to verify an image. Would be run from postCommit action of a build configuration to test an image before it is used in a deployment.
  • ready - Commands to test whether the application is ready to accept requests. Would be run from a readiness health check of a deployment configuration.
  • alive - Commands to test whether the application is still running okay. Would be run from a liveness health check of a deployment configuration.
  • setup - Commands to initialize any data for an application, including perhaps setting up a database. Would be run manually, or if guarded by a check against being run multiple times, could be run from a deploy action hook script.
  • migrate - Commands to perform any data migration, including perhaps updating a database. Would be run from a mid lifecycle hook if using the recreate deployment strategy, or from a deploy action hook script if it is not a scaled application and not using rolling deployments.

To run any of these you would use the command powershift image <script>. For example:

oc rsh <pod> powershift image migrate

The benefit of using these scripts to hold the commands for these various actions is that it encapsulates the commands as part of the application source code. This avoids the problems which arise when commands tied to the specific way an image is set up are embedded directly in a build or deployment configuration. By using the generic entry point, you can change the commands needing to be run for an action and the new commands will be picked up automatically on the next build or deployment, without needing to update the build or deployment configuration to match the changes made in the image.
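
For example, if you did want to run the database migration from a mid lifecycle hook when using the recreate deployment strategy, a rough sketch of the required patch (assuming the container in the deployment configuration is named django) would be:

oc patch dc/django --patch '{"spec":{"strategy":{"recreateParams":{"mid":{"failurePolicy":"Abort","execNewPod":{"containerName":"django","command":["powershift","image","migrate"]}}}}}}'

Because the generic powershift image migrate entry point is used, the commands actually run can later be changed in the migrate action hook script without having to touch the deployment configuration again.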

For further information on the powershift-cli[image] package see its code repository at:

Further Migration Steps

In this blog post we have covered the key steps to get the Django starter application from V2 converted to V3. In migrating the application, we focused mainly on the changes required to the application source code itself. You can see the final modified application code at:

For other advice, such as on migrating the application source code out of the V2 hosted Git repository, deploying and linking a separate database service, and migrating data held in a database under V2, see the prior blog post for migrating a Ruby application.

If you are new to the V3 version of OpenShift, also check out our interactive learning portal at:

We provide a range of exercises you can work through live with a dynamically provisioned OpenShift cluster. These include other topics you will be interested in, such as accessing an application under OpenShift, transferring data in and out of an application, as well as how to temporarily expose a database service in order to load data into it when migrating an application.