AWS and Red Hat – Digging a Little Deeper

Hopefully by now, you have either seen the Amazon Web Services (AWS) and Red Hat alliance keynote or at least read the press release. Some highlights in case you missed it:

* AWS cloud services integrated with Red Hat OpenShift Container Platform to enable hybrid deployments.
* Joint support path for applications using OpenShift with integrations to AWS.
* Collaboration on Kubernetes to make OpenShift run more efficiently on AWS.
* Enhanced Red Hat Enterprise Linux optimizations for AWS.

Read More...

ASP.NET on OpenShift part 5: Models in the MVC

PART 5 – Models: As we’ve been seeing in this series so far, MVC stands for Model-View-Controller. In the first two parts, I talked about the Controller. In the last two parts, we went over Views and putting your project on OpenShift. In this fifth and final part of the MVC series, I’m going to write about Models.

Read More...

Jupyter on OpenShift Part 7: Adding the Image to the Catalog

When you are deploying an application from the OpenShift web console, you have the choice of deploying an image hosted on an external image registry, or an existing image built within the OpenShift cluster using either the Docker or Source build strategies. This is done from Deploy Image after having selected Add to project in the web console.

Read More...

Jupyter on OpenShift Part 6: Running as an Assigned User ID

When you deploy an application to OpenShift, by default it will be run with an assigned user ID unique to the project the application is running in. This user ID will override whatever user ID a Docker-formatted image may declare as the user it should be run as.

Running the applications in each project under a user ID different from that used in any other project is part of the multi-layered approach to security in OpenShift. In this post, we will delve further into the topic of user IDs, as well as what changes need to be made to the Jupyter Notebook image to enable it to run as the user ID OpenShift assigns to it.
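
A common way images are adapted to cope with this, sketched below with illustrative paths rather than the actual Jupyter image layout, relies on the fact that the assigned user ID always carries supplementary group GID 0 (root), so it is group write access that matters rather than the UID itself:

```shell
# Make the application directories writable by the root group, since the
# assigned user ID varies but always belongs to GID 0.
# APP_ROOT is a local stand-in for a directory baked into the image.
APP_ROOT=$(mktemp -d)
mkdir -p "$APP_ROOT/work"
# In a Dockerfile this would typically be: RUN chmod -R g+w /opt/app-root
chmod -R g+w "$APP_ROOT"
ls -ld "$APP_ROOT" | cut -c6   # prints "w" once group write is enabled
```

With permissions fixed this way, the image no longer depends on running as the user ID it declared, which is exactly the situation OpenShift's default security policy creates.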

Read More...

Enhancing your Builds on OpenShift: Chaining Builds

In addition to the typical scenario of using source code as the input to a build, OpenShift provides another build input type called “Image source”, which streams content from one image (the source) into another (the destination).

Using this, we can combine content from one or more source images, passing one or more files and/or folders from a source image to the destination image. Once the destination image has been built, it is pushed to the registry (or an external registry) and is ready to be deployed.
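
As a sketch, an Image source input appears under `source.images` in a BuildConfig; the image name and paths below are illustrative placeholders, not taken from the post:

```yaml
# Illustrative BuildConfig fragment: stream files from a source image
# into the build context of the destination image.
source:
  images:
  - from:
      kind: ImageStreamTag
      name: source-image:latest
    paths:
    - sourcePath: /opt/app/artifacts/.
      destinationDir: "artifacts"
```

During the build, the contents of /opt/app/artifacts from source-image are made available under artifacts/ in the build context of the destination image.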

Read More...

Jupyter on OpenShift Part 5: Ad-hoc Package Installation

Persistent volumes are used primarily to store application data, so that if the container running an application is restarted, the data is preserved and available to the new instance of the application.

When using an interactive coding environment such as Jupyter Notebooks, what you may want to persist can extend beyond just the notebooks and data files you are working with. Because it is an interactive environment using the dynamic scripting language Python, a user may want to install additional Python packages at the point they are creating a notebook.
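
One way such ad-hoc installs can be made to survive restarts, assuming the persistent volume is mounted at a known path, is to point Python's per-user package directory into the volume. The paths and the use of PYTHONUSERBASE below are an illustrative sketch, not necessarily what the image in this series implements:

```shell
# Stand-in for the persistent volume mount point.
WORKSPACE=${WORKSPACE:-$(mktemp -d)}
# Direct per-user package installs into the volume instead of the image.
export PYTHONUSERBASE="$WORKSPACE/.local"
# Any 'pip install --user <package>' now unpacks under $PYTHONUSERBASE,
# so the installed packages persist across container restarts.
python3 -c 'import site; print(site.getusersitepackages())'
```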

Read More...

Jupyter on OpenShift Part 4: Adding a Persistent Workspace

To provide persistence for any work done, it becomes necessary to copy any notebooks and data files from the image into the persistent volume the first time the image is started with that volume. In this blog post, I will describe how the S2I-enabled image can be extended to do this automatically, as well as cover some other issues related to saving your work.
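
The first-start copy can be sketched as a small start-up script. The marker-file approach and the paths here are illustrative assumptions, not the exact mechanism from the post:

```shell
VOLUME=${VOLUME:-$(mktemp -d)}   # stand-in for the persistent volume mount
SEED=${SEED:-$(mktemp -d)}       # stand-in for notebooks baked into the image
touch "$SEED/example.ipynb"      # pretend the image ships one notebook
# Seed the volume only on first start, tracked by a marker file, so later
# restarts never overwrite notebooks the user has modified or created.
if [ ! -f "$VOLUME/.seeded" ]; then
    cp -R "$SEED/." "$VOLUME/"
    touch "$VOLUME/.seeded"
fi
ls -A "$VOLUME"
```

Because the copy is guarded by the marker file, rerunning the script on every container start is safe: the volume is populated exactly once.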

Read More...

Jupyter on OpenShift Part 3: Creating a S2I Builder Image

In the prior post in this series, I described the steps required to run the Jupyter Notebook images supplied by the Jupyter Project developers. When run, these notebook images provide an empty workspace with no initial notebooks to work with. Depending on the image used, they include a range of pre-installed Python packages, but they may not have all the packages a user needs.

Read More...

Jupyter on OpenShift Part 2: Using Jupyter Project Images

The quickest way to run a Jupyter Notebook instance in a containerised environment such as OpenShift is to use the Docker-formatted images provided by the Jupyter Project developers. Unfortunately, the Jupyter Project images do not run out of the box with the typical default configuration of an OpenShift cluster.

In this second post of the series about running Jupyter Notebooks on OpenShift, I am going to detail the steps required to run the Jupyter Notebook software on OpenShift.

Read More...