Hopefully by now, you have either seen the Amazon Web Services (AWS) and Red Hat alliance keynote or at least read the press release. Some highlights in case you missed it:
- AWS cloud services integrated with Red Hat OpenShift Container Platform to enable hybrid deployments.
- Joint support path for applications using OpenShift with integrations to AWS.
- Collaboration on Kubernetes to make OpenShift run more efficiently on AWS.
- Enhanced Red Hat Enterprise Linux optimizations for AWS.
I want to spend some time elaborating on each of these, but first, I would ask that you indulge me in a bit of history. For me personally, this collaboration has been more than seven years in the making; it is really a culmination of where the industry is today, and an indication of where we still need to go.
March 2010 was an exciting month in the collaboration history of Red Hat. It was a frantic, all-hands-on-deck time internally for me, along with a tireless team of engineers, operations admins, and countless others, as we put the finishing touches on delivering, after years of public beta, generally available and supported Red Hat Enterprise Linux subscriptions in Amazon EC2. As history showed, this alliance represented more than just making our flagship Red Hat Enterprise Linux product available through the AWS channel; it triggered, out of necessity, a few other firsts within the company.
Enabling Red Hat Enterprise Linux was the first time (to my knowledge) that any engineering group within Red Hat had used agile methodology to deliver something usable at the end of a “sprint” — no PRD, no ERD, no Waterfall — just a bunch of user stories and a common mission, because we had less than 30 days to deliver. The code seemed simple in theory, but it was a big step for Red Hat. We were tasked with designing a system that let users access our continuous update stream with no direct, or even indirect, knowledge of whether that user was entitled — this was the basis for enabling Red Hat Enterprise Linux on demand. It was exciting and a culture shock all at the same time, and not without its challenges: solving a problem for “cloud” and doing it with a process that was new to us. The result, however, is that anyone who goes to AWS today and starts up a Red Hat Enterprise Linux instance uses this capability without needing to “register” with Red Hat.
We were not finished — though the code was completed, tested, and deployed, someone needed to be responsible for running it. In those days, outside of a few folks who maintained the existing Red Hat Network, we really did not have the concept of a team that runs a product as a service. Enter our foray into DevOps: the same developers who wrote the code became the operations team overnight, once again out of necessity. What we learned about ourselves, though, is that the model was a good one for Red Hat. The engineers were able to make a better product by directly experiencing, as the weeks passed, the challenges of running the service themselves. There was something to the “as-a-Service” concept that, while foreign to many of us, really made sense once we realized we could anticipate customer issues in production before our customers hit them.
All of this brings us to today’s announcement — online-first productization and DevOps practices are now ingrained into our portfolio, built on the foundation of what we started in a 30-day period more than seven years ago with Red Hat Enterprise Linux on AWS.
Today is about hybrid applications and services
Our initial work with AWS was not only about enabling Red Hat Enterprise Linux, but about enabling hybrid cloud with a consistent and supported operating system experience as the foundation. Fast forward to today’s announcement, and that foundation is as relevant now as it was more than seven years ago — this time with containers providing the abstraction, because containers are Linux, as Joe Fernandes notes. However, what is interesting for customers and partners — really the entire Red Hat ecosystem — is how we are now laying the groundwork for hybrid applications and services.
From a developer and application perspective, whether it’s public cloud with AWS or your own datacenter, the experience looks and feels like one single platform. As we mentioned during the keynote demo — if you can see Amazon Web Services, then we expect you to be able to use Amazon Web Services for your applications in OpenShift. Thanks to the portability of containers and the Open Service Broker API, this means any OpenShift offering should have the ability to take advantage of these integrations right out of the box. We want to continue to bring the tools and services for developers directly to OpenShift, and this announcement is the next phase in that evolution.
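To make the Open Service Broker API piece concrete, here is a minimal sketch of what a provisioning request looks like under that specification: the platform (OpenShift, in this case) sends a JSON payload to the broker identifying the service and plan to create. The GUIDs, parameter names, and values below are hypothetical placeholders, not real catalog entries; actual values come from the broker's `/v2/catalog` response.

```python
import json

# Hypothetical Open Service Broker API provisioning request.
# service_id and plan_id would normally be GUIDs discovered from the
# broker's /v2/catalog endpoint; these are illustrative stand-ins.
provision_request = {
    "service_id": "997b8372-8dac-40ac-ae65-758b4a5075a5",
    "plan_id": "edc2badc-d93b-4d9c-9d8e-da2f1c8c3e1c",
    "organization_guid": "example-org-guid",
    "space_guid": "example-space-guid",
    "parameters": {
        # Broker-specific parameters; "engine" here is purely illustrative.
        "engine": "aurora"
    },
}

# A broker client would PUT this JSON body to
# /v2/service_instances/{instance_id} on the broker.
body = json.dumps(provision_request)
print(body)
```

Because the request is plain JSON over HTTP, any platform that implements the broker API can provision and bind the same services — which is what makes the “works the same in any OpenShift” claim plausible.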
Now, many of you may be thinking, “Great, I can already do this today. What is so special about it now besides a nice looking web frontend?” This is where we get into what is arguably the most important component of the announcement.
Single support path
In short, AWS and Red Hat plan to develop a single support path for the use of AWS Services integrated with OpenShift. In other words, if you call Red Hat for support, then we plan to support you with these integrations backed by the specific service experts at AWS. We know customers rely on both companies, and are not keen on having to reach out to multiple organizations for assistance, so being able to call Red Hat with the knowledge that the two companies plan to collaborate on support issues is important to customers.
Let’s give an example. If your organization is using Amazon Aurora as the datastore backing your stateful OpenShift application, and you are using the shipped and supported Red Hat Integration to AWS Services, then we will help you — it’s as simple as that. We want the user and customer experience with AWS to be as seamless as that of the other services and frameworks we support within OpenShift.
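As a rough sketch of what consuming a bound service like Aurora looks like from inside an OpenShift application: connection details are typically injected into the pod as environment variables populated from a Secret created by the binding. The variable names below are assumptions for illustration, not a documented contract.

```python
import os

def aurora_dsn():
    """Build a database DSN from environment variables that a service
    binding might inject. All variable names here are hypothetical;
    the defaults exist only so the sketch runs standalone."""
    host = os.environ.get("AURORA_HOST", "localhost")
    port = os.environ.get("AURORA_PORT", "3306")
    user = os.environ.get("AURORA_USER", "app")
    name = os.environ.get("AURORA_DB", "appdb")
    # Aurora is MySQL- and PostgreSQL-compatible; this DSN assumes
    # the MySQL-compatible flavor.
    return f"mysql://{user}@{host}:{port}/{name}"

print(aurora_dsn())
```

The point of the pattern is that the application code never hard-codes where the database lives — the same container image works against Aurora on AWS or a MySQL instance in your own datacenter, which is the hybrid story in miniature.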
Speaking of services and frameworks supported…
Give me more JBoss!
I mentioned the “online first” mentality earlier, and it is evidenced by our Red Hat JBoss Middleware portfolio. 3scale and Red Hat Mobile Application Platform are public cloud service offerings, added to our library in recent years via acquisition, and both integrate with OpenShift. Taking things further, Red Hat OpenShift Application Runtimes enables cloud-native application development and, with its OpenShift integration, is also now in the “online first” world. In fact, our entire suite of JBoss offerings is moving to this model.
Why is that important? For starters, it allows for quicker and more robust creation of containerized applications and microservices. Want a quicker way to integrate your existing business services? No problem; spin up JBoss BPM Suite on OpenShift. How about a more reliable messaging service? Can do; we have JBoss AM-Q available as well. The possibilities are numerous.
However, likely the biggest benefit for customers with this change in methodology is that the work of making our middleware offerings more highly available and operationally efficient is now handled by our engineering team first (see, we had no idea back in 2010 what a precedent we were setting). So should you choose to run our solutions yourself — within your own enterprise or wherever you may deploy OpenShift with Red Hat JBoss Middleware — you’ll have access to the knowledge and code that we use to run our offerings at scale for multiple users.
All of this brings us back to the alliance announcement. Now, with Red Hat JBoss Middleware optimized to run on OpenShift, we can turn our focus to optimizing OpenShift itself to run at peak performance on AWS.
Working closely together, AWS and Red Hat will help make Kubernetes more performant on AWS. As many of you know, Kubernetes is the orchestrator at the heart of OpenShift, so a finely tuned Kubernetes helps the applications running on it run more efficiently. This goes beyond raw compute and extends into storage and networking as well. While it is still too early to know for certain, this may also include operational enhancements — either way, having AWS expertise and input in Kubernetes is good for all members of the community.
For OpenShift specifically, this is impactful. Starting upstream with OpenShift Origin, developers and contributors using a public cloud for working with our community can see benefits from these incremental enhancements. Also, OpenShift Online runs on AWS today, so customers there should be able to see improvements as these contributions progress. Same goes for customers that use OpenShift Dedicated on AWS or choose to deploy Red Hat OpenShift Container Platform on AWS — no matter the commercial offering, Red Hat is focused on providing a great customer experience.
Which, in a bit of coincidence, leads us back to where we started…
Red Hat Enterprise Linux is still the rock
You need an operating system for an effective container platform, and even with my obvious bias, I cannot think of a better one than Red Hat Enterprise Linux. Much as we do with our existing hardware certification programs today, we are seeking to deepen our alignment with AWS. As new features and instance types come out from AWS, we will seek to make Red Hat Enterprise Linux available for them. More than seven years after we delivered Red Hat Enterprise Linux on AWS, we believe the answer to the question of whether the operating system still matters is a resounding “yes”.
Where do we go from here?
What I like most about our announcement is that we have something tangible available for the ecosystem to begin working with and experiencing. At Red Hat, we believe in a “community first” development model. So if you want to try this yourself, check out this repo. If you’re short on time, and missed the keynote, then here is a video where you can see a similar demonstration:
In the coming months, I believe you will see exciting gains for the ecosystem. In fact, if you’re tracking the stack in this post, then you can see the potential benefits for the ecosystem:
- World class public cloud infrastructure
- Tightly aligned enterprise operating system designed for containers
- Optimized best-in-class orchestrator
- Seamless access to a host of diversified and supported services
It’s impossible to know whether this announcement will set us on a path of change similar to the one we started down in March 2010, but what is evident is that today is a continuation of that work — and a promise of what we seek to make happen tomorrow.