Red Hat OpenShift Service Mesh is now available.

As Kubernetes and Linux-based infrastructure take hold in digitally transforming organizations, modern applications frequently run in a microservices architecture, which can involve complex routing of requests from one service to another. With Red Hat OpenShift Service Mesh, we’ve gone beyond routing requests between services and included tracing and visualization components that make deploying a service mesh more robust. This service mesh layer helps simplify the connection, observability and ongoing management of every application deployed on Red Hat OpenShift, the industry’s most comprehensive enterprise Kubernetes platform.

Red Hat OpenShift Service Mesh is available through the OpenShift Service Mesh Operator, and we encourage teams to try it out on Red Hat OpenShift 4.

Better track, route and optimize application communication

With hardware-based load balancers, bespoke network devices and more being the norm in traditional IT environments, it has been complex, if not nearly impossible, to manage and govern service-to-service communication between applications and their services in a consistent, general-purpose way. With a service mesh management layer, containerized applications can better track, route and optimize communications with Kubernetes as the core. A service mesh can help manage hybrid workloads in multiple locations and is more granularly aware of data locality. With the official introduction of OpenShift Service Mesh, we believe this critical layer of the microservices stack has the power to further enable multicloud and hybrid cloud strategies in the enterprise.

OpenShift Service Mesh is based on a set of community projects, including Istio, Kiali and Jaeger, providing capabilities that encode the communication logic for microservices-based application architectures. This helps free development teams to focus on coding applications and services that add real business value.
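To make that idea of encoding communication logic concrete, here is a minimal sketch of one common pattern: an Istio VirtualService that splits traffic between two versions of a service, applied with the Kubernetes Python client. The service, namespace and subset names below are illustrative assumptions, not part of any OpenShift default.

```python
# Hedged sketch: weighted routing between two versions of a hypothetical
# "reviews" service, expressed as an Istio VirtualService and created with
# the Kubernetes Python client. Names and weights are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # assumes a logged-in kubeconfig context

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews", "namespace": "bookinfo"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                # Send 90% of traffic to v1 and 10% to v2 of the service.
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="bookinfo",
    plural="virtualservices",
    body=virtual_service,
)
```

Because the routing rule lives in the mesh rather than in application code, the split can be adjusted or rolled back without redeploying either version of the service.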

Making Developers’ Lives Easier

As we said before, until the arrival of service mesh, much of the burden of managing these complex service interactions has been placed on the application developer. Developers need a set of tools that help them manage the entire application lifecycle, from determining whether their code deployed successfully all the way to managing traffic patterns in production. Each service needs to properly interact with other services for the complete application to run. Tracing gives developers a way to track how each service interacts with the others and to determine whether there are latency bottlenecks as they operate together.
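As an illustration of how this works in practice: in Istio-based meshes the sidecar proxies generate the spans, but each service typically needs to forward the tracing headers it receives on its outbound calls so that Jaeger can stitch the individual hops into a single trace. The service names and downstream URL in this minimal Flask sketch are hypothetical.

```python
# Hedged sketch: propagate the B3/tracing headers received from the sidecar
# so that downstream calls join the same Jaeger trace. Service names are
# hypothetical; the header list follows common Istio guidance.
import requests
from flask import Flask, request

app = Flask(__name__)

TRACE_HEADERS = [
    "x-request-id", "x-b3-traceid", "x-b3-spanid",
    "x-b3-parentspanid", "x-b3-sampled", "x-b3-flags",
]

@app.route("/checkout")
def checkout():
    # Copy the incoming trace headers onto the outbound request so the
    # next hop is recorded as part of the same end-to-end trace.
    headers = {h: request.headers[h] for h in TRACE_HEADERS if h in request.headers}
    resp = requests.get("http://inventory:8080/reserve", headers=headers)
    return resp.text
```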

The ability to visualize the connections between all of the services and to look at the topology of how they interconnect can also help in understanding these complex interactions. By packaging these features together as part of OpenShift Service Mesh, Red Hat is making it easier for developers to access more of the tools they need to successfully develop and deploy cloud-native microservices.

To ease implementation, Red Hat OpenShift Service Mesh can be added to your current OpenShift instance through the OpenShift Service Mesh Operator. The logic for installing, connecting and managing all of the components is built into the Operator, allowing you to get right to managing the service mesh for your application deployments.
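As a rough sketch of what that looks like in practice, assuming the Operator is already installed and watching for ServiceMeshControlPlane resources in the maistra.io API group (the exact group version, namespace and spec fields shown here are assumptions to verify against the documentation for your release), creating a control plane can be as simple as applying a small custom resource:

```python
# Hedged sketch: ask the OpenShift Service Mesh Operator to create a control
# plane by applying a ServiceMeshControlPlane custom resource. Field names
# and defaults are assumptions; consult the product docs for your version.
from kubernetes import client, config

config.load_kube_config()

control_plane = {
    "apiVersion": "maistra.io/v1",
    "kind": "ServiceMeshControlPlane",
    "metadata": {"name": "basic-install", "namespace": "istio-system"},
    "spec": {},  # assumed: an empty spec deploys the stock configuration
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="maistra.io",
    version="v1",
    namespace="istio-system",
    plural="servicemeshcontrolplanes",
    body=control_plane,
)
```

The Operator reacts to this resource and handles deploying and wiring together the mesh components, which is what lets you skip the manual installation steps described above.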

By reducing the overhead of implementing and managing the service mesh, it becomes easier to introduce the concept earlier in your application lifecycle and enjoy the benefits before things start to get out of hand. Why wait until it’s too late and you’re overwhelmed managing the communication layer? OpenShift Service Mesh puts the capabilities you will need in place, in an easy-to-implement fashion, before you start to scale.

The features that OpenShift customers can benefit from with OpenShift Service Mesh include:

  • Tracing and Measurement (via Jaeger): While enabling a service mesh can come with performance trade-offs in exchange for easier management, OpenShift Service Mesh now measures baseline performance. This data can be analyzed and used to drive future performance improvements.
  • Visualization (via Kiali): Observability of the service mesh provides an easier way to view its topology and to see how the services are interacting with one another.
  • Service Mesh Operator: A Kubernetes Operator minimizes the administrative burden of application management, automating common tasks such as installation, maintenance and lifecycle management of the service. Packaging this operational logic with the applications helps ease management and brings the latest upstream features to customers sooner. The OpenShift Service Mesh Operator deploys the Istio, Kiali and Jaeger bundles along with the configuration logic needed to implement all of the features at once (see the sketch after this list for how application projects can be joined to a running mesh).
  • Multiple interfaces for networking (via Multus): OpenShift Service Mesh removes manual steps and enables developers to execute code with increased security via Security Context Constraints (SCCs). This allows additional lockdown of workloads on the cluster, such as defining which workloads in a namespace can and cannot run as root. With this feature, developers get the usability of Istio while cluster admins gain well-defined security controls.
  • Integration with Red Hat 3scale API Management: For developers or operations teams looking to further secure API access across services, OpenShift Service Mesh ships with the Red Hat 3scale Istio Mixer Adapter. This allows for deeper control over the API layer of service communications, beyond the traffic management capabilities of the service mesh.
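Following on from the Operator bullet above, here is a hedged sketch of how application projects are typically enrolled in the mesh once the control plane is running, using a ServiceMeshMemberRoll that lists the member namespaces. The project names are hypothetical, and the maistra.io group and version are assumptions to confirm against your installed version.

```python
# Hedged sketch: join two hypothetical application projects to the mesh by
# listing them in a ServiceMeshMemberRoll. API group/version are assumptions.
from kubernetes import client, config

config.load_kube_config()

member_roll = {
    "apiVersion": "maistra.io/v1",
    "kind": "ServiceMeshMemberRoll",
    "metadata": {"name": "default", "namespace": "istio-system"},
    "spec": {"members": ["bookinfo", "payments"]},  # hypothetical project names
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="maistra.io",
    version="v1",
    namespace="istio-system",
    plural="servicemeshmemberrolls",
    body=member_roll,
)
```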

Looking to the future of service mesh capabilities and interoperability, we introduced the Service Mesh Interface (SMI) earlier this year. Our hope in collaborating across the industry is to continue abstracting these components to make it easier for service meshes to run on Kubernetes. This collaboration can help maximize choice and flexibility for Red Hat OpenShift customers and, once applied, has the potential to bring a “NoOps” environment for developers closer to reality.

Try OpenShift

Service mesh can help ease the journey of operating a microservices-based stack across the hybrid cloud. Customers with a growing Kubernetes- and container-based environment are invited to try out Red Hat OpenShift Service Mesh.

Learn more: