Hassle-free network security as fast as your developers can release code

Cloud-native applications are emerging quickly, yet developers, security professionals, and operations teams are struggling to keep pace: they must balance frequent code updates, varying cloud and platform interfaces, and rapidly evolving container technology. Today's enterprise data center is becoming increasingly global, and a fundamental question remains: how does your security runbook scale with your enterprise software rollouts in a cloud-native world? We're excited to announce that Aporeto has partnered with Red Hat to help solve these challenges.

DevOps security experts agree that while there is no one-size-fits-all approach to application security, there is a general set of priorities and policies for full-stack security. Even if a web application encrypts its data and restricts access to authorized users, both measures are insufficient without the most fundamental aspect of hosted software security: network zoning and containment.

The Red Hat OpenShift Container Platform gives developers a multi-tenant, multi-cloud capable container platform that automates your development pipeline (CI/CD) with source-to-image (S2I) functionality, Docker image management, and container orchestration based on Kubernetes. Kubernetes provides an API to define network segmentation policies for traffic between your application containers - enforcement of these policies, however, has been left to the community.

Aporeto is proud to partner with Red Hat on an implementation of Kubernetes network policy enforcement for OpenShift via Trireme, which we've adapted for Kubernetes. Trireme is an open-source network enforcer that reads the network policy defined in OpenShift and Kubernetes and defends your application at the source - at the initial TCP three-way handshake.

Once you've configured OpenShift and Kubernetes to permit traffic between labeled pods, Trireme automatically enforces your network segmentation definition, allowing and dropping connections per policy. Trireme extends Kubernetes' intended-state enforcement with Zero Trust network functionality - end-to-end authentication, authorization, and encryption.

Setting up Trireme is easy - anywhere you can run Docker, you can run Trireme! Trireme requires no special kernel modules and has no external dependencies - no central controller, VLANs, or other SDN complexities. Trireme gives your DevOps teams the ability to go fast yet stay secure.

The steps outlined in this guide will help you get started with Trireme.

Prerequisites

NOTE: The example output in this guide is from an Aporeto lab configuration, using a fresh install of OpenShift via "oc cluster up" with default settings.

Since the trireme-kubernetes containers run in privileged mode, the OpenShift host must be configured to allow privileged containers. In addition, trireme-kubernetes requires access to network interfaces and storage, so the associated serviceaccount needs the equivalent of the OpenShift cluster-admin role. Due to these requirements, the only supported OpenShift deployment method is the OpenShift Container Platform; OpenShift Dedicated and OpenShift Online are not supported.

You must have command-line access to OpenShift via oc and oc adm. The following steps assume system:admin access. If you cannot log in as system:admin, consult your OpenShift administrator to configure equivalent access.
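You can confirm your current identity at any time with oc whoami:

% oc whoami
system:admin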

Obtain trireme-kubernetes from GitHub.

% git clone https://github.com/aporeto-inc/trireme-kubernetes.git
Cloning into 'trireme-kubernetes'...
remote: Counting objects: 1029, done.
remote: Total 1029 (delta 0), reused 0 (delta 0), pack-reused 1028
Receiving objects: 100% (1029/1029), 241.37 KiB | 294.00 KiB/s, done.
Resolving deltas: 100% (554/554), done.

% docker --version
Docker version 1.12.6, build 96d83a5/1.12.6

% oc version
oc v1.4.0-alpha.1+f189ede
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.88.131:8443
openshift v1.4.0-alpha.1+f189ede
kubernetes v1.4.0+776c994

OpenShift Preparatory Commands (see the video for these steps)

Log in with admin credentials.

% oc login -u system:admin
Logged into "https://192.168.88.131:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

default
kube-system
* myproject
openshift
openshift-infra

Using project "myproject".

Change the OpenShift project scope to one in which you will run trireme-kubernetes. In this example, we use the default project in OpenShift.

NOTE: If you want to use a different OpenShift project, adjust the parameters appropriately for the commands that deal with serviceaccounts and trireme secrets.

% oc project default
Now using project "default" on server "https://192.168.88.131:8443".

Add the appropriate serviceaccount to the privileged security context constraint (SCC). In this example, we use the default service account (system:serviceaccount:default:default) for the default project.

% oc edit scc privileged
...
users:
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:router
- system:serviceaccount:default:default
...
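Alternatively, oc adm can add the serviceaccount to the SCC in a single step; the following should be equivalent to the manual edit above:

% oc adm policy add-scc-to-user privileged system:serviceaccount:default:default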

Verify the users for the "privileged" scc.

% oc describe scc privileged
Name: privileged
Priority:
Access:
Users:
system:serviceaccount:openshift-infra:build-controller,system:serviceaccount:default:router,system:serviceaccount:default:default
...

Bind the appropriate service account to the cluster-admin role. In this example, we use the default service account.

% oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:default:default

Verify the cluster-admin role binding for the appropriate service account.

% oc describe clusterPolicyBindings :default
Name: :default
Namespace:
Created: About an hour ago
Labels:
Annotations:
Last Modified: 2017-03-16 13:07:21 -0700 PDT
Policy:
RoleBinding[basic-users]:
Role: basic-user
Users:
Groups: system:authenticated
ServiceAccounts:
Subjects:
RoleBinding[cluster-admins]:
Role: cluster-admin
Users: system:admin
Groups: system:cluster-admins
ServiceAccounts: default/default
Subjects:
...

Trireme Execution Commands (see the video for these steps)

NOTE: This section is similar to the steps described in trireme-kubernetes/deployment/README.md, with some additional instructions for OpenShift. In this example, we will use the simpler PSK method for node authentication (vs. PKI). For the demonstration application, we will create a "demo" project; trireme-kubernetes itself, however, will run in the "default" project.

NOTE: You should still be in the "default" OpenShift project.

In a command shell, navigate to the trireme-kubernetes/deployment/Trireme/KubeDaemonSet directory.

Edit daemonSetPSK.yaml to set the PSK and the proper TRIREME_NETS subnet value. TRIREME_NETS is hard-coded in the repository as 10.0.0.0/8, but you must change it to your OpenShift container subnet. In the Aporeto lab environment (via oc cluster up), containers default to 172.17.0.0/16 for container IP addresses.
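If you are unsure of your container subnet, one way to check it on the node - assuming the default Docker bridge network, as used by oc cluster up - is:

% docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' bridge
172.17.0.0/16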

% cat daemonSetPSK.yaml
...
      containers:
        - name: trireme
          image: aporeto/trireme-kubernetes
          env:
            - name: SYNC_EXISTING_CONTAINERS
              value: "true"
            - name: TRIREME_NETS
              value: 172.17.0.0/16
            - name: TRIREME_AUTH_TYPE
              value: PSK
            - name: TRIREME_PSK
              valueFrom:
                secretKeyRef:
                  name: trireme
                  key: triremepsk
...

Create and verify the trireme secret.

% sh createPSK.sh
Attempting to generate PSK
secret "trireme" created
% oc get secrets
NAME                       TYPE                                  DATA      AGE
builder-dockercfg-obib3    kubernetes.io/dockercfg               1         1h
builder-token-au3nw        kubernetes.io/service-account-token   4         1h
builder-token-s454w        kubernetes.io/service-account-token   4         1h
default-dockercfg-m12x8    kubernetes.io/dockercfg               1         1h
default-token-hmu84        kubernetes.io/service-account-token   4         1h
default-token-zjqkh        kubernetes.io/service-account-token   4         1h
deployer-dockercfg-e77ts   kubernetes.io/dockercfg               1         1h
deployer-token-dfhra       kubernetes.io/service-account-token   4         1h
deployer-token-o74u4       kubernetes.io/service-account-token   4         1h
registry-dockercfg-i72vx   kubernetes.io/dockercfg               1         1h
registry-token-eqarq       kubernetes.io/service-account-token   4         1h
registry-token-r90bq       kubernetes.io/service-account-token   4         1h
router-certs               kubernetes.io/tls                     2         1h
router-dockercfg-n1dh3     kubernetes.io/dockercfg               1         1h
router-token-e0ry3         kubernetes.io/service-account-token   4         1h
router-token-flh98         kubernetes.io/service-account-token   4         1h
trireme                    Opaque                                1         9s
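createPSK.sh generates a random pre-shared key and stores it in a secret named trireme. If you prefer to create the secret by hand, something along these lines should be equivalent (the key name triremepsk must match the secretKeyRef in daemonSetPSK.yaml):

% oc create secret generic trireme --from-literal=triremepsk="$(openssl rand -base64 32)"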

Create and verify the trireme-kubernetes daemonset.

% oc create -f daemonSetPSK.yaml
daemonset "trireme" created
% oc get daemonset
NAME      DESIRED   CURRENT   NODE-SELECTOR   AGE
trireme   1         1         <none>          2m
% oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-objnf   1/1       Running   231        59m
router-1-gbf5j            1/1       Running   227        59m
trireme-4ejgm             1/1       Running   5          3m
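To watch the enforcer accept and drop connections per policy, tail the trireme pod's logs (substitute the pod name from your own oc get pods output):

% oc logs -f trireme-4ejgm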

WARNING: An improperly configured cluster-admin role binding (or its equivalent) will cause the OpenShift docker-registry and router pods to fail, and insufficient privileged container creation access will prevent the creation of the trireme pod.

Things to look out for:

  • Proper project/namespace scoping for secrets and for where you run pods
  • A correct TRIREME_NETS value in the daemonset YAML

Sample Application for Demonstrating Network Segmentation (see the video for these steps)

NOTE: This section is similar to the demo example steps in the trireme-kubernetes GitHub repository.

In a command shell, navigate to the trireme-kubernetes/deployment/PolicyExample directory.

Create the demo namespace with a DefaultDeny network policy.

% oc create -f DemoNamespace.yaml
namespace "demo" created

Create and verify the sample networkpolicy resources for Kubernetes in the demo namespace. In this example, our backend-policy permits ingress traffic to pods with the BusinessBackend role only from pods with the WebFrontend and BusinessBackend roles; no other pods may access the BusinessBackend pods, as sketched below.
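Demo3TierPolicy.yaml ships with the repository (the file there is authoritative); as a rough sketch under this version's beta NetworkPolicy API, the backend-policy it defines expresses the following:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: demo
spec:
  podSelector:
    matchLabels:
      role: BusinessBackend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: WebFrontend
        - podSelector:
            matchLabels:
              role: BusinessBackend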

% oc create -f Demo3TierPolicy.yaml
networkpolicy "frontend-policy" created
networkpolicy "backend-policy" created
networkpolicy "database-policy" created
% oc get networkpolicy --namespace=demo
NAME              POD-SELECTOR           AGE
backend-policy    role=BusinessBackend   1m
database-policy   role=Database          1m
frontend-policy   role=WebFrontend       1m

Create the sample External, WebFrontend, and BusinessBackend pods in the demo namespace.

% oc create -f Demo3TierPods.yaml
pod "external" created
pod "frontend" created
pod "backend" created
% oc get pods --namespace=demo
NAME       READY     STATUS    RESTARTS   AGE
backend    1/1       Running   0          12s
external   1/1       Running   0          12s
frontend   1/1       Running   0          12s

Obtain the IP addresses for the demonstration pods.

% oc describe pods --namespace=demo | grep '^Name:\|IP'
Name: backend
IP: 172.17.0.3
Name: external
IP: 172.17.0.2
Name: frontend
IP: 172.17.0.5

Verify connectivity from WebFrontend to BusinessBackend via wget.

% oc exec --namespace=demo -it frontend /bin/bash
root@frontend:/data# wget http://172.17.0.3
converted 'http://172.17.0.3' (ANSI_X3.4-1968) -> 'http://172.17.0.3' (UTF-8)
--2017-03-16 22:12:11-- http://172.17.0.3/
Connecting to 172.17.0.3:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 [text/html]
Saving to: 'index.html'

index.html 100%[=====================>] 612 --.-KB/s in 0s

2017-03-16 22:12:11 (191 MB/s) - 'index.html' saved [612/612]

root@frontend:/data# exit
exit

Verify that connectivity cannot be established between External and BusinessBackend via wget.

% oc exec --namespace=demo -it external /bin/bash
root@external:/data# wget http://172.17.0.3
converted 'http://172.17.0.3' (ANSI_X3.4-1968) -> 'http://172.17.0.3' (UTF-8)
--2017-03-16 22:14:55-- http://172.17.0.3/
Connecting to 172.17.0.3:80... ^C
root@external:/data# exit
exit

Try modifying and resubmitting the Demo3TierPolicy.yaml file to allow External to access BusinessBackend.
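One way to do it - a sketch, assuming the external pod carries a role=External label as its name suggests - is to add a third podSelector to the from clause of backend-policy and resubmit the policy:

        - podSelector:
            matchLabels:
              role: External

% oc replace -f Demo3TierPolicy.yaml --namespace=demo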

Thank you for evaluating Trireme on Red Hat OpenShift! For more information about how Aporeto can help secure your cloud-native application, visit us at https://www.aporeto.com/ or, for more immediate communication, sign up for our Slack channel at https://triremehq.signup.team/.