OpenShift Egress Options
In most enterprise environments, many systems are separated by firewalls, and the CSO does not want to allow the whole PaaS to access external services.
Introduction
With the egress solutions described in this blog, it is possible to keep the above-mentioned separation AND still have all the benefits of the PaaS available. These benefits include:
- High Availability.
- Ability to scale out.
- Reproducible images/router.
In the old days, around one year ago, there was no egress router in OpenShift.
Nowadays, you have several options with different techniques. The egress router is a dedicated outgoing border to destinations outside of the PaaS.
This schema shows a possible setup.
Current Options
I will describe the following solutions:
- Built-in via iproute2 package.
- Built-in via squid.
- External via a generic haproxy image.
- Your own solution (out of scope).
Okay, what’s the difference?
Solution | Multi-home | configmap | Environment | Handicaps | Source | Documentation
---|---|---|---|---|---|---
iproute2 | partly | partly | | based on macvlan | https://goo.gl/ZLA7Xx | https://goo.gl/ghjcdC
squid | partly | partly | | | https://goo.gl/iv6EC4 | https://goo.gl/ghjcdC
haproxy | full | partly | | requires knowledge about haproxy | https://goo.gl/6zZTnm | https://goo.gl/6zZTnm
Every solution given above has a valid use case.
Node Selector
All the above solutions can be placed on a dedicated node with a nodeSelector (see: Assigning Pods to Specific Nodes) in the deployment config of a project.
A dedicated nodeSelector is only possible when the project annotation key openshift.io/node-selector is set to an empty string.
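As a hedged example, the following two commands could be used; the project name egress-demo, the deployment config name egress-router, and the node label egress=true are just placeholders for this sketch.
# oc annotate namespace egress-demo openshift.io/node-selector="" --overwrite
# oc patch dc/egress-router -p '{"spec":{"template":{"spec":{"nodeSelector":{"egress":"true"}}}}}'
The first command clears the project-wide selector so that the pod-level nodeSelector takes effect; the second pins the egress pod to nodes labeled egress=true.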
iproute2 (Kernel-Based Router)
Red Hat calls this redirect mode and documented it at Deploying an Egress Router Pod in Redirect Mode.
This is the easiest option and one of the fastest because the traffic is passed by the kernel.
There are some pitfalls when you use this option, so please read the hints in the documentation very carefully.
The default configuration and script are not prepared for a multi-homed and multi-route solution.
If you want to use a router script other than the one shipped in the image, you will need to rebuild the image from the existing solution.
The egress router and the app MUST be in the same VNID (project/namespace).
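For orientation, here is a minimal sketch of such a redirect-mode egress router pod, loosely following the Red Hat documentation; the IP addresses, the destination, and the node label are placeholders you must adapt to your network, and the image name may differ in your registry.
apiVersion: v1
kind: Pod
metadata:
  name: egress-router
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"
spec:
  nodeSelector:
    site: external
  containers:
  - name: egress-router
    image: registry.access.redhat.com/openshift3/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE
      value: 192.168.12.99
    - name: EGRESS_GATEWAY
      value: 192.168.12.1
    - name: EGRESS_DESTINATION
      value: 203.0.113.25
Traffic sent to this pod's service is then redirected by the kernel to EGRESS_DESTINATION, with EGRESS_SOURCE as the source address.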
squid (HTTP Proxy)
Red Hat calls this http-proxy mode and documented it here.
The default configuration and script are not prepared for a multi-homed solution. If you want to use another router script, you will need to rebuild the image from the existing solution, or you can build your own.
This egress router CAN be in another project because it’s just a normal service with a running pod.
This option can easily be used as a generic HTTP proxy with whitelist and blacklist ACLs.
For the full syntax of the ACLs, please take a look at the squid reference manual.
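Purely as an illustration of such ACLs (the domain names are made up, and this is not the exact configuration shipped with the image), a whitelist in squid.conf could look like this:
acl whitelist dstdomain .example.com .redhat.com
http_access allow whitelist
http_access deny all
Everything not matching the whitelist is denied, which is usually what you want for an egress border.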
haproxy (Generic Proxy)
This option is a generic tcp proxy based on haproxy 1.7.
You can use this solution like any other app because it relies on the standard features of OpenShift.
I have created a Docker image which you can use out of the box, haproxy17, which is based on this source, haproxy17-centos.
There is also a Dockerfile for RHEL7, which you can use to build your own haproxy image based on RHEL7.
Components
This image has the following components.
- LUA 5
- Socklog
- haproxy 1.7
- haproxy_exporter
The versions in use are visible in the Dockerfile.
LUA
HAProxy has had the ability to use LUA scripts since version 1.6. LUA can be used at several stages. I have added this feature here just in case I will need it.
Socklog
I have described Socklog in this blog post: syslog in a container world.
haproxy 1.7
haproxy 1.7 is the main component of this egress solution. I strongly suggest reading the detailed documentation due to the huge number of features in haproxy.
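To give a rough idea of what such a generic TCP proxy looks like in haproxy terms, here is a minimal sketch; the bind port and the destination simply mirror the template parameters used in the example below, and the real image renders its configuration from those parameters rather than from a hand-written file.
defaults
    mode tcp
    timeout connect 5s
    timeout client 30s
    timeout server 30s
frontend public_tcp
    bind *:8443
    default_backend be_generic_tcp
backend be_generic_tcp
    server dest www.google.com:443
The frontend and backend names match the ones you will see later in the haproxy log output.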
haproxy_exporter
The haproxy_exporter exposes the haproxy statistics so that Prometheus can scrape them.
Setup in OpenShift
In this section, I will describe the following scenario:
I created a new project for the example. If you can’t create a new project then you can use the current one.
Requirements
- Internet Access.
- Docker Hub is reachable.
- The default domain is set.
In Short
Here is a short example which you can copy and paste, if the requirements are fulfilled.
# oc new-project test-haproxy
# oc process -f https://gitlab.com/aleks001/haproxy17-centos/raw/master/haproxy-osev3.yaml \
-p PROXY_SERVICE=test-scraper \
-p SERVICE_NAME=tst-scr-svc \
-p SERVICE_TCP_PORT=8443 \
-p SERVICE_DEST_PORT=443 \
-p SERVICE_DEST=www.google.com \
| oc create -f -
# oc rsh -c test-scraper $( oc get po -o jsonpath='{.items[*].metadata.name }')
sh-4.2$ curl -v http://test-scraper:${SERVICE_TCP_PORT}
...
Logs From haproxy
oc logs -c test-scraper $( oc get po -o jsonpath='{.items[*].metadata.name }') -f
00000001:public_tcp.accept(0007)=0009 from [172.18.12.1:36314]
00000001:be_generic_tcp.srvcls[0009:000a]
00000001:be_generic_tcp.clicls[0009:000a]
00000001:be_generic_tcp.closed[0009:000a]
...
Logs From haproxy-exporter
oc logs -c haproxy-exporter $( oc get po -o jsonpath='{.items[*].metadata.name }') -f
time="2017-09-28T17:20:01+02:00" level=info msg="Starting haproxy_exporter (version=0.8.0, branch=HEAD, revision=4ce06e84e1701827f2706fd58b1e1320a52e3967)" source="haproxy_exporter.go:476"
time="2017-09-28T17:20:01+02:00" level=info msg="Build context (go=go1.8.3, user=root@27187aec7434, date=20170824-21:39:12)" source="haproxy_exporter.go:477"
time="2017-09-28T17:20:01+02:00" level=info msg="Listening on :9101" source="haproxy_exporter.go:502"
Prometheus Stats
The template creates a route test-scraper, which is reachable like any other route in the cluster. You can now configure Prometheus to scrape the metrics from the generic router.
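Here is a hedged example of a matching Prometheus scrape configuration; the job name is an assumption, and the target points at the exporter port 9101 (seen in the log above) on the service created by the template, so adjust it to whatever hostname actually exposes the metrics in your setup.
scrape_configs:
  - job_name: haproxy-egress
    metrics_path: /metrics
    static_configs:
      - targets: ['tst-scr-svc.test-haproxy.svc:9101']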