nginx ingress with istio

Using nginx ingress with istio service mesh

So you started using Kubernetes (which is great). To get traffic into your cluster you need to choose an ingress implementation. Luckily, several are available, and there are a lot of good reasons to choose nginx as your ingress. After all, nginx is a great web server and makes a great ingress for your Kubernetes installation.

But now you read about service meshes and want one. We decided to go with istio. There are probably other great service meshes out there (keep an eye on conduit), but for us istio was a great match. If you follow the tutorials, they want you to install an istio ingress. But you already have a nice nginx-based ingress, and in our case we were using some features available only in nginx and not in istio. So how are we gonna use nginx with istio and vice versa?

First attempt: simply use the sidecar

Since it is our goal to receive traffic on our nginx ingress and immediately push this traffic into our service mesh, I thought it would be a great idea to simply inject the istio sidecar into our ingress controller deployment. No sooner thought than done, thanks to the great istio inject command. Everything deployed as expected, but the test site wouldn’t load. A closer look at the connection via curl showed an interesting error:

* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to
* stopped the pause stream!
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to

We aren’t even getting through the TLS handshake, let alone sending our HTTP request. So simply injecting the istio sidecar wasn’t gonna cut it.
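For reference, injecting the sidecar into an existing deployment is typically done with istioctl’s kube-inject command. A sketch (the manifest filename is hypothetical, and flags differ between istio versions):

```shell
# Inject the istio sidecar and init containers into the deployment
# manifest, then apply the modified manifest to the cluster.
istioctl kube-inject -f nginx-ingress-controller.yaml | kubectl apply -f -
```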

What does the istio sidecar do?

The istio sidecar is simply an envoy proxy plus some logic that generates an envoy configuration from the state of your service mesh. But none of this is what breaks our TLS connection. Istio also injects an init container (actually two of them). The proxy_init container calls a script which configures the iptables for this pod. This is done so that istio can act as a transparent proxy: our application doesn’t have to care about setting proxy variables, configuring its HTTP client, and so on. But these iptables rules also make sure that all incoming traffic to this pod is routed through envoy. In the default use case for the istio sidecar this is totally fine and exactly what you want.
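Stripped down, the rules that proxy_init installs amount to something like the following sketch (not the actual script, which handles port lists, exclusion ranges and more; port 15001 and UID 1337 match the nat table dump below):

```shell
# Hypothetical sketch of what the proxy_init script sets up.
# Redirect all inbound TCP traffic to envoy's listener on port 15001:
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15001
# Redirect outbound TCP traffic too, but skip envoy's own traffic
# (envoy runs as UID 1337) to avoid a redirect loop:
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner 1337 -j REDIRECT --to-port 15001
```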

So in essence, all of our incoming as well as our outgoing traffic is redirected through our envoy sidecar proxy with the help of iptables. But all we want is for our outgoing traffic to be redirected.

In case you are interested, the resulting nat table looks like this:

Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 129K 7719K ISTIO_REDIRECT  all  --  any    any     anywhere             anywhere             /* istio/install-istio-prerouting */

Chain INPUT (policy ACCEPT 129K packets, 7719K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 542K packets, 50M bytes)
 pkts bytes target     prot opt in     out     source               destination
 444K   27M ISTIO_OUTPUT  tcp  --  any    any     anywhere             anywhere             /* istio/install-istio-output */

Chain POSTROUTING (policy ACCEPT 842K packets, 68M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain ISTIO_OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
 299K   18M ISTIO_REDIRECT  all  --  any    lo      anywhere            !localhost            /* istio/redirect-implicit-loopback */
 143K 8564K RETURN     all  --  any    any     anywhere             anywhere             owner UID match 1337 /* istio/bypass-envoy */
 1282 76920 RETURN     all  --  any    any     anywhere             localhost            /* istio/bypass-explicit-loopback */
  903 54180 ISTIO_REDIRECT  all  --  any    any     anywhere              /* istio/redirect-ip-range- */
    5   300 RETURN     all  --  any    any     anywhere             anywhere             /* istio/bypass-default-outbound */

Chain ISTIO_REDIRECT (3 references)
 pkts bytes target     prot opt in     out     source               destination
 429K   26M REDIRECT   tcp  --  any    any     anywhere             anywhere             /* istio/redirect-to-envoy-port */ redir ports 15001

If we modify the PREROUTING chain we can exclude traffic from the redirect to istio.

Fixing the iptables for usage in an ingress

So now we identified our main problem. Traffic is not hitting nginx directly, but our envoy sidecar proxy. Since we now know that all this magic is realised via iptables, we can start modifying the iptables rules.

A first look at the init script of the proxy_init init container suggests that we could simply set the ISTIO_LOCAL_EXCLUDE_PORTS environment variable to achieve what we want. At least if we also set ISTIO_INBOUND_PORTS to *, it should work, but it actually doesn’t. I don’t really know why, so if you have an idea I would be glad to hear from you.
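For reference, that failed attempt looked roughly like this (the image tag is a placeholder: use whatever your istio release ships):

```yaml
# Hypothetical variant of the generated istio-init container with the
# documented environment variables set. This did NOT work for us.
- name: istio-init
  image: docker.io/istio/proxy_init:<your istio version>
  env:
  - name: ISTIO_INBOUND_PORTS
    value: "*"
  - name: ISTIO_LOCAL_EXCLUDE_PORTS
    value: "80,443"
```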

But you are not forced to rely on the init script of the proxy_init container. My current, very simplistic solution is to create another init container from the proxy_init image (because it already has all the necessary tools to manipulate iptables) and overwrite the container’s command. This allows us to spawn a bash process which executes a single iptables command. The whole construction looks like this:

      - name: ingress-iptables
        # reuse the proxy_init image from your istio installation
        image: docker.io/istio/proxy_init:<your istio version>
        command:
        - /bin/bash
        - -c
        - iptables -t nat -I PREROUTING -p tcp -m multiport --dports 80,443 -j RETURN
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true

As you can see, all this does is create a rule in the PREROUTING chain of the nat table to exclude traffic received via TCP on ports 80 and 443 from the istio redirect. Since we are manipulating iptables, the privileged security context is required, but you could probably also get away with granting the container only the NET_ADMIN capability.
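If you want to try dropping privileged mode, the capability-based variant would look roughly like this (untested on our side):

```yaml
# Hypothetical alternative to privileged: true, granting only the
# capability needed to modify iptables rules.
securityContext:
  capabilities:
    add:
    - NET_ADMIN
```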

The best of both worlds

After adding this init container you can now use nginx as your ingress. nginx will be configured via its ingress controller and make its requests to upstream services as usual. The only difference is that all outgoing requests from nginx are now routed through envoy into our service mesh.

Tue Jan 23, 2018