
Quick and dirty external load balancer for Kubernetes Applications

A quick post to illustrate an easy way to load balance connections to a Kubernetes-hosted application in a non-cloud environment.
Ryan Wendel | Mar 15 2022

Here at Trek10, I am frequently exposed to Kubernetes and EKS projects. To avoid costs and general AWS-specific administrative overhead, I frequently work with a Kubernetes cluster in my home lab. While Application Load Balancers (ALB) are the go-to when load balancing web application traffic within the AWS cloud, I need a more readily available solution when working on POCs in my home lab.

In this blog post, I’ll share the solution I came up with: an Nginx instance acting as a reverse proxy, without any of the common Kubernetes ingress objects. In other words, I’ve created a quick way to load balance access to a Kubernetes service from outside the cluster.

Full disclosure: two popular Nginx ingress controllers already exist for Kubernetes that you can use to load balance requests across pods. Either would be something you might use in a production environment if you chose not to opt for the AWS Load Balancer Controller.

You can find these two Nginx projects at the following links.

https://github.com/kubernetes/ingress-nginx
https://github.com/nginxinc/kubernetes-ingress

Despite their existence, I prefer a different, easier way of sending requests to Kubernetes workloads in my lab environment. As a general preference, I also like using an external load balancer over one provisioned inside Kubernetes, because it presents a system closer to what you would work with in a cloud environment (an AWS Application Load Balancer).

With all that said, let’s jump into the nuts-and-bolts of what I’m babbling about.

I’ve created a script that runs an ephemeral Nginx Docker container that will proxy requests to services provisioned in Kubernetes. That’s it. Plain and simple but very useful when testing Kubernetes workloads.

Getting into the details, let’s kick this off by creating a Kubernetes deployment and an accompanying service. Apply the following manifest to your cluster.

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: example-foo
  namespace: default
  labels:
    app: example-foo
    deployment: foo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: example-foo
  template:
    metadata:
      labels:
        app: example-foo
        deployment: foo
    spec:
      containers:
      - name: example-foo
        image: public.ecr.aws/nginx/nginx:latest
        command: [ "/bin/sh", "-c" ]
        args:
        - echo "foo - $HOSTNAME" > /usr/share/nginx/html/index.html;
          nginx -g "daemon off;";

---
apiVersion: v1
kind: Service
metadata:
  name: foo-nodeport-svc
  labels:
    deployment: foo
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30080
  selector:
    deployment: foo
  type: NodePort

Note that I make use of the spec.externalTrafficPolicy directive in my service manifest to ensure all traffic sent to a node stays “local” to that node. As in, a request routed to node1 (by the Nginx load balancer) won’t end up being routed to node2 (by kube-proxy).
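If you want to double-check the policy once the manifest has been applied, a quick jsonpath query against the service will confirm it (the service name comes from the manifest above):

$ kubectl get svc foo-nodeport-svc -o jsonpath='{.spec.externalTrafficPolicy}'
Local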

Once applied, verify that your pods and services are provisioned and functioning properly.

$ kubectl apply -f simple.yaml

deployment.apps/example-foo created
service/foo-nodeport-svc created

$ kubectl get pods -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
NAME                           NODE
example-foo-5db6f64ccd-49qf8   k8sworker2 
example-foo-5db6f64ccd-lhz88   k8sworker1
example-foo-5db6f64ccd-nt268   k8sworker1
example-foo-5db6f64ccd-s966g   k8sworker2

Note that my test cluster utilizes two worker nodes.

$ kubectl get nodes -o json | jq -rc '.items[].status.addresses[] | (select (.type == "Hostname" or .type == "InternalIP") | .address)'

192.168.0.230
k8smaster
192.168.0.231
k8sworker1
192.168.0.232
k8sworker2

Making a request to each worker node’s NodePort service should present us with successful responses.

$ curl http://192.168.0.231:30080

foo - example-foo-5db6f64ccd-lhz88

$ curl http://192.168.0.232:30080

foo - example-foo-5db6f64ccd-49qf8
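As an aside, this is also where externalTrafficPolicy: Local shows up in practice. A node that isn’t running one of the foo pods (the control-plane node in my case, assuming nothing was scheduled there) won’t forward the NodePort traffic anywhere, so a request like the one below will simply hang or be refused instead of being bounced to a worker node.

$ curl --connect-timeout 3 http://192.168.0.230:30080   # no local foo endpoints on this node, so the request goes nowhere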

With our test infrastructure provisioned, let’s now look at how we would manually create an Nginx container to load balance connections to the NodePort services listening on TCP port 30080 on each worker node.

We first need to write a configuration file to disk for Nginx to utilize. Something like the following could be written to /tmp/conf/nginx.conf.

events {}
http {
    upstream app1 {
        server 192.168.0.231:30080;
        server 192.168.0.232:30080;
    }
    server {
        listen 30080;

        location / {
            proxy_pass http://app1;

            # disable caching
            add_header Last-Modified $date_gmt;
            add_header Cache-Control 'no-store, no-cache';
            if_modified_since off;
            expires off;
            etag off;
        }
    }
}

I’ve added some directives that instruct the requesting user-agent (browser) not to cache responses, so a fresh response is received from the web server every time a request is made. This can help quite a bit when testing minor code changes during development. Note that this configuration only handles standard HTTP requests; it will not work for requests using SSL/TLS.
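If you run the container (described below) with this HTTP configuration, a quick header check against the load-balanced address (192.168.0.128 in my examples further down) should show the Cache-Control header being injected by the proxy:

$ curl -sI http://192.168.0.128:30080 | grep -i cache-control
Cache-Control: no-store, no-cache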

The following configuration will handle both HTTP and HTTPS requests. The caching directives aren’t available here because this configuration operates at layer 4 of the OSI model instead of layer 7.

events {}
stream {
    upstream app1 {
        server 192.168.0.231:30080;
        server 192.168.0.232:30080;
    }
    server {
        listen 30080;
        proxy_pass app1;
    }
}
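Whichever configuration you go with, it’s worth validating the file before pointing a long-running container at it. Nginx’s -t flag tests a configuration without starting the server, and you can run it from the same image. If the syntax checks out, you should see output along these lines (the image’s entrypoint may print a few informational lines first):

$ docker run --rm -v /tmp/conf:/tmp/conf nginx nginx -t -c /tmp/conf/nginx.conf
nginx: the configuration file /tmp/conf/nginx.conf syntax is ok
nginx: configuration file /tmp/conf/nginx.conf test is successful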

With our configuration file written to disk, we are now ready to start an Nginx container (outside of the Kubernetes cluster) that mounts and uses this configuration. This can be accomplished with the following command.

docker run -p 30080:30080 -v /tmp/conf:/tmp/conf nginx nginx -c /tmp/conf/nginx.conf -g "daemon off;"

This command creates a Docker container running an Nginx server that listens on TCP port 30080 (on the host running the container) and load balances requests made to that port across TCP port 30080 on the cluster’s worker nodes (192.168.0.231 and 192.168.0.232).
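The command above keeps Nginx in the foreground, which is handy for watching requests come in. If you’d rather run it in the background, the usual Docker flags apply (the container name here is just an example):

$ docker run -d --rm --name k8s-lb -p 30080:30080 -v /tmp/conf:/tmp/conf:ro nginx nginx -c /tmp/conf/nginx.conf -g "daemon off;"

A docker stop k8s-lb then tears the whole thing down when you’re finished.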

The end result looks like the following diagram.

Requesting the load-balanced URL several times will see us cycle through all of the pods in the deployment we provisioned.

# while true; do curl http://192.168.0.128:30080; sleep 1; done

foo - example-foo-5db6f64ccd-49qf8
foo - example-foo-5db6f64ccd-lhz88
foo - example-foo-5db6f64ccd-49qf8
foo - example-foo-5db6f64ccd-nt268
foo - example-foo-5db6f64ccd-s966g
foo - example-foo-5db6f64ccd-nt268
foo - example-foo-5db6f64ccd-49qf8
^C

Excellent! This is exactly what we’re looking for.

We could stop there and claim success, but I figured I ought to accompany this blog post with a script that helps automate the process. Seems like the right thing to do. Someone might even find it useful!

You can use the following script to run an Nginx container in the same manner as described above with multiple listening ports and multiple Kubernetes back-end services.

https://gist.github.com/ryan-wendel/c09d24d9d25d5ca4ba8d605169e5ee86

An example of running this script using multiple services looks like the following:

./run_nginx_lb.sh -p 30080 -p 30090 -s 192.168.0.231 -s 192.168.0.232

You feed the script the ports you want to listen on and forward to back-end Kubernetes NodePort services via the “-p” flag, and your cluster’s worker nodes via the “-s” flag. Both flags can be used multiple times.
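I haven’t pasted the script here, but conceptually the configuration it drives Nginx with for the invocation above looks something like the stream config below: one upstream and one listener per “-p” port, and one server entry per “-s” node. Treat this as an illustrative sketch (the upstream names are made up), not the script’s exact output.

events {}
stream {
    upstream app_30080 {
        server 192.168.0.231:30080;
        server 192.168.0.232:30080;
    }
    upstream app_30090 {
        server 192.168.0.231:30090;
        server 192.168.0.232:30090;
    }
    server {
        listen 30080;
        proxy_pass app_30080;
    }
    server {
        listen 30090;
        proxy_pass app_30090;
    }
}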

Reference the following diagram for what this looks like once executed.

And that’s all folks! Thanks for hanging out with me for a bit. I hope you were able to take something from this post and find this technique useful.
