DigitalOcean External Service LB

DigitalOcean provides managed Kubernetes clusters; however, exposing services externally normally requires a DigitalOcean load balancer. This approach has several drawbacks, chiefly performance limitations, followed by feature limitations.

This guide explains how to use Nova ADCs instead of DigitalOcean load balancers on K8S clusters within DO.

External Load Balancer!

This guide is to replace the external service load balancer in DigitalOcean (or other clouds), not to deploy Nova into Kubernetes. For that please follow the guide here.

K8S Configuration and NodePort

For this demonstration we will use the example Kubernetes Guestbook application, though any service will work. As you'll see below, we have a two-node cluster:

On that Kubernetes cluster we have deployed the Guestbook application, which deploys a service called "frontend". If we describe that service we can see the NodePort that has been allocated (note: with this setup the service type must be NodePort rather than LoadBalancer).

❯ kubectl describe svc frontend
Name:                     frontend
Namespace:                default
Labels:                   app=guestbook
Annotations:              <none>
Selector:                 app=guestbook,tier=frontend
Type:                     NodePort
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31971/TCP
Endpoints:      ,,
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The key detail here is the NodePort: the service is exposed on port 31971 on every node in the cluster.
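If you want to grab just the NodePort programmatically rather than reading it from the describe output, a jsonpath query works (assuming, as here, the service is named frontend in the default namespace):

```
❯ kubectl get svc frontend -o jsonpath='{.spec.ports[0].nodePort}'
31971
```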

Node IPs

Now that we have the NodePort (31971 in our case) we need to know the IP addresses to send traffic to. These are the actual IPs of the droplets in our Kubernetes cluster, not the Endpoints within Kubernetes. Go to Droplets and you can see them, as shown below:
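If you prefer the command line to the DigitalOcean console, the droplet IPs are also visible from kubectl in the wide node listing (look at the EXTERNAL-IP column):

```
❯ kubectl get nodes -o wide
```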

Note those IPs, and ensure your firewall setup at DO allows traffic to them. Then connect to your NodePort from above (31971 for us) on those IPs to verify:

❯ curl http://<node-ip>:31971
<html ng-app="redis">
    <link rel="stylesheet" href="//">
    <script src=""></script>
    <script src="controllers.js"></script>
    <script src=""></script>
  <body ng-controller="RedisCtrl">
    <div style="width: 50%; margin-left: 20px">
    <input ng-model="msg" placeholder="Messages" class="form-control" type="text" name="input"><br>
    <button type="button" class="btn btn-primary" ng-click="controller.onRedis()">Submit</button>
      <div ng-repeat="msg in messages track by $index">

That means we can load balance to this service from Nova nodes in DigitalOcean.

Deploying Nova

You have multiple options for how to deploy Nova into DigitalOcean. We recommend adding DigitalOcean as a Connected Cloud and deploying directly into it: either a fixed number of droplets, or an Autoscaler which automatically provisions however many droplets are needed.

If for some reason you need a custom install, you can also run Nova on any stock Ubuntu system, so just launch your own Ubuntu droplets.

You can follow the cloud guide or the manual install guide.

You need at least one Nova droplet deployed into the environment to eventually load the ADC onto. These are standard droplets outside of the K8S cluster.

Configuring Nova Backend

There are two things to configure on Nova - a backend, and an ADC.

For the Backend you have options as well. The backend defines where traffic is sent; in this case, that is your K8S node IPs and NodePort.

You can use a Simple Backend, where you specify the two node IPs and ports directly (remember to use the IPs and NodePort you discovered above):

The simple backend looks like this on Nova:

Alternatively, you can use DigitalOcean's tags. Add a Cloud API backend, enter port 31971 (in our case), and choose the tag "k8s" if you only have one managed Kubernetes installation.

The Cloud API backend looks like this on Nova:

Configuring Nova ADC

Now that we have the backend, the ADC part is easy! Add an ADC, choosing a type (typically HTTP or SSL Termination), and set it to listen on port 80, port 443, or whichever port you need.

Under the Backends section, set it to send traffic to the Backend you just added, i.e. your Kubernetes service.

Then configure any other options you want, and save. At this point you can attach the ADC to your new droplet(s) in DigitalOcean and you'll be online!

Please contact us if you need any assistance with the deployment.


  1. You can define a static NodePort so this behaviour is more predictable in your Kubernetes services.
  2. You can also manually publish any local ingress services on Kubernetes and use this functionality with it.
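As an example of point 1, a static NodePort can be pinned in the service manifest. The sketch below assumes the Guestbook frontend service and uses 30080 as an arbitrary port in the default NodePort range (30000-32767):

```
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: guestbook
    tier: frontend
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # fixed NodePort, so the Nova backend config never changes
```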