Load Balancers
MetaKube allows you to easily publish your Service behind a load balancer with an external IP address. The load balancers are provided by OpenStack Octavia.
Configuration
MetaKube will automatically create an Octavia load balancer for each Service with spec.type: LoadBalancer.
Minimal LoadBalancer Service manifest
A minimal Kubernetes manifest that creates a load balancer:
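For example (a sketch; the name, selector, and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080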
externalTrafficPolicy
There are two possible values for spec.externalTrafficPolicy.
The option determines whether each node balances connections across all endpoints of the Service (Cluster, the default) or only across endpoints that reside on that node (Local).
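For example, to opt into Local (a sketch showing only the relevant fields):

spec:
  type: LoadBalancer
  externalTrafficPolicy: Local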
- Local

  The node will only forward packets directly to Pods on the same node. If the node doesn't have a Pod of the Service, traffic to the NodePort is dropped.

  When choosing Local, Kubernetes will answer health check requests (HTTP) on a separate node port (spec.healthCheckNodePort) selected automatically by Kubernetes. This allows the load balancer to exclude nodes without a local Pod from receiving traffic.

  Note
  OpenStack security groups must allow ingress on both the Service's nodePort and healthCheckNodePort.

  Recommendation
  We strongly recommend setting externalTrafficPolicy: Local, as it has several benefits:
  - One less network hop, as packets are routed locally
  - No masquerading (SNAT) required
  - Reduced latency (the TCP and TLS handshakes alone cause 6 trips)
  - No superfluous port allocation and connection tracking resulting from SNAT

- Cluster (default)

  The node takes the role of a proxy and forwards packets to a Pod from the Service's endpoints (likely on a different node). These packets have to be masqueraded (SNAT) to preserve the return route, so they may traverse two nodes before reaching the Pod.

  Warning
  The additional port allocation related to SNAT may cause port collisions and failures in connection tracking.

Original client IP

The load balancer (L4) maintains two separate connections: one with the client (downstream) and one with the Kubernetes node/Pod (upstream). This means that the application in Kubernetes always sees the load balancer's IP as the source IP.

To preserve the original client IP with a load balancer, you may use the Proxy Protocol.
Proxy Protocol
The Proxy Protocol is an industry standard to pass client connection information through load balancers to the destination server. Activating the Proxy Protocol allows you to retain the original client IP address and see it in your application.
To use the Proxy Protocol, specify the following annotation in your Service:
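A minimal sketch, assuming the openstack-cloud-controller-manager's proxy-protocol annotation:

metadata:
  annotations:
    loadbalancer.openstack.org/proxy-protocol: "true"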
Warning
Changing the annotation after the creation of the load balancer does not reconfigure the load balancer!
To migrate a Service of type LoadBalancer to Proxy Protocol, you need to recreate the Service.
To avoid losing the associated Floating IP, see Keeping load balancer floating IP.
Caveat: cluster-internal traffic
Note
This issue only affects Kubernetes versions older than 1.30
A fix for this issue was added in Kubernetes 1.30: the load balancer's IP mode (LoadBalancerIPMode) is automatically set to Proxy when Proxy Protocol is enabled.
Note
This issue does not affect every proxy. Some proxies (e.g. Traefik) accept connections both with and without a Proxy Protocol header.
The Proxy Protocol adds a header at the beginning of the connection, and your reverse proxy (Ingress Controller) will usually expect it. Connections where the header is not prepended to the payload will likely lead to a parsing error.
This causes issues in conjunction with hairpin NAT:
Let's say a Pod sends a request to the address of the load balancer, behind which sits a typical reverse proxy with Proxy Protocol enabled. Cilium will intercept the packets because it knows their final destination (the reverse proxy Pods) and send them there directly. The request thus bypasses the load balancer, which would normally add the Proxy Protocol header, so the header is omitted. Your reverse proxy will return an error because it fails to parse the payload, since it expects the Proxy Protocol header.
Typical situations where this problem appears are:
- Cert-manager with http01 challenges
- Prometheus blackbox exporter probes
- CMS server-side rendered previews
Workarounds
- Use a proxy that can handle connections both with and without a Proxy Protocol header, e.g. Traefik.
- Add a second Ingress Controller behind a load balancer without Proxy Protocol. All cluster-internal traffic then needs to be sent to that Service.
Keeping load balancer floating IP
When a Service of type LoadBalancer is created without further configuration, it gets an ephemeral IP address from the IP pool of the cluster's external network (usually ext-net).
Deleting the Service also releases that floating IP again into the pool, and it becomes available for others.
There are no guarantees that the IP will still be available afterward.
To prevent the deletion of the floating IP, set the following annotation:
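A sketch, assuming the openstack-cloud-controller-manager's keep-floatingip annotation:

metadata:
  annotations:
    loadbalancer.openstack.org/keep-floatingip: "true"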
To reuse the floating IP afterward, specify it in the Service:
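For example via spec.loadBalancerIP (a sketch; 203.0.113.10 is a documentation placeholder, replace it with your floating IP):

spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10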
Info
The floating IP must exist in the same region as the load balancer.
Allow ingress only from specific subnets
Octavia supports the spec.loadBalancerSourceRanges option in the LoadBalancer Service to block traffic from non-matching source IPs.
Specify an array of subnets in CIDR notation.
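For example (the CIDR ranges are documentation placeholders):

spec:
  loadBalancerSourceRanges:
    - 192.0.2.0/24
    - 198.51.100.0/24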
Info
- With many rules (~50), it might take a couple of minutes for all of them to take effect.
More granular configuration
You may configure further aspects of the load balancer using annotations on the Service. For a complete list, see the docs of the openstack-cloud-controller.
Troubleshooting
I created a load balancer, but I can't reach the application
This can have several causes. Typical problems are:
- Your application is not reachable.

  Try kubectl port-forward svc/$SVC_NAME 8080:$SVC_PORT and check if you can reach your application locally on port 8080.

- The service node port is not reachable.

  Create a Pod with an interactive shell and test the connection to the node port:
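  For example (a sketch; the image, node IP, and node port are placeholders):

  kubectl run -it --rm debug --image=busybox:1.36 --restart=Never -- sh
  # inside the Pod, use one of your node IPs and the Service's nodePort:
  wget -qO- http://192.168.1.100:31234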
- The load balancer can't reach the worker nodes.

  Make sure that your nodes' security group has opened the port range 30000 - 32767 for the node network (default 192.168.1.0/24). On clusters without advanced configuration, we create this rule automatically. To list the security group rules, run:
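  For example, with the OpenStack CLI (the security group name is a placeholder; use the group attached to your worker nodes):

  openstack security group rule list <security-group-name>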
Client connections time out after 50 seconds
By default, the SysEleven OpenStack Cloud's LBaaS closes idle connections after 50s. If you encounter timeouts at 50s, you may configure higher timeout values with annotations on the Service:
metadata:
  annotations:
    loadbalancer.openstack.org/timeout-client-data: "300000"
    loadbalancer.openstack.org/timeout-member-data: "300000"
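The timeout values are given in milliseconds; 300000 ms corresponds to 5 minutes.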
Limitations
- MetaKube does not support Layer 7 (HTTP) Octavia load balancers
See Ingress Controllers for more information.
- MetaKube does not support UDP Octavia load balancers.
A Service that uses a UDP port will stay in status Pending, since the cloud controller manager will not provision a load balancer for it.