Customize load balancing with KongUpstreamPolicy

Create a KongUpstreamPolicy resource, then add the konghq.com/upstream-policy annotation to your Service.
Prerequisites

Kong Konnect

If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.

The following Konnect items are required to complete this tutorial:

- Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
- Set the personal access token as an environment variable:

export KONNECT_TOKEN='YOUR KONNECT TOKEN'
Enable the Gateway API

- Install the Gateway API CRDs before installing Kong Ingress Controller.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
- Create a Gateway and a GatewayClass instance to use.
echo "
apiVersion: v1
kind: Namespace
metadata:
  name: kong
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: kong
  annotations:
    konghq.com/gatewayclass-unmanaged: 'true'
spec:
  controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong
spec:
  gatewayClassName: kong
  listeners:
  - name: proxy
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
" | kubectl apply -n kong -f -
Create a KIC Control Plane

Use the Konnect API to create a new CLUSTER_TYPE_K8S_INGRESS_CONTROLLER Control Plane:
CONTROL_PLANE_DETAILS=$(curl -X POST "https://us.api.konghq.com/v2/control-planes" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "My KIC CP",
"cluster_type": "CLUSTER_TYPE_K8S_INGRESS_CONTROLLER"
}')
We’ll need the id and telemetry_endpoint for the values.yaml file later. Save them as environment variables:
CONTROL_PLANE_ID=$(echo $CONTROL_PLANE_DETAILS | jq -r .id)
CONTROL_PLANE_TELEMETRY=$(echo $CONTROL_PLANE_DETAILS | jq -r '.config.telemetry_endpoint | sub("https://";"")')
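The jq sub("https://";"") filter above simply strips the scheme from the telemetry endpoint. If you want to sanity-check what that step produces, here is a minimal pure-shell equivalent, run against a placeholder hostname (not your real endpoint):

```shell
# Placeholder endpoint, for illustration only.
url="https://example.tp.konghq.com"

# Shell prefix removal does the same job as jq's sub("https://";"").
endpoint="${url#https://}"
echo "$endpoint"   # example.tp.konghq.com
```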
Create mTLS certificates

Kong Ingress Controller talks to Konnect over a connection secured with TLS certificates.

Generate a new certificate using openssl:
openssl req -new -x509 -nodes -newkey rsa:2048 -subj "/CN=kongdp/C=US" -keyout ./tls.key -out ./tls.crt
The certificate needs to be a single-line string to send it to the Konnect API with curl. Use awk to format the certificate:
export CERT=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' tls.crt);
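To see what that awk program actually does, here is a small sketch run against a throwaway sample file (the /tmp path and contents are made up, not your certificate): it drops carriage returns, skips blank lines, and joins every line with a literal \n so the result is one single-line string.

```shell
# Create a sample multi-line file with Windows-style line endings.
printf 'line one\r\nline two\r\n' > /tmp/sample.txt

# Same awk program as above: strip CRs, skip empty lines, and append a
# literal backslash-n after each line instead of a real newline.
flattened=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' /tmp/sample.txt)
echo "$flattened"   # line one\nline two\n
```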
Next, upload the certificate to Konnect:
curl -X POST "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID/dp-client-certificates" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"cert": "'$CERT'"
}'
Finally, store the certificate in a Kubernetes secret so that Kong Ingress Controller can read it:
kubectl create namespace kong -o yaml --dry-run=client | kubectl apply -f -
kubectl create secret tls konnect-client-tls -n kong --cert=./tls.crt --key=./tls.key
Kong Ingress Controller running
- Add the Kong Helm charts:

helm repo add kong https://charts.konghq.com
helm repo update

- Install Kong Ingress Controller using Helm:

helm install kong kong/ingress -n kong --create-namespace

- Set $PROXY_IP as an environment variable for future commands:

export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
Required Kubernetes resources
This how-to requires some Kubernetes services to be available in your cluster. These services will be used by the resources created in this how-to.
kubectl apply -f https://developer.konghq.com/manifests/kic/echo-service.yaml -n kong
This how-to also requires one pre-configured route.
Deploy additional echo replicas
To demonstrate Kong’s load balancing functionality, we need multiple echo Pods. Scale out the echo deployment:
kubectl scale -n kong --replicas 2 deployment echo
Use KongUpstreamPolicy with a Service resource
By default, Kong will round-robin requests between upstream replicas. If you run curl -s $PROXY_IP/echo | grep "Pod" repeatedly, you should see the reported Pod name alternate between two values.
You can configure the Kong Upstream associated with the Service to use a different load balancing strategy, such as consistently sending requests to the same upstream based on a header value. See the KongUpstreamPolicy reference for the full list of supported algorithms and their configuration options.
Let’s create a KongUpstreamPolicy resource defining the new behavior:
echo '
apiVersion: configuration.konghq.com/v1beta1
kind: KongUpstreamPolicy
metadata:
  name: sample-customization
  namespace: kong
spec:
  algorithm: consistent-hashing
  hashOn:
    header: demo
  hashOnFallback:
    input: ip
' | kubectl apply -f -
Now, let’s associate this KongUpstreamPolicy resource with our Service resource using the konghq.com/upstream-policy annotation:
kubectl patch -n kong service echo \
-p '{"metadata":{"annotations":{"konghq.com/upstream-policy":"sample-customization"}}}'
With consistent hashing and client IP fallback, sending repeated requests without any demo header now sends them to the same Pod:
for n in {1..5}; do curl -s $PROXY_IP/echo | grep "Pod"; done
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-frpjc.
If you add the demo header, Kong hashes its value and routes requests with the same value to the same replica:
for n in {1..3}; do
curl -s $PROXY_IP/echo -H "demo: foo" | grep "Pod";
curl -s $PROXY_IP/echo -H "demo: bar" | grep "Pod";
curl -s $PROXY_IP/echo -H "demo: baz" | grep "Pod";
done
Running on Pod echo-965f7cf84-wlvw9.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-wlvw9.
Running on Pod echo-965f7cf84-wlvw9.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-wlvw9.
Running on Pod echo-965f7cf84-wlvw9.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-wlvw9.
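The behavior above can be sketched in a few lines of shell. This is a deliberately simplified modulo-hash illustration of the idea, not Kong's actual implementation: Kong uses a proper consistent-hashing ring so that scaling remaps only a fraction of keys, whereas a plain modulo reshuffles more aggressively. The function name and replica count here are made up for the demo.

```shell
# Map a header value to a stable backend index: the same value always
# hashes to the same index, so it always reaches the same replica.
pick_pod() {
  value="$1"
  replicas="$2"
  # cksum produces a deterministic 32-bit checksum of the input.
  hash=$(printf '%s' "$value" | cksum | cut -d' ' -f1)
  echo $(( hash % replicas ))
}

pick_pod foo 2   # repeated calls with "foo" always print the same index
pick_pod foo 2
pick_pod bar 2   # a different value may land on a different index
```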
Increasing the replicas redistributes some subsequent requests onto the new replica:
kubectl scale -n kong --replicas 3 deployment echo
for n in {1..3}; do
curl -s $PROXY_IP/echo -H "demo: foo" | grep "Pod";
curl -s $PROXY_IP/echo -H "demo: bar" | grep "Pod";
curl -s $PROXY_IP/echo -H "demo: baz" | grep "Pod";
done
Running on Pod echo-965f7cf84-5h56p.
Running on Pod echo-965f7cf84-5h56p.
Running on Pod echo-965f7cf84-wlvw9.
Running on Pod echo-965f7cf84-5h56p.
Running on Pod echo-965f7cf84-5h56p.
Running on Pod echo-965f7cf84-wlvw9.
Running on Pod echo-965f7cf84-5h56p.
Running on Pod echo-965f7cf84-5h56p.
Running on Pod echo-965f7cf84-wlvw9.
Kong’s load balancer doesn’t directly distribute requests to each of the Service’s endpoints. It first distributes them evenly across a number of equal-size buckets. These buckets are then distributed across the available endpoints according to their weight. For Ingresses, however, there is only one Service, and the controller assigns each endpoint (represented by a Kong Upstream Target) equal weight. In this case, requests are evenly hashed across all endpoints.
Gateway API HTTPRoute rules support distributing traffic across multiple Services. The rule can assign weights to the Services to change the proportion of requests an individual Service receives. In Kong’s implementation, all endpoints of a Service have the same weight. Kong calculates a per-endpoint Upstream Target weight such that the aggregate target weight of the endpoints is equal to the proportion indicated by the HTTPRoute weight.
For example, say you have two Services with the following configuration:

- One Service has four endpoints
- The other Service has two endpoints
- Each Service has weight 50 in the HTTPRoute

The Targets created for the two-endpoint Service have double the weight of the Targets created for the four-endpoint Service (two weight 16 Targets and four weight 8 Targets). Scaling the four-endpoint Service to eight endpoints would halve the weight of its Targets (two weight 16 Targets and eight weight 4 Targets).
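The arithmetic in this example can be sketched as a small helper. Note the 64-unit weight budget per rule is an assumption chosen purely so the numbers match the example above; it is not a documented Kong internal, and the function name is hypothetical.

```shell
# Per-Target weight = (budget * this Service's HTTPRoute weight /
#                      sum of all weights) / number of endpoints,
# so every endpoint of a Service shares its weight proportion equally.
target_weight() {
  route_weight="$1"   # this Service's weight in the HTTPRoute
  total_weight="$2"   # sum of all Services' weights in the rule
  endpoints="$3"      # number of endpoints behind this Service
  budget=64           # illustrative constant, not a Kong internal
  echo $(( budget * route_weight / total_weight / endpoints ))
}

target_weight 50 100 2   # two-endpoint Service  -> 16 per Target
target_weight 50 100 4   # four-endpoint Service -> 8 per Target
target_weight 50 100 8   # after scaling to 8    -> 4 per Target
```

Either way, each Service's Targets sum to the same aggregate weight (2 × 16 = 4 × 8 = 8 × 4 = 32), which is what keeps the 50/50 traffic split intact as endpoints scale.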
KongUpstreamPolicy can also configure Upstream health checking behavior. See the KongUpstreamPolicy reference for the health check fields.
Cleanup
Delete created Kubernetes resources
kubectl delete -n kong -f https://developer.konghq.com/manifests/kic/echo-service.yaml
Uninstall KIC from your cluster
helm uninstall kong -n kong