Manage sticky sessions with KongUpstreamPolicy
Create a KongUpstreamPolicy with the sticky-sessions algorithm and attach it to your Service using the konghq.com/upstream-policy annotation.
Prerequisites
Kong Konnect
If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.
The following Konnect items are required to complete this tutorial:
- Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
Set the personal access token as an environment variable:
export KONNECT_TOKEN='YOUR KONNECT TOKEN'
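To confirm the token works before continuing, list your control planes with it; an HTTP 200 status means the token is valid:
# prints only the HTTP status code of the response
curl -s -o /dev/null -w "%{http_code}\n" "https://us.api.konghq.com/v2/control-planes" \
  -H "Authorization: Bearer $KONNECT_TOKEN"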
Enable the Gateway API
1. Install the Gateway API CRDs before installing Kong Ingress Controller:
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
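To verify the CRDs were installed, list the core Gateway API resources:
kubectl get crd gatewayclasses.gateway.networking.k8s.io gateways.gateway.networking.k8s.io httproutes.gateway.networking.k8s.io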
2. Create a Gateway and GatewayClass instance to use:
echo "
apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: kong
annotations:
konghq.com/gatewayclass-unmanaged: 'true'
spec:
controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: kong
spec:
gatewayClassName: kong
listeners:
- name: proxy
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: All
" | kubectl apply -n kong -f -
Create a KIC Control Plane
Use the Konnect API to create a new CLUSTER_TYPE_K8S_INGRESS_CONTROLLER Control Plane:
CONTROL_PLANE_DETAILS=$( curl -X POST "https://us.api.konghq.com/v2/control-planes" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "My KIC CP",
"cluster_type": "CLUSTER_TYPE_K8S_INGRESS_CONTROLLER"
}')
We’ll need the id and telemetry_endpoint for the values.yaml file later. Save them as environment variables:
CONTROL_PLANE_ID=$(echo $CONTROL_PLANE_DETAILS | jq -r .id)
CONTROL_PLANE_TELEMETRY=$(echo $CONTROL_PLANE_DETAILS | jq -r '.config.telemetry_endpoint | sub("https://";"")')
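As a quick sanity check, both variables should be non-empty (a literal null means the API call failed):
echo "Control plane ID: $CONTROL_PLANE_ID"
echo "Telemetry endpoint: $CONTROL_PLANE_TELEMETRY"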
Create mTLS certificates
Kong Ingress Controller talks to Konnect over a connection secured with TLS certificates.
Generate a new certificate using openssl:
openssl req -new -x509 -nodes -newkey rsa:2048 -subj "/CN=kongdp/C=US" -keyout ./tls.key -out ./tls.crt
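If you want to inspect what was generated, openssl can print the certificate's subject and validity window:
openssl x509 -in ./tls.crt -noout -subject -dates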
The certificate needs to be a single-line string so that it can be sent to the Konnect API with curl. Use awk to format the certificate:
export CERT=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' tls.crt);
Next, upload the certificate to Konnect:
curl -X POST "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID/dp-client-certificates" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"cert": "'$CERT'"
}'
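To confirm the upload, list the data plane client certificates registered against the control plane by sending a GET to the same endpoint:
curl -s "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID/dp-client-certificates" \
  -H "Authorization: Bearer $KONNECT_TOKEN"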
Finally, store the certificate in a Kubernetes secret so that Kong Ingress Controller can read it:
kubectl create namespace kong -o yaml --dry-run=client | kubectl apply -f -
kubectl create secret tls konnect-client-tls -n kong --cert=./tls.crt --key=./tls.key
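Check that the secret exists and has the kubernetes.io/tls type:
kubectl get secret konnect-client-tls -n kong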
Kong Ingress Controller running (attached to Konnect)
1. Add the Kong Helm charts:
helm repo add kong https://charts.konghq.com
helm repo update
2. Create a values.yaml file:
cat <<EOF > values.yaml
controller:
  ingressController:
    image:
      tag: "3.5"
    env:
      feature_gates: "FillIDs=true"
    konnect:
      license:
        enabled: true
      enabled: true
      controlPlaneID: "$CONTROL_PLANE_ID"
      tlsClientCertSecretName: konnect-client-tls
      apiHostname: "us.kic.api.konghq.com"
gateway:
  image:
    repository: kong/kong-gateway
    tag: "3.11"
  env:
    konnect_mode: 'on'
    vitals: "off"
    cluster_mtls: pki
    cluster_telemetry_endpoint: "$CONTROL_PLANE_TELEMETRY:443"
    cluster_telemetry_server_name: "$CONTROL_PLANE_TELEMETRY"
    cluster_cert: /etc/secrets/konnect-client-tls/tls.crt
    cluster_cert_key: /etc/secrets/konnect-client-tls/tls.key
    lua_ssl_trusted_certificate: system
    proxy_access_log: "off"
    dns_stale_ttl: "3600"
  secretVolumes:
    - konnect-client-tls
EOF
3. Install Kong Ingress Controller using Helm:
helm install kong kong/ingress -n kong --create-namespace --values ./values.yaml
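Before continuing, wait for the controller and gateway pods to become Ready (this waits on every pod in the kong namespace):
kubectl wait --for=condition=Ready pods --all -n kong --timeout=120s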
4. Set $PROXY_IP as an environment variable for future commands:
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
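With no routes configured yet, Kong answers requests with its own 404, which is a quick way to confirm traffic is reaching the gateway; expect a body similar to {"message":"no Route matched with those values"}:
curl -i $PROXY_IP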
Kong Ingress Controller running (with an Enterprise license)
1. Add the Kong Helm charts:
helm repo add kong https://charts.konghq.com
helm repo update
2. Create a file named license.json containing your Kong Gateway Enterprise license and store it in a Kubernetes secret:
kubectl create namespace kong --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic kong-enterprise-license --from-file=license=./license.json -n kong
3. Create a values.yaml file:
cat <<EOF > values.yaml
gateway:
  image:
    repository: kong/kong-gateway
    tag: "3.11"
  env:
    LICENSE_DATA:
      valueFrom:
        secretKeyRef:
          name: kong-enterprise-license
          key: license
EOF
4. Install Kong Ingress Controller using Helm:
helm install kong kong/ingress -n kong --create-namespace --values ./values.yaml
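To verify the license was loaded, print the Gateway version from inside the proxy; Enterprise builds include an enterprise suffix in the version string:
# the kong-gateway deployment name assumes the "kong" Helm release used above
kubectl exec -n kong deployment/kong-gateway -- kong version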
5. Set $PROXY_IP as an environment variable for future commands:
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
Required Kubernetes resources
This how-to requires some Kubernetes services to be available in your cluster. These services will be used by the resources created in this how-to.
kubectl apply -f https://developer.konghq.com/manifests/kic/echo-service.yaml -n kong
This how-to also requires one pre-configured route to the echo Service; a sketch is shown below.
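The following HTTPRoute is a minimal sketch matching the /echo path used throughout this guide; the backend port 1027 is an assumption about the HTTP port exposed by the echo Service, so adjust it if your Service differs:
echo '
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
  namespace: kong
spec:
  parentRefs:
    - name: kong
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /echo
      backendRefs:
        - name: echo
          port: 1027  # assumed echo Service HTTP port
' | kubectl apply -f -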
Overview
Sticky sessions, also known as session affinity, ensure that requests from the same client are consistently routed to the same backend pod. This is particularly useful for:
- Session persistence: Applications that store session data in memory or local storage
- Graceful shutdowns: Allowing existing connections to complete before terminating pods
- Connection affinity: Applications that benefit from maintaining state between requests
Kong Gateway 3.11 and later supports sticky sessions through the sticky-sessions load balancing algorithm, which uses browser-managed cookies to maintain session affinity.
Deploy multiple backend pods
To test sticky sessions, you need more than one pod. Scale the echo deployment:
kubectl scale -n kong --replicas 3 deployment echo
Verify the pods are running:
kubectl get pods -n kong -l app=echo
Create a KongUpstreamPolicy
Apply a KongUpstreamPolicy resource that enables sticky sessions:
echo '
apiVersion: configuration.konghq.com/v1beta1
kind: KongUpstreamPolicy
metadata:
  name: sticky-session-policy
  namespace: kong
spec:
  algorithm: "sticky-sessions"
  hashOn:
    input: "none"
  stickySessions:
    cookie: "session_id"
    cookiePath: "/"
' | kubectl apply -f -
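Confirm the policy was created:
kubectl get kongupstreampolicy sticky-session-policy -n kong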
Explanation of key fields:
- algorithm: sticky-sessions: Enables the sticky session load balancing algorithm
- hashOn.input: "none": Must be set to none (required for sticky sessions)
- stickySessions.cookie: Name of the cookie used for session tracking
- stickySessions.cookiePath: Path for the session cookie (default: /)
Attach policy to your service
Associate the KongUpstreamPolicy with your service using the konghq.com/upstream-policy annotation:
kubectl annotate -n kong service echo konghq.com/upstream-policy=sticky-session-policy
Check that the annotation was applied:
kubectl get service echo -n kong -o jsonpath='{.metadata.annotations.konghq\.com/upstream-policy}'
Test sticky session behavior
Initial request
1. Make an initial request to observe the session cookie being set:
curl -v $PROXY_IP/echo
2. You should see a Set-Cookie header in the response:
< Set-Cookie: session_id=01234567-89ab-cdef-0123-456789abcdef; Path=/
3. Note the pod name in the response:
Running on Pod echo-965f7cf84-frpjc.
Repeat requests with the same cookie
Extract the cookie and make multiple requests:
COOKIE=$(curl -s -D - $PROXY_IP/echo | grep -i 'set-cookie:' | sed 's/.*session_id=\([^;]*\).*/\1/')
for i in {1..5}; do
curl -s -H "Cookie: session_id=$COOKIE" $PROXY_IP/echo | grep "Running on Pod"
done
All requests are routed to the same pod:
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-frpjc.
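Alternatively, curl can manage the cookie itself with a cookie jar, which avoids the manual sed extraction (the /tmp/kong-session.txt path is just an example location):
# -c writes cookies the server sets; -b sends them back on later requests
curl -s -c /tmp/kong-session.txt $PROXY_IP/echo | grep "Running on Pod"
for i in {1..5}; do
  curl -s -b /tmp/kong-session.txt $PROXY_IP/echo | grep "Running on Pod"
done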
Behavior without cookie
Compare this to requests without the cookie, which should distribute across different pods:
for i in {1..5}; do
curl -s $PROXY_IP/echo | grep "Running on Pod"
done
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-wlvw9.
Running on Pod echo-965f7cf84-5h56p.
Running on Pod echo-965f7cf84-frpjc.
Running on Pod echo-965f7cf84-wlvw9.
Conclusion
Sticky sessions provide a powerful mechanism for maintaining session affinity in Kubernetes environments. By using KongUpstreamPolicy with the sticky-sessions algorithm, you can ensure that client requests are consistently routed to the same backend pod, improving application performance and user experience.
When adopting sticky sessions in production:
- Test thoroughly with your specific application requirements
- Consider the trade-offs between session affinity and load distribution
- Combine with health checks for robust traffic management
For more advanced load balancing scenarios, refer to the load balancing documentation and explore other algorithms such as consistent-hashing and least-connections.
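As an illustration of the health-check point above, a KongUpstreamPolicy can combine sticky sessions with active health checks. The sketch below assumes the healthchecks fields of the v1beta1 schema and a /health endpoint on your pods; verify both against the KongUpstreamPolicy reference for your Kong Ingress Controller version.
echo '
apiVersion: configuration.konghq.com/v1beta1
kind: KongUpstreamPolicy
metadata:
  name: sticky-session-policy
  namespace: kong
spec:
  algorithm: "sticky-sessions"
  hashOn:
    input: "none"
  stickySessions:
    cookie: "session_id"
    cookiePath: "/"
  healthchecks:
    active:
      type: http
      httpPath: /health  # assumed health endpoint; adjust to your app
      healthy:
        interval: 5
        successes: 3
      unhealthy:
        interval: 5
        httpFailures: 3
' | kubectl apply -f -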
Cleanup
Delete created Kubernetes resources
kubectl delete -n kong -f https://developer.konghq.com/manifests/kic/echo-service.yaml
Uninstall KIC from your cluster
helm uninstall kong -n kong