Configure sticky sessions with drain support
Deploy Kong Ingress Controller using the --enable-drain-support=true flag. Next, configure spec.stickySessions and set spec.algorithm to sticky-sessions in a KongUpstreamPolicy resource. Finally, attach the KongUpstreamPolicy resource to a Kubernetes Service with the konghq.com/upstream-policy annotation.
Prerequisites
Kong Konnect
If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.
- The following Konnect items are required to complete this tutorial:
  - Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
- Set the personal access token as an environment variable:

export KONNECT_TOKEN='YOUR KONNECT TOKEN'
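To confirm the token works before continuing, you can list the Control Planes in your account (a quick sanity check, assuming the us region endpoint used throughout this guide):

curl -s -H "Authorization: Bearer $KONNECT_TOKEN" \
  "https://us.api.konghq.com/v2/control-planes" | jq -r '.data[].name'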
Enable the Gateway API
- Install the Gateway API CRDs before installing Kong Ingress Controller.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
- Create a Gateway and GatewayClass instance to use.
echo "
apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: kong
annotations:
konghq.com/gatewayclass-unmanaged: 'true'
spec:
controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: kong
spec:
gatewayClassName: kong
listeners:
- name: proxy
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: All
" | kubectl apply -n kong -f -
Create a KIC Control Plane
Use the Konnect API to create a new CLUSTER_TYPE_K8S_INGRESS_CONTROLLER Control Plane:
CONTROL_PLANE_DETAILS=$( curl -X POST "https://us.api.konghq.com/v2/control-planes" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "My KIC CP",
"cluster_type": "CLUSTER_TYPE_K8S_INGRESS_CONTROLLER"
}')
We’ll need the id and telemetry_endpoint for the values.yaml file later. Save them as environment variables:
CONTROL_PLANE_ID=$(echo $CONTROL_PLANE_DETAILS | jq -r .id)
CONTROL_PLANE_TELEMETRY=$(echo $CONTROL_PLANE_DETAILS | jq -r '.config.telemetry_endpoint | sub("https://";"")')
Create mTLS certificates
Kong Ingress Controller talks to Konnect over a connection secured with TLS certificates. Generate a new certificate using openssl:
openssl req -new -x509 -nodes -newkey rsa:2048 -subj "/CN=kongdp/C=US" -keyout ./tls.key -out ./tls.crt
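Optionally, inspect the generated certificate to confirm its subject and validity window:

openssl x509 -in tls.crt -noout -subject -dates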
The certificate needs to be a single-line string to send it to the Konnect API with curl. Use awk to format the certificate:
export CERT=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' tls.crt);
Next, upload the certificate to Konnect:
curl -X POST "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID/dp-client-certificates" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"cert": "'$CERT'"
}'
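To confirm the upload, fetch the Data Plane client certificates associated with the Control Plane (this sketch assumes the same endpoint supports GET for listing):

curl -s "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID/dp-client-certificates" \
  -H "Authorization: Bearer $KONNECT_TOKEN" | jq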
Finally, store the certificate in a Kubernetes secret so that Kong Ingress Controller can read it:
kubectl create namespace kong -o yaml --dry-run=client | kubectl apply -f -
kubectl create secret tls konnect-client-tls -n kong --cert=./tls.crt --key=./tls.key
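Verify that the secret exists and holds both the certificate and the key (the DATA column should show 2):

kubectl get secret konnect-client-tls -n kong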
Kong Ingress Controller running (attached to Konnect)
- Add the Kong Helm charts:

helm repo add kong https://charts.konghq.com
helm repo update
- Create a values.yaml file:

cat <<EOF > values.yaml
controller:
  ingressController:
    image:
      tag: "3.5"
    env:
      feature_gates: "FillIDs=true"
    konnect:
      license:
        enabled: true
      enabled: true
      controlPlaneID: "$CONTROL_PLANE_ID"
      tlsClientCertSecretName: konnect-client-tls
      apiHostname: "us.kic.api.konghq.com"
gateway:
  image:
    repository: kong/kong-gateway
    tag: "3.11"
  env:
    konnect_mode: 'on'
    vitals: "off"
    cluster_mtls: pki
    cluster_telemetry_endpoint: "$CONTROL_PLANE_TELEMETRY:443"
    cluster_telemetry_server_name: "$CONTROL_PLANE_TELEMETRY"
    cluster_cert: /etc/secrets/konnect-client-tls/tls.crt
    cluster_cert_key: /etc/secrets/konnect-client-tls/tls.key
    lua_ssl_trusted_certificate: system
    proxy_access_log: "off"
    dns_stale_ttl: "3600"
  secretVolumes:
  - konnect-client-tls
EOF
- Install Kong Ingress Controller using Helm:

helm install kong kong/ingress -n kong --create-namespace --set controller.ingressController.env.enable_drain_support=true --values ./values.yaml
- Set $PROXY_IP as an environment variable for future commands:

export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
Kong Ingress Controller running (with an Enterprise license)
- Add the Kong Helm charts:

helm repo add kong https://charts.konghq.com
helm repo update
- Create a file named license.json containing your Kong Gateway Enterprise license and store it in a Kubernetes secret:

kubectl create namespace kong --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic kong-enterprise-license --from-file=license=./license.json -n kong
- Create a values.yaml file:

cat <<EOF > values.yaml
gateway:
  image:
    repository: kong/kong-gateway
    tag: "3.11"
  env:
    LICENSE_DATA:
      valueFrom:
        secretKeyRef:
          name: kong-enterprise-license
          key: license
EOF
- Install Kong Ingress Controller using Helm:

helm install kong kong/ingress -n kong --create-namespace --set controller.ingressController.env.enable_drain_support=true --values ./values.yaml
- Set $PROXY_IP as an environment variable for future commands:

export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
Required Kubernetes resources
This how-to requires some Kubernetes services to be available in your cluster. These services will be used by the resources created in this how-to.
kubectl apply -f https://developer.konghq.com/manifests/kic/echo-service.yaml -n kong
This how-to also requires one pre-configured route that exposes the echo Service.
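A minimal HTTPRoute that matches /echo and forwards to the echo Service fills this requirement. This is a sketch, assuming the echo Service from the manifest above exposes its HTTP port as 1027; adjust the backend port if your Service differs:

echo '
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
  namespace: kong
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - name: echo
      port: 1027
' | kubectl apply -f -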
Deploy additional echo replicas
To demonstrate Kong’s sticky session functionality, we need multiple echo Pods. Scale out the echo Deployment:
kubectl scale -n kong --replicas 3 deployment echo
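Confirm that three Pods are running (assuming the Deployment labels its Pods app=echo, as Kong's sample manifest does):

kubectl get pods -n kong -l app=echo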
Configure sticky sessions with KongUpstreamPolicy
To implement sticky sessions, you’ll need to create a KongUpstreamPolicy resource that specifies the sticky-sessions algorithm and configure your Service to use it.
- Create a KongUpstreamPolicy with sticky sessions:

echo '
apiVersion: configuration.konghq.com/v1beta1
kind: KongUpstreamPolicy
metadata:
  name: sticky-session-policy
  namespace: kong
spec:
  algorithm: sticky-sessions
  hashOn:
    input: "none"
  stickySessions:
    cookie: "session-id"
    cookiePath: "/"
' | kubectl apply -f -
- Annotate your service to use this policy:

kubectl annotate -n kong service echo konghq.com/upstream-policy=sticky-session-policy --overwrite
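You can confirm the annotation is in place:

kubectl get service echo -n kong -o jsonpath='{.metadata.annotations.konghq\.com/upstream-policy}'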
Test sticky sessions
To test if sticky sessions are working, make a request to your service and inspect the response headers for the session-id cookie:

curl -i "$PROXY_IP/echo"

Sticky routing relies on the client returning the session-id cookie, so make additional requests that include the cookie and verify they’re being routed to the same Pod, as in the example below.
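One way to do this is with curl's cookie jar, which saves the Set-Cookie response and replays it on later requests. The echo responses include the serving Pod's name, so repeated requests with the same cookie should report the same Pod (the /tmp/session.txt path here is just an example):

curl -s -c /tmp/session.txt "$PROXY_IP/echo"
curl -s -b /tmp/session.txt "$PROXY_IP/echo"
curl -s -b /tmp/session.txt "$PROXY_IP/echo"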
Test drain support
Scale down your deployment:
kubectl scale -n kong --replicas 2 deployment echo
Send another request:

curl -i "$PROXY_IP/echo"

If you have an active session with a Pod that’s terminating, your session should continue to work. New sessions should be directed only to the remaining healthy Pods.
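To see drain support in action, replay the cookie captured above: if that session is pinned to the terminating Pod, responses should keep coming from it until it finishes shutting down, while requests without a cookie go only to the healthy replicas:

curl -s -b /tmp/session.txt "$PROXY_IP/echo"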
Cleanup
Delete created Kubernetes resources
kubectl delete -n kong -f https://developer.konghq.com/manifests/kic/echo-service.yaml
Uninstall KIC from your cluster
helm uninstall kong -n kong