Autoscale workloads with Prometheus

Uses: Kong Gateway Operator
TL;DR

Deploy a DataPlaneMetricsExtension to expose latency metrics from a Service, then configure the Kong Gateway Operator to associate those metrics with the Data Plane. This enables external tools like Prometheus and KEDA to trigger scaling decisions.

Prerequisites

If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.

  1. The following Konnect items are required to complete this tutorial:
    • Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
  2. Set the personal access token as an environment variable:

    export KONNECT_TOKEN='YOUR KONNECT TOKEN'
    
  3. Install the Gateway API CRDs before installing Kong Gateway Operator.

    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
    
  4. Create a Gateway and GatewayClass instance to use:

echo "
apiVersion: v1
kind: Namespace
metadata:
  name: kong
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: kong
  annotations:
    konghq.com/gatewayclass-unmanaged: 'true'
spec:
  controllerName: konghq.com/gateway-operator
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong
spec:
  gatewayClassName: kong
  listeners:
  - name: proxy
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
" | kubectl apply -n kong -f -
  5. Add the Kong Helm charts:

    helm repo add kong https://charts.konghq.com
    helm repo update
    
  6. Create a kong namespace:

    kubectl create namespace kong --dry-run=client -o yaml | kubectl apply -f -
    
  7. Install Kong Gateway Operator using Helm:

    helm upgrade --install kgo kong/gateway-operator -n kong-system --create-namespace  \
      --set image.tag=1.5 \
      --set kubernetes-configuration-crds.enabled=true \
      --set env.ENABLE_CONTROLLER_KONNECT=true
    
  8. Apply a KongLicense. This assumes that your license is available in ./license.json:

    echo "
    apiVersion: configuration.konghq.com/v1alpha1
    kind: KongLicense
    metadata:
      name: kong-license
    rawLicenseString: '$(cat ./license.json)'
    " | kubectl apply -f -
    

This how-to requires some Kubernetes services to be available in your cluster. They are used by the resources created in this how-to.

kubectl apply -f https://developer.konghq.com/manifests/kic/command-service.yaml -n kong

This how-to also requires a pre-configured route to the command service.
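
The route manifest is not included above, so the following is a minimal sketch of an HTTPRoute that matches the /command traffic used later in this guide. The backend port is an assumption and should match the port the command Service actually exposes:

echo '
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: command
  namespace: kong
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /command
    backendRefs:
    - name: command
      kind: Service
      port: 80  # assumed; use the port exposed by the command Service
' | kubectl apply -f -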

Autoscaling workloads

This tutorial shows how to autoscale workloads based on Service latency. The command service created in the prerequisites allows us to inject an artificial delay into responses to trigger autoscaling.

Create a DataPlaneMetricsExtension

The DataPlaneMetricsExtension allows Kong Gateway Operator to monitor Service latency and expose it on the /metrics endpoint.

  1. Create a DataPlaneMetricsExtension that points to the command service:

     echo '
     kind: DataPlaneMetricsExtension
     apiVersion: gateway-operator.konghq.com/v1alpha1
     metadata:
       name: kong
       namespace: kong
     spec:
       serviceSelector:
         matchNames:
         - name: command
       config:
         latency: true
     ' | kubectl apply -f -
    
  2. Create a GatewayConfiguration that uses it:

     echo '
     kind: GatewayConfiguration
     apiVersion: gateway-operator.konghq.com/v1beta1
     metadata:
       name: kong
       namespace: kong
     spec:
       controlPlaneOptions:
         extensions:
         - kind: DataPlaneMetricsExtension
           group: gateway-operator.konghq.com
           name: kong
     ' | kubectl apply -f -
    
  3. Patch the GatewayClass to use the config:

     kubectl patch -n kong --type=json gatewayclass kong -p='[
       {
         "op": "add",
         "path": "/spec/parametersRef",
         "value": {
           "group": "gateway-operator.konghq.com",
           "kind": "GatewayConfiguration",
           "name": "kong",
           "namespace": "kong"
         }
       }
     ]'
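
To verify that the patch was applied, you can print the parametersRef from the GatewayClass; it should show the GatewayConfiguration created above:

     kubectl get gatewayclass kong -o jsonpath='{.spec.parametersRef}'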
    

Install Prometheus

Note: You can reuse an existing Prometheus setup and skip this step. Be aware that Prometheus must be able to scrape Kong Gateway Operator’s metrics (for example, through a ServiceMonitor), and note down the namespace in which it’s deployed.

  1. Add the prometheus-community Helm charts:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    
  2. Install Prometheus via the kube-prometheus-stack Helm chart:

    helm upgrade --install --create-namespace -n prometheus prometheus prometheus-community/kube-prometheus-stack
    

Create a ServiceMonitor to scrape Kong Gateway Operator

To make Prometheus scrape Kong Gateway Operator’s /metrics endpoint, we’ll need to create a ServiceMonitor:

echo '
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    release: prometheus
  name: gateway-operator
  namespace: kong-system
spec:
  endpoints:
  - port: https
    scheme: https
    path: /metrics
    tlsConfig:
      insecureSkipVerify: true
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  selector:
    matchLabels:
      control-plane: controller-manager
' | kubectl apply -f -

After applying the above manifest, you can query one of the metrics exposed by Kong Gateway Operator to verify that the scrape config has been applied.

To access the Prometheus UI, create a port-forward and visit http://localhost:9090:

kubectl port-forward service/prometheus-kube-prometheus-prometheus 9090:9090 -n prometheus

To verify, run the following query in the Prometheus UI; a value of 1 indicates that the target is being scraped successfully:

up{service=~"kgo-gateway-operator-metrics-service"}

Prometheus metrics can take up to 2 minutes to appear.

Install prometheus-adapter

The prometheus-adapter package exposes Prometheus metrics to Kubernetes through the Custom Metrics API.

To deploy prometheus-adapter, you’ll need to decide which time series to expose so that Kubernetes can consume them.

Note: Kong Gateway Operator enriches specific metrics for use with prometheus-adapter. See the overview for a complete list.

Create a values.yaml file to deploy the prometheus-adapter helm chart. This configuration calculates a kong_upstream_latency_ms_60s_average metric, which exposes a 60s moving average of upstream response latency:

echo $'
prometheus:
  # Update this value if Prometheus is installed in a different namespace
  url: http://prometheus-kube-prometheus-prometheus.prometheus.svc

rules:
  default: false
  custom:
  - seriesQuery: \'{__name__=~"^kong_upstream_latency_ms_(sum|count)",kubernetes_namespace!="",kubernetes_name!="",kubernetes_kind!=""}\'
    resources:
      overrides:
        exported_namespace:
          resource: "namespace"
        exported_service:
          resource: "service"
    name:
      as: "kong_upstream_latency_ms_60s_average"
    metricsQuery: |
      sum by (exported_service) (rate(kong_upstream_latency_ms_sum{<<.LabelMatchers>>}[60s:10s]))
        /
      sum by (exported_service) (rate(kong_upstream_latency_ms_count{<<.LabelMatchers>>}[60s:10s]))
' > values.yaml
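
The <<.LabelMatchers>> placeholder is a prometheus-adapter template that is substituted with label matchers derived from each Custom Metrics API request. For the command Service in the kong namespace, the rendered query would look roughly like this (illustrative only):

sum by (exported_service) (rate(kong_upstream_latency_ms_sum{exported_namespace="kong",exported_service="command"}[60s:10s]))
  /
sum by (exported_service) (rate(kong_upstream_latency_ms_count{exported_namespace="kong",exported_service="command"}[60s:10s]))

Dividing the rate of the latency sum by the rate of the request count yields the average latency per request over the 60-second window.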

Install prometheus-adapter using Helm:

helm upgrade --install --create-namespace -n prometheus --values values.yaml prometheus-adapter prometheus-community/prometheus-adapter
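
Once the adapter is running, you can confirm that the Custom Metrics API is registered; v1beta1.custom.metrics.k8s.io is the APIService that prometheus-adapter serves:

kubectl get apiservice v1beta1.custom.metrics.k8s.io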

Send traffic

To trigger autoscaling, run the following command in a new terminal window. It causes the underlying Deployment to sleep for 100ms on each request, which raises the average response time to roughly that value.

while curl -k "http://$(kubectl get -n kong gateway kong -o custom-columns='name:.status.addresses[0].value' --no-headers)/command/shell?cmd=sleep%200.1" ; do sleep 1; done

Keep this running while you move on to the next steps.

Verify metrics are exposed in Kubernetes

When all is configured, the metric you defined in prometheus-adapter should be exposed via the Kubernetes Custom Metrics API:

kubectl get --raw '/apis/custom.metrics.k8s.io/v1beta1/namespaces/kong/services/command/kong_upstream_latency_ms_60s_average' | jq

Note: The prometheus-adapter may take up to 2 minutes to populate the custom metrics.

This should result in:

{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {},
  "items": [
    {
      "describedObject": {
        "kind": "Service",
        "namespace": "kong",
        "name": "command",
        "apiVersion": "/v1"
      },
      "metricName": "kong_upstream_latency_ms_60s_average",
      "timestamp": "2024-03-06T13:11:12Z",
      "value": "102312m",
      "selector": null
    }
  ]
}

Note: 102312m is Kubernetes quantity notation: the m suffix means thousandths (milli-units), so the value is 102.312. Because the metric is measured in milliseconds, this is approximately 102 milliseconds (ms) of average latency.

Use the exposed metric in a HorizontalPodAutoscaler

When the metric configured in prometheus-adapter is available through the Kubernetes Custom Metrics API, we can use it in a HorizontalPodAutoscaler to autoscale our workload, specifically the command Deployment.

This can be done with the following manifest, which scales the underlying command Deployment between 1 and 10 replicas, trying to keep the average latency across the last 60s at 40ms:

echo '
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: command
  namespace: kong
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: command
  minReplicas: 1
  maxReplicas: 10
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 1
      policies:
      - type: Percent
        value: 100
        periodSeconds: 10
    scaleUp:
      stabilizationWindowSeconds: 1
      policies:
      - type: Percent
        value: 100
        periodSeconds: 2
      - type: Pods
        value: 4
        periodSeconds: 2
      selectPolicy: Max
  metrics:
  - type: Object
    object:
      metric:
        name: "kong_upstream_latency_ms_60s_average"
      describedObject:
        apiVersion: v1
        kind: Service
        name: command
      target:
        type: Value
        value: "40" ' | kubectl apply -f -

Observe Kubernetes SuccessfulRescale events

You can watch SuccessfulRescale events using the following kubectl command:

kubectl get events -n kong --field-selector involvedObject.name=command --field-selector involvedObject.kind=HorizontalPodAutoscaler -w

If everything went well, you should see the SuccessfulRescale events:

12m          Normal   SuccessfulRescale   horizontalpodautoscaler/command   New size: 2; reason: Service metric kong_upstream_latency_ms_60s_average above target
12m          Normal   SuccessfulRescale   horizontalpodautoscaler/command   New size: 4; reason: Service metric kong_upstream_latency_ms_60s_average above target
12m          Normal   SuccessfulRescale   horizontalpodautoscaler/command   New size: 8; reason: Service metric kong_upstream_latency_ms_60s_average above target
12m          Normal   SuccessfulRescale   horizontalpodautoscaler/command   New size: 10; reason: Service metric kong_upstream_latency_ms_60s_average above target

Then, when latency drops (after you stop sending traffic with the curl command), you should observe SuccessfulRescale events scaling your workloads down:

4s          Normal   SuccessfulRescale   horizontalpodautoscaler/command   New size: 1; reason: All metrics below target