Autoscaling workloads

Uses: Kong Gateway Operator

Kong Gateway provides extensive metrics through its Prometheus plugin. However, these metrics are labelled with Kong entities such as Service and Route rather than Kubernetes resources.

Kong Gateway Operator can scrape these metrics from Kong Gateway and enrich them with Kubernetes metadata so that they can be used to autoscale workloads.

Kong Gateway Operator provides the DataPlaneMetricsExtension, which scrapes the Kong metrics and enriches them with Kubernetes labels before exposing them on its own /metrics endpoint.

These enriched metrics can be used with the Kubernetes HorizontalPodAutoscaler to autoscale workloads.
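
For illustration, a HorizontalPodAutoscaler could target such a metric once it is exposed through the Kubernetes custom metrics API (for example via prometheus-adapter, as sketched further below). The metric name kong_upstream_latency_ms_average and the Deployment name httpbin are assumptions for this example:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: httpbin
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: httpbin        # workload receiving the proxied traffic (assumed name)
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: kong_upstream_latency_ms_average   # assumed name exposed by your metrics adapter
      target:
        type: AverageValue
        averageValue: "200"                      # scale out when average upstream latency exceeds 200 ms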

How it works

Attaching a DataPlaneMetricsExtension resource to a ControlPlane (see the example sketch after this list) will:

  • Create a managed Prometheus KongPlugin instance with the configuration defined in MetricsConfig
  • Append the managed plugin to the konghq.com/plugins annotation of the Services selected through DataPlaneMetricsExtension’s serviceSelector field
  • Scrape Kong Gateway’s metrics and enrich them with Kubernetes metadata
  • Expose those metrics on Kong Gateway Operator’s /metrics endpoint
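
A minimal sketch of that wiring is shown below. The Service name httpbin is an assumption, only the fields relevant to the extension are shown, and the API versions reflect the v1alpha1/v1beta1 CRDs at the time of writing (check the CRD reference for your operator version):

apiVersion: gateway-operator.konghq.com/v1alpha1
kind: DataPlaneMetricsExtension
metadata:
  name: kong
spec:
  serviceSelector:
    matchNames:
    - name: httpbin      # Kubernetes Service(s) whose metrics should be enriched
  config:
    latency: true        # enables enrichment of kong_upstream_latency_ms (see the next section)
---
apiVersion: gateway-operator.konghq.com/v1beta1
kind: ControlPlane
metadata:
  name: kong
spec:
  extensions:
  - kind: DataPlaneMetricsExtension
    group: gateway-operator.konghq.com
    name: kong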

Metrics supported for enrichment

  • Upstream latency, enabled via the latency configuration option:
    • kong_upstream_latency_ms

Custom metrics providers support

Metrics exposed by Kong Gateway Operator can be integrated with a variety of monitoring systems.
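
As one possible integration, a prometheus-adapter rule along these lines can turn the enriched latency histogram into the per-Pod average metric used in the HorizontalPodAutoscaler example above. The exposed metric name, the namespace and pod labels, and the 2-minute rate window are assumptions to adapt to your environment:

rules:
- seriesQuery: 'kong_upstream_latency_ms_count{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "kong_upstream_latency_ms_count"
    as: "kong_upstream_latency_ms_average"
  # Average upstream latency per pod over the last 2 minutes
  metricsQuery: |
    sum by (<<.GroupBy>>) (rate(kong_upstream_latency_ms_sum{<<.LabelMatchers>>}[2m]))
    /
    sum by (<<.GroupBy>>) (rate(kong_upstream_latency_ms_count{<<.LabelMatchers>>}[2m]))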

Limitations

Multi-backend Kong Services

Kong Gateway Operator is not able to provide accurate measurements for multi-backend Kong Services, for example HTTPRoutes that have more than one backendRef. Such a route is configured as a single Kong Gateway Service with several upstream targets, so latency measurements cannot be attributed to an individual Kubernetes Service:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httproute-testing
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /httproute-testing
    backendRefs:
    - name: httpbin
      kind: Service
      port: 80
      weight: 75
    - name: nginx
      kind: Service
      port: 8080
      weight: 25