Backfill broken objects with fallback configuration
Enable the FallbackConfiguration feature gate and the CONTROLLER_USE_LAST_VALID_CONFIG_FOR_FALLBACK=true environment variable for Kong Ingress Controller.
Prerequisites
Kong Konnect
If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.
- The following Konnect items are required to complete this tutorial:
  - Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
- Set the personal access token as an environment variable:
export KONNECT_TOKEN='YOUR KONNECT TOKEN'
Enable the Gateway API
- Install the Gateway API CRDs before installing Kong Ingress Controller.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
- Create a Gateway and GatewayClass instance to use.
echo "
apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: kong
annotations:
konghq.com/gatewayclass-unmanaged: 'true'
spec:
controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: kong
spec:
gatewayClassName: kong
listeners:
- name: proxy
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: All
" | kubectl apply -n kong -f -
Create a KIC Control Plane
Use the Konnect API to create a new CLUSTER_TYPE_K8S_INGRESS_CONTROLLER Control Plane:
CONTROL_PLANE_DETAILS=$(curl -X POST "https://us.api.konghq.com/v2/control-planes" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "My KIC CP",
"cluster_type": "CLUSTER_TYPE_K8S_INGRESS_CONTROLLER"
}')
We’ll need the id and telemetry_endpoint for the values.yaml file later. Save them as environment variables:
CONTROL_PLANE_ID=$(echo $CONTROL_PLANE_DETAILS | jq -r .id)
CONTROL_PLANE_TELEMETRY=$(echo $CONTROL_PLANE_DETAILS | jq -r '.config.telemetry_endpoint | sub("https://";"")')
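Both variables should now be non-empty; a quick echo confirms they were captured correctly:
# Print the values captured from the Konnect API response
echo "Control Plane ID: $CONTROL_PLANE_ID"
echo "Telemetry endpoint: $CONTROL_PLANE_TELEMETRY"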
Create mTLS certificates
Kong Ingress Controller talks to Konnect over a connection secured with TLS certificates.
Generate a new certificate using openssl:
openssl req -new -x509 -nodes -newkey rsa:2048 -subj "/CN=kongdp/C=US" -keyout ./tls.key -out ./tls.crt
The certificate needs to be a single-line string so that it can be sent to the Konnect API with curl. Use awk to format the certificate:
export CERT=$(awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' tls.crt);
Next, upload the certificate to Konnect:
curl -X POST "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID/dp-client-certificates" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"cert": "'$CERT'"
}'
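To confirm the certificate was stored, you can list the Control Plane's Data Plane client certificates (this assumes your token has read access to the dp-client-certificates endpoint):
# List the client certificates registered for this Control Plane
curl -s "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID/dp-client-certificates" \
-H "Authorization: Bearer $KONNECT_TOKEN" | jq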
Finally, store the certificate in a Kubernetes secret so that Kong Ingress Controller can read it:
kubectl create namespace kong -o yaml --dry-run=client | kubectl apply -f -
kubectl create secret tls konnect-client-tls -n kong --cert=./tls.crt --key=./tls.key
Install Kong Ingress Controller
- Add the Kong Helm charts:
helm repo add kong https://charts.konghq.com
helm repo update
- Install Kong Ingress Controller using Helm:
helm install kong kong/ingress -n kong --create-namespace \
  --set controller.ingressController.env.feature_gates="FallbackConfiguration=true" \
  --set controller.ingressController.env.dump_config=true \
  --set controller.ingressController.env.use_last_valid_config_for_fallback=true
- Set $PROXY_IP as an environment variable for future commands:
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{range .status.loadBalancer.ingress[0]}{@.ip}{@.hostname}{end}')
echo $PROXY_IP
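If $PROXY_IP comes back empty, the LoadBalancer may still be provisioning an address; you can watch the Service until one appears:
# Watch the proxy Service until an external IP or hostname is assigned
kubectl get svc -n kong kong-gateway-proxy -w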
Required Kubernetes resources
This how-to requires some Kubernetes services to be available in your cluster. These services will be used by the resources created in this how-to.
kubectl apply -f https://developer.konghq.com/manifests/kic/echo-service.yaml -n kong
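Before moving on, you can wait for the echo workload to become ready (this assumes the manifest creates a Deployment named echo in the kong namespace):
# Block until the echo Deployment reports Available, or time out after two minutes
kubectl wait -n kong --for=condition=Available deployment/echo --timeout=120s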
Backfilling broken objects
Fallback Configuration supports backfilling broken objects with their last valid version. To demonstrate this, we’ll use the same setup as in the default mode, but this time we’ll test with the CONTROLLER_USE_LAST_VALID_CONFIG_FOR_FALLBACK environment variable set to true.
Configure plugins
This how-to requires three plugins to demonstrate how fallback configuration works.
- As the example uses a Consumer, we need to create an authentication plugin to identify the incoming request:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: key-auth
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
plugin: key-auth
" | kubectl apply -f -
- Unidentified traffic has a base rate limit of one request per second:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-base
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
plugin: rate-limiting
config:
  second: 1
  policy: local
" | kubectl apply -f -
- Identified Consumers have a rate limit of five requests per second:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-consumer
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
plugin: rate-limiting
config:
  second: 5
  policy: local
" | kubectl apply -f -
Create Routes
Let’s create two Routes for testing purposes (example manifests follow the list):
- route-a has no plugins attached
- route-b has the three plugins created above attached
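As a sketch, the two Routes could be created as follows, assuming the echo Service from the prerequisite manifest listens on port 1027 (the same backend and port the later route-a patch uses) and that the plugins are attached to route-b with the konghq.com/plugins annotation:
echo "
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: route-a
  namespace: kong
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /route-a
    backendRefs:
    - name: echo
      port: 1027
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: route-b
  namespace: kong
  annotations:
    # All three KongPlugins created above are attached to this Route
    konghq.com/plugins: key-auth,rate-limit-base,rate-limit-consumer
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /route-b
    backendRefs:
    - name: echo
      port: 1027
" | kubectl apply -f -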
Create a Consumer
Finally, let’s create a KongConsumer with credentials and associate the rate-limit-consumer KongPlugin with it.
Create a Secret containing the key-auth credential:
echo 'apiVersion: v1
kind: Secret
metadata:
  name: bob-key-auth
  namespace: kong
  labels:
    konghq.com/credential: key-auth
stringData:
  key: bob-password
' | kubectl apply -f -
Then create a KongConsumer that references this Secret:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
name: bob
namespace: kong
annotations:
kubernetes.io/ingress.class: kong
konghq.com/plugins: rate-limit-consumer
username: bob
credentials:
- bob-key-auth
" | kubectl apply -f -
Validate the Routes
At this point we can validate that our Routes are working as expected.
Route A
route-a is accessible without any authentication and will return an HTTP 200:
curl "$PROXY_IP/route-a"
curl "$PROXY_IP/route-a"
The results should look like this:
Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.
Route B
Authenticated requests with a valid apikey header on route-b should be accepted:
curl "$PROXY_IP/route-b" \
-H "apikey:bob-password"
curl "$PROXY_IP/route-b" \
-H "apikey:bob-password"
The results should look like this:
Welcome, you are connected to node orbstack.
Running on Pod echo-74c66b778-szf8f.
In namespace default.
With IP address 192.168.194.13.
Requests without the apikey header should be rejected:
curl "$PROXY_IP/route-b"
curl "$PROXY_IP/route-b"
The results should look like this:
{
"message":"No API key found in request",
"request_id":"520c396c6c32b0400f7c33531b7f9b2c"
}
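You can also observe the consumer rate limit in action by sending several authenticated requests in quick succession. This is a sketch: assuming all six requests land within the same second, the first five should return HTTP 200 and the sixth HTTP 429, because rate-limit-consumer allows five requests per second:
# Send six authenticated requests back to back and print only the status codes
for i in $(seq 1 6); do
  curl -s -o /dev/null -w "%{http_code}\n" "$PROXY_IP/route-b" -H "apikey:bob-password"
done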
Break the Route
As we’ve verified that both HTTPRoutes are operational, let’s break route-b again by removing the rate-limit-consumer KongPlugin from the KongConsumer:
kubectl annotate -n kong kongconsumer bob konghq.com/plugins-
Verify the broken route was backfilled
Backfilling the broken HTTPRoute with its last valid version should have restored the Route to its working state. That means we should be able to access route-b as before the breaking change:
curl "$PROXY_IP/route-b"
The results should look like this:
{
"message":"No API key found in request",
"request_id":"4604f84de6ed0b1a9357e935da5cea2c"
}
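A request with the valid key should also still succeed, confirming the backfilled Route behaves as it did before the break:
curl "$PROXY_IP/route-b" \
-H "apikey:bob-password"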
Inspecting diagnostic endpoints
Using diagnostic endpoints, we can now inspect the objects that were excluded and backfilled in the configuration:
kubectl port-forward -n kong deploy/kong-controller 10256 &
sleep 0.5; curl localhost:10256/debug/config/fallback | jq
The results should look like this:
{
"status": "triggered",
"brokenObjects": [
{
"group": "configuration.konghq.com",
"kind": "KongPlugin",
"namespace": "default",
"name": "rate-limit-consumer",
"id": "7167315d-58f5-4aea-8aa5-a9d989f33a49"
}
],
"excludedObjects": [
{
"group": "configuration.konghq.com",
"kind": "KongPlugin",
"version": "v1",
"namespace": "default",
"name": "rate-limit-consumer",
"id": "7167315d-58f5-4aea-8aa5-a9d989f33a49",
"causingObjects": [
"configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
]
},
{
"group": "gateway.networking.k8s.io",
"kind": "HTTPRoute",
"version": "v1",
"namespace": "default",
"name": "route-b",
"id": "fc82aa3d-512c-42f2-b7c3-e6f0069fcc94",
"causingObjects": [
"configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
]
}
],
"backfilledObjects": [
{
"group": "configuration.konghq.com",
"kind": "KongPlugin",
"version": "v1",
"namespace": "default",
"name": "rate-limit-consumer",
"id": "7167315d-58f5-4aea-8aa5-a9d989f33a49",
"causingObjects": [
"configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
]
},
{
"group": "configuration.konghq.com",
"kind": "KongConsumer",
"version": "v1",
"namespace": "default",
"name": "bob",
"id": "deecb7c5-a3f6-4b88-a875-0e1715baa7c3",
"causingObjects": [
"configuration.konghq.com/KongPlugin:default/rate-limit-consumer"
]
},
{
"group": "gateway.networking.k8s.io",
"kind": "HTTPRoute",
"version": "v1",
"namespace": "default",
"name": "route-b",
"id": "fc82aa3d-512c-42f2-b7c3-e6f0069fcc94",
"causingObjects": [
"configuration.konghq.com/KongPlugin:default/rate-limit-consumer",
"gateway.networking.k8s.io/HTTPRoute:default/route-b"
]
}
]
}
As rate-limit-consumer and route-b were reported back as broken by the Kong Gateway, they were excluded from the configuration. However, the Fallback Configuration mechanism backfilled them with their last valid versions, restoring the Route to its working state. You may notice that the KongConsumer was also backfilled. This is because the KongConsumer depended on the rate-limit-consumer plugin in the last valid state.
Note: The Fallback Configuration mechanism attempts to backfill all broken objects along with their direct and indirect dependants. Dependencies are resolved based on the last valid state of the Kubernetes object cache.
Modify the affected objects
As we’re now relying on the last valid version of the broken objects and their dependants, we won’t be able to effectively modify them until we fix the underlying problems. Let’s try to add another key for the bob KongConsumer.
Create a new Secret with a new key:
echo 'apiVersion: v1
kind: Secret
metadata:
  name: bob-key-auth-new
  namespace: kong
  labels:
    konghq.com/credential: key-auth
stringData:
  key: bob-new-password' | kubectl apply -f -
Associate the new Secret with the KongConsumer:
kubectl patch -n kong kongconsumer bob --type merge -p '{"credentials":["bob-key-auth", "bob-key-auth-new"]}'
The change won’t take effect because the HTTPRoute and KongPlugin are still broken. We can verify this by trying to access route-b with the new key:
curl "$PROXY_IP/route-b" \
-H "apikey:bob-new-password"
The results should look like this:
{
"message":"Unauthorized",
"request_id":"4c706c7e4e06140e56453b22e169df0a"
}
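The original key should still be accepted, because the last valid configuration, which only knows about the bob-key-auth credential, is still being served:
curl "$PROXY_IP/route-b" \
-H "apikey:bob-password"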
Modify the working route
On the other hand, we can still modify the working HTTPRoute:
kubectl patch -n kong httproute route-a --type merge -p '{"spec":{"rules":[{"matches":[{"path":{"type":"PathPrefix","value":"/route-a-modified"}}],"backendRefs":[{"name":"echo","port":1027}]}]}}'
Let’s verify the updated HTTPRoute:
curl "$PROXY_IP/route-a-modified"
The results should look like this:
Welcome, you are connected to node orbstack.
Running on Pod echo-bf9d56995-r8c86.
In namespace default.
With IP address 192.168.194.8.
Fixing the broken route
To fix the broken HTTPRoute, we need to associate the rate-limit-consumer KongPlugin back with the KongConsumer:
kubectl annotate -n kong kongconsumer bob konghq.com/plugins=rate-limit-consumer
This should unblock the changes we’ve made to the bob KongConsumer’s credentials. Let’s verify this by accessing route-b with the new key:
curl "$PROXY_IP/route-b" \
-H "apikey:bob-new-password"
The results should look like this now:
Welcome, you are connected to node orbstack.
Running on Pod echo-bf9d56995-r8c86.
In namespace default.
With IP address 192.168.194.8.
Cleanup
Delete created Kubernetes resources
kubectl delete -n kong -f https://developer.konghq.com/manifests/kic/echo-service.yaml
Uninstall KIC from your cluster
helm uninstall kong -n kong
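Optionally, delete the Konnect Control Plane created for this tutorial (this assumes your personal access token is allowed to delete Control Planes):
# Remove the Control Plane created at the start of this guide
curl -X DELETE "https://us.api.konghq.com/v2/control-planes/$CONTROL_PLANE_ID" \
-H "Authorization: Bearer $KONNECT_TOKEN"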