Proxy Caching
Use the proxy-cache plugin by creating a KongPlugin resource and specifying config.response_code, config.request_method, and config.cache_ttl.
Prerequisites
Series Prerequisites
This page is part of the Getting Started with KIC series.
Complete the previous page, Rate Limiting, before completing this page.
About the Proxy Cache plugin
One of the ways Kong Gateway delivers performance is through caching. The Proxy Cache plugin accelerates performance by caching responses based on configurable response codes, content types, and request methods. When caching is enabled, upstream services are not impacted by repetitive requests, because Kong Gateway responds on their behalf with cached results. Caching can be enabled on specific Routes or for all requests globally.
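As a sketch of route-scoped caching, a KongPlugin can be attached to a single Service via the konghq.com/plugins annotation instead of applying globally. The plugin name proxy-cache-echo below is an assumption for illustration; the echo Service is the one used throughout this series:

```shell
# Hypothetical example: scope proxy-cache to one Service instead of all of them.
# The plugin name "proxy-cache-echo" is illustrative, not from this guide.
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: proxy-cache-echo
plugin: proxy-cache
config:
  response_code:
  - 200
  request_method:
  - GET
  cache_ttl: 300
  strategy: memory
" | kubectl apply -f -

# Attach the plugin to the echo Service by annotation.
kubectl annotate service echo konghq.com/plugins=proxy-cache-echo
```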
Proxy Cache headers
The proxy-cache plugin reports a request's cache behavior through the X-Cache-Status response header, which can contain the following values:
- Miss: The request could be satisfied in cache, but an entry for the resource was not found in cache, and the request was proxied upstream.
- Hit: The request was satisfied and served from cache.
- Refresh: The resource was found in cache, but couldn't satisfy the request, due to Cache-Control behaviors or from reaching its hardcoded config.cache_ttl threshold.
- Bypass: The request couldn't be satisfied from cache based on plugin configuration.
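To see these values in practice, you can capture response headers and count them. The helper below is only a sketch: it parses curl-style header lines from sample data (the printf stands in for headers captured from a live gateway with curl -D - or curl -v):

```shell
# Tally X-Cache-Status values from curl-style verbose header output.
# The printf below is sample data standing in for real curl output.
tally_cache_status() {
  grep '^< X-Cache-Status:' | awk '{print $3}' | sort | uniq -c
}

printf '< HTTP/1.1 200 OK\n< X-Cache-Status: Miss\n< HTTP/1.1 200 OK\n< X-Cache-Status: Hit\n< X-Cache-Status: Hit\n' | tally_cache_status
# → counts of "2 Hit" and "1 Miss"
```

Against a live deployment, you would pipe the output of several curl -sv requests through the same filter instead of the sample printf.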
Create a proxy-cache KongClusterPlugin
In the previous section, you created a KongPlugin that was applied to a specific Service or Route. You can also use a KongClusterPlugin, a cluster-wide resource that applies to all Services when labeled as global.
This configuration caches all HTTP 200 responses to GET and HEAD requests for 300 seconds:
echo "
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: proxy-cache-all-endpoints
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: 'true'
plugin: proxy-cache
config:
  response_code:
  - 200
  request_method:
  - GET
  - HEAD
  content_type:
  - text/plain; charset=utf-8
  cache_ttl: 300
  strategy: memory
" | kubectl apply -f -
Test the proxy-cache plugin
To test the proxy-cache plugin, send another six requests to $PROXY_IP/echo:
for _ in {1..6}; do
curl -sv $PROXY_IP/echo \
-H "apikey:example-key" 2>&1 | grep -E "(Status|< HTTP)"
echo
done
The first request returns X-Cache-Status: Miss, meaning the request was sent to the upstream service. The next four responses return X-Cache-Status: Hit, which indicates that the request was served from the cache. If you receive an HTTP 429 from the first request, wait 60 seconds for the rate limit timer to reset.
< HTTP/1.1 200 OK
< X-Cache-Status: Miss
< HTTP/1.1 200 OK
< X-Cache-Status: Hit
< HTTP/1.1 200 OK
< X-Cache-Status: Hit
< HTTP/1.1 200 OK
< X-Cache-Status: Hit
< HTTP/1.1 200 OK
< X-Cache-Status: Hit
< HTTP/1.1 429 Too Many Requests
Finally, note that when an HTTP 429 response is returned by the rate-limiting plugin, you don't see an X-Cache-Status header. This is because rate-limiting executes before proxy-cache. For more information, see plugin priority.
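The ordering mechanism itself is a numeric sort: Kong runs plugins in descending priority order, so a higher-priority plugin handles the request first. The priority numbers below are made-up placeholders to illustrate the mechanism, not the plugins' actual values:

```shell
# Illustrative only: plugins execute in descending priority order.
# These priority numbers are placeholders, not Kong's real values.
printf 'proxy-cache 100\nrate-limiting 900\n' | sort -k2,2 -rn | awk '{print $1}'
# → rate-limiting first, then proxy-cache
```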