Made by: Kong Inc.
Supported Gateway topologies: hybrid, db-less, traditional
Supported Konnect deployments: hybrid, cloud-gateways, serverless
Compatible protocols: grpc, grpcs, http, https, tcp, tls, tls_passthrough, udp, ws, wss
The Proxy Cache plugin provides a reverse proxy cache implementation for Kong Gateway. It caches response entities based on a configurable response code, content type, and request method.

The advanced version of this plugin, Proxy Cache Advanced, extends the Proxy Cache plugin with Redis, Redis Cluster, and Redis Sentinel support.

How it works

The Proxy Cache plugin stores cache data in memory, in a shared dictionary whose name is set by config.memory.dictionary_name.

The default dictionary, kong_db_cache, is also used by other plugins and functions of Kong Gateway to store unrelated database cache entities. Using the kong_db_cache dictionary is an easy way to bootstrap and test the plugin, but we don’t recommend using it for large-scale installations as significant usage will put pressure on other facets of Kong Gateway’s database caching operations. In production, we recommend defining a custom lua_shared_dict via a custom Nginx template.
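For example, a minimal sketch of enabling the plugin on a Service with a custom dictionary. The Service name example-service and the dictionary name proxy_cache_shm are hypothetical; the dictionary must also be declared with a matching lua_shared_dict proxy_cache_shm 128m; directive in your custom Nginx template:

```sh
# Hypothetical example: enable Proxy Cache with a dedicated shared dictionary.
curl -X POST http://localhost:8001/services/example-service/plugins \
  --data "name=proxy-cache" \
  --data "config.strategy=memory" \
  --data "config.memory.dictionary_name=proxy_cache_shm" \
  --data "config.cache_ttl=300"
```

Keeping plugin cache data in its own dictionary isolates its memory pressure from Kong Gateway's database cache.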

Cache entities are stored for a configurable period of time, after which subsequent requests to the same resource will fetch and store the resource again.

In Traditional mode, cache entities can also be forcefully purged via the Admin API prior to their expiration time.

Cache key

Kong Gateway keys each cache element based on:

  • The request method
  • The full client request (for example, the request path and query parameters)
  • The UUID of either the API or Consumer associated with the request

Caches are distinct between APIs and Consumers.

Internally, cache keys are generated by computing the SHA256 hash of the combined parts, then encoding the result in hexadecimal:

key = sha256(UUID | method | request | query_params | headers | consumer_groups)

Where:

  • method is the request method, as returned by the OpenResty ngx.req.get_method() call
  • request is defined via the Nginx $request variable
  • query_params are defined via the plugin’s config.vary_query_params parameter
  • headers are defined in the plugin’s config.vary_headers parameter
  • consumer_groups are defined based on the Consumer Groups this plugin is applied to

Kong Gateway will return the cache key associated with a given request as the X-Cache-Key response header.

Note: The cache key format is hardcoded and can’t be modified.
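The hashing step can be sketched in shell. The delimiter and the exact serialization of the parts are internal to Kong Gateway and are assumed here for illustration (the UUID and request values are made up); only the shape of the result, a 64-character hex digest like the X-Cache-Key header value, matches the real output:

```shell
# Hypothetical serialization: Kong Gateway's exact concatenation format is internal.
uuid="123e4567-e89b-12d3-a456-426614174000"   # example Consumer/API UUID
method="GET"
request="/orders"
query_params="page=1"
headers=""
consumer_groups=""

# Combine the parts, hash with SHA-256, and keep the hex digest.
key=$(printf '%s|%s|%s|%s|%s|%s' \
  "$uuid" "$method" "$request" "$query_params" "$headers" "$consumer_groups" \
  | sha256sum | awk '{print $1}')
echo "$key"   # prints a 64-character hexadecimal digest
```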

Cache control

When the config.cache_control configuration option is enabled, Kong Gateway respects request and response Cache-Control headers as defined by RFC7234, with the following exceptions:

  • Cache revalidation is not supported, so directives such as proxy-revalidate are ignored
  • The behavior of no-cache is simplified to exclude the entity from being cached entirely
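With config.cache_control enabled, a client can keep a response out of the cache entirely by sending a no-cache directive. A sketch against a hypothetical route:

```sh
# no-cache is simplified: the entity is excluded from caching entirely.
curl -i http://localhost:8000/orders \
  -H "Cache-Control: no-cache"
```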

Cache status

Kong Gateway identifies the status of a request’s proxy cache behavior via the X-Cache-Status header. There are several possible values for this header:

  • Miss: The request could have been satisfied from cache, but no entry for the resource was found in cache, so the request was proxied upstream.
  • Hit: The request was satisfied and served from cache.
  • Refresh: The resource was found in cache, but couldn’t satisfy the request because of Cache-Control directives or because it had exceeded its config.cache_ttl lifetime.
  • Bypass: The request couldn’t be satisfied from cache based on the plugin configuration.
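These statuses can be observed by inspecting the response headers. A sketch against a hypothetical route, assuming the plugin is enabled and the response is cacheable:

```sh
# First request: expect "X-Cache-Status: Miss" (proxied upstream, then stored)
curl -s -o /dev/null -D - http://localhost:8000/orders | grep -i X-Cache-Status

# Repeat within config.cache_ttl: expect "X-Cache-Status: Hit"
curl -s -o /dev/null -D - http://localhost:8000/orders | grep -i X-Cache-Status
```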

Storage TTL

Kong Gateway can store resource entities in the storage engine longer than the set config.cache_ttl or Cache-Control values indicate. This allows Kong Gateway to maintain a cached copy of a resource past its expiration.

If clients use the max-age and max-stale Cache-Control request directives, they can request stale copies of data.
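For example, a client willing to accept a copy up to one hour past its expiration could send the following (the route is hypothetical, and config.cache_control must be enabled for the directive to be honored):

```sh
# max-stale=3600: accept a cached response up to an hour past its freshness lifetime.
curl -i http://localhost:8000/orders \
  -H "Cache-Control: max-stale=3600"
```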

Upstream outages

If an upstream is unreachable, Kong Gateway can serve cache data instead of returning an error. However, this requires managing stale cache data.

We recommend setting a high storage_ttl value measured in hours or days to store stale data in the cache.
If an upstream service becomes unavailable, you can increase the cache_ttl value to treat the stale data as fresh.
This allows Kong Gateway to serve previously cached data to clients before attempting to connect to the unavailable upstream service.
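This strategy can be sketched as a PATCH to the plugin's configuration, where {plugin-id} is a placeholder for the ID of your Proxy Cache plugin instance:

```sh
# During an outage: raise cache_ttl (here to 24 hours) so entities still held
# under the longer storage_ttl are treated as fresh and served to clients.
curl -X PATCH http://localhost:8001/plugins/{plugin-id} \
  --data "config.cache_ttl=86400"
```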

Managing cache entities

The Proxy Cache plugin exposes several /proxy-cache endpoints for cache management through the Kong Admin API.

You can use the Admin API to:

  • Look up cache entities
  • Delete cache entities
  • Purge all caches

To access these endpoints, enable the plugin first; the /proxy-cache endpoints only appear once the plugin is enabled.

This plugin’s API endpoints are not available in hybrid mode. The data that this API targets is located on the Data Planes, and Data Planes can’t use the Kong Admin API.
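In traditional mode, the management operations above can be sketched as the following Admin API calls, where {cache-key} is a placeholder for a value taken from the X-Cache-Key response header:

```sh
# Look up a cached entity by its cache key
curl http://localhost:8001/proxy-cache/{cache-key}

# Delete a single cached entity
curl -X DELETE http://localhost:8001/proxy-cache/{cache-key}

# Purge all cached entities
curl -X DELETE http://localhost:8001/proxy-cache
```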
