Proxying with Kong Gateway

Uses: Kong Gateway

Proxying is when Kong Gateway matches an HTTP request with a Route and forwards the request. This page explains how Kong Gateway handles proxying.

 
sequenceDiagram
    actor Client
    participant Gateway as Kong Gateway
    participant Router
    participant Plugins as Plugins
    participant LoadBalancer as Load balancer
    participant UpstreamService as Upstream service

    Client->>Gateway: Sends HTTP request or L4 connection
    Gateway->>Router: Evaluates incoming request against Routes
    Router->>Router: Orders Routes by priority
    Router->>Gateway: Returns highest priority matching Route
    Gateway->>Plugins: Executes plugins in the `access` phase
    Gateway->>LoadBalancer: Implements load balancing capabilities
    LoadBalancer->>LoadBalancer: Distributes request across upstream service instances
    LoadBalancer->>UpstreamService: Forwards request to selected instance
    UpstreamService->>Gateway: Sends response
    Gateway->>Plugins: Executes plugins in the `header_filter` phase
    Gateway->>Client: Streams response back to client
  

Kong Gateway handles proxying in the following order:

  1. Kong Gateway listens for HTTP traffic on its configured proxy port(s) (8000 and 8443 by default) and L4 traffic on explicitly configured stream_listen ports.
  2. Kong Gateway evaluates any incoming HTTP request or L4 connection against the Routes you have configured and tries to find a matching one. For more details about how Kong Gateway handles routing, see the Routes entity.
  3. If multiple Routes match, the Kong Gateway router then orders all defined Routes by their priority and uses the highest priority matching Route to handle a request.
  4. If a given request matches the rules of a specific Route, Kong Gateway runs any global, Route, or Gateway Service plugins before it proxies the request. Plugins configured on Routes run before those configured on Services. These configured plugins run their access phase. For more information, see plugin contexts.
  5. Kong Gateway implements load balancing capabilities to distribute proxied requests across a pool of instances of an upstream service.
  6. Once Kong Gateway has executed all the necessary logic (including plugins), it’s ready to forward the request to your upstream service. This is done via Nginx’s ngx_http_proxy_module.
  7. Kong Gateway receives the response from the upstream service and sends it back to the downstream client in a streaming fashion. At this point, Kong Gateway executes subsequent plugins added to the Route and/or Service that implement a hook in the header_filter phase.

Listeners

From a high-level perspective, Kong Gateway listens for HTTP traffic on its configured proxy ports (8000 and 8443 by default) and L4 traffic on explicitly configured stream_listen ports. Kong Gateway will evaluate any incoming HTTP request or L4 connection against the Routes you have configured and try to find a matching one.

Kong Gateway exposes several interfaces which can be configured by the following properties:

  • proxy_listen: Defines a list of addresses/ports on which Kong Gateway accepts public HTTP (gRPC, WebSocket, etc.) traffic from clients and proxies it to your upstream services (8000 by default).
  • admin_listen: Also defines a list of addresses and ports, but those should be restricted to only administrators, as they expose Kong Gateway’s configuration capabilities via the Admin API (8001 by default).

    Important: If you need to expose the admin_listen port to the internet in a production environment, secure it with authentication.

  • stream_listen: Similar to proxy_listen, but for Layer 4 (TCP, TLS) generic proxy. This is turned off by default.
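Putting these together, a kong.conf fragment enabling all three listener types might look like the following. The addresses and ports here are illustrative, not recommendations:

```
# Accept proxied HTTP/HTTPS traffic from clients
proxy_listen = 0.0.0.0:8000, 0.0.0.0:8443 ssl

# Admin API: bind to localhost only, so it isn't exposed publicly
admin_listen = 127.0.0.1:8001

# L4 (TCP/TLS) proxying; off by default, enabled here on port 5555
stream_listen = 0.0.0.0:5555
```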

Kong Gateway is a transparent proxy, and it defaults to forwarding the request to your upstream service untouched, with the exception of various headers such as Connection, Date, and others as required by the HTTP specifications.

Proxying and upstream timeouts

You can configure the desired timeouts for the connection between Kong Gateway and a given Upstream using the following properties of a Gateway Service:

  • connect_timeout: Defines, in milliseconds, the timeout for establishing a connection to your upstream service. Defaults to 60000.
  • write_timeout: Defines, in milliseconds, a timeout between two successive write operations for transmitting a request to your upstream service. Defaults to 60000.
  • read_timeout: Defines, in milliseconds, a timeout between two successive read operations for receiving a response from your upstream service. Defaults to 60000.
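For example, these timeouts can be set per Gateway Service in declarative configuration. The service name and host below are placeholders:

```yaml
_format_version: "3.0"
services:
  - name: my-service
    protocol: http
    host: example.internal
    port: 80
    connect_timeout: 5000   # fail fast if the upstream is unreachable
    write_timeout: 60000    # default
    read_timeout: 60000     # default
```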

Kong Gateway sends the request over HTTP/1.1 and sets the following headers:

| Header | Description |
| --- | --- |
| `Host: <your_upstream_host>` | The host of your Upstream. |
| `Connection: keep-alive` | Allows reusing the Upstream connections. |
| `X-Real-IP: <remote_addr>` | `$remote_addr` is the variable bearing the same name provided by ngx_http_core_module. `$remote_addr` is likely overridden by ngx_http_realip_module. |
| `X-Forwarded-For: <address>` | `<address>` is the content of `$realip_remote_addr` provided by ngx_http_realip_module, appended to the request header with the same name. |
| `X-Forwarded-Proto: <protocol>` | `<protocol>` is the protocol used by the client. If `$realip_remote_addr` is one of the trusted addresses, the request header with the same name is forwarded if provided. Otherwise, the value of the `$scheme` variable provided by ngx_http_core_module is used. |
| `X-Forwarded-Host: <host>` | `<host>` is the host name sent by the client. If `$realip_remote_addr` is one of the trusted addresses, the request header with the same name is forwarded if provided. Otherwise, the value of the `$host` variable provided by ngx_http_core_module is used. |
| `X-Forwarded-Port: <port>` | `<port>` is the port of the server which accepted the request. If `$realip_remote_addr` is one of the trusted addresses, the request header with the same name is forwarded if provided. Otherwise, the value of the `$server_port` variable provided by ngx_http_core_module is used. |
| `X-Forwarded-Prefix: <path>` | `<path>` is the path of the request as accepted by Kong Gateway. If `$realip_remote_addr` is one of the trusted addresses, the request header with the same name is forwarded if provided. Otherwise, the value of the `$request_uri` variable (with the query string stripped) provided by ngx_http_core_module is used. |

Note: Kong Gateway returns "/" for an empty path, but it doesn't do any other normalization on the request path.

All other headers are forwarded as-is by Kong Gateway.

One exception is the WebSocket protocol: Kong Gateway sets the following headers to allow upgrading the protocol between the client and your upstream services:

  • Connection: Upgrade
  • Upgrade: websocket

For more information, see the Proxy WebSocket traffic section.

Errors and retries

Whenever an error occurs during proxying, Kong Gateway uses the underlying Nginx retries mechanism to pass the request on to the next upstream.

There are two configurable elements:

  1. The number of retries. This can be configured per Service using the retries property.
  2. What exactly constitutes an error. Here Kong Gateway uses the Nginx defaults, which means an error or timeout that occurs while establishing a connection with the server, passing a request to it, or reading the response headers. This is based on Nginx’s proxy_next_upstream directive. This option is not directly configurable through Kong Gateway, but can be added using a custom Nginx configuration. See the Nginx directives reference for more details.
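For example, the number of retries is set per Gateway Service in declarative configuration (the service name and host below are placeholders):

```yaml
_format_version: "3.0"
services:
  - name: my-service
    protocol: http
    host: example.internal
    port: 80
    retries: 3   # attempt up to 3 more upstream targets on failure
```

To change what constitutes an error, one approach (assuming your Kong Gateway version supports Nginx directive injection) is to inject the corresponding directive through kong.conf, for example `nginx_proxy_proxy_next_upstream = error timeout http_503`; verify the exact injection prefix against your version's configuration reference.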

Response

Kong Gateway receives the response from the upstream service and sends it back to the downstream client in a streaming fashion. At this point, Kong Gateway executes subsequent plugins added to the Route or Service that implement a hook in the header_filter phase.

Once the header_filter phase of all registered plugins has been executed, the following headers are added by Kong Gateway and the full set of headers is sent to the client:

| Header | Description |
| --- | --- |
| `Via: kong/x.x.x` | `x.x.x` is the Kong Gateway version in use. |
| `X-Kong-Proxy-Latency: <latency>` | `<latency>` is the time, in milliseconds, between Kong Gateway receiving the request from the client and sending the request to your upstream service. |
| `X-Kong-Upstream-Latency: <latency>` | `<latency>` is the time, in milliseconds, that Kong Gateway waited for the first byte of the upstream service response. |

Once the headers are sent to the client, Kong Gateway starts executing plugins for the Route or Service that implement the body_filter hook. This hook may be called multiple times, due to the streaming nature of Nginx. Each chunk of the upstream response that is successfully processed by such body_filter hooks is sent back to the client.
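As context, the phases above correspond to hooks in a plugin's handler. The following is a minimal, hypothetical sketch of a Kong plugin handler (the plugin name, priority, and header names are illustrative), not a complete plugin:

```lua
-- Hypothetical plugin handler showing where each phase fits
local MyHandler = {
  VERSION = "0.1.0",
  PRIORITY = 1000,  -- illustrative execution priority
}

-- access: runs before the request is proxied to the upstream service
function MyHandler:access(conf)
  kong.service.request.set_header("X-Example", "demo")
end

-- header_filter: runs when the upstream response headers arrive,
-- before any headers are sent downstream to the client
function MyHandler:header_filter(conf)
  kong.response.set_header("X-Example-Seen", "true")
end

-- body_filter: may run several times, once per streamed chunk
function MyHandler:body_filter(conf)
  local chunk = ngx.arg[1]  -- current chunk of the upstream response body
  -- inspect or transform the chunk here before it is sent to the client
end

return MyHandler
```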

Proxy WebSocket traffic

Kong Gateway supports WebSocket traffic thanks to the underlying Nginx implementation. To establish a WebSocket connection between a client and your upstream services through Kong Gateway, the client must perform a WebSocket handshake via the HTTP Upgrade mechanism. A client request made to Kong Gateway looks like this:

GET / HTTP/1.1
Connection: Upgrade
Host: my-websocket-api.com
Upgrade: WebSocket

This causes Kong Gateway to forward the Connection and Upgrade headers to your upstream service instead of stripping them, as a standard HTTP proxy would do with hop-by-hop headers.

WebSocket proxy modes

There are two methods for proxying WebSocket traffic in Kong Gateway:

  • HTTP(S) Services and Routes
  • WS(S) Services and Routes

HTTP(S) Services and Routes

Services and Routes using the http and https protocols are fully capable of handling WebSocket connections with no special configuration. With this method, WebSocket sessions behave identically to regular HTTP requests, and all of the request and response data is treated as an opaque stream of bytes.

Here’s a configuration example:

_format_version: "3.0"
services:
  - name: my-http-websocket-service
    protocol: http
    host: 1.2.3.4
    port: 80
    path: "/"
    routes:
    - name: my-http-websocket-route
      protocols:
      - http
      - https

WS(S) Services and Routes

In addition to HTTP Services and Routes, Kong Gateway includes the ws (WebSocket over HTTP) and wss (WebSocket over HTTPS) protocols. Unlike http and https, ws and wss Services have full control over the underlying WebSocket connection. This means they can use WebSocket plugins and the WebSocket PDK to perform business logic on a per-message basis (message validation, accounting, rate limiting, and so on).

Here’s a configuration example:

_format_version: "3.0"
services:
  - name: my-dedicated-websocket-service
    protocol: ws
    host: 1.2.3.4
    port: 80
    path: "/"
    routes:
    - name: my-dedicated-websocket-route
      protocols:
      - ws
      - wss

Note: Decoding and encoding WebSocket messages comes with a non-zero amount of performance overhead when compared with the protocol-agnostic behavior of http(s) Services. If your API doesn’t need the extra capabilities provided by a ws(s) Service, we recommend using an http(s) Service instead.

WebSocket and TLS

Regardless of which Service/Route protocols are in use (http(s) or ws(s)), Kong Gateway will accept plain and TLS WebSocket connections on its respective http and https ports. To enforce TLS connections from clients, set the protocols property of the Route to https or wss only.

When setting up the Service to point to your upstream WebSocket service, you should carefully pick the protocol you want to use between Kong Gateway and the upstream service.

If you want to use TLS, your upstream WebSocket service must be defined using the https (or wss) protocol in the Gateway Service protocol property, with the proper port (usually 443). To connect without TLS, use the http (or ws) protocol and port (usually 80) instead.

If you want Kong Gateway to terminate TLS, you can accept https/wss only from the client, but proxy to the upstream service over plain text (http or ws).
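For example, this declarative configuration terminates TLS at Kong Gateway: the Route only accepts wss connections from clients, while the Service proxies to the upstream over plain ws. The names and address are placeholders:

```yaml
_format_version: "3.0"
services:
  - name: my-terminated-websocket-service
    protocol: ws        # plain text between Kong Gateway and the upstream
    host: 1.2.3.4
    port: 80
    routes:
    - name: my-terminated-websocket-route
      protocols:
      - wss             # clients must connect over TLS
```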

Proxy gRPC traffic

gRPC proxying is natively supported in Kong Gateway. To manage gRPC Services and proxy gRPC requests with Kong Gateway, create Services and Routes for your gRPC services.
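A minimal declarative configuration for a gRPC service could look like this (the names and address are placeholders):

```yaml
_format_version: "3.0"
services:
  - name: my-grpc-service
    protocol: grpc
    host: 1.2.3.4
    port: 9000
    routes:
    - name: my-grpc-route
      protocols:
      - grpc
      - grpcs
```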

Only observability and logging plugins are supported with gRPC. Plugins that support gRPC have grpc and grpcs in the list of compatible protocols. This is the case for File Log, for example.

Proxy TCP/TLS traffic

TCP and TLS proxying is natively supported in Kong Gateway.

In this mode, data of incoming connections reaching the stream_listen endpoints will be passed through to the upstream service. It’s possible to terminate TLS connections from clients using this mode as well.

To use this mode, aside from defining stream_listen, you should create the appropriate Route/Service object with the tcp or tls protocol.

If you want to terminate TLS with Kong Gateway, the following conditions must be met:

  1. The Kong Gateway port used by the TLS connection must have the ssl flag enabled
  2. A certificate/key that can be used for TLS termination must be present inside Kong Gateway, as shown in TLS Route configuration

Kong Gateway will use the connecting client’s TLS SNI server name extension to find the appropriate TLS certificate to use.
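As a sketch, the two conditions above might be satisfied as follows. The addresses, names, SNI, and certificate are placeholders; the certificate and key contents are elided:

```yaml
# kong.conf: a stream listener with the ssl flag enabled
#   stream_listen = 0.0.0.0:5555 ssl

_format_version: "3.0"
certificates:
  - cert: |-
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |-
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
    snis:
    - name: my-tls-api.example.com
services:
  - name: my-tls-service
    protocol: tcp       # plain text between Kong Gateway and the upstream
    host: 1.2.3.4
    port: 9000
    routes:
    - name: my-tls-route
      protocols:
      - tls
      snis:
      - my-tls-api.example.com
```

The SNI listed on the Route and the certificate is how Kong Gateway matches the client's TLS server name extension to the right certificate.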

On the Service side, depending on whether the connection between Kong Gateway and the upstream service needs to be encrypted, you can set either the tcp or tls protocol. This means the following setup is supported in this mode:

  1. Client <- TLS -> Kong Gateway <- TLS -> Upstream
  2. Client <- TLS -> Kong Gateway <- Cleartext -> Upstream
  3. Client <- Cleartext -> Kong Gateway <- TLS -> Upstream

Note: In L4 proxy mode, only certain plugins support the tcp or tls protocol. You can find the list of supported protocols for each plugin in the Plugin Hub.
