The HTTP Log plugin lets you send request and response logs to an HTTP server.
It also supports stream data (TCP, TLS, and UDP).
The Kong Gateway process error file is the Nginx error file. You can find it at the following path:

```
$PREFIX/logs/error.log
```

Configure the prefix in `kong.conf`.
Note: If the `max_batch_size` argument is greater than 1, requests are logged as an array of JSON objects. Otherwise, every request is logged as a separate JSON object, with objects separated by a newline (`\n`).
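The two payload shapes can be distinguished with a small parser on the receiving side. The `parse_log_payload` helper below is a hypothetical sketch for a log collector, not part of Kong or the plugin:

```python
import json

def parse_log_payload(payload: str) -> list[dict]:
    """Parse an HTTP Log payload into a list of log entries.

    With max_batch_size > 1 the plugin sends one JSON array per batch;
    otherwise each entry is a JSON object separated by newlines.
    """
    payload = payload.strip()
    if payload.startswith("["):
        # Batched mode: a single JSON array of log objects.
        return json.loads(payload)
    # Unbatched mode: one JSON object per line.
    return [json.loads(line) for line in payload.splitlines() if line.strip()]
```

Either shape yields the same list of entries, so a collector written this way works regardless of the configured batch size.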
The following is a sample log object:
```json
{
  "response": {
    "size": 9982,
    "headers": {
      "access-control-allow-origin": "*",
      "content-length": "9593",
      "date": "Thu, 19 Sep 2024 22:10:39 GMT",
      "content-type": "text/html; charset=utf-8",
      "via": "1.1 kong/3.8.0.0-enterprise-edition",
      "connection": "close",
      "server": "gunicorn/19.9.0",
      "access-control-allow-credentials": "true",
      "x-kong-upstream-latency": "171",
      "x-kong-proxy-latency": "1",
      "x-kong-request-id": "2f6946328ffc4946b8c9120704a4a155"
    },
    "status": 200
  },
  "route": {
    "updated_at": 1726782477,
    "tags": [],
    "response_buffering": true,
    "path_handling": "v0",
    "protocols": ["http", "https"],
    "service": { "id": "fb4eecf8-dec2-40ef-b779-16de7e2384c7" },
    "https_redirect_status_code": 426,
    "regex_priority": 0,
    "name": "example_route",
    "id": "0f1a4101-3327-4274-b1e4-484a4ab0c030",
    "strip_path": true,
    "preserve_host": false,
    "created_at": 1726782477,
    "request_buffering": true,
    "ws_id": "f381e34e-5c25-4e65-b91b-3c0a86cfc393",
    "paths": ["/example-route"]
  },
  "workspace": "f381e34e-5c25-4e65-b91b-3c0a86cfc393",
  "workspace_name": "default",
  "tries": [
    {
      "balancer_start": 1726783839539,
      "balancer_start_ns": 1.7267838395395e+18,
      "ip": "34.237.204.224",
      "balancer_latency": 0,
      "port": 80,
      "balancer_latency_ns": 27904
    }
  ],
  "client_ip": "192.168.65.1",
  "request": {
    "id": "2f6946328ffc4946b8c9120704a4a155",
    "headers": {
      "accept": "*/*",
      "user-agent": "HTTPie/3.2.3",
      "host": "localhost:8000",
      "connection": "keep-alive",
      "accept-encoding": "gzip, deflate"
    },
    "uri": "/example-route",
    "size": 139,
    "method": "GET",
    "querystring": {},
    "url": "http://localhost:8000/example-route"
  },
  "upstream_uri": "/",
  "started_at": 1726783839538,
  "source": "upstream",
  "upstream_status": "200",
  "latencies": { "kong": 1, "proxy": 171, "request": 173, "receive": 1 },
  "service": {
    "write_timeout": 60000,
    "read_timeout": 60000,
    "updated_at": 1726782459,
    "host": "httpbin.konghq.com",
    "name": "example_service",
    "id": "fb4eecf8-dec2-40ef-b779-16de7e2384c7",
    "port": 80,
    "enabled": true,
    "created_at": 1726782459,
    "protocol": "http",
    "ws_id": "f381e34e-5c25-4e65-b91b-3c0a86cfc393",
    "connect_timeout": 60000,
    "retries": 5
  }
}
```
The following table describes each object in the log:
| Log item | Description |
|---|---|
| `service` | Properties of the Gateway Service associated with the requested Route. |
| `route` | Properties of the specific Route requested. |
| `request` | Properties of the request sent by the client. |
| `request.tls.version` | TLS/SSL version used by the connection. |
| `request.tls.cipher` | TLS/SSL cipher used by the connection. |
| `request.tls.client_verify` | mTLS validation result. Contents are the same as described in `$ssl_client_verify`. |
| `response` | Properties of the response sent to the client. |
| `latencies` | Latency data. |
| `latencies.kong` | The internal Kong Gateway latency, in milliseconds, taken to process the request. |
| `latencies.request` | The time, in milliseconds, elapsed between the first bytes being read from the client and the last byte being sent to the client. Useful for detecting slow clients. |
| `latencies.proxy` | The time, in milliseconds, the upstream service took to process the request. In other words, the time elapsed between transferring the request to the final Service and Kong Gateway starting to receive the response. |
| `latencies.receive` | The time, in milliseconds, it took to receive and process the response (headers and body) from the upstream service. |
| `tries` | A list of iterations made by the load balancer for this request. |
| `tries.balancer_start` | A Unix timestamp for when the balancer started. |
| `tries.ip` | The IP address of the contacted balancer. |
| `tries.port` | The port number of the contacted balancer. |
| `tries.balancer_latency` | The latency of the balancer, in milliseconds. |
| `client_ip` | The original client IP address. |
| `workspace` | The UUID of the Workspace associated with this request. |
| `workspace_name` | The name of the Workspace associated with this request. |
| `upstream_uri` | The URI, including query parameters, for the configured upstream service. |
| `authenticated_entity` | Properties of the authenticated credential. Only present if authentication is enabled. |
| `consumer` | The authenticated Consumer. Only present if authentication is enabled. |
| `started_at` | The Unix timestamp of when the request started being processed. |
| `source` | v3.6+ Indicates whether the response was generated by `kong` or the `upstream`. |
| `upstream_status` | v3.6+ The status code received from the upstream service in the response. |
Log plugins enabled on Services and Routes also contain information about the Service or Route.
The HTTP Log plugin uses internal queues to decouple the production of log entries from their transmission to the upstream log server.
With queuing, request information is put in a configurable queue before being sent in batches to the upstream server. This smooths the load on the log server and allows temporary delivery failures to be retried without blocking request processing.
Note: Because queues are structural elements for components in Kong Gateway, they only live in the main memory of each worker process and are not shared between workers. Therefore, queued content isn’t preserved under abnormal operational situations, like power loss or unexpected worker process shutdown due to memory shortage or program errors.
You can configure several parameters for queuing:
| Parameters | Description |
|---|---|
| Queue capacity limits: `config.queue.max_entries`, `config.queue.max_bytes`, `config.queue.max_batch_size` | Configure sizes for various aspects of the queue: the maximum number of entries, the batch size, and the queue size in bytes. When a queue has reached its maximum number of entries and another entry is enqueued, the oldest entry in the queue is deleted to make space for the new entry. The queue code writes warning log entries when it reaches a capacity threshold of 80% and when it starts to delete entries, and again when the situation normalizes. |
| Timer usage: `config.queue.concurrency_limit` | Only one timer is used to start queue processing in the background; you can configure more if needed. Once the queue is empty, the timer handler terminates, and a new timer is created as soon as a new entry is pushed onto the queue. |
| Retry logic: `config.queue.initial_retry_delay`, `config.queue.max_coalescing_delay`, `config.queue.max_retry_delay`, `config.queue.max_retry_time` | If a queue fails to process, the queue library can automatically retry processing when the failure is temporary (for example, network problems or upstream unavailability). Before retrying, the library waits for the amount of time specified by the `initial_retry_delay` parameter. This wait time is doubled every time the retry fails, until it reaches the maximum wait time specified by the `max_retry_delay` parameter. Retrying stops once the total time spent retrying exceeds `max_retry_time`. |
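The retry timing described above can be sketched in a few lines. This is a model of the documented backoff behavior, not Kong's implementation; the parameter names mirror the configuration fields:

```python
def retry_schedule(initial_retry_delay: float,
                   max_retry_delay: float,
                   max_retry_time: float) -> list[float]:
    """Model of the documented backoff: each failed attempt doubles the
    wait, capped at max_retry_delay; retrying stops once the total time
    spent would exceed max_retry_time."""
    delays: list[float] = []
    wait, elapsed = initial_retry_delay, 0.0
    while elapsed + wait <= max_retry_time:
        delays.append(wait)
        elapsed += wait
        wait = min(wait * 2, max_retry_delay)  # exponential, but capped
    return delays

# e.g. a 1 s initial delay, an 8 s cap, and a 30 s overall budget yield
# the waits 1, 2, 4, 8, 8 before the queue gives up.
```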
When a Kong Gateway shutdown is initiated, the queue is flushed. This allows Kong Gateway to shut down even if it was waiting for new entries to be batched, ensuring upstream servers can be contacted.
Queues are not shared between workers and queuing parameters are scoped to one worker. For whole-system capacity planning, the number of workers needs to be considered when setting queue parameters.
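Because each worker keeps its own queue, the node-wide worst case is the per-worker limit multiplied by the worker count. A trivial illustration of that arithmetic (the function name is ours, not a configuration field):

```python
def node_queue_capacity(worker_processes: int, max_entries: int) -> int:
    # Queues are per worker process, so total buffered entries on one
    # node can reach max_entries on every worker simultaneously.
    return worker_processes * max_entries

# 4 Nginx workers, each allowed 10000 queued entries: up to 40000
# entries can be buffered before the oldest entries start being dropped.
```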
In contrast to other plugins that use queues, all HTTP Log plugin instances that have the same values for the relevant endpoint and queue parameters share one queue.
The `custom_fields_by_lua` configuration allows for the dynamic modification of log fields using Lua code. Below is a snippet of an example configuration that removes the `route` field from the logs:

```sh
curl -i -X POST http://localhost:8001/plugins \
  ...
  --data config.custom_fields_by_lua.route="return nil"
```
Similarly, new fields can be added:

```sh
curl -i -X POST http://localhost:8001/plugins \
  ...
  --data config.custom_fields_by_lua.header="return kong.request.get_header('h1')"
```
Dot characters (`.`) in the field key create nested fields. You can use a backslash (`\`) to escape a dot if you want to keep it in the field name. For example, if you configure a field in the File Log plugin with both a regular dot and an escaped dot:

```sh
curl -i -X POST http://localhost:8001/plugins/ \
  ...
  --data config.name=file-log \
  --data config.custom_fields_by_lua[my_file.log\.field]="return foo"
```

The field will look like this in the log:

```json
"my_file": {
  "log.field": "foo"
}
```
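A rough model of the key-splitting rule, assuming only `\.` escapes are significant. This mimics the documented behavior for illustration; it is not Kong's implementation:

```python
import re

def set_nested(log: dict, key: str, value) -> None:
    """Place value into log, treating unescaped dots in key as nesting
    and a backslash-escaped dot as a literal dot in the field name."""
    # Split on dots that are NOT preceded by a backslash, then unescape.
    parts = [p.replace("\\.", ".") for p in re.split(r"(?<!\\)\.", key)]
    node = log
    for part in parts[:-1]:
        node = node.setdefault(part, {})  # create nested tables as needed
    node[parts[-1]] = value
```

For example, `set_nested(log, r"my_file.log\.field", "foo")` produces `{"my_file": {"log.field": "foo"}}`, matching the log output shown above.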
All logging plugins use the same table for logging. If you set `custom_fields_by_lua` in one plugin, all logging plugins that execute after that plugin will also use the same configuration. For example, if you configure fields via `custom_fields_by_lua` in File Log, those same fields will appear in Kafka Log, since File Log executes first.

If you want all logging plugins to use the same configuration, we recommend using the Pre-function plugin to call `kong.log.set_serialize_value`, so that the function is applied predictably and is easier to manage. If you don't want all logging plugins to use the same configuration, you need to manually disable the relevant fields in each plugin. For example, if you configure a field in File Log that you don't want appearing in Kafka Log, set that field to `return nil` in the Kafka Log plugin:

```sh
curl -i -X POST http://localhost:8001/plugins/ \
  ...
  --data config.name=kafka-log \
  --data config.custom_fields_by_lua.my_file_log_field="return nil"
```
See the plugin execution order reference for more details on plugin ordering.
Lua code runs in a restricted sandbox environment, whose behavior is governed by the `untrusted_lua` configuration properties. Sandboxing imposes several limitations on the way the Lua code can be executed, for heightened security.

The following functions are not available because they can be used to abuse the system:

- `string.rep`: Can be used to allocate millions of bytes in one operation.
- `{set|get}metatable`: Can be used to modify the metatables of global objects (strings, numbers).
- `collectgarbage`: Can be abused to kill the performance of other workers.
- `_G`: Is the root node which has access to all functions. It is masked by a temporary table.
- `load{file|string}`: Is deemed unsafe because it can grant access to the global environment.
- `raw{get|set|equal}`: Potentially unsafe because sandboxing relies on some metatable manipulation.
- `string.dump`: Can display confidential server information (such as the implementation of functions).
- `math.randomseed`: Can affect the host system. Kong Gateway already seeds the random number generator properly.
- `os.*` (except `os.clock`, `os.difftime`, and `os.time`): `os.execute` can significantly alter the host system.
- `io.*`: Provides access to the hard drive.
- `dofile|require`: Provides access to the hard drive.

The exclusion of `require` means that plugins must only use PDK functions (`kong.*`). The `ngx.*` abstraction is also available, but it is not guaranteed to be present in future versions of the plugin.
In addition to the above restrictions:

- Standard Lua and Kong Gateway globals (such as `string` or `table`) are read-only and can't be modified.
- `kong.cache` points to a cache instance that is dedicated to the Serverless Functions plugins. It does not provide access to the global Kong Gateway cache and only exposes the `get` method; explicit write operations like `set` or `invalidate` are not available.

Further, because the code runs in the context of the log phase, only PDK methods that can run in that phase can be used.
When does the HTTP Log plugin record log entries in a request/response timeline?
The log is executed after Kong Gateway sends the last response byte to the client.
Can the HTTP Log plugin expose latency metrics for individual phases of the request lifecycle (such as rewrite
, access
, header_filter
, and body_filter
)?
The HTTP Log plugin doesn’t provide latency metrics at this granular level. Instead, use active tracing in Konnect.