Get started with Kong Native Event Proxy
Get started with Kong Native Event Proxy by setting up a Konnect Control Plane and a Kafka cluster, then configuring the Control Plane using the /declarative-config endpoint of the Control Plane Config API.
Prerequisites
Kong Konnect
If you don’t have a Konnect account, you can get started quickly with our onboarding wizard.
The following Konnect items are required to complete this tutorial:
- Personal access token (PAT): Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
- Set the personal access token as an environment variable:
export KONNECT_TOKEN='YOUR KONNECT TOKEN'
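To confirm the token works before continuing, you can list your existing Control Planes. This is a quick sanity check; the .data array used here is an assumption based on the Konnect collection response format:
curl -s "https://us.api.konghq.com/v2/control-planes" \
  -H "Authorization: Bearer $KONNECT_TOKEN" | jq -r '.data[].name'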
Sign up for the KNEP beta
If you’re an existing Kong customer or prospect, please fill out the beta participation form and we will reach out to you.
Install kafkactl
Install kafkactl. You’ll need it to interact with Kafka clusters.
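For example, one common way to install it on macOS or Linux is via Homebrew (check the kafkactl documentation for other platforms and package managers):
brew install deviceinsight/packages/kafkactl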
Create a Control Plane in Konnect
Use the Konnect API to create a new CLUSTER_TYPE_KAFKA_NATIVE_EVENT_PROXY Control Plane:
KONNECT_CONTROL_PLANE_ID=$(curl -X POST "https://us.api.konghq.com/v2/control-planes" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json '{
"name": "KNEP getting started",
"cluster_type": "CLUSTER_TYPE_KAFKA_NATIVE_EVENT_PROXY"
}' | jq -r '.id')
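You can verify that the Control Plane was created and that its ID was captured correctly:
echo "Control Plane ID: $KONNECT_CONTROL_PLANE_ID"
curl -s "https://us.api.konghq.com/v2/control-planes/$KONNECT_CONTROL_PLANE_ID" \
  -H "Authorization: Bearer $KONNECT_TOKEN" | jq -r '.name'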
Start a local Kafka cluster
We will start a Docker Compose stack with Kafka, KNEP, the Confluent Schema Registry, and a Kafka UI.
First, we need to create a docker-compose.yaml file. This file will define the services we want to run in our local environment:
cat <<EOF > docker-compose.yaml
services:
  broker:
    image: apache/kafka:latest
    container_name: broker
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093,EXTERNAL://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,EXTERNAL://localhost:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_CONTROLLER_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT
    ports:
      - "9092:9092"
      - "9094:9094"
    healthcheck:
      test: kafka-topics.sh --bootstrap-server broker:9092 --list
      interval: 10s
      timeout: 10s
      retries: 5
  schema-registry:
    image: confluentinc/cp-schema-registry:latest
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
    healthcheck:
      test: curl -f http://localhost:8081/subjects
      interval: 10s
      timeout: 5s
      retries: 5
  knep:
    image: kong/kong-native-event-proxy:latest
    container_name: knep
    depends_on:
      - broker
    ports:
      - "9192-9292:9192-9292"
      - "8080:8080"
    env_file: "knep.env"
    environment:
      KNEP__RUNTIME__DRAIN_DURATION: 1s # makes shutdown quicker, not recommended to be set like this in production
    healthcheck:
      test: curl -f http://localhost:8080/health/probes/liveness
      interval: 10s
      timeout: 5s
      retries: 5
  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    container_name: kafka-ui
    environment:
      # First cluster configuration (direct Kafka connection)
      KAFKA_CLUSTERS_0_NAME: "direct-kafka-cluster"
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: "broker:9092"
      KAFKA_CLUSTERS_0_SCHEMAREGISTRY: "http://schema-registry:8081"
      # Second cluster configuration (KNEP proxy connection)
      KAFKA_CLUSTERS_1_NAME: "knep-proxy-cluster"
      KAFKA_CLUSTERS_1_BOOTSTRAPSERVERS: "knep:9092"
      KAFKA_CLUSTERS_1_SCHEMAREGISTRY: "http://schema-registry:8081"
      SERVER_PORT: 8082
    ports:
      - "8082:8082"
EOF
Note that the above config publishes the following ports to the host:
- kafka:9092 for plaintext auth
- kafka:9094 for SASL username/password auth
- kafka-ui:8082 for access to the Kafka UI
- schema-registry:8081 for access to the schema registry
- knep:9192 to knep:9292 for access to the KNEP proxy (the port range is wide to allow many virtual clusters to be created)
- knep:8080 for probes and metrics access to KNEP
The KNEP container will use environment variables from the knep.env file. Let's create it:
cat <<EOF > knep.env
KONNECT_API_TOKEN=\${KONNECT_TOKEN}
KONNECT_API_HOSTNAME=us.api.konghq.com
KONNECT_CONTROL_PLANE_ID=\${KONNECT_CONTROL_PLANE_ID}
EOF
Now let’s start the local setup:
docker compose up -d
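Before checking the logs, you can confirm that all containers are running; the broker, schema registry, and KNEP define health checks, so their status should eventually show as healthy:
docker compose ps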
Let’s look at the logs of the KNEP container to see if it started correctly:
docker compose logs knep
You should see something like this:
knep | 2025-04-30T08:59:58.004076Z WARN tokio-runtime-worker ThreadId(09) add_task{task_id="konnect_watch_config"}:task_run:check_dataplane_config{cp_config_url="/v2/control-planes/c6d325ec-0bd6-4fbc-b2c1-6a56c0a3edb0/declarative-config/native-event-proxy"}: knep::konnect: src/konnect/mod.rs:218: Konnect API returned 404, is the control plane ID correct?
This is expected, as we haven’t configured the Control Plane yet. We’ll do this in the next step.
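Even before the Control Plane is configured, KNEP's liveness probe (the same endpoint used by the Compose health check, published on port 8080) should respond successfully; if it doesn't, recheck the values in knep.env:
curl -f http://localhost:8080/health/probes/liveness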
Configure Kong Native Event Proxy control plane with a passthrough cluster
Let’s create the configuration file for the Control Plane. This file will define the backend cluster and the virtual cluster:
cat <<EOF > knep-config.yaml
virtual_clusters:
  - name: demo
    backend_cluster_name: kafka-1
    route_by:
      type: port
      port:
        min_broker_id: 1
    authentication: # don't set any authentication for now
      - type: anonymous
        mediation:
          type: anonymous
backend_clusters:
  - name: kafka-1
    bootstrap_servers:
      - broker:9092
listeners:
  port:
    - listen_address: 0.0.0.0
      advertised_host: knep
      listen_port_start: 9092
    - listen_address: 0.0.0.0
      advertised_host: localhost
      listen_port_start: 9192
EOF
Send a basic config to the Control Plane using the /declarative-config endpoint:
curl -X PUT "https://us.api.konghq.com/v2/control-planes/$KONNECT_CONTROL_PLANE_ID/declarative-config" \
-H "Authorization: Bearer $KONNECT_TOKEN" \
--json "$(jq -Rs '{config: .}' < knep-config.yaml)"
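After the configuration is accepted, the 404 warning from earlier should stop appearing. You can confirm this by tailing the KNEP logs again:
docker compose logs --tail 20 knep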
Check the cluster works
Now let’s check that the cluster works. We can use the Kafka UI to do this by going to http://localhost:8082 and checking the cluster list.
You should see the direct-kafka-cluster and knep-proxy-cluster clusters listed there.
You can also use the kafkactl command to check the cluster. First, let's set up the kafkactl config file:
cat <<EOF > kafkactl.yaml
contexts:
  direct:
    brokers:
      - localhost:9092
  knep:
    brokers:
      - localhost:9192
current-context: knep
EOF
Now let’s check the Kafka cluster directly:
kafkactl -C kafkactl.yaml --context direct list topics
You should see the topics listed there:
TOPIC                 PARTITIONS     REPLICATION FACTOR
__consumer_offsets    50             1
_schemas              1              1
Now let’s check the same command but through KNEP:
kafkactl -C kafkactl.yaml --context knep list topics
You should see a similar output:
TOPIC                 PARTITIONS     REPLICATION FACTOR
__consumer_offsets    50             1
_schemas              1              1
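As an optional smoke test, you can produce and consume a message through the proxy. This is a minimal sketch that assumes kafkactl's standard create topic, produce, and consume subcommands; the topic name demo-topic is arbitrary:
kafkactl -C kafkactl.yaml --context knep create topic demo-topic --partitions 3
kafkactl -C kafkactl.yaml --context knep produce demo-topic --value "hello through KNEP"
kafkactl -C kafkactl.yaml --context knep consume demo-topic --from-beginning --exit
The same topic should also be visible through the direct context, since the demo virtual cluster passes requests straight through to the backend cluster.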
Cleanup
Clean up Konnect environment
If you created a new control plane and want to conserve your free trial credits or avoid unnecessary charges, delete the new control plane used in this tutorial.
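One way to do this from the terminal is to delete the Control Plane with the Konnect API, using the same region, token, and Control Plane ID as above:
curl -X DELETE "https://us.api.konghq.com/v2/control-planes/$KONNECT_CONTROL_PLANE_ID" \
  -H "Authorization: Bearer $KONNECT_TOKEN"
You can also stop the local Docker Compose stack and remove its containers and volumes:
docker compose down -v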