Route Qwen Code CLI traffic through AI Gateway
Configure AI Proxy to forward requests to OpenAI, enable the File Log plugin to inspect traffic, and point Qwen Code CLI to the local proxy endpoint so all requests go through the Gateway for monitoring and control.
Prerequisites
Kong Konnect
This is a Konnect tutorial and requires a Konnect personal access token.
- Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.
- Export your token to an environment variable:

  export KONNECT_TOKEN='YOUR_KONNECT_PAT'

- Run the quickstart script to automatically provision a Control Plane and Data Plane, and configure your environment:

  curl -Ls https://get.konghq.com/quickstart | bash -s -- -k $KONNECT_TOKEN --deck-output

  This sets up a Konnect Control Plane named quickstart, provisions a local Data Plane, and prints out the following environment variable exports:

  export DECK_KONNECT_TOKEN=$KONNECT_TOKEN
  export DECK_KONNECT_CONTROL_PLANE_NAME=quickstart
  export KONNECT_CONTROL_PLANE_URL=https://us.api.konghq.com
  export KONNECT_PROXY_URL='http://localhost:8000'

  Copy and paste these into your terminal to configure your session.
Kong Gateway running
This tutorial requires Kong Gateway Enterprise. If you don’t have Kong Gateway set up yet, you can use the quickstart script with an enterprise license to get an instance of Kong Gateway running almost instantly.
- Export your license to an environment variable:

  export KONG_LICENSE_DATA='LICENSE-CONTENTS-GO-HERE'

- Run the quickstart script:

  curl -Ls https://get.konghq.com/quickstart | bash -s -- -e KONG_LICENSE_DATA

  Once Kong Gateway is ready, you will see the following message:

  Kong Gateway Ready
decK v1.43+
decK is a CLI tool for managing Kong Gateway declaratively with state files. To complete this tutorial, install decK version 1.43 or later.
This guide uses deck gateway apply, which directly applies entity configuration to your Gateway instance.
We recommend upgrading your decK installation to take advantage of this tool.
You can check your current decK version with deck version.
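For example, the following command prints the installed decK version so you can confirm it meets the 1.43 requirement:

deck version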
Required entities
For this tutorial, you’ll need Kong Gateway entities, like Gateway Services and Routes, pre-configured. These entities are essential for Kong Gateway to function but installing them isn’t the focus of this guide. Follow these steps to pre-configure them:
- Run the following command:

  echo '
  _format_version: "3.0"
  services:
    - name: example-service
      url: http://httpbin.konghq.com/anything
  routes:
    - name: example-route
      paths:
        - "/anything"
      service:
        name: example-service
  ' | deck gateway apply -
To learn more about entities, you can read our entities documentation.
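If you want to confirm the Gateway Service and Route were created before moving on, you can send a test request through the local proxy. This is an optional sanity check and assumes the quickstart Data Plane is listening on localhost:8000:

# Should return an HTTP 200 response from httpbin, proxied by Kong Gateway
curl -i http://localhost:8000/anything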
OpenAI API Key
This tutorial requires an OpenAI API key with access to GPT models. You can obtain an API key from the OpenAI Platform.
Export the OpenAI API key as an environment variable:
export DECK_OPENAI_API_KEY='YOUR OPENAI API KEY'
Qwen Code CLI
This tutorial uses the Qwen Code CLI tool. Install Node.js 18+ if needed (verify with node --version), then install and launch Qwen Code CLI:
- Run the following command in your terminal to install the Qwen Code CLI:

  npm install -g @qwen-code/qwen-code

- Once the installation process is complete, verify the installation:

  qwen --version

  The CLI will display the installed version number.
Configure the AI Proxy plugin
First, configure the AI Proxy plugin. The Qwen Code CLI uses OpenAI-compatible endpoints for LLM communication. The plugin handles authentication using a bearer token header and forwards requests to the specified model.
CLI tools installed across multiple developer machines typically require distributing API keys to each installation, which exposes credentials and makes rotation difficult. Routing CLI tools through AI Gateway removes this requirement. Developers authenticate against the gateway instead of directly to AI providers. You can centralize authentication, enforce rate limits, track usage costs, enforce guardrails, and cache repeated requests.
The max_request_body_size parameter is set to 4194304 bytes (4 MB) to accommodate the large code files and extended context windows that Qwen Code CLI sends during code analysis tasks.
echo '
_format_version: "3.0"
plugins:
  - name: ai-proxy
    config:
      max_request_body_size: 4194304
      route_type: llm/v1/chat
      logging:
        log_statistics: true
        log_payloads: true
      auth:
        header_name: Authorization
        header_value: Bearer ${{ env "DECK_OPENAI_API_KEY" }}
      model:
        provider: openai
        name: gpt-5
        options:
          max_tokens: 512
          temperature: 1.0
' | deck gateway apply -
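As one example of the centralized controls mentioned earlier, you can layer a rate limit onto the same Route that fronts the LLM traffic. The sketch below uses Kong's standard Rate Limiting plugin with an arbitrary limit of 30 requests per minute; it isn't required for this tutorial, and the numbers are placeholders you'd tune for your own workloads:

echo '
_format_version: "3.0"
plugins:
  - name: rate-limiting
    route: example-route
    config:
      # Example limit only: 30 requests per minute, counted locally on the Data Plane
      minute: 30
      policy: local
' | deck gateway apply -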
Configure the File Log plugin
Let’s configure the File Log plugin to inspect the traffic between Qwen Code CLI and AI Gateway. This plugin will create a local log file for examining requests and responses as Qwen Code CLI runs through Kong.
echo '
_format_version: "3.0"
plugins:
  - name: file-log
    service: example-service
    config:
      path: "/tmp/qwen.json"
' | deck gateway apply -
Export environment variables
Open a new terminal window and export the variables that Qwen Code CLI will use. Point OPENAI_BASE_URL to the local proxy endpoint where LLM traffic from Qwen Code CLI will route:
export OPENAI_BASE_URL="http://localhost:8000/anything"
export OPENAI_API_KEY="YOUR OPENAI API KEY"
export OPENAI_MODEL="gpt-5"
If you’re using a different Konnect proxy URL, be sure to replace http://localhost:8000 with your proxy URL.
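For example, if you ran the Konnect quickstart above and still have KONNECT_PROXY_URL exported in your shell, you can derive the base URL from it:

# Assumes KONNECT_PROXY_URL is still set in this shell (printed by the quickstart script)
export OPENAI_BASE_URL="$KONNECT_PROXY_URL/anything"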
Make sure that the OPENAI_MODEL variable points to the same model configured for the AI Proxy plugin.
Validate the configuration
Now you can test the Qwen Code CLI setup.
- In the terminal where you exported your environment variables, run:

  qwen

  You should see the Qwen Code CLI interface start up.

- Run a command to test the connection:

  Explain the singleton pattern in Python.

  Expected output will show the model's response to your prompt.

- Check that LLM traffic went through AI Gateway:

  docker exec kong-quickstart-gateway cat /tmp/qwen.json | jq

  Look for entries similar to:

  {
    ...
    "request": {
      "size": 53534,
      "uri": "/qwen/chat/completions",
      "method": "POST",
      "headers": {
        "user-agent": "QwenCode/0.6.2 (darwin; arm64)",
        "content-type": "application/json"
      }
    },
    "response": {
      "status": 200,
      "size": 36922,
      "headers": {
        "x-kong-llm-model": "openai/gpt-5",
        "content-type": "text/event-stream; charset=utf-8"
      }
    },
    "latencies": {
      "proxy": 8289,
      "kong": 43,
      "request": 9889
    }
    ...
  }
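The File Log plugin writes one JSON entry per line, so you can also filter the log with jq to pull out just the fields you care about. For example, the following shows which model served each request along with the recorded latencies:

docker exec kong-quickstart-gateway cat /tmp/qwen.json | jq '{model: .response.headers["x-kong-llm-model"], latencies: .latencies}'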
Cleanup
Clean up Konnect environment
If you created a new control plane and want to conserve your free trial credits or avoid unnecessary charges, delete the new control plane used in this tutorial.
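You can delete the control plane from the Konnect UI, or through the Konnect control planes API. The sketch below assumes the us Konnect region and reuses the personal access token exported earlier; replace CONTROL_PLANE_ID with the ID of the quickstart control plane returned by the first command:

# List control planes and note the ID of the one named "quickstart" (assumes the us region)
curl -s https://us.api.konghq.com/v2/control-planes \
  -H "Authorization: Bearer $KONNECT_TOKEN" | jq '.data[] | {id, name}'

# Delete the control plane by ID
curl -X DELETE https://us.api.konghq.com/v2/control-planes/CONTROL_PLANE_ID \
  -H "Authorization: Bearer $KONNECT_TOKEN"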
Destroy the Kong Gateway container
curl -Ls https://get.konghq.com/quickstart | bash -s -- -d