Use LangChain with AI Proxy in Kong Gateway

Uses: Kong Gateway, AI Gateway, decK
Minimum version: Kong Gateway 3.6

TL;DR

You can configure LangChain scripts to use your AI Gateway Route by replacing the base_url parameter in the LangChain model instantiation with your proxy URL.

Prerequisites

This is a Konnect tutorial and requires a Konnect personal access token.

  1. Create a new personal access token by opening the Konnect PAT page and selecting Generate Token.

  2. Export your token to an environment variable:

     export KONNECT_TOKEN='YOUR_KONNECT_PAT'
    
  3. Run the quickstart script to automatically provision a Control Plane and Data Plane, and configure your environment:

     curl -Ls https://get.konghq.com/quickstart | bash -s -- -k $KONNECT_TOKEN --deck-output
    

    This sets up a Konnect Control Plane named quickstart, provisions a local Data Plane, and prints out the following environment variable exports:

     export DECK_KONNECT_TOKEN=$KONNECT_TOKEN
     export DECK_KONNECT_CONTROL_PLANE_NAME=quickstart
     export KONNECT_CONTROL_PLANE_URL=https://us.api.konghq.com
     export KONNECT_PROXY_URL='http://localhost:8000'
    

    Copy and paste these into your terminal to configure your session.

If you prefer to run Kong Gateway locally instead of using Konnect, this tutorial requires Kong Gateway Enterprise. You can use the quickstart script with an enterprise license to get an instance of Kong Gateway running almost instantly.

  1. Export your license to an environment variable:

     export KONG_LICENSE_DATA='LICENSE-CONTENTS-GO-HERE'
    
  2. Run the quickstart script:

     curl -Ls https://get.konghq.com/quickstart | bash -s -- -e KONG_LICENSE_DATA 
    

    Once Kong Gateway is ready, you will see the following message:

     Kong Gateway Ready
    

decK is a CLI tool for managing Kong Gateway declaratively with state files. To complete this tutorial you will first need to install decK.

For this tutorial, you’ll need Kong Gateway entities, like Gateway Services and Routes, pre-configured. These entities are essential for Kong Gateway to function but installing them isn’t the focus of this guide. Follow these steps to pre-configure them:

  1. Run the following command:

    echo '
    _format_version: "3.0"
    services:
      - name: example-service
        url: http://httpbin.konghq.com/anything
    routes:
      - name: example-route
        paths:
        - "/anything"
        service:
          name: example-service
    ' | deck gateway apply -
    

To learn more about entities, you can read our entities documentation.

This tutorial uses OpenAI:

  1. Create an OpenAI account.
  2. Get an API key.
  3. Create a decK variable with the API key:
export DECK_OPENAI_API_KEY='YOUR OPENAI API KEY'

Configure the AI Proxy plugin

Enable the AI Proxy plugin with your OpenAI API key and the model details. In this example, we’ll use the GPT-4o model.

echo '
_format_version: "3.0"
plugins:
  - name: ai-proxy
    config:
      route_type: llm/v1/chat
      auth:
        header_name: Authorization
        header_value: Bearer ${{ env "DECK_OPENAI_API_KEY" }}
      model:
        provider: openai
        name: gpt-4o
' | deck gateway apply -
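With route_type set to llm/v1/chat, the Route accepts OpenAI-style chat completion payloads. As an illustrative sketch (the model and message values below simply mirror this tutorial's script), the request body that clients send through the proxy looks like this:

```python
import json

# OpenAI-style chat completion body; a Route with route_type llm/v1/chat
# accepts this shape and forwards it to the configured model.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "What are you?"}
    ],
}

body = json.dumps(payload)
print(body)
```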

Add authentication

To secure access to your Route, create a Consumer and set up an authentication plugin.

Note that LangChain expects authentication as an Authorization header with a value starting with Bearer. You can use plugins like OAuth 2.0 Authentication or OpenID Connect to generate Bearer tokens. In this example, for testing purposes, we’ll recreate this pattern using the Key Authentication plugin.

echo '
_format_version: "3.0"
plugins:
  - name: key-auth
    route: example-route
    config:
      key_names:
      - Authorization
consumers:
  - username: ai-user
    keyauth_credentials:
    - key: Bearer my-api-key
' | deck gateway apply -

Install LangChain

Load the LangChain SDK into your Python dependencies:

pip install langchain-openai

Create a LangChain script

Use the following command to create a file named app.py containing a LangChain Python script:

If you're running Kong Gateway locally:

echo 'from langchain_openai import ChatOpenAI

kong_url = "http://127.0.0.1:8000"
kong_route = "anything"

llm = ChatOpenAI(
    base_url=f"{kong_url}/{kong_route}",
    model="gpt-4o",
    api_key="my-api-key"
)

response = llm.invoke("What are you?")
print(f"$ ChainAnswer:> {response.content}")' > app.py

If you're using Konnect, read the proxy URL from the KONNECT_PROXY_URL environment variable instead:

echo 'from langchain_openai import ChatOpenAI
import os

kong_url = os.environ["KONNECT_PROXY_URL"]
kong_route = "anything"

llm = ChatOpenAI(
    base_url=f"{kong_url}/{kong_route}",
    model="gpt-4o",
    api_key="my-api-key"
)

response = llm.invoke("What are you?")
print(f"$ ChainAnswer:> {response.content}")' > app.py

With the base_url parameter, we can override the OpenAI base URL that LangChain uses by default with the URL to our Kong Gateway Route. This way, we can proxy requests and apply Kong Gateway plugins, while also using LangChain integrations and tools.
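The OpenAI client that langchain_openai wraps appends the standard endpoint path ("/chat/completions" for chat models) to base_url, so the proxied request lands on your Route. A quick sketch of how the final URL is composed (values match this tutorial's script):

```python
kong_url = "http://127.0.0.1:8000"
kong_route = "anything"
base_url = f"{kong_url}/{kong_route}"

# The OpenAI client appends the chat completions endpoint path to base_url,
# so the request Kong Gateway receives targets the /anything Route.
request_url = f"{base_url}/chat/completions"
print(request_url)  # http://127.0.0.1:8000/anything/chat/completions
```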

In the api_key parameter, we’ll add the API key we created, without the Bearer prefix, which is added automatically by LangChain.
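To see why the plain key works with the Key Authentication setup above, here is a minimal sketch (not LangChain internals verbatim) of the Authorization header the OpenAI client builds from api_key:

```python
api_key = "my-api-key"

# The OpenAI client adds the Bearer prefix automatically, producing exactly
# the credential string ("Bearer my-api-key") stored on the key-auth Consumer.
authorization = f"Bearer {api_key}"
print(authorization)  # Bearer my-api-key
```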

Validate

Run your script to validate that LangChain can access the Route:

python3 ./app.py

The response should look like this:

$ ChainAnswer:> I am an AI language model created by OpenAI, designed to assist with understanding and generating human-like text based on the input I receive. I can help answer questions, provide explanations, and assist with a variety of tasks involving language. What would you like to know or discuss today?

Cleanup

If you created a new control plane and want to conserve your free trial credits or avoid unnecessary charges, delete the new control plane used in this tutorial.

curl -Ls https://get.konghq.com/quickstart | bash -s -- -d