Minimum Version: Kong Gateway 3.8

Tags: #ai

You can proxy requests to Amazon Bedrock AI models through AI Gateway using the AI Proxy and AI Proxy Advanced plugins. This reference documents all supported AI capabilities, configuration requirements, and provider-specific details needed for proper integration.

Upstream paths

AI Gateway automatically routes requests to the appropriate Amazon Bedrock API endpoints. The following table shows the upstream paths used for each capability.

| Capability | Upstream path or API |
|---|---|
| Chat completions | Converse and ConverseStream API |
| Completions | Converse and ConverseStream API |
| Embeddings | InvokeModel and InvokeWithResponseStream API |
| Function calling | Converse API with tool configuration |
| Files | /openai/files |
| Batches | ModelInvocationJob API |
| Image generations | InvokeModel API |
| Image edits | InvokeModel API |
| Video generations | StartAsyncInvoke API |

Supported capabilities

The following tables show the AI capabilities supported by the Amazon Bedrock provider when used with the AI Proxy or AI Proxy Advanced plugin.

Set the plugin’s route_type based on the capability you want to use. See the tables below for supported route types.

Text generation

Support for Amazon Bedrock basic text generation capabilities including chat, completions, and embeddings:

| Capability | Route type | Streaming | Model example | Min version |
|---|---|---|---|---|
| Chat completions | llm/v1/chat | Yes | Use the model name for the specific LLM provider | 3.8 |
| Completions | llm/v1/completions | Yes | Use the model name for the specific LLM provider | 3.8 |
| Embeddings | llm/v1/embeddings | No | Use the model name for the specific LLM provider | 3.11 |

Advanced text generation

Support for Amazon Bedrock function calling to allow Amazon Bedrock models to use external tools and APIs:

| Capability | Route type | Model example | Min version |
|---|---|---|---|
| Function calling | llm/v1/chat | Model-dependent. Supported for Claude, Command, and select models | 3.8 |
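Function-calling requests use the standard OpenAI tools schema, which AI Gateway translates into the Converse API tool configuration. A sketch of a request body — the model name and the get_weather tool are illustrative, not part of any real API:

```json
{
    "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "messages": [
        { "role": "user", "content": "What is the weather in Berlin?" }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": { "type": "string" }
                    },
                    "required": ["city"]
                }
            }
        }
    ]
}
```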

Processing

Support for Amazon Bedrock file operations, batch operations, assistants, and response handling:

| Capability | Route type | Model example | Min version |
|---|---|---|---|
| Files¹ | llm/v1/files | n/a | n/a |
| Batches² | llm/v1/batches | n/a | n/a |

¹ Amazon Bedrock does not have a dedicated files API. File storage uses Amazon S3.

² Batch processing for Bedrock is supported only in the native format through the SDK.

Image

Support for Amazon Bedrock image generation and editing capabilities:

| Capability | Route type | Model example | Min version |
|---|---|---|---|
| Generations | image/v1/images/generations | Use the model name for the specific LLM provider | 3.11 |
| Edits | image/v1/images/edits | Use the model name for the specific LLM provider | 3.11 |

For requests with large payloads, consider increasing config.max_request_body_size to three times the raw binary size.

Supported image sizes and formats vary by model. Refer to your provider’s documentation for allowed dimensions and requirements.
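As a minimal sketch, the body-size guidance above could be applied in a declarative (kong.yml) plugin config — the value here is illustrative, sized for a payload of roughly 8 MB of raw binary data:

```yaml
plugins:
  - name: ai-proxy
    config:
      # Roughly 3x the raw binary size of the largest expected payload
      max_request_body_size: 25165824  # 24 MB, illustrative
```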

Video

Support for Amazon Bedrock video generation capabilities:

| Capability | Route type | Model example | Min version |
|---|---|---|---|
| Generations | video/v1/videos/generations | Use the model name for the specific LLM provider | 3.13 |

For requests with large payloads (video generation), consider increasing config.max_request_body_size to three times the raw binary size.

Amazon Bedrock base URL

The base URL is https://bedrock-runtime.{region}.amazonaws.com, where {region} is your AWS region.

AI Gateway uses this URL automatically. You only need to configure a URL if you’re using a self-hosted or Amazon Bedrock-compatible endpoint, in which case set the upstream_url plugin option.
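A sketch of overriding the upstream, assuming a declarative config and an illustrative internal hostname — upstream_url sits under the model options in the AI Proxy schema:

```yaml
plugins:
  - name: ai-proxy
    config:
      route_type: llm/v1/chat
      model:
        provider: bedrock
        name: anthropic.claude-3-5-sonnet-20240620-v1:0  # illustrative
        options:
          # Only needed for self-hosted or Bedrock-compatible endpoints
          upstream_url: https://bedrock.internal.example.com
```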

Supported native LLM formats for Amazon Bedrock

By default, the AI Proxy plugin uses OpenAI-compatible request formats. Set config.llm_format to a native format to use Amazon Bedrock-specific APIs and features.

The following native Amazon Bedrock APIs are supported:

LLM format: bedrock

Supported APIs:
  • /model/{model_name}/converse
  • /model/{model_name}/converse-stream
  • /model/{model_name}/invoke
  • /model/{model_name}/invoke-with-response-stream
  • /model/{model_name}/retrieveAndGenerate
  • /model/{model_name}/retrieveAndGenerateStream
  • /model/{model_name}/rerank
  • /model/{model_name}/async-invoke
  • /model-invocations
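As a sketch, routing native-format traffic only requires switching the plugin's format — the region value here is illustrative:

```yaml
plugins:
  - name: ai-proxy
    config:
      # Accept native Bedrock request bodies instead of OpenAI-compatible ones
      llm_format: bedrock
      route_type: llm/v1/chat
      model:
        provider: bedrock
        options:
          bedrock:
            aws_region: us-west-2  # illustrative
```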

Statistics logging limitations for native formats

  • Statistics logging is not available for image generation or editing APIs for Amazon Bedrock

Configure Amazon Bedrock with AI Proxy

To use Amazon Bedrock with AI Gateway, configure the AI Proxy or AI Proxy Advanced plugin.

Here’s a minimal configuration for chat completions:
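The sketch below assumes a declarative (kong.yml) config with static IAM credentials; the model name and region are illustrative placeholders:

```yaml
plugins:
  - name: ai-proxy
    config:
      route_type: llm/v1/chat
      auth:
        # Static IAM credentials; assumed roles are also supported
        aws_access_key_id: "<AWS_ACCESS_KEY_ID>"
        aws_secret_access_key: "<AWS_SECRET_ACCESS_KEY>"
      model:
        provider: bedrock
        name: anthropic.claude-3-5-sonnet-20240620-v1:0  # illustrative
        options:
          bedrock:
            aws_region: us-east-1  # illustrative
```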

For more configuration options and examples, see the AI Proxy and AI Proxy Advanced plugin configuration references.

FAQs

How do I use cross-region inference?

For cross-region inference, prefix the model ID with a geographic identifier:

{geography-prefix}.{provider}.{model-name}...

For example: us.anthropic.claude-sonnet-4-5-20250929-v1:0

| Prefix | Geography |
|---|---|
| us. | United States |
| eu. | European Union |
| apac. | Asia-Pacific |
| global. | All commercial regions |

For a full list of supported cross-region inference profiles, see Supported Regions and models for inference profiles in the AWS documentation.
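The prefix scheme above can be sketched as a tiny helper (the function name is hypothetical, not part of any SDK):

```python
def cross_region_model_id(geo_prefix: str, model_id: str) -> str:
    """Prepend a geographic prefix (e.g. 'us.' or 'eu.') to a Bedrock model ID."""
    if not geo_prefix.endswith("."):
        geo_prefix += "."
    return geo_prefix + model_id

print(cross_region_model_id("us", "anthropic.claude-sonnet-4-5-20250929-v1:0"))
# → us.anthropic.claude-sonnet-4-5-20250929-v1:0
```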

How do I send Bedrock-specific parameters in OpenAI-format requests?

Use the extra_body feature when sending requests in OpenAI format:

    curl http://localhost:8000 \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
          "model": "amazon.nova-reel-v1:0",
          "prompt": "A large red square that is rotating",
          "extra_body": {
              "fps": 24
          }
      }'

How do I use Amazon Bedrock guardrails?

Add a guardrailConfig object to your request body:

      {
          "messages": [
              {
                  "role": "system",
                  "content": "You are a scientist."
              },
              {
                  "role": "user",
                  "content": "What is the Boltzmann equation?"
              }
          ],
          "guardrailConfig": {
              "guardrailIdentifier": "$GUARDRAIL-IDENTIFIER",
              "guardrailVersion": "1",
              "trace": "enabled"
          }
      }

This feature requires Kong Gateway 3.9 or later. For more details, see Guardrails and content safety and the AWS Bedrock guardrails documentation.

How do I use the Amazon Bedrock rerank API?

Configure AI Proxy with the bedrock provider and set up AWS authentication using IAM credentials or assumed roles. See Use AWS Bedrock rerank API with AI Proxy.
