All AI Gateway documentation

  • AI Azure Content Safety

    Use Azure AI Content Safety to check and audit AI Proxy plugin messages before proxying them to an upstream LLM

  • AI Prompt Decorator

    Prepend or append an array of llm/v1/chat messages to a user's chat history (see the sketch after this list)

  • AI Prompt Guard

    Check llm/v1/chat or llm/v1/completions requests against a list of allowed or denied expressions (see the sketch after this list)

  • AI Prompt Template

    Provide fill-in-the-blank AI prompts to users

  • AI Proxy

    The AI Proxy plugin lets you transform and proxy requests to a number of AI providers and models. See the configuration sketch after this list.

  • AI Proxy Advanced

    The AI Proxy Advanced plugin lets you transform and proxy requests to multiple AI providers and models at the same time. This lets you set up load balancing between targets.

  • AI RAG Injector

    Create RAG pipelines by automatically injecting content from a vector database

  • AI Rate Limiting Advanced

    Apply rate limiting to the LLM providers used by any of the AI plugins.

  • AI Request Transformer

    Use an LLM service to transform a client request body prior to proxying the request to the upstream server

  • AI Response Transformer

    Use an LLM service to transform the upstream HTTP(S) response prior to forwarding it to the client

  • AI Semantic Prompt Guard

    Use semantic matching to create allow and deny lists of topics that can be requested across every LLM.

  • AI Sanitizer

    Protect sensitive information in client request bodies before they reach upstream services
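
The AI Proxy entry above is the core building block for most of the other plugins. As a rough illustration, the declarative-configuration sketch below shows one way it might be attached to a service; the service name, route path, placeholder API key, and model choice are assumptions for the example, not values taken from this page.

    # Minimal declarative-config sketch (illustrative values only).
    _format_version: "3.0"
    services:
      - name: llm-service            # hypothetical service name
        url: http://localhost:32000  # placeholder; AI Proxy sends traffic to the configured provider
        routes:
          - name: chat-route         # hypothetical route name
            paths:
              - /chat
        plugins:
          - name: ai-proxy
            config:
              route_type: llm/v1/chat
              auth:
                header_name: Authorization
                header_value: Bearer <OPENAI_API_KEY>   # replace with a real key
              model:
                provider: openai
                name: gpt-4o          # example model
                options:
                  max_tokens: 512
                  temperature: 1.0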

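AI Prompt Guard screens requests with regular expressions. A minimal sketch, assuming the chat-route defined in the previous example; the allow and deny patterns are made-up illustrations, and the full schema is documented on the plugin's own page.

    # Attach AI Prompt Guard to the hypothetical chat-route from the sketch above.
    plugins:
      - name: ai-prompt-guard
        route: chat-route
        config:
          allow_patterns:
            - ".*(orders|shipping|returns).*"        # example allowed topics
          deny_patterns:
            - ".*(credit card|password|api key).*"   # example denied expressions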

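AI Prompt Decorator injects fixed messages around the caller's chat history. The sketch below prepends a single system message; the wording of that message and the route binding are again assumptions for illustration.

    # Prepend a system message to every llm/v1/chat request on the same route.
    plugins:
      - name: ai-prompt-decorator
        route: chat-route
        config:
          prompts:
            prepend:
              - role: system
                content: "You are a support assistant. Answer only questions about orders and shipping."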