The AI Prompt Guard plugin lets you configure a series of PCRE-compatible regular expressions as allow or deny lists, to guard against misuse of `llm/v1/chat` or `llm/v1/completions` requests.
You can use this plugin to allow or block specific prompts, words, or phrases, giving you more control over how an LLM service is used when called via Kong Gateway.
It does this by scanning all chat messages where the role is `user` for the configured expressions.
You can use a combination of allow and deny rules to preserve integrity and compliance when serving an LLM service using Kong Gateway.
- For `llm/v1/chat` type models: You can optionally configure the plugin to ignore existing chat history, in which case it only scans the trailing `user` message.
- For `llm/v1/completions` type models: There is only one `prompt` field, so the whole prompt is scanned on every request.
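The scanning behavior described above can be sketched in a few lines of Python. This is not the plugin's actual implementation (which uses PCRE rather than Python's `re`, and whose exact precedence rules may differ); it is a minimal model of deny-then-allow matching over `user` messages, assuming deny patterns are checked first:

```python
import re

def check_prompt(messages, allow_patterns, deny_patterns, scan_history=False):
    """Return True if the request should be allowed, False if blocked."""
    # Collect the "user" messages to scan: all of them, or only the trailing one
    user_msgs = [m["content"] for m in messages if m["role"] == "user"]
    targets = user_msgs if scan_history else user_msgs[-1:]

    for text in targets:
        # Any deny match blocks the request outright (assumed precedence)
        if any(re.search(p, text) for p in deny_patterns):
            return False
        # If allow patterns are configured, at least one must match
        if allow_patterns and not any(re.search(p, text) for p in allow_patterns):
            return False
    return True
```

For instance, with an allow list of `["weather"]` and a deny list of `["ignore previous"]`, a trailing user message asking about the weather passes, while any message matching the deny pattern, or matching no allow pattern, is blocked.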
This plugin extends the functionality of the AI Proxy plugin, and requires either AI Proxy or AI Proxy Advanced to be configured first. To set up AI Proxy quickly, see Get started with AI Gateway.