AI Gateway Enterprise: This plugin is only available as part of our AI Gateway Enterprise offering.
The AI Semantic Response Guard plugin extends the AI Prompt Guard plugin by filtering LLM responses based on semantic similarity to predefined rules. It helps prevent unwanted or unsafe responses when serving llm/v1/chat, llm/v1/completions, or llm/v1/embeddings requests through AI Gateway.
You can use a combination of allow and deny response rules to maintain integrity and compliance when returning responses from an LLM service.
The plugin analyzes the semantic content of the full LLM response before it is returned to the client. The matching behavior is as follows:
If any deny_responses are set and the response matches a pattern in the deny list, the response is blocked with a 400 Bad Request response.
If any allow_responses are set but the response matches none of the allowed patterns, the response is also blocked with a 400 Bad Request response.
If any allow_responses are set and the response matches one of the allowed patterns, the response is permitted.
If both deny_responses and allow_responses are set, the deny condition takes precedence. A response that matches a deny pattern will be blocked, even if it also matches an allow pattern. If the response does not match any deny pattern, it must still match an allow pattern to be permitted.
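The precedence rules above can be summarized in a short sketch. This is illustrative only; the function and parameter names are not part of the plugin:

```python
def guard_decision(matches_deny: bool, matches_allow: bool,
                   allow_rules_configured: bool) -> bool:
    """Return True when the response may be passed to the client."""
    if matches_deny:               # a deny match always blocks
        return False
    if allow_rules_configured:     # allow rules act as an allowlist
        return matches_allow       # must match at least one allow pattern
    return True                    # only deny rules configured, none matched

# Deny wins even when an allow pattern also matches:
assert guard_decision(True, True, True) is False
# No deny match and an allow match: permitted.
assert guard_decision(False, True, True) is True
# No deny match, allow rules set but none matched: blocked.
assert guard_decision(False, False, True) is False
```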
Disables streaming (stream=false) to ensure the full response body is buffered before analysis.
Intercepts the response body using the guard-response filter.
Extracts response text, supporting JSON parsing of multiple LLM formats and gzipped content.
Generates embeddings for the extracted text.
Searches the vector database (Redis or Pgvector) against configured allow_responses or deny_responses.
Applies the decision rules described above.
If a response is blocked or if a system error occurs during evaluation, the plugin returns a 400 Bad Request to the client without exposing that the Semantic Response Guard blocked it.
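The buffering and extraction steps above can be sketched as follows, assuming an OpenAI-compatible llm/v1/chat response body. This is a simplified illustration, not the plugin's internal implementation:

```python
import gzip
import json

def extract_response_text(body: bytes) -> str:
    """Decompress a gzipped body if needed, then pull the assistant text
    out of an OpenAI-style llm/v1/chat response."""
    if body[:2] == b"\x1f\x8b":  # gzip magic number
        body = gzip.decompress(body)
    payload = json.loads(body)
    return " ".join(
        choice["message"]["content"] for choice in payload.get("choices", [])
    )

raw = json.dumps({
    "choices": [{"message": {"role": "assistant", "content": "Hello there"}}]
}).encode()
compressed = gzip.compress(raw)
print(extract_response_text(compressed))  # -> Hello there
```

The extracted text is what gets embedded and compared against the configured rules; if anything in this step fails, the client still only sees a generic 400 Bad Request.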
A vector database can be used to store vector embeddings, or numerical representations, of data items. For example, each allow or deny rule is converted to a numerical representation and stored in the vector database, so that new LLM responses can be compared against the stored rule vectors to find semantically similar matches.
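The similarity lookup can be sketched with toy three-dimensional vectors (real embedding models produce hundreds or thousands of dimensions, and real deployments delegate the search to Redis or Pgvector; the rule names here are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest_rule(response_vec, rule_vecs):
    """Brute-force nearest-neighbor search over stored rule embeddings."""
    best = max(rule_vecs, key=lambda name: cosine(response_vec, rule_vecs[name]))
    return best, cosine(response_vec, rule_vecs[best])

rules = {
    "deny: medical advice": [0.9, 0.1, 0.0],
    "allow: product support": [0.1, 0.9, 0.2],
}
name, score = nearest_rule([0.85, 0.15, 0.05], rules)
print(name, round(score, 3))  # the deny rule is the closest match
```

If the best match exceeds the configured similarity threshold, the decision rules described earlier are applied to it.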
The AI Semantic Response Guard plugin supports the following vector databases:
Using config.vectordb.strategy: redis and parameters in config.vectordb.redis:
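The shape of such a configuration can be sketched as the JSON body one might send to Kong's Admin API. Only `strategy` and the `redis` block are named above; the connection fields (`host`, `port`) and the `dimensions`, `distance_metric`, and `threshold` keys are assumptions for illustration:

```python
import json

# Hypothetical configuration fragment; field names other than `strategy`
# and `redis` are assumptions, not confirmed plugin schema.
vectordb = {
    "strategy": "redis",
    "dimensions": 1024,           # must match the embedding model's output size
    "distance_metric": "cosine",
    "threshold": 0.8,             # similarity required to count as a match
    "redis": {"host": "redis.example.internal", "port": 6379},
}
print(json.dumps({"config": {"vectordb": vectordb}}, indent=2))
```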
If the plugin uses Redis as its vector database, you can authenticate to it through a supported cloud Redis provider.
This lets you rotate credentials seamlessly without relying on static passwords.
The following providers are supported:
AWS ElastiCache
Azure Managed Redis
Google Cloud Memorystore (with or without Valkey)
You need:
A running AWS ElastiCache instance: either ElastiCache for Valkey 7.2 or later, or ElastiCache for Redis OSS version 7.0 or later