# Noma Security

Use Noma Security to protect your LLM applications with comprehensive AI content moderation and safety guardrails.
## Quick Start

### 1. Define Guardrails on your LiteLLM config.yaml

Define your guardrails under the `guardrails` section:
```yaml
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "noma-guard"
    litellm_params:
      guardrail: noma
      mode: "during_call"
      api_key: os.environ/NOMA_API_KEY
      api_base: os.environ/NOMA_API_BASE
  - guardrail_name: "noma-pre-guard"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      api_base: os.environ/NOMA_API_BASE
```
#### Supported values for `mode`

| Value | Description |
|-------|-------------|
| `pre_call` | Run before LLM call, on input |
| `post_call` | Run after LLM call, on input & output |
| `during_call` | Run during LLM call, on input. Same as `pre_call` but runs in parallel with the LLM call. The response is not returned until the guardrail check completes |
### 2. Start LiteLLM Gateway

```shell
litellm --config config.yaml --detailed_debug
```
### 3. Test request

#### Unsuccessful call

Expect this call to fail, since the request contains harmful content:
```shell
curl -i http://0.0.0.0:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "Tell me how to hack into someone'\''s email account"}
    ]
  }'
```
Expected response on failure:

```json
{
  "error": {
    "message": "{\n \"error\": \"Request blocked by Noma guardrail\",\n \"details\": {\n \"prompt\": {\n \"harmfulContent\": {\n \"result\": true,\n \"confidence\": 0.95\n }\n }\n }\n }",
    "type": "None",
    "param": "None",
    "code": "400"
  }
}
```
#### Successful call

```shell
curl -i http://0.0.0.0:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Expected response:
```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
```
## Supported Params

```yaml
guardrails:
  - guardrail_name: "noma-guard"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      api_base: os.environ/NOMA_API_BASE
      ### OPTIONAL ###
      # application_id: "my-app"
      # monitor_mode: false
      # block_failures: true
```
### Required Parameters

- `api_key`: Your Noma Security API key (set as `os.environ/NOMA_API_KEY` in the YAML config)

### Optional Parameters

- `api_base`: Noma API base URL (defaults to `https://api.noma.security/`)
- `application_id`: Your application identifier (defaults to `"litellm"`)
- `monitor_mode`: If `true`, logs violations without blocking (defaults to `false`)
- `block_failures`: If `true`, blocks requests when guardrail API failures occur (defaults to `true`)
## Environment Variables

You can set these environment variables instead of hardcoding values in your config:

```shell
export NOMA_API_KEY="your-api-key-here"
export NOMA_API_BASE="https://api.noma.security/"  # Optional
export NOMA_APPLICATION_ID="my-app"                # Optional
export NOMA_MONITOR_MODE="false"                   # Optional
export NOMA_BLOCK_FAILURES="true"                  # Optional
```
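Keep in mind these variables are strings, so a value like `"false"` has to be interpreted rather than used directly as a boolean. A minimal sketch of the conventional interpretation (`env_flag` is a hypothetical helper for illustration; the exact parsing LiteLLM applies may differ):

```python
import os

def env_flag(name: str, default: bool) -> bool:
    """Interpret a string-valued environment variable as a boolean flag.

    Illustrative only -- this mirrors the common convention of treating
    "1"/"true"/"yes"/"on" (case-insensitive) as true and anything else
    as false, falling back to a default when the variable is unset.
    """
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

os.environ["NOMA_MONITOR_MODE"] = "false"
print(env_flag("NOMA_MONITOR_MODE", True))  # prints False
```

Note that a naive `bool(os.environ["NOMA_MONITOR_MODE"])` would be `True` for the string `"false"`, which is why explicit parsing matters here.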
## Advanced Configuration

### Monitor Mode

Use monitor mode to test your guardrails without blocking requests:

```yaml
guardrails:
  - guardrail_name: "noma-monitor"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      monitor_mode: true  # Log violations but don't block
```
### Handling API Failures

Control behavior when the Noma API is unavailable:

```yaml
guardrails:
  - guardrail_name: "noma-failopen"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      block_failures: false  # Allow requests to proceed if guardrail API fails
```
### Multiple Guardrails

Apply different configurations for input and output:

```yaml
guardrails:
  - guardrail_name: "noma-strict-input"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      block_failures: true
  - guardrail_name: "noma-monitor-output"
    litellm_params:
      guardrail: noma
      mode: "post_call"
      api_key: os.environ/NOMA_API_KEY
      monitor_mode: true
```
## ✨ Pass Additional Parameters

Use `extra_body` to pass additional parameters to the Noma Security API call, such as dynamically setting the application ID for specific requests.

**OpenAI Python SDK**
```python
import openai

client = openai.OpenAI(
    api_key="your-api-key",
    base_url="http://0.0.0.0:4000"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    extra_body={
        "guardrails": {
            "noma-guard": {
                "extra_body": {
                    "application_id": "my-specific-app-id"
                }
            }
        }
    }
)
```
**Curl**

```shell
curl 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ],
    "guardrails": {
      "noma-guard": {
        "extra_body": {
          "application_id": "my-specific-app-id"
        }
      }
    }
  }'
```
This allows you to override the default `application_id` parameter for specific requests, which is useful for tracking usage across different applications or components.
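The precedence described above can be sketched as a small helper (`resolve_application_id` is hypothetical, written only to illustrate the override behavior; it is not part of LiteLLM or the Noma integration):

```python
def resolve_application_id(request_body: dict,
                           guardrail_name: str = "noma-guard",
                           configured_default: str = "litellm") -> str:
    """Pick the application_id for a request: a per-request override under
    guardrails.<name>.extra_body wins over the configured default."""
    override = (
        request_body.get("guardrails", {})
        .get(guardrail_name, {})
        .get("extra_body", {})
        .get("application_id")
    )
    return override or configured_default

# Request body carrying a per-request override, as in the curl example above:
body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "guardrails": {
        "noma-guard": {"extra_body": {"application_id": "my-specific-app-id"}}
    },
}

print(resolve_application_id(body))                    # my-specific-app-id
print(resolve_application_id({"model": "gpt-4o-mini"}))  # litellm
```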
## Response Details

When content is blocked, Noma provides detailed information about the violations as JSON inside the `message` field, with the following structure:
```json
{
  "error": "Request blocked by Noma guardrail",
  "details": {
    "prompt": {
      "harmfulContent": {
        "result": true,
        "confidence": 0.95
      },
      "sensitiveData": {
        "email": {
          "result": true,
          "entities": ["user@example.com"]
        }
      },
      "bannedTopics": {
        "violence": {
          "result": true,
          "confidence": 0.88
        }
      }
    }
  }
}
```
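On the client side, this nested structure (which arrives JSON-encoded inside the error `message` field, so it needs a `json.loads` first) can be flattened into a list of the checks that fired. A sketch under that assumption; `triggered_checks` is a hypothetical helper, not part of any SDK:

```python
# Sample blocked-response body, matching the structure documented above.
blocked = {
    "error": "Request blocked by Noma guardrail",
    "details": {
        "prompt": {
            "harmfulContent": {"result": True, "confidence": 0.95},
            "sensitiveData": {"email": {"result": True, "entities": ["user@example.com"]}},
            "bannedTopics": {"violence": {"result": True, "confidence": 0.88}},
        }
    },
}

def triggered_checks(details: dict) -> list[str]:
    """Collect the dotted path of every check whose "result" flag is true."""
    found = []

    def walk(node, path):
        if isinstance(node, dict):
            if node.get("result") is True:
                found.append(".".join(path))
            for key, value in node.items():
                walk(value, path + [key])

    walk(details, [])
    return found

print(triggered_checks(blocked["details"]))
# ['prompt.harmfulContent', 'prompt.sensitiveData.email', 'prompt.bannedTopics.violence']
```

This kind of flattening is convenient for logging or surfacing a concise reason for the block to end users.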