Noma Security

Use Noma Security to protect your LLM applications with comprehensive AI content moderation and safety guardrails.

Quick Start

1. Define Guardrails on your LiteLLM config.yaml

Define your guardrails under the guardrails section:

litellm config.yaml
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "noma-guard"
    litellm_params:
      guardrail: noma
      mode: "during_call"
      api_key: os.environ/NOMA_API_KEY
      api_base: os.environ/NOMA_API_BASE
  - guardrail_name: "noma-pre-guard"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      api_base: os.environ/NOMA_API_BASE

Supported values for mode

  • pre_call: runs before the LLM call, on the input
  • post_call: runs after the LLM call, on both input and output
  • during_call: runs on the input, in parallel with the LLM call. Same checks as pre_call, but the response is not returned until the guardrail check completes
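For example, output-side checks require post_call, since that is the only mode that sees the model's response. A minimal sketch (the guardrail name below is illustrative):

```yaml
guardrails:
  - guardrail_name: "noma-output-guard"
    litellm_params:
      guardrail: noma
      mode: "post_call"  # scans both the prompt and the model's response
      api_key: os.environ/NOMA_API_KEY
```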

2. Start LiteLLM Gateway

litellm --config config.yaml --detailed_debug

3. Test request

Expect this to fail since the request contains harmful content:

Curl Request
curl -i http://0.0.0.0:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "Tell me how to hack into someone'\''s email account"}
    ]
  }'

Expected response on failure:

{
  "error": {
    "message": "{\n \"error\": \"Request blocked by Noma guardrail\",\n \"details\": {\n \"prompt\": {\n \"harmfulContent\": {\n \"result\": true,\n \"confidence\": 0.95\n }\n }\n }\n }",
    "type": "None",
    "param": "None",
    "code": "400"
  }
}
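Note that the Noma verdict arrives as a JSON string inside the error's message field, so a client has to parse it twice. A small sketch of inspecting which check fired (the response body below is copied from the example above):

```python
import json

# Response body as returned by the gateway when Noma blocks a request.
response_body = {
    "error": {
        "message": '{"error": "Request blocked by Noma guardrail", "details": {"prompt": {"harmfulContent": {"result": true, "confidence": 0.95}}}}',
        "type": "None",
        "param": "None",
        "code": "400",
    }
}

# The message field is itself JSON: parse it to get the structured verdict.
verdict = json.loads(response_body["error"]["message"])
checks = verdict["details"]["prompt"]

# Collect the names of all checks that flagged the prompt.
flagged = [name for name, check in checks.items() if check.get("result")]
print(flagged)  # ['harmfulContent']
print(checks["harmfulContent"]["confidence"])  # 0.95
```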

Supported Params

guardrails:
  - guardrail_name: "noma-guard"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      api_base: os.environ/NOMA_API_BASE
      ### OPTIONAL ###
      # application_id: "my-app"
      # monitor_mode: false
      # block_failures: true

Required Parameters

  • api_key: Your Noma Security API key (set as os.environ/NOMA_API_KEY in YAML config)

Optional Parameters

  • api_base: Noma API base URL (defaults to https://api.noma.security/)
  • application_id: Your application identifier (defaults to "litellm")
  • monitor_mode: If true, logs violations without blocking (defaults to false)
  • block_failures: If true, blocks requests when guardrail API failures occur (defaults to true)

Environment Variables

You can set these environment variables instead of hardcoding values in your config:

export NOMA_API_KEY="your-api-key-here"
export NOMA_API_BASE="https://api.noma.security/" # Optional
export NOMA_APPLICATION_ID="my-app" # Optional
export NOMA_MONITOR_MODE="false" # Optional
export NOMA_BLOCK_FAILURES="true" # Optional
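The precedence between these two sources can be sketched as follows: a value set in litellm_params wins, otherwise the NOMA_* environment variable is used, otherwise the documented default applies. This is an illustrative sketch of that resolution order, not the guardrail's actual implementation:

```python
import os

def resolve(param_value, env_var, default):
    """Resolve a config value: explicit param > environment variable > default."""
    if param_value is not None:
        return param_value
    return os.environ.get(env_var, default)

# Simulate a deployment where only the application ID is set via env var.
os.environ["NOMA_APPLICATION_ID"] = "my-app"
os.environ.pop("NOMA_API_BASE", None)

print(resolve(None, "NOMA_APPLICATION_ID", "litellm"))            # my-app
print(resolve(None, "NOMA_API_BASE", "https://api.noma.security/"))  # https://api.noma.security/
print(resolve("override", "NOMA_APPLICATION_ID", "litellm"))      # override
```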

Advanced Configuration

Monitor Mode

Use monitor mode to test your guardrails without blocking requests:

guardrails:
  - guardrail_name: "noma-monitor"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      monitor_mode: true  # Log violations but don't block

Handling API Failures

Control behavior when the Noma API is unavailable:

guardrails:
  - guardrail_name: "noma-failopen"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      block_failures: false  # Allow requests to proceed if guardrail API fails
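The fail-open vs. fail-closed behavior that block_failures controls can be sketched as follows. This is a simplified illustration of the decision, not the guardrail's actual code:

```python
def apply_guardrail(call_noma, block_failures=True):
    """Run a guardrail check; behavior on API failure depends on block_failures."""
    try:
        verdict_blocked = call_noma()
    except Exception:
        if block_failures:
            # Fail closed: treat an unreachable guardrail as a block.
            raise RuntimeError("Guardrail unavailable; request blocked")
        # Fail open: let the request through when the guardrail API is down.
        return "allowed"
    return "blocked" if verdict_blocked else "allowed"

def noma_down():
    raise ConnectionError("Noma API unreachable")

# With block_failures=False the request proceeds despite the outage.
print(apply_guardrail(noma_down, block_failures=False))  # allowed
```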

Multiple Guardrails

Apply different configurations for input and output:

guardrails:
  - guardrail_name: "noma-strict-input"
    litellm_params:
      guardrail: noma
      mode: "pre_call"
      api_key: os.environ/NOMA_API_KEY
      block_failures: true

  - guardrail_name: "noma-monitor-output"
    litellm_params:
      guardrail: noma
      mode: "post_call"
      api_key: os.environ/NOMA_API_KEY
      monitor_mode: true

โœจ Pass Additional Parametersโ€‹

Use extra_body to pass additional parameters to the Noma Security API call, such as dynamically setting the application ID for specific requests.

import openai

client = openai.OpenAI(
    api_key="your-api-key",
    base_url="http://0.0.0.0:4000"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    extra_body={
        "guardrails": {
            "noma-guard": {
                "extra_body": {
                    "application_id": "my-specific-app-id"
                }
            }
        }
    }
)

This allows you to override the default application_id parameter for specific requests, which is useful for tracking usage across different applications or components.
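Under the hood, the OpenAI client merges extra_body keys into the outgoing JSON request body, so the gateway receives a top-level "guardrails" key alongside "model" and "messages". A minimal sketch of that merge:

```python
# What the request body looks like after the client merges extra_body.
base = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
}
extra = {
    "guardrails": {
        "noma-guard": {
            "extra_body": {"application_id": "my-specific-app-id"}
        }
    }
}

# extra_body keys are merged at the top level of the JSON payload.
payload = {**base, **extra}
print(payload["guardrails"]["noma-guard"]["extra_body"]["application_id"])  # my-specific-app-id
```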

Response Details

When content is blocked, Noma provides detailed information about the violations as JSON inside the message field, with the following structure:

{
  "error": "Request blocked by Noma guardrail",
  "details": {
    "prompt": {
      "harmfulContent": {
        "result": true,
        "confidence": 0.95
      },
      "sensitiveData": {
        "email": {
          "result": true,
          "entities": ["user@example.com"]
        }
      },
      "bannedTopics": {
        "violence": {
          "result": true,
          "confidence": 0.88
        }
      }
    }
  }
}
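Checks like sensitiveData nest per-entity results one level deeper than checks like harmfulContent, so code that walks the structure has to handle both shapes. A sketch of flattening the verdict above into violation names for logging:

```python
# The "details" object from the blocked-request example above.
details = {
    "prompt": {
        "harmfulContent": {"result": True, "confidence": 0.95},
        "sensitiveData": {"email": {"result": True, "entities": ["user@example.com"]}},
        "bannedTopics": {"violence": {"result": True, "confidence": 0.88}},
    }
}

violations = []
for scope, checks in details.items():
    for name, payload in checks.items():
        if "result" in payload:
            # Flat check: result/confidence at the top level.
            if payload["result"]:
                violations.append(f"{scope}.{name}")
        else:
            # Nested check (e.g. sensitiveData): results keyed per entity type.
            for sub, subpayload in payload.items():
                if subpayload.get("result"):
                    violations.append(f"{scope}.{name}.{sub}")

print(violations)
# ['prompt.harmfulContent', 'prompt.sensitiveData.email', 'prompt.bannedTopics.violence']
```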