Qualifire
Use Qualifire to evaluate LLM outputs for quality, safety, and reliability. Detect prompt injections, hallucinations, PII, harmful content, and validate that your AI follows instructions.
Quick Start​
1. Define Guardrails on your LiteLLM config.yaml​
Define your guardrails under the guardrails section:
litellm config.yaml
model_list:
- model_name: gpt-3.5-turbo
litellm_params:
model: openai/gpt-3.5-turbo
api_key: os.environ/OPENAI_API_KEY
guardrails:
- guardrail_name: "qualifire-guard"
litellm_params:
guardrail: qualifire
mode: "during_call"
api_key: os.environ/QUALIFIRE_API_KEY
prompt_injections: true
- guardrail_name: "qualifire-pre-guard"
litellm_params:
guardrail: qualifire
mode: "pre_call"
api_key: os.environ/QUALIFIRE_API_KEY
prompt_injections: true
pii_check: true
- guardrail_name: "qualifire-post-guard"
litellm_params:
guardrail: qualifire
mode: "post_call"
api_key: os.environ/QUALIFIRE_API_KEY
hallucinations_check: true
grounding_check: true
- guardrail_name: "qualifire-monitor"
litellm_params:
guardrail: qualifire
mode: "pre_call"
on_flagged: "monitor" # Log violations but don't block
api_key: os.environ/QUALIFIRE_API_KEY
prompt_injections: true
Supported values for mode​
pre_callRun before LLM call, on inputpost_callRun after LLM call, on input & outputduring_callRun during LLM call, on input. Same aspre_callbut runs in parallel as LLM call. Response not returned until guardrail check completes