Scanner-Based Architecture
LLM Guard takes a different approach from NeMo: instead of a flow language, it provides a library of composable scanners that you chain into a pipeline. Each scanner checks for one specific threat. Input scanners protect prompts; output scanners protect responses. Install with pip install llm-guard. Requires Python ≥3.9.
Available Scanners
Input: Anonymize, BanCode, BanSubstrings, BanTopics, Gibberish, InvisibleText, Language, PromptInjection, Regex, Secrets
Output: Deanonymize, NoRefusal, Relevance, Sensitive, Bias, Regex
# LLM Guard — input scanner pipeline
from llm_guard import scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, Secrets
from llm_guard.vault import Vault

vault = Vault()  # stores PII placeholders so they can be deanonymized later
input_scanners = [
    PromptInjection(),
    Secrets(),
    Anonymize(vault),
]

prompt = "Summarize this email for me: ..."
sanitized_prompt, results_valid, results_score = scan_prompt(
    input_scanners, prompt
)
if not all(results_valid.values()):
    # at least one scanner flagged the prompt; results_score has per-scanner risk
    raise ValueError("Blocked")
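The same pattern applies on the way out: scan_output takes the prompt and the model's response and returns the same (sanitized, valid-flags, scores) triple. A minimal sketch, wrapped in a helper of our own (the function name and None-on-block convention are our choices, not part of llm_guard):

# Sketch: gating a model response with LLM Guard output scanners
def all_valid(results_valid):
    """True only if every scanner in the pipeline passed."""
    return all(results_valid.values())

def moderate_response(prompt, response):
    # Imported here to keep the sketch self-contained; in practice,
    # build the scanner list once at startup, not per request.
    from llm_guard import scan_output
    from llm_guard.output_scanners import NoRefusal, Sensitive

    output_scanners = [NoRefusal(), Sensitive()]
    sanitized, results_valid, results_score = scan_output(
        output_scanners, prompt, response
    )
    if not all_valid(results_valid):
        return None  # block the response
    return sanitized

Note that output scanners need the prompt as well as the response — Relevance, for instance, scores the response against the prompt it answers.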
Strength: Modular, open-source (MIT), easy to integrate. Each scanner is independent — add or remove as needed. Tradeoff: Scanner quality varies; prompt injection detection relies on classifier models that can be bypassed by novel attacks.
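The add-or-remove modularity falls out of scanners being plain objects in a list. One way to exploit that is a registry keyed by feature flags — a sketch where the registry and flag names are our own convention (only the scanner classes and the Vault come from llm_guard):

# Sketch: assembling an input pipeline from feature flags
def build_pipeline(registry, flags):
    """Instantiate only the scanners whose flag is truthy."""
    return [make() for name, make in registry.items() if flags.get(name)]

def default_registry():
    # Lazy imports so pipelines without these scanners skip their model loads
    from llm_guard.input_scanners import Anonymize, PromptInjection, Secrets
    from llm_guard.vault import Vault

    vault = Vault()  # Anonymize needs a vault for later deanonymization
    return {
        "prompt_injection": PromptInjection,
        "secrets": Secrets,
        "anonymize": lambda: Anonymize(vault),
    }

# Usage: scan_prompt(build_pipeline(default_registry(),
#                    {"prompt_injection": True, "secrets": True}), prompt)

Dropping a scanner is then a config change rather than a code change, which matters because each enabled scanner adds latency (several load their own classifier models).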