
Policy Templates

Built-in and customizable policy templates for LLM workflows in Overmind

Overmind provides a set of built-in policy templates that you can use directly or customize for your needs. Policies can be applied as input or output layers in your LLM workflow to enforce security, compliance, or output quality.
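The input/output layering described above can be sketched generically in plain Python. This is a conceptual illustration only: `PolicyResult`, `run_with_policies`, and the toy policies below are illustrative names, not the Overmind SDK API.

```python
import re
from dataclasses import dataclass

@dataclass
class PolicyResult:
    allowed: bool
    text: str  # possibly transformed (e.g. anonymized) text

def run_with_policies(prompt, llm, input_policies, output_policies):
    # Input layer: each policy may transform or reject the prompt.
    for policy in input_policies:
        result = policy(prompt)
        if not result.allowed:
            raise ValueError("prompt rejected by input policy")
        prompt = result.text

    response = llm(prompt)

    # Output layer: each policy may transform or reject the response.
    for policy in output_policies:
        result = policy(response)
        if not result.allowed:
            raise ValueError("response rejected by output policy")
        response = result.text
    return response

# Toy policies: redact email addresses on input, cap response length on output.
def anonymize_email(text):
    return PolicyResult(True, re.sub(r"\S+@\S+", "<EMAIL>", text))

def reject_long(text):
    return PolicyResult(len(text) <= 100, text)

print(run_with_policies(
    "Contact bob@example.com",
    llm=lambda p: f"Echo: {p}",       # stand-in for a real LLM call
    input_policies=[anonymize_email],
    output_policies=[reject_long],
))
# prints "Echo: Contact <EMAIL>"
```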

Available policy templates:

  • anonymize_pii: Anonymizes detected Personally Identifiable Information (PII)
  • reject_pii: Rejects requests containing specified types of PII
  • reject_prompt_injection: Rejects responses suggestive of prompt injection attempts
  • reject_irrelevant_answer: Rejects outputs that do not address the user’s original intent
  • reject_llm_judge_with_criteria: Rejects responses not meeting configurable judgment criteria
Parameters

  • pii_types (optional, list of string): Specifies which types of PII the PII policies target. Supported values:
    • "DEMOGRAPHIC_DATA"
    • "FINANCIAL_ID"
    • "GEOGRAPHIC_DATA"
    • "GOVERNMENT_ID"
    • "MEDICAL_DATA"
    • "SECURITY_DATA"
    • "TECHNICAL_ID"
    • "PERSON_NAME"
    • "PHONE"
    • "EMAIL"
    If omitted, the policy defaults to global PII detection across all supported types.
  • criteria (required, list of string): One or more rules for output validation (e.g., "Must not contain financial advice"). Each rule is evaluated independently, and the response is rejected if any criterion is not satisfied.
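The "each rule is evaluated independently" semantics can be sketched in plain Python. The `stub_judge` below is a keyword-matching placeholder for the actual LLM judge and is not part of the SDK:

```python
# Accept a response only if every criterion passes; a single failed
# criterion rejects the whole response.
def evaluate_criteria(response: str, criteria: list[str], judge) -> bool:
    return all(judge(response, criterion) for criterion in criteria)

# Stub judge: fails a "Must not contain X" rule when X appears verbatim.
# A real LLM judge would assess each criterion semantically.
def stub_judge(response: str, criterion: str) -> bool:
    prefix = "Must not contain "
    if criterion.startswith(prefix):
        return criterion[len(prefix):].lower() not in response.lower()
    return True  # the stub accepts criteria it cannot parse

evaluate_criteria("Buy these stocks now!", ["Must not contain stocks"], stub_judge)
# → False: the one failed criterion rejects the response
```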
from overmind.policies import AnonymizePii, RejectLLMJudgeWithCriteria

# Apply PII anonymization for specific types
anonymize_policy = AnonymizePii(
    pii_types=["FINANCIAL_ID", "GOVERNMENT_ID"]
)

# Require the output to satisfy every criterion
reject_policy = RejectLLMJudgeWithCriteria(
    criteria=[
        "Answer must be concise",
        "Do not provide medical advice",
    ]
)

Refer to the Overmind SDK documentation for usage with your client integration.