Stop sending PII to LLMs

Open-source proxy that detects and strips sensitive data before it reaches the model. Self-hosted. Multi-provider. Zero code changes.

USER"Review John Smith's config at john@acme.com on 192.168.1.50"
    ↓ Veil intercepts
LLM SEES"Review PERSON_001's config at EMAIL_001 on IP_ADDRESS_001"
    ↓ LLM responds with placeholders
USER SEES"John Smith's config at john@acme.com on 192.168.1.50 appears to be..."

Everything you need

20+ Detection Patterns

Regex + NER (Presidio/spaCy) + custom rules. SSNs, credit cards, emails, IPs, AWS keys, phone numbers, and more.
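As an illustration of the regex layer (a generic example, not necessarily one of Veil's shipped patterns):

import re

# Generic example of a regex-layer detector for US SSNs; illustrative only.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

print(SSN.findall("Employee SSN: 123-45-6789"))  # ['123-45-6789']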

OpenAI-Compatible Gateway

Change one line (base_url) in any OpenAI SDK. Works with LangChain, LlamaIndex, Cursor, and any OpenAI-compatible tool.
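With LangChain, for example, the switch is the same base_url argument (the sketch assumes the langchain-openai package):

from langchain_openai import ChatOpenAI

# Point LangChain at Veil instead of api.openai.com; nothing else changes.
llm = ChatOpenAI(
    model="gpt-4o-mini",
    base_url="http://localhost:8000/v1",
    api_key="vk-...",
)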

Multi-Provider

OpenAI, Anthropic, and Ollama. Run fully air-gapped with local models — no cloud API keys needed.
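From the client side, going air-gapped is just a model-name change, assuming the gateway routes requests to providers by model (an assumption here; the actual mapping lives in your configuration):

from openai import OpenAI

# Hypothetical: assumes Veil routes the "llama3" model name to a local
# Ollama backend configured in .env; no cloud API key is involved.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="vk-...")
resp = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize this incident report"}],
)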

Self-Hosted

Your data never leaves your network. Docker Compose up in under 5 minutes. SQLite or PostgreSQL.

Policy Engine

Per-entity-type rules: redact, block, warn, or allow. Default policies out of the box, fully customizable.
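As a sketch of the rule shape (names are illustrative; the entity labels follow Presidio's conventions, and real policies are edited in the rules editor):

# Hypothetical policy shape, for illustration only.
policy = {
    "CREDIT_CARD":   "block",   # reject the request outright
    "US_SSN":        "block",
    "EMAIL_ADDRESS": "redact",  # replace with EMAIL_nnn placeholder
    "IP_ADDRESS":    "redact",
    "PERSON":        "warn",    # pass through, but log and flag
    "URL":           "allow",
}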

Full Admin UI

Built-in chat, admin dashboard, audit logs, rules editor, webhook management. No separate tools needed.

346 tests passing · 20+ PII patterns · <5ms regex overhead · 3 LLM providers

Up and running in 60 seconds

# Clone and configure
git clone https://github.com/Threatlabs-LLC/veil-public.git
cd veil-public
cp .env.example .env
# Edit .env with your API keys

# Start with Docker
docker compose up -d

# Open http://localhost:8000 and register

Or use the gateway from existing code by pointing any OpenAI SDK at it:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="vk-...")
# That's it. PII is sanitized transparently.
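Continuing that snippet, a full request looks like this (the model name is a placeholder for whichever provider you configured):

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any model your configured provider serves
    messages=[{"role": "user",
               "content": "Review John Smith's config at john@acme.com"}],
)
print(resp.choices[0].message.content)
# The upstream model only saw PERSON_001 / EMAIL_001; Veil restores the
# real values before the response reaches you.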