Enterprise proxy that detects and strips sensitive data before it reaches any LLM. Self-host for free or use Veil Cloud. Multi-provider. Zero code changes.
Block direct access to LLM APIs across your network. Route all AI traffic through Veil — every user authenticated, every message sanitized, every interaction logged. One gateway for all providers.
Regex + NER (Presidio/spaCy) + custom rules. SSNs, credit cards, emails, IPs, AWS keys, Azure/GCP secrets, log-file PII, and more.
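To illustrate the regex layer, a minimal scanner with a few well-known patterns might look like the sketch below. The entity names and patterns are generic examples for illustration, not Veil's actual rule set.

```python
import re

# Illustrative detector patterns -- NOT Veil's actual rules.
DETECTORS = {
    "AWS_ACCESS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scan(text):
    """Return (entity_type, matched_text) pairs found in a message."""
    hits = []
    for entity, pattern in DETECTORS.items():
        hits += [(entity, m.group(0)) for m in pattern.finditer(text)]
    return hits

hits = scan("key AKIAABCDEFGHIJKLMNOP from 10.0.0.5")
```

In the real pipeline, regex hits like these are combined with NER results (Presidio/spaCy) and any custom rules before a policy decision is made.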
Change one line (base_url) in any OpenAI SDK. Works with LangChain, LlamaIndex, Cursor, and any OpenAI-compatible tool.
OpenAI, Anthropic, and Ollama. Run fully air-gapped with local models — no cloud API keys needed.
Your data never leaves your network. Docker Compose up in under 5 minutes. SQLite or PostgreSQL.
Per-entity-type rules: redact, block, warn, or allow. Default policies out of the box, fully customizable.
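A per-entity policy table could be modeled like the sketch below. The entity names and the redact-by-default choice are assumptions for illustration, not Veil's actual schema.

```python
# Hypothetical per-entity-type policy map (illustrative, not Veil's schema).
POLICY = {
    "CREDIT_CARD": "block",   # reject the whole message
    "SSN": "redact",          # replace with a placeholder
    "IP_ADDRESS": "warn",     # allow, but flag in the audit log
    "PERSON_NAME": "allow",   # pass through untouched
}

def decide(entity_type):
    """Look up the action for a detected entity; default to redaction."""
    return POLICY.get(entity_type, "redact")
```

Defaulting unknown entity types to redaction is the conservative choice: new detectors fail safe until an admin assigns them an explicit rule.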
Upload PDF, DOCX, TXT, CSV, or XLSX files. Scan for PII or attach to chat. Files processed in-memory, never stored.
Built-in chat, admin dashboard, audit logs, rules editor, webhook management. No separate tools needed.
Sanitized prompts work with OpenAI image generation (DALL-E via Responses API). Text and images stream together.
Detect PII in server logs, CI/CD output, and cloud audit trails. Azure, AWS, GCP, Kubernetes, and Docker patterns built-in.
76 regex patterns + NER + custom rules scan every message for sensitive data.
PII is replaced with reversible placeholders. Original data stays in your secure session.
Clean text goes to the LLM. The response is rehydrated with original data before you see it.
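The three-step flow above can be sketched as a redact/rehydrate round trip. The placeholder format and the two patterns here are illustrative assumptions, not Veil's internals.

```python
import re

# Illustrative patterns only -- the real pipeline uses 76 patterns plus NER.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    """Replace detected PII with reversible placeholders; keep a session map."""
    session = {}
    for entity, pattern in PATTERNS.items():
        def sub(match, entity=entity):
            key = f"<{entity}_{len(session)}>"
            session[key] = match.group(0)  # original value never leaves here
            return key
        text = pattern.sub(sub, text)
    return text, session

def rehydrate(text, session):
    """Restore original values into the LLM's response."""
    for key, original in session.items():
        text = text.replace(key, original)
    return text

clean, session = redact("Contact jane@corp.com, SSN 123-45-6789.")
restored = rehydrate(clean, session)  # placeholders swapped back
```

Only `clean` is sent upstream; the `session` map stays on the gateway, which is what makes the redaction reversible without the LLM ever seeing the original values.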
Self-host for free forever, or let us run it for you.
Deploy in minutes. No code changes required.