Cloak your prompts. Prove your compliance.
Open-source PII protection middleware for LLMs. Detect sensitive data, replace it with reversible tokens, and maintain tamper-evident audit logs — all before your prompts leave your infrastructure.
▶ Watch interactive demo on asciinema
| SDK | Version | Install | Docs |
|---|---|---|---|
| CloakLLM-PY | 0.1.8 | `pip install cloakllm` | Python README |
| CloakLLM-JS | 0.1.8 | `npm install cloakllm` | JS/TS README |
| CloakLLM-MCP | 0.1.8 | `python -m mcp run server.py` | MCP README |
- PII Detection — emails, SSNs, credit cards, phone numbers, API keys, and more
- LLM-Powered Detection — opt-in local Ollama integration catches context-dependent PII that regex misses (addresses, medical terms)
- Reversible Tokenization — deterministic `[CATEGORY_N]` tokens that preserve context for the LLM
- Redaction Mode — irreversible `[CATEGORY_REDACTED]` replacement for GDPR right-to-erasure
- Tamper-Evident Audit Logs — hash-chained entries for EU AI Act Article 12 compliance
- Middleware Integration — drop-in support for LiteLLM and OpenAI SDK (Python) and OpenAI/Vercel AI SDK (JS)
- MCP Server — use CloakLLM directly from Claude Desktop, Cursor, or any MCP-compatible client
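The tamper-evident audit log listed above rests on a simple idea: each entry's hash commits to the previous entry's hash, so editing any earlier entry breaks every hash that follows. A minimal illustrative sketch of that hash-chain pattern (not CloakLLM's actual log format):

```python
import hashlib
import json

def append_entry(log, event):
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def verify_chain(log):
    """Recompute every hash in order; any tampered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "sanitized prompt #1")
append_entry(log, "sanitized prompt #2")
assert verify_chain(log)

log[0]["event"] = "edited"   # tamper with an earlier entry
assert not verify_chain(log)  # the chain no longer verifies
```

Because verification only needs the entries themselves, an auditor can confirm log integrity without trusting the machine that wrote it.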
```bash
pip install cloakllm
```

```python
# Option A: OpenAI SDK
from cloakllm import enable_openai
from openai import OpenAI

client = OpenAI()
enable_openai(client)  # Wraps OpenAI SDK — all calls are now protected

# Option B: LiteLLM
import cloakllm

cloakllm.enable()  # Wraps LiteLLM — all calls are now protected
```

```bash
npm install cloakllm
```

```javascript
const cloakllm = require('cloakllm');

cloakllm.enable(openaiClient); // Wraps OpenAI SDK
```

Add to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "cloakllm": {
      "command": "python",
      "args": ["-m", "mcp", "run", "server.py"],
      "cwd": "/path/to/cloakllm-mcp"
    }
  }
}
```

This exposes three tools to Claude: `sanitize`, `desanitize`, and `analyze`.
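The sanitize/desanitize pair works as a round trip: detected values are replaced with deterministic placeholder tokens before a prompt leaves your machine, and the locally held mapping restores them in the response. A minimal sketch of that round trip for email addresses only (illustrative regex and function names, not CloakLLM's internals):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize(text, mapping):
    """Replace each email with a deterministic [EMAIL_N] token."""
    def repl(match):
        value = match.group(0)
        if value not in mapping:          # same value -> same token, every time
            mapping[value] = f"[EMAIL_{len(mapping) + 1}]"
        return mapping[value]
    return EMAIL_RE.sub(repl, text)

def desanitize(text, mapping):
    """Reverse the substitution using the locally stored mapping."""
    for value, token in mapping.items():
        text = text.replace(token, value)
    return text

mapping = {}
clean = sanitize("Contact alice@example.com about bob@example.com", mapping)
print(clean)  # Contact [EMAIL_1] about [EMAIL_2]

restored = desanitize(clean, mapping)
print(restored)  # Contact alice@example.com about bob@example.com
```

Since the mapping never leaves your infrastructure, the upstream LLM only ever sees the tokens, while your application still receives fully restored text.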
MIT
