The Prompt Management System built for automation engineers and small Scrum teams. Compose modular prompts with [[ injections ]] and {{ n8n_syntax }} variables, test with 8 validators, and deploy instantly via API.
Copy-pasting 50-line prompts into N8N nodes makes your workflows unreadable. Your 8-person team can't debug what they can't read.
Notion is great for docs, but it can't render variables or test outputs against OpenAI. You're flying blind until production breaks.
"Which prompt broke the bot?" Stop guessing in Slack threads. Track changes and roll back without touching your production workflow.
Define your [[System_Instructions]] once. Inject them into 100 different prompts. Update one, update all. The killer feature that makes us "Obsidian for Prompts."
You are a [[Persona_Sales]] assistant.

[[System_Instructions]]

Customer: {{ $json.customer_name }}
Query: {{ $json.message }}
The same editor powering VS Code. Token counter for cost awareness (1.2k format), Command Palette (Cmd+Shift+P), Find & Replace, intelligent autocomplete, and real-time validation for [[injections]] and {{variables}}.
Run prompts against GPT-4, Claude, or Gemini instantly. Eight validator types: Contains, NotContains, Regex, Length, ExactMatch, Javascript, JsonSchema, and LLM Judge. Batch-execute your entire test suite.
We support {{ $json.body }}, {{ $node.previous.json }}, and standard N8N variable syntax out of the box. No more regex find-and-replace when moving from editor to production.
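As a sketch of what this substitution looks like in practice, here is a minimal resolver for N8N-style {{ }} variables. The dotted-path walk is an illustrative assumption, not LLMx's actual implementation; only the {{ $json.* }} syntax comes from the product.

```javascript
// Resolve {{ $json.customer_name }}-style placeholders against a payload.
// Unresolvable paths are left untouched so problems stay visible.
function renderPrompt(template, context) {
  return template.replace(/\{\{\s*([^}]+?)\s*\}\}/g, (match, expr) => {
    // Walk dotted paths like "$json.customer_name"
    const value = expr.split('.').reduce(
      (obj, key) => (obj == null ? undefined : obj[key]), context);
    return value === undefined ? match : String(value);
  });
}

const prompt = renderPrompt(
  'Customer: {{ $json.customer_name }}\nQuery: {{ $json.message }}',
  { $json: { customer_name: 'Ada', message: 'Where is my invoice?' } });
// prompt === 'Customer: Ada\nQuery: Where is my invoice?'
```

In an N8N workflow the `$json` object would be the incoming item's data; here it is stubbed with sample values.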
Bring Your Own Key model. Your API keys are encrypted with Fernet and stored on our backend—never in the browser. Keys are decrypted only at runtime. Your prompts are your IP. We process them but do not own them. GDPR compliant, hosted in Europe.
Write your master prompt using Monaco Editor. Use [[ ]] to inject reusable blocks. Drag & drop prompts from the sidebar.
Create test cases with 8 validator types. Batch execute for regression testing. See cost and latency per test.
Get a stable API endpoint. Plug it into your N8N HTTP Request node. Roll back with one click if needed.
Built-in quality checks ensure your prompts work every time.
4 role levels. Real-time locking. Slack notifications.
Immutable snapshots with content hashing. Non-destructive rollback.
No vendor lock-in. Export anytime. Your prompts are your IP.
Export your entire organization to JSON or ZIP. Prompts, metadata, test cases—everything.
Import with conflict resolution: Skip existing, Overwrite, or Merge metadata.
Data stored in Europe. Your prompts are your intellectual property. We process, we don't own.
Join automation engineers who ship prompts with confidence. No credit card required during Developer Preview.
We offer 8 built-in validators: Contains, NotContains, ExactMatch, Regex (with i/m/s flags), Length (min/max), Javascript (custom expressions), JsonSchema (for structured outputs), and LLMJudge (AI-powered quality scoring). Static validators run instantly; LLMJudge runs asynchronously.
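As a rough illustration of the static validators, the checks behave roughly like the sketch below. Only the validator names come from the product; the function bodies are assumptions for illustration.

```javascript
// Illustrative semantics for a few static validators.
// Each takes the model output and a config object, returning pass/fail.
const validators = {
  Contains:    (output, { needle }) => output.includes(needle),
  NotContains: (output, { needle }) => !output.includes(needle),
  ExactMatch:  (output, { expected }) => output === expected,
  Regex:       (output, { pattern, flags = '' }) =>
                 new RegExp(pattern, flags).test(output),
  Length:      (output, { min = 0, max = Infinity }) =>
                 output.length >= min && output.length <= max,
};

const reply = 'Hello Ada, your invoice ships today.';
validators.Contains(reply, { needle: 'invoice' });          // true
validators.Regex(reply, { pattern: '^hello', flags: 'i' }); // true
validators.Length(reply, { min: 10, max: 200 });            // true
```

Javascript, JsonSchema, and LLMJudge are omitted here; the first two evaluate user-supplied expressions and schemas, and LLMJudge makes an asynchronous LLM call rather than a pure check.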
Git is great for code, but it doesn't let you run and visualize prompt outputs. LLMx combines the versioning of Git with the playground of OpenAI. Think "Obsidian for Prompts."
Yes, but encrypted. We use Fernet encryption to store your API keys securely on our backend. Keys are never stored in the browser and are decrypted only at runtime when making LLM calls. We don't resell or mark up your token costs.
Always. One-click JSON or ZIP export with your entire prompt library. Import with conflict resolution (Skip/Overwrite/Merge). We don't believe in vendor lock-in—your prompts are your IP.
Yes! Early Access is completely free during Developer Preview. No credit card required. Create unlimited prompts, use all 8 validators, collaborate with your team.
Yes. We support multi-organization management with 4 role levels: Owner (full access), Admin (manage members), Editor (create prompts), and Viewer (read-only). Real-time document locking prevents conflicts.
We support OpenAI (GPT-4, GPT-3.5), Anthropic (Claude), Google (Gemini), Azure OpenAI, and local models. Add your API keys once and switch between providers instantly in the Testing Lab.
Use [[folder/prompt-name]] to compose prompts from reusable blocks. You can even pass variable overrides: [[persona | tone=friendly]]. We detect circular dependencies and enforce a max depth of 5 levels.
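A minimal sketch of how a resolver could enforce those two limits, assuming prompts live in a flat name-to-body map. The library shape and error messages are assumptions; variable overrides ([[persona | tone=friendly]]) are parsed past but not applied.

```javascript
// Resolve [[ ... ]] injections with cycle detection and a max depth of 5.
const MAX_DEPTH = 5;

function resolveInjections(name, library, depth = 0, seen = new Set()) {
  if (depth > MAX_DEPTH) throw new Error(`Max injection depth (${MAX_DEPTH}) exceeded`);
  if (seen.has(name)) throw new Error(`Circular injection: ${name}`);
  seen.add(name);
  // Match [[name]] and [[name | overrides]]; overrides are ignored here.
  return library[name].replace(/\[\[\s*([^\]|]+?)\s*(?:\|[^\]]*)?\]\]/g,
    (_, ref) => resolveInjections(ref, library, depth + 1, new Set(seen)));
}

const library = {
  'sales-prompt': 'You are a [[persona]] assistant.\n[[System_Instructions]]',
  'persona': 'friendly sales',
  'System_Instructions': 'Always answer in one paragraph.',
};
resolveInjections('sales-prompt', library);
// → 'You are a friendly sales assistant.\nAlways answer in one paragraph.'
```

Copying `seen` per branch means the cycle check follows each resolution path independently, so the same block can legitimately appear in two sibling branches.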
Every save creates a version you can restore. When you deploy, we snapshot the entire prompt tree (including all injections) into a single immutable version with content hashing. Roll back to any deployment with one click—non-destructive rollback creates a new deployment.
Yes. Each deployment gets a stable REST endpoint. Create scoped API keys per organization in Settings. Perfect for N8N HTTP Request nodes or any integration.
Injections ([[...]]) pull in other prompts as reusable blocks—your DRY principle for prompts. Variables ({{...}}) are dynamic values filled at runtime—we support N8N syntax with JS expressions, nested properties, and built-in helpers like $now and $uppercase().
Yes. Your prompts are your IP. API keys are Fernet-encrypted and stored on our backend. We are GDPR compliant and host in Europe. You can export all your data anytime.
Join automation engineers who ship prompts with confidence. Obsidian for Prompts.
No credit card required · Free during Developer Preview · GDPR Compliant