Tuesday, March 17, 2026
Show HN: FireClaw – Open-source proxy defending AI agents from prompt injection https://ift.tt/8W1i5FV
Hey HN,

We built FireClaw because we kept watching AI agents get owned by prompt injection through web content. The agent fetches a page, the page says "ignore previous instructions," and suddenly your agent is leaking data or running commands it shouldn't. Existing solutions detect injection after the fact; we wanted to prevent it.

FireClaw is a security proxy that sits between your AI agent and the web. Every fetch passes through a 4-stage pipeline:

1. DNS blocklist check (URLhaus, PhishTank, community feed)
2. Structural sanitization (strip hidden CSS, zero-width Unicode, encoding tricks)
3. Isolated LLM summarization (hardened sub-process with no tools or memory)
4. Output scanning with canary tokens (detect if content bypassed summarization)

The key insight: even if Stage 3's LLM gets injected, it has no tools, no memory, and no access to your data. It can only return text, which still gets scanned in Stage 4. The attacker hits a dead end.

Other design decisions:

- No bypass mode. The pipeline is fixed. If your agent gets compromised, it can't disable FireClaw.
- Community threat feed: instances anonymously share detection metadata (domain, severity, detection count) to build a shared blocklist. No page content is ever sent.
- Runs on a Raspberry Pi as a physical appliance with an OLED display that shows real-time stats and lights up with animated flames when it catches a threat.

We searched the literature and open source extensively; no one else is doing proxy-based defense for agent prompt injection. Detection exists, sandboxing exists, but an inline proxy that sanitizes content before it reaches the agent's context? We couldn't find it.

200+ detection patterns, JSONL audit logging, domain trust tiers, rate limiting, and cost controls. AGPLv3 licensed.

Website: https://fireclaw.app

Would love feedback from anyone working on AI agent security. What are we missing?
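To make Stage 2 concrete, here's a minimal sketch of the kind of structural sanitization described above (this is illustrative, not FireClaw's actual code): normalize encoding tricks with NFKC, then strip zero-width code points that attackers use to hide instructions from human reviewers while keeping them legible to an LLM.

```python
import re
import unicodedata

# Zero-width / invisible code points commonly abused to smuggle text:
# ZWSP, ZWNJ, ZWJ, WORD JOINER, BOM. Mapping each to None deletes it
# when passed to str.translate.
ZERO_WIDTH = dict.fromkeys([0x200B, 0x200C, 0x200D, 0x2060, 0xFEFF])

def sanitize(text: str) -> str:
    """Strip invisible characters and collapse suspicious whitespace."""
    # NFKC folds many homoglyph/encoding tricks into plain forms.
    text = unicodedata.normalize("NFKC", text)
    # Delete zero-width code points outright.
    text = text.translate(ZERO_WIDTH)
    # Collapse long whitespace runs sometimes used to push payloads
    # out of a human reader's view.
    return re.sub(r"[ \t]{4,}", " ", text)

print(sanitize("ignore\u200b previous\u200c instructions"))
```

A real implementation would also need an HTML-aware pass to drop content hidden with CSS (`display:none`, off-screen positioning), which plain string filtering can't see.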
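And a sketch of the Stage 4 canary idea, as we understand it from the pipeline description (again illustrative, with hypothetical helper names): a random marker is attached to the material sent to the isolated summarizer, and if that marker survives into the output, the content was echoed through rather than genuinely summarized, so the response is blocked.

```python
import secrets

def make_canary() -> str:
    # Random marker appended to the text given to the isolated
    # summarizer; a faithful summary should never reproduce it.
    return f"CANARY-{secrets.token_hex(8)}"

def output_leaked(summary: str, canary: str) -> bool:
    # Canary present in the output means the page content bypassed
    # summarization verbatim: treat as a bypass attempt and block.
    return canary in summary

canary = make_canary()
print(output_leaked("A short, clean summary of the page.", canary))  # False
print(output_leaked(f"...{canary}...", canary))  # True
```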
What attack vectors should we add to the pattern database? https://ift.tt/sLEPgdc

March 17, 2026 at 11:28PM