Thursday, April 2, 2026

Wednesday, April 1, 2026

Show HN: Zerobox – Sandbox any command with file and network restrictions https://ift.tt/TBY9NvE

Show HN: Zerobox – Sandbox any command with file and network restrictions

I'm excited to introduce Zerobox, a cross-platform, single-binary process-sandboxing CLI written in Rust. It uses the sandboxing crates from the OpenAI Codex repo and adds functionality such as secret injection and an SDK. Watch the demo: https://www.youtube.com/watch?v=wZiPm9BOPCg

Zerobox follows the same deny-by-default sandboxing policy as Deno. The only operation the command can perform is reading files; all writes and network I/O are blocked by default. No VMs, no Docker, no remote servers. Want to block reads to /etc?

zerobox --deny-read=/etc -- cat /etc/passwd
cat: /etc/passwd: Operation not permitted

How it works: Zerobox wraps any command or program, runs an MITM proxy, and uses the native sandboxing solution on each operating system (e.g. Bubblewrap on Linux) to run the given process in a sandbox. The MITM proxy has two jobs: blocking network calls and injecting credentials at the network level. Think of it this way: I want to inject "Bearer OPENAI_API_KEY" but I don't want my sandboxed command to know the key. Zerobox does that by replacing "OPENAI_API_KEY" with a placeholder, then swapping the real value back in when the actual outbound network call is made. For example:

zerobox --secret OPENAI_API_KEY=$OPENAI_API_KEY --secret-host OPENAI_API_KEY=api.openai.com -- bun agent.ts

Zerobox differs from other sandboxing solutions in that it lets you easily sandbox any command locally, and it works the same on all platforms. I've been exploring different sandboxing approaches, including Firecracker VMs locally, and this is the closest I've gotten to sandboxing commands locally. The next thing I'm exploring is `zerobox claude` or `zerobox openclaw`, which would wrap the entire agent and preload the correct policy profiles.

I'd love to hear your feedback, especially if you are running AI agents (e.g. OpenClaw), MCPs, or AI tools locally.
https://ift.tt/tUSBVAc March 30, 2026 at 09:32PM
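The placeholder-swap idea above can be sketched in a few lines. This is a minimal illustration of the concept, not Zerobox's actual Rust implementation; the placeholder token, secret value, and function names are hypothetical. The key property is that the sandboxed process only ever sees the placeholder, and the proxy substitutes the real secret only when the outbound request targets the allowed host.

```python
# Hypothetical sketch of placeholder-based secret injection at an MITM proxy.
# The sandboxed command holds only PLACEHOLDER; the proxy holds the real secret.

PLACEHOLDER = "ZB_PLACEHOLDER_OPENAI_API_KEY"  # hypothetical token name
SECRETS = {PLACEHOLDER: ("sk-real-key", "api.openai.com")}  # (value, allowed host)

def rewrite_outbound(host: str, headers: dict) -> dict:
    """Replace placeholder tokens with real secrets, but only for the allowed host."""
    out = {}
    for name, value in headers.items():
        for token, (secret, allowed_host) in SECRETS.items():
            if token in value:
                # Leak prevention: substitute only when the destination matches;
                # otherwise strip the token so the secret never leaves for other hosts.
                value = value.replace(token, secret if host == allowed_host else "")
        out[name] = value
    return out
```

With this scheme, a request to api.openai.com gets the real bearer token, while the same header sent to any other host is scrubbed.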

Show HN: Aphelo – A Redis-like store in C++ with Progressive Rehashing https://ift.tt/VDy37HR

Show HN: Aphelo – A Redis-like store in C++ with Progressive Rehashing https://ift.tt/6J8Yu4x April 1, 2026 at 11:33PM

Show HN: Real-time dashboard for Claude Code agent teams https://ift.tt/Mrd6jJh

Show HN: Real-time dashboard for Claude Code agent teams

This project (Agents Observe) started as an exploration into building automation harnesses around Claude Code. I needed a way to see exactly what teams of agents were doing in real time and to filter and search their output.

A few interesting learnings from building and using this:

- Claude Code hooks are blocking: performance degrades rapidly if you have a lot of plugins that use hooks
- Hooks provide a lot more useful info than OTEL data
- Claude's JSONL files provide the full picture
- Lifecycle management of MCP processes started by plugins is kludgy at best

The biggest takeaway is how much of a difference it made to Claude's performance when I switched to background (fire-and-forget) hooks and removed all other plugins. It's easy to forget how many Claude plugins I've installed and how they affect performance.

The Agents Observe plugin uses Docker to start the API and dashboard service. This is a pattern I'd love to see used more often for security reasons (think Axios hack). The tricky bit was handling process management across multiple Claude instances; the solution was to have the server track active connections, then auto-shut itself down when not in use. The plugin spins it back up when a new session is started.

This tool has been incredibly useful for my own daily workflow. Enjoy! https://ift.tt/g8Ap5FE April 1, 2026 at 11:24PM
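The blocking-vs-background distinction above is easy to demonstrate. This is a generic sketch of the fire-and-forget pattern, not the Agents Observe plugin's actual hook code; the function names and the sleeping child process are illustrative. A blocking hook makes the caller wait for the exit code; a background hook starts the process and returns immediately, with results reported out-of-band.

```python
import subprocess
import sys
import time

def run_hook_blocking(cmd):
    # Blocking hook: the caller waits for the hook's exit code before continuing.
    return subprocess.run(cmd).returncode

def run_hook_background(cmd):
    # Fire-and-forget hook: start the process and return immediately;
    # the hook reports its results elsewhere (e.g. POSTs to a local API).
    return subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# A hook that takes a full second to run.
slow_hook = [sys.executable, "-c", "import time; time.sleep(1)"]

start = time.monotonic()
proc = run_hook_background(slow_hook)
elapsed = time.monotonic() - start  # near zero: the caller did not wait
```

With many plugins, those one-second waits stack up on every hook event in the blocking case, which matches the performance degradation described above.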

Show HN: Max Headbox, a local agent that fits on a Raspberry Pi 5 https://ift.tt/9tjBhle

Show HN: Max Headbox, a local agent that fits on a Raspberry Pi 5 https://ift.tt/62WoTOA April 1, 2026 at 09:57PM

Tuesday, March 31, 2026

Monday, March 30, 2026

Show HN: Memv – Memory for AI Agents https://ift.tt/np2Dmso

Show HN: Memv – Memory for AI Agents

memv is an open-source Python library that gives AI agents persistent memory. Feed it conversations; it extracts knowledge. The extraction mechanism is predict-calibrate (Nemori paper): given existing knowledge, it predicts what a new conversation should contain, then extracts only what the prediction missed.

v0.1.2 adds the production path:

- PostgreSQL backend (pgvector for vectors, tsvector for text search, asyncpg pooling). Single db_url parameter: file path for SQLite, connection string for Postgres.
- Embedding adapters: OpenAI, Voyage, Cohere, fastembed (local ONNX).

Other things it does:

- Bi-temporal validity: event time (when was the fact true) + transaction time (when did we learn it), following Graphiti's model.
- Hybrid retrieval: vector similarity + BM25 merged with Reciprocal Rank Fusion.
- Episode segmentation: groups messages before extraction.
- Contradiction handling: new facts invalidate old ones, with full audit trail.

Procedural memory (agents learning from past runs) is next, deferred until there's usage data. https://ift.tt/uYHRTsa March 31, 2026 at 12:09AM
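Reciprocal Rank Fusion, mentioned under hybrid retrieval, is simple enough to show in full. This is a generic RRF sketch, not memv's internal code; the document IDs are made up. Each ranked list contributes 1/(k + rank) per document, and the summed scores decide the merged order, so documents ranked well by both vector search and BM25 rise to the top.

```python
def rrf_merge(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank(d))."""
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from the two retrieval paths.
vector_hits = ["fact-3", "fact-1", "fact-7"]  # vector-similarity ranking
bm25_hits = ["fact-1", "fact-9", "fact-3"]    # BM25 full-text ranking

merged = rrf_merge([vector_hits, bm25_hits])
# fact-1 wins: ranked 2nd and 1st, beating fact-3's 1st and 3rd.
```

k=60 is the conventional smoothing constant from the original RRF paper; it damps the advantage of a single top-1 placement.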

Show HN: I made my fitness dashboard public and Apple Health needs an API https://ift.tt/fuOenJ3

Show HN: I made my fitness dashboard public and Apple Health needs an API https://ift.tt/5JmO1CB March 31, 2026 at 12:39AM

Show HN: A Terminal Interface for Jira https://ift.tt/GDAg3Zr

Show HN: A Terminal Interface for Jira https://ift.tt/YcgmUQ3 March 30, 2026 at 08:47PM

Sunday, March 29, 2026

Show HN: QuickBEAM – run JavaScript as supervised Erlang/OTP processes https://ift.tt/XdxsOrY

Show HN: QuickBEAM – run JavaScript as supervised Erlang/OTP processes

QuickBEAM is a JavaScript runtime embedded inside the Erlang/OTP VM. If you're building a full-stack app, JavaScript tends to leak in anyway: frontend, SSR, or third-party code. QuickBEAM runs that JavaScript inside OTP supervision trees.

Each runtime is a process with a `Beam` global that can:

- call Elixir code
- send/receive messages
- spawn and monitor processes
- inspect runtime/system state

It also provides browser-style APIs backed by OTP/native primitives (fetch, WebSocket, Worker, BroadcastChannel, localStorage, native DOM, etc.). This makes it usable for:

- SSR
- sandboxed user code
- per-connection state
- backend JS with direct OTP interop

Notable bits:

- JS runtimes are supervised and restartable
- sandboxing with memory/reduction limits and API control
- native DOM that Erlang can read directly (no string rendering step)
- no JSON boundary between JS and Erlang
- built-in TypeScript, npm support, and native addons

QuickBEAM is part of Elixir Volt, a full-stack frontend toolchain built on Erlang/OTP with no Node.js. Still early; feedback welcome. https://ift.tt/lAeFcK6 March 29, 2026 at 04:03AM

Show HN: Tinyvision: Building Ultra-Lightweight Models for Image Tasks https://ift.tt/RoVEhij

Show HN: Tinyvision: Building Ultra-Lightweight Models for Image Tasks

Disclaimer: English is not my first language; I used an LLM to help me write this post clearly.

Hello everyone, I just wanted to share my project and get some feedback on it.

Goal: Most image models today are bulky and overkill for basic tasks. This project explores how small we can make image classification models while still keeping them functional, by stripping them down to the bare minimum.

Current progress and results:

Cat vs. dog classification: First completed task, using a 25,000-image dataset with filter-bank preprocessing and compact CNNs. Achieved up to 86.87% test accuracy with models under 12.5k parameters. Several models under 5k parameters reached over 83% accuracy, showing strong efficiency-performance trade-offs.

CIFAR-10 classification: Second completed task, using the CIFAR-10 dataset. This approach relies on compact CNN architectures without the filter-bank preprocessing. A 22.11k-parameter model achieved 87.38% accuracy; a 31.15k-parameter model achieved 88.43% accuracy.

All code and experiments are available in my GitHub repository: https://ift.tt/MjKI7ap

I would love for you to check out the project and let me know your feedback! Also, do leave a star if you find it interesting. https://ift.tt/MjKI7ap March 29, 2026 at 10:52PM
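To see how tight a sub-12.5k or sub-5k parameter budget is, it helps to count parameters layer by layer. This is a back-of-the-envelope sketch with a hypothetical architecture, not the actual Tinyvision models: a conv layer costs (k·k·in + 1)·out parameters (weights plus one bias per filter), and a dense head after global average pooling costs (n_in + 1)·n_out.

```python
def conv2d_params(in_ch, out_ch, k):
    # Each of the out_ch filters has k*k*in_ch weights plus one bias.
    return (k * k * in_ch + 1) * out_ch

def dense_params(n_in, n_out):
    # Fully connected layer: weight matrix plus one bias per output.
    return (n_in + 1) * n_out

# Hypothetical tiny CNN for 2-class (cat vs. dog) classification.
layers = [
    conv2d_params(3, 8, 3),   # 3 -> 8 channels, 3x3 kernels
    conv2d_params(8, 16, 3),  # 8 -> 16 channels
    conv2d_params(16, 24, 3), # 16 -> 24 channels
    dense_params(24, 2),      # global-average-pool then 2-class head
]
total = sum(layers)  # 224 + 1168 + 3480 + 50 = 4922 parameters
```

Three modest conv stages and a pooled linear head already consume nearly the whole 5k budget, which shows why results like 83%+ accuracy at that size are a genuine efficiency-performance trade-off.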

Saturday, March 28, 2026

Show HN: Octopus, Open-source alternative to CodeRabbit and Greptile https://ift.tt/FwaklQ0

Show HN: Octopus, Open-source alternative to CodeRabbit and Greptile

Hey HN, we built Octopus, an open-source, self-hostable AI code reviewer for GitHub and Bitbucket. It uses RAG with vector search (Qdrant) to understand your full codebase, not just the diff, and posts inline findings on PRs with severity ratings. It works with Claude and OpenAI, and you can bring your own API keys.

Video: https://www.youtube.com/watch?v=HP1kaKTOdXw | GitHub: https://ift.tt/lUaGAIE https://ift.tt/F51fn3q March 28, 2026 at 08:20PM
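The retrieval step behind "understand your full codebase, not just the diff" can be sketched with a toy in-memory index. This is an illustration of vector retrieval in general, not Octopus's Qdrant-backed implementation; the chunk IDs and three-dimensional embeddings are made up (real embeddings have hundreds of dimensions and come from a model). The diff is embedded as a query, and the most similar code chunks are pulled in as review context.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "index": chunk id -> embedding. Real setups store these in Qdrant.
index = {
    "auth/login.py": [0.9, 0.1, 0.0],
    "db/models.py": [0.1, 0.8, 0.3],
    "utils/time.py": [0.0, 0.2, 0.9],
}

def top_k(query_vec, k=2):
    # Rank all chunks by similarity to the query and keep the best k.
    ranked = sorted(index, key=lambda cid: cosine(query_vec, index[cid]), reverse=True)
    return ranked[:k]

# A diff touching authentication code maps to a query vector near auth/login.py.
hits = top_k([0.85, 0.15, 0.05])
```

The retrieved chunks are then prepended to the LLM prompt alongside the diff, which is what lets the reviewer flag issues whose cause lives outside the changed lines.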