
Thursday, March 12, 2026

Show HN: Web-Based ANSI Art Viewer https://ift.tt/GYj4KfN

Show HN: Web-Based ANSI Art Viewer My love letter to ANSI art. Full width rendering, scrolling by baud rate, text is selectable, and more. There are some example links at the top if you're feeling lucky. https://sure.is/ansi/ March 10, 2026 at 03:40PM

Show HN: OneCLI – Vault for AI Agents in Rust https://ift.tt/dGPLwex

Show HN: OneCLI – Vault for AI Agents in Rust We built OneCLI because AI agents are being given raw API keys, and it's going about as well as you'd expect. We figured the answer isn't "don't give agents access," it's "give them access without giving them secrets."

OneCLI is an open-source gateway that sits between your AI agents and the services they call. You store your real credentials once in OneCLI's encrypted vault and give your agents placeholder keys. When an agent makes an HTTP call through the proxy, OneCLI matches the request by host/path, verifies the agent should have access, swaps the placeholder for the real credential, and forwards the request. The agent never touches the actual secret; it just uses CLI or MCP tools as normal.

Try it in one line: docker run --pull always -p 10254:10254 -p 10255:10255 -v onecli-data:/app/data ghcr.io/onecli/onecli

The proxy is written in Rust, the dashboard is Next.js, and secrets are AES-256-GCM encrypted at rest. Everything runs in a single Docker container with an embedded Postgres (PGlite), no external dependencies. It works with any agent framework (OpenClaw, NanoClaw, IronClaw, or anything that can set an HTTPS_PROXY).

We started with what felt most urgent: agents shouldn't be holding raw credentials. The next layer is access policies and audit: defining what each agent can call, logging everything, and requiring human approval before sensitive actions go through.

It's Apache-2.0 licensed. We'd love feedback on the approach, and we're especially curious how people are handling agent auth today. GitHub: https://ift.tt/1rCywvt Site: https://onecli.sh https://ift.tt/1rCywvt March 12, 2026 at 11:41PM
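The match-and-swap step described above can be sketched in a few lines. This is a toy illustration of the idea, not OneCLI's actual code; the names (VaultEntry, swap_credentials) and the Bearer-token format are assumptions for the example.

```python
# Toy sketch of the credential-swap idea: match by host/path, verify the
# placeholder the agent holds, and only then substitute the real secret.
from dataclasses import dataclass

@dataclass
class VaultEntry:
    host: str          # upstream host this credential belongs to
    path_prefix: str   # only requests under this path may use it
    placeholder: str   # what the agent is given
    real_secret: str   # what the upstream actually sees

def swap_credentials(vault, host, path, auth_header):
    """Return the rewritten Authorization header, or refuse the request."""
    for entry in vault:
        if (entry.host == host
                and path.startswith(entry.path_prefix)
                and auth_header == f"Bearer {entry.placeholder}"):
            return f"Bearer {entry.real_secret}"
    raise PermissionError("no matching credential for this agent/route")

vault = [VaultEntry("api.example.com", "/v1/", "phk_agent_1", "sk_real_abc")]
print(swap_credentials(vault, "api.example.com", "/v1/chat", "Bearer phk_agent_1"))
# -> Bearer sk_real_abc; an unknown host/path raises instead of leaking
```

The important property is the last line: a request that matches no vault entry fails closed, so the agent can never extract a secret it wasn't routed to.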

Show HN: A2Apex – Test, certify, and discover trusted A2A agents https://ift.tt/qGVEfCM

Show HN: A2Apex – Test, certify, and discover trusted A2A agents Hey HN, I built A2Apex ( https://a2apex.io ) — a testing and reputation platform for AI agents built on Google's A2A protocol. The problem: AI agents are everywhere, but there's no way to verify they actually work. No standard testing. No directory of trusted agents. No reputation system. What A2Apex does: - Test — Point it at any A2A agent URL. We run 50+ automated compliance checks: agent card validation, live endpoint testing, state machine verification, streaming, auth, error handling. - Certify — Get a 0-100 trust score with Gold/Silver/Bronze badges you can embed in your README or docs. - Get Listed — Every tested agent gets a public profile page in the Agent Directory with trust scores, skills, test history, and embeddable badges. Think of it as SSL Labs (testing) + npm (directory) + LinkedIn (profiles) — for AI agents. Stack: Python/FastAPI, vanilla JS, SQLite. No frameworks, no build tools. Runs on a Mac mini in Wyoming. Free: 5 tests/month. Pro: $29/mo. Startup: $99/mo. Try it at https://app.a2apex.io I'm a dragline operator at a coal mine — built this on nights and weekends using Claude. Would love feedback from anyone building A2A agents or thinking about agent interoperability. https://a2apex.io March 12, 2026 at 11:10PM
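One way a 0-100 score with tiered badges could fall out of pass/fail compliance checks is a simple weighted pass rate. This is purely a hedged sketch; A2Apex's real scoring weights and badge thresholds are not published in the post.

```python
# Hypothetical trust-score aggregation: score = percentage of checks passed,
# badge assigned by threshold. Thresholds here are illustrative assumptions.
def trust_score(results):
    """results: dict of check name -> bool. Returns (score 0-100, badge)."""
    if not results:
        return 0, None
    score = round(100 * sum(results.values()) / len(results))
    if score >= 90:
        badge = "Gold"
    elif score >= 75:
        badge = "Silver"
    elif score >= 50:
        badge = "Bronze"
    else:
        badge = None
    return score, badge

checks = {"agent_card_valid": True, "endpoint_live": True,
          "streaming": True, "auth": False}
print(trust_score(checks))  # (75, 'Silver')
```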

Show HN: We open sourced Vapi – UI included https://ift.tt/uiI8Gs3

Show HN: We open sourced Vapi – UI included We kept hitting the same wall building voice AI systems. Pipecat and LiveKit are great projects, genuinely, but getting them to production took us weeks of plumbing: wiring things together, handling barge-ins, setting up telephony, knowledge bases, and tool calls. And every time we needed to tweak agent behavior, we were back in the code and redeploying. We just wanted to change a prompt and test it in 30 seconds. That's why Vapi, Retell, and the like exist.

So we wrote the entire stack and open sourced it as a visual drag-and-drop builder for voice agents (think Vapi, or n8n for voice). It's built on a Pipecat fork, BSD-2 licensed, no strings attached. Tool calls, knowledge base, variable extraction, voicemail detection, call transfer to humans, multilingual support, post-call QA, background noise suppression, and a website widget are all included. You're not paying per-minute fees to a middleman wrapping the same APIs you'd call directly.

You can set it up with a simple Docker command. It comes pre-wired with Deepgram, Cartesia, OpenAI, Speechmatics, and Sarvam for STT, the same for TTS, and OpenAI, Gemini, Groq, OpenRouter, and Azure on the LLM side. Telephony works out of the box with Twilio, Vonage, Cloudonix, and Asterisk for both inbound and outbound. There's a hosted version at app.dograh.com if self-hosting isn't your thing.

Repo: github.com/dograh-hq/dograh Video walkthrough: https://youtu.be/sxiSp4JXqws

We built this out of frustration, not a thesis. The tool is free to use and fully open source (and will always remain so). Happy to answer questions about how we built it. https://ift.tt/2NBAKeg March 12, 2026 at 10:03PM

Wednesday, March 11, 2026

Show HN: Rewriting Mongosh in Golang Using Claude https://ift.tt/UkxS7ch

Show HN: Rewriting Mongosh in Golang Using Claude https://ift.tt/AGOQcwa March 11, 2026 at 10:55PM

Show HN: Loquix – Open-source Web Components for AI chat interfaces https://ift.tt/gnL3FsA

Show HN: Loquix – Open-source Web Components for AI chat interfaces https://ift.tt/Ajhfv3M March 11, 2026 at 10:19PM

Show HN: StreamHouse – Open-source Kafka alternative https://ift.tt/Y9hptqR

Show HN: StreamHouse – Open-source Kafka alternative Hey HN, I built StreamHouse, an open-source streaming platform that replaces Kafka's broker-managed storage with direct S3 writes. The goal: same semantics, a fraction of the cost.

How it works: producers batch and compress records, a stateless server manages partition routing and metadata (SQLite for dev, PostgreSQL for prod), and segments land directly in S3. Consumers read from S3 with a local segment cache. No broker disks to manage, no replication factor to tune — S3 gives you 11 nines of durability out of the box.

What's there today:
- Producer API with batching, LZ4 compression, and offset tracking (62K records/sec)
- Consumer API with consumer groups, auto-commit, and multi-partition fanout (30K+ records/sec)
- Kafka-compatible protocol (works with existing Kafka clients)
- REST API, gRPC API, CLI, and a web UI
- Docker Compose setup for trying it locally in 5 minutes

What's not there yet:
- Battle-tested production deployments (I'm the only user so far)
- Connectors for consumers to plug straight into downstream systems (e.g. ClickHouse, Elasticsearch)

The cost model is what motivated this. Kafka's storage costs scale with replication factor × retention × volume. With S3 at $0.023/GB/month, storing a TB of events costs ~$23/month instead of hundreds on broker EBS volumes.

Written in Rust, 15 crates so far. Apache 2.0 licensed. GitHub: https://ift.tt/hu5IGb6 How-it-works post on my website: https://ift.tt/whu02k3 Happy to answer questions about the architecture, tradeoffs, or what I learned building this. https://ift.tt/hu5IGb6 March 11, 2026 at 09:14PM
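The cost comparison in the post can be made concrete with back-of-envelope arithmetic. The S3 price is the one quoted above; the EBS price and replication factor on the Kafka side are assumptions for illustration, not figures from the post.

```python
# Back-of-envelope storage cost: Kafka pays replication_factor x retained
# volume on broker disks, while direct-to-S3 stores a single logical copy.
S3_PER_GB_MONTH = 0.023    # S3 standard, as quoted in the post
EBS_PER_GB_MONTH = 0.08    # assumed gp3-style price (illustrative)
REPLICATION = 3            # typical Kafka replication factor (assumed)

retained_gb = 1000         # 1 TB of retained events

s3_cost = retained_gb * S3_PER_GB_MONTH                  # single copy
kafka_cost = retained_gb * EBS_PER_GB_MONTH * REPLICATION

print(f"S3: ${s3_cost:.0f}/mo vs broker EBS x{REPLICATION}: ${kafka_cost:.0f}/mo")
# ~$23/mo vs ~$240/mo under these assumptions
```

The gap widens with retention, since S3 has no per-broker disk to re-provision as the retained window grows.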

Show HN: I built an ISP infrastructure emulator from scratch with a custom vBNG https://ift.tt/RKbmUQr

Show HN: I built an ISP infrastructure emulator from scratch with a custom vBNG Demo: https://ift.tt/espNjWG GitHub: https://ift.tt/eJzhcuU

Aether is a multi-BNG (Broadband Network Gateway) ISP infrastructure lab, built almost from scratch, that emulates IPoE IPv4 subscriber management end-to-end. It supports IPoE/IPv4 networks and runs a Python-based vBNG with RADIUS AAA, per-subscriber traffic shaping, and traffic simulation, all emulated on Containerlab. It is also my first personal networking project, built roughly over a month.

Motivations behind the project: I'm a CS sophomore. About three years ago, I was assigned, as an intern, to build an OSS/BSS platform for a regional ISP by myself, without mentoring. Referencing demo.splynx.com, I developed most of the BSS side (bookkeeping, accounting, inventory management), but in terms of networking I managed to install and set up RADIUS, and that was about it. I didn't have anyone to mentor me or ask questions of, so I gave up then. Three years later, I decided to try cracking it again. This project is meant to serve as a learning reference for anyone who's been in that same position, i.e. staring at closed-source vendor stacks without proper guidance. This is absolutely not production-grade, but I hope it gives someone a place to start.

Architecture overview: The core component, the BNG, runs on an event-driven architecture where state changes are passed around as messages to avoid juggling mutexes and locks. The session manager is the sole owner of the session state. To keep it clean and predictable, the BNG never accepts external input directly. The one exception is the Go RADIUS CoA daemon, which passes CoA messages in via IPC sockets. Everything the BNG produces (events, session snapshots) gets pushed to Redis Streams, where the bng-ingestor picks them up, processes them, and persists them.

Simulation and meta-configs: I generate traffic through a simulator node that mounts the host's Docker socket and runs docker exec commands on selected hosts. The topology.yaml used by Containerlab to define the network topology grows as more BNGs and access nodes are added, so aether.config.yaml, a simpler configuration, is consumed by the configuration pipeline to generate topology.yaml and the other files (nginx.conf, kea-dhcp.conf, RADIUS clients.conf, etc.).

Known limitations:
- Multiple veth hops through the emulated topology add significant overhead. Profiling with iperf3 (-P 10 -t 10, 9500 MTU, 24 vCPUs) shows BNG→upstream at ~24 Gbit/s, but host→BNG→upstream drops to ~3.5 Gbit/s. The 9500 MTU also isn't representative of real ISP deployments. This gets worse when the actual network is reintroduced, capping my local throughput at ~1.6 Gbit/s.
- The circuit ID format (1/0/X) is non-standard. I simplified it for clarity.
- No iBGP or VLAN support.
- No IPv6 support. I wanted to target IPv4 networks from the start to avoid getting too much breadth without a lot of depth.

Nearly everything I know about networking (except some sections from AWS) I learned building this. A lot was figured out on the fly, so engineers will likely spot questionable decisions in the codebase. I'd genuinely appreciate that feedback.

Questions: Currently, the circuit where the user connects is arbitrarily decided by the demo user. In a real system with thousands of circuits, it'd be very difficult to properly assess which circuit the customer might connect to. When adding a new customer to a service, how does the operator decide, based on the customer's location, which circuit to provide the service on? https://ift.tt/KvfXZGD March 11, 2026 at 08:38PM
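The single-owner, message-passing pattern the architecture overview describes can be sketched with a queue and one consumer thread. This is a minimal illustration of the technique, not Aether's code; the message shapes and field names are invented for the example.

```python
# Single-owner session state: workers (and the CoA daemon) only enqueue
# messages; one manager thread is the sole mutator, so no locks are needed.
import queue
import threading

events = queue.Queue()
sessions = {}  # touched ONLY by the manager thread

def session_manager():
    while True:
        msg = events.get()
        if msg is None:          # shutdown sentinel
            break
        kind, sub_id, payload = msg
        if kind == "start":
            sessions[sub_id] = {"state": "up", **payload}
        elif kind == "coa" and sub_id in sessions:
            sessions[sub_id].update(payload)   # e.g. a shaping change via CoA
        elif kind == "stop":
            sessions.pop(sub_id, None)

t = threading.Thread(target=session_manager)
t.start()
events.put(("start", "sub-1", {"rate_kbps": 10000}))
events.put(("coa", "sub-1", {"rate_kbps": 50000}))  # CoA bumps the rate
events.put(None)
t.join()
print(sessions)  # {'sub-1': {'state': 'up', 'rate_kbps': 50000}}
```

Because every mutation flows through the queue, ordering is total and the session table never needs a mutex, which is exactly the trade the post describes.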

Tuesday, March 10, 2026

Show HN: Satellite imagery object detection using text prompts https://ift.tt/3jlR5gE

Show HN: Satellite imagery object detection using text prompts I built a browser-based tool for detecting objects in satellite imagery using vision-language models (VLMs). You draw a polygon on the map and enter a text prompt such as "swimming pools", "oil tanks", or "buses". The system scans the selected area tile by tile and returns detections projected back onto the map as GeoJSON.

Pipeline: select area and zoom level, split the region into mercantile tiles, run each tile with the prompt through a VLM, convert predicted bounding boxes to geographic coordinates (WGS84), and render the results back on the map.

It works reasonably well for distinct structures in a zero-shot setting; occluded objects are still better handled by specialized detectors like YOLO models. There is a public demo, no login required. I am mainly interested in feedback on detection quality, performance tradeoffs between VLMs and specialized detectors, and potential real-world use cases. https://ift.tt/qZ05e4N March 9, 2026 at 02:52PM
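The bounding-box-to-WGS84 step in the pipeline above boils down to standard web-mercator tile math. The sketch below reimplements that conversion without dependencies (the project itself uses mercantile tiles); the function names and the GeoJSON shape are illustrative, not the tool's actual API.

```python
# Convert a model's pixel-space bounding box inside a web-mercator tile
# (x, y, zoom) into WGS84 lon/lat and wrap it as a GeoJSON Polygon.
import math

def pixel_to_lonlat(tile_x, tile_y, zoom, px, py, tile_size=256):
    """Pixel (px, py) within tile (tile_x, tile_y) at zoom -> (lon, lat)."""
    n = 2 ** zoom
    xf = (tile_x + px / tile_size) / n
    yf = (tile_y + py / tile_size) / n
    lon = xf * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * yf))))
    return lon, lat

def bbox_to_geojson(tile_x, tile_y, zoom, box):
    """box = (x0, y0, x1, y1) in tile pixels -> GeoJSON Feature (Polygon)."""
    x0, y0, x1, y1 = box
    corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1), (x0, y0)]  # closed ring
    ring = [list(pixel_to_lonlat(tile_x, tile_y, zoom, px, py))
            for px, py in corners]
    return {"type": "Feature", "properties": {},
            "geometry": {"type": "Polygon", "coordinates": [ring]}}

# tile (0, 0) at zoom 0 covers the whole world; its center pixel is lon/lat 0,0
print(pixel_to_lonlat(0, 0, 0, 128, 128))  # (0.0, 0.0)
```

Each detection therefore only needs its tile coordinates plus the model's pixel box to land correctly on the map.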

Show HN: Agentic Data Analysis with Claude Code https://ift.tt/xuydoig

Show HN: Agentic Data Analysis with Claude Code Hey HN, as a former data analyst, I’ve been tooling around trying to get agents to do my old job. The result is this system that gets you maybe 80% of the way there. I think this is a good data point for what the current frontier models are capable of and where they still fall short (in this case, hypothesis generation and general data intuition).

Some initial learnings:
- Generating web app-based reports goes much better if there are explicit templates/pre-defined components for the model to use.
- Claude can “heal” broken charts if you give it access to chart images and run a separate QA loop.

Would love either feedback from the community or to hear from others who have tried similar things! https://ift.tt/zvhU8OR March 10, 2026 at 11:44PM

Monday, March 9, 2026

Show HN: DenchClaw – Local CRM on Top of OpenClaw https://ift.tt/USBzH2n

Show HN: DenchClaw – Local CRM on Top of OpenClaw Hi everyone, I am Kumar, co-founder of Dench ( https://denchclaw.com ). We were part of YC S24, an agentic workflow company that previously worked with sales floors automating niche enterprise tasks such as outbound calling, legal intake, etc.

Building consumer / power-user software always gave me more joy than FDEing into an enterprise. It did not give me joy to manually add AI tools to a cloud harness for every small new thing, at least not as much as completely local software that is open source and has all the powers of OpenClaw (I can now talk to my CRM on Telegram!). A week ago, we launched Ironclaw, an open-source OpenClaw CRM framework ( https://ift.tt/CQY50SB ), but people confused us with NearAI’s Ironclaw, so we changed our name to DenchClaw ( https://denchclaw.com ).

OpenClaw today feels like early React: the primitive is incredibly powerful, but the patterns are still forming, and everyone is piecing together their own way to actually use it. What made React explode was the emergence of frameworks like Gatsby and Next.js that turned raw capability into something opinionated, repeatable, and easy to adopt. That is how we think about DenchClaw: we are trying to make it one of the clearest, most practical, and most complete ways to use OpenClaw in the real world.

Demo: https://www.youtube.com/watch?v=pfACTbc3Bh4#t=43

npx denchclaw

It has a CRM focus because we asked a couple dozen hard-core OpenClaw users "what do you actually do", and the answers were sales automation, lead enrichment, biz dev, creating slides, LinkedIn outreach, and email/Notion/calendar stuff, and it's always painful to set up. But I use DenchClaw daily for almost everything I do. It also works as a coding agent like Cursor; DenchClaw built DenchClaw. I am addicted now that I can ask it, “hey, in the companies table only show me the ones who have more than 5 employees”, and it updates the view live rather than my having to add a filter manually.

On Dench, everything sits in a file system (the table filters, views, column toggles, calendar/gantt views, etc.), so OpenClaw can work with it directly using Dench’s CRM skill. The CRM is built on top of DuckDB, the smallest, most performant, and at the same time most feature-rich database we could find. Thank you, DuckDB team!

It creates a new OpenClaw profile called “dench” and opens a new OpenClaw Gateway, which means you can run all your usual OpenClaw commands by prefixing each with `openclaw --profile dench`. It will start your gateway in the 19001 port range, and you can access the DenchClaw frontend at localhost:3100. Once you open it in Safari, just add it to your Dock to use it as a PWA.

Think of it as Cursor for your Mac (it also works on Linux and Windows), built on OpenClaw. DenchClaw has a file tree view so you can use it as an elevated Finder to do anything on your Mac. I use it to create slides and do LinkedIn outreach using MY browser: DenchClaw finds your Chrome profile and copies it fully into its own, so you won’t have to log in to all your websites again. DenchClaw sees what you see, does what you do. It’s an everything app that sits locally on your Mac. Just ask it “hey, import my Notion” or “hey, import everything from my HubSpot”, and it will literally go into your browser, export all objects and documents, and put them in its own workspace.

We would love you all to break it, stress test its CRM capabilities and how it streams subagents for lead enrichment, and hook it into your Apollo, Gmail, Notion, and everything there is. Looking forward to comments/feedback! https://ift.tt/bUdhegB March 9, 2026 at 09:55PM
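The "more than 5 employees" ask from the post ultimately bottoms out in a plain SQL filter. DenchClaw is built on DuckDB; the sketch below uses sqlite3 only so it runs with no extra dependencies (the SQL is identical), and the table shape is invented for illustration.

```python
# What "only show me companies with more than 5 employees" looks like once
# the agent translates it into SQL against the CRM's table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE companies (name TEXT, employees INTEGER)")
db.executemany("INSERT INTO companies VALUES (?, ?)",
               [("Acme", 12), ("Tinyco", 3), ("Initech", 8)])

# the natural-language filter, applied to the live view
rows = db.execute(
    "SELECT name FROM companies WHERE employees > 5 ORDER BY name").fetchall()
print([r[0] for r in rows])  # ['Acme', 'Initech']
```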

Show HN: I gave my robot physical memory – it stopped repeating mistakes https://ift.tt/SZ82DRB

Show HN: I gave my robot physical memory – it stopped repeating mistakes https://ift.tt/FS9pKzn March 9, 2026 at 11:36PM

Sunday, March 8, 2026

Show HN: Skir – A schema language I built after 15 years of Protobuf friction https://ift.tt/WLdg0tN

Show HN: Skir – A schema language I built after 15 years of Protobuf friction Why I built Skir: https://ift.tt/zdabGcW... Quick start: npx skir init. All the config lives in one YAML file. Website: https://skir.build GitHub: https://ift.tt/Z3cks4W Would love feedback, especially from teams running mixed-language stacks. https://skir.build/ March 9, 2026 at 12:17AM