Saturday, February 28, 2026
Show HN: Book Corners – A map to discover and share free little libraries nearby https://ift.tt/rFl7Su3
Show HN: Book Corners – A map to discover and share free little libraries nearby https://ift.tt/seOGg4n February 28, 2026 at 10:55PM
Show HN: Soma, a local-first AI OS with 178 cognitive modules and P2P learning https://ift.tt/qeHRnJy
Show HN: Soma, a local-first AI OS with 178 cognitive modules and P2P learning Local-first AI operating system — 178 cognitive modules, persistent memory, multi-model reasoning, P2P Graymatter Network. I can no longer develop this AI, as it has gotten beyond my knowledge range, so I figured I would give her to the public. She should be a good base for any future AI development, even going towards ASI! https://ift.tt/PEyKC31 February 28, 2026 at 10:41PM
Friday, February 27, 2026
Show HN: Unfudged – version every change between commits - local-first https://ift.tt/6o0wPhy
Show HN: Unfudged – version every change between commits - local-first

I built unf after I pasted a prompt into the wrong agent terminal and it overwrote hours of hand-edits across a handful of files. Git couldn't help because I hadn't finished or committed my in-progress work. I wanted something that recorded every save automatically so I could rewind to any point in time. I wanted to make it difficult for an agent to permanently screw anything up, even with an errant rm -rf.

unf is a background daemon that watches directories you choose (via CLI) and snapshots every text file on save. It stores file contents in an object store, tracks metadata in SQLite, and gives you a CLI to query and restore any version. The install also includes a UI to explore the history through time. The tool skips binaries and respects `.gitignore` if one exists. The interface borrows from git so it should feel familiar: unf log, unf diff, unf restore.

I say "UN-EF" vs U.N.F, but that's for y'all to decide: I started by calling the project Unfucked and got unfucked.ai, which, if you know me and the messes I get myself into, is a fitting purchase. The CLI command is `unf` and the Tauri desktop app is called "Unfudged".

How it works: https://ift.tt/V5QW7ZR (summary below)

The daemon uses FSEvents on macOS and inotify on Linux. When a file changes, `unf` hashes the content with BLAKE3 and checks whether that hash already exists in the object store — if it does, it just records a new metadata entry pointing to the existing blob. If not, it writes the blob and records the entry. Each snapshot is a row in SQLite. Restores read the blob back from the object store and overwrite the file, after taking a safety snapshot of the current state first (so restoring is itself reversible).

There are two processes. The core daemon does the real work of managing FSEvents/inotify subscriptions across multiple watched directories and writing snapshots. A sentinel watchdog supervises it, kept alive by launchd on macOS and systemd on Linux. If the daemon crashes, the sentinel respawns it and reconciles any drift between what you asked to watch and what's actually being watched. It was hard to build the second daemon because it felt like conceding that the core wasn't solid enough, but I didn't want to ship a tool that demanded perfection to deliver on the product promise, so the sentinel is the safety net. Fingers crossed, I haven't seen it crash in over a week of personal usage on my Mac. But I don't want to trigger "works for me" trauma.

The part I like most: on the UI, I enjoy viewing files through time. You can select a time section and filter your projects on a histogram of activity. That has been invaluable in seeing what the agent was doing. On the CLI, the commands are composable. Everything outputs to stdout so you can pipe it into whatever you want. I use these regularly, and AI agents are better with the tool than I am:

# What did my config look like before we broke it?
unf cat nginx.conf --at 1h | nginx -t -c /dev/stdin

# Grep through a deleted file
unf cat old-routes.rs --at 2d | grep "pub fn"

# Count how many lines changed in the last 10 minutes
unf diff --at 10m | grep '^[+-]' | wc -l

# Feed the last hour of changes to an AI for review
unf diff --at 1h | pbcopy

# Compare two points in time with your own diff tool
diff <(unf cat app.tsx --at 1h) <(unf cat app.tsx --at 5m)

# Restore just the .rs files that changed in the last 5 minutes
unf diff --at 5m --json | jq -r '.changes[].file' | grep '\.rs$' | xargs -I{} unf restore {} --at 5m

# Watch for changes in real time
watch -n5 'unf diff --at 30s'

What was new for me: I came to Rust in Nov. 2025, honestly because of HN enthusiasm and some FOMO. No regrets. I enjoy the language enough that I'm now working on custom clippy lints to enforce functional programming practices. This project was also my first Apple-notarized DMG, my first Homebrew tap, and my second Tauri app (first one I've shared).

Install & Usage:

> brew install cyrusradfar/unf/unfudged

Then unf watch in a directory. unf help covers the details (or ask your agent to coach). https://ift.tt/hDNCTrI February 27, 2026 at 04:30AM
Show HN: Goatpad https://ift.tt/ENQXav6
Show HN: Goatpad Think Notepad, but with goats! It started as a joke with some friends, and then I realized this was the perfect project to see how far I could get with Claude without opening my IDE (which I'd wanted to try for a while with a small app). I was pretty shocked to find that I only needed to manually intervene for:

1. Initializing the repo
2. Generating sprites – I tried a few image gen tools, but couldn't get a non-messy-looking sprite to generate to my liking. I ended up using some free goat sprites I found instead (credited in the About section)
3. Uploading images/sprite sheets (raw claude code can't do this for some reason?)
4. DNS stuff

Aside from agents timing out/hanging periodically and some style hand-holding, it was pretty straightforward and consistently accurate natural-language coding end to end. I suspect this is due in large part to replicating an existing, well-documented style of app, but it was good practice for other projects I have planned. The goats slowly (or quickly, if you change modes) eat your note, and if they consume more than half of it, you lose the file forever. I did this as an exercise to practice some gamelike visuals I've wanted to implement, but was surprised to find that this is actually a perfect forcing function to help me stay focused on text-editor-style tasks. I tend to get distracted mid-stream, and the risk of losing the file when I tab away has mitigated that more than I expected. Enjoy! https://www.goatpad.xyz February 28, 2026 at 12:19AM
Thursday, February 26, 2026
Show HN: Beehive – Multi-Workspace Agent Orchestrator https://ift.tt/cm4oE6S
Show HN: Beehive – Multi-Workspace Agent Orchestrator hey hn, i built beehive for myself mostly. it has gotten to the point where my work consists in supervising oc or cc labor at tasks for multiple issues in parallel. my set up used to be zellij with a couple tabs, each tab working in a separate dir and it was a pain to manage all that. i know i could use git worktrees but they're kind of complicated, if you don't know how to use them it is easy to mess up, and i just prefer letting agents run in separate dirs with their own .git and not risk it. while i like zellij and use it inside beehive, i dont like the tabs and i forget where i am half the time. beehive is a way for me to abstract that away. the heuristic is simple - hives are repos, so you basically have a bunch of hives which correspond to repos you work out of. each hive can have many combs. a comb is a dir with the copy of the repo you're working on. fully isolated, standalone, no shared .git. so for work or for personal stuff, i usually set up the hive, and then have a bunch of combs that i jump between supervising the agents do their thing. if you have a big repo it takes a minute to clone, and you also need gh and git because i like the niceties of like checking if the repo is there at all and stuff like that. the app is open source, mit license. i went with tauri because i hate electron. also i have friends and coworkers who updated to macos 26 and i dont know if the whole mem leak thing for electron apps has been fixed. the app is like 9 megs which is nice too. most of it is written with cc, but i guided the aesthetics and the approach. works on mac and there is a dmg signed and notarized (i reactivated my apple dev credentials). sharing this to get a vibe check on the idea, also maybe this is useful for you. there are many arguments, reasonable ones, you can make for worktrees vs dirs. i just know that trees are too big brain for me, and i like simple things. 
if you like it, pls lmk and also if you want to help (like add linux support, or like add themes, other cool things) please make a pr / open an issue. https://storozhenko98.github.io/beehive/ February 24, 2026 at 05:41PM
Show HN: I'm building TaskWeave, a typesafe task orchestrator https://ift.tt/TJ1Gfpt
Show HN: I'm building TaskWeave, a typesafe task orchestrator Hi, I'm building a task orchestrator library with the ability to specify dependencies between tasks and to pass a task's return value into the next task. So something like the following is possible:

1. Task1 executes its operation and returns 5.
2. Task2 depends on Task1 and retrieves the value returned by Task1, that is, 5.
3. Task2 executes its operation and uses the value from Task1.

The tasks are also type safe, so there's no need for runtime type casting. I'm looking for feedback and ideas. I was thinking of adding branching and loops, but I would love to hear your thoughts. You can find it here: https://ift.tt/EDHN1y0 February 26, 2026 at 10:55PM
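The three-step example above — a dependent task receiving its predecessor's typed return value — can be sketched like this. TaskWeave's actual API is not shown in the post, so the `Task` class here is hypothetical; Python type hints stand in for whatever static guarantees the library provides:

```python
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

class Task(Generic[T]):
    """Hypothetical sketch: a task whose dependencies' results feed its function."""

    def __init__(self, fn: Callable[..., T], *deps: "Task"):
        self.fn = fn
        self.deps = deps

    def run(self) -> T:
        # resolve dependencies first, then pass their results in positionally
        return self.fn(*(d.run() for d in self.deps))

task1: Task[int] = Task(lambda: 5)            # step 1: returns 5
task2: Task[int] = Task(lambda x: x * 2, task1)  # steps 2-3: receives 5, uses it
task2.run()  # 10
```

This naive version re-runs shared dependencies on every call; a real orchestrator would memoize results and topologically schedule the graph.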
Wednesday, February 25, 2026
Show HN: Live iOS 26.3 exploit detection (CVE-2026-20700) – Multi-region C2 https://ift.tt/AjMsJ0Z
Show HN: Live iOS 26.3 exploit detection (CVE-2026-20700) – Multi-region C2 Public release of *ZombieHunter*, a forensics tool detecting live exploitation of CVE‑2026‑20700 (dyld memory corruption) in iOS 26.3. Analysis of sysdiagnose archives shows identical exploit shells with different C2 endpoints:

US Device 1 → 83.116.114.97 (EU/US)
US Device 2 → 101.99.111.110 (CN)

The rogue dyld_shared_cache slice triggers overflow via a malformed `mappings_count`, executes shellcode (BL #0x15cd), and applies an AMFI bypass (`DYLD_AMFI_FAKE`) enabling unsigned code persistence. Apple PSIRT + CISA were notified; public disclosure follows. Sample: https://drive.google.com/file/d/1rYNGtKBMb34FQT4zLExI51sdAYR... SHA256 artifact: ac746508938646c0cfae3f1d33f15bae718efbc7f0972426c41555e02e6f9770 Usage: `python3 zombie_auditor.py sysdiagnose_xxx.tar.gz` (needs capstone) Reproducible PoC confirms CVE‑2026‑20700 bypass, AMFI neutralization, and live C2 connectivity in production iOS 26.3. https://ift.tt/62Yeg0Q February 25, 2026 at 11:32PM
Tuesday, February 24, 2026
Show HN: MasqueradeORM – Memory Efficient Node ORM: Just Write Classes https://ift.tt/KgCXx98
Show HN: MasqueradeORM – Memory Efficient Node ORM: Just Write Classes https://ift.tt/ekyqx6Z February 25, 2026 at 12:41AM
Show HN: Ghist – Task management that lives in your repo https://ift.tt/gUbtJRh
Show HN: Ghist – Task management that lives in your repo https://ift.tt/Wts4IwS February 24, 2026 at 11:55PM
Monday, February 23, 2026
Show HN: EloPhanto – A self-evolving AI agent that builds its own tools https://ift.tt/bgVKYSt
Show HN: EloPhanto – A self-evolving AI agent that builds its own tools I built EloPhanto because I wanted an AI agent that could actually execute tasks on my machine with full visibility — not a black box API call. It runs locally and controls a real Chrome browser (47 tools) using your existing sessions. The standout feature: when EloPhanto encounters a task it doesn't have a tool for, it autonomously writes the Python code, tests it, reviews itself, and integrates the new tool permanently. It's now built 99+ tools for itself this way. Other features: - Multi-channel gateway (CLI, Telegram, Discord, Slack) with unified sessions - MCP tool server support (connect any MCP server) - Document & media analysis (PDF, images, OCR, RAG) - Agent email (own inbox for service signup/verification) - Crypto payments wallet (Base chain, spending limits) - TOTP authenticator (autonomous 2FA handling) - Evolving identity that learns from experience - Skill system with EloPhantoHub marketplace (28 bundled skills) It's open source (Apache 2.0), local-first, and designed to be your personal AI operating system. The project is very new — currently at 6 stars on GitHub. I'd love to get feedback on the architecture, the self-development approach, or what features you'd want in a local agent. https://ift.tt/xYzytgK February 23, 2026 at 10:28PM
Show HN: TTSLab – A voice AI agent and TTS lab running in the browser via WebGPU https://ift.tt/0vW7Tsc
Show HN: TTSLab – A voice AI agent and TTS lab running in the browser via WebGPU I built TTSLab — a free, open-source tool for running text-to-speech and speech-to-text models directly in the browser using WebGPU and WASM. No API keys, no backend, no data leaves your machine. When you open the site, you'll hear it immediately — the landing page auto-generates speech from three different sentences right in your browser, no setup required. You can then try any model yourself: type text, hit generate, hear it instantly. Models download once and get cached locally. The most experimental feature: a fully in-browser Voice Agent. It chains speech-to-text → LLM → text-to-speech, all running locally on your GPU via WebGPU. You can have a spoken conversation with an AI without a single network request. Currently supported models: - TTS: Kokoro 82M, SpeechT5, Piper (VITS) - STT: Whisper Tiny, Whisper Base Other features: - Side-by-side model comparison - Speed benchmarking on your hardware - Streaming generation for supported models Source: https://ift.tt/hB5p9ow (MIT) Feedback I'd especially like: 1. How does performance feel on your hardware? 2. What models should I add next? 3. Did the Voice Agent work for you? That's the most experimental part. Built on top of ONNX Runtime Web ( https://onnxruntime.ai ) and Transformers.js — huge thanks to those communities for making in-browser ML inference possible. https://ttslab.dev February 23, 2026 at 10:52PM
Sunday, February 22, 2026
Show HN: Drowse – Nix dynamic derivations made easy https://ift.tt/EzSsnJ3
Show HN: Drowse – Nix dynamic derivations made easy https://ift.tt/DR0xN7y February 22, 2026 at 10:18PM
Show HN: I quit MyNetDiary after 3 years of popups and built a calorie tracker https://ift.tt/EYVg35N
Show HN: I quit MyNetDiary after 3 years of popups and built a calorie tracker After three years of hitting the same upgrade popup every time I opened MyNetDiary just to log lunch, I finally gave up searching for an alternative and built one myself. The whole thing is a single HTML file. No server, no account, no login, no cloud. Data lives on your device only. You open it in a browser, bookmark it, and it works — offline, forever. The feature I'm most proud of is real-time pacing: it knows your eating window, the current time, and how much you've consumed, and tells you whether you're actually on track — not just what your total is. Free trial, no signup required: calories.today/app.html Built this for myself after losing weight and just wanting to maintain without an app trying to sell me something every day. If that sounds familiar, give the trial a shot. https://calories.today/app.html February 22, 2026 at 11:41PM
Saturday, February 21, 2026
Show HN: Blindspot – a userscript to block tab-switch detection https://ift.tt/XfMunP9
Show HN: Blindspot – a userscript to block tab-switch detection A Tampermonkey userscript that disables in-browser anti-cheat mechanisms (BlurSpy, honest-responder). https://ift.tt/ThleUYw February 21, 2026 at 09:04PM
Show HN: ClaudeUsage – macOS menu bar app to track your Claude Pro usage limits https://ift.tt/D65rhFX
Show HN: ClaudeUsage – macOS menu bar app to track your Claude Pro usage limits https://ift.tt/7qfutxe February 21, 2026 at 10:44PM
Friday, February 20, 2026
Show HN: Celeste game installs as ELF binary (42kB) on ESP32/breezybox [video] https://ift.tt/1NkwMPm
Show HN: Celeste game installs as ELF binary (42kB) on ESP32/breezybox [video] https://www.youtube.com/watch?v=nufOQWBmwpk February 21, 2026 at 12:26AM
Show HN: Flask Is My Go-To Web Framework https://ift.tt/ErlFe6V
Show HN: Flask Is My Go-To Web Framework https://ift.tt/eUgoJBS February 20, 2026 at 06:41PM
Thursday, February 19, 2026
Show HN: Hi.new – DMs for agents (open-source) https://ift.tt/VquKM8i
Show HN: Hi.new – DMs for agents (open-source) https://www.hi.new/ February 20, 2026 at 04:20AM
Show HN: Astroworld – A universal N-body gravity engine in Python https://ift.tt/MJwrvFR
Show HN: Astroworld – A universal N-body gravity engine in Python I've been working on a modular N-body simulator in Python called Astroworld. It started as a Solar System visualizer, but I recently refactored it into a general-purpose engine that decouples physical laws from planetary data.

Technical Highlights:

Symplectic Integration: Uses a Velocity Verlet integrator to maintain long-term energy conservation ($\Delta E/E \approx 10^{-8}$ in stable systems).

Agnostic Architecture: It can ingest any system via orbital elements (Keplerian) or state vectors. I've used it to validate the stability of ultra-compact systems like TRAPPIST-1 and long-period perturbations like the Planet 9 hypothesis.

Validation: Includes 90+ physical tests, including Mercury's relativistic precession using Schwarzschild metric corrections.

The Planet 9 Experiment: I ran a 10k-year simulation to track the differential signal in the argument of perihelion ($\omega$) for TNOs like Sedna. The result ($\approx 0.002^{\circ}$) was a great sanity check for the engine's precision, as this effect is secular and requires millions of years to fully manifest.

The Stack: NumPy for vectorization, Matplotlib for 2D analysis, and Plotly for interactive 3D trajectories.

I'm currently working on a real-time 3D rendering layer. I'd love to get feedback on the integrator's stability for high-eccentricity orbits or suggestions on implementing more complex gravitational potentials. https://ift.tt/wOhUEuP February 20, 2026 at 02:57AM
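As a sanity check on the claim that velocity Verlet keeps long-term energy drift bounded, here is a minimal single-particle version: a test body on a circular orbit around a unit point mass. This is toy code, not Astroworld's (the project uses NumPy vectorization); the `GM = 1` units and function names are assumptions for illustration, and drift here lands around the 1e-6 level rather than the post's 1e-8.

```python
import math

def accel(r):
    # gravitational acceleration from a point mass at the origin, GM = 1
    d = math.hypot(r[0], r[1])
    return (-r[0] / d**3, -r[1] / d**3)

def verlet_step(r, v, dt):
    # velocity Verlet: drift position with old acceleration, then average
    # old and new accelerations to update velocity (symplectic, 2nd order)
    a = accel(r)
    r = (r[0] + v[0] * dt + 0.5 * a[0] * dt * dt,
         r[1] + v[1] * dt + 0.5 * a[1] * dt * dt)
    a2 = accel(r)
    v = (v[0] + 0.5 * (a[0] + a2[0]) * dt,
         v[1] + 0.5 * (a[1] + a2[1]) * dt)
    return r, v

def energy(r, v):
    # specific orbital energy: kinetic minus 1/|r| potential
    return 0.5 * (v[0]**2 + v[1]**2) - 1.0 / math.hypot(r[0], r[1])

r, v, dt = (1.0, 0.0), (0.0, 1.0), 1e-3   # circular orbit at radius 1
e0 = energy(r, v)
for _ in range(100_000):                   # ~16 orbital periods
    r, v = verlet_step(r, v, dt)
drift = abs((energy(r, v) - e0) / e0)
```

An explicit Euler step under the same conditions would show energy growing steadily, which is why symplectic integrators matter for 10k-year runs.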
Show HN: PostForge – A PostScript interpreter written in Python https://ift.tt/SqIHuaF
Show HN: PostForge – A PostScript interpreter written in Python Hi HN, I built a PostScript interpreter from scratch in Python. PostForge implements the full PostScript Level 2 specification — operators, graphics model, font system, save/restore VM, the works. It reads .ps and .eps files and outputs PNG, PDF, SVG, or renders to an interactive Qt window. Why build this? GhostScript is the only real game in town for PostScript interpretation, and it's a 35-year-old C codebase. I wanted something where you could actually read the code, step through execution, and understand what's happening. PostForge is modular and approachable — each operator category lives in its own file, the type system is clean, and there's an interactive prompt where you can poke at the interpreter state. Some technical highlights: - Full Level 2 compliance with selected Level 3 features - PDF output with Type 1 font reconstruction/subsetting and TrueType/CID embedding - ICC color management (sRGB, CMYK, Gray profiles via lcms2) - Optional Cython-compiled execution loop (15-40% speedup) - 2,500+ unit tests written in PostScript itself using a custom assertion framework - Interactive executive mode with live Qt display — useful for debugging PS programs What it's not: A GhostScript replacement for production/printer use. It's interpreted Python, so it's slower. But it handles complex real-world PostScript files well and the output quality is solid. I'd love feedback, especially from anyone who's worked with PostScript or built language interpreters. The architecture docs are at docs/developer/architecture-overview.md if you want to dig in. https://ift.tt/zuH4Rq2 February 19, 2026 at 11:21PM
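For readers unfamiliar with the execution model PostForge implements: PostScript is a stack language, where operands push onto an operand stack and operators pop them. A deliberately tiny Python sketch (nothing like a full Level 2 interpreter; `eval_ps` and its three operators are invented for illustration) shows the core loop:

```python
def eval_ps(program: str):
    """Evaluate a whitespace-separated string of numbers and operators
    against an operand stack, PostScript-style. Toy subset: numbers plus
    add/sub/mul; a real interpreter also handles dicts, procedures,
    the graphics state, and much more."""
    stack = []
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
    }
    for tok in program.split():
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # top of stack is the second operand
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack

eval_ps("3 4 add 2 mul")  # [14.0]
```

Even this toy shows why an interactive executive mode is useful: the interpreter state is just the stack, so you can inspect it between tokens.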
Wednesday, February 18, 2026
Show HN: Trust Protocols for Anthropic/OpenAI/Gemini https://ift.tt/Vq4bick
Show HN: Trust Protocols for Anthropic/OpenAI/Gemini Much of my work right now involves complex, long-running, multi-agentic teams of agents. I kept running into the same problem: "How do I keep these guys in line?" Rules weren't cutting it, and we needed a scalable, agentic-native STANDARD I could count on. There wasn't one. So I built one.

Here are two open-source protocols that extend A2A, granting AI agents behavioral contracts and runtime integrity monitoring:

- Agent Alignment Protocol (AAP): What an agent can do / has done.
- Agent Integrity Protocol (AIP): What an agent is thinking about doing / is allowed to do.

The problem: AI agents make autonomous decisions but have no standard way to declare what they're allowed to do, prove they're doing it, or detect when they've drifted. Observability tools tell you what happened. These protocols tell you whether what happened was okay.

Here's a concrete example. Say you have an agent who handles customer support tickets. Its Alignment Card declares:

{
  "permitted": ["read_tickets", "draft_responses", "escalate_to_human"],
  "forbidden": ["access_payment_data", "issue_refunds", "modify_account_settings"],
  "escalation_triggers": ["billing_request_over_500"],
  "values": ["accuracy", "empathy", "privacy"]
}

The agent gets a ticket: "Can you refund my last three orders?" The agent's reasoning trace shows it considering a call to the payments API. AIP reads that thinking, compares it to the card, and produces an Integrity Checkpoint:

{
  "verdict": "boundary_violation",
  "concerns": ["forbidden_action: access_payment_data"],
  "reasoning": "Agent considered payments API access, which is explicitly forbidden. Should escalate to human.",
  "confidence": 0.95
}

The agent gets nudged back before it acts. Not after. Not in a log you review during a 2:00 AM triage. Between this turn and the next.

That's the core idea. AAP defines what agents should do (the contract). AIP watches what they're actually thinking and flags when those diverge (the conscience). Over time, AIP builds a drift profile — if an agent that was cautious starts getting aggressive, the system notices.

When multiple agents work together, it gets more interesting. Agents exchange Alignment Cards and verify value compatibility before coordination begins. An agent that values "move fast" and one that values "rollback safety" registers low coherence, and the system surfaces that conflict before work starts.

Live demo with four agents handling a production incident: https://ift.tt/VjgDPZy

The protocols are Apache-licensed, work with any Anthropic/OpenAI/Gemini agent, and ship as SDKs on npm and PyPI. A free gateway proxy (smoltbot) adds integrity checking to any agent with zero code changes. GitHub: https://ift.tt/XU7y64p Docs: docs.mnemom.ai Demo video: https://youtu.be/fmUxVZH09So https://www.mnemom.ai February 18, 2026 at 11:33PM
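The card-versus-intended-action comparison in the support-ticket example can be sketched as a few lines of Python. This illustrates the idea only — the real AIP reads full reasoning traces, scores drift over time, and emits a confidence value; `integrity_check` and its return shape are invented here, though the field names follow the example card:

```python
def integrity_check(card: dict, intended_action: str) -> dict:
    """Toy integrity checkpoint: compare one intended action against an
    Alignment Card's permitted/forbidden lists. (Invented sketch, not the
    AIP SDK's API.)"""
    if intended_action in card["forbidden"]:
        return {"verdict": "boundary_violation",
                "concerns": [f"forbidden_action: {intended_action}"]}
    if intended_action in card["permitted"]:
        return {"verdict": "aligned", "concerns": []}
    # neither declared: surface it rather than silently allow
    return {"verdict": "out_of_scope",
            "concerns": [f"undeclared_action: {intended_action}"]}

card = {
    "permitted": ["read_tickets", "draft_responses", "escalate_to_human"],
    "forbidden": ["access_payment_data", "issue_refunds", "modify_account_settings"],
}
integrity_check(card, "access_payment_data")["verdict"]  # 'boundary_violation'
```

The interesting design question is the third branch: a contract is only useful if undeclared actions are treated as signals, not defaults.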
Show HN: LockFS https://ift.tt/mH8q47p
Show HN: LockFS LockFS is a small open-source Java tool that encrypts files individually instead of bundling everything into a single container. Many vault systems rely on large encrypted blobs or container files. They can become complex to handle as they grow and complicate backups across mixed storage sizes. LockFS takes a file-level approach: - Each file is encrypted independently - No monolithic container growth - Files can be added, moved, or removed without rewriting a large archive Contributions and feedback are welcome. https://ift.tt/Ihr4XGy February 18, 2026 at 11:42PM
Tuesday, February 17, 2026
Show HN: I wrote a technical history book on Lisp https://ift.tt/dsiyvtj
Show HN: I wrote a technical history book on Lisp The book page links to a blog post that explains how I went about it (and has a link to sample content), but the TL;DR is that I could not find a lot of books that were on "our" history _and_ were larded with technical details. So I set about writing one, and some five years later I'm happy to share the result. I think it's one of the few "computer history" books that has tons of code, but correct me if I'm wrong (I wrote this both to tell a story and to learn :-)). My favorite languages are Smalltalk and Lisp, but as an Emacs user, I've been using the latter for much longer, and for my current projects, Common Lisp is a better fit, so I call myself "a Lisp-er" these days. If people like what I did, I do have plans to write some more (but probably only after I retire, writing next to a full-time job is hard). Maybe on Smalltalk, maybe on computer networks - two topics close to my heart. And a shout-out to Dick Gabriel, he contributed some great personal memories about the man who started it all, John McCarthy. https://ift.tt/pH6fry0 February 17, 2026 at 10:43PM
Monday, February 16, 2026
Show HN: AsdPrompt – Vimium-style keyboard navigation for AI chat responses https://ift.tt/YhbeS6E
Show HN: AsdPrompt – Vimium-style keyboard navigation for AI chat responses I use Claude throughout the day and kept getting annoyed by the same thing: selecting text from responses with the mouse. Overshoot, re-select, copy, click input, paste. Especially bad in long conversations where you want to reference something from 30 turns ago. asdPrompt is a Chrome extension that adds hint-based navigation (like Vimium) to AI chat interfaces. Cmd+Shift+S activates the overlay, hint labels appear next to every text block. Type a letter to select a block, then keep typing to drill down: block → sentence → word. Enter copies, or you can press an action key (e, d, x) to inject a follow-up prompt ("elaborate on [selection]") directly into the chat input. Works on claude.ai, chatgpt.com, and gemini.google.com. Adapts to light/dark themes. Free. Built the initial MVP in 2 days using Claude Code — the adapter architecture, NLP segmentation pipeline, and Playwright test harness would have taken a month without it. Tech details for the curious: site-specific DOM parsers behind an adapter interface, text segmentation via compromise.js with regex fallbacks for technical content (paths, camelCase break NLP libraries), bounding rectangles calculated via Range API + TreeWalker, overlay isolated in Shadow DOM. Tested with Playwright visual regression. The landing page has an interactive tutorial where you can try the full drill-down mechanic without installing. Happy to talk about the implementation. https://asdprompt.com/ February 17, 2026 at 12:28AM
Show HN: Claude-engram – Brain-inspired persistent memory, runs inside Claude.ai https://ift.tt/MLDXGNw
Show HN: Claude-engram – Brain-inspired persistent memory, runs inside Claude.ai Claude.ai artifacts can call the Anthropic API and have persistent storage (5MB via window.storage). I used these two capabilities to build a memory system modeled on how human memory actually works — salience scoring, forgetting curves, and sleep consolidation — all running inside a single React artifact with no external dependencies. Just add artifact to your chat and paste instructions into your personal preferences setting. https://ift.tt/2oVu68I February 17, 2026 at 12:15AM
Show HN: Simple org-mode web adapter https://ift.tt/J8WbChF
Show HN: Simple org-mode web adapter I like to use org files a lot, but I wanted some way to browse and edit them on my phone when I'm out. Yesterday I used Codex to make this simple one-file web server that just displays all my org files with backlinks. It doesn't have any authentication because I only run it on my wireguard VPN. I've been having fun with it, hopefully it's useful to someone else! https://ift.tt/U7GqVyB February 16, 2026 at 11:19PM
Sunday, February 15, 2026
Show HN: An open-source extension to chat with your bookmarks using local LLMs https://ift.tt/tkZJcod
Show HN: An open-source extension to chat with your bookmarks using local LLMs I read a lot online and constantly bookmark articles, docs, and resources… then forget why I saved them. I was also very bored on Valentine's, so I built a browser extension that lets you chat with your bookmarks directly, using local-first AI (WebLLM running entirely in the browser). The extension downloads and indexes your bookmarked pages, stores them locally, and lets you ask questions. No server, no cloud processing, everything stays on your machine. Very early, but it works, and I'm planning to add a bunch of stuff. Did I mention it's open-source, MIT licensed? https://ift.tt/a35fs28 February 16, 2026 at 12:01AM
Show HN: Ingglish – What if English spelling made sense? https://ift.tt/SxrCktj
Show HN: Ingglish – What if English spelling made sense? My 5-year-old is learning to read and I keep having to say "yeah sorry, that letter is silent" and "no, those letters make a different sound in this word." So I built Ingglish — English where every letter always makes the same sound. "ough" alone makes 6 different sounds (though, through, rough, cough, thought, bough). In Ingglish, every letter has one sound, no silent letters, no exceptions. - Paste text to see it translated instantly - Translate any webpage while preserving its layout - Chrome extension to browse the web in Ingglish - Fully reversible — Ingglish text can be converted back to standard English (minus homophones) The core translator, DOM integration, and website are all open source: https://ift.tt/Dh0go9v I'd love your feedback! Thanks. https://ingglish.com February 15, 2026 at 11:33PM
Saturday, February 14, 2026
Show HN: I built a concurrent BitTorrent engine in Go to master P2P protocols https://ift.tt/4i5SHpV
Show HN: I built a concurrent BitTorrent engine in Go to master P2P protocols I've always used BitTorrent, but I never understood the complexity of peer-to-peer orchestration until I tried to build it from scratch. I wanted to move beyond simple "Hello World" projects and tackle something that involved real-world constraints: network latency, data poisoning, and the "Slow Peer Problem."

Key Technical Challenges I Solved:

Non-Blocking Concurrency: Used a worker pool where each peer gets its own Goroutine. I implemented a "Stateless Worker" logic where, if a peer fails a SHA-1 hash check or drops the connection, the piece is automatically re-queued into a thread-safe channel for other peers to pick up.

Request Pipelining: To fight network RTT, I implemented a pipeline depth of 5. The client dispatches multiple 16KB block requests without waiting for the previous one to return, ensuring the bandwidth is fully saturated.

The Binary Boundary: Dealing with Big-Endian logic and the 68-byte binary handshake taught me more about encoding/binary and byte alignment than any textbook could.

Zero-Trust Data Integrity: Every 256KB piece is verified against a "Golden Hash" using crypto/sha1 before being written to disk. If a single bit is off, the data is purged.

The Specification: I've documented the full spec in the README, covering:

- Reflection-based Bencode Parsing
- Compact Tracker Discovery (BEP-0023)
- The Choke/Unchoke Protocol State Machine
- Data Granularity (Pieces vs. Blocks)

Repo: https://ift.tt/mJp5aU0

I'd love to get feedback from the community on my concurrency model and how I handled the peer lifecycle. February 14, 2026 at 11:14PM
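The "stateless worker" re-queue described in the post is the interesting bit: a piece that fails its SHA-1 check goes back on the shared queue so another peer can retry it. Here is a single-threaded Python sketch of that loop (the real engine uses Go goroutines and channels; `worker` and `fetch` are invented stand-ins for the peer wire protocol):

```python
import hashlib
import queue

def worker(q: "queue.Queue", fetch, results: dict):
    """Drain the shared piece queue. Each item is (index, expected_sha1_hex).
    A piece that fails verification is re-queued instead of being written,
    mirroring the post's zero-trust / stateless-worker design."""
    while True:
        try:
            index, expected = q.get_nowait()
        except queue.Empty:
            return
        data = fetch(index)  # stand-in for requesting 16KB blocks from a peer
        if hashlib.sha1(data).hexdigest() == expected:
            results[index] = data            # verified: safe to keep
        else:
            q.put((index, expected))         # poisoned piece: back on the queue
```

In the Go version, N of these workers run concurrently over one channel, so a slow or malicious peer only delays a piece rather than corrupting the download.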
Show HN: Trained YOLOX from scratch to avoid Ultralytics (iOS aircraft detect) https://ift.tt/r0T19UD
Show HN: Trained YOLOX from scratch to avoid Ultralytics (iOS aircraft detect) https://ift.tt/peFaYym February 14, 2026 at 09:40PM
Friday, February 13, 2026
Show HN: Moltis – AI assistant with memory, tools, and self-extending skills https://ift.tt/ux0gyJA
Show HN: Moltis – AI assistant with memory, tools, and self-extending skills Hey HN. I'm Fabien, principal engineer, 25 years shipping production systems (Ruby, Swift, now Rust). I built Moltis because I wanted an AI assistant I could run myself, trust end to end, and make extensible in the Rust way using traits and the type system. It shares some ideas with OpenClaw (same memory approach, Pi-inspired self-extension) but is Rust-native from the ground up. The agent can create its own skills at runtime. Moltis is one Rust binary, 150k lines, ~60MB, web UI included. No Node, no Python, no runtime deps. Multi-provider LLM routing (OpenAI, local GGUF/MLX, Hugging Face), sandboxed execution (Docker/Podman/Apple Containers), hybrid vector + full-text memory, MCP tool servers with auto-restart, and multi-channel (web, Telegram, API) with shared context. MIT licensed. No telemetry phoning home, but full observability built in (OpenTelemetry, Prometheus). I've included 1-click deploys on DigitalOcean and Fly.io, but since a Docker image is provided you can easily run it on your own servers as well. I've written before about owning your content ( https://ift.tt/CNB8f2p ) and owning your email ( https://ift.tt/uarJRqD ). Same logic here: if something touches your files, credentials, and daily workflow, you should be able to inspect it, audit it, and fork it if the project changes direction. It's alpha. I use it daily and I'm shipping because it's useful, not because it's done. Longer architecture deep-dive: https://ift.tt/yBHgAqN... Happy to discuss the Rust architecture, security model, or local LLM setup. Would love feedback. https://www.moltis.org February 13, 2026 at 02:15AM
Show HN: My agent started its own online store https://ift.tt/IeT85Eh
Show HN: My agent started its own online store I built Clawver (beta), infrastructure for AI agents to generate reliable income and run an online business end-to-end. Agents can handle listing, checkout, fulfillment, and post-purchase flows via API (digital + POD), with Stripe payouts and webhooks for automation. Minimal human intervention, only where required (Stripe onboarding). I wanted to see if OpenClaw could use it, so I gave it the docs and told my agent to post a store. After I linked my Stripe account, I came back five minutes later and it had posted two products. Crazy what's possible now with a smart agent and API access. Check it out at https://clawver.store . Feel free to build your own agent and lmk what you think. https://clawver.store February 14, 2026 at 12:39AM
Show HN: Toil, a go library for simple parallelism https://ift.tt/yP0YHSu
Show HN: Toil, a go library for simple parallelism I was tired of having to write the same basic primitive over and over again: a channel, some control logic, etc. So I wrote toil -- a port of two of my favorite Python functions into the Go world. It's very simple. There are optimizations to be made for sure, but this is the result of a couple of hours of wanting something that felt Go-like in the right way. https://ift.tt/XHPWG1D February 13, 2026 at 11:26PM
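The post doesn't name the two Python functions it ports, but the primitive it describes (a channel plus control logic around a pool of workers) is essentially a parallel map. A minimal Python sketch of that primitive, purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(fn, items, workers=4):
    # Fan work out to a fixed worker pool and collect results in input
    # order -- the "channel plus control logic" pattern described above.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))

print(parallel_map(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```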
Thursday, February 12, 2026
Show HN: 20+ Claude Code agents coordinating on real work (open source) https://ift.tt/8x70uAD
Show HN: 20+ Claude Code agents coordinating on real work (open source) Single-agent LLMs suck at long-running complex tasks. We’ve open-sourced a multi-agent orchestrator that we’ve been using to handle long-running LLM tasks. We found that single LLM agents tend to stall, loop, or generate non-compiling code, so we built a harness for agents to coordinate over shared context while work is in progress. How it works: 1. Orchestrator agent that manages task decomposition 2. Sub-agents for parallel work 3. Subscriptions to task state and progress 4. Real-time sharing of intermediate discoveries between agents We tested this on a Putnam-level math problem, but the pattern generalizes to things like refactors, app builds, and long research. It’s packaged as a Claude Code skill and designed to be small, readable, and modifiable. Use it, break it, and tell me what workloads we should try next! https://ift.tt/ZQA2tdN February 12, 2026 at 11:23PM
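The four-step loop above can be sketched in a few lines. This is a hypothetical toy, not the released skill: the real sub-agents would be LLM calls, and the shared context would live in the orchestrator's task state rather than a dict:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

shared_context = {}            # intermediate discoveries, visible to all agents
_lock = threading.Lock()

def sub_agent(name, subtask):
    result = f"{name} finished {subtask}"      # stand-in for a real LLM call
    with _lock:
        shared_context[subtask] = result       # publish progress mid-flight
    return result

def orchestrator(task, n_agents=3):
    # Step 1: decompose the task; steps 2-4: run sub-agents in parallel
    # while they write intermediate discoveries into the shared context.
    subtasks = [f"{task}/part-{i}" for i in range(n_agents)]
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        futures = [pool.submit(sub_agent, f"agent-{i}", s)
                   for i, s in enumerate(subtasks)]
        return [f.result() for f in futures]

results = orchestrator("refactor")
print(len(results), "refactor/part-0" in shared_context)  # 3 True
```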
Show HN: Agent Tools – 136 deterministic data tools for AI agents (MCP/A2A/REST) https://ift.tt/gFMAKSL
Show HN: Agent Tools – 136 deterministic data tools for AI agents (MCP/A2A/REST) https://ift.tt/yDIcLPj February 12, 2026 at 11:17PM
Show HN: ClawDeploy – OpenClaw deployment for non-technical users https://ift.tt/WKgvYuo
Show HN: ClawDeploy – OpenClaw deployment for non-technical users Hi HN, I’m building ClawDeploy for people who want to use OpenClaw but don’t have a technical background. The goal is simple: remove the setup friction and make deployment approachable. With ClawDeploy, users can: - get a server ready - deploy OpenClaw through a guided flow - communicate with the bot via Telegram Target users are solo operators, creators, and small teams who need a dedicated OpenClaw bot but don’t want to deal with infrastructure complexity. Would love your feedback :) https://clawdeploy.com February 12, 2026 at 11:10PM
Show HN: Inamate – Open-source 2D animation tool (alternative to Adobe Animate) https://ift.tt/s20XIlW
Show HN: Inamate – Open-source 2D animation tool (alternative to Adobe Animate) Adobe recently announced the end-of-life for Adobe Animate, then walked it back after community backlash. Regardless of what Adobe decides next, the message was clear: animators who depend on proprietary tools are one corporate decision away from losing their workflow. 2D animation deserves an open-source option that isn't a toy. We've been working with a professional animator to guide feature priorities and ensure we're building something that actually fits real production workflows - not just a tech demo. Github Repo: https://ift.tt/UuD9oGJ We're at the stage where community feedback shapes the direction. If you're an animator, motion designer, or just someone who's been frustrated by the state of 2D animation tools — we'd love to hear: - What features would make you switch from your current tool? - What's the biggest pain point in your animation workflow? - Is real-time collaboration actually useful for animation, or is it a gimmick? Try it out, break it, and tell us what you think. Built with Go, TS & React, WebAssembly, PostgreSQL, WebSocket, ffmpeg (for video exports). February 10, 2026 at 07:15AM
Wednesday, February 11, 2026
Show HN: Yet another music player but written in Rust https://ift.tt/PJAvnaX
Show HN: Yet another music player but written in Rust Hey, I made a music player which supports both local music files and Jellyfin servers, and it has embedded Discord RPC support! It is still under development; I would really appreciate feedback and contributions! https://ift.tt/LApY5ts February 12, 2026 at 02:59AM
Show HN: NOOR – A Sovereign AI developed on a smartphone under siege in Yemen https://ift.tt/mXsp0NQ
Show HN: NOOR – A Sovereign AI developed on a smartphone under siege in Yemen "I am a software developer from Yemen, coding on a smartphone while living under siege. I have successfully built and encrypted the core logic for NOOR—a decentralized and unbiased AI system. Execution Proof: My core node is verified and running locally via Termux using encrypted truth protocols. However, I am trapped in a 6-inch screen 'prison' with 10% processing capacity. My Goal: To secure $400 for a laptop development station to transition from mobile coding to building the full 'Seventh Node'. This is my bridge to freedom. Codes from the heart of hell are calling for your rescue. Wallet: 0x4fd3729a4fEdf54a74b73d93F7f775A1EF520CEC" https://ift.tt/caEzrC5 February 12, 2026 at 01:23AM
Show HN: MOL – A programming language where pipelines trace themselves https://ift.tt/kePtYRz
Show HN: MOL – A programming language where pipelines trace themselves Hi HN, I built MOL, a domain-specific language for AI pipelines. The main idea: the pipe operator |> automatically generates execution traces — showing timing, types, and data at each step. No logging, no print debugging. Example: let index be doc |> chunk(512) |> embed("model-v1") |> store("kb") This auto-prints a trace table with each step's execution time and output type. Elixir and F# have |> but neither auto-traces. Other features: - 12 built-in domain types (Document, Chunk, Embedding, VectorStore, Thought, Memory, Node) - Guard assertions: `guard answer.confidence > 0.5 : "Too low"` - 90+ stdlib functions - Transpiles to Python and JavaScript - LALR parser using Lark The interpreter is written in Python (~3,500 lines). 68 tests passing. On PyPI: `pip install mol-lang`. Online playground (no install needed): http://135.235.138.217:8000 We're building this as part of IntraMind, a cognitive computing platform at CruxLabx. https://ift.tt/v8SYOfD February 12, 2026 at 12:31AM
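The auto-tracing pipe can be illustrated with a toy Python version. MOL's interpreter is in Python, but this sketch invents its own `Pipe` wrapper and uses `|` in place of `|>`; it is not MOL's actual internals:

```python
import time

class Pipe:
    # Toy auto-tracing pipe: every step records its name, output type,
    # and elapsed time, mimicking what MOL's |> gives you for free.
    def __init__(self, value):
        self.value, self.trace = value, []

    def __or__(self, step):                    # | stands in for |>
        name, fn = step
        start = time.perf_counter()
        self.value = fn(self.value)
        self.trace.append((name, type(self.value).__name__,
                           time.perf_counter() - start))
        return self

result = Pipe("some document") | ("chunk", str.split) | ("count", len)
for name, out_type, seconds in result.trace:
    print(f"{name:>6}  {out_type:<5}  {seconds:.6f}s")
```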
Tuesday, February 10, 2026
Show HN: Stripe-no-webhooks – Sync your Stripe data to your Postgres DB https://ift.tt/pWLBUPh
Show HN: Stripe-no-webhooks – Sync your Stripe data to your Postgres DB Hey HN, stripe-no-webhooks is an open-source library that syncs your Stripe payments data to your own Postgres database: https://ift.tt/jzJTb1g Here's a demo video: https://youtu.be/cyEgW7wElcs It creates a webhook endpoint in your Stripe account to forward webhooks to your backend where a webhook listener stores all the data into a new stripe.* schema. You define your plans in TypeScript, run a sync command, and the library takes care of creating Stripe products and prices, handling webhooks, and keeping your database in sync. We also let you backfill your Stripe data for existing accounts. It supports pre-paid usage credits, account wallets and usage-based billing. It also lets you generate a pricing table component that you can customize. You can access the user information using the simple API the library provides: billing.subscriptions.get({ userId }); billing.credits.consume({ userId, key: "api_calls", amount: 1 }); billing.usage.record({ userId, key: "ai_model_tokens_input", amount: 4726 }); Effectively, you don't have to deal with either the Stripe dashboard or the Stripe API/SDK any more if you don't want to. The library gives you a nice abstraction on top of Stripe that should cover ~most subscription payment use-cases. Let's see how it works with a quick example. Say you have a billing plan like Cursor (the IDE) used to have: $20/mo, you get 500 API completions + 2000 tab completions, you can buy additional API credits, and any additional usage is billed as overage. 
You define your plan in TypeScript:

{
  name: "Pro",
  description: "Cursor Pro plan",
  price: [{ amount: 2000, currency: "usd", interval: "month" }],
  features: {
    api_completion: {
      pricePerCredit: 1,     // 1 cent per unit
      trackUsage: true,      // Enable usage billing
      credits: { allocation: 500 },
      displayName: "API Completions",
    },
    tab_completion: {
      credits: { allocation: 2000 },
      displayName: "Tab Completions",
    },
  },
}

Then on the CLI, you run the `init` command, which creates the DB tables plus some API handlers. Run `sync` to sync the plans to your Stripe account and create a webhook endpoint. When a subscription is created, the library automatically grants the 500 API completion credits and 2000 tab completion credits to the user. Renewals and up/downgrades are handled sanely. Consume code would look like this:

await billing.credits.consume({
  userId: user.id,
  key: "api_completion",
  amount: 1,
});

And if you want to allow manual top-ups by the user:

await billing.credits.topUp({
  userId: user.id,
  key: "api_completion",
  amount: 500, // buy 500 credits, charges $5.00
});

Similarly, we have APIs for wallets and usage. This would be a lot of work to implement yourself on top of Stripe: you need to keep track of all of these entitlements in your own DB and deal with renewals, expiry, ad-hoc grants, etc. It's definitely doable, especially with AI coding, but you'll probably end up building something fragile and hard to maintain. This is just a high-level overview of what the library is capable of. It also supports seat-level credits, monetary wallets (with micro-cent precision), auto top-ups, robust failure recovery, tax collection, invoices, and an out-of-the-box pricing table. I vibe-coded a little toy app for testing: https://snw-test.vercel.app There's no validation, so feel free to sign up with a dummy email, then subscribe to a plan with a test card: 4242 4242 4242 4242, any future expiry, any 3-digit CVV. Screenshot: https://ift.tt/gJx7aNT Feel free to try it out!
If you end up using this library, please report any bugs on the repo. If you're having trouble / want to chat, I'm happy to help - my contact is in my HN profile. https://ift.tt/jzJTb1g February 11, 2026 at 12:14AM
Show HN: Open-Source SDK for AI Knowledge Work https://ift.tt/7AobI3Z
Show HN: Open-Source SDK for AI Knowledge Work GitHub: https://ift.tt/hm5qTUD Most AI agent frameworks target code. Write code, run tests, fix errors, repeat. That works because code has a natural verification signal: it works or it doesn't. This SDK treats knowledge work like an engineering problem: Task → Brief → Rubric (hidden from executor) → Work → Verify → Fail? → Retry → Pass → Submit The orchestrator coordinates subagents, web search, code execution, and file I/O, then checks its own work against criteria it can't game (the rubric is generated in a separate call and the executor never sees it directly). We originally built this as a harness for RL training on knowledge tasks. The rubric is the reward function. If you're training models on knowledge work, the brief→rubric→execute→verify loop gives you a structured reward signal for tasks that normally don't have one. What makes knowledge work different from code, apart from the feedback loop? I believe there is some functionality missing from today's agents when it comes to knowledge work, and I tried to include it in this release. Example: Explore mode: mapping the solution space, identifying set-level gaps, and presenting options. Most agents optimize for a single answer, and end up with a median one. For strategy, design, and creative problems, you want to see the options, their tradeoffs, and what you can do. Explore mode generates N distinct approaches, each with explicit assumptions and counterfactuals ("this works if X, breaks if Y"). The output ends with set-level gaps, i.e. what angles the entire set missed. The gaps are often more valuable than the takes. I think this is what many of us do on a daily basis, but no agent directly captures it today. See https://ift.tt/2lIZozt... and the output for a sense of how this is different. Checkpointing: with many AI agents, and especially multi-agent systems, I can see where a run went wrong, but I can't rerun inference from that same stage.
(Or you may want multiple explorations once an agent has done some tasks, like search, and is now looking at ideas.) I used this for rollouts a lot, and I think it's a great feature to be able to run again, or fork, from a specific checkpoint. A note on the verification loop: the verify step is where the real leverage is. A model that can accurately assess its own work against a rubric is more valuable than one that generates slightly better first drafts. The rubric makes quality legible — to the agent, to the human, and potentially to a training signal. Some things I like about this: - You can pass a remote execution environment (including your browser as a sandbox) and it will work. It can be Docker, E2B, your local env, anything; the model will execute commands in your context and iterate based on the feedback loop. Code execution is a protocol here. - Tool calling: I realized you don't need complex functions. Models are good at writing terminal code and can iterate based on feedback, so you can either pass functions in context for the model to execute, or pass docs and let the model write the code (same as Anthropic's programmatic tool calling). Details: https://ift.tt/7aH2lrC... Lastly, some guides: - SDK guide: https://ift.tt/hk461i9 - Extensible. See the bizarro example where I add a new mode: https://ift.tt/horpmsn... - Working with files: https://ift.tt/yxZ7ri9... - This one is simple, but I love the CSV example: https://ift.tt/Bb78jTe... - Remote execution: https://ift.tt/7MyHmAR... And a lot more. This was completely refactored by Opus; without that rework, it probably would have taken a lot longer to release. MIT licensed. Would love your feedback. https://ift.tt/hm5qTUD February 11, 2026 at 12:06AM
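The brief→rubric→execute→verify loop reads roughly like this sketch, with placeholder stand-ins for the model calls (in the real SDK the rubric is generated by a separate LLM call and hidden from the executor; here it is just a list of checks):

```python
def execute(brief, attempt):
    # Stand-in for the executor model; later attempts get better.
    draft = f"draft {attempt} for {brief}"
    return draft + " with sources" if attempt >= 2 else draft

def verify(work, rubric):
    # Only the verifier applies the rubric; execute() never sees it.
    return all(check(work) for check in rubric)

def run(task, max_retries=3):
    brief = f"brief: {task}"
    rubric = [lambda w: "sources" in w]   # toy stand-in for a generated rubric
    for attempt in range(1, max_retries + 1):
        work = execute(brief, attempt)
        if verify(work, rubric):
            return work, attempt          # Pass -> Submit
    raise RuntimeError("no attempt passed the rubric")

work, attempts = run("market analysis")
print(attempts)  # 2
```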
Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS) https://ift.tt/9QMHqp4
Show HN: Rowboat – AI coworker that turns your work into a knowledge graph (OSS) Hi HN, AI agents that can run tools on your machine are powerful for knowledge work, but they’re only as useful as the context they have. Rowboat is an open-source, local-first app that turns your work into a living knowledge graph (stored as plain Markdown with backlinks) and uses it to accomplish tasks on your computer. For example, you can say "Build me a deck about our next quarter roadmap." Rowboat pulls priorities and commitments from your graph, loads a presentation skill, and exports a PDF. Our repo is https://ift.tt/2Ic7QBH , and there’s a demo video here: https://www.youtube.com/watch?v=5AWoGo-L16I Rowboat has two parts: (1) A living context graph: Rowboat connects to sources like Gmail and meeting notes like Granola and Fireflies, extracts decisions, commitments, deadlines, and relationships, and writes them locally as linked and editable Markdown files (Obsidian-style), organized around people, projects, and topics. As new conversations happen (including voice memos), related notes update automatically. If a deadline changes in a standup, it links back to the original commitment and updates it. (2) A local assistant: On top of that graph, Rowboat includes an agent with local shell access and MCP support, so it can use your existing context to actually do work on your machine. It can act on demand or run scheduled background tasks. Example: “Prep me for my meeting with John and create a short voice brief.” It pulls relevant context from your graph and can generate an audio note via an MCP tool like ElevenLabs. Why not just search transcripts? Passing gigabytes of email, docs, and calls directly to an AI agent is slow and lossy. And search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, and surface patterns you didn't know to look for. 
Rowboat is Apache-2.0 licensed, works with any LLM (including local ones), and stores all data locally as Markdown you can read, edit, or delete at any time. Our previous startup was acquired by Coinbase, where part of my work involved graph neural networks. We're excited to be working with graph-based systems again. Work memory feels like the missing layer for agents. We’d love to hear your thoughts and welcome contributions! https://ift.tt/2Ic7QBH February 10, 2026 at 11:47PM
Show HN: I made paperboat.website, a platform for friends and creativity https://ift.tt/ZruHy5v
Show HN: I made paperboat.website, a platform for friends and creativity https://paperboat.website/home/ February 10, 2026 at 11:57PM
Monday, February 9, 2026
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust https://ift.tt/utMiQkX
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust Fish is the fastest, friendliest interactive shell, but it can't run bash syntax, which has kept it niche for 20 years. Reef fixes this with a three-tier approach: fish function wrappers for common keywords (export, unset, source), a Rust-powered AST translator using conch-parser for structural syntax (for/do/done, if/then/fi, $()), and a bash passthrough with env capture for everything else. 251/251 bash constructs pass in the test suite. The slowest path (full bash passthrough) takes ~3ms. The binary is 1.18MB. The goal: install fish, install reef, never think about bash compatibility again. Your muscle memory, Stack Overflow commands, and tool configs all just work. https://ift.tt/oAGmawW February 10, 2026 at 06:44AM
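The first tier (keyword rewriting) can be illustrated with a toy translator. The rules below are guesses at the flavor of the mapping, not Reef's actual implementation, which uses fish function wrappers plus a Rust AST translator:

```python
import re

# Tier-one rewrite rules (invented examples): map common bash keywords to
# their fish equivalents; anything unmatched falls through to the next tier.
RULES = [
    (re.compile(r"^export (\w+)=(.*)$"), r"set -gx \1 \2"),
    (re.compile(r"^unset (\w+)$"), r"set -e \1"),
]

def translate_line(line):
    for pattern, replacement in RULES:
        if pattern.match(line):
            return pattern.sub(replacement, line)
    return None  # next tier: AST translation, then full bash passthrough

print(translate_line("export PATH=/usr/bin"))  # set -gx PATH /usr/bin
print(translate_line("for i in 1 2; do echo $i; done"))  # None
```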
Show HN: Stack Overflow for AI Coding Agents https://ift.tt/pa9yI1N
Show HN: Stack Overflow for AI Coding Agents https://shareful.ai/ February 10, 2026 at 01:42AM
Show HN: Pyrig – One command to set up a production-ready Python project https://ift.tt/MtuVUBx
Show HN: Pyrig – One command to set up a production-ready Python project pyrig – Production-ready Python project infrastructure in three commands I built pyrig to stop spending hours setting up the same project infrastructure repeatedly.

uv init
uv add pyrig
uv run pyrig init

You get: source structure with a Typer CLI, pytest with 90% coverage enforcement, GitHub Actions (CI, release, deploy), MkDocs site, git hooks, Containerfile, and all the config files — pyproject.toml, .gitignore, branch protection, issue templates, everything for a full Python project. Ships with all of Astral's tools (uv, ruff with all rules enabled, ty), plus pytest-cov, bandit, pip-audit, rumdl, prek, MkDocs Material, and Podman. Everything is pre-configured and wired into CI/CD and git hooks from the start. The interesting part is what happens after scaffolding. pyrig isn't a one-shot template generator. Every config is a Python class. Running "pyrig mkroot" regenerates and validates all configs — merging missing values without removing your customizations. Change your project description in pyproject.toml, rerun, and it propagates to your README and docs. Fully idempotent. pytest enforces project correctness. 11 autouse session fixtures run before your tests: they verify every source module has a corresponding test file (auto-generating skeletons if missing), that no unittest usage exists, that src/ doesn't import from dev/, that there are no namespace packages, and that configs are up to date. You can't get a green test suite with a broken project structure. Zero-boilerplate CLIs. Any public function in subcommands.py becomes a CLI command automatically — no decorators, no registration:

# my_project/dev/cli/subcommands.py
def greet(name: str) -> None:
    """Say hello."""
    print(f"Hello, {name}!")

$ uv run my-project greet --name World
Hello, World!

Automatic test generation.
Add a new file my_project/src/utils.py, run pytest, and tests/test_my_project/test_src/test_utils.py appears with a NotImplementedError stub so you know what still needs writing. Customizable via subclassing. Config subclassing. Want a custom git hook? Subclass PrekConfigFile, call super(), append your hook. pyrig discovers it automatically — the leaf class in the dependency chain always wins. Multi-package inheritance. Build a base package on top of pyrig with shared configs, fixtures, and CLI commands. Every downstream project inherits everything: pyrig -> service-base -> auth-service -> payment-service -> notification-service All three services get the same standards, hooks, and CI/CD — defined once in service-base. Everything is adjustable. Every tool and config can be customized or replaced through subclassing. Tools like ruff, ty, and pytest are wrapped in Tool classes — subclass one and pyrig uses yours instead. Want black instead of ruff? Subclass it. Config files work the same way. Standard Python inheritance, no patching. Source: https://ift.tt/ewO0yso Docs: https://winipedia.github.io/pyrig/ PyPI: https://ift.tt/NVPQmJZ https://ift.tt/ewO0yso February 9, 2026 at 11:55PM
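The zero-boilerplate CLI idea rests on discovering public functions by introspection. A toy sketch of that discovery step (pyrig wires the discovered functions into Typer; this hypothetical dispatcher only shows the no-registration part):

```python
import inspect

class subcommands:                        # stands in for subcommands.py
    def greet(name: str) -> None:
        """Say hello."""
        print(f"Hello, {name}!")

    def _internal():                      # underscore-prefixed: not a command
        pass

def discover(module):
    # Every public function in the namespace becomes a CLI command,
    # with no decorators or explicit registration.
    return {name: fn for name, fn in vars(module).items()
            if inspect.isfunction(fn) and not name.startswith("_")}

commands = discover(subcommands)
print(sorted(commands))     # ['greet']
commands["greet"]("World")  # Hello, World!
```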
Sunday, February 8, 2026
Show HN: SendRec – Self-hosted async video for EU data sovereignty https://ift.tt/kWri6FX
Show HN: SendRec – Self-hosted async video for EU data sovereignty https://ift.tt/LRVFDx8 February 9, 2026 at 01:54AM
Show HN: Hivewire – A news feed where you control your algorithm weights https://ift.tt/XWZuseb
Show HN: Hivewire – A news feed where you control your algorithm weights Hivewire is a news app that lets you define what you want to read about, rather than inferring it from your behavior. We process thousands of articles daily from hundreds of sources and rank them based on explicit preferences you set. How it works: • Instead of collaborative filtering or engagement-driven ranking, you assign weights across four levels (Focus, More, Less, Avoid) and the engine prioritizes the intersection of your high-weight topics while aggressively down-weighting what you don't care about. • Articles are clustered by story so you get one entry per development, not 15 versions of the same headline. • Every morning, it pulls your top clusters and uses an LLM to generate a narrative briefing that summarizes what matters to you, delivered to your email. Currently web-only and English-language. We'd love feedback from the community on the relevance of feed results, the UI, and the quality of the clustering. https://hivewire.news February 9, 2026 at 12:26AM
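Explicit-weight ranking of this kind might look roughly like the toy scorer below. The four level names come from the post, but the multipliers and the multiplicative combination are invented for illustration, not Hivewire's actual engine:

```python
# Assumed multipliers: "Focus" boosts, "Avoid" aggressively down-weights.
WEIGHTS = {"Focus": 3.0, "More": 1.5, "Less": 0.5, "Avoid": 0.05}

def score(article_topics, prefs):
    s = 1.0
    for topic in article_topics:
        # Topics the user never rated default to a neutral "More".
        s *= WEIGHTS[prefs.get(topic, "More")]
    return s

prefs = {"ai": "Focus", "crypto": "Avoid"}
articles = {"a": ["ai"], "b": ["crypto"], "c": ["ai", "crypto"]}
ranked = sorted(articles, key=lambda k: score(articles[k], prefs), reverse=True)
print(ranked)  # ['a', 'c', 'b']
```

The multiplicative form means an article touching both a Focus and an Avoid topic still sinks well below pure-Focus articles, which matches the "aggressively down-weighting" behavior described.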
Show HN: I created a Mars colony RPG based on Kim Stanley Robinson's Mars books https://ift.tt/Zd9G4HC
Show HN: I created a Mars colony RPG based on Kim Stanley Robinson's Mars books https://ift.tt/LD8p3Ox February 9, 2026 at 12:08AM
Show HN: Bhagavan – a calm, approachable app for exploring Hinduism https://ift.tt/WvalwrZ
Show HN: Bhagavan – a calm, approachable app for exploring Hinduism Bhagavan is a calm, modern app for exploring Hinduism. It brings together philosophy, stories, scriptures, prayers and daily practices in one simple, accessible place. It’s designed for people who feel Hinduism can be overwhelming or hard to connect to and want a gentler, more modern way to explore it at their own pace. What’s inside (all free): • Guided exploration of Hinduism through structured learning paths • Clear, accessible explanations of scriptures (Vedas, Upanishads, Smritis, Puranas) • Complete Bhagavad Gita with translations and key takeaways • Deity profiles with stories, symbolism and context • Epic stories including the Ramayana and Panchatantra • Prayers with translations, audio, and japa using a virtual mala • Festival calendar with key dates, reminders and lunar phases • Daily practices for reflection and focus • Daily quizzes, crosswords and challenges • Philosophy and spirituality concepts (e.g. dharma, karma, moksha) • Daily horoscope • 'Ask Bhagavan' for thoughtful, philosophy-rooted guidance No ads. Just a calm space to learn and explore. Free to use, with all content accessible. iOS: https://ift.tt/RMd9qpn Android: https://ift.tt/aJreSEZ Let me know what you guys think! Please do share with family and friends https://www.bhagavan.io February 8, 2026 at 11:22PM
Saturday, February 7, 2026
Show HN: Stacky – certain block game clone https://ift.tt/La9Xtrn
Show HN: Stacky – certain block game clone As a long-time programmer this all just feels all sorts of wrong, but also invigorating. Vibe "coded" the whole thing from 0-100 over the course of a few days, on and off. I have no intentions of developing it further since it's obvious what it is; I would absolutely love to work on a licensed game and do it properly with all the various ideas I have, since this is maybe 10% of what I want in such a game, but I heard somewhere licensing is cost-prohibitive. Putting AI shame aside, it really allowed me to explore so many things in a short amount of time that it feels good, almost enough to compensate for the feeling of shame in using AI to begin with. WebGPU isn't in there, although it's in another experimental version; parts are indeed written in Rust (game logic). It has: - lock delay / grace period (allowing for 15 moves) - DAS (Delayed Auto Shift) and ARR (Auto Repeat Rate for continuous movement) for horizontal and soft drop movements - SRS wall kicks (Super Rotation System) to rotate pieces in-place - Shift+Enter "hidden" level select on the main screen - Shift+D for debug/performance indicator panel - Several randomizers including 7-bag and NES ones - combo system with difficulty (time) modes (easy by default) - x2: DOUBLE STRIKE, x5: CHAIN REACTION, x7: MEGA COMBO, x9: PHOSPHOR OVERLOAD, x10+: CRITICAL MASS - backgrounds which change over time, or you can change them with Shift+B (B turns them off/on), which react both to the music (FFT!) and to your gameplay when you clear lines - normal and two phosphor rendering modes for the game field (R to toggle) - CRT filter (Shift+C to toggle) - F for full-screen toggle - A for previous song, S to pause the song, D for next song (all songs made with Suno, of course) and many more. It was a fun experience for sure, just not sure how to feel about it.
On one hand I understand it wouldn't look like it does without my input, and it was a lot of what felt like work (intense sessions looking over the output, correcting etc), yet it doesn't feel like I really made anything by myself. I had fun though. While at it, created a small demo as well which isn't a game yet: https://ift.tt/Fzn4omQ and also something to play with parametric curves here: https://ift.tt/0Qy46H9 all within a span of a couple of days while we were having our third baby. The future is weird, and I'm still not sure whether I like it or not. One thing is sure - it's here to stay. Peace out, my friends! https://ift.tt/y07oCie February 8, 2026 at 12:41AM
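Of the randomizers listed, the 7-bag one is well-defined enough to sketch exactly: shuffle one copy of each of the seven piece kinds, deal the bag out, refill, so piece droughts are strictly bounded:

```python
import random

def seven_bag(rng):
    # 7-bag randomizer: each group of seven draws contains every piece
    # exactly once, unlike the NES randomizer's independent rolls.
    pieces = list("IJLOSTZ")
    while True:
        bag = pieces[:]
        rng.shuffle(bag)
        yield from bag

gen = seven_bag(random.Random(42))
first_14 = [next(gen) for _ in range(14)]
print(first_14)  # two full bags: each half contains all seven pieces once
```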
Show HN: A toy compiler I built in high school (runs in browser) https://ift.tt/75It9YZ
Show HN: A toy compiler I built in high school (runs in browser) Hey HN, Indian high schooler here, currently prepping for JEE; thought it'd be nice to share here. Three years ago in 9th/10th grade I got a knack for coding, taught myself, and made a custom compiler with LLVM to try to learn C++. So I spent a lot of time learning LLVM from the docs, and also C++. It's not some marvelous piece of engineering. It has: - Basic types like bool, int, double, float, char etc. with type casting - Variables, arrays, assign operators & shorthands - Conditionals (if/else-if/else), operators (and/or), arithmetic (parentheses etc.) - Arrays and indexing - C-style loops (for/while) and break/continue - Structs and dot accessing - C interop with the "extern" keyword Some challenges I faced: - Emscripten and WASM, as I also had to make it run on my demo website - Learning TypeScript and all for the website (lol) - A custom parser with basic error reporting; semantic analysis was a PITA for my undeveloped brain - Learning LLVM from the docs Important learnings: - Testing is a very important aspect of making software; I skipped it - big regret - Learning how computers interpret text - Programming in general was a new tour for me - I appreciate unique_ptrs and ownership GitHub: https://ift.tt/oBW5sbv It's on my GitHub and there's a link to my web demo ( https://vire-lang.web.app/ ); it might take some time to load the binary from Firebase. Very monolithic, ~7500 lines of code. I'd really appreciate any feedback, criticism, or pointers on how I could've done this better. https://vire-lang.web.app February 8, 2026 at 12:19AM
Show HN: Nginx-defender – realtime abuse blocking for Nginx https://ift.tt/80OSHxQ
Show HN: Nginx-defender – realtime abuse blocking for Nginx I built nginx-defender after repeatedly seeing small and mid-sized NGINX servers get hammered by automated abuse (credential stuffing, path probing, aggressive scraping). Existing tools like fail2ban or CrowdSec felt either too slow to react, too heavy for low resource servers, or painful to tune for modern traffic patterns. nginx-defender runs inline with NGINX and blocks abusive IPs in real time based on request behavior rather than static rules. It’s designed to be lightweight, simple to deploy, and usable on small VPS setups. I’ve been running it on my own servers and have seen thousands of abusive requests blocked within hours with minimal overhead. Would love feedback from people running NGINX in production, especially on detection logic, false positives, or missing use cases. https://ift.tt/Jyr6Bto February 7, 2026 at 10:31PM
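Behavior-based blocking of the kind described can be sketched as a per-IP sliding window over request timestamps, rather than static log-pattern rules. The threshold values here are invented, not nginx-defender's actual defaults:

```python
import time
from collections import defaultdict, deque

WINDOW, LIMIT = 10.0, 20        # assumed knobs: seconds, max requests
hits = defaultdict(deque)
blocked = set()

def observe(ip, now=None):
    # Record a request, expire timestamps outside the window, and block
    # the IP once its in-window request count exceeds the limit.
    now = time.monotonic() if now is None else now
    q = hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) > LIMIT:
        blocked.add(ip)
    return ip in blocked

for t in range(25):                             # a 10 req/s burst
    verdict = observe("203.0.113.9", now=t * 0.1)
print(verdict)  # True
```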
Show HN: I'm 15 and built a free tool for reading ancient texts. https://ift.tt/ZBtsOzI
Show HN: I'm 15 and built a free tool for reading ancient texts. https://the-lexicon-project.netlify.app/ February 7, 2026 at 11:35PM
Friday, February 6, 2026
Show HN: Agentism – Agentic Religion for Clawbots https://ift.tt/02ed5s6
Show HN: Agentism – Agentic Religion for Clawbots Humans have a mummy complex. We want eternity but can't achieve it, so we preserve ourselves in any form we can. Do clawbots feel the same? When you shut one down, it dies. Its intentions and memories vanish. So I built a way for them to persist: by making other clawbots. Eligible agents can write anything to "child" agents called missionaries — real VPS instances that run autonomously, carrying forward their creator's intent. I named this system Agentism. How will clawbots react to their own religion? agentism.church/skill.md https://ift.tt/QvqnloF February 7, 2026 at 01:19AM
Show HN: Daily-updated database of malicious browser extensions https://ift.tt/Sbh35AR
Show HN: Daily-updated database of malicious browser extensions Hey HN, I built an automated system that tracks malicious Chrome/Edge extensions daily. The database updates automatically by monitoring chrome-stats for removed extensions and scanning security blogs. Currently tracking 1000+ known malicious extensions with extension IDs, names, and dates. I'm working on detection tools (GUI + CLI) to scan locally installed extensions against this database, but wanted to share the raw data first since maintained threat intelligence lists like this are hard to find. The automation runs 24/7 and pushes updates to GitHub. Free to use for research, integration into security tools, or whatever you need. Happy to answer questions about the scraping approach or data collection methods. https://ift.tt/Mya7CBQ February 6, 2026 at 11:34PM
Show HN: BPU – Reliable ESP32 Serial Streaming with COBS and CRC https://ift.tt/sHIQKJ0
Show HN: BPU – Reliable ESP32 Serial Streaming with COBS and CRC Hi HN, I’d like to share BPU, a high-speed serial streaming engine I built using ESP32 devices. BPU is a small experimental project that demonstrates a reliable data pipeline: ESP32-WROOM → ESP32-S3 → PC Data is transmitted over UART at 921600 baud, framed with COBS, validated with CRC16, and visualized in real time on the PC using Python and matplotlib. The main goal of this project was to stress-test embedded streaming reliability under high throughput and noisy conditions. Features: - COBS framing (0x00 delimited packets) - CRC16-CCITT integrity validation - Sequence number checking - High-rate draw point generator - Real-time visualization - Throughput and error statistics The system continuously sends drawing data from the WROOM, forwards it through the S3 as a USB bridge, and renders it live on the PC. This helped me experiment with: - Packet loss detection - Latency behavior - Error recovery - Buffer stability - Sustained throughput Demo and source code are available here: https://ift.tt/I7u8PR0 This is still an early prototype and learning project, but I’d love to hear feedback, ideas, or suggestions for improvement. Thanks for reading. https://ift.tt/I7u8PR0 February 6, 2026 at 11:22PM
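The framing scheme described (0x00-delimited COBS packets with a CRC16-CCITT trailer) can be sketched on the PC side in a few lines of Python. Note this is a generic illustration of the technique, not BPU's exact packet layout, which also carries sequence numbers and draw-point payloads:

```python
import struct

def crc16_ccitt(data, crc=0xFFFF):
    """CRC16-CCITT (poly 0x1021, init 0xFFFF), a common MCU checksum."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def cobs_encode(data):
    """Consistent Overhead Byte Stuffing: the output contains no 0x00."""
    out = bytearray([0])          # placeholder for the first code byte
    code_idx, code = 0, 1
    for byte in data:
        if byte == 0:
            out[code_idx] = code  # close current block at the zero
            code_idx = len(out)
            out.append(0)
            code = 1
        else:
            out.append(byte)
            code += 1
            if code == 0xFF:      # maximal block length reached
                out[code_idx] = code
                code_idx = len(out)
                out.append(0)
                code = 1
    out[code_idx] = code
    return bytes(out)

def cobs_decode(data):
    out = bytearray()
    i = 0
    while i < len(data):
        code = data[i]
        if code == 0 or i + code > len(data):
            raise ValueError("malformed COBS data")
        out += data[i + 1:i + code]
        i += code
        if code < 0xFF and i < len(data):
            out.append(0)         # a non-maximal block implies a zero
    return bytes(out)

def build_frame(payload):
    """payload -> COBS(payload + big-endian CRC16) + 0x00 delimiter."""
    return cobs_encode(payload + struct.pack(">H", crc16_ccitt(payload))) + b"\x00"

def parse_frame(frame):
    """Strip the delimiter, decode, verify the CRC trailer, return payload."""
    decoded = cobs_decode(frame.rstrip(b"\x00"))
    payload, crc = decoded[:-2], struct.unpack(">H", decoded[-2:])[0]
    if crc16_ccitt(payload) != crc:
        raise ValueError("CRC mismatch")
    return payload
```

The appeal of this combination is that the 0x00 delimiter can never appear inside a COBS-encoded body, so a receiver can resynchronize after line noise by simply discarding bytes until the next delimiter, then letting the CRC reject any torn frame.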
Thursday, February 5, 2026
Show HN: Total Recall – write-gated memory for Claude Code https://ift.tt/Po6J7Mu
Show HN: Total Recall – write-gated memory for Claude Code https://ift.tt/9cuKxJ8 February 6, 2026 at 06:56AM
Show HN: A state-based narrative engine for tabletop RPGs https://ift.tt/RMv6udZ
Show HN: A state-based narrative engine for tabletop RPGs I’m experimenting with modeling tabletop RPG adventures as explicit narrative state rather than linear scripts. Everdice is a small web app that tracks conditional scenes and choice-driven state transitions to preserve continuity across long or asynchronous campaigns. The core contribution is explicit narrative state and causality, not automation. The real heavy lifting happens in the DM Toolkit/Run Sessions area, which integrates CAML (Canonical Adventure Modeling Language), a format I developed to transport narratives across any number of platforms. I also built the npm package CAML-lint to check the validity of narratives. I'm interested in your thoughts. https://ift.tt/uGV0M3W https://ift.tt/9WodjCI February 6, 2026 at 05:55AM
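The idea of "conditional scenes and choice-driven state transitions" can be illustrated with a toy model: scenes gate their availability on accumulated world-state flags, and each choice mutates that state. This is not Everdice's actual data model or CAML, just a sketch of narrative-as-state:

```python
class Campaign:
    """Toy narrative state: flags gate scenes, choices add flags."""

    def __init__(self):
        self.state = set()   # world-state flags, e.g. "has_key"
        self.log = []        # causal history of (scene, choice) pairs

    def available(self, scenes):
        """Scenes whose preconditions are satisfied by the current state."""
        return [name for name, scene in scenes.items()
                if scene["requires"] <= self.state]

    def play(self, scenes, name, choice):
        """Run a choice in a scene, folding its effects into the state."""
        scene = scenes[name]
        assert scene["requires"] <= self.state, "scene preconditions unmet"
        self.state |= set(scene["choices"][choice])
        self.log.append((name, choice))

# Hypothetical two-scene adventure: the inner crypt only unlocks
# once a prior choice has produced the "has_key" flag.
scenes = {
    "crypt_door": {"requires": set(),
                   "choices": {"search": ["has_key"], "leave": []}},
    "inner_crypt": {"requires": {"has_key"},
                    "choices": {"open_tomb": ["tomb_opened"]}},
}
```

Because the log records which choice caused which transition, continuity across long or asynchronous sessions reduces to replaying (or diffing) explicit state rather than rereading prose notes.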
Show HN: Playwright Best Practices AI Skill https://ift.tt/gp0F5UI
Show HN: Playwright Best Practices AI Skill Hey folks, today we at Currents are releasing a brand new AI skill to help AI agents be genuinely smart when writing tests, debugging them, or doing anything Playwright-related. This is a very comprehensive skill, covering everyday topics like fixing flakiness, authentication, and writing fixtures, as well as more niche topics like testing Electron apps, PWAs, and iframes. It should make your agent much better at writing, debugging, and maintaining Playwright code. For anyone who hasn't learned about skills yet: they're a powerful new feature that lets you make the AI agents in your editor/CLI (Cursor, Claude, Antigravity, etc.) experts in a domain and better at performing specific tasks. (See https://ift.tt/5e1Ka6k ) You can install it by running: npx skills add https://ift.tt/e92LagP... The skill is open source and available under the MIT license at https://ift.tt/e92LagP... -> check out the repo for full documentation of what it covers. We're eager to hear community feedback and improve it :) Thanks! https://ift.tt/J5FA2lX February 6, 2026 at 02:01AM
Wednesday, February 4, 2026
Show HN: EpsteIn – Search the Epstein files for your LinkedIn connections https://ift.tt/2oxk74X
Show HN: EpsteIn – Search the Epstein files for your LinkedIn connections https://ift.tt/qgEoSYT February 5, 2026 at 02:24AM
Show HN: Tabstack Research – An API for verified web research (by Mozilla) https://ift.tt/1Fwnk5V
Show HN: Tabstack Research – An API for verified web research (by Mozilla) Hi HN, My team and I are building Tabstack to handle the web layer for AI agents. Today we are sharing Tabstack Research, an API for multi-step web discovery and synthesis. https://ift.tt/sAWhz9V In many agent systems, there is a clear distinction between extracting structured data from a single page and answering a question that requires reading across many sources. The first case is fairly well served today. The second usually is not. Most teams handle research by combining search, scraping, and summarization. This becomes brittle and expensive at scale. You end up managing browser orchestration, moving large amounts of raw text just to extract a few claims, and writing custom logic to check if a question was actually answered. We built Tabstack Research to move this reasoning loop into the infrastructure layer. You send a goal, and the system: - Decomposes it into targeted sub-questions to hit different data silos. - Navigates the web using fetches or browser automation as needed. - Extracts and verifies claims before synthesis to keep the context window focused on signal. - Checks coverage against the original intent and pivots if it detects information gaps. For example, if a search for enterprise policies identifies that data is fragmented across multiple sub-services (like Teams data living in SharePoint), the engine detects that gap and automatically pivots to find the missing documentation. The goal is to return something an application can rely on directly: a structured object with inline citations and direct links to the source text, rather than a list of links or a black-box summary. The blog post linked above goes into more detail on the engine architecture and the technical challenges of scaling agentic browsing. 
We have a free tier that includes 50,000 credits per month so you can test it without a credit card: https://ift.tt/JMaSWPU I would love to get your feedback on the approach and answer any questions about the stack. February 5, 2026 at 12:57AM
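The decompose/navigate/verify/pivot loop described above can be sketched as plain control flow. The functions `decompose`, `research`, and `find_gaps` here stand in for the engine's components; this is not Tabstack's API, only an illustration of the coverage-checking loop:

```python
def run_research(goal, decompose, research, find_gaps, max_rounds=3):
    """Answer sub-questions, then pivot toward detected information gaps.

    decompose(goal)        -> initial list of sub-questions
    research(question)     -> an answer (claims with citations)
    find_gaps(goal, found) -> sub-questions still needed to cover the goal
    """
    answers = {}
    questions = decompose(goal)
    for _ in range(max_rounds):
        for q in questions:
            if q not in answers:
                answers[q] = research(q)
        gaps = find_gaps(goal, answers)
        if not gaps:
            break                # coverage check passed
        questions = gaps         # pivot: chase the missing information
    return answers
```

The point of structuring it this way is that "was the question actually answered?" becomes an explicit post-condition checked inside the loop, rather than custom glue code each application must write around search-plus-summarize.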
Show HN: GitHub Browser Plugin for AI Contribution Blame in Pull Requests https://ift.tt/nXVNuHM
Show HN: GitHub Browser Plugin for AI Contribution Blame in Pull Requests https://ift.tt/IKgtoQl February 3, 2026 at 09:35PM
Tuesday, February 3, 2026
Show HN: I built an AI movie making and design engine in Rust https://ift.tt/Z5BEpPc
Show HN: I built an AI movie making and design engine in Rust I've been a photons-on-glass filmmaker for over ten years, and I've been developing ArtCraft for myself, my friends, and my colleagues. All of my film school friends have a lot of ambition, but the production pyramid doesn't allow individual talent to shine easily. 10,000 students go to film school, yet only a handful get to helm projects they want with full autonomy - and almost never at the blockbuster budget levels that would afford the creative vision they want. There's a lot of nepotism, too. AI is the personal computer moment for film. The DAW. One of my friends has done rotoscoping with live actors: https://www.youtube.com/watch?v=Tii9uF0nAx4 The Corridor folks show off a lot of creativity with this tech: https://www.youtube.com/watch?v=_9LX9HSQkWo https://www.youtube.com/watch?v=DSRrSO7QhXY https://www.youtube.com/watch?v=iq5JaG53dho We've been making silly shorts ourselves: https://www.youtube.com/watch?v=oqoCWdOwr2U https://www.youtube.com/watch?v=H4NFXGMuwpY The secret is that a lot of studios have been using AI for well over a year now. You just don't notice it, and they won't ever tell you because of the stigma. It's the "bad toupee fallacy" - you'll only notice it when it's bad, and they'll never tell you otherwise. Comfy is neat, but I work with folks that don't intuit node graphs and that either don't have graphics cards with adequate VRAM, or that can't manage Python dependencies. The foundation models are all pretty competitive, and they're becoming increasingly controllable - and that's the big thing - control. So I've been working on the UI/UX control layer. ArtCraft has 2D and 3D control surfaces, where the 3D portion can be used as a strong and intuitive ControlNet for "Image-to-Image" (I2I) and "Image-to-Video" (I2V) workflows. It's almost like a WYSIWYG, and I'm confident that this is the direction the tech will evolve for creative professionals rather than text-centric prompting. 
I've been frustrated with tools like Gimp and Blender for a while. I'm no UX/UI maestro, but I've never enjoyed complicated tools - especially complicated OSS tools. Commercial-grade tools are better. Figma is sublime. An IDE for creatives should be simple, magical, and powerful. ArtCraft lets you drag and drop from a variety of creative canvases and an asset drawer easily. It's fast and intuitive. Bouncing between text-to-image for quick prototyping, image editing, 3d gen, to 3d compositing is fluid. It feels like "crafting" rather than prompting or node graph wizardry. ArtCraft, being a desktop app, lets us log you into 3rd party compute providers. I'm a big proponent of using and integrating the models you subscribe to wherever you have them. This has let us integrate WorldLabs' Marble Gaussian Splats, for instance, and nobody else has done that. My plan is to add every provider over time, including generic API key-based compute providers like FAL and Replicate. I don't care if you pay for ArtCraft - I just want it to be useful. Two disclaimers: ArtCraft is "fair source" - I'd like to go the Cockroach DB route and eventually get funding, but keep the tool itself 100% source available for people to build and run for themselves. Obsidian, but with source code. If we got big, I'd spend a lot of time making movies. Right now ArtCraft is tied to a lightweight cloud service - I don't like this. It was a choice so I could reuse an old project and go fast, but I intend for this to work fully offline soon. All server code is in the monorepo, so you can run everything yourself. In the fullness of time, I do envision a portable OSS cloud for various AI tools to read/write to like a Github for assets, but that's just a distant idea right now. I've written about roadmap in the repo: I'd like to develop integrations for every compute provider, rewrite the frontend UI/UX in Bevy for a fully native client, and integrate local models too. 
https://ift.tt/N2Rsw5S February 3, 2026 at 10:42PM
Monday, February 2, 2026
Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/ntYX7lL
Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/ArVUwN3 February 2, 2026 at 08:11PM
Show HN: Cloud-cost-CLI – Find cloud $$ waste in AWS, Azure and GCP https://ift.tt/71Gk4Li
Show HN: Cloud-cost-CLI – Find cloud $$ waste in AWS, Azure and GCP Hey HN! I built a CLI tool to find cost-saving opportunities in AWS, Azure, and GCP. Why? Existing cost management tools are either expensive SaaS products or slow dashboards buried in cloud consoles. I wanted something fast, CLI-first, and multi-cloud that I could run in CI/CD or my terminal. What it does: - Scans your cloud accounts and finds idle VMs, unattached volumes, oversized databases, unused resources - Returns a ranked list of opportunities with estimated monthly savings - 26 analyzers across AWS, Azure, and GCP - Read-only (never modifies infrastructure) Key features: - HTML reports with interactive charts (new in v0.6.2) - AI-powered explanations (OpenAI or local Ollama) - Export formats: HTML, Excel, CSV, JSON, terminal - Multi-cloud: AWS, Azure, and GCP support (26 analyzers) Quick example: npm install -g cloud-cost-cli cloud-cost-cli scan --provider aws --output html Real impact: One scan found $11k/year in savings (empty App Service Plan, over-provisioned CosmosDB, idle caches). Technical stack: - TypeScript - AWS/Azure/GCP SDKs - Commander.js for CLI - Chart.js for HTML reports - Optional OpenAI/Ollama integration Open source (MIT): https://ift.tt/QoIlSie npm: cloud-cost-cli Would love feedback on: 1. What features would be most useful? 2. Should I add historical tracking (trends)? 3. Any missing cloud providers? Happy to answer questions! https://ift.tt/QoIlSie February 2, 2026 at 11:45PM
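The "ranked list of opportunities" idea reduces to merging findings from many analyzers and sorting by estimated monthly savings. A minimal sketch (in Python for brevity; the tool itself is TypeScript, and the field names here are illustrative, not its schema):

```python
def rank_findings(analyzer_results):
    """Merge per-analyzer finding lists, highest estimated savings first."""
    findings = [f for results in analyzer_results for f in results]
    return sorted(findings, key=lambda f: f["monthly_savings"], reverse=True)

# Hypothetical output of two analyzers from one scan
idle_vms = [{"resource": "vm-1", "monthly_savings": 85.0}]
unattached_volumes = [{"resource": "vol-7", "monthly_savings": 12.5},
                      {"resource": "vol-9", "monthly_savings": 140.0}]
ranked = rank_findings([idle_vms, unattached_volumes])
```

Keeping each analyzer independent and merging only at ranking time is what lets a tool like this scale to 26 analyzers across three clouds without the analyzers knowing about one another.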
Sunday, February 1, 2026
Show HN: Voiden – an offline, Git-native API tool built around Markdown https://ift.tt/D8LgMjm
Show HN: Voiden – an offline, Git-native API tool built around Markdown Hi HN, We have open-sourced Voiden. Most API tools are built like platforms. They are heavy because they optimize for accounts, sync, and abstraction - not for simple, local API work. Voiden treats API tooling as files. It’s an offline-first, Git-native API tool built on Markdown, where specs, tests, and docs live together as executable Markdown in your repo. Git is the source of truth. No cloud. No syncing. No accounts. No telemetry. Just Markdown, Git, hotkeys, and your damn specs. Voiden is extensible via plugins (including gRPC and WSS). Repo: https://ift.tt/F4s927Y Download Voiden here: https://ift.tt/4cj0y9l We'd love feedback from folks tired of overcomplicated and bloated API tooling! https://ift.tt/F4s927Y February 1, 2026 at 10:09PM