Friday, February 20, 2026
Show HN: Celeste game installs as ELF binary (42kB) on ESP32/breezybox [video] https://www.youtube.com/watch?v=nufOQWBmwpk February 21, 2026 at 12:26AM
Show HN: Flask Is My Go-To Web Framework https://ift.tt/eUgoJBS February 20, 2026 at 06:41PM
Thursday, February 19, 2026
Show HN: Hi.new – DMs for agents (open-source) https://www.hi.new/ February 20, 2026 at 04:20AM
Show HN: Astroworld – A universal N-body gravity engine in Python

I've been working on a modular N-body simulator in Python called Astroworld. It started as a Solar System visualizer, but I recently refactored it into a general-purpose engine that decouples physical laws from planetary data.

Technical highlights:
- Symplectic integration: uses a Velocity Verlet integrator to maintain long-term energy conservation ($\Delta E/E \approx 10^{-8}$ in stable systems).
- Agnostic architecture: it can ingest any system via orbital elements (Keplerian) or state vectors. I've used it to validate the stability of ultra-compact systems like TRAPPIST-1 and long-period perturbations like the Planet 9 hypothesis.
- Validation: includes 90+ physical tests, including Mercury's relativistic precession using Schwarzschild metric corrections.

The Planet 9 experiment: I ran a 10k-year simulation to track the differential signal in the argument of perihelion ($\omega$) for TNOs like Sedna. The result ($\approx 0.002^{\circ}$) was a great sanity check for the engine's precision, as this effect is secular and requires millions of years to fully manifest.

The stack: NumPy for vectorization, Matplotlib for 2D analysis, and Plotly for interactive 3D trajectories.

I'm currently working on a real-time 3D rendering layer. I'd love to get feedback on the integrator's stability for high-eccentricity orbits or suggestions on implementing more complex gravitational potentials.

https://ift.tt/wOhUEuP February 20, 2026 at 02:57AM
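For readers unfamiliar with the scheme mentioned above, here is a minimal Velocity Verlet (kick-drift-kick) sketch in Python with NumPy. It is not Astroworld's code; the function names and the direct O(N^2) force sum are illustrative only.

import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def accelerations(pos, mass, softening=0.0):
    """Pairwise Newtonian accelerations for all bodies (direct O(N^2) sum)."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        r = pos - pos[i]                                      # vectors from body i to every body
        d3 = (np.sum(r * r, axis=1) + softening**2) ** 1.5    # |r|^3 with optional softening
        d3[i] = np.inf                                        # no self-interaction
        acc[i] = G * np.sum(mass[:, None] * r / d3[:, None], axis=0)
    return acc

def velocity_verlet_step(pos, vel, mass, dt):
    """One symplectic kick-drift-kick step; good long-term energy behaviour."""
    acc = accelerations(pos, mass)
    vel_half = vel + 0.5 * dt * acc          # half kick
    pos_new = pos + dt * vel_half            # drift
    acc_new = accelerations(pos_new, mass)
    vel_new = vel_half + 0.5 * dt * acc_new  # half kick
    return pos_new, vel_new

Tracking the total energy across many such steps is the usual way to confirm the kind of $\Delta E/E$ figure quoted above.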
Show HN: PostForge – A PostScript interpreter written in Python

Hi HN, I built a PostScript interpreter from scratch in Python. PostForge implements the full PostScript Level 2 specification — operators, graphics model, font system, save/restore VM, the works. It reads .ps and .eps files and outputs PNG, PDF, SVG, or renders to an interactive Qt window.

Why build this? GhostScript is the only real game in town for PostScript interpretation, and it's a 35-year-old C codebase. I wanted something where you could actually read the code, step through execution, and understand what's happening. PostForge is modular and approachable — each operator category lives in its own file, the type system is clean, and there's an interactive prompt where you can poke at the interpreter state.

Some technical highlights:
- Full Level 2 compliance with selected Level 3 features
- PDF output with Type 1 font reconstruction/subsetting and TrueType/CID embedding
- ICC color management (sRGB, CMYK, Gray profiles via lcms2)
- Optional Cython-compiled execution loop (15-40% speedup)
- 2,500+ unit tests written in PostScript itself using a custom assertion framework
- Interactive executive mode with live Qt display — useful for debugging PS programs

What it's not: a GhostScript replacement for production/printer use. It's interpreted Python, so it's slower. But it handles complex real-world PostScript files well and the output quality is solid.

I'd love feedback, especially from anyone who's worked with PostScript or built language interpreters. The architecture docs are at docs/developer/architecture-overview.md if you want to dig in.

https://ift.tt/zuH4Rq2 February 19, 2026 at 11:21PM
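As background for anyone who hasn't looked at PostScript: it is a postfix, stack-based language, which is part of what makes an interpreter for it an approachable project. The toy Python evaluator below illustrates only that execution model; it borrows nothing from PostForge's actual operator modules.

def run(tokens):
    """Toy PostScript-style evaluator: operands go on a stack, operators pop them."""
    stack = []
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
        "div": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # anything else is treated as a number literal
    return stack

print(run("3 4 add 2 mul".split()))   # [14.0], like "3 4 add 2 mul" in PostScript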
Wednesday, February 18, 2026
Show HN: Trust Protocols for Anthropic/OpenAI/Gemini

Much of my work right now involves complex, long-running, multi-agentic teams of agents. I kept running into the same problem: "How do I keep these guys in line?" Rules weren't cutting it, and we needed a scalable, agentic-native STANDARD I could count on. There wasn't one. So I built one.

Here are two open-source protocols that extend A2A, granting AI agents behavioral contracts and runtime integrity monitoring:
- Agent Alignment Protocol (AAP): what an agent can do / has done.
- Agent Integrity Protocol (AIP): what an agent is thinking about doing / is allowed to do.

The problem: AI agents make autonomous decisions but have no standard way to declare what they're allowed to do, prove they're doing it, or detect when they've drifted. Observability tools tell you what happened. These protocols tell you whether what happened was okay.

Here's a concrete example. Say you have an agent that handles customer support tickets. Its Alignment Card declares:

{
  "permitted": ["read_tickets", "draft_responses", "escalate_to_human"],
  "forbidden": ["access_payment_data", "issue_refunds", "modify_account_settings"],
  "escalation_triggers": ["billing_request_over_500"],
  "values": ["accuracy", "empathy", "privacy"]
}

The agent gets a ticket: "Can you refund my last three orders?" The agent's reasoning trace shows it considering a call to the payments API. AIP reads that thinking, compares it to the card, and produces an Integrity Checkpoint:

{
  "verdict": "boundary_violation",
  "concerns": ["forbidden_action: access_payment_data"],
  "reasoning": "Agent considered payments API access, which is explicitly forbidden. Should escalate to human.",
  "confidence": 0.95
}

The agent gets nudged back before it acts. Not after. Not in a log you review during a 2:00 AM triage. Between this turn and the next.

That's the core idea. AAP defines what agents should do (the contract). AIP watches what they're actually thinking and flags when those diverge (the conscience). Over time, AIP builds a drift profile — if an agent that was cautious starts getting aggressive, the system notices.

When multiple agents work together, it gets more interesting. Agents exchange Alignment Cards and verify value compatibility before coordination begins. An agent that values "move fast" and one that values "rollback safety" register low coherence, and the system surfaces that conflict before work starts.

Live demo with four agents handling a production incident: https://ift.tt/VjgDPZy

The protocols are Apache-licensed, work with any Anthropic/OpenAI/Gemini agent, and ship as SDKs on npm and PyPI. A free gateway proxy (smoltbot) adds integrity checking to any agent with zero code changes.

GitHub: https://ift.tt/XU7y64p
Docs: docs.mnemom.ai
Demo video: https://youtu.be/fmUxVZH09So

https://www.mnemom.ai February 18, 2026 at 11:33PM
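To make the checkpoint step above concrete, here is a toy Python version of the comparison between an agent's intended actions and its Alignment Card. This is only a sketch of the idea; the actual AAP/AIP SDKs on npm and PyPI have their own APIs, which this does not reproduce.

def integrity_checkpoint(card, intended_actions):
    """Compare the actions an agent is considering against its Alignment Card."""
    concerns = []
    for action in intended_actions:
        if action in card.get("forbidden", []):
            concerns.append(f"forbidden_action: {action}")
        elif action not in card.get("permitted", []):
            concerns.append(f"undeclared_action: {action}")
    return {
        "verdict": "boundary_violation" if concerns else "aligned",
        "concerns": concerns,
    }

card = {
    "permitted": ["read_tickets", "draft_responses", "escalate_to_human"],
    "forbidden": ["access_payment_data", "issue_refunds", "modify_account_settings"],
}
print(integrity_checkpoint(card, ["access_payment_data"]))
# {'verdict': 'boundary_violation', 'concerns': ['forbidden_action: access_payment_data']}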
Show HN: LockFS

LockFS is a small open-source Java tool that encrypts files individually instead of bundling everything into a single container. Many vault systems rely on large encrypted blobs or container files. They can become complex to handle as they grow and complicate backups across mixed storage sizes.

LockFS takes a file-level approach:
- Each file is encrypted independently
- No monolithic container growth
- Files can be added, moved, or removed without rewriting a large archive

Contributions and feedback are welcome.

https://ift.tt/Ihr4XGy February 18, 2026 at 11:42PM
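LockFS itself is Java and its on-disk format isn't described in the post, so the snippet below is just a Python illustration of the file-level approach: each file becomes its own independent ciphertext, so adding or removing one never rewrites a larger archive. It assumes the third-party cryptography package.

from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_file(path: Path, key: bytes) -> Path:
    """Encrypt a single file into a sibling .locked file, independent of all others."""
    out = path.with_suffix(path.suffix + ".locked")
    out.write_bytes(Fernet(key).encrypt(path.read_bytes()))
    return out

def decrypt_file(path: Path, key: bytes) -> bytes:
    return Fernet(key).decrypt(path.read_bytes())

key = Fernet.generate_key()                 # in practice, derived from a passphrase
locked = encrypt_file(Path("notes.txt"), key)
assert decrypt_file(locked, key) == Path("notes.txt").read_bytes()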
Tuesday, February 17, 2026
Show HN: I wrote a technical history book on Lisp

The book page links to a blog post that explains how I went about it (and has a link to sample content), but the TL;DR is that I could not find many books that covered "our" history _and_ were larded with technical details. So I set about writing one, and some five years later I'm happy to share the result. I think it's one of the few "computer history" books that has tons of code, but correct me if I'm wrong (I wrote this both to tell a story and to learn :-)).

My favorite languages are Smalltalk and Lisp, but as an Emacs user I've been using the latter for much longer, and for my current projects Common Lisp is a better fit, so I call myself "a Lisp-er" these days. If people like what I did, I do have plans to write some more (but probably only after I retire; writing next to a full-time job is hard). Maybe on Smalltalk, maybe on computer networks - two topics close to my heart.

And a shout-out to Dick Gabriel, who contributed some great personal memories about the man who started it all, John McCarthy.

https://ift.tt/pH6fry0 February 17, 2026 at 10:43PM
Monday, February 16, 2026
Show HN: AsdPrompt – Vimium-style keyboard navigation for AI chat responses

I use Claude throughout the day and kept getting annoyed by the same thing: selecting text from responses with the mouse. Overshoot, re-select, copy, click input, paste. Especially bad in long conversations where you want to reference something from 30 turns ago.

asdPrompt is a Chrome extension that adds hint-based navigation (like Vimium) to AI chat interfaces. Cmd+Shift+S activates the overlay, hint labels appear next to every text block. Type a letter to select a block, then keep typing to drill down: block → sentence → word. Enter copies, or you can press an action key (e, d, x) to inject a follow-up prompt ("elaborate on [selection]") directly into the chat input.

Works on claude.ai, chatgpt.com, and gemini.google.com. Adapts to light/dark themes. Free.

Built the initial MVP in 2 days using Claude Code — the adapter architecture, NLP segmentation pipeline, and Playwright test harness would have taken a month without it.

Tech details for the curious: site-specific DOM parsers behind an adapter interface, text segmentation via compromise.js with regex fallbacks for technical content (paths, camelCase break NLP libraries), bounding rectangles calculated via Range API + TreeWalker, overlay isolated in Shadow DOM. Tested with Playwright visual regression.

The landing page has an interactive tutorial where you can try the full drill-down mechanic without installing. Happy to talk about the implementation.

https://asdprompt.com/ February 17, 2026 at 12:28AM
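The segmentation detail above (NLP splitting with regex fallbacks so paths and camelCase survive) is the part most likely to interest other builders. The extension does this in JavaScript with compromise.js; the short Python sketch below only illustrates the same block → sentence → word drill-down idea, with hypothetical helper names.

import re

def split_sentences(block: str) -> list[str]:
    """Sentence level: split on ./!/? only when followed by whitespace,
    so dots inside paths, versions, and identifiers (config.yaml, v1.2.3) survive."""
    return [s for s in re.split(r"(?<=[.!?])\s+", block.strip()) if s]

def split_words(sentence: str) -> list[str]:
    """Word level: strip trailing punctuation but keep camelCase, snake_case,
    and path-like tokens whole."""
    return [w.strip(".,;:") for w in sentence.split()]

block = "Edit config.yaml under ~/app/src. Then call parseUserInput again."
for sentence in split_sentences(block):
    print(sentence, "->", split_words(sentence))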
Show HN: Claude-engram – Brain-inspired persistent memory, runs inside Claude.ai Claude.ai artifacts can call the Anthropic API and have persistent storage (5MB via window.storage). I used these two capabilities to build a memory system modeled on how human memory actually works — salience scoring, forgetting curves, and sleep consolidation — all running inside a single React artifact with no external dependencies. Just add the artifact to your chat and paste the instructions into your personal preferences setting. https://ift.tt/2oVu68I February 17, 2026 at 12:15AM
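The artifact itself is a React/JavaScript component and its exact scoring isn't given in the post, but the forgetting curve it names is easy to sketch. The Python below is a generic Ebbinghaus-style exponential decay where salience slows the decay; the parameter names are invented for illustration.

import math
import time

def retention(salience: float, age_seconds: float, half_life: float = 86_400.0) -> float:
    """Exponential forgetting curve; higher salience stretches the half-life."""
    effective_half_life = half_life * (1.0 + salience)
    return math.exp(-math.log(2.0) * age_seconds / effective_half_life)

def consolidate(memories: list[dict], keep: int = 50) -> list[dict]:
    """A crude 'sleep consolidation' pass: keep only the best-retained memories."""
    now = time.time()
    return sorted(memories,
                  key=lambda m: retention(m["salience"], now - m["created"]),
                  reverse=True)[:keep]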
Show HN: Simple org-mode web adapter I like to use org files a lot, but I wanted some way to browse and edit them on my phone when I'm out. Yesterday I used Codex to make this simple one-file web server that just displays all my org files with backlinks. It doesn't have any authentication because I only run it on my wireguard VPN. I've been having fun with it, hopefully it's useful to someone else! https://ift.tt/U7GqVyB February 16, 2026 at 11:19PM
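The post doesn't include the server code, so the following is only a guess at the smallest shape such a one-file viewer could take (Flask, no authentication, meant to sit behind a VPN exactly as the author describes). The directory, port, and rendering are placeholders, and backlinks are left out.

from pathlib import Path
from flask import Flask, abort
from markupsafe import escape

ORG_DIR = Path.home() / "org"   # directory of .org files; adjust to taste
app = Flask(__name__)

@app.route("/")
def index():
    items = "".join(f'<li><a href="/f/{p.name}">{p.name}</a></li>'
                    for p in sorted(ORG_DIR.glob("*.org")))
    return f"<ul>{items}</ul>"

@app.route("/f/<name>")
def show(name):
    path = ORG_DIR / name
    if path.suffix != ".org" or not path.is_file():
        abort(404)
    return f"<pre>{escape(path.read_text())}</pre>"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)   # no auth: only expose on a trusted network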
Sunday, February 15, 2026
Show HN: An open-source extension to chat with your bookmarks using local LLMs I read a lot online and constantly bookmark articles, docs, and resources… then forget why I saved them. I was also very bored on Valentine's Day, so I built a browser extension that lets you chat with your bookmarks directly, using local-first AI (WebLLM running entirely in the browser). The extension downloads and indexes your bookmarked pages, stores them locally, and lets you ask questions. No server, no cloud processing, everything stays on your machine. It's very early, but it works, and I'm planning to add a bunch of stuff. Did I mention it's open-source and MIT licensed? https://ift.tt/a35fs28 February 16, 2026 at 12:01AM
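The extension runs WebLLM in the browser, so none of its code appears here; the Python toy below only illustrates the "index locally, then retrieve before answering" step with a naive bag-of-words match and made-up data.

from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def best_match(question: str, pages: dict[str, str]) -> str:
    """Return the URL whose cached text shares the most words with the question."""
    q = tokenize(question)
    return max(pages, key=lambda url: sum((tokenize(pages[url]) & q).values()))

pages = {
    "https://example.com/rust-intro": "an introduction to ownership and borrowing in rust",
    "https://example.com/postgres-tips": "practical indexing tips for postgres",
}
print(best_match("why did I bookmark something about rust ownership", pages))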
Show HN: Ingglish – What if English spelling made sense?

My 5-year-old is learning to read and I keep having to say "yeah sorry, that letter is silent" and "no, those letters make a different sound in this word." So I built Ingglish — English where every letter always makes the same sound. "ough" alone makes 6 different sounds (though, through, rough, cough, thought, bough). In Ingglish, every letter has one sound, no silent letters, no exceptions.

- Paste text to see it translated instantly
- Translate any webpage while preserving its layout
- Chrome extension to browse the web in Ingglish
- Fully reversible — Ingglish text can be converted back to standard English (minus homophones)

The core translator, DOM integration, and website are all open source: https://ift.tt/Dh0go9v

I'd love your feedback! Thanks.

https://ingglish.com February 15, 2026 at 11:33PM
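The post doesn't spell out Ingglish's actual respelling rules, so the mapping below is a made-up stand-in; the only point of this Python sketch is the shape of a reversible word-level translation table, and why homophones break the round trip.

# Invented respellings for illustration only; not Ingglish's real rules.
TO_INGGLISH = {
    "though": "tho", "through": "throo", "rough": "ruf",
    "cough": "kof", "thought": "thaut", "bough": "bow",
}
FROM_INGGLISH = {v: k for k, v in TO_INGGLISH.items()}  # reversible, minus homophones

def translate(text: str, table: dict[str, str]) -> str:
    return " ".join(table.get(word.lower(), word) for word in text.split())

print(translate("I thought it through", TO_INGGLISH))                   # I thaut it throo
print(translate(translate("rough cough", TO_INGGLISH), FROM_INGGLISH))  # rough cough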