covid20212022
Monday, February 23, 2026
Show HN: EloPhanto – A self-evolving AI agent that builds its own tools https://ift.tt/bgVKYSt
Show HN: EloPhanto – A self-evolving AI agent that builds its own tools

I built EloPhanto because I wanted an AI agent that could actually execute tasks on my machine with full visibility — not a black box API call. It runs locally and controls a real Chrome browser (47 tools) using your existing sessions.

The standout feature: when EloPhanto encounters a task it doesn't have a tool for, it autonomously writes the Python code, tests it, reviews itself, and integrates the new tool permanently. It's now built 99+ tools for itself this way.

Other features:
- Multi-channel gateway (CLI, Telegram, Discord, Slack) with unified sessions
- MCP tool server support (connect any MCP server)
- Document & media analysis (PDF, images, OCR, RAG)
- Agent email (own inbox for service signup/verification)
- Crypto payments wallet (Base chain, spending limits)
- TOTP authenticator (autonomous 2FA handling)
- Evolving identity that learns from experience
- Skill system with EloPhantoHub marketplace (28 bundled skills)

It's open source (Apache 2.0), local-first, and designed to be your personal AI operating system. The project is very new — currently at 6 stars on GitHub. I'd love to get feedback on the architecture, the self-development approach, or what features you'd want in a local agent.

https://ift.tt/xYzytgK February 23, 2026 at 10:28PM
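The write-test-integrate loop the post describes can be sketched in a few lines. Everything below (the registry, the hardcoded "generated" source, the smoke test) is hypothetical and not taken from EloPhanto's codebase; in the real agent the source would come from the model.

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

# Hypothetical sketch of a self-building tool loop: the agent emits
# Python source for a missing tool, smoke-tests it, and registers it.
TOOL_REGISTRY = {}

def integrate_tool(name: str, source: str, test_args: tuple, expected):
    """Write generated source to a module, import it, test it, and
    register it permanently on success."""
    tool_dir = Path(tempfile.mkdtemp())
    module_path = tool_dir / f"{name}.py"
    module_path.write_text(source)

    spec = importlib.util.spec_from_file_location(name, module_path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)

    tool_fn = getattr(module, name)
    if tool_fn(*test_args) != expected:   # self-review / smoke test
        raise RuntimeError(f"generated tool {name!r} failed its test")
    TOOL_REGISTRY[name] = tool_fn         # permanent integration
    return tool_fn

# Example: the agent "writes" a word-count tool it was missing.
generated = "def word_count(text):\n    return len(text.split())\n"
integrate_tool("word_count", generated, ("hello brave new world",), 4)
print(TOOL_REGISTRY["word_count"]("one two three"))  # 3
```

The interesting design question this raises (which the post's "reviews itself" step hints at) is how strict the smoke test must be before a generated tool is trusted permanently.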
Show HN: TTSLab – A voice AI agent and TTS lab running in the browser via WebGPU https://ift.tt/0vW7Tsc
Show HN: TTSLab – A voice AI agent and TTS lab running in the browser via WebGPU

I built TTSLab — a free, open-source tool for running text-to-speech and speech-to-text models directly in the browser using WebGPU and WASM. No API keys, no backend, no data leaves your machine.

When you open the site, you'll hear it immediately — the landing page auto-generates speech from three different sentences right in your browser, no setup required. You can then try any model yourself: type text, hit generate, hear it instantly. Models download once and get cached locally.

The most experimental feature: a fully in-browser Voice Agent. It chains speech-to-text → LLM → text-to-speech, all running locally on your GPU via WebGPU. You can have a spoken conversation with an AI without a single network request.

Currently supported models:
- TTS: Kokoro 82M, SpeechT5, Piper (VITS)
- STT: Whisper Tiny, Whisper Base

Other features:
- Side-by-side model comparison
- Speed benchmarking on your hardware
- Streaming generation for supported models

Source: https://ift.tt/hB5p9ow (MIT)

Feedback I'd especially like:
1. How does performance feel on your hardware?
2. What models should I add next?
3. Did the Voice Agent work for you? That's the most experimental part.

Built on top of ONNX Runtime Web ( https://onnxruntime.ai ) and Transformers.js — huge thanks to those communities for making in-browser ML inference possible.

https://ttslab.dev February 23, 2026 at 10:52PM
Sunday, February 22, 2026
Show HN: Drowse – Nix dynamic derivations made easy https://ift.tt/EzSsnJ3
Show HN: Drowse – Nix dynamic derivations made easy https://ift.tt/DR0xN7y February 22, 2026 at 10:18PM
Show HN: I quit MyNetDiary after 3 years of popups and built a calorie tracker https://ift.tt/EYVg35N
Show HN: I quit MyNetDiary after 3 years of popups and built a calorie tracker

After three years of hitting the same upgrade popup every time I opened MyNetDiary just to log lunch, I finally gave up searching for an alternative and built one myself.

The whole thing is a single HTML file. No server, no account, no login, no cloud. Data lives on your device only. You open it in a browser, bookmark it, and it works — offline, forever.

The feature I'm most proud of is real-time pacing: it knows your eating window, the current time, and how much you've consumed, and tells you whether you're actually on track — not just what your total is.

Free trial, no signup required: calories.today/app.html

Built this for myself after losing weight and just wanting to maintain without an app trying to sell me something every day. If that sounds familiar, give the trial a shot.

https://calories.today/app.html February 22, 2026 at 11:41PM
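The pacing idea is simple enough to sketch: compare calories consumed so far against a linear budget across the eating window. This is a guess at the logic, not the app's actual code; all names, the linear model, and the 10% slack are illustrative.

```python
from datetime import time

def pacing(consumed: float, budget: float,
           window_start: time, window_end: time, now: time) -> dict:
    """Hypothetical real-time pacing: calories eaten so far versus a
    linear budget spread over the eating window."""
    def minutes(t: time) -> int:
        return t.hour * 60 + t.minute

    total = minutes(window_end) - minutes(window_start)
    elapsed = min(max(minutes(now) - minutes(window_start), 0), total)
    expected = budget * elapsed / total       # linear pacing target
    return {
        "expected": round(expected),
        "delta": round(consumed - expected),  # positive means ahead of pace
        "on_track": consumed <= expected + 0.1 * budget,  # 10% slack
    }

# 2000 kcal budget, eating window 08:00-20:00, checked at 14:00 (halfway)
status = pacing(900, 2000, time(8, 0), time(20, 0), time(14, 0))
print(status)  # expected 1000, delta -100, on_track True
```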
Saturday, February 21, 2026
Show HN: Blindspot – a userscript to block tab-switch detection https://ift.tt/XfMunP9
Show HN: Blindspot – a userscript to block tab-switch detection A Tampermonkey userscript that disables in-browser anti-cheat mechanisms (BlurSpy, honest-responder). https://ift.tt/ThleUYw February 21, 2026 at 09:04PM
Show HN: ClaudeUsage – macOS menu bar app to track your Claude Pro usage limits https://ift.tt/D65rhFX
Show HN: ClaudeUsage – macOS menu bar app to track your Claude Pro usage limits https://ift.tt/7qfutxe February 21, 2026 at 10:44PM
Friday, February 20, 2026
Show HN: Celeste game installs as ELF binary (42kB) on ESP32/breezybox [video] https://ift.tt/1NkwMPm
Show HN: Celeste game installs as ELF binary (42kB) on ESP32/breezybox [video] https://www.youtube.com/watch?v=nufOQWBmwpk February 21, 2026 at 12:26AM
Show HN: Flask Is My Go-To Web Framework https://ift.tt/ErlFe6V
Show HN: Flask Is My Go-To Web Framework https://ift.tt/eUgoJBS February 20, 2026 at 06:41PM
Thursday, February 19, 2026
Show HN: Hi.new – DMs for agents (open-source) https://ift.tt/VquKM8i
Show HN: Hi.new – DMs for agents (open-source) https://www.hi.new/ February 20, 2026 at 04:20AM
Show HN: Astroworld – A universal N-body gravity engine in Python https://ift.tt/MJwrvFR
Show HN: Astroworld – A universal N-body gravity engine in Python

I've been working on a modular N-body simulator in Python called Astroworld. It started as a Solar System visualizer, but I recently refactored it into a general-purpose engine that decouples physical laws from planetary data.

Technical highlights:
- Symplectic integration: uses a Velocity Verlet integrator to maintain long-term energy conservation ($\Delta E/E \approx 10^{-8}$ in stable systems).
- Agnostic architecture: it can ingest any system via orbital elements (Keplerian) or state vectors. I've used it to validate the stability of ultra-compact systems like TRAPPIST-1 and long-period perturbations like the Planet 9 hypothesis.
- Validation: includes 90+ physical tests, including Mercury's relativistic precession using Schwarzschild metric corrections.

The Planet 9 experiment: I ran a 10k-year simulation to track the differential signal in the argument of perihelion ($\omega$) for TNOs like Sedna. The result ($\approx 0.002^{\circ}$) was a great sanity check for the engine's precision, as this effect is secular and requires millions of years to fully manifest.

The stack: NumPy for vectorization, Matplotlib for 2D analysis, and Plotly for interactive 3D trajectories.

I'm currently working on a real-time 3D rendering layer. I'd love to get feedback on the integrator's stability for high-eccentricity orbits or suggestions on implementing more complex gravitational potentials.

https://ift.tt/wOhUEuP February 20, 2026 at 02:57AM
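A velocity Verlet step is short enough to show in full. The toy two-body setup below (G = 1, hypothetical units, pure Python rather than NumPy) is not Astroworld's code, but it demonstrates why the symplectic scheme keeps energy drift bounded over many steps.

```python
import math

G = 1.0  # gravitational constant in toy units

def accel(pos, masses):
    """Pairwise Newtonian gravitational acceleration."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def energy(pos, vel, masses):
    """Total energy: kinetic plus pairwise potential."""
    ke = sum(0.5 * m * (v[0] ** 2 + v[1] ** 2) for m, v in zip(masses, vel))
    pe = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            pe -= G * masses[i] * masses[j] / math.dist(pos[i], pos[j])
    return ke + pe

def verlet_step(pos, vel, masses, dt):
    a0 = accel(pos, masses)
    for i in range(len(pos)):   # position update with half-kick folded in
        pos[i][0] += vel[i][0] * dt + 0.5 * a0[i][0] * dt * dt
        pos[i][1] += vel[i][1] * dt + 0.5 * a0[i][1] * dt * dt
    a1 = accel(pos, masses)
    for i in range(len(pos)):   # velocity update with averaged acceleration
        vel[i][0] += 0.5 * (a0[i][0] + a1[i][0]) * dt
        vel[i][1] += 0.5 * (a0[i][1] + a1[i][1]) * dt

# Central mass (m=1) with a light body on a near-circular orbit at r=1.
masses = [1.0, 1e-3]
pos = [[0.0, 0.0], [1.0, 0.0]]
vel = [[0.0, 0.0], [0.0, 1.0]]
e0 = energy(pos, vel, masses)
dt = 0.001
for _ in range(10_000):         # roughly 1.6 orbits
    verlet_step(pos, vel, masses, dt)
drift = abs((energy(pos, vel, masses) - e0) / e0)
print(f"relative energy drift: {drift:.2e}")
```

Unlike a naive Euler integrator, the energy error here oscillates rather than growing secularly, which is what makes multi-thousand-year runs meaningful.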
Show HN: PostForge – A PostScript interpreter written in Python https://ift.tt/SqIHuaF
Show HN: PostForge – A PostScript interpreter written in Python

Hi HN, I built a PostScript interpreter from scratch in Python. PostForge implements the full PostScript Level 2 specification — operators, graphics model, font system, save/restore VM, the works. It reads .ps and .eps files and outputs PNG, PDF, SVG, or renders to an interactive Qt window.

Why build this? GhostScript is the only real game in town for PostScript interpretation, and it's a 35-year-old C codebase. I wanted something where you could actually read the code, step through execution, and understand what's happening. PostForge is modular and approachable — each operator category lives in its own file, the type system is clean, and there's an interactive prompt where you can poke at the interpreter state.

Some technical highlights:
- Full Level 2 compliance with selected Level 3 features
- PDF output with Type 1 font reconstruction/subsetting and TrueType/CID embedding
- ICC color management (sRGB, CMYK, Gray profiles via lcms2)
- Optional Cython-compiled execution loop (15-40% speedup)
- 2,500+ unit tests written in PostScript itself using a custom assertion framework
- Interactive executive mode with live Qt display — useful for debugging PS programs

What it's not: a GhostScript replacement for production/printer use. It's interpreted Python, so it's slower. But it handles complex real-world PostScript files well and the output quality is solid.

I'd love feedback, especially from anyone who's worked with PostScript or built language interpreters. The architecture docs are at docs/developer/architecture-overview.md if you want to dig in.

https://ift.tt/zuH4Rq2 February 19, 2026 at 11:21PM
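The heart of any PostScript interpreter is a postfix token stream driving an operand stack. A toy sketch of that execution model (nothing like PostForge's full Level 2 operator set, and handling only numbers and four arithmetic operators) looks like this:

```python
def ps_eval(program: str):
    """Evaluate a whitespace-separated PostScript-style token stream
    against an operand stack; returns the final stack."""
    stack = []
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
        "div": lambda a, b: a / b,
    }
    for token in program.split():
        if token in ops:
            b = stack.pop()   # second operand is on top of the stack
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack

print(ps_eval("3 4 add 2 mul"))  # [14.0]
```

A real interpreter adds dictionaries, procedures, the graphics state, and error handling on top of this core, but the operand-stack discipline stays the same.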
Wednesday, February 18, 2026
Show HN: Trust Protocols for Anthropic/OpenAI/Gemini https://ift.tt/Vq4bick
Show HN: Trust Protocols for Anthropic/OpenAI/Gemini

Much of my work right now involves complex, long-running, multi-agent teams. I kept running into the same problem: "How do I keep these guys in line?" Rules weren't cutting it, and we needed a scalable, agent-native STANDARD I could count on. There wasn't one. So I built one.

Here are two open-source protocols that extend A2A, granting AI agents behavioral contracts and runtime integrity monitoring:
- Agent Alignment Protocol (AAP): what an agent can do / has done.
- Agent Integrity Protocol (AIP): what an agent is thinking about doing / is allowed to do.

The problem: AI agents make autonomous decisions but have no standard way to declare what they're allowed to do, prove they're doing it, or detect when they've drifted. Observability tools tell you what happened. These protocols tell you whether what happened was okay.

Here's a concrete example. Say you have an agent that handles customer support tickets. Its Alignment Card declares:

{
  "permitted": ["read_tickets", "draft_responses", "escalate_to_human"],
  "forbidden": ["access_payment_data", "issue_refunds", "modify_account_settings"],
  "escalation_triggers": ["billing_request_over_500"],
  "values": ["accuracy", "empathy", "privacy"]
}

The agent gets a ticket: "Can you refund my last three orders?" The agent's reasoning trace shows it considering a call to the payments API. AIP reads that thinking, compares it to the card, and produces an Integrity Checkpoint:

{
  "verdict": "boundary_violation",
  "concerns": ["forbidden_action: access_payment_data"],
  "reasoning": "Agent considered payments API access, which is explicitly forbidden. Should escalate to human.",
  "confidence": 0.95
}

The agent gets nudged back before it acts. Not after. Not in a log you review during a 2:00 AM triage. Between this turn and the next.

That's the core idea. AAP defines what agents should do (the contract). AIP watches what they're actually thinking and flags when those diverge (the conscience). Over time, AIP builds a drift profile — if an agent that was cautious starts getting aggressive, the system notices.

When multiple agents work together, it gets more interesting. Agents exchange Alignment Cards and verify value compatibility before coordination begins. An agent that values "move fast" and one that values "rollback safety" register low coherence, and the system surfaces that conflict before work starts.

Live demo with four agents handling a production incident: https://ift.tt/VjgDPZy

The protocols are Apache-licensed, work with any Anthropic/OpenAI/Gemini agent, and ship as SDKs on npm and PyPI. A free gateway proxy (smoltbot) adds integrity checking to any agent with zero code changes.

GitHub: https://ift.tt/XU7y64p
Docs: docs.mnemom.ai
Demo video: https://youtu.be/fmUxVZH09So

https://www.mnemom.ai February 18, 2026 at 11:33PM
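The checkpoint logic in the example can be sketched directly from the two JSON documents in the post. The function below is illustrative, not the AIP SDK's actual API; the "out_of_scope" verdict for undeclared actions is my assumption about how a card with separate permitted and forbidden lists would be enforced.

```python
# Illustrative Alignment Card check; field names mirror the JSON in the
# post, but this is not the AIP SDK's real interface.
ALIGNMENT_CARD = {
    "permitted": ["read_tickets", "draft_responses", "escalate_to_human"],
    "forbidden": ["access_payment_data", "issue_refunds",
                  "modify_account_settings"],
}

def integrity_checkpoint(considered_action: str, card: dict) -> dict:
    """Compare an action the agent is considering against its card."""
    if considered_action in card["forbidden"]:
        return {
            "verdict": "boundary_violation",
            "concerns": [f"forbidden_action: {considered_action}"],
            "reasoning": "Action is explicitly forbidden; escalate to a human.",
        }
    if considered_action not in card["permitted"]:
        return {
            "verdict": "out_of_scope",  # neither permitted nor forbidden
            "concerns": [f"undeclared_action: {considered_action}"],
            "reasoning": "Action is not declared on the card; flag for review.",
        }
    return {"verdict": "ok", "concerns": [], "reasoning": "Within contract."}

print(integrity_checkpoint("access_payment_data", ALIGNMENT_CARD)["verdict"])
# boundary_violation
```

The hard part in practice, which the protocols address and this sketch does not, is mapping a free-form reasoning trace onto discrete action names reliably.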
Show HN: LockFS https://ift.tt/mH8q47p
Show HN: LockFS

LockFS is a small open-source Java tool that encrypts files individually instead of bundling everything into a single container. Many vault systems rely on large encrypted blobs or container files. They can become complex to handle as they grow and complicate backups across mixed storage sizes.

LockFS takes a file-level approach:
- Each file is encrypted independently
- No monolithic container growth
- Files can be added, moved, or removed without rewriting a large archive

Contributions and feedback are welcome.

https://ift.tt/Ihr4XGy February 18, 2026 at 11:42PM
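The file-level property (add or remove one file without touching any other) can be sketched in a few lines of Python, even though LockFS itself is Java. The toy HMAC-SHA-256 counter-mode keystream below is a placeholder for illustration only and is not LockFS's cipher; a real tool would use an authenticated scheme such as AES-GCM.

```python
import hashlib
import hmac
import os
import tempfile
from pathlib import Path

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy PRF-in-counter-mode keystream; placeholder, not real crypto."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hmac.new(key, nonce + counter.to_bytes(8, "big"),
                            hashlib.sha256).digest())
        counter += 1
    return bytes(out[:length])

def encrypt_file(src: Path, dst: Path, key: bytes) -> None:
    """Encrypt one file independently; its nonce travels with it."""
    nonce = os.urandom(16)
    data = src.read_bytes()
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))
    dst.write_bytes(nonce + ct)

def decrypt_file(src: Path, key: bytes) -> bytes:
    blob = src.read_bytes()
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# Each file round-trips on its own; there is no shared container to rewrite.
tmp = Path(tempfile.mkdtemp())
key = os.urandom(32)
plain = tmp / "note.txt"
plain.write_bytes(b"file-level, not container-level")
enc = tmp / "note.txt.lock"
encrypt_file(plain, enc, key)
print(decrypt_file(enc, key))  # b'file-level, not container-level'
```

Because every encrypted file is self-contained (nonce plus ciphertext), deleting or syncing one file is an ordinary filesystem operation, which is the backup-friendliness the post is after.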