Sunday, April 19, 2026
Show HN: A privacy-first, local-LLM note app for iOS (Google Keep alternative) https://ift.tt/KrmxHaX
Show HN: A privacy-first, local-LLM note app for iOS (Google Keep alternative) https://ift.tt/P6rQswD April 19, 2026 at 11:59PM
Show HN: Free PDF redactor that runs client-side https://ift.tt/dbNrwXy
Show HN: Free PDF redactor that runs client-side I recently needed to verify past employment, which meant uploading paystubs from a previous employer, but I didn't want to share my salary in that role. A quick search online turned up sites that required sign-up or weren't clear about document privacy. I conceded and signed up for a free trial of Adobe Acrobat so I could use its PDF redaction feature. I figured there should be a dead-simple, private way of doing this, so I decided to create it myself. The tool rasterizes each page to an image with your redactions burned in, then rebuilds the PDF, so the text layer is permanently destroyed rather than just covered up and easily retrievable. I welcome any and all feedback as this is my first live tool, thanks! https://redactpdf.net April 20, 2026 at 01:39AM
Show HN: Faceoff – A terminal UI for following NHL games https://ift.tt/9gnRhIw
Show HN: Faceoff – A terminal UI for following NHL games Faceoff is a TUI app written in Python for following live NHL games and browsing standings and stats. I got the inspiration from Playball, a similar TUI app for MLB games that was featured on HN. The app was mostly vibe-coded with Claude Code, but not one-shot: I added features and fixed bugs by using it, as I've spent way too much time in the terminal over the last few months. Try it out with `uvx faceoff` (requires uv). https://ift.tt/NVuyAMC April 20, 2026 at 12:44AM
Show HN: Google Gemini Is Scanning Your Photos – and the EU Said No https://ift.tt/q2CGbs6
Show HN: Google Gemini Is Scanning Your Photos – and the EU Said No Google has expanded its Personal Intelligence feature so that Gemini can now access your Google Photos face data, Gmail, YouTube history, and search activity to generate personalized AI images — live for US paid subscribers as of April 2026. https://ift.tt/r5T7NsL... April 19, 2026 at 11:36PM
Saturday, April 18, 2026
Show HN: AI Subroutines – Run automation scripts inside your browser tab https://ift.tt/jJLSZRb
Show HN: AI Subroutines – Run automation scripts inside your browser tab We built AI Subroutines in rtrvr.ai. Record a browser task once, save it as a callable tool, and replay it with zero token cost, zero LLM inference delay, and zero mistakes. The subroutine itself is a deterministic script composed of discovered network calls hitting the site's backend, plus page interactions like click/type/find.

The key architectural decision: the script executes inside the webpage itself, not through a proxy, not in a headless worker, not out of process. The script dispatches requests from the tab's execution context, so auth, CSRF, TLS session, and signed headers get added to all requests and propagate for free. No certificate installation, no TLS fingerprint modification, no separate auth stack to maintain.

During recording, the extension intercepts network requests (MAIN-world fetch/XHR patch + webRequest fallback). We score and trim ~300 requests down to ~5 based on method, timing relative to DOM events, and origin. Volatile GraphQL operation IDs are detected and force a DOM-only fallback before they break silently on the next run. The generated code combines network calls with DOM actions (click, type, find) in the same function via an rtrvr.* helper namespace. Point the agent at a spreadsheet of 500 rows, and with a single LLM call parameters are assigned and 500 subroutines are kicked off.

Key use cases:
- Record sending an IG DM, then have a reusable, callable routine to send DMs at zero token cost
- Create a routine that gets the latest products in a site catalog, then call it to fetch thousands of products via direct GraphQL queries
- Set up a routine that files an EHR form based on parameters to the tool; the AI infers the parameters from the current page context and calls the tool
- Reuse a routine daily to sync outbound messages on LinkedIn/Slack/Gmail to a CRM using an MCP server

We see the fundamental reason browser agents haven't taken off: for repetitive tasks, going through the inference loop is unnecessary. Better to record once and have the LLM generate a script that leverages all the ways to interact with a site and the wider web: directly calling backend APIs, interacting with the DOM, and calling 3P tools/APIs/MCP servers. https://ift.tt/J5mrUDp April 18, 2026 at 04:03AM
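The score-and-trim step described above can be pictured with a small standalone sketch. The weights, field names, and function names below are illustrative guesses, not rtrvr.ai's actual heuristics:

```python
# Illustrative sketch of trimming ~300 recorded requests down to the few
# that matter, scoring each by HTTP method, timing relative to the recorded
# DOM event, and origin. All weights and field names are hypothetical.

def score_request(req, event_ts, page_origin):
    score = 0
    if req["method"] in ("POST", "PUT", "PATCH"):
        score += 3  # mutating calls usually carry the action
    if abs(req["ts"] - event_ts) < 1.0:
        score += 2  # fired right after the recorded interaction
    if req["origin"] == page_origin:
        score += 1  # first-party backend, not analytics noise
    return score

def trim_requests(requests, event_ts, page_origin, keep=5):
    ranked = sorted(requests,
                    key=lambda r: score_request(r, event_ts, page_origin),
                    reverse=True)
    return ranked[:keep]
```

The real extension presumably uses richer signals (response types, GraphQL operation names, webRequest metadata), but the shape of the problem, rank then truncate, is the same.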
Show HN: Praxis – Lab data to publication-ready figures in one Python package https://ift.tt/itwyvOA
Show HN: Praxis – Lab data to publication-ready figures in one Python package https://ift.tt/u5Nj9xO April 19, 2026 at 01:15AM
Show HN: I built Panda to get up to 99% token savings https://ift.tt/NL73vPK
Show HN: I built Panda to get up to 99% token savings https://ift.tt/dVw9mNM April 18, 2026 at 05:00PM
Friday, April 17, 2026
Show HN: Waputer – The WebAssembly Computer https://ift.tt/nlCwDAr
Show HN: Waputer – The WebAssembly Computer Waputer is an operating system that runs entirely in the browser. When you visit the website at https://waputer.app , a kernel written in JavaScript sets up a filesystem and launches a WebAssembly program, which in turn talks to the kernel to handle display and input. A purely terminal-based version is at https://waputer.dev . My original intention was to create browser programs that have a lot more in common with the desktop. The traditional "hello world" program is not really suited for the web; Waputer changes that. The GitHub repo at https://ift.tt/g5z06Up gives a very brief overview of compiling a C program and running it on Waputer. A blog linked from the main site has a long-form explanation of Waputer and my motivations if you want additional reading. https://waputer.app April 18, 2026 at 12:46AM
Show HN: Smol machines – subsecond coldstart, portable virtual machines https://ift.tt/ZBLptF2
Show HN: Smol machines – subsecond coldstart, portable virtual machines https://ift.tt/Ur5cJgS April 18, 2026 at 12:18AM
Show HN: Bird, a CLI for Tired Brains https://ift.tt/3XBzHEO
Show HN: Bird, a CLI for Tired Brains https://ift.tt/hSZ4xpo April 18, 2026 at 12:13AM
Show HN: PanicLock – Close your MacBook lid, disable TouchID -> password unlock https://ift.tt/QFPhEV5
Show HN: PanicLock – Close your MacBook lid, disable TouchID -> password unlock https://ift.tt/ivusXmS April 17, 2026 at 11:38PM
Thursday, April 16, 2026
Show HN: EDDI – Multi-agent AI engine where agent logic lives in JSON, not code https://ift.tt/FLnKJU5
Show HN: EDDI – Multi-agent AI engine where agent logic lives in JSON, not code I started EDDI in 2006 as a rule-based dialog engine. Back then it was pattern matching and state machines. When LLMs showed up, the interesting question wasn't "how do I call GPT" but "how do I keep control over what the AI does in production?"

My answer: agent logic belongs in JSON configs, not code. You describe what an agent should do, which LLM to use, what tools it can call, and how it should behave. The engine reads that config and runs it. No dynamic code execution, ever. The LLM cannot run arbitrary code by design. The engine is strict so the AI can be creative.

v6 is the version where this actually became practical. You can have groups of agents debating a topic in five different orchestration styles (round table, peer review, devil's advocate...). Each agent can use a different model. A cascading system tries cheap models first and escalates to expensive ones only when confidence is low. It also implements MCP as both server and client, so you can control EDDI from Claude Desktop or Cursor, and Google's A2A protocol so agents can discover each other across platforms.

The whole thing runs on Java 25 with Quarkus, ships as a single Docker image, and installs with one command. Open source since 2017, Apache 2.0. Would love to hear thoughts on the architecture and feature set. If you have ideas for what's missing or what you'd want from a system like this, I'm all ears; always looking for good input on the roadmap. https://ift.tt/Rp83Xwo April 16, 2026 at 09:11PM
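The cheap-models-first cascade is plain control flow at heart. Here is a hedged sketch: the (answer, confidence) interface and the threshold are simplifications for illustration, not EDDI's actual API:

```python
# Hedged sketch of a cheap-models-first cascade. Each model call is assumed
# to return (answer, confidence in [0, 1]); names and threshold are
# illustrative, not EDDI's real interface.

def cascade(prompt, models, threshold=0.8):
    """models: list of (name, call_fn) ordered cheapest first.
    call_fn(prompt) -> (answer, confidence)."""
    name, answer = None, None
    for name, call in models:
        answer, conf = call(prompt)
        if conf >= threshold:
            break  # a cheaper model was confident enough: stop escalating
    return name, answer
```

The design trade-off is latency versus cost: a low-confidence answer from a cheap model costs an extra round trip, but most prompts never reach the expensive tier.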
Show HN: CodeBurn – Analyze Claude Code token usage by task https://ift.tt/4pnvwDZ
Show HN: CodeBurn – Analyze Claude Code token usage by task Built this after realizing I was spending ~$1400/week on Claude Code with almost no visibility into what was actually consuming tokens. Tools like ccusage give a cost breakdown per model and per day, but I wanted to understand usage at the task level. CodeBurn reads the JSONL session transcripts that Claude Code stores locally (~/.claude/projects/) and classifies each turn into 13 categories based on tool usage patterns (no LLM calls involved). One surprising result: about 56% of my spend was on conversation turns with no tool usage. Actual coding (edits/writes) was only ~21%. The interface is an interactive terminal UI built with Ink (React for terminals), with gradient bar charts, responsive panels, and keyboard navigation. There’s also a SwiftBar menu bar integration for macOS. Happy to hear feedback or ideas. https://ift.tt/dbt8nS1 April 14, 2026 at 05:57AM
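The classification idea, reading JSONL turns and bucketing them by tool usage, can be sketched like this. The field names ("content", "tokens") and the three buckets are simplified stand-ins for Claude Code's actual transcript schema and CodeBurn's 13 categories:

```python
# Simplified sketch: classify transcript turns by tool usage and sum tokens
# per category, with no LLM calls involved. Field names and the three
# buckets below are illustrative, not the real transcript schema.
import json

def classify_turn(turn):
    tools = [b.get("name", "") for b in turn.get("content", [])
             if isinstance(b, dict) and b.get("type") == "tool_use"]
    if not tools:
        return "conversation"  # no tool calls at all in this turn
    if any(t in ("Edit", "Write") for t in tools):
        return "coding"        # actual file edits or writes
    return "other_tool_use"

def spend_by_category(jsonl_text):
    totals = {}
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        turn = json.loads(line)
        cat = classify_turn(turn)
        totals[cat] = totals.get(cat, 0) + turn.get("tokens", 0)
    return totals
```

A breakdown like this is what makes the post's headline finding visible: turns with no tool usage at all can dominate spend even though they feel incidental.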