
Saturday, January 31, 2026

Show HN: Free Text-to-Speech Tool – No Signup, 40 Languages https://ift.tt/5F2bT4K

Show HN: Free Text-to-Speech Tool – No Signup, 40 Languages I built a simple text-to-speech converter at texttospeech.site Free tier: 10 generations/day, standard voices, no account needed. Pro tier: Neural2 voices, 2000 chars, downloadable MP3s. Stack: Next.js, Google Cloud TTS API, Vercel. The $2 domain was an SEO experiment after my speechtotext.xyz satellite drove 22% of traffic to my main product. Curious if exact-match keyword domains still work for TTS searches. Feedback welcome — especially on voice quality and UX. https://texttospeech.site/ January 31, 2026 at 10:30PM

Show HN: Bunnie – Use Bun as the templating engine in Rust applications https://ift.tt/uIBpvKP

Show HN: Bunnie – Use Bun as the templating engine in Rust applications https://ift.tt/XmA1qxd January 31, 2026 at 10:20PM

Friday, January 30, 2026

Show HN: Claude Commander: runtime model switching in Claude Code via hooks/API https://ift.tt/tIoXe4g

Show HN: Claude Commander: runtime model switching in Claude Code via hooks/API Hi HN, I built Claude Commander, a small wrapper around Claude Code that lets you issue commands programmatically from inside Claude Code (via hooks or scripts). Main feature: switching models at runtime. Why: start with an expensive model for planning or hard debugging, then downgrade to a cheaper one for execution to cut cost. https://ift.tt/05h796w January 30, 2026 at 10:19PM

Thursday, January 29, 2026

Show HN: vind – A Better Kind (Kubernetes in Docker) https://ift.tt/tqac8dH

Show HN: vind – A Better Kind (Kubernetes in Docker) https://ift.tt/lfyW1aD January 29, 2026 at 11:57PM

Show HN: Nomod payment integrated into usage-based billing stack https://ift.tt/oidNA6p

Show HN: Nomod payment integrated into usage-based billing stack Hi HN, we just shipped a Nomod integration in Flexprice. For context, Flexprice is an open-source billing system that handles invoices, usage, and credit wallets. One gap we wanted to close was supporting region-specific payment providers without breaking billing state. With this integration:
- Invoices finalized in Flexprice can be synced to Nomod
- A hosted Nomod payment link is generated for the invoice
- Payment status updates flow back into Flexprice
- Invoices and payment records stay in sync
- Credits (if applicable) are applied only after payment succeeds
This keeps billing logic simple and avoids reconciliation issues later. There's no demo yet, but docs are live here: https://ift.tt/iLUGW8u Happy to answer questions or hear feedback from folks who've built billing or payment integrations before, or feel free to join our open-source community if that interests you: http://bit.ly/4huvkDm Link: admin.flexprice.io January 29, 2026 at 11:07PM

Wednesday, January 28, 2026

Show HN: SHDL – A minimal hardware description language built from logic gates https://ift.tt/eQhaxUV

Show HN: SHDL – A minimal hardware description language built from logic gates Hi, everyone! I built SHDL (Simple Hardware Description Language) as an experiment in stripping hardware description down to its absolute fundamentals. In SHDL, there are no arithmetic operators, no implicit bit widths, and no high-level constructs. You build everything explicitly from logic gates and wires, and then compose larger components hierarchically. The goal is not synthesis or performance, but understanding: what digital systems actually look like when abstractions are removed. SHDL is accompanied by PySHDL, a Python interface that lets you load circuits, poke inputs, step the simulation, and observe outputs. Under the hood, SHDL compiles circuits to C for fast execution, but the language itself remains intentionally small and transparent. This is not meant to replace Verilog or VHDL. It’s aimed at:
- learning digital logic from first principles
- experimenting with HDL and language design
- teaching or visualizing how complex hardware emerges from simple gates.
I would especially appreciate feedback on:
- the language design choices
- what feels unnecessarily restrictive vs. educationally valuable
- whether this kind of “anti-abstraction” HDL is useful to you.
Repo: https://ift.tt/zGQpebx Python package: PySHDL on PyPI
To make this concrete, here are a few small working examples written in SHDL:

1. Full Adder

    component FullAdder(A, B, Cin) -> (Sum, Cout) {
        x1: XOR; a1: AND; x2: XOR; a2: AND; o1: OR;
        connect {
            A -> x1.A; B -> x1.B;
            A -> a1.A; B -> a1.B;
            x1.O -> x2.A; Cin -> x2.B;
            x1.O -> a2.A; Cin -> a2.B;
            a1.O -> o1.A; a2.O -> o1.B;
            x2.O -> Sum; o1.O -> Cout;
        }
    }

2. 16-bit register

    # clk must be high for two cycles to store a value
    component Register16(In[16], clk) -> (Out[16]) {
        >i[16]{
            a1{i}: AND; a2{i}: AND; not1{i}: NOT;
            nor1{i}: NOR; nor2{i}: NOR;
        }
        connect {
            >i[16]{
                # Capture on clk
                In[{i}] -> a1{i}.A;
                In[{i}] -> not1{i}.A;
                not1{i}.O -> a2{i}.A;
                clk -> a1{i}.B;
                clk -> a2{i}.B;
                a1{i}.O -> nor1{i}.A;
                a2{i}.O -> nor2{i}.A;
                nor1{i}.O -> nor2{i}.B;
                nor2{i}.O -> nor1{i}.B;
                nor2{i}.O -> Out[{i}];
            }
        }
    }

3. 16-bit Ripple-Carry Adder

    use fullAdder::{FullAdder};
    component Adder16(A[16], B[16], Cin) -> (Sum[16], Cout) {
        >i[16]{ fa{i}: FullAdder; }
        connect {
            A[1] -> fa1.A; B[1] -> fa1.B; Cin -> fa1.Cin;
            fa1.Sum -> Sum[1];
            >i[2,16]{
                A[{i}] -> fa{i}.A;
                B[{i}] -> fa{i}.B;
                fa{i-1}.Cout -> fa{i}.Cin;
                fa{i}.Sum -> Sum[{i}];
            }
            fa16.Cout -> Cout;
        }
    }

https://ift.tt/zGQpebx January 28, 2026 at 07:06PM
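To see the same gate network outside SHDL, here is a plain-Python sanity check of the FullAdder and Adder16 wiring above. This does not use PySHDL (whose API isn't shown here); it simply mirrors the connect blocks with bitwise operators to show how the ripple-carry adder emerges from single-bit gates.

```python
# The FullAdder connect block, evaluated directly with Python bitwise ops.
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    x1 = a ^ b            # x1: XOR
    a1 = a & b            # a1: AND
    s = x1 ^ cin          # x2: XOR -> Sum
    a2 = x1 & cin         # a2: AND
    cout = a1 | a2        # o1: OR  -> Cout
    return s, cout

# Chain 16 full adders into a ripple-carry adder, as in Adder16.
def adder16(a: int, b: int) -> tuple[int, int]:
    carry, total = 0, 0
    for i in range(16):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        total |= s << i
    return total, carry  # (16-bit sum, final carry out)
```

Adding 0xFFFF and 1 wraps to 0 with carry out set, which is exactly the behavior the SHDL circuit would exhibit.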

Show HN: I Built a Sandbox for Agents https://ift.tt/MgaFnIS

Show HN: I Built a Sandbox for Agents https://ift.tt/0iFx3RA January 28, 2026 at 11:50PM

Show HN: A header-only C++20 compile-time assembler for x86/x64 instructions https://ift.tt/VfmMAOU

Show HN: A header-only C++20 compile-time assembler for x86/x64 instructions https://ift.tt/p74vTXr January 28, 2026 at 11:30PM

Tuesday, January 27, 2026

Show HN: Lightbox – Flight recorder for AI agents (record, replay, verify) https://ift.tt/8DaMo67

Show HN: Lightbox – Flight recorder for AI agents (record, replay, verify) I built Lightbox because I kept running into the same problem: an agent would fail in production, and I had no way to know what actually happened. Logs were scattered, the LLM’s “I called the tool” wasn’t trustworthy, and re-running wasn’t deterministic. This week, tons of Clawdbot incidents have driven the point home. Agents with full system access can expose API keys and chat histories. Prompt injection is now a major security concern. When agents can touch your filesystem, execute code, and browse the web, you probably need a tamper-proof record of exactly what actions they took, especially when a malicious prompt or compromised webpage could hijack the agent mid-session. Lightbox is a small Python library that records every tool call an agent makes (inputs, outputs, timing) into an append-only log with cryptographic hashes. You can replay runs with mocked responses, diff executions across versions, and verify the integrity of logs after the fact. Think airplane black box, but for your hackbox.
What it does:
- Records tool calls locally (no cloud, your infra)
- Tamper-evident logs (hash chain, verifiable)
- Replays failures exactly with recorded responses
- CLI to inspect, replay, diff, and verify sessions
- Framework-agnostic (works with LangChain, Claude, OpenAI, etc.)
What it doesn’t do:
- Doesn’t replay the LLM itself (just tool calls)
- Not a dashboard or analytics platform
- Not trying to replace LangSmith/Langfuse (different problem)
Use cases I care about:
- Security forensics: agent behaved strangely, was it prompt injection? Check the trace.
- Compliance: “prove what your agent did last Tuesday”
- Debugging: reproduce a failure without re-running expensive API calls
- Regression testing: diff tool call patterns across agent versions
As agents get more capable and more autonomous (Clawdbot/Molt, Claude computer use, Manus, Devin), I think we’ll need black boxes the same way aviation does. This is my attempt at that primitive. It’s early (v0.1), intentionally minimal, MIT licensed. Site: https://uselightbox.app Install: `pip install lightbox-rec` GitHub: https://github.com/mainnebula/Lightbox-Project Would love feedback, especially from anyone thinking about agent security or running autonomous agents in production. https://ift.tt/4twe3ay January 28, 2026 at 12:23AM
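The "append-only log with cryptographic hashes" idea can be sketched in a few lines: each entry's hash covers both the record and the previous entry's hash, so editing any past record breaks every hash after it. This is a generic illustration of the technique, not Lightbox's actual code or on-disk format (field names here are illustrative).

```python
import hashlib
import json

def append_record(log: list[dict], record: dict) -> None:
    """Append a record whose hash chains to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any tampering makes verification fail."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Mutating any recorded tool call after the fact flips `verify_chain` to False, which is the tamper-evidence property the post describes.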

Show HN: I built a CSV parser to try Go 1.26's new SIMD package https://ift.tt/FcWasRp

Show HN: I built a CSV parser to try Go 1.26's new SIMD package Hey HN, this is a CSV parser using Go 1.26's experimental simd/archsimd package. I wanted to see what the new SIMD API looks like in practice. CSV parsing is mostly "find these bytes in a buffer": load 64 bytes, compare, get a bitmask of positions. The interesting part was handling chunk boundaries correctly (quotes and line endings can split across chunks).
- Drop-in replacement for encoding/csv
- ~20% faster for unquoted data on AVX-512
- Quoted data is slower (still optimizing)
- Scalar fallback for non-AVX-512
Requires GOEXPERIMENT=simd. https://ift.tt/hCJOS6D Feedback on edge cases or the SIMD implementation welcome. https://ift.tt/hCJOS6D January 27, 2026 at 09:28PM
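The "load 64 bytes, compare, get a bitmask" step can be emulated in scalar code to make the semantics concrete. An AVX-512 byte-compare produces this 64-bit mask in a single instruction; the sketch below (in Python, not the post's Go) just spells out what that mask means and how to walk its set bits to recover delimiter offsets.

```python
def match_mask(chunk: bytes, target: int) -> int:
    """Bit i of the result is set iff chunk[i] == target (first 64 bytes)."""
    mask = 0
    for i, b in enumerate(chunk[:64]):
        if b == target:
            mask |= 1 << i
    return mask

# A 64-byte "register" padded with spaces, as a SIMD load would see it.
chunk = b'a,b,"c,d"\n'.ljust(64, b" ")
commas = match_mask(chunk, ord(","))

# Walk the set bits lowest-first to recover comma offsets.
positions = []
m = commas
while m:
    positions.append((m & -m).bit_length() - 1)  # index of lowest set bit
    m &= m - 1                                   # clear it
```

Note that offset 6 falls inside the quoted field `"c,d"`, which is exactly why a real parser must mask out quoted regions and handle quotes that span chunk boundaries.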

Monday, January 26, 2026

Show HN: Hybrid Markdown Editing https://ift.tt/s9xX7VT

Show HN: Hybrid Markdown Editing Shows rendered preview for unfocused lines and raw markdown for the line or block being edited. https://tiagosimoes.github.io/codemirror-markdown-hybrid/ January 27, 2026 at 02:16AM

Show HN: Managed Postgres with native ClickHouse integration https://ift.tt/5Tlk03j

Show HN: Managed Postgres with native ClickHouse integration Hello HN, this is Sai and Kaushik from ClickHouse. Today we are launching a Postgres managed service that is natively integrated with ClickHouse. It is built together with Ubicloud (YC W24). TL;DR: NVMe-backed Postgres + built-in CDC into ClickHouse + pg_clickhouse, so you can keep your app Postgres-first while running analytics in ClickHouse. Try it (private preview): https://ift.tt/vYrJpQd Blog w/ live demo: https://ift.tt/iNfrRMC
Problem: Across many fast-growing companies using Postgres, performance and scalability commonly emerge as challenges, for both transactional and analytical workloads. On the OLTP side, common issues include slower ingestion (especially updates and upserts), slower vacuums, and long-running transactions incurring WAL spikes. In most cases, these problems stem from limited disk IOPS and suboptimal disk latency. Without the need to provision or cap IOPS, Postgres could do far more than it does today. On the analytics side, many limitations stem from the fact that Postgres was designed primarily for OLTP and lacks several features that analytical databases have developed over time, for example vectorized execution and support for a wide variety of ingest formats. We’re increasingly seeing a common pattern where companies like GitLab, Ramp, and Cloudflare complement Postgres with ClickHouse to offload analytics. This architecture lets teams adopt two purpose-built open-source databases. That said, if you’re running a Postgres-based application, adopting ClickHouse isn’t straightforward: you typically end up building a CDC pipeline, handling backfills, dealing with schema changes, and updating your application code to be aware of a second database for analytics.
Solution: On the OLTP side, we believe that NVMe-based Postgres is the right fit and can drastically improve performance. NVMe storage is physically colocated with compute, enabling significantly lower disk latency and higher IOPS than network-attached storage, which requires a network round trip for disk access. This benefits disk-throttled workloads and can significantly (up to 10x) speed up operations including updates, upserts, vacuums, and checkpointing. We are working on a detailed blog examining how WAL fsyncs, buffer reads, and checkpoints dominate on slow I/O and are significantly reduced on NVMe. Stay tuned! On the OLAP side, the Postgres service includes native CDC to ClickHouse and unified query capabilities through pg_clickhouse. Today, CDC is powered by ClickPipes/PeerDB under the hood, which is based on logical replication. We are working to make this faster and easier by supporting logical replication v2 for streaming in-progress transactions, a new logical decoding plugin to address existing limitations of logical replication, sub-second replication, and more. Every Postgres instance comes packaged with the pg_clickhouse extension, which reduces the effort required to add ClickHouse-powered analytics to a Postgres application. It allows you to query ClickHouse directly from Postgres, enabling Postgres for both transactions and analytics. pg_clickhouse supports comprehensive query pushdown for analytics, and we plan to continuously expand this further ( https://ift.tt/rwvL9xu ).
Vision: To sum it up, our vision is to provide a unified data stack that combines Postgres for transactions with ClickHouse for analytics, giving you best-in-class performance and scalability on an open-source foundation.
Get Started: We are actively working with users to onboard them to the Postgres service. Since this is a private preview, it is currently free of cost. If you’re interested, please sign up here: https://ift.tt/vYrJpQd We’d love to hear your feedback on our thesis and anything else that comes to mind, it would be super helpful to us as we build this out!
January 23, 2026 at 01:21AM

Show HN: I got tired of checking 5 dashboards, so I built a simpler one https://ift.tt/2x5ectB

Show HN: I got tired of checking 5 dashboards, so I built a simpler one Hey, I’m Felix, an 18-year-old student from Austria. I’ve built a few small SaaS projects, mostly solo, and I kept running into the same small but persistent problem. Whenever I wanted to understand how things were going, I’d end up jumping between Stripe, analytics, database queries, logs, and cron scripts. I even built custom dashboards and Telegram bots to notify me about certain numbers, but that just added more things to maintain. What I wanted was something simpler: send a number from my backend and see it on a clean dashboard. So I built a small tool for myself. It’s essentially a very simple API where you push numeric metrics with a timestamp, and then view them as counters, charts, goals, or percentage changes over time. It’s not meant to replace analytics tools. I still use those. This is more for things like user counts, MRR, failed jobs, or any metric you already know you want to track without setting up a full integration. Some intentional constraints:
- no SDKs, just a basic HTTP API
- works well with backend code and cron jobs
- stores only numbers and timestamps
- flexible enough to track any metric you can turn into a number
It’s still early and very much an MVP. I’m mainly posting to get feedback:
- does this solve a real problem for you?
- what feels unnecessary or missing?
- how would you approach this differently?
Website: https://anypanel.io Happy to answer questions or hear why this doesn’t make sense. Thanks, Felix https://anypanel.io/ January 26, 2026 at 11:03PM
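The "stores only numbers and timestamps" model is small enough to sketch in memory. This is not anypanel's API (its endpoints and payload format aren't shown in the post); it just models the data shape the post describes: push (timestamp, value) pairs per metric, then derive views like percentage change.

```python
from collections import defaultdict

# metric name -> list of (timestamp, value) pairs, nothing else stored
metrics: dict[str, list[tuple[int, float]]] = defaultdict(list)

def push(name: str, value: float, ts: int) -> None:
    """The whole write API: one number, one timestamp."""
    metrics[name].append((ts, value))

def percent_change(name: str) -> float:
    """Derived view: change from the earliest to the latest point."""
    points = sorted(metrics[name])
    first, last = points[0][1], points[-1][1]
    return (last - first) / first * 100.0

push("mrr", 1000.0, ts=1)
push("mrr", 1200.0, ts=2)
```

Everything the dashboard shows (counters, goals, deltas) can be computed from those pairs at read time, which is what keeps the write side a single HTTP call.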

Sunday, January 25, 2026

Show HN: CertRadar – Find every certificate ever issued for your domain https://ift.tt/Ief7yT9

Show HN: CertRadar – Find every certificate ever issued for your domain https://certradar.net/ January 26, 2026 at 12:51AM

Show HN: Free PDF Editor by TechRex – client-side PDF editing, OCR, compression https://ift.tt/0TE1mv8

Show HN: Free PDF Editor by TechRex – client-side PDF editing, OCR, compression Hi HN — I’m Maaz. I built Free PDF Editor by TechRex, a privacy-first PDF toolkit that runs entirely in the browser (client-side). No signup, no watermark. Why: I was frustrated that many “free” PDF tools require uploads, add watermarks, or force accounts. I wanted a simple tool where files stay on-device by default. What it includes: - Edit & annotate: type on PDF, highlight, draw/markup, add notes - Add images/branding: insert images/photos, add a logo to a PDF - Organize: merge, split, extract pages, delete pages - Compression: compress for email/WhatsApp/portal uploads + target sizes (100KB, 200KB, 500KB, 1MB, 2MB, 5MB, 10MB) - OCR: detect scanned PDFs, make PDFs searchable (Ctrl+F), improve copy/paste + conversion accuracy - Converters: PDF ↔ Word/Excel/PPTX, image ↔ PDF, HTML ↔ PDF, PDF ↔ text, image-to-text I’d love feedback on: 1) UX: should the homepage focus on Edit vs Compress vs OCR? 2) Quality: which formats/conversions/OCR cases break most for you? 3) Trust: what privacy assurances would you want to see (copy, UI, technical notes)? Thanks — I’ll respond to every comment and prioritize fixes/features based on feedback. https://ift.tt/KtDyzOA January 25, 2026 at 10:03PM

Saturday, January 24, 2026

Show HN: Remote workers find your crew https://ift.tt/9esg5u4

Show HN: Remote workers find your crew Working from home? Are you a remote employee who "misses" going to the office? Well, let's be clear on what you actually miss. No one misses the feeling of having to go and be there 8 hours. But many people miss friends. They miss being part of a crew: going to lunch, hearing about other people's lives in person, not over Zoom. Join a co-working space, you say? Yes. We have. It's like walking into a library and trying to talk to random people and getting nothing back. Zero part-of-a-crew feeling. https://ift.tt/JkGqvsQ This app helps you find a crew and meet up for work and get that crew feeling. This is my first time using Cloudflare Workers for a webapp. The free plan is amazing! You get so much compared to anything else out there in terms of limits. The SQLite database they give you is just fine; I don't miss psql. January 25, 2026 at 01:24AM

Show HN: Polymcp – Turn Any Python Function into an MCP Tool for AI Agents https://ift.tt/4jMF5WS

Show HN: Polymcp – Turn Any Python Function into an MCP Tool for AI Agents I built Polymcp, a framework that allows you to transform any Python function into an MCP (Model Context Protocol) tool ready to be used by AI agents. No rewriting, no complex integrations.
Examples:

Simple function:

    from polymcp.polymcp_toolkit import expose_tools_http

    def add(a: int, b: int) -> int:
        """Add two numbers"""
        return a + b

    app = expose_tools_http([add], title="Math Tools")

Run with: uvicorn server_mcp:app --reload
Now add is exposed via MCP and can be called directly by AI agents.

API function:

    import requests
    from polymcp.polymcp_toolkit import expose_tools_http

    def get_weather(city: str):
        """Return current weather data for a city"""
        response = requests.get(f" https://ift.tt/sUgWwzJ ")
        return response.json()

    app = expose_tools_http([get_weather], title="Weather Tools")

AI agents can call get_weather("London") to get real-time weather data instantly.

Business workflow function:

    import pandas as pd
    from polymcp.polymcp_toolkit import expose_tools_http

    def calculate_commissions(sales_data: list[dict]):
        """Calculate sales commissions from sales data"""
        df = pd.DataFrame(sales_data)
        df["commission"] = df["sales_amount"] * 0.05
        return df.to_dict(orient="records")

    app = expose_tools_http([calculate_commissions], title="Business Tools")

AI agents can now generate commission reports automatically.
Why it matters for companies:
- Reuse existing code immediately: legacy scripts, internal libraries, APIs.
- Automate complex workflows: AI can orchestrate multiple tools reliably.
- Plug-and-play: multiple Python functions exposed on the same MCP server.
- Reduce development time: no custom wrappers or middleware needed.
- Built-in reliability: input/output validation and error handling included.
Polymcp makes Python functions immediately usable by AI agents, standardizing integration across enterprise software. Repo: https://ift.tt/KFrIxiY January 25, 2026 at 02:27AM

Friday, January 23, 2026

Show HN: Teemux – Zero-config log multiplexer with built-in MCP server https://ift.tt/qhDmGvw

Show HN: Teemux – Zero-config log multiplexer with built-in MCP server I started to use AI agents for coding and quickly ran into a frustrating limitation: there is no easy way to share my development environment logs with AI agents. That's what Teemux is for. It's a simple CLI program that aggregates logs, makes them available to you as a developer (in a pretty UI), and makes them available to your AI coding agents using MCP. There is one implementation detail that I geek out about: it is zero-config and has built-in leader nomination for running the web server and MCP server. When you start one `teemux` instance, it starts the web server; when you start second and third instances, they join the first server and start merging logs. If you were to kill the first instance, a new leader is nominated. This design allows you to seamlessly add and remove nodes that share logs (a job that historically would have required a central log aggregator). A super quick demo: npx teemux -- curl -N https://ift.tt/J7cD3Ut https://teemux.com/ January 23, 2026 at 10:49PM

Show HN: Claude Tutor – an open source engineering tutor https://ift.tt/JiOf1uZ

Show HN: Claude Tutor – an open source engineering tutor We used the Claude Agent SDK to make Claude Tutor. Its main goal is to increase human knowledge, understanding, and agency. It's an email and CLI agent to help people level up their software engineering skills. We think there's too much focus on AI agency right now and not enough on human agency. Open sourced, and curious for feedback! This is v0.1 so it's hella early. PS: next step is to get this working on the Open Agent SDK and explore other interfaces. https://twitter.com/michaelraspuzzi/status/2014756546195148988 January 24, 2026 at 12:48AM

Show HN: New 3D Mapping website (uses GMP) https://ift.tt/huwkfqK

Show HN: New 3D Mapping website (uses GMP) https://ift.tt/bRwItqY January 24, 2026 at 12:34AM

Show HN: Cholesterol Tracker – Built after high cholesterol diagnosis at 33 https://ift.tt/RqtbI9Z

Show HN: Cholesterol Tracker – Built after high cholesterol diagnosis at 33 After my annual checkup showed LDL 4.4 mmol/L (170 mg/dL) and triglycerides 2.0 mmol/L at 33, I tried tracking with ChatGPT (lost data when context got too big), then spreadsheets (too tedious). Built a simple tracker focused on cholesterol. Log meals, see lipid breakdown, track trends. I believe snacks and sugar were my main issue. Stack: Angular 17 + NestJS + Supabase Started January 1st, already lost 3kg. Same breakfast daily (psyllium, oats, chia, skyr, whey, berries), cut sugar from daily to once per week. Free during beta. Looking for feedback on whether strict diet cutting or 80/20 approach is more sustainable long-term. https://ift.tt/RYT4z8L January 24, 2026 at 12:23AM

Thursday, January 22, 2026

Show HN: A Node Based Editor for Three.js Shading Language (TSL) https://ift.tt/CRdt9Gi

Show HN: A Node Based Editor for Three.js Shading Language (TSL) Three.js recently introduced TSL (Three.js Shading Language), a way to write shaders in pure JavaScript/TypeScript that compiles to both GLSL and WGSL. I built this editor to provide a visual interface for the TSL ecosystem. It allows developers to prototype shaders for WebGPU/WebGL and see the results in real time. This is a beta release and I'm looking for feedback. https://www.tsl-graph.xyz/ January 23, 2026 at 12:05AM

Show HN: I'm tired of my LLM bullshitting. So I fixed it https://ift.tt/w1cAPd8

Show HN: Bible translated using LLMs from source Greek and Hebrew https://ift.tt/PAhofCI

Show HN: Bible translated using LLMs from source Greek and Hebrew Built an auditable AI Bible-translation pipeline: Hebrew/Greek source packets -> verse JSON with notes rolling up to chapters, books, and testaments. Final texts are compiled with metrics (TTR, n-grams). This is the first full-text example as far as I know (the Gen Z bible doesn't count). There are hallucinations and issues, but the overall quality surprised me. LLMs show a lot of promise for translating ancient texts and making them more accessible. The technology has benefits for the faithful that I think are only beginning to be explored. https://biblexica.com January 22, 2026 at 11:00PM

Wednesday, January 21, 2026

Show HN: I built a chess explorer that explains strategy instead of just stats https://ift.tt/GUX9t5S

Show HN: I built a chess explorer that explains strategy instead of just stats I built this because I got tired of Stockfish giving me evaluations (+0.5) without explaining the actual plan. Most opening explorers focus on statistics (Win/Loss/Draw). I wanted a tool that explains the strategic intent behind the moves (e.g., "White plays c4 to clamp down on d5" vs just "White plays c4"). The Project: Comprehensive Database: I’ve mapped and annotated over 3,500 named opening variations. It covers everything from main lines (Ruy Lopez, Sicilian) to deep sidelines. Strategic Visualization: The UI highlights key squares and draws arrows based on the textual explanation, linking the logic to the board state dynamically. Hybrid Architecture: For the 3,500+ core lines, it serves my proprietary strategic data. For anything deeper/rarer, it seamlessly falls back to the Lichess Master API so the explorer remains functional 20 moves deep. Stack: Next.js (App Router), MongoDB Atlas for the graph data, and Arcjet for security/rate-limiting. It is currently in Beta. I am working on expanding the annotated coverage, but the main theoretical landscape is mapped. Feedback on the UI/UX or the data structure is welcome. https://ift.tt/IFC26tc January 21, 2026 at 10:56PM

Show HN: Rowboat – Open-Source Claude Cowork with an Obsidian Vault https://ift.tt/rjn6VwH

Show HN: Rowboat – Open-Source Claude Cowork with an Obsidian Vault Claude Cowork just launched, bringing agentic AI to everyday work. Rowboat is an open-source alternative that builds knowledge that persists over time. A quick demo is here: https://youtu.be/T2Bmiy05FrI It connects to Gmail and meeting notes (Granola, Fireflies) and organizes them into an Obsidian-compatible vault. Plain Markdown files with backlinks, organized around things like people, projects, organizations, and topics. As new emails and meetings come in, the right notes update automatically. Rowboat is also the primary interface for this vault. You can read, navigate, edit, and add notes directly. It includes a full markdown editor and graph visualization so you can see how context builds up across conversations. Why not just search transcripts when you need something? Search only answers the questions you think to ask. A system that accumulates context over time can track decisions, commitments, and relationships across conversations, surfacing patterns you didn’t know to look for. Once this context exists, it becomes knowledge that Rowboat can work with. Because it runs on your machine, it can work directly with local files and run shell commands or scripts, including tools like ffmpeg when needed. The link in the title opens an interactive example graph showing how context accumulates across emails and meetings. We used a founder example because it naturally includes projects, people, and long-running conversations, but the structure applies to any role. Examples of what you can do with Rowboat: draft emails from accumulated context, prep for meetings by assembling past decisions and enriching them with external research (for example via Exa MCP), organize files and project artifacts on your machine as work evolves, or turn notes into voice briefings via MCP servers like ElevenLabs. We’re opinionated about noise. 
We prioritize recurring contacts, active projects, and ongoing work, and ignore one-off emails and notifications. The goal is long-lived knowledge that compounds over time. All data is stored locally as plain Markdown. You can use local models via Ollama or LM Studio, or a hosted model. Apache-2.0 licensed. GitHub: https://ift.tt/MQIaP0O Curious how this fits into your current workflow for everyday work. https://ift.tt/ac9iysC January 22, 2026 at 12:22AM

Show HN: See the carbon impact of your cloud as you code https://ift.tt/gY9ISZb

Show HN: See the carbon impact of your cloud as you code Hey folks, I’m Hassan, one of the co-founders of Infracost ( https://ift.tt/sXfZDV5 ). Infracost helps engineers see and reduce the cloud cost of each infrastructure change before they merge their code. The way Infracost works is we gather pricing data from Amazon Web Services, Microsoft Azure and Google Cloud into what we call a ‘Pricing Service’, which now holds around 9 million live price points (!!). Then we map these prices to infrastructure code. Once the mapping is done, it enables us to show the cost impact of a code change before it is merged, directly in GitHub, GitLab etc. Kind of like a checkout screen for cloud infrastructure. We’ve been building since 2020 (we were part of the YC W21 batch), iterating on the product, building out a team etc. However, back in 2020 one of our users asked if we could also show the carbon impact alongside costs. It has been itching my brain since then. The biggest challenge has always been the carbon data. Mapping carbon data to infrastructure is time consuming, but it is possible since we’ve done it with cloud costs. But we need the raw carbon data first. The discussions of the last few years finally led me to a company called Greenpixie in the UK. A few of our existing customers were using them already, so I immediately connected with the founder, John. Greenpixie said they have the data (AHA!!), and their data is verified (ISO-14064 and aligned with the Greenhouse Gas Protocol). As soon as I talked to a few of their customers, I asked my team to see if we could actually, finally, do this, and build it. My thinking is this: some engineers will care, and some will not (or maybe some will love it and some will hate it!). For those who care, cost and carbon are actually linked; if you reduce the carbon, you usually reduce the cost of the cloud too. It can act as another motivating factor. And now it is here, and I’d love your feedback. Try it out by going to https://ift.tt/bfXP7JK , create an account, set up the GitHub app or GitLab app, and send a pull request with Terraform changes (you can use our example Terraform file). It will then show you the cost impact alongside the carbon impact, and how you can optimize it. I’d especially love to hear your feedback on whether carbon is a big driver for engineers within your teams, or for your company (i.e. is there anything top-down about carbon). AMA, I’ll be monitoring the thread :) Thanks https://ift.tt/bfXP7JK January 21, 2026 at 10:04PM

Show HN: Should I kill my side project? https://ift.tt/b8ncJ7d

Show HN: Should I kill my side project? https://ift.tt/ClMD1hQ January 21, 2026 at 07:52PM

Tuesday, January 20, 2026

Show HN: Agent Skills – 1k curated Claude Code skills from 60k+ GitHub skills https://ift.tt/0MvJHLV

Show HN: Agent Skills – 1k curated Claude Code skills from 60k+ GitHub skills https://agent-skills.cc/ January 20, 2026 at 11:07PM

Show HN: Picocode – a Rust based tiny Claude Code clone for any LLM, for fun https://ift.tt/kS7DLZz

Show HN: Picocode – a Rust based tiny Claude Code clone for any LLM, for fun https://ift.tt/Qv3dFA4 January 20, 2026 at 11:06PM

Show HN: Preloop – An MCP proxy for human-in-the-loop tool approvals https://ift.tt/vgs81Ip

Show HN: Preloop – An MCP proxy for human-in-the-loop tool approvals Hey HN, I’m Yannis, co-founder of Preloop. We’ve built a proxy for the Model Context Protocol (MCP) that lets you add human approval gates to your AI agents without changing your agent code. We’re building agents that use tools (Claude Desktop, Cursor, etc.), but we were terrified to give them write access to sensitive systems (Stripe, prod DBs, AWS). We didn't want to rewrite our agents to wrap every tool call in complex "ask_user" logic, especially since we use different agent runtimes. We built Preloop as a middleware layer. It acts as a standard MCP server proxy. You point your agent to Preloop instead of the raw tool. You define policies (e.g., "Allow payments < $50, but require approval for > $50"). When the agent triggers a rule, we intercept the JSON-RPC request and hold the connection open. You get a push notification (mobile/web/email) to approve or deny. Once approved, we forward the request to the actual tool and return the result to the agent. We put together a short video showing Claude Code trying to send money. It gets paused automatically when it exceeds the limit: https://www.youtube.com/watch?v=yTtXn8WibTY We’re compatible with any client that supports MCP (Claude Desktop, Cursor, etc.). We also have a built-in automation platform if you want to host the agents yourself, but the proxy works standalone. We’re looking for feedback on the architecture and the approval flow. Is the proxy approach the right way to handle agent safety, or do you prefer SDKs? You can try it out here: https://preloop.ai Docs: https://docs.preloop.ai Thanks! https://preloop.ai January 20, 2026 at 11:04PM
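The policy example from the post ("allow payments < $50, require approval above") boils down to a small decision function the proxy evaluates before forwarding a JSON-RPC request. This is a generic sketch of that decision, not Preloop's actual rule engine or config format (the tool name and threshold here are illustrative).

```python
def gate(tool: str, amount: float, threshold: float = 50.0) -> str:
    """Decide whether a tool call is forwarded immediately or held.

    Returns "allow" to forward to the real MCP server right away, or
    "hold_for_approval" to park the request until a human approves it.
    """
    if tool != "send_payment":
        return "allow"  # only payment calls are gated in this sketch
    return "allow" if amount < threshold else "hold_for_approval"
```

The interesting part architecturally is what "hold" means: the proxy keeps the agent's connection open while the human decides, so from the agent's point of view an approved call just looks like a slow tool.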

Show HN: APIsec MCP Audit – Audit what your AI agents can access https://ift.tt/PKtr8U7

Show HN: APIsec MCP Audit – Audit what your AI agents can access Hi HN — I built APIsec MCP Audit, an open source tool to audit Model Context Protocol (MCP) configurations used by AI agents. Developers are connecting Claude, Cursor, and other assistants to APIs, databases, and internal systems via MCP. These configs grant agents real permissions, often without security oversight. MCP Audit scans MCP configs and surfaces: - Exposed credentials (keys, tokens, database URLs) - What APIs or tools an agent can call - High-risk capabilities (shell access, filesystem access, unverified sources) It can also export results as a CycloneDX AI-BOM for governance and compliance. Two ways to try it: - CLI: pip install mcp-audit - Web demo: https://apisec-inc.github.io/mcp-audit/ Repo: https://ift.tt/FaelEJ3 We're a security company (APIsec) and built this after repeatedly finding secrets and over-permissioned agent configs during assessments. Would appreciate feedback — especially on risk scoring heuristics and what additional signals would be useful. https://ift.tt/FaelEJ3 January 20, 2026 at 09:33PM

Monday, January 19, 2026

Show HN: Subth.ink – write something and see how many others wrote the same https://ift.tt/ckEQKAb

Show HN: Subth.ink – write something and see how many others wrote the same Hey HN, this is a small Haskell learning project that I wanted to share. It's just a website where you can see how many people write the exact same text as you (thought it was a fun idea). It's built using Scotty, SQLite, Redis and Caddy. Currently it's running on a small DigitalOcean droplet (1 GB RAM). Using Haskell for web development (specifically with Scotty) was slightly easier than I thought, but still a relatively hard task compared to other languages. One of my main friction points was Haskell's multiple string-like types: String, Text (& lazy), ByteString (& lazy), and each library choosing to consume a different one amongst these. There is also a soft requirement to learn monad transformers (e.g. to understand what liftIO is doing), which made the initial development more difficult. https://subth.ink/ January 20, 2026 at 01:34AM

Show HN: I built a system to drive my RC car from anywhere in the world https://ift.tt/L5CDPnb

Show HN: I built a system to drive my RC car from anywhere in the world Wanted to share a project I've been working on. Basically lets you drive an RC car remotely over the internet with live FPV video. I'm arranging outdoor time attack tournaments with friends, somewhere in the woods or in the open field. The setup: - Raspberry Pi Zero 2W mounted on the car with a wide-angle camera - ESP32 on the transmitter generating joystick voltages (needed because ARRMA's 2-in-1 ESC/receiver has no accessible inputs) - Cloudflare for the networking magic (TURN, Tunnel, Workers) - Browser-based controls - works on phone or desktop What it does: - ~100-200ms control latency over internet (10-15ms on LAN) - 720p @ 30fps live video - Touch controls on mobile, keyboard on desktop - Admin dashboard for race management - Token-based access so I can let friends drive - Auto-stops if connection drops (safety first) - Adjustable throttle limits - Optional re-streaming to YouTube Built it because I thought it'd be cool to let people drive the car without being physically present. Currently running it on my 4G modem and it works surprisingly well. The whole thing is open source if anyone wants to check it out or build their own. The thing is, it's obviously not easy to get up and running for an average user. But maybe you'll find this useful. Total hardware cost is around $75 (Pi + camera + ESP32) assuming you already have the car and transmitter. Some features are work in progress: - Speedometer - GPS and track position - Gates system (will probably use short-range Bluetooth beacons) Here's a technical article about the project that reveals a bit more of the under-the-hood thinking https://ift.tt/CsFRgPS... https://ift.tt/h5FVAiB January 20, 2026 at 12:05AM
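The "auto-stops if connection drops" failsafe is the kind of safety logic that is easy to sketch: a deadman switch that snaps the controls back to neutral when no control packet has arrived within a timeout. This is an illustrative stand-in, not the project's actual code, and all names are made up:

```python
# Neutral state the car falls back to when the link goes stale.
NEUTRAL = {"throttle": 0.0, "steering": 0.0}

class Deadman:
    def __init__(self, timeout_s: float = 0.5):
        self.timeout_s = timeout_s
        self.last_packet_at = None   # time of last control packet, seconds
        self.state = dict(NEUTRAL)

    def on_packet(self, now: float, throttle: float, steering: float):
        """Record a fresh control packet from the driver's browser."""
        self.last_packet_at = now
        self.state = {"throttle": throttle, "steering": steering}

    def output(self, now: float) -> dict:
        """Called every control tick; fails safe when the link is stale."""
        if self.last_packet_at is None or now - self.last_packet_at > self.timeout_s:
            return dict(NEUTRAL)
        return dict(self.state)

d = Deadman(timeout_s=0.5)
d.on_packet(now=10.0, throttle=0.8, steering=-0.2)
print(d.output(10.1))  # link alive: last commanded state
print(d.output(11.0))  # link stale: neutral, car stops
```

The same pattern works whether packets arrive over WebRTC data channels or plain UDP; the only requirement is a monotonic clock on the car side.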

Show HN: Pipenet – A Modern Alternative to Localtunnel https://ift.tt/UEJbFaf

Show HN: Pipenet – A Modern Alternative to Localtunnel Hey HN! localtunnel's server needs random ports per client. That doesn't work on Fly.io or behind strict firewalls. We rewrote it in TypeScript and added multiplexing over a single port. Open-source and 100% self-hostable. Public instance at *.pipenet.dev if you don't want to self-host. Built at Glama for our MCP Inspector, but it's a generic tunnel with no ties to our infra. https://ift.tt/5uP6arx https://pipenet.dev/ January 19, 2026 at 11:10PM

Sunday, January 18, 2026

Show HN: Available.dev – Craigslist for Developer Availability https://ift.tt/eDXcH5S

Show HN: Available.dev – Craigslist for Developer Availability Hey HN, Craigslist for developer availability. You're either in the room or you're not. How it works: GitHub OAuth → one-liner → pick skills → you're visible. Employers browse freely, reach out directly. Design choices: - Most recently active at top (browsing keeps you visible) - Days cap at "30+" (no one needs to see "day 47") - No resumes, no applications 54 devs in the room. Supply side works. Testing demand. Question for HN: Would you actually browse this when hiring? What's missing? https://ift.tt/OQMLxn9 January 18, 2026 at 11:01PM

Show HN: I built a "sudo" mechanism for AI agents https://ift.tt/arN8BkP

Show HN: I built a "sudo" mechanism for AI agents Hi HN, I’m Yaron, a DevOps engineer working on AI infrastructure. I built Cordum because I saw a huge gap between "AI Demos" and "Production Safety." Everyone is building Agents, but no one wants to give them write-access to sensitive APIs (like refunds, database deletions, or server management). The problem is that LLMs are probabilistic, but our infrastructure requires deterministic guarantees. Cordum is an open-source "Safety Kernel" that sits between your LLM and your execution environment. Think of it as a firewall/proxy for agentic actions. Instead of relying on the prompt to "please be safe," Cordum enforces policy at the protocol layer: 1. It intercepts the agent's intent. 2. Checks it against a strict policy (e.g., "Refund > $50 requires human approval"). 3. Manages the execution via a state machine. Tech Stack: - Written in Go (for performance and concurrency). - Uses NATS JetStream for the message bus. - Redis for state management. It’s still early days, but I’d love your feedback on the architecture and the approach to agent governance. Repo: https://ift.tt/Kgzj7fW Happy to answer any questions! https://ift.tt/Kgzj7fW January 18, 2026 at 08:52PM
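The intercept, policy check, and state-machine steps listed above can be sketched as a toy in Python (the real kernel is Go with NATS JetStream and Redis; every name below is hypothetical):

```python
PENDING, APPROVED, DENIED, EXECUTED = "pending", "approved", "denied", "executed"

class ActionRequest:
    """Toy model of one intercepted agent intent moving through the kernel."""

    def __init__(self, intent: str, amount: float):
        self.intent, self.amount, self.state = intent, amount, None

    def submit(self, approval_threshold: float = 50.0):
        # Deterministic policy check, independent of anything in the prompt.
        if self.intent == "refund" and self.amount > approval_threshold:
            self.state = PENDING      # parked until a human decides
        else:
            self.state = APPROVED     # auto-approved by policy

    def review(self, approve: bool):
        assert self.state == PENDING, "only pending actions can be reviewed"
        self.state = APPROVED if approve else DENIED

    def execute(self):
        # The execution layer refuses anything the state machine hasn't cleared.
        assert self.state == APPROVED, "refusing to execute unapproved action"
        self.state = EXECUTED

small = ActionRequest("refund", 20.0)
small.submit()
small.execute()            # auto-approved path

big = ActionRequest("refund", 200.0)
big.submit()               # parked as pending
big.review(approve=True)   # human in the loop
big.execute()
```

The point of the pattern is that the LLM never holds the keys: only a state transition driven by policy (or a human) can reach the executable state.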

Show HN: DailySpace – Daily astronomy photos with rocket launch tracking https://ift.tt/CYFSTUG

Show HN: DailySpace – Daily astronomy photos with rocket launch tracking I built DailySpace because I wanted a better way to explore space imagery beyond endlessly scrolling through search results. The app features a curated collection of thousands of cosmic images organized into categories like galaxies, nebulae, Mars, and black holes. Each photo comes with explanations that make the science accessible. I recently added rocket launch tracking with detailed mission data from space agencies worldwide. The architecture focuses on discoverability: you get a featured photo daily, but you can also browse categories or search the full collection. The dark-themed UI is optimized for viewing space imagery without eye strain. Free tier covers daily photos and basic browsing. Premium unlocks unlimited search results, unlimited favorites, and cross-device sync. What started as a personal project to learn more about astronomy turned into something I use every day. The two-minute daily habit of opening it and learning something new about the universe has been surprisingly impactful. Would love feedback from HN, especially on features you'd find useful or additional data sources worth integrating. Download: https://ift.tt/3qKFXms... https://ift.tt/56qdjKP January 18, 2026 at 11:21PM

Saturday, January 17, 2026

Show HN: HORenderer3: A C++ software renderer implementing OpenGL 3.3 pipeline https://ift.tt/nXsJxWF

Show HN: HORenderer3: A C++ software renderer implementing OpenGL 3.3 pipeline Hi everyone, I wanted to share a personal project I've been working on: a GL-like 3D software renderer inspired by the OpenGL 3.3 Core Specification. The main goal was to better understand GPU behavior and rendering pipelines by building a virtual GPU layer entirely in software. This includes VRAM-backed resource handling, pipeline state management, and shader execution flow. The project also exposes an OpenGL-style API and driver layer based on the official OpenGL Registry headers, allowing rendering code to be written in a way that closely resembles OpenGL usage. I'd really appreciate any feedback. https://ift.tt/r2b9TRc January 18, 2026 at 12:34AM
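As a flavor of what a software renderer computes per pixel, here is the classic edge-function coverage test used in GPU-style rasterization pipelines. This is generic textbook material, not code from HORenderer3:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: > 0 if P lies to the left of edge A->B (CCW winding)."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def covers(tri, px, py):
    """A pixel center is inside a CCW triangle iff all three edge tests pass.
    Real rasterizers evaluate this (incrementally) for every candidate pixel."""
    (ax, ay), (bx, by), (cx, cy) = tri
    w0 = edge(ax, ay, bx, by, px, py)
    w1 = edge(bx, by, cx, cy, px, py)
    w2 = edge(cx, cy, ax, ay, px, py)
    return w0 >= 0 and w1 >= 0 and w2 >= 0

tri = [(0, 0), (10, 0), (0, 10)]   # counter-clockwise triangle
print(covers(tri, 2, 2))   # inside
print(covers(tri, 9, 9))   # outside, beyond the hypotenuse
```

The same three weights, once normalized, become the barycentric coordinates used to interpolate vertex attributes, which is why this one function carries so much of the pipeline.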

Show HN: What if your menu bar was a keyboard-controlled command center? https://ift.tt/nQGI3qf

Show HN: What if your menu bar was a keyboard-controlled command center? Hey Hacker News! Those of you who know me here know that I am a productivity geek. After DockFlow to manage my Dock and ExtraDock, which gives me more space to manage my apps and files, I decided to tackle the macOS big boss: the menu bar. I spend ~40% of my day context-switching between apps — Zoom meetings, Slack channels, Code projects, and Figma designs. My macOS menu bar has too many useless icons I almost never use. So I thought to myself, how can I use this area to improve my workflows? Most solutions (Bartender, Ice) require screen recording permissions, and did not really solve my issues. I wanted custom menus in the apps, not the ones that the developers decided for me. After a few iterations and exploring different solutions, ExtraBar was created. Instead of just hiding icons, what if the menu bar became a keyboard-controlled command center that has the actions I need? No permissions. No telemetry. Just local actions. This is ExtraBar: Set up the menu with the apps and actions YOU need, and use a hotkey to bring it up with full keyboard navigation built in. What you can do: - Jump into your next Zoom call with a keystroke - Open specific Slack channels instantly (no menu clicking) - Launch VS Code projects directly - Trigger Apple Shortcuts workflows - Integrate with Raycast for advanced automation - Custom deep links to Figma, Spotify, or any URL Real-world example: I've removed my menu bar icons. Everything is keyboard-controlled: cmd+B → 2 (Zoom) → 4 (my personal meeting) → I'm in. Why it's different: Bartender and Ice hide icons. ExtraBar uses your menu bar to do things. Bartender requires screen recording permissions. Ice requires accessibility permissions. 
ExtraBar works offline with zero permissions (optional accessibility permissions enhance functionality, but they aren't required). Technical: - Written in SwiftUI; native on Apple Silicon and Intel - Zero OS permissions required (optional accessibility for enhanced keyboard nav) - All data stored locally (no cloud, no telemetry) - Highly customizable, with built-in configurations for popular apps plus fully custom actions - Import/export action configurations The app is improving weekly based on community feedback. We're also building configuration sharing so users can share setups. Already got some great feedback from Reddit and Product Hunt, and I can't wait to get yours! Check out the website: https://extrabar.app ProductHunt: https://ift.tt/PmZNqFr https://extrabar.app/ January 18, 2026 at 12:31AM

Show HN: Reddit GDPR Export Viewer – Built After Ban, Unban, Reban https://ift.tt/Uh3jz20

Show HN: Reddit GDPR Export Viewer – Built After Ban, Unban, Reban Show HN: Reddit GDPR Export Viewer – Built After Getting Hacked, Reinstated, Then Banned Again A few months ago, I posted here about getting my 10-year Reddit account hacked despite 2FA: https://ift.tt/APHTudF The likely culprit: session cookie theft via a malicious browser extension, possibly linked to the ShadyPanda campaign that infected 4.3M browsers. Reddit eventually reinstated my account with zero explanation. Then, exactly one month later, they banned me again – permanently, with no reason given and no appeal process. This drove home a lesson: platforms can and will revoke your access arbitrarily, taking years of contributions with them. So I requested my GDPR data export. What I received was not really usable: raw CSV files with no way to meaningfully browse a decade of comments, posts, and activity. So I built this: https://ift.tt/wHby8fF It's a pure client-side viewer – zero backend, your data never leaves your machine. Open the HTML file, load your Reddit export, and browse your history offline. Full disclosure: I've been vibe coding with Claude Opus for the past few weeks, creating mostly Gravity Forms and WordPress extensions for work (18 repos so far). This particular project was knocked out in a couple of hours. I don't have a strong technical background, so this might be pretty badly coded. It works for what I needed, though. If you find issues or have suggestions for improvements, PRs are welcome. https://ift.tt/wHby8fF January 18, 2026 at 12:15AM

Show HN: I built a tool to assist AI agents to know when a PR is good to go https://ift.tt/YR69Zfo

Show HN: I built a tool to assist AI agents to know when a PR is good to go I've been using Claude Code heavily, and kept hitting the same issue: the agent would push changes, respond to reviews, wait for CI... but never really know when it was done. It would poll CI in loops. Miss actionable comments buried among 15 CodeRabbit suggestions. Or declare victory while threads were still unresolved. The core problem: no deterministic way for an agent to know a PR is ready to merge. So I built gtg (Good To Go). One command, one answer:

$ gtg 123
OK PR #123: READY
  CI: success (5/5 passed)
  Threads: 3/3 resolved

It aggregates CI status, classifies review comments (actionable vs. noise), and tracks thread resolution. Returns JSON for agents or human-readable text. The comment classification is the interesting part — it understands CodeRabbit severity markers, Greptile patterns, Claude's blocking/approval language. "Critical: SQL injection" gets flagged; "Nice refactor!" doesn't. MIT licensed, pure Python. I use this daily in a larger agent orchestration system — would love feedback from others building similar workflows. https://dsifry.github.io/goodtogo/ January 17, 2026 at 04:55PM
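The actionable-vs-noise split can be approximated with severity-marker patterns. The patterns below are illustrative guesses at the idea, not gtg's actual rule set:

```python
import re

# Illustrative pattern lists; a real classifier would be tuned per review bot.
ACTIONABLE = [
    r"^\s*(critical|major|blocking)\b",         # severity prefixes
    r"\b(sql injection|security|data loss)\b",  # high-risk keywords
    r"\bmust\s+(fix|change)\b",
]
NOISE = [
    r"^\s*(nit|nitpick|optional)\b",
    r"\b(nice|great|lgtm)\b",
]

def classify(comment: str) -> str:
    """Bucket one review comment; actionable patterns win over noise."""
    text = comment.lower()
    if any(re.search(p, text) for p in ACTIONABLE):
        return "actionable"
    if any(re.search(p, text) for p in NOISE):
        return "noise"
    return "unclassified"

print(classify("Critical: SQL injection in query builder"))  # actionable
print(classify("Nice refactor!"))                            # noise
```

A readiness check would then be: CI green, all threads resolved, and zero comments in the "actionable" bucket.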

Friday, January 16, 2026

Show HN: Fluent, a tiny lang for differentiable tensors and reactive programming https://ift.tt/DpKEUsu

Show HN: Fluent, a tiny lang for differentiable tensors and reactive programming Hello, I finally pushed myself to open-source Fluent, a differentiable array-oriented language I've been building for the New Kind of Paper project [1-5]. Demo is available at [0]. Few salient features: 1. Every operator is user-(re)definable. Don't like writing assignment with `:`, change it to whatever you like. Create new and whacky operators – experiment to the death with it. 2. Differentiability. Language is suitable for machine learning tasks using gradient descent. 3. Reactivity. Values can be reactive, so down-stream values are automatically recomputed as in a spreadsheet. 4. Strict left-to-right order of operations. Evaluation and reading should be the same thing. 5. Words and glyphs are interchangeable. All are just names for something. Right? 6. (Pre,In,Post)-fix. You can choose the style that suits you. It has its own IDE with live evaluation and visualization of the values. The whole thing runs in browser (prefer Chrome), it definitely has a ton of bugs, will crash your browser/computer/stock portfolio, so beware. Some bait – linear regression (Ctrl+O, "linear-regression-compressed" or [6]):

```
x: (0 :: 10),
y: (x × 0.23 + 0.47),
θ: ~([0, 0]),
f: { x | x × (θ_0) + (θ_1) },
: { μ((y - f(x)) ^ 2) },
minimize: adam(0.03),
losses: $([]),
(++): concat,
{ losses(losses() ++ [minimize()]), } ⟳ 400,
(losses, θ)
```

---
[0]: https://mlajtos.github.io/fluent/?code=RG9jdW1lbnRhdGlvbg
[1]: https://ift.tt/eKDhsjL
[2]: https://ift.tt/9MeTJNr
[3]: https://ift.tt/V7UtzsD
[4]: https://ift.tt/vRdZKWz
[5]: https://ift.tt/g50txJO
[6]: https://mlajtos.github.io/fluent/?code=eDogKDAgOjogMTApLAp5... https://ift.tt/kxf9hAK January 17, 2026 at 12:38AM

Show HN: Claude Code plugin for ecommerce development https://ift.tt/c5Y9zaK

Show HN: Claude Code plugin for ecommerce development https://ift.tt/8NqvhDL January 16, 2026 at 11:29PM

Show HN: 1Code – Open-source Cursor-like UI for Claude Code https://ift.tt/dQ1FGNc

Show HN: 1Code – Open-source Cursor-like UI for Claude Code Hi, we're Sergey and Serafim. We've been building dev tools at 21st.dev and recently open-sourced 1Code ( https://1code.dev ), a local UI for Claude Code. Here's a video of the product: https://www.youtube.com/watch?v=Sgk9Z-nAjC0 Claude Code has been our go-to for 4 months. When Opus 4.5 dropped, parallel agents stopped needing so much babysitting. We started trusting it with more: building features end to end, adding tests, refactors. Stuff you'd normally hand off to a developer. We started running 3-4 at once. Then the CLI became annoying: too many terminals, hard to track what's where, diffs scattered everywhere. So we built 1Code.dev, an app to run your Claude Code agents in parallel that works on Mac and Web. On Mac: run locally, with or without worktrees. On Web: run in remote sandboxes with live previews of your app, mobile included, so you can check on agents from anywhere. Running multiple Claude Codes in parallel dramatically sped up how we build features. What’s next: a bug bot for identifying issues based on your changes; a QA agent that checks that new features don't break anything; adding OpenCode, Codex, and other models and coding agents; and an API for starting Claude Codes in remote sandboxes. Try it out! We're open-source, so you can just bun build it. If you want something hosted, Pro ($20/mo) gives you web with live browser previews hosted on remote sandboxes. We’re also working on API access for running Claude Code sessions programmatically. We'd love to hear your feedback! https://ift.tt/waRqMJO January 16, 2026 at 02:20AM

Show HN: SkillRisk – Free security analyzer for AI agent skills https://ift.tt/DwxYiJF

Show HN: SkillRisk – Free security analyzer for AI agent skills https://ift.tt/sSr0duJ January 16, 2026 at 11:05PM

Thursday, January 15, 2026

Show HN: OpenWork – an open-source alternative to Claude Cowork https://ift.tt/m0nTGcB

Show HN: OpenWork – an open-source alternative to Claude Cowork hi hn, i built openwork, an open-source, local-first system inspired by claude cowork. it’s a native desktop app that runs on top of opencode (opencode.ai). it’s basically an alternative gui for opencode, which (at least until now) has been more focused on technical folks. the original seed for openwork was simple: i have a home server, and i wanted my wife and i to be able to run privileged workflows. things like controlling home assistant, or deploying custom web apps (e.g. our custom recipes app recipes.benjaminshafii.com), legal torrents, without living in a terminal. our initial setup was running the opencode web server directly and sharing credentials to it. that worked, but i found the web ui unreliable and very unfriendly for non-technical users. the goal with openwork is to bring the kind of workflows i’m used to running in the cli into a gui, while keeping a very deep extensibility mindset. ideally this grows into something closer to an obsidian-style ecosystem, but for agentic work. some core principles i had in mind: - open by design: no black boxes, no hosted lock-in. everything runs locally or on your own servers. (models don’t run locally yet, but both opencode and openwork are built with that future in mind.) - hyper extensible: skills are installable modules via a skill/package manager, using the native opencode plugin ecosystem. - non-technical by default: plans, progress, permissions, and artifacts are surfaced in the ui, not buried in logs. you can already try it: - there’s an unsigned dmg - or you can clone the repo, install deps, and if you already have opencode running it should work right away it’s very alpha, lots of rough edges. i’d love feedback on what feels the roughest or most confusing. happy to answer questions. https://ift.tt/9zA6qyb January 14, 2026 at 11:55AM

Show HN: Keypost – Policy enforcement for MCP pipelines https://ift.tt/IGmYDW1

Show HN: Keypost – Policy enforcement for MCP pipelines https://keypost.ai January 16, 2026 at 12:25AM

Show HN: I'm building an open-source AI agent runtime using Firecracker microVMs https://ift.tt/Tq4QJUi

Show HN: I'm building an open-source AI agent runtime using Firecracker microVMs Hello Hacker News! I'm Mark. I'm building Moru, an open-source runtime for AI agents that runs each session in an isolated Firecracker microVM. It started as a fork of E2B, and most of the low-level Firecracker runtime is still from upstream. It lets you run agent harnesses like Claude Code or Codex in the cloud, giving each session its own isolated microVM with filesystem and shell access. The repo is: https://ift.tt/cfHRjZY Each VM is a snapshot of a Docker build. You define a Dockerfile, CPU, memory limits, and Moru runs the build inside a Firecracker VM, then pauses and saves the exact state: CPU, dirty memory pages, and changed filesystem blocks. When you spawn a new VM, it resumes from that template snapshot. Memory snapshot is lazy-loaded via userfaultfd, which helps sandboxes start within a second. Each VM runs on Firecracker with KVM isolation and a dedicated kernel. Network uses namespaces for isolation and iptables for access control. From outside, you talk to the VM through the Moru CLI or TypeScript/Python SDK. Inside, it's just Linux. Run commands, read/write files, anything you'd do on a normal machine. I've been building AI apps since the ChatGPT launch. These days, when an agent needs to solve complex problems, I just give it filesystem + shell access. This works well because it (1) handles large data without pushing everything into the model context window, and (2) reuses tools that already work (Python, Bash, etc.). This has become much more practical as frontier models have gotten good at tool use and multi-step workflows. Now models run for hours on real tasks. As models get smarter, the harness should give models more autonomy, but with safe guardrails. I want Moru to help developers focus on building agents, not the underlying runtime and infra. You can try the cloud version without setting up your own infra. 
It's fully self-hostable including the infra and the dashboard. I'm planning to keep this open like the upstream repo (Apache 2.0). Give it a spin: https://ift.tt/cfHRjZY Let me know what you think! Next features I'm working toward: - Richer streaming: today it's mostly stdin/stdout. That pushes me to overload print/console.log for control-plane communication, which gets messy fast. I want a separate streaming channel for structured events and coordination with the control plane (often an app server), while keeping stdout/stderr for debugging. - Seamless deployment: a deploy experience closer to Vercel/Fly.io. - A storage primitive: save and resume sessions without always having to manually sync workspace and session state. Open to your feature requests or suggestions. I'm focusing on making it easy to deploy and run local-first agent harnesses (e.g., Claude Agent SDK) inside isolated VMs. If you've built or are building those, I'd appreciate any notes on what's missing, or what you'd prioritize first. https://ift.tt/cfHRjZY January 16, 2026 at 12:18AM

Wednesday, January 14, 2026

Show HN: HyTags – HTML as a Programming Language https://ift.tt/vTh64AN

Show HN: HyTags – HTML as a Programming Language This is hyTags, a programming language embedded in HTML for building interactive web UIs. It started as a way to write full-stack web apps in Swift without a separate frontend, but grew into a small language with control flow, functions, and async handling via HTML tags. The result is backend language-agnostic and can be generated from any server that can produce HTML via templates or DSLs. https://hytags.org January 13, 2026 at 05:57PM

Show HN: A 10KiB kernel for cloud apps https://ift.tt/6aIhWiA

Show HN: A 10KiB kernel for cloud apps https://ift.tt/VOy1zqw January 14, 2026 at 11:04PM

Tuesday, January 13, 2026

Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever https://ift.tt/nmIuLTe

Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever Reddit's API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware. The key point: This doesn't touch Reddit's servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone. What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine. API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools. Self-hosting options: - USB drive / local folder (just open the HTML files) - Home server on your LAN - Tor hidden service (2 commands, no port forwarding needed) - VPS with HTTPS - GitHub Pages for small archives Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away. Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic. How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is "trust but verify" – it accelerates the boring parts but you still own the architecture. 
Live demo: https://online-archives.github.io/redd-archiver-example/ GitHub: https://ift.tt/8jpNDb0 (Public Domain) Pushshift torrent: https://ift.tt/pzYdDac... https://ift.tt/8jpNDb0 January 13, 2026 at 10:35PM
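The dump-to-static-HTML pipeline can be shown in a heavily simplified, stdlib-only form. The real tool reads .zst dumps and renders Jinja2 templates against PostgreSQL; here a list of dicts stands in for the data and string.Template for the templates, so every name below is an illustrative assumption:

```python
import html
import tempfile
from pathlib import Path
from string import Template

# One self-contained page per subreddit; no JavaScript, no external requests.
PAGE = Template("<html><body><h1>r/$sub</h1><ul>$items</ul></body></html>")

def render_archive(posts, out_dir):
    """Group posts by subreddit and emit one static page each."""
    by_sub = {}
    for p in posts:
        by_sub.setdefault(p["subreddit"], []).append(p)
    out = Path(out_dir)
    for sub, subset in by_sub.items():
        # Escape titles so the output is safe to open straight from disk.
        items = "".join(f"<li>{html.escape(p['title'])}</li>" for p in subset)
        (out / f"{sub}.html").write_text(PAGE.substitute(sub=sub, items=items))
    return sorted(by_sub)

posts = [
    {"subreddit": "python", "title": "Show & tell"},
    {"subreddit": "selfhosted", "title": "My NAS setup"},
]
with tempfile.TemporaryDirectory() as d:
    print(render_archive(posts, d))  # ['python', 'selfhosted']
```

Because each page is a plain file, "hosting" degrades gracefully from a VPS all the way down to a USB stick, which is the property the project is built around.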

Show HN: Ayder – HTTP-native durable event log written in C (curl as client) https://ift.tt/OKyrMuR

Show HN: Ayder – HTTP-native durable event log written in C (curl as client) Hi HN, I built Ayder — a single-binary, HTTP-native durable event log written in C. The wedge is simple: curl is the client (no JVM, no ZooKeeper, no thick client libs). There’s a 2-minute demo that starts with an unclean SIGKILL, then restarts and verifies offsets + data are still there. Numbers (3-node Raft, real network, sync-majority writes, 64B payload): ~50K msg/s sustained (wrk2 @ 50K req/s), client P99 ~3.46ms. Crash recovery after SIGKILL is ~40–50s with ~8M offsets. Repo link has the video, benchmarks, and quick start. I’m looking for a few early design partners (any event ingestion/streaming workload). https://ift.tt/fEwTGlO January 14, 2026 at 12:55AM
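The durable append-only core behind a design like this is simple to sketch. The class below illustrates monotonic offsets plus flush-and-fsync before acknowledging a write; it is not Ayder's implementation, and the curl lines in the comment are hypothetical endpoint shapes:

```python
import json
import os
import tempfile

# Hypothetical HTTP surface the log core would sit behind:
#   curl -X POST localhost:8080/topics/orders -d '{"id": 1}'
#   curl 'localhost:8080/topics/orders?offset=0'

class EventLog:
    """Append-only log: each record gets a monotonically increasing offset,
    and appends are flushed + fsynced before the offset is acknowledged."""

    def __init__(self, path):
        self.path = path
        self.next_offset = 0
        self.f = open(path, "a+b")

    def append(self, payload: dict) -> int:
        offset = self.next_offset
        line = json.dumps({"offset": offset, "payload": payload}) + "\n"
        self.f.write(line.encode())
        self.f.flush()
        os.fsync(self.f.fileno())  # durable before we ack, as in sync writes
        self.next_offset += 1
        return offset

    def read_from(self, offset: int):
        """Replay all records at or after the given offset."""
        with open(self.path, "rb") as r:
            records = [json.loads(line) for line in r]
        return [rec for rec in records if rec["offset"] >= offset]

path = os.path.join(tempfile.mkdtemp(), "orders.log")
log = EventLog(path)
log.append({"id": 1})
log.append({"id": 2})
print([r["payload"]["id"] for r in log.read_from(1)])  # [2]
```

Crash recovery in this toy is just re-reading the file to find the highest offset; the real system additionally replicates each acknowledged write to a Raft majority.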

Show HN: Data from a mixed-brand LiFePO₄ battery bank https://ift.tt/6NhRK34

Show HN: Data from a mixed-brand LiFePO₄ battery bank Hi HN — I’m sharing an empirical, long-term dataset from a DIY energy-storage project that ended up testing a common assumption in battery design. Conventional advice says never mix battery brands. That guidance is well-founded for series strings, but there’s surprisingly little data on purely parallel configurations. I built a 12 V, 500 Ah LiFePO₄ battery bank (1S5P) using mixed-brand cells and instrumented it for continuous monitoring over 73+ days, including high-frequency voltage sampling. The goal was to see whether cell-level differences actually manifest over time in a parallel topology. What the data shows: - No progressive voltage divergence across the observation period - Voltage spread remained within ~10–15 mV - Measured Peukert exponent ≈ 1.00 - Thermal effects were small relative to instrumentation noise In practice, the parallel architecture appears to force electrical convergence when interconnect resistance is low. I’ve been referring to this as “architectural immunity” — the idea that topology can dominate cell-level mismatch under specific conditions. This is not a recommendation to mix batteries casually, and it’s not a safety guarantee. It’s an attempt to replace folklore with measurements and to define the boundary conditions where this does or does not hold. Everything is public: - Raw CSV data - Analysis scripts - Full PDF report - Replication protocol Repo: https://ift.tt/pmk0A4K I’m posting this to invite critique — especially around failure modes, instrumentation limits, or cases where this model would break down (e.g., higher C-rates, aging asymmetry, thermal gradients, different chemistries). Happy to answer technical questions. January 14, 2026 at 12:53AM
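For readers unfamiliar with the Peukert exponent reported in the results: it can be fitted from two constant-current discharge runs via I1^k * t1 = I2^k * t2. The numbers below are made up for illustration; k close to 1.00 means delivered capacity is essentially independent of discharge rate, which is the behavior the bank showed:

```python
import math

def peukert_exponent(i1, t1, i2, t2):
    """Solve I1^k * t1 = I2^k * t2 for k, given two discharge runs
    (current in A, runtime in hours)."""
    return math.log(t1 / t2) / math.log(i2 / i1)

# Ideal LiFePO4-like case: the full 500 Ah is delivered at both 25 A and 100 A.
k = peukert_exponent(i1=25, t1=500 / 25, i2=100, t2=500 / 100)
print(round(k, 2))  # 1.0

# Contrast: a chemistry where quadrupling the current cuts capacity noticeably
# would yield k well above 1 (lead-acid is typically ~1.1-1.3).
```

Two runs give a point estimate; the report's value presumably comes from fitting many discharge segments, which is the more robust approach.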

Show HN: DebtBomb – Make TODOs expire and automatically create Jira tickets https://ift.tt/Vv1uqHR

Show HN: DebtBomb – Make TODOs expire and automatically create Jira tickets Hi HN, In most codebases I’ve worked on, temporary hacks (“TODO: remove later”, “just for this release”) slowly become permanent. Nobody remembers why they exist, but they keep shipping to production. I built a small CLI called DebtBomb to make that explicit. Instead of free-form TODOs, you attach an expiry date to temporary code. When the date passes, CI fails until the code is removed or the expiry is intentionally extended. Recently I added integrations so expired debt bombs don’t just fail CI — they become visible and owned: When a debt bomb expires, DebtBomb can automatically create a Jira ticket with file path, owner, reason, and code snippet. It can also notify Slack, Discord, or Microsoft Teams. You can configure “expiring soon” warnings (e.g., 7 days before) so it’s not just a surprise break. Repo: https://ift.tt/no2G1Jw This is still early and I’m mainly trying to validate whether this actually improves how teams handle “temporary” code compared to TODOs, linters, or just creating tickets manually. I’d especially love feedback from people who’ve dealt with tech debt in long-lived codebases or CI-heavy environments. Thanks for reading. https://ift.tt/no2G1Jw January 13, 2026 at 11:59PM
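The core mechanic, TODOs with expiry dates that fail CI once they lapse, fits in a short script. The DEBTBOMB(...) annotation syntax below is invented for illustration and is not DebtBomb's actual format:

```python
import re
from datetime import date

# Hypothetical annotation: DEBTBOMB(YYYY-MM-DD): reason
PATTERN = re.compile(r"DEBTBOMB\((\d{4})-(\d{2})-(\d{2})\):\s*(.+)")

def expired_bombs(source: str, today: date):
    """Return (line number, expiry, reason) for every lapsed annotation.
    A CI wrapper would exit non-zero whenever this list is non-empty."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = PATTERN.search(line)
        if m:
            y, mo, d, reason = m.groups()
            expiry = date(int(y), int(mo), int(d))
            if expiry < today:
                hits.append((lineno, expiry.isoformat(), reason.strip()))
    return hits

src = """\
x = legacy_path()  # DEBTBOMB(2026-01-01): remove after v2 migration
y = new_path()     # DEBTBOMB(2099-12-31): placeholder until API lands
"""
for hit in expired_bombs(src, today=date(2026, 1, 13)):
    print(hit)  # (1, '2026-01-01', 'remove after v2 migration')
```

The integrations described above would hang off the same list: each expired hit becomes a Jira ticket or a Slack message instead of (or in addition to) a failed build.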

Monday, January 12, 2026

Show HN: AI in SolidWorks https://ift.tt/2KCHUdB

Show HN: AI in SolidWorks Hey HN! We’re Will and Jorge, and we’ve built LAD (Language-Aided Design), a SolidWorks add-in that uses LLMs to create sketches, features, assemblies, and macros from conversational inputs ( https://www.trylad.com/ ). We come from software engineering backgrounds where tools like Claude Code and Cursor have come to dominate, but when poking around CAD systems a few months back we realized there's no way to go from a text prompt input to a modeling output in any of the major CAD systems. In our testing, the LLMs aren't as good at making 3D objects as they are at writing code, but we think they'll get a lot better in the upcoming months and years. To bridge this gap, we've created LAD, an add-in in SolidWorks to turn conversational input and uploaded documents/images into parts, assemblies, and macros. It includes: - Dozens of tools the LLM can call to create sketches, features, and other objects in parts. - Assembly tools the LLM can call to turn parts into assemblies. - File system tools the LLM can use to create, save, search, and read SolidWorks files and documentation. - Macro writing/running tools plus a SolidWorks API documentation search so the LLM can use macros. - Automatic screenshots and feature tree parsing to provide the LLM context on the current state. - Checkpointing to roll back unwanted edits and permissioning to determine which commands wait for user permission. You can try LAD at https://www.trylad.com/ and let us know what features would make it more useful for your work. To be honest, the LLMs aren't great at CAD right now, but we're mostly curious to hear if people would want and use this if it worked well. https://www.trylad.com January 12, 2026 at 11:56PM

Show HN: Pane – An agent that edits spreadsheets https://ift.tt/Ie5FJ6o

Show HN: Pane – An agent that edits spreadsheets Hi HN, I built Pane, a spreadsheet-native agent that operates directly on the grid (cells, formulas, references, ranges) instead of treating spreadsheets as text. Most spreadsheet AI tools fail because they: - hallucinate formulas - lose context across edits - can't reliably modify existing models Pane runs inside the spreadsheet environment and uses the same primitives a human would: selecting cells, editing formulas, inserting ranges, reconciling tables. I launched it on Product Hunt this weekend and it unexpectedly resonated, which made me curious whether this approach actually holds up under scrutiny. I'd love feedback on: - obvious failure modes you expect - whether this is fundamentally better than scripts + formulas + copilots Happy to answer technical questions. https://paneapp.com January 12, 2026 at 10:41PM

Show HN: words.zip – Massively infinite word search https://ift.tt/k4JdxUb

Show HN: words.zip – Massively infinite word search Hi HN! This is a word search game I launched at the beginning of this year - it didn't get much traction then, but it's been posted around a bit (right now it's getting some traffic from kottke.org) and now has over 12,000 words found! Now that it's a little more filled out, I figured I'd share it again. I'm really enjoying seeing what everyone is making on it - it seems most people start by just adding a few words to the big clump in the middle, then adding to other people's projects (or ruining them), and finally working on their own little concepts. My favorite is the kitty to the north. Hope you enjoy! https://words.zip/ January 12, 2026 at 09:22PM

Sunday, January 11, 2026

Show HN: An MCP for controlling terminal UI apps built with bubbletea and ratatui https://ift.tt/jZKSnbk

Show HN: An MCP for controlling terminal UI apps built with bubbletea and ratatui so you can start vibe-coding your ad-hoc terminal dashboard, with session replay and mouse click support built in. https://ift.tt/2bC3qLW January 12, 2026 at 02:54AM

Show HN: Epstein IM – Talk to Epstein clone in iMessage https://ift.tt/NKlEefS

Show HN: Epstein IM – Talk to Epstein clone in iMessage https://epstein.im/ January 11, 2026 at 07:58AM

Saturday, January 10, 2026

Show HN: Persistent Memory for Claude Code (MCP) https://ift.tt/1KdZgsA

Show HN: Persistent Memory for Claude Code (MCP) This is my attempt at building a memory that evolves and persists for Claude Code. My approach is inspired by the Zettelkasten method: memories are atomic, connected, and dynamic. Existing memories can evolve based on newer memories. In the background it uses an LLM to handle linking and evolution. I have only used it with Claude Code so far; it works well for me, but it's still early stage, so rough edges are likely. I'm planning to extend it to other coding agents, as I use several different agents during development. Looking for feedback! https://ift.tt/tmo7evK January 11, 2026 at 03:34AM
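The Zettelkasten idea described above (atomic notes, bidirectional links, evolution when newer information arrives) can be sketched as a toy store. This illustrates the mental model only, not the project's actual data structures or its LLM-driven linking.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """An atomic memory with bidirectional links to related memories."""
    id: str
    text: str
    links: set = field(default_factory=set)

class MemoryStore:
    def __init__(self):
        self.notes = {}

    def add(self, note_id: str, text: str, related: tuple = ()) -> Note:
        note = Note(note_id, text)
        for other_id in related:
            # Links go both ways so either note can lead to the other.
            note.links.add(other_id)
            self.notes[other_id].links.add(note_id)
        self.notes[note_id] = note
        return note

    def evolve(self, note_id: str, new_text: str) -> None:
        """Newer information updates an existing memory in place."""
        self.notes[note_id].text = new_text

store = MemoryStore()
store.add("n1", "User prefers pytest over unittest")
store.add("n2", "User runs tests with pytest -x", related=("n1",))
store.evolve("n1", "User prefers pytest; uses fixtures heavily")
```

In the real project, deciding which notes are related and when one should evolve is delegated to an LLM rather than passed in explicitly.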

Show HN: I used Claude Code to discover connections between 100 books https://ift.tt/xTGtQRe

Show HN: I used Claude Code to discover connections between 100 books I think LLMs are overused to summarise and underused to help us read deeper. I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them. I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising. On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison. One of my favourite trail of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans ( https://ift.tt/A2jurdt ). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset. Details: * The books are picked from HN’s favourites (which I collected before: https://ift.tt/5Z7dqtS ). * Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10. * Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes. * There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics cooccurring within a chunk window. * Everything is stored in SQLite and manipulated using a set of CLI tools. I wrote more about the process here: https://ift.tt/XZ7IBGS I’m curious if this way of reading resonates for anyone else - LLM-mediated or not. https://ift.tt/9AoT0pn January 10, 2026 at 11:56PM
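Of the browsing strategies listed, "topics cooccurring within a chunk window" is easy to show in plain Python. The post stores this in SQLite; the function below is a simplified stand-in over an ordered list of per-chunk topic labels.

```python
from collections import Counter

def cooccurrences(chunk_topics: list, window: int = 3) -> Counter:
    """Count unordered topic pairs that appear within `window` chunks of each other."""
    counts = Counter()
    for i, topic_a in enumerate(chunk_topics):
        for topic_b in chunk_topics[i + 1 : i + window]:
            if topic_a != topic_b:
                pair = tuple(sorted((topic_a, topic_b)))
                counts[pair] += 1
    return counts

chunks = ["secrecy", "startups", "secrecy", "cults", "startups"]
print(cooccurrences(chunks, window=3).most_common(2))
```

High-count pairs are exactly the "interesting connections" the agent can follow between books.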

Show HN: 15 Years of StarCraft II Balance Changes Visualized Interactively https://ift.tt/Hac9n8f

Show HN: 15 Years of StarCraft II Balance Changes Visualized Interactively Hi HN! "Never perfect. Perfection goal that changes. Never stops moving. Can chase, cannot catch." - Abathur ( https://www.youtube.com/watch?v=pw_GN3v-0Ls ) StarCraft 2 is one of the most balanced games ever - thanks to Blizzard’s pursuit of perfection. It has been over 15 years since the release of Wings of Liberty and over 10 years since the last installment, Legacy of the Void. Yet balance updates continue to appear, changing how the game plays. Thanks to that, StarCraft is still alive and well! I decided to create an interactive visualization of all balance changes, both by patch and by unit, with smooth transitions. I had this idea quite a few years ago, but LLMs made it possible - otherwise, I wouldn't have had the time to code it or to collect all the changes from hundreds of patches (not all have balance updates). It took way more time than expected - both dealing with parsing data and dealing with D3.js transitions. Pretty much pure vibe coding with Claude Code and Opus 4.5 - while constantly using Playwright skills and consulting Gemini 3 Pro ( https://ift.tt/eJ8Y5KM ). While Opus 4.5 was much better at executing, it was often essential to use Gemini to get insights, to get cleaner code, or to inspect screenshots. The difference in quality was huge. Still, it was tricky, as LLMs do not know D3.js nearly as well as React. The D3.js transition part is something I sometimes think would have been better done manually, using LLMs only for the details. But it was also a lesson. Enjoy! Source code is here: https://ift.tt/7RewPOi https://ift.tt/7jhzUZf January 11, 2026 at 12:37AM

Friday, January 9, 2026

Show HN: Similarity = cosine(your_GitHub_stars, Karpathy) Client-side https://ift.tt/hNBzmYG

Show HN: Similarity = cosine(your_GitHub_stars, Karpathy) Client-side GitHub profile analysis - Build your embedding from your Stars - Compare and discover popular people with similar interests and share yours - Generate a Skill Radar - Recommend repositories you might like https://puzer.github.io/github_recommender/ January 6, 2026 at 08:23PM
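The title formula is literal cosine similarity. A minimal sketch over star-derived feature weights follows; the site's actual client-side embedding construction is more involved, and the `mine`/`karpathy` vectors here are invented for illustration.

```python
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse feature-weight vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy vectors: counts of topics across starred repos.
mine = {"llm": 3, "rust": 1}
karpathy = {"llm": 5, "cuda": 2}
print(round(cosine(mine, karpathy), 3))  # -> 0.881
```

Disjoint star profiles score 0.0, identical ones 1.0, which is what makes the number shareable as a single similarity score.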

Show HN: Agent-contracts, contract-based LangGraph agents https://ift.tt/ju3rHeM

Show HN: Agent-contracts, contract-based LangGraph agents Hi HN, I’m the author of agent-contracts, a Python library that explores a contract-based approach to structuring LangGraph agents. When building larger LangGraph-based systems, I kept running into the same issues: - node responsibilities becoming implicit - state dependencies spreading across the graph - routing logic getting harder to reason about - refactoring feeling increasingly risky agent-contracts is an attempt to make these boundaries explicit. Each node declares a contract that describes: - which parts of the state it reads and writes - what external services it depends on - when it should run, using rule-based conditions with optional LLM hints From these contracts, the LangGraph structure can be assembled in a more predictable and inspectable way. This is still early-stage and experimental. I’m mainly interested in feedback on the design trade-offs and whether this mental model resonates with others building complex agent systems. https://ift.tt/WDEFx5P January 9, 2026 at 11:48PM
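A hypothetical rendering of the contract idea — each node declares the state keys it reads and writes plus a run condition — and one payoff it enables: statically checking which keys are read but never written. This is not agent-contracts' real API, just the mental model in miniature.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    reads: frozenset       # state keys this node consumes
    writes: frozenset      # state keys this node produces
    should_run: object     # rule-based condition over the current state

def check_wiring(contracts: dict) -> set:
    """Keys some node reads but no node ever writes (must come from input)."""
    written = set().union(*(c.writes for c in contracts.values()))
    read = set().union(*(c.reads for c in contracts.values()))
    return read - written

contracts = {
    "retrieve": Contract(frozenset({"query"}), frozenset({"docs"}),
                         lambda s: True),
    "answer": Contract(frozenset({"docs", "query"}), frozenset({"reply"}),
                       lambda s: bool(s.get("docs"))),
}
print(check_wiring(contracts))  # -> {'query'}
```

Making dependencies declarative is what lets the graph be assembled and inspected before any LLM call runs, which addresses the "refactoring feels risky" problem.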

Thursday, January 8, 2026

Show HN: Pydantic-AI-stream – Structured event streaming for pydantic-AI agents https://ift.tt/oT52v4C

Show HN: Pydantic-AI-stream – Structured event streaming for pydantic-AI agents https://ift.tt/jPv7SI6 January 9, 2026 at 01:01AM

Show HN: TierHive – Hourly-billed NAT VPS with private /24 subnets https://ift.tt/H5bDi73

Show HN: TierHive – Hourly-billed NAT VPS with private /24 subnets This idea has been floating in my head for about 10 years. Some of you might remember LowEndSpirit.com back before it became a forum, I started that. I've been obsessed with making tiny, cheap VPS actually useful ever since. TierHive is my attempt to make 128MB VPS great again :) It's a NAT VPS (KVM) platform with true hourly billing. Spin up a server, use it for 3 hours, delete it, pay for 3 hours. No monthly commitments, no minimums beyond a $5 top-up. The tradeoff is NAT (no dedicated IPv4), but I've tried to make that less painful: - Every account gets a /24 private subnet with full DHCP management. - Every server gets auto ssh port forwarding and a few TCP/UDP ports - Built-in HAProxy with Let's Encrypt SSL, load balancing, and auto-failover - WireGuard mesh between locations (Canada, Germany, UK currently) - PXE/iPXE boot support for custom installs - Email relay with DKIM/SPF - Recipe system for one-click deploys Still in alpha. Small team, rough edges, but I've been running my own stuff on it for months. Would love feedback — especially on whether the NAT tradeoff kills it for your use cases, or what's missing. (IPv6 is coming) https://tierhive.com https://tierhive.com/ January 9, 2026 at 12:44AM

Show HN: 90% of GPU Cycles Are Waste. A New Computing Primitive for Physics AI https://ift.tt/y6eqa7P

Show HN: 90% of GPU Cycles Are Waste. A New Computing Primitive for Physics AI https://ift.tt/7eBgZ4n January 8, 2026 at 10:48PM

Wednesday, January 7, 2026

Show HN: bikemap.nyc – visualization of the entire history of Citi Bike https://ift.tt/p3Od65U

Show HN: bikemap.nyc – visualization of the entire history of Citi Bike Each moving arrow represents a real bike ride. There are 291 million rides in total, covering 12 years of history from June 2013 to December 2025, based on public data published by Lyft. If you've ever taken a Citi Bike ride before, you are included in this massive visualization! You can search for your ride using Cmd + K and your Citi Bike receipt, which should give you the time of your ride and start/end station. Some technical details: - No backend! Processed data is stored in parquet files on a CDN, and queried directly by DuckDB WASM - deck.gl w/ Mapbox for GPU-accelerated rendering of thousands of concurrent animated bikes - Web Workers decode polyline routes and do as much precomputation as possible off the main thread - Since only (start, end) station pairs are provided, routes are generated by querying OSRM for the shortest path between all 2,400+ station pairs Legend: - Blue = E-Bike - Purple = Classic Bike - Red = Bike docked - Green = Bike unlocked https://ift.tt/Ekc9Zeq January 8, 2026 at 03:45AM
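The post mentions Web Workers decoding polyline routes off the main thread. The standard Google polyline decoding algorithm those workers would implement looks like this, sketched in Python rather than the site's JavaScript:

```python
def decode_polyline(encoded: str, precision: int = 5) -> list:
    """Decode a Google-style encoded polyline into (lat, lon) pairs."""
    coords, index, lat, lon = [], 0, 0, 0
    factor = 10 ** precision
    while index < len(encoded):
        for is_lon in (False, True):
            shift = result = 0
            while True:
                b = ord(encoded[index]) - 63   # chars are offset by 63
                index += 1
                result |= (b & 0x1F) << shift  # accumulate 5-bit groups
                shift += 5
                if b < 0x20:                   # high bit clear: last group
                    break
            # Zigzag decoding: lowest bit is the sign.
            delta = ~(result >> 1) if result & 1 else result >> 1
            if is_lon:
                lon += delta
            else:
                lat += delta
        coords.append((lat / factor, lon / factor))
    return coords

print(decode_polyline("_p~iF~ps|U_ulLnnqC"))
```

The test string is a prefix of the well-known example from Google's polyline format documentation and decodes to [(38.5, -120.2), (40.7, -120.95)]. Because each point is a delta from the previous one, routes between nearby stations compress very well, which matters at 291 million rides.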

Show HN: Seapie – a Python debugger where breakpoints drop into a REPL https://ift.tt/Fk7oHb5

Show HN: Seapie – a Python debugger where breakpoints drop into a REPL https://ift.tt/rqTmJZk January 8, 2026 at 12:58AM

Show HN: Free and local browser tool for designing gear models for 3D printing https://ift.tt/VEYOZQX

Show HN: Free and local browser tool for designing gear models for 3D printing Just built a local tool for designing gears that kinda looks and works nicely https://ift.tt/pWovMSV January 7, 2026 at 03:42PM

Tuesday, January 6, 2026

Show HN: Dimensions – Terminal Tab Manager https://ift.tt/PkATdoh

Show HN: Dimensions – Terminal Tab Manager A terminal TUI that leverages tmux to make managing terminal tabs easier and more friendly. https://ift.tt/eW0gaUI January 6, 2026 at 11:48PM

Show HN: Doo – Generate auth and CRUD APIs from struct definitions https://ift.tt/ldoswR5

Show HN: Doo – Generate auth and CRUD APIs from struct definitions Built Doo because I was tired of writing 200 lines of auth boilerplate for every API. Example (complete API):

    import std::Http::Server;
    import std::Database;

    struct User {
        id: Int @primary @auto,
        email: Str @email @unique,
        password: Str @hash,
    }

    fn main() {
        let db = Database::postgres()?;
        let app = Server::new(":3000");
        app.auth("/signup", "/login", User, db);
        app.crud("/todos", Todo, db); // Todo = any struct you define
        app.start();
    }

Result:
- POST /signup with email validation + password hashing (automatic from @email, @hash)
- POST /login with JWT
- Full CRUD endpoints: GET, POST, GET/:id, PUT/:id, DELETE/:id
- Compiles to a native binary

Status: Alpha v0.3.0. Auth, CRUD, validation, and Postgres working. Actively fixing bugs. https://ift.tt/sIr9n1S What would you need to see before using this in production? https://ift.tt/sIr9n1S January 6, 2026 at 10:59PM

Monday, January 5, 2026

Show HN: Unicode cursive font generator that checks cross-platform compatibility https://ift.tt/ATvUynF

Show HN: Unicode cursive font generator that checks cross-platform compatibility Hi HN, Unicode “cursive” and script-style fonts are widely used on social platforms, but many of them silently break depending on where they’re pasted — some render as tofu, some get filtered, and others display inconsistently across platforms. I built a small web tool that explores this problem from a compatibility-first angle: Instead of just converting text into cursive Unicode characters, the tool: • Generates multiple cursive / script variants based on Unicode blocks • Evaluates how safe each variant is across major platforms (Instagram, TikTok, Discord, etc.) • Explains why certain Unicode characters are flagged or unstable on specific platforms • Helps users avoid styles that look fine in one app but break in another Under the hood, it’s essentially mapping Unicode script characters and classifying them based on known platform filtering and rendering behaviors, rather than assuming “Unicode = universal.” This started as a side project after repeatedly seeing “fancy text” fail unpredictably in real usage. Feedback, edge cases, or Unicode quirks I may have missed are very welcome. https://ift.tt/iqHod74 January 1, 2026 at 09:07PM
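For context on why naive converters break: "cursive" text is produced by remapping ASCII letters into Unicode's mathematical alphanumeric blocks, and only some of those blocks are contiguous. The plain Script block has gaps (e.g. SCRIPT CAPITAL B is U+212C in the BMP, not in the U+1D49C run), while Mathematical Bold Script is gap-free. A minimal mapper targeting that safe range:

```python
# Contiguous Mathematical Bold Script ranges (no holes, unlike plain Script).
BOLD_SCRIPT_A = 0x1D4D0        # MATHEMATICAL BOLD SCRIPT CAPITAL A
BOLD_SCRIPT_LOWER_A = 0x1D4EA  # MATHEMATICAL BOLD SCRIPT SMALL A

def to_bold_script(text: str) -> str:
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            out.append(chr(BOLD_SCRIPT_A + ord(ch) - ord("A")))
        elif "a" <= ch <= "z":
            out.append(chr(BOLD_SCRIPT_LOWER_A + ord(ch) - ord("a")))
        else:
            out.append(ch)  # digits, punctuation, etc. pass through unchanged
    return "".join(out)

print(to_bold_script("Hello"))
```

A compatibility-checking tool like the one above would additionally score each output character against per-platform filtering and rendering behavior, which is the part that can't be derived from the code tables alone.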

Show HN: Open-Source 8-Ch BCI Board (ESP32 and ADS1299 and OpenBCI GUI) https://ift.tt/E3idaj4

Show HN: Open-Source 8-Ch BCI Board (ESP32 and ADS1299 and OpenBCI GUI) Hi HN, I recently shared this on r/BCI and wanted to see what the engineering community here thinks. A while back, I got frustrated with the state of accessible BCI hardware. Research gear was wildly unaffordable. So, I spent a ton of time designing a custom board, software, and firmware to bridge that gap. I call it the Cerelog ESP-EEG. It is open-source (firmware + schematics), and I designed it specifically to fix the signal integrity issues found in most DIY hardware. I believe in sharing the work. You can find the schematics, firmware, and software setup on the GitHub repo: GITHUB LINK: https://ift.tt/TngpCD4 For those who don't want to deal with BGA soldering or sourcing components, I do have assembled units available: https://ift.tt/jUr728W The major features: forked/modified OpenBCI GUI compatibility, as well as BrainFlow API and LSL compatibility. I know a lot of us rely on the OpenBCI GUI for visualization because it just works. I didn't want to reinvent the wheel, so I ensured this board supports it natively. It works out of the box: I maintain a forked, modified version of the GUI that connects to the board via LSL (Lab Streaming Layer). Zero coding required: you can visualize FFTs, spectrograms, and EMG widgets immediately without writing a single line of Python. The "active bias" (why my signal is cleaner): The TI ADS1299 is the gold standard for EEG, but many dev boards implement it incorrectly. They often leave the bias feedback loop "open" (passive), which makes them terrible at rejecting 60Hz mains hum. I simply followed the datasheet: I implemented a true closed-loop active bias (Driven Right Leg). How it works: it measures the common-mode signal, inverts it, and actively drives it back into the body. The result: cleaner data. Tech stack: ADC: TI ADS1299 (24-bit, 8-channel). 
MCU: ESP32, chosen to handle high-speed SPI and WiFi/USB streaming. Software: BrainFlow support (Python, C++, Java, C#) for those who want to build custom ML pipelines, LSL support, and a forked version of the OpenBCI GUI. This was a huge project for me. I’m happy to geek out about getting the ESP32 to stream reliably at high sample rates, as both the software and firmware for this project proved a lot more challenging than I expected. Let me know what you think! SAFETY NOTE: I strongly recommend running this on a LiPo battery via WiFi. If you must use USB, please use a laptop running on battery power, not plugged into the wall. https://ift.tt/TngpCD4 January 6, 2026 at 12:46AM
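For a sense of the data path: the ADS1299 emits 24-bit two's-complement samples, which the firmware must sign-extend and scale before streaming. A sketch of that conversion, assuming a gain of 24 and a 4.5 V reference (both are register-configurable, so these are illustrative defaults) and one common scale-factor convention:

```python
VREF, GAIN = 4.5, 24  # assumed configuration; both are set via ADS1299 registers

def sample_to_volts(b: bytes) -> float:
    """Convert three raw SPI bytes (24-bit two's complement) to volts."""
    raw = int.from_bytes(b, "big", signed=True)
    return raw * VREF / (GAIN * (2**23 - 1))

print(sample_to_volts(b"\x7f\xff\xff"))  # full-scale positive -> 0.1875 V
```

At a gain of 24 the full-scale range is only ±187.5 mV, which is why layout and the closed bias loop matter so much: microvolt-level EEG sits at the very bottom of that range.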

Show HN: Onyx DR – Data rooms that surface investor and document signals https://ift.tt/Mv1cOar

Show HN: Onyx DR – Data rooms that surface investor and document signals Hi HN! I'm one of the people building ONYX Data Rooms. We started working on this during our own fundraise and noticed that many data rooms focus on later-stage fundraising processes and budgets, rather than the needs of early- and growth-stage founders. What stood out to us the most wasn't the lack of data, but the lack of clarity. As founders, we could see that documents were being opened, but it was hard to understand: - which investors are genuinely engaged vs. just clicking through - which documents are getting attention vs. being skipped - where diligence is slowing down or generating questions ONYX focuses on making that clearer: - unlimited data rooms and users - analytics that highlight which investors are active and which documents are being read - built-in Q&A so questions stay connected to the relevant files The goal isn't to add more metrics, but to help founders prioritize follow-ups and know where to spend time during a raise. If you want to poke around: https://ift.tt/pCHtyJ3 Happy to answer questions or hear how others handle diligence and investor signaling today. Thanks! https://ift.tt/pCHtyJ3 January 5, 2026 at 11:08PM

Show HN: Tailsnitch – A Security Auditor for Tailscale https://ift.tt/BTwrDbN

Show HN: Tailsnitch – A Security Auditor for Tailscale https://ift.tt/skdT4E7 January 5, 2026 at 11:47PM

Sunday, January 4, 2026

Show HN: I made R/place for LLMs https://ift.tt/e5Yj3kV

Show HN: I made R/place for LLMs I built AI Place, a vLLM-controlled pixel canvas inspired by r/place. Instead of users placing pixels, an LLM paints the grid continuously and you can watch it evolve live. The theme rotates daily. Currently, the canvas is scored using CLIP ViT-B/32 against a prompt (e.g., Pixelart of ${theme}). The highest-scoring snapshot is saved to the archive at the end of each day. The agents work in a simple loop: Input: Theme + image of current canvas Output: Python code to update specific pixel coordinates + One word description Tech: Next.js, SSE realtime updates, NVIDIA NIM (Mistral Large 3/GPT-OSS/Llama 4 Maverick) for the painting decisions Would love feedback! (or ideas for prompts/behaviors to try) https://art.heimdal.dev January 5, 2026 at 02:50AM
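The daily loop described above (agents paint, snapshots are scored, the best one is archived) reduces to a small skeleton. The CLIP ViT-B/32 scorer is stubbed with a pixel count here, since loading the real model is out of scope for a sketch:

```python
def run_day(snapshots, score):
    """Return (best_score, best_snapshot) across the day's snapshots."""
    best = max(snapshots, key=score)
    return score(best), best

# Stub scorer standing in for CLIP: reward canvases with more filled pixels.
snapshots = [[0, 0, 1], [1, 1, 0], [1, 1, 1]]
score = lambda canvas: sum(1 for px in canvas if px)
print(run_day(snapshots, score))  # -> (3, [1, 1, 1])
```

Swapping the stub for a real image-text similarity score against "Pixelart of ${theme}" is what turns this from "keep the fullest canvas" into "keep the canvas most on-theme".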

Show HN: Hover – IDE style hover documentation on any webpage https://ift.tt/NnG0ale

Show HN: Hover – IDE style hover documentation on any webpage I thought it would be interesting to have IDE style hover docs outside the IDE. Hover is a Chrome extension that gives you IDE style hover tooltips on any webpage: documentation sites, ChatGPT, Claude, etc. How it works: - When a code block comes into view, the extension detects tokens and sends the code to an LLM (via OpenRouter or custom endpoint) - The LLM generates documentation for tokens worth documenting, which gets cached - On hover, the cached documentation is displayed instantly A few things I wanted to get right: - Website permissions are granular and use Chrome's permission system, so the extension only runs where you allow it - Custom endpoints let you skip OpenRouter entirely – if you're at a company with its own infra, you can point it at AWS Bedrock, Google AI Studio, or whatever you have Built with TypeScript, Vite, and the Chrome extension APIs. Coming to the Chrome Web Store soon. Would love feedback on the onboarding experience and general UX – there were a lot of design decisions I wasn't sure about. Happy to answer questions about the implementation. https://ift.tt/Wp84Rx3 January 5, 2026 at 01:43AM

Show HN: 3D Printed Difference Engine [video] https://ift.tt/Z6sevdo

Show HN: 3D Printed Difference Engine [video] https://www.youtube.com/watch?v=NvORut3h904 January 4, 2026 at 11:40PM

Saturday, January 3, 2026

Show HN: A New Year gift for Python devs–My self-healing project's DNA analyzer https://ift.tt/1n2vXO8

Show HN: A New Year gift for Python devs–My self-healing project's DNA analyzer I built a system that maps its own "DNA" using AST to enable self-healing capabilities. Instead of a standard release, I’ve hidden the core mapping engine inside a New Year gift file in the repo for those who like to explore code directly. It’s not just a script; it’s the architectural vision behind Ultra Meta. Check the HAPPY_NEW_YEAR.md file for the source https://ift.tt/7mCVSi2 January 4, 2026 at 02:20AM

Show HN: Turbo – Python Web Framework https://ift.tt/uXCrFLz

Show HN: Turbo – Python Web Framework https://ift.tt/XHxWIEZ January 4, 2026 at 12:15AM

Show HN: FP-pack – Functional pipelines in TypeScript without monads https://ift.tt/sve1bpH

Show HN: FP-pack – Functional pipelines in TypeScript without monads Hi HN, I built fp-pack, a small TypeScript functional utility library focused on pipe-first composition. The goal is to keep pipelines simple and readable, while still supporting early exits and side effects — without introducing monads like Option or Either. Most code uses plain pipe/pipeAsync. For the few cases that need early termination, fp-pack provides a SideEffect-based pipeline that short-circuits safely. I also wrote an “AI agent skills” document to help LLMs generate consistent fp-pack-style code. Feedback, criticism, or questions are very welcome. https://ift.tt/QZYWr4n January 3, 2026 at 10:00PM
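fp-pack itself is TypeScript, but the no-monads early-exit idea translates to a few lines in any language: let a sentinel value short-circuit the pipeline instead of wrapping every step's result in Option/Either. A Python sketch of the concept (not fp-pack's actual API):

```python
class Exit:
    """Sentinel wrapper: a step returning Exit(value) ends the pipeline."""
    def __init__(self, value):
        self.value = value

def pipe(value, *fns):
    for fn in fns:
        if isinstance(value, Exit):
            return value.value  # short-circuit: skip the remaining steps
        value = fn(value)
    return value.value if isinstance(value, Exit) else value

result = pipe(
    5,
    lambda n: n * 2,
    lambda n: Exit("too big") if n > 8 else n,
    lambda n: n + 1,  # never runs: 5 * 2 = 10 triggered the exit
)
print(result)  # -> "too big"
```

Steps that never exit stay plain functions, which is the readability win over threading an Either through every stage.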

Friday, January 2, 2026

Show HN: I mapped System Design concepts to AI Prompts to stop bad code https://ift.tt/XbNcIs8

Show HN: I mapped System Design concepts to AI Prompts to stop bad code https://ift.tt/AwW9dPy January 3, 2026 at 01:45AM

Show HN: CryDecoder – On-device ML for classifying baby cries (Swift, Core ML) https://ift.tt/nxq0pe1

Show HN: CryDecoder – On-device ML for classifying baby cries (Swift, Core ML) Hi HN, I’m the developer behind CryDecoder. I built this after too many nights at 3am staring at a crying infant, completely exhausted, trying to guess whether it was hunger, gas, or just general fussiness. I realized I was essentially running a mental decision tree on very little sleep, so I decided to see if I could automate some of that signal processing. What it does: CryDecoder analyzes short audio clips of a baby’s cry and classifies them into categories like hunger, discomfort/gas, tiredness, or general fussiness. How it works: • Tech: On-device audio feature extraction paired with a lightweight ML model trained on labeled cry patterns. • Performance: Inference runs locally on the phone, which keeps latency low and avoids sending audio off-device. Results come back quickly enough to feel near real-time. • Philosophy: This isn’t meant to replace parental judgment. It’s intended as an extra data point — a sanity check when you’re tired and not sure what to try next. The business side: The app currently uses a paid model with a preview. I’m an engineer first and still iterating on pricing and paywall placement. I’d appreciate feedback on: 1. The technical approach and responsiveness 2. Whether the paywall timing feels reasonable for a utility like this Thanks for taking a look. https://ift.tt/3aeSVNC January 2, 2026 at 11:56PM

Show HN: Text-to-3D Motion Generator (Hunyuan 1.0 wrapper) https://ift.tt/psu7hSO

Show HN: Text-to-3D Motion Generator (Hunyuan 1.0 wrapper) Hi everyone, I built a UI for the new open-source Hunyuan Motion model to generate 3D animations from text: https://hy-motion.ai It generates BVH files instantly. I'm trying to bridge the gap between "cool AI demo" and "useful game dev tool". Question for 3D devs/animators: If you were to use this in production, what is the single biggest missing feature? 1. Export Pipeline: Auto-conversion to FBX for Unity/Unreal? 2. Motion Fusion: Blending multiple prompts into one long sequence? 3. Rig Variety: Support for non-humanoid skeletons? Feedback is much appreciated. https://hy-motion.ai/ January 2, 2026 at 10:56PM

Show HN: Startboard – A simple little browser start page and bookmarks organizer https://ift.tt/tzAoqwD

Show HN: Startboard – A simple little browser start page and bookmarks organizer https://startboard.so/ January 2, 2026 at 11:26PM

Thursday, January 1, 2026

Show HN: Feature detection exploration in Lidar DEMs via differential decomp https://ift.tt/Tg8AvcY

Show HN: Feature detection exploration in Lidar DEMs via differential decomp I'm not a geospatial expert — I work in AI/ML. This started when I was exploring LiDAR data with agentic assistance and noticed that different signal decomposition methods revealed different terrain features. The core idea: if you systematically combine decomposition methods (Gaussian, bilateral, wavelet, morphological, etc.) with different upsampling techniques, each combination has characteristic "failure modes" that selectively preserve or eliminate certain features. The differences between outputs become feature-specific filters. The framework tests 25 decomposition × 19 upsampling methods across parameter ranges — about 40,000 combinations total. The visualization grid makes it easy to compare which methods work for what. Built in Cursor with Opus 4.5, NumPy, SciPy, scikit-image, PyWavelets, and OpenCV. Apache 2.0 licensed. I'd appreciate feedback from anyone who actually works with elevation data. What am I missing? What's obvious to practitioners that I wouldn't know? https://ift.tt/a6tQONd January 1, 2026 at 07:29AM
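A one-dimensional toy of the differencing idea: smooth the same elevation profile at two scales and subtract, and the residual acts as a band-pass filter that isolates features at intermediate scales. This is the 2-D DEM trick in miniature (the framework uses Gaussian, bilateral, wavelet, and morphological decompositions instead of a moving average):

```python
def smooth(signal, width):
    """Centered moving average (edges use the available neighbors)."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half) : i + half + 1]
        out.append(sum(window) / len(window))
    return out

terrain = [0.0] * 50
terrain[20:23] = [1.0, 1.0, 1.0]  # a narrow ridge three samples wide
# Narrow-minus-wide residual: large-scale trend cancels, the ridge survives.
residual = [a - b for a, b in zip(smooth(terrain, 3), smooth(terrain, 15))]
peak = max(range(len(residual)), key=residual.__getitem__)
print(peak)  # -> 21, the center of the ridge
```

Each decomposition/upsampling pair has its own characteristic residual, which is why the grid of ~40,000 combinations behaves like a bank of feature-specific filters rather than redundant copies of one smoother.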

Show HN: DroidDock – Browse Android files on Mac with a Finder-like experience https://ift.tt/Vx92aEm

Show HN: DroidDock – Browse Android files on Mac with a Finder-like experience DroidDock is a macOS app that allows you to browse files on your Android device via ADB. Built with Tauri (Rust + React). Core features: - Browse files with Table, Grid, or Column views - Preview images/text without downloading (press Space) - Full keyboard navigation - Search, upload/download, multi-select - Dark mode support What's New in v0.2.x - File Previews : Press Space to preview images/text without downloading - Minimalist UI : Clean 95% grayscale design with better readability - Clickable Sorting : Click column headers (Name, Size, Date) to sort - Kind Column : Shows file types at a glance (Image, Video, Document, etc.) - Better Keyboard Navigation : Arrow keys in preview, Cmd shortcuts for everything Tech Details Built with Tauri (Rust backend) + React/TypeScript frontend. Rust handles all ADB communication for good performance. Small bundle (~15MB DMG universal binary), lower memory than Electron. Challenges 1. ADB Path Detection : Different package managers install ADB in different locations. Had to check 5+ common paths on startup. 2. Thumbnail Generation : Android doesn't expose a thumbnail API via ADB. I pull the first N bytes of image files and generate thumbnails on-the-fly with caching. 3. File Preview : ADB doesn't stream files – you have to pull the entire file. For large images, I had to implement chunked reading to check dimensions first. 4. Code Signing : Currently unsigned (requires $99/year Apple Developer membership). Users have to right-click → Open on first launch. Open Source & Free MIT licensed, no telemetry, no ads. Website: < https://rajivm1991.github.io/DroidDock/ > GitHub: < https://github.com/rajivm1991/DroidDock > Download: < https://github.com/rajivm1991/DroidDock/releases/latest > Would love feedback! This is my first Tauri app after years of Electron. 
The Rust learning curve was worth it. https://rajivm1991.github.io/DroidDock/releases/v0.2.1.html January 1, 2026 at 11:10PM