
Sunday, August 31, 2025

Show HN: Anonymous Age Verification https://ift.tt/xYsTFdO

Show HN: Anonymous Age Verification So I'm not an expert in this area, but here's an attempt at a cost-effective, anonymous age-verification flow that probably covers ~70% of use cases in the United States. The basic premise is to leverage your bank (which has already had to perform KYC on you to open an account) to attest to your age for age-restricted merchant sites (pornhub, gambling, etc.) without sharing any more information than necessary. The flow works like this:
1) You go to gambling.com
2) They request you to verify your age
3) You choose "Bank Verification"
4) You trigger a WebAuthn credential-creation flow
5) gambling.com gives you a string to copy
-------------
6) You log into your bank
7) You go to bank.com/age-verify
8) You paste in the string you were given
9) The bank verifies it/you and creates a signed payload with your age claims (over_18: true, over_21: false)
10) You copy this and go back to gambling.com
---------------
11) You paste the string back into gambling.com
12) You perform the WebAuthn auth flow
13) gambling.com verifies everything (signatures, WebAuthn, etc.)
14) gambling.com sets a session cookie and _STRONGLY_ encourages you to create an account (with a passkey). This will prevent you from having to verify your age every time you visit gambling.com
The mechanics might feel off, but this feels like it's in the neighborhood of a way to perform anonymous age verification. It is virtually free and requires extremely light infra. Banks can be incentivized with small payments, or offer it because everyone else does and they don't want to be left behind. https://gist.github.com/JWally/bf4681f79c0725eb378ec3c246cf0664 September 1, 2025 at 12:14AM
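A minimal sketch of steps 9 and 13 - the bank binding its age claims to the merchant's challenge, and the merchant verifying the result. For brevity this uses a shared-key HMAC as a stand-in for the signature; a real deployment would use an asymmetric scheme (e.g. Ed25519) so merchants only need the bank's public key. All names here are illustrative, not from the linked gist.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared secret standing in for the bank's signing key.
BANK_KEY = b"demo-bank-signing-key"

def bank_sign_claims(challenge: str, claims: dict) -> str:
    """Step 9: the bank binds its age claims to the merchant's challenge."""
    payload = json.dumps({"challenge": challenge, "claims": claims},
                         sort_keys=True).encode()
    sig = hmac.new(BANK_KEY, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def merchant_verify(token: str, expected_challenge: str):
    """Step 13: the merchant checks the signature and the challenge binding."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.b64decode(payload_b64)
    good = hmac.new(BANK_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return None  # tampered or not signed by the bank
    data = json.loads(payload)
    if data["challenge"] != expected_challenge:
        return None  # token replayed from a different session
    return data["claims"]

token = bank_sign_claims("abc123", {"over_18": True, "over_21": False})
print(merchant_verify(token, "abc123"))  # {'over_18': True, 'over_21': False}
```

The challenge binding is what makes the copied string single-use per session; the WebAuthn steps in the flow above additionally tie it to one device.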

Show HN: Pitaya – Orchestrate AI coding agents like Claude Code https://ift.tt/h753aoR

Show HN: Pitaya – Orchestrate AI coding agents like Claude Code Pitaya is a local, open-source orchestrator for AI coding agents (Claude Code, Codex CLI). It runs many agents in parallel, isolates each in Docker with its own git branch, supports pluggable Python strategies, and persists state so runs are resumable. Quickstart + short demo are in the README. https://ift.tt/UIyvsXz August 31, 2025 at 11:33PM

Show HN: How to create and use Tesseract OCR in Rust programming language? https://ift.tt/0HDI1Ru

Show HN: How to create and use Tesseract OCR in Rust programming language? In this guide, we use the rusty-tesseract crate to build an invoice-processing API. https://ift.tt/9OmQRMt August 31, 2025 at 11:43PM

Show HN: I made a game called "Funeral of Freiren." https://ift.tt/fEGJCK6

Show HN: I made a game called "Funeral of Freiren." https://ift.tt/WfAYgJj August 31, 2025 at 11:09PM

Saturday, August 30, 2025

Show HN: Give Claude Code control of your browser (open-source) https://ift.tt/6MeuR7K

Show HN: Give Claude Code control of your browser (open-source) As I started to use Claude Code to do more random tasks I realized I could basically build any CLI tool and it would use it. So I built one that controls the browser and open-sourced it. It should work with Codex or any other CLI-based agent! I have a long term idea where the models are all local and then the tool is privacy preserving because it's easy to remove PII from text, but I'd definitely not recommend using this for anything important just yet. You'll need a Gemini key until I (or someone else) figure out how to distill a local version out of that part of the pipeline. Github link: https://ift.tt/NuQFvyc https://www.cli-agents.click/ August 31, 2025 at 01:07AM

Show HN: TextPolicy – reinforcement learning for text generation on a MacBook https://ift.tt/7COXj9K

Show HN: TextPolicy – reinforcement learning for text generation on a MacBook I built TextPolicy because I wanted a way to study reinforcement learning for text generation without needing a cluster or cloud GPUs. A MacBook is enough. The toolkit is simple:
- Implements the GRPO and GSPO algorithms
- Provides a decorator interface for custom reward functions
- Includes LoRA and QLoRA utilities
- Runs on MLX, so it is efficient on Apple Silicon
It is not intended for production. The purpose is learning and experimentation: to understand algorithms, to test ideas, to see how reward shaping affects behavior. Installation is through pip: pip install textpolicy There is a minimal example in the README. I am interested in feedback on: the clarity of the API, the usefulness of the examples, and whether this lowers the barrier for people new to RL. Repository: github.com/teilomillet/textpolicy https://ift.tt/OgPEIV5 August 30, 2025 at 11:34PM
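The post mentions a decorator interface for custom reward functions but doesn't show one. Here is a minimal sketch of how such a registry might look; the `reward` decorator and function names are hypothetical, not TextPolicy's actual API.

```python
# Hypothetical decorator-based reward registry, in the spirit of the
# interface described above; not TextPolicy's actual API.
REWARDS = {}

def reward(name):
    """Register a function that scores a generated completion."""
    def wrap(fn):
        REWARDS[name] = fn
        return fn
    return wrap

@reward("length_penalty")
def length_penalty(prompt: str, completion: str) -> float:
    # Prefer concise completions: 1.0 at <= 20 tokens, decaying after.
    n = len(completion.split())
    return 1.0 if n <= 20 else 20.0 / n

@reward("mentions_keyword")
def mentions_keyword(prompt: str, completion: str) -> float:
    return 1.0 if "rust" in completion.lower() else 0.0

def total_reward(prompt, completion):
    # GRPO/GSPO-style training would score a group of sampled
    # completions like this and normalize within the group.
    return sum(fn(prompt, completion) for fn in REWARDS.values())

print(total_reward("Say hi", "Hello from Rust!"))  # 2.0
```

The appeal of the decorator pattern is that reward shaping experiments become one-function changes, which fits the stated goal of learning and experimentation.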

Show HN: A simple CLI tool to list network ports and their associated bin https://ift.tt/Zz9OdTI

Show HN: A simple CLI tool to list network ports and their associated bin https://ift.tt/BIjEmxc August 30, 2025 at 11:10PM

Friday, August 29, 2025

Show HN: An open source implementation of OpenStreetMap in Electron https://ift.tt/C8K6Fed

Show HN: An open source implementation of OpenStreetMap in Electron https://ift.tt/UHmfGzv August 30, 2025 at 03:44AM

Show HN: Magic links – Get video and dev logs without installing anything https://ift.tt/eGbPgwx

Show HN: Magic links – Get video and dev logs without installing anything Hey HN, For a while now, our team has been trying to solve a common problem: getting all the context needed to debug a bug report without the endless back-and-forth. It’s hard to fix what you can't see, and console logs, network requests, and other dev data are usually missing from bug reports. We’ve been working on a new tool called Recording Links. The idea is simple: you send a link to a user or teammate, and when they record their screen to show an issue, the link automatically captures a video of the problem along with all the dev context, like console logs and network requests. Our goal is to make it so you can get a complete, debuggable bug report in one go. We think this can save a ton of time that's normally spent on follow-up calls and emails. We’re a small team and would genuinely appreciate your thoughts on this. Is this a problem you face? How would you improve this? Any and all feedback—positive or critical—would be incredibly helpful as we continue to build. PS - you can try it out from here: https://ift.tt/6EDOMeb August 27, 2025 at 11:51AM

Show HN: FFmpeg Pages – because I was tired of fighting FFmpeg every time https://ift.tt/F6R2XOs

Show HN: FFmpeg Pages – because I was tired of fighting FFmpeg every time You ever just want to shrink a video… and suddenly you’re buried in flags, half-broken StackOverflow answers, and 10 tabs open just to figure out one command? That’s been me. Every. Single. Time. So I built FFmpeg Pages — a dead-simple collection of the commands I kept searching for. No fluff, no digging, just the stuff that actually works. https://ffmpegs.pages.dev/ August 29, 2025 at 11:25PM
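The "shrink a video" case the post opens with usually boils down to one command. As a sketch, here is how that command can be assembled programmatically; the flags are common defaults (scale filter plus libx264 CRF), not necessarily the site's exact recipes.

```python
import shlex

def shrink_command(src: str, dst: str, height: int = 720, crf: int = 23) -> str:
    """Build (but don't run) an ffmpeg command that downscales a video.

    -vf scale=-2:H keeps the aspect ratio with an even width;
    -crf trades quality for size (lower = better quality, bigger file).
    """
    args = [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",
        "-c:v", "libx264", "-crf", str(crf),
        "-c:a", "copy",
        dst,
    ]
    return shlex.join(args)

print(shrink_command("input.mp4", "small.mp4"))
# ffmpeg -i input.mp4 -vf scale=-2:720 -c:v libx264 -crf 23 -c:a copy small.mp4
```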

Show HN: OAuth for AI Agents https://ift.tt/kWFsYBR

Show HN: OAuth for AI Agents https://ift.tt/0TOWzjE August 29, 2025 at 11:02PM

Thursday, August 28, 2025

Show HN: Welcome to "Voice AI Stack" Weekly – A Home for Voice AI Builders https://ift.tt/DQad6s8

Show HN: Welcome to "Voice AI Stack" Weekly – A Home for Voice AI Builders Hey HN, This newsletter didn’t come from a growth hack or content strategy. It started with frustration. Every week, I was drowning in blogs, PR blasts, and Twitter threads trying to keep up with Voice + AI. New models dropping. Partnerships overnight. Startups in India and Asia pushing infra upgrades that no one was covering. But whenever I wanted to know what really mattered, the signal was buried under the noise. And there wasn’t a single newsletter focused on India’s Voice AI ecosystem — most only covered the US. So I built the thing I wished existed: Voice AI Stack — a newsletter on India, Asia, and global Voice AI updates.
What you’ll get every Friday:
- Product launches that actually move the Voice + AI ecosystem forward
- Infra upgrades & strategic deals (with context on why they matter)
- Advances in speech tech, translation & agent performance
- A spotlight on VideoSDK’s AI Agent features — what’s shipping, and what’s next
If you’re a developer, PM, researcher, or just curious about the future of AI voices & agents in India and beyond — this is for you.
Behind the scenes: last night at 11:30 pm, we were testing our VideoSDK AI agent. Everything was running perfectly—smooth, steady, no problems at all. Then suddenly, every agent started speaking in opera voices. Instead of answering questions, they were singing like they were on stage in Italy. We couldn’t stop laughing. Then came the panic. And finally, the fix. That’s what building in this space is really like—messy, surprising, and full of moments you don’t expect. Behind every polished demo, there are nights like this: bugs, laughter, and small wins that make the journey worth it. This newsletter is my way of opening that door for you. A peek into the experiments, the stumbles, the “wait, did that agent just…” moments that make this space exciting. Subscribe here to stay in the loop: https://ift.tt/zlOvdR3... 
And if you’ve got a friend building or curious about Voice AI — forward this to them. Let’s cut through the noise, together. See you tomorrow. Sagar Kava https://ift.tt/6EuqzKb August 28, 2025 at 10:36PM

Show HN: Yoink AI – macOS AI app that edits directly in any textfield of any app https://ift.tt/M6TopmE

Show HN: Yoink AI – macOS AI app that edits directly in any textfield of any app Hey HN, I built Yoink AI to solve my biggest frustration with AI tools: they constantly break my workflow. I was tired of copy-pasting between my apps and a chatbot just for simple edits. Yoink AI is a macOS app that brings the AI to you. With a simple hotkey (⌘ Shift Y), it works directly inside any text field, in any app. If you can type there, Yoink can write there.
Key features:
- Automatically captures the context of the text field you're in, so you don't have to manually prime it
- Create custom voices trained on your own writing samples. This helps you steer the output to match your personal style and avoid generic, robotic-sounding text
- Yoink doesn't just dump text and run. It delivers suggestions as redline edits that you can accept or reject, keeping you in full control.
It's less of a chatbot and more of a collaborative writing partner that adapts to your workflow, not the other way around. There's a free tier with 10 requests/month and we just launched a pro trial, which will get you 100 requests for the first 7 days to try it out! I'm here to answer questions and would love to hear what you think - like all early-stage startups, feedback is always deeply appreciated. https://www.useyoink.ai August 28, 2025 at 09:13PM

Wednesday, August 27, 2025

Show HN: Whose p*nis is that now? Probably the weirdest website I've ever made https://ift.tt/6YhPcOs

Show HN: Whose p*nis is that now? Probably the weirdest website I've ever made I made a site inspired by a ridiculous side project: a lift-the-flap book for adults. It mixes science facts, (bad) poetry, and way too many penis facts. NSFW-ish, interactive, and definitely not what you’d expect. https://ift.tt/iUsh1Xx August 28, 2025 at 12:57AM

Show HN: I built a robot that draws caricatures with a Sharpie https://ift.tt/aPmU5hT

Show HN: I built a robot that draws caricatures with a Sharpie Hi HN, I’ve been tinkering with this for a while and finally have it in a decent state. It’s a plotter robot that draws caricatures from photos. I trained a diffusion model (Flux Kontext LoRA) on caricature images, 3D-printed a Sharpie mount for my Ender 3, and hacked together a pipeline that goes photo → caricature → G-code. After a lot of trial and error it’s working pretty well, and I put up a little site where you can try it out. Happy to answer questions or hear any feedback. Thanks! https://ift.tt/r7CQSoI August 27, 2025 at 09:27PM
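The pipeline's last step, caricature → G-code, can be sketched in a few lines: each pen stroke becomes a pen-up travel move, a pen-down, and a series of linear moves. The Z heights and feed rate below are illustrative guesses, not the author's actual settings.

```python
def polyline_to_gcode(points, feed=1500, z_up=5.0, z_draw=0.0):
    """Turn one pen stroke (a list of (x, y) mm coordinates) into G-code.

    Sketch of the final pipeline step: lift the pen, travel to the
    stroke's start, drop the pen, then trace the remaining points.
    """
    x0, y0 = points[0]
    lines = [
        f"G0 Z{z_up}",              # pen up
        f"G0 X{x0:.2f} Y{y0:.2f}",  # rapid move to stroke start
        f"G1 Z{z_draw} F{feed}",    # pen down
    ]
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")
    lines.append(f"G0 Z{z_up}")     # pen up at stroke end
    return lines

for line in polyline_to_gcode([(0, 0), (10, 0), (10, 10)]):
    print(line)
```

On an Ender 3 this would be preceded by homing and any bed-offset setup; a Sharpie mount effectively turns the printer's Z axis into a pen lift.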

Show HN: AlphaSuite – An open-source platform for quantitative stock analysis https://ift.tt/8ulLv3R

Show HN: AlphaSuite – An open-source platform for quantitative stock analysis AlphaSuite is a comprehensive suite of tools for quantitative financial analysis, model training, backtesting, and trade management. It's designed for traders and analysts who want to build, validate, and deploy data-driven trading strategies. https://ift.tt/bHGty5j August 27, 2025 at 09:20PM

Tuesday, August 26, 2025

Show HN: Gonzo – A Go-based TUI for log analysis (OpenTelemetry/OTLP support) https://ift.tt/gGsN3bz

Show HN: Gonzo – A Go-based TUI for log analysis (OpenTelemetry/OTLP support) We built Gonzo to make log analysis faster and friendlier in the terminal. Think of it like k9s for logs — a TUI that can ingest JSON, text, or OpenTelemetry (OTLP) logs, highlight and boil up patterns, and even run AI models locally or via API to summarize logs. We’re still iterating, so ideas and contributions are welcome! https://ift.tt/KZxGf7H August 26, 2025 at 02:44AM

Monday, August 25, 2025

Show HN: Bitcoin Challenge. Try to steal a plain text private key you can use https://ift.tt/NHtjxFV

Show HN: Bitcoin Challenge. Try to steal a plain text private key you can use Hi HN, I'm releasing my round one public demo of a new browser security system I've been developing. There's a real Bitcoin private key (worth $20) in plaintext at app.redactsure.com. You can copy it, paste it, delete it, move it around - full control. But you can't see the actual characters or extract them. The challenge: break the protection and take the Bitcoin. First person wins, challenge ends.
Details:
- Requires email verification (prevents abuse, no account needed)
- 15-minute time limit per session
- Currently US only for the demo (latency)
- Verify the Bitcoin is real: https://ift.tt/HUr6bE3
Technical approach:
- Cloud-hosted browser with a real-time NER model
- Webpages are unmodified
- Think of it as selective invisibility for sensitive data. You can interact with it normally, just can't see or extract it
Looking for feedback on edge cases in the hiding/protection algorithm. Happy to answer questions about the implementation. https://ift.tt/gVeDq9s August 25, 2025 at 11:16PM

Sunday, August 24, 2025

Show HN: I Built an XSLT Blog Framework https://ift.tt/T9mpq4d

Show HN: I Built an XSLT Blog Framework A few weeks ago a friend sent me grug-brain XSLT (1), which inspired me to redo my personal blog in XSLT. Rather than just build my own blog on it, I wrote it up for others to use and published it on GitHub https://ift.tt/XZ3c0MG (2). Since others have XSLT on the mind, now seems as good a time as any to share it with the world. Evidlo@ did a fine job explaining how XSLT works (3). The short version on how to publish using this framework is:
1. Create a new post in HTML wrapped in the XML headers and footers the framework expects.
2. Tag the post so that it's unique and the framework can find it on build.
3. Add the post to the posts.xml file.
And that's it. No build system to update menus, no RSS file to update (posts.xml is the RSS file). As a reusable framework, there are likely bugs lurking in the CSS, but otherwise I'm finding it perfectly usable for my needs. Finally, it'd be a shame if XSLT were removed from the HTML spec (4); I've found it quite elegant in its simplicity. (1) https://ift.tt/1A8hKDC (2) https://ift.tt/XZ3c0MG (3) https://ift.tt/DZHsi9t (4) https://ift.tt/G3i6lko (Aside - first-time caller, long-time listener to HN, thanks!) https://ift.tt/hlXSQiJ August 25, 2025 at 12:38AM

Show HN: Komposer, AI image editor where the LLM writes the prompts https://ift.tt/8eQFgrY

Show HN: Komposer, AI image editor where the LLM writes the prompts A Flux Kontext + Mistral experiment. Upload an image, and let the AIs do the rest of the work. https://www.komposer.xyz/ August 25, 2025 at 02:06AM

Saturday, August 23, 2025

Show HN: LoadGQL – a CLI for load-testing GraphQL endpoints https://ift.tt/aKxNn42

Show HN: LoadGQL – a CLI for load-testing GraphQL endpoints Hi HN, I’ve been working with GraphQL for a while and always felt the tooling around load testing was lacking. Most tools either don’t support GraphQL natively, or they require heavy setup/config. So I built *LoadGQL* — a single-binary CLI (written in Go) that lets you quickly stress-test a GraphQL endpoint.
*What it does today (v1.0.0):*
- Runs queries against any GraphQL endpoint (no schema parsing required)
- Reports median & p95 latency, throughput (RPS), and error rate
- Supports concurrency, duration, and custom headers
- Minimal and terminal-first by design
*Roadmap:* p50/p99 latency, output formats (JSON/CSV), multiple query files.
Landing page: https://ift.tt/vcBPKJu
I’d love feedback from the HN community:
- What metrics matter most to you for GraphQL performance?
- Any sharp edges you’d expect in a GraphQL load tester?
Thanks for checking it out! https://ift.tt/yQJ7azY August 24, 2025 at 08:30AM
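For readers unfamiliar with the reported metrics: median and p95 latency come from ranking the raw request timings. A sketch of the nearest-rank method (an assumption about how any load tester, including this one, might compute them):

```python
import math

def percentile(samples_ms, p):
    """Nearest-rank percentile: smallest sample with at least p% of
    the data at or below it."""
    xs = sorted(samples_ms)
    rank = math.ceil(p / 100 * len(xs))  # 1-based nearest rank
    return xs[max(rank, 1) - 1]

# Ten request latencies in ms; a couple of slow outliers.
latencies = [12, 15, 11, 90, 14, 13, 250, 16, 12, 18]
print("median:", percentile(latencies, 50))  # 14
print("p95:", percentile(latencies, 95))     # 250
```

The example shows why p95 matters for load testing: the median looks healthy while the tail captures the outliers users actually feel.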

Show HN: I built aibanner.co to stop spending hours on marketing banners https://ift.tt/JnLU0ZP

Show HN: I built aibanner.co to stop spending hours on marketing banners https://www.aibanner.co August 24, 2025 at 07:27AM

Show HN: Python library for fetching/storing/streaming crypto market data https://ift.tt/qrZ7l9u

Show HN: Python library for fetching/storing/streaming crypto market data https://ift.tt/IXqtlvY August 23, 2025 at 11:21PM

Friday, August 22, 2025

Show HN: Pinch – macOS voice translation for real-time conversations https://ift.tt/YtZIwMF

Show HN: Pinch – macOS voice translation for real-time conversations Hey HN! I’m Christian, daily lurker, and some might remember our original launch post ( https://ift.tt/B9hZrtM ). Today we're launching Pinch for Mac, which we believe is a step-change improvement in real-time AI translation. Our vision is to make cross-lingual conversations feel as natural as regular conversations. TL;DR: during an online meeting, the app instantly transcribes and translates all audio you hear, and lets you decide when to translate your voice and when not to. It's invisible to others (like Granola), and works everywhere without any meeting bots. Try it at startpinch.com Here's a live demo we recorded this morning, without cuts: https://youtu.be/ltM2p-SosLc When we first launched Pinch, we shipped a video-conferencing solution with a human-like AI interpreter that was an active participant in your call. Our users hold the spacebar down while speaking to the translator, and when they release the spacebar the translator speaks out to the entire room. That design was intentional - it puts the task of context selection on the user and prevents people from interrupting each other awkwardly (only one person can press the spacebar at a time). It also comes with heavy tradeoffs, namely:
- Latency - up to 2x longer meetings, because everyone hears your full sentence and then the translation of your full sentence
- Friction with first-time users - customers using Pinch for external communication often meet new people each time, and we've learned of several that send out an instruction doc pre-meeting on how to join and use translation in the Pinch call. Bad signal for our UX.
- Restricting our customers to those who are meeting creators
Benefits of the desktop app:
1. It creates a virtual microphone that you can use in any meeting app
2. Instant transcription + translation means you can understand what's going on in real time and interrupt where necessary
3. Simultaneous translation - after you start speaking, the others hear your translated audio as fast as we can generate it, without interrupting your flow.
Over the last months our focus has been on developing a model and UX to support high translation accuracy while automating context selection - knowing exactly when it has enough words to start the translated sentence. We’ve rolled this out to the desktop app first. We're incredibly excited to go public beta today; you can give it a try at www.startpinch.com Cheers, - Christian https://ift.tt/a9qxJkW August 20, 2025 at 07:10PM

Show HN: Clyp – Clipboard Manager for Linux https://ift.tt/J6A9EOU

Show HN: Clyp – Clipboard Manager for Linux https://ift.tt/kQ09wPe August 22, 2025 at 11:03PM

Show HN: MockinglyAI On-Demand AI Interviewer for System Design Mock Interviews https://ift.tt/QlpIq9b

Show HN: MockinglyAI On-Demand AI Interviewer for System Design Mock Interviews An AI interviewer for software engineers to practice with on-demand mock system design interviews. https://ift.tt/rfZtj5a August 22, 2025 at 09:31PM

Show HN: AIMless – a 10 KB single file P2P chat app with zero dependencies https://ift.tt/Gxiv3fY

Show HN: AIMless – a 10 KB single file P2P chat app with zero dependencies I built AIMless, a ridiculously minimalistic, browser native chat app that fits entirely into one HTML file (10 KB). It’s decentralized, P2P, and has no build tools, no server, and no frameworks. Just you, your browser, and a copy/pasted blob or two. https://ift.tt/VYjO5pX August 22, 2025 at 08:49PM

Thursday, August 21, 2025

Show HN: Playing Piano with Prime Numbers https://ift.tt/BV1v2SQ

Show HN: Playing Piano with Prime Numbers I decided to turn prime numbers into a mini piano and see what kind of music they could make. Inspired by: https://ift.tt/QILvJlH Github: https://ift.tt/scTb2jm https://ift.tt/FiWRoN7 August 18, 2025 at 10:14PM
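Turning primes into a mini piano takes only a few lines: one common mapping is prime mod 12 as a scale degree, offset into a MIDI octave. This is a sketch of the general idea; the project's actual mapping may differ.

```python
def primes(n):
    """First n primes by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def prime_to_midi(p, base=60):
    """Map a prime onto one octave starting at middle C (MIDI 60)."""
    return base + p % 12

melody = [(p, NOTE_NAMES[p % 12]) for p in primes(8)]
print(melody)
# [(2, 'D'), (3, 'D#'), (5, 'F'), (7, 'G'), (11, 'B'),
#  (13, 'C#'), (17, 'F'), (19, 'G')]
```

Because primes > 3 are all ±1 mod 6, the residues mod 12 cluster on a few notes, which gives the "prime melody" its characteristic repetitive feel.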

Show HN: Tool shows UK properties matching group commute/time preferences https://ift.tt/RNExmn1

Show HN: Tool shows UK properties matching group commute/time preferences I came up with this idea when I was looking to move to London with a friend. I quickly learned how frustrating it is to trial-and-error housing options for days on end, just to be denied after days of searching due to some grotesque counteroffer. On top of this, finding properties that meet the budgets, commuting preferences, and work locations of everyone in a group is a Sisyphean task - it often ends in failure, with somebody exceeding their original budget or dropping out. To solve this I built a tool ( https://closemove.com/ ) that:
- lets you enter between 1 and 6 people’s workplaces, budgets, and maximum commute times
- filters public rental listings and only shows the ones that satisfy everyone’s constraints
- shows results in either a list or map view
No sign-up/validation required at present. Currently UK only, but please let me know if you'd want me to expand this to your city/country. It currently works best in London (with walking, cycling, driving, and public transport links connected), and works decently in the rest of the UK (walking, cycling, driving only). This started as a side project and it still needs improvement. I’d appreciate any feedback! https://closemove.com August 21, 2025 at 01:59AM
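The core filter described above - keep only listings that satisfy every group member's budget and commute limit - can be sketched as an `all()` over the group. Field names, the even rent split, and the toy commute function are illustrative assumptions, not closemove's actual code.

```python
def commute_minutes(listing, workplace):
    # Toy stand-in for a real routing API (e.g. transit isochrones):
    # 15 minutes per zone crossed, plus 10 minutes walking.
    return abs(listing["zone"] - workplace["zone"]) * 15 + 10

def satisfies_everyone(listing, people):
    share = listing["rent_pcm"] / len(people)  # assume an even rent split
    return all(
        share <= p["budget_pcm"]
        and commute_minutes(listing, p["work"]) <= p["max_commute_min"]
        for p in people
    )

people = [
    {"budget_pcm": 900, "max_commute_min": 40, "work": {"zone": 1}},
    {"budget_pcm": 800, "max_commute_min": 30, "work": {"zone": 2}},
]
listings = [
    {"id": "a", "rent_pcm": 1500, "zone": 2},
    {"id": "b", "rent_pcm": 2000, "zone": 3},  # over the second budget
]
good = [l["id"] for l in listings if satisfies_everyone(l, people)]
print(good)  # ['a']
```

The hard part in practice is the commute function: real travel times need a routing engine per transport mode, which is why the post notes London (with public transport data) works best.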

Show HN: I Help Startups Go from Idea to Revenue in 30-60 Days https://ift.tt/raviZ30

Show HN: I Help Startups Go from Idea to Revenue in 30-60 Days Hey HN, I'm Syket, and I've noticed a pattern: most startup failures aren't due to bad ideas, but to slow/expensive technical execution. Over 30+ projects, I've developed a framework for rapid MVP development:
Week 1-2: Core features + authentication + payments
Week 3-4: Mobile app + admin dashboard + analytics
Week 5-6: AI features + optimization + launch prep
Recent examples:
- Taplab Agency: now the UK's largest edu-creator platform ( https://taplab.agency )
- Unithrive: mentorship platform serving thousands of UK students ( https://ift.tt/6LH7C4B )
- Connect Jew: NGO management system scaling across multiple cities ( https://connect-jew.vercel.app )
What I've learned about startup tech:
1. *Start with revenue generation* - build payment processing first
2. *Mobile-first design* - 80% of users are on mobile
3. *AI integration* - users expect smart features now
4. *Performance = retention* - every 100ms delay costs users
The key insight: don't build everything. Build the minimum that generates revenue, then iterate based on real user data. I'm curious - what's been the biggest technical bottleneck in your startup journey? Happy to share specific solutions I've implemented. Portfolio: https://syket.io https://www.syket.io/ August 21, 2025 at 10:54PM

Wednesday, August 20, 2025

Show HN: What country you would hit if you went straight where you're pointing https://ift.tt/xbclYZJ

Show HN: What country you would hit if you went straight where you're pointing This app was designed to answer my wife’s question “what country would we hit if we went straight?” (generally posed while pointing her phone). But with two additional twists:
1. It loads up historical maps from different years (right now 1 BC, 700 AD, 1000 AD, 1300 AD, 1800 AD, 1900 AD) so you can see what you would hit if you had a time machine AND you went in the direction your phone is pointing
2. Tap a country/territory for an (AI-generated) blurb on what you are pointing at
How it works: starting from your phone’s bearing, we trace the great circle in 200 km steps, prefilter candidate countries with bounding boxes (~5–10 instead of ~200), then check ~20 km points along each segment to catch coastlines and stop when the path first enters another country. Great circles ( https://ift.tt/zWfZl3b ) are why you can hit Australia from NYC, even though on a flat map that can be hard to see. There might be some weird stuff in the explanations; I haven’t read all 1,400 of them. If you see something weird, let me know and I will update it! The app is free and doesn’t have ads or tracking — your location and bearing are only used locally to figure out where you are and what you’re pointing at. Probably works best if you hold your phone pretty flat :) Thank you to André Ourednik and all the contributors to the Historical Basemaps project: https://ift.tt/b1ZoDl8 https://ift.tt/9rVN6y3 August 20, 2025 at 10:23PM
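The tracing step described above - advance along a bearing in fixed-distance steps - can be sketched with the standard great-circle destination formula on a spherical Earth. This is an illustration of the approach, not the app's actual code.

```python
import math

R = 6371.0  # mean Earth radius, km

def step(lat, lon, bearing_deg, dist_km):
    """Great-circle destination point: start at (lat, lon) in degrees,
    travel dist_km along the given initial bearing, return the new point."""
    phi1 = math.radians(lat)
    lam1 = math.radians(lon)
    theta = math.radians(bearing_deg)
    delta = dist_km / R  # angular distance
    phi2 = math.asin(math.sin(phi1) * math.cos(delta)
                     + math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(
        math.sin(theta) * math.sin(delta) * math.cos(phi1),
        math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)

# Trace a path from NYC in 200 km steps, as the app does.
lat, lon = 40.71, -74.01
path = [(lat, lon)]
for _ in range(5):
    lat, lon = step(lat, lon, 90.0, 200.0)
    path.append((lat, lon))
print(path[-1])  # roughly 1000 km along the path; latitude drifts slightly
```

The slight latitude drift even when repeatedly heading "east" is exactly the great-circle effect the post mentions: the straight path curves relative to a flat map, which is how NYC can point at Australia.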

Show HN: Okapi – a metrics engine based on open data formats https://ift.tt/Az0RO3r

Show HN: Okapi – a metrics engine based on open data formats Hi all, I wanted to share an early preview of Okapi, an in-memory metrics engine that also integrates with existing data lakes. Modern software systems produce a mammoth amount of telemetry. While we can debate whether this is necessary, we can all agree that it happens. Most metrics engines today use proprietary formats to store data and don’t use disaggregated storage and compute. Okapi changes that by leveraging open data formats and integrating with existing data lakes. This makes it possible to use standard OLAP tools like Snowflake, Databricks, DuckDB, or even Jupyter / Polars to run analysis workflows (such as anomaly detection) while avoiding vendor lock-in in two ways: you can bring your own workflows, and the compute engine is swappable. Disaggregation also reduces the ops burden of maintaining your own storage, and the compute engine can be scaled up and down on demand. Not all data can reside in a data lake/object store, though - this doesn’t work for recent data. To ease real-time queries, Okapi first writes all metrics data to an in-memory store, and reads on recent data are served from this store. Metrics are rolled up as they arrive, which helps ease memory pressure. Metrics are held in memory for a configurable retention period, after which they get shipped out to object storage/data lake (currently only Parquet export is supported). This allows fast reads on recent data while offloading query processing for older data. In benchmarks, queries on in-memory data finish in under a millisecond with a write throughput of ~280k samples per second. On a real deployment there'd be network delays, so YMMV. Okapi is still early — feedback, critiques, and contributions welcome. Cheers! https://ift.tt/DUoZRen August 20, 2025 at 11:22PM
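The "rolled up as they arrive" step can be sketched as bucketing samples into fixed time windows and keeping only aggregates, so memory grows with the number of windows rather than the number of samples. Window size, field names, and the query shape below are illustrative, not Okapi's actual code.

```python
from collections import defaultdict

ROLLUP_S = 60  # rollup window in seconds (illustrative)

class Rollup:
    """Keep count/sum/min/max per (metric, window) instead of raw samples."""
    def __init__(self):
        self.buckets = defaultdict(lambda: {"count": 0, "sum": 0.0,
                                            "min": float("inf"),
                                            "max": float("-inf")})

    def ingest(self, metric, ts, value):
        # Truncate the timestamp to its window start.
        b = self.buckets[(metric, ts // ROLLUP_S * ROLLUP_S)]
        b["count"] += 1
        b["sum"] += value
        b["min"] = min(b["min"], value)
        b["max"] = max(b["max"], value)

    def query_avg(self, metric, start, end):
        c = s = 0
        for (m, t), b in self.buckets.items():
            if m == metric and start <= t < end:
                c += b["count"]
                s += b["sum"]
        return s / c if c else None

r = Rollup()
for ts, v in [(100, 5.0), (110, 7.0), (130, 3.0), (200, 9.0)]:
    r.ingest("latency_ms", ts, v)
print(r.query_avg("latency_ms", 60, 180))  # 5.0  (avg of 5, 7, 3)
```

On the retention boundary, these buckets map naturally onto rows of a Parquet file, which is what makes the hand-off to a data lake cheap.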

Show HN: Anchor Relay – A faster, easier way to get Let's Encrypt certificates https://ift.tt/cVmXNfx

Show HN: Anchor Relay – A faster, easier way to get Let's Encrypt certificates From the cryptic terminal commands to the innumerable ways to shoot yourself in the foot, I always struggled to use TLS certificates. I love how much easier (and cheaper) Let's Encrypt made it to get certificates, but there are still plenty of things to struggle with. That's why we built Relay: a free, browser-based tool that streamlines the ACME workflow, especially for tricky setups like homelabs. Relay acts as a secure intermediary between your ACME client and public certificate authorities like Let's Encrypt. Some ways Relay provides a better experience:
- really fast, streamlined certificates in minutes, with any ACME client
- one-time upfront DNS delegation without inbound traffic or DNS credentials sprinkled everywhere
- clear insights into the whole ACME process and renewal reminders
Try Relay now: https://ift.tt/VHBN9YU Or read our blog post: https://ift.tt/6Dm5WQA... Please give it a try (it only takes a couple of minutes) and let me know what you think. https://ift.tt/VHBN9YU August 20, 2025 at 11:13PM

Show HN: Luminal – Open-source, search-based GPU compiler https://ift.tt/ulJINS4

Show HN: Luminal – Open-source, search-based GPU compiler Hi HN, I’m Joe. My friends Matthew, Jake and I are building Luminal ( https://luminalai.com/ ), a GPU compiler for automatically generating fast GPU kernels for AI models. It uses search-based compilation to achieve high performance. We take high level model code, like you'd have in PyTorch, and generate very fast GPU code. We do that without using LLMs or AI - rather, we pose it as a search problem. Our compiler builds a search space, generates millions of possible kernels, and then searches through it to minimize runtime. You can try out a demo in `demos/matmul` on mac to see how Luminal takes a naive operation, represented in our IR of 12 simple operations, and compiles it to an optimized, tensor-core enabled Metal kernel. Here’s a video showing how: https://youtu.be/P2oNR8zxSAA Our approach differs significantly from traditional ML libraries in that we ahead-of-time compile everything, generate a large search space of logically-equivalent kernels, and search through it to find the fastest kernels. This allows us to leverage the Bitter Lesson to discover complex optimizations like Flash Attention entirely automatically without needing manual heuristics. The best rule is no rule, the best heuristic is no heuristic, just search everything. We’re working on bringing CUDA support up to parity with Metal, adding more flexibility to the search space, adding full-model examples (like Llama), and adding very exotic hardware backends. We aim to radically simplify the ML ecosystem while improving performance and hardware utilization. Please check out our repo: https://ift.tt/cb5EFls and I’d love to hear your thoughts! https://ift.tt/cb5EFls August 20, 2025 at 11:01PM
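Search-based compilation, at its smallest, means enumerating logically equivalent implementations, filtering for correctness, and timing them. A toy sketch of that loop (the "kernels" here are plain Python functions, not Luminal's IR or generated GPU code):

```python
import timeit

# Toy search space: logically equivalent ways to compute a dot product.
def dot_zip(a, b):
    return sum(x * y for x, y in zip(a, b))

def dot_index(a, b):
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

def dot_unrolled(a, b):
    total = 0.0
    for i in range(0, len(a) - 1, 2):  # 2-wide "unroll"
        total += a[i] * b[i] + a[i + 1] * b[i + 1]
    if len(a) % 2:
        total += a[-1] * b[-1]
    return total

a = [float(i) for i in range(1000)]
b = [float(i % 7) for i in range(1000)]
reference = dot_zip(a, b)

candidates = [dot_zip, dot_index, dot_unrolled]
# Keep only candidates that agree with the reference, then pick the fastest.
valid = [f for f in candidates if abs(f(a, b) - reference) < 1e-6]
best = min(valid, key=lambda f: timeit.timeit(lambda: f(a, b), number=200))
print(best.__name__)
```

The real system searches millions of candidates built from a 12-op IR and proves equivalence structurally rather than by spot-checking outputs, but the shape of the loop - generate, validate, time, take the min - is the same.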

Tuesday, August 19, 2025

Show HN: AI-powered CLI that translates natural language to FFmpeg https://ift.tt/ucl1Pb5

Show HN: AI-powered CLI that translates natural language to FFmpeg I got tired of spending 20 minutes Googling ffmpeg syntax every time I needed to process a video. So I built aiclip - an AI-powered CLI that translates plain English into perfect ffmpeg commands.
Instead of this: ffmpeg -i input.mp4 -vf "scale=1280:720" -c:v libx264 -c:a aac -b:v 2000k output.mp4
Just say this: aiclip "resize video.mp4 to 720p with good quality"
Key features:
- Safety first: preview every command before execution
- Smart defaults: sensible codec and quality settings
- Context aware: scans your directory for input files
- Interactive mode: iterate on commands naturally
- Well-tested: 87%+ test coverage with comprehensive error handling
What it can do:
- Convert video formats (mov to mp4, etc.)
- Resize and compress videos
- Extract audio from videos
- Trim and cut video segments
- Create thumbnails and extract frames
- Add watermarks and overlays
GitHub: https://ift.tt/cGzu9Qh PyPI: https://ift.tt/5oP2dVA Install: pip install ai-ffmpeg-cli I'd love feedback on the UX and any features you'd find useful. What video processing tasks do you find most frustrating? August 20, 2025 at 01:02AM

Show HN: Built a memory layer that stops AI agents from forgetting everything https://ift.tt/FQJ4mRl

Show HN: Built a memory layer that stops AI agents from forgetting everything Tired of AI coding tools that forget everything between sessions? Every time I open a new chat with Claude or fire up Copilot, I'm back to square one explaining my codebase structure. So I built something to fix this. It's called In Memoria. Its an MCP server that gives AI tools persistent memory. Instead of starting fresh every conversation, the AI remembers your coding patterns, architectural decisions, and all the context you've built up. The setup is dead simple: `npx in-memoria server` then connect your AI tool. No accounts, no data leaves your machine. Under the hood it's TypeScript + Rust with tree-sitter for parsing and vector storage for semantic search. Supports JavaScript/TypeScript, Python, and Rust so far. It originally started as a documentation tool but had a realization - AI doesn't need better docs, it needs to remember stuff. Spent the last few months rebuilding it from scratch as this memory layer. It's working pretty well for me but curious what others think, especially about the pattern learning part. What languages would you want supported next? Code: https://ift.tt/5ZEzX7y https://ift.tt/5ZEzX7y August 19, 2025 at 11:29PM

Show HN: OnPair – String compression with fast random access (Rust, C++) https://ift.tt/rpeJnLs

Show HN: OnPair – String compression with fast random access (Rust, C++) I’ve been working on a compression algorithm for fast random access to individual strings in large collections. The problem came up when working with large in-memory database columns (emails, URLs, product titles, etc.), where low-latency point queries are essential. With short strings, LZ77-based compressors don’t perform well. Block compression helps, but block size forces a trade-off between ratio and access speed. Some existing options: - BPE: good ratios, but slow and memory-heavy - FSST (discussed here: https://ift.tt/FadfGEX ): very fast, but weaker compression This solution provides an interesting balance (more details in the paper): - Compression ratio: similar to BPE - Compression speed: 100–200 MiB/s - Decompression speed: 6–7 GiB/s I’d love to hear your thoughts — whether it’s workloads you think this could help with, ideas for API improvements, or just general discussion. Always happy to chat here on HN or by email. --- Resources: - Paper: https://ift.tt/Os2ulfv - Rust: https://ift.tt/vfw9HNL - C++: https://ift.tt/6dpGsw1 https://ift.tt/vfw9HNL August 19, 2025 at 10:20PM
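A toy illustration (using zlib, not OnPair's actual format) of the trade-off described above: compressing each short string separately gives O(1) access but a poor ratio, while one big block compresses well but must be fully decompressed for a single lookup:

```python
import zlib

strings = [f"user{i}@example.com" for i in range(1000)]
raw_bytes = sum(len(s) for s in strings)

# Per-string LZ77 (zlib): O(1) random access, but short strings
# barely compress (per-stream headers, no shared context).
per_string = [zlib.compress(s.encode()) for s in strings]

# Block compression: much better ratio, but reading one string
# means decompressing the entire block.
block = zlib.compress("\n".join(strings).encode())

def get_per_string(i):
    return zlib.decompress(per_string[i]).decode()

def get_from_block(i):
    return zlib.decompress(block).decode().split("\n")[i]

print(raw_bytes, sum(len(c) for c in per_string), len(block))
print(get_per_string(7), get_from_block(7))
```

OnPair's contribution is sitting between these two extremes: near-BPE ratios with per-string decompression measured in GiB/s.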

Monday, August 18, 2025

Show HN: Typed-arrow – compile‑time Arrow schemas for Rust https://ift.tt/PEN6TQy

Show HN: Typed-arrow – compile‑time Arrow schemas for Rust Hi community, we just released https://ift.tt/20f9WNy . When working with arrow-rs, we noticed that schemas are declared at runtime. This often leads to runtime errors and makes development less safe. typed-arrow takes a different approach: - Schemas are declared at compile time with Rust’s type system. - This eliminates runtime schema errors. - And introduces no runtime overhead — everything is checked and generated by the compiler. If you’ve run into Arrow runtime schema issues, and your schema is stable (not defined or switched at runtime), this project might be useful. https://ift.tt/20f9WNy August 18, 2025 at 07:34PM

Show HN: Whispering – Open-source, local-first dictation you can trust https://ift.tt/QvNACUx

Show HN: Whispering – Open-source, local-first dictation you can trust Hey HN! Braden here, creator of Whispering, an open-source speech-to-text app. I really like dictation. For years, I relied on transcription tools that were almost good, but they were all closed-source. Even a lot of them that claimed to be “local” or “on-device” were still black boxes that left me wondering where my audio really went. So I built Whispering. It’s open-source, local-first, and most importantly, transparent with your data. All your data is stored locally on your device. For me, the features were good enough that I left my paid tools behind (I used Superwhisper and Wispr Flow before). Productivity apps should be open-source and transparent with your data, but they also need to match the UX of paid, closed-software alternatives. I hope Whispering is near that point. I use it for several hours a day, from coding to thinking out loud while carrying pizza boxes back from the office. Here’s an overview: https://www.youtube.com/watch?v=1jYgBMrfVZs , and here’s how I personally am using it with Claude Code these days: https://www.youtube.com/watch?v=tpix588SeiQ . There are plenty of transcription apps out there, but I hope Whispering adds some extra competition from the OSS ecosystem (one of my other OSS favorites is Handy https://ift.tt/8vyI1EB ). Whispering has a few tricks up its sleeve, like a voice-activated mode for hands-free operation (no button holding), and customizable AI transformations with any prompt/model. Whispering used to be in my personal GH repo, but I recently moved it as part of a larger project called Epicenter ( https://ift.tt/OGdN0ye ), which I should explain a bit... I’m basically obsessed with local-first open-source software. I think there should be an open-source, local-first version of every app, and I would like them all to work together. 
The idea of Epicenter is to store your data in a folder of plaintext and SQLite, and build a suite of interoperable, local-first tools on top of this shared memory. Everything is totally transparent, so you can trust it. Whispering is the first app in this effort. It’s not there yet regarding memory, but it’s getting there. I’ll probably write more about the bigger picture soon, but mainly I just want to make software and let it speak for itself (no pun intended in this case!), so this is my Show HN for now. I just finished college and was about to move back with my parents and work on this instead of getting a job…and then I somehow got into YC. So my current plan is to cover my living expenses and use the YC funding to support maintainers, our dependencies, and people working on their own open-source local-first projects. More on that soon. Would love your feedback, ideas, and roasts. If you would like to support the project, star it on GitHub here ( https://ift.tt/OGdN0ye ) and join the Discord here ( https://ift.tt/t0qvnxz ). Everything’s MIT licensed, so fork it, break it, ship your own version, copy whatever you want! https://ift.tt/zueBwlK August 18, 2025 at 11:52PM

Saturday, August 16, 2025

Show HN: Embedr – Agentic IDE for Arduino, ESP32, and More https://ift.tt/i04tWZs

Show HN: Embedr – Agentic IDE for Arduino, ESP32, and More Hi HN, I’m building an agentic IDE for hardware developers. It currently supports Arduino, ESP32, ESP8266, and a bunch of other boards (mostly hobbyist for now, but expanding to things like PlatformIO). It can already write and debug hardware projects end-to-end on its own. The goal is to have it also generate breadboard views (Fritzing-style), PCB layouts, and schematics. Basically a generative EDA tool. Right now, it’s already a better drop-in replacement for the Arduino IDE. Would love feedback from folks here. https://www.embedr.app/ August 16, 2025 at 11:40PM

Show HN: iOS keyboard for on-demand GIF generation https://ift.tt/MusPpzS

Show HN: iOS keyboard for on-demand GIF generation Got this idea last summer, and with AI video models improving since then, I started working on it more actively and just released it. Right from the keyboard extension, you can prompt for any GIF and it will generate it on demand. Generation usually takes around 20 seconds (this will probably only get faster in the future). You get notified by a push notification when the generation has finished. I have many more ideas to develop this further. I believe on-demand GIFs bridge the humor gap between people: they let you convey a situation, a joke, or something you imagined far better. A way for AI to actually bring us closer as humans. Try it out and let me know your thoughts! https://gifai.nl August 17, 2025 at 12:19AM

Friday, August 15, 2025

Show HN: Ldns.com – fast DNS lookups from the URL bar https://ift.tt/t7WpnjP

Show HN: Ldns.com – fast DNS lookups from the URL bar I built LDNS because I'm constantly curious about domain names - who owns them, which nameservers they use, how they're configured, and what their DNS records reveal. I wanted a fast, easy way to investigate domains right from my browser without juggling multiple tools. LDNS runs entirely client-side using Cloudflare's DNS over HTTPS. Just type a domain and instantly see: - All DNS records with clickable filtering - RDAP/WHOIS data with registrar info and expiration dates - Email security configuration (SPF, DMARC, MTA-STS, BIMI) - Export options (JSON, CSV, BIND zones, PDF reports) Try it at ldns.com - just append any domain like ldns.com/example.com to start investigating. Built with SvelteKit and deployed on Cloudflare Pages for that instant-load experience we all love. Comments, feedback, and feature requests welcome! https://ldns.com/ August 15, 2025 at 11:46PM
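For the curious, Cloudflare's DNS-over-HTTPS JSON API (the kind of endpoint a client-side tool like this can call) looks roughly like the sketch below: build the query URL, send it with an "accept: application/dns-json" header, and read the Answer records. The sample response values here are illustrative:

```python
import urllib.parse

DOH = "https://cloudflare-dns.com/dns-query"

def doh_url(name, rtype="A"):
    # Cloudflare answers this URL with JSON when the request carries an
    # "accept: application/dns-json" header.
    return DOH + "?" + urllib.parse.urlencode({"name": name, "type": rtype})

def answers(resp):
    # Pull (name, TTL, data) triples out of a dns-json response body.
    return [(a["name"], a["TTL"], a["data"]) for a in resp.get("Answer", [])]

# A canned response in the dns-json shape (record values are illustrative):
sample = {"Status": 0, "Answer": [
    {"name": "example.com", "type": 1, "TTL": 300, "data": "93.184.216.34"}]}

print(doh_url("example.com"))
print(answers(sample))
```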

Show HN: OpenAVMKit – open-source toolkit for real estate mass appraisal (AVMs) https://ift.tt/HJCit1O

Show HN: OpenAVMKit – open-source toolkit for real estate mass appraisal (AVMs) I'm the maintainer of OpenAVMKit. It's a free and open source toolkit for real estate mass appraisal. I want to make it easier for analysts, researchers, and assessors to build automated valuation models (AVMs) from public data sources. It's in early development, but has reached a stage where it's usable and I'm gathering public feedback. Stuff it can do: - Data enrichment: add additional data from public sources like OpenStreetMap, Overture, US Census, USGS, or your own shapefiles - Modeling: run many different algorithms, such as MRA, GWR, LightGBM, XGBoost, CatBoost, as well as ensembles, all through one interface - Reporting: generate IAAO (International Association of Assessing Officers) compatible ratio studies and other statistics and graphs - Reproducible workflows: build data pipelines out of input data and a settings file, so anyone can reproduce your work on another computer starting from the same ingredients There's a "getting started" section in the docs with a minimal example that should let you download the test jurisdiction and run through the basic example notebooks. I'm looking for feedback on what features people are the most interested in and what kinds of tutorials/documentation would be the most helpful for me to focus on. The official site is here: https://ift.tt/CvJgcoh The github is here: https://ift.tt/fV7WRgd We also have a bit of a writeup on it here: https://ift.tt/iFkzrIt... Happy to answer any questions in the thread. https://ift.tt/CvJgcoh August 15, 2025 at 10:59PM

Show HN: PlutoPrint – Generate Beautiful PDFs and PNGs from HTML with Python https://ift.tt/LenJhFS

Show HN: PlutoPrint – Generate Beautiful PDFs and PNGs from HTML with Python https://ift.tt/jqlKQ0g August 15, 2025 at 09:59PM

Show HN: Sarpro – 5–20× faster Sentinel‑1 GRD → GeoTIFF/JPEG https://ift.tt/IhNdke9

Show HN: Sarpro – 5–20× faster Sentinel‑1 GRD → GeoTIFF/JPEG I’ve shipped a big performance upgrade to Sarpro, an open‑source Rust tool for converting Sentinel‑1 SAR GRD products into GeoTIFF or JPEG. Since my initial post [here]( https://ift.tt/Q3l9vda ), Sarpro now avoids full‑resolution I/O when you only need a smaller output and performs reprojection in‑process without writing giant temporary files. Highlights: - Target‑size reads and single‑step warp: read/warp directly to the final output size instead of loading full‑res first. This cuts I/O and memory by 5–20× for small outputs. - Reprojection to any CRS: `--target-crs EPSG:xxxx` with in‑process gdalwarp via VRT (no temp GeoTIFF). Resampling: nearest/bilinear/cubic/lanczos. - Faster autoscaling: O(N) histogram‑based percentiles replace O(N log N) sorting. - Batch + GUI: both now honor reprojection/resampling in batch mode. - Performance: on a modern laptop (M4Pro12), scaling a dual‑band ~400–500MP GRD to 2048 px typically completes in ~1–2 s; no‑warp downsamples can be sub‑second. Full native warps remain tens of seconds as expected. Features: - CLI, GUI, and Rust API - Synthetic RGB from polarization pairs, robust autoscaling, optional padding - TIFF and JPEG outputs with georeferencing/sidecars, metadata emission I’d love feedback from the RS/EO community: more RGB presets, additional processing modes, tiling for ML, and cloud pipeline integrations. Links: - GitHub: ` https://ift.tt/34ofX5t ` - Previous HN thread: [Show HN post]( https://ift.tt/Q3l9vda ) https://ift.tt/34ofX5t August 15, 2025 at 08:15PM
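The histogram trick behind the faster autoscaling can be sketched as follows (an illustration of the idea, not Sarpro's actual Rust code): one O(N) pass bins the values into a fixed-size histogram, then a cumulative scan reads off approximate percentiles, versus the O(N log N) cost of sorting:

```python
def hist_percentiles(values, qs, bins=4096):
    # Approximate percentiles via one O(N) pass plus a scan of a
    # fixed-size histogram, instead of an O(N log N) sort.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [lo for _ in qs]
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)
        hist[i] += 1
    out = []
    for q in qs:
        target, seen = q * len(values), 0
        for i, count in enumerate(hist):
            seen += count
            if seen >= target:
                out.append(lo + (i + 0.5) * width)  # bin centre as estimate
                break
    return out

print(hist_percentiles(list(range(1000)), [0.02, 0.98]))
```

The estimate is only as fine as the bin width, which is plenty for picking display-stretch percentiles on hundreds of megapixels.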

Thursday, August 14, 2025

Show HN: OWhisper – Ollama for realtime speech-to-text https://ift.tt/bQ6NzLP

Show HN: OWhisper – Ollama for realtime speech-to-text Hello everyone. This is Yujong from the Hyprnote team ( https://ift.tt/zB2R64a ). We built OWhisper for 2 reasons: (Also outlined in https://ift.tt/hzkXU82 ) (1). While working with on-device, realtime speech-to-text, we found there isn't practical tooling to download and run the models. (2). Also, we got frequent requests to provide a way to plug in custom STT endpoints to the Hyprnote desktop app, just like doing it with OpenAI-compatible LLM endpoints. The (2) part is still kind of WIP, but we spent some time writing docs so you'll get a good idea of what it will look like if you skim through them. For (1) - You can try it now ( https://ift.tt/Kv3nmf8 ):
brew tap fastrepl/hyprnote && brew install owhisper
owhisper pull whisper-cpp-base-q8-en
owhisper run whisper-cpp-base-q8-en
If you're tired of Whisper, we also support Moonshine :) Give it a shot (owhisper pull moonshine-onnx-base-q8) We're here and looking forward to your comments! https://ift.tt/hzkXU82 August 14, 2025 at 10:47PM

Show HN: We made a 2.5GB Offline disaster AI assistant [video] https://ift.tt/3PntxiS

Show HN: We made a 2.5GB Offline disaster AI assistant [video] It is a prototype for the Gemma 3n Impact Challenge hosted by DeepMind. We didn't have experience with local LLMs before, so it was a pretty fun learning experience. Hope to see more lightweight LLM models in the future! https://www.youtube.com/watch?v=VfJikuZMR4E August 15, 2025 at 12:54AM

Wednesday, August 13, 2025

Show HN: Real-time privacy protection for smart glasses https://ift.tt/Pgq8ojR

Show HN: Real-time privacy protection for smart glasses I built a live video privacy filter that helps smart glasses app developers handle privacy automatically. How it works: You can replace a raw camera feed with the filtered stream in your app. The filter processes a live video stream, applies privacy protections, and outputs a privacy-compliant stream in real time. You can use this processed stream for AI apps, social apps, or anything else. Features: Currently, the filter blurs all faces except those who have given consent. Consent can be granted verbally by saying something like "I consent to be captured" to the camera. I'll be adding more features, such as detecting and redacting other private information, speech anonymization, and automatic video shut-off in certain locations or situations. Why I built it: While developing an always-on AI assistant/memory for glasses, I realized privacy concerns would be a critical problem, for both bystanders and the wearer. Addressing this involves complex issues like GDPR, CCPA, data deletion requests, and consent management, so I built this privacy layer first for myself and other developers. Reference app: There's a sample app (./examples/rewind/) that uses the filter. The demo video is in the README, please check it out! The app shows the current camera stream and past recordings, both privacy-protected, and will include AI features using the recordings. Tech: Runs offline on a laptop. Built with FFmpeg (stream decode/encode), OpenCV (face recognition/blurring), Faster Whisper (voice transcription), and Phi-3.1 Mini (LLM for transcription analysis). I'd love feedback and ideas for tackling the privacy challenges in wearable camera apps! https://ift.tt/Zw9sx4J August 12, 2025 at 02:40AM

Show HN: Mock Interviews for Software Engineers https://ift.tt/pWw6YHo

Show HN: Mock Interviews for Software Engineers https://ift.tt/3WrKcYq August 14, 2025 at 06:02AM

Show HN: Emailcore – write chiptune in plain text in the browser https://ift.tt/e7O4lhS

Show HN: Emailcore – write chiptune in plain text in the browser I tried using the AudioContext API to make the most primitive browser-based multi-voice chiptune tracker conceivable. No frameworks or external dependencies were used, and the page source ought to be very readable. Songs are written in plain, 7-bit safe text. Every line makes a voice/channel. The examples given on the page should hopefully illustrate every feature, but as a quick overview: Sounds are specified using Anglo-style note names, with flat (black) keys being the lowercase version of the white key above so as to maintain one character per note. Hence, a full chromatic scale is AbBCdDeEFgGa. Every note name is interpreted as the closest instance of that note to the preceding one. +- skips up or down an octave, ~ holds the previous note for a beat, . skips a beat, 01234 chooses one of 5 preset timbres, <> makes beats slower or faster (for all channels), () makes the current channel louder or quieter. All other characters are ignored. If you come up with a good tune, please share it in the comments! https://ift.tt/OodeLJ9 August 14, 2025 at 04:53AM
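The notation described above can be parsed in a handful of lines. Here is a minimal sketch covering notes, '~' holds, '.' rests, and '+'/'-' octave skips (timbre digits, tempo, and volume marks are left out, and the starting pitch is an arbitrary assumption, not Emailcore's actual implementation):

```python
SCALE = "AbBCdDeEFgGa"  # one character per semitone; flats are lowercase

def parse(line, start=57):
    # Toy parser for a subset of the notation. Pitches are MIDI-style
    # numbers and 57 is an arbitrary starting reference; every other
    # character is ignored, as in the original.
    events, prev = [], start
    for ch in line:
        if ch in SCALE:
            pc = SCALE.index(ch)
            # the closest instance of this pitch class to the previous note
            note = min((pc + 12 * k for k in range(11)),
                       key=lambda n: abs(n - prev))
            events.append([note, 1])  # [pitch, beats]
            prev = note
        elif ch == "~" and events:
            events[-1][1] += 1        # hold the previous note one more beat
        elif ch == ".":
            events.append([None, 1])  # rest for a beat
        elif ch == "+":
            prev += 12
        elif ch == "-":
            prev -= 12
    return events

print(parse("AB~C."))  # → [[60, 1], [62, 2], [63, 1], [None, 1]]
```

From there, each event maps to an oscillator scheduled on an AudioContext.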

Show HN: I wanted to reinvent programming tutorials for Gen Z people https://ift.tt/LE6DtB1

Show HN: I wanted to reinvent programming tutorials for Gen Z people Hi! I had an inspiration based on JRPG video games and brain-rot content on the internet. I built a "platform" with tutorials that spoon-feed knowledge to people via panels you advance by pressing spacebar or tapping. To make it different, I also wrote them in very "light" language and added a few cringe jokes and elements. Right now, just to test the idea, I added two tutorials: - Python Type Hints - Coding Interview Tips Right now I am looking for feedback because I want to find out if this way of learning could actually be useful for anyone. Or if it's another idea of mine that fits into the category "cool, but no one wants that". I will be really grateful for any feedback! Thank you! https://ift.tt/b5k2QRa August 14, 2025 at 12:26AM

Tuesday, August 12, 2025

Show HN: I accidentally built a startup idea validation tool https://ift.tt/de048nm

Show HN: I accidentally built a startup idea validation tool I was working on validating some of my own project ideas. While trying to find how to validate my idea, I realized the process itself could be turned into a tool. A few late nights later, I had something that takes any startup idea, fetches discussions, summarizes sentiment, and gives a quick “validation score.” It’s very rough, but it works, and it’s already making me rethink a few of my own ideas. It's still a work in progress. I don't actually know what I'm doing, but I know it's worth it. Honest feedback welcomed! Live demo here: https://validationly.com/ https://validationly.com/ August 13, 2025 at 03:29AM

Show HN: Minimal Claude-Powered Bookmark Manager https://ift.tt/wjmHgpu

Show HN: Minimal Claude-Powered Bookmark Manager https://tryeyeball.com/ August 13, 2025 at 01:04AM

Show HN: I built LMArena for Motion Graphics https://ift.tt/swF7tfX

Show HN: I built LMArena for Motion Graphics A motion-graphic comparison website in the vein of LMArena. The videos are rendered via Remotion. We hope that AI will be used in interesting ways to help with video production, so we wanted to give some of the models available today a shot at some basic graphics. https://ift.tt/UtdmePv August 13, 2025 at 12:34AM

Show HN: Omnara – Run Claude Code from Anywhere https://ift.tt/RbtNY8z

Show HN: Omnara – Run Claude Code from Anywhere Hey y'all, Ishaan and Kartik here. We're building Omnara ( https://omnara.com/ ), an “agent command center” that lets you launch and control Claude Code from anywhere: terminal, web, or mobile — and easily switch between them. Run 'pip install omnara && omnara', and you'll have a regular Claude Code session. But you can continue that same session from our web dashboard ( https://omnara.com/ ) or mobile app ( https://ift.tt/dXoNguf... ). Check out a demo here: https://ift.tt/Jj37Al8 . Before Omnara, we felt stuck watching Claude Code think and write code, waiting 5-10 minutes just to provide input when needed. Now with Omnara, I can start a Claude Code session and if I need to leave my laptop, I can respond from my phone anywhere. Some places I've coded from include my bed, on a walk, in an Uber, while doing laundry, and even on the toilet. There are many new Claude Code wrappers (e.g., Crystal, Conductor), but none keep the native Claude Code terminal experience while allowing interaction outside the terminal, especially on mobile. On the other hand, tools like Vibetunnel or Termius replicate the terminal experience but lack push notifications, clean UIs for answering questions or viewing git diffs, and easy setup. We wanted our integration to fully mirror the native Claude Code experience, including terminal output, permissions, notifications, and mode switching. The Claude Code SDK and hooks don't support all of this, so we made a CLI wrapper that parses the session file at ~/.claude/projects and the terminal output to capture user and agent messages. We send these messages to our platform, where they're displayed in the web and mobile apps in real time via SSE. Our CLI wrapper monitors for input from both the Omnara platform and the Claude Code CLI, continuing execution when the user responds from either location. Our entire backend is open source: https://ift.tt/CWQfn5j . Omnara isn't just for Claude Code.
It's a general framework for any AI agent to send messages and push notifications to humans when they need input. For example, I've been using it as a human-in-the-loop node in n8n workflows for replying to emails. But every Claude Code user we show it to gets excited about that application specifically so that’s why we’re launching that first :) Omnara is free for up to 10 agent sessions per month, then $9/month for unlimited sessions. Looking forward to your feedback and hearing your thoughts and comments! https://ift.tt/CWQfn5j August 12, 2025 at 11:33PM
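The session-file watching the post describes could be sketched like this. The real JSONL layout under ~/.claude/projects is a Claude Code internal detail, so the record shape below is illustrative only:

```python
import json
import os
import tempfile
import time

def read_jsonl(path, follow=False, poll=0.2):
    # Yield JSON objects from a JSONL file; with follow=True, keep waiting
    # for lines the agent appends (like `tail -f`).
    with open(path) as f:
        while True:
            line = f.readline()
            if line.strip():
                yield json.loads(line)
            elif line:
                continue          # blank line, skip
            elif follow:
                time.sleep(poll)  # at EOF: wait for more output
            else:
                return

# Demo with a throwaway file standing in for a real session log:
fd, path = tempfile.mkstemp(suffix=".jsonl")
with os.fdopen(fd, "w") as f:
    f.write('{"role": "user", "text": "fix the bug"}\n')
    f.write('{"role": "assistant", "text": "on it"}\n')
print(list(read_jsonl(path)))
```

Each new record would then be forwarded to the platform and fanned out to web/mobile clients over SSE.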

Monday, August 11, 2025

Show HN: I built a video generation app that indexes your media locally https://ift.tt/IPWfiUy

Show HN: I built a video generation app that indexes your media locally https://meetcosmos.com/ August 12, 2025 at 12:04AM

Show HN: I built an app that uses math to find restaurants nearby the sweet spot https://ift.tt/5gSWeld

Show HN: I built an app that uses math to find restaurants nearby the sweet spot I recently built an iOS app called Settld: Group Restaurant Finder that helps friends decide where to meet by finding restaurants that are roughly equally far from everyone’s location, and displaying information about them. We’ve all been in chaotic group chats where no one can agree on where to eat — this app cuts through that by calculating a “sweet spot” for the group. For 2 people, it’s the midpoint. For 3 people, it’s the circumcenter. For 4–6 people, it uses a minimum enclosing circle approach (Welzl’s algorithm). It then shows the top 15 nearby options so there’s no more “where do we meet?” chaos — or $50 dinners after a gruelling 2-hour trip just because no one planned. If anyone’s wondering why I capped it at 15 options, it’s to cut down on decision paralysis. Would love to get your thoughts: https://settld.space/ https://settld.space/ August 11, 2025 at 11:50PM
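The two- and three-person cases above can be sketched directly (coordinates treated as a flat plane, which is a simplification; real locations need a geographic projection, and the 4-6 person case would use Welzl's minimum enclosing circle instead):

```python
def midpoint(p, q):
    # Sweet spot for two people.
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def circumcenter(a, b, c):
    # Sweet spot for three people: the unique point equidistant from all
    # three, via the standard perpendicular-bisector solution.
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear; no circumcenter")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

print(midpoint((0, 0), (4, 2)))        # → (2.0, 1.0)
print(circumcenter((0, 0), (2, 0), (0, 2)))  # → (1.0, 1.0)
```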

Show HN: Free SVG Icons – Browse, customize, and grab icons https://ift.tt/xmMOtrw

Show HN: Free SVG Icons – Browse, customize, and grab icons https://iconshelf.com August 11, 2025 at 11:04PM

Show HN: KARMA – An evaluation framework for Medical AI systems https://ift.tt/p3Kmow9

Show HN: KARMA – An evaluation framework for Medical AI systems KARMA-OpenMedEvalKit is an expandable toolkit for assessing AI models in medical applications, featuring multiple healthcare-focused datasets with particular emphasis on the Indian healthcare environment. KARMA can evaluate text, image, and audio-based medical AI models using 21+ healthcare datasets. We support popular models (Qwen, MedGemma, IndicConformer, OpenAI, Anthropic models - via AWS Bedrock, and practically any HuggingFace models) out-of-the-box. KARMA also handles medical-specific evaluation needs like ASR models that need language-aware post-processing, or having an LLM as a judge on rubric-based evaluations. KARMA caches model outputs so you can iterate on metrics without re-running expensive inference. Medical AI evaluation is currently fragmented – researchers often build custom evaluation scripts for each project. KARMA provides standardized metrics and a registry system where you can easily plug in your own models and datasets. KARMA has an extensible registry system with decorators for easy model/dataset integration. It supports custom metrics with dataset-specific post-processing. Model outputs are cached based on the datapoint and the model configuration to speed up evaluation iterations. The Indian healthcare focus came from our work building AI systems for India. Most medical AI benchmarks are heavily skewed toward Western contexts, missing important regional variations in medical terminology, disease prevalence, and clinical practices. To aid in this, we are also releasing 4 datasets - Medical ASR Evaluation Dataset, Medical Records Parsing Evaluation Dataset, Structured Clinical Note Generation Dataset, Eka Medical Summarisation Dataset. Find the collection here - https://ift.tt/mI6nB3X... Along with our datasets, we are also releasing 2 models from our Parrotlet series in the public domain licensed under MIT.
Parrotlet-a-en-5b: A purpose-built model for automatic speech recognition for medical context for English and Parrotlet-v-lite-4b: A purpose-built model for medical report understanding. Link - https://ift.tt/PVcRLCu... We've been using KARMA internally and thought the community might find it useful. Happy to answer questions about the architecture or specific use cases! GitHub: https://ift.tt/eKrXiaN Docs: https://karma.eka.care Release blog: https://ift.tt/cVMDxYG... https://karma.eka.care/ August 11, 2025 at 10:44PM
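A decorator-based registry of the kind described above might look like the following sketch (names here are hypothetical illustrations, not KARMA's actual API):

```python
MODEL_REGISTRY = {}

def register_model(name):
    # Decorator that files a model class under a name at import time,
    # so evaluation code can look models up by string identifier.
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register_model("toy-asr")
class ToyASR:
    def predict(self, audio):
        return "hello world"

# Evaluation code can now instantiate models by name:
model = MODEL_REGISTRY["toy-asr"]()
print(model.predict(b"\x00\x01"))
```

The same pattern extends naturally to datasets and metrics, and pairs well with the output caching keyed on (datapoint, model configuration) that the post mentions.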

Sunday, August 10, 2025

Show HN: A new alternative to Softmax attention – live GD-Attention demos https://ift.tt/svxI4AC

Show HN: A new alternative to Softmax attention – live GD-Attention demos We built two live demos to illustrate the Ghost Drift Theory — a framework for modeling semantic coherence — and a new attention mechanism called GD-Attention. • Part 1 — Semantic Energy Landscape: Visualize the unique coherence point s* and jump direction g in real time. • Part 2 — GD-Attention vs Softmax: "Softmax blends, GD-Attention selects" — explore the difference interactively. Paper (with Zenodo DOI): [Ghost Drift Theory & GD-Attention PDF]( https://ift.tt/9VG7i1l ) ▶ Part 1: https://ift.tt/4PwCmhD... ▶ Part 2: https://ift.tt/vG4rcxL... Would love feedback on clarity, use cases, and potential improvements. https://ift.tt/9VG7i1l August 10, 2025 at 10:48PM

Show HN: AI Coloring Pages Generator https://ift.tt/1GMZsSF

Show HN: AI Coloring Pages Generator Hey Hacker News community! I'm excited to share AI Coloring Pages Generator with you all! As a parent myself, I noticed how hard it was to find fresh, engaging coloring pages that my kids actually wanted to color. So I built this AI-powered tool that lets anyone create custom coloring pages in seconds - just describe what you want and watch the magic happen! Whether it's "unicorn princess," "summer theme," or "cute kittens," the AI generates beautiful, printable coloring pages that are perfect for kids and adults alike. The best part? It's completely free to use! I've already seen families, teachers, and even therapists using it to create personalized activities. There's something special about seeing a child's face light up when they get to color exactly what they imagined. Would love to hear what you think and what kind of coloring pages you'd create! https://ift.tt/veytCK1 August 10, 2025 at 02:34PM

Saturday, August 9, 2025

Show HN: I made a Ruby on Rails-like framework in PHP (Still in progress) https://ift.tt/HNhEazV

Show HN: I made a Ruby on Rails-like framework in PHP (Still in progress) Play with it and let me know what you think of the architecture, and how it could be improved with native PHP functions and better performance. https://ift.tt/6zakLeg August 9, 2025 at 08:05PM

Show HN: I built a platform to connect with future peers before you start https://ift.tt/4SVhQuR

Show HN: I built a platform to connect with future peers before you start When I moved to a new city for my master’s and later for work, I realized how isolating it can be. I had to find housing, figure out the commute, and find roommates, all completely on my own. So I built a free site, Findeaze, that connects people headed to the same city (often for school or work) so they can plan the move, housing, and commute together rather than having to do all of it alone. It’s still early, so the community is small. If you try it now, you might not instantly find a match. But every post helps the network grow and makes it easier for the next person to connect. If you try it, please let me know what works well and what I could improve. https://ift.tt/Xe6Gzcm August 10, 2025 at 01:04AM

Show HN: Runtime – skills-based browser automation that uses fewer tokens https://ift.tt/FtKYNo6

Show HN: Runtime – skills-based browser automation that uses fewer tokens Hi HN, I’m Bayang. I’m launching Runtime — a desktop tool that automates your existing browser using small, reusable skills instead of big, fragile prompts. Links - README: https://ift.tt/6g3mXB5 - Skills guide: https://ift.tt/CDbuwEG Why did I build it? I was using browser automation for my own work, but it got slow and expensive because it pushed huge chunks of a page to the model. I also saw agent systems like browser-use that try to stream the live (processed) DOM and “guess” the next click. It looked cool, but it felt heavy and flaky. I asked a few friends what they really wanted: a browser that does some of their jobs, like repetitive tasks. All three said: “I want to teach my browser or just explain to it how to do my tasks.” Also: “Please don’t make me switch browsers—I already have my extensions, theme, and setup.” That’s where Runtime came from: keep your browser, keep control, make automation predictable. Runtime takes a task in chat (I’m open to challenging the user experience of conversing with Runtime), then runs a short plan made of skills. A skill is a set of functions: it has inputs and an expected output. Examples: “search a site,” “open a result,” “extract product fields,” “click a button,” “submit a form.” Because plans use skills (not whole pages), prompts stay tiny and the process stays deterministic and fast. What’s different - Uses your browser (Chrome/Edge, soon Brave). No new browser to install. - Deterministic by design. Skills are explicit and typed; runs are auditable. - Low token use. We pass compact actions, not the full DOM. And most importantly, we don’t take screenshots at all. We believe screenshots are useless if we use selectors to navigate. - Human-in-the-loop. You can watch the steps and stop/retry anytime. Who is it for?
People who do research/ops on the web: pull structured info, file forms, move data between tools, or run repeatable flows without writing a full RPA script or without using any API. It’s just “runtime run at runtime” Try this first (5–10 minutes) 1. Clone the repo and follow the quickstart in the README. 2. Run a sample flow: search → open → extract fields. 3. Read `SKILLS.md`, then make one tiny skill for a site you use daily. What’s not perfect yet Sites change. Skills also change, but we will post about addressing this issue. I’d love to hear where it breaks. Feedback I’m asking for - Is the skills format clear? Being declarative, does that help? - Where does the planner over-/under-specify steps? - Which sites should we ship skills for first? Happy to answer everything in the comments, and would love a teardown. Thanks! Bayang https://ift.tt/RzV0xGs August 10, 2025 at 12:45AM
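A skill-and-plan structure in the spirit described above might be sketched like this (all names are hypothetical illustrations, not Runtime's actual API):

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Skill:
    name: str
    inputs: Dict[str, type]   # explicit, typed inputs
    run: Callable[..., dict]  # returns a structured result

def search_site(query: str) -> dict:
    # Stand-in for a real browser action driven via selectors.
    return {"results": [f"result for {query}"]}

SKILLS = {"search_site": Skill("search_site", {"query": str}, search_site)}

def run_plan(plan):
    # A plan is a short list of (skill, kwargs) steps rather than a huge
    # prompt, so each step is type-checked, auditable, and cheap to send
    # to a model.
    out = []
    for name, kwargs in plan:
        skill = SKILLS[name]
        for key, typ in skill.inputs.items():
            assert isinstance(kwargs[key], typ), f"{name}: {key} must be {typ}"
        out.append(skill.run(**kwargs))
    return out

print(run_plan([("search_site", {"query": "hn"})]))
```

Because only the plan (skill names plus small typed arguments) touches the model, token use stays low and runs stay deterministic.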

Friday, August 8, 2025

Show HN: Aegis – A framework for AI-governed software development https://ift.tt/dkablF1

Show HN: Aegis – A framework for AI-governed software development Hey HN – I built a framework called Aegis to govern AI-assisted software development. The core idea is that AI-generated code should follow the same rules as human code: versioned, validated, observable. Aegis enforces this through blueprint-based development, drift detection, and runtime compliance systems. It’s designed for teams using tools like Copilot, Kilo, or Lovable to build production systems with confidence. This isn’t a library — it’s a way to architect AI-native engineering workflows. Would love feedback, questions, and critiques. Especially curious if others are facing similar issues with AI output governance or system reliability in their workflows. Happy to dive into internals or philosophy if there's interest. https://ift.tt/xlLC7DU August 9, 2025 at 12:00AM

Show HN: GPT-5 Document Retrieval – AI Assistant with Inline Citations https://ift.tt/SpYerbB

Show HN: GPT-5 Document Retrieval – AI Assistant with Inline Citations After years on HN, I've built SmartResearchAI to solve what PhDs struggle with: finding answers in their mountain of PDFs. Today we add free GPT-5 access to our AI assistant.

Core Capability: upload your research files (PDF/DOCX) → ask questions → get answers with inline citations pinpointing exact sources.

GPT-5 Powered Features
1. Document Retrieval Mode
- "Find conflicting methodology claims in these 3 papers" → answers with [Source 1, p.12], [Source 2, p.4]
- Extracts data from tables/figures with source links
2. Smart Synthesis
- "Summarize Author X's theory using only my uploaded chapters"
- "Compare these 5 studies' conclusions about CRISPR risks"
3. Drafting + Integrity Tools
- Write with auto-citations (APA/MLA/Chicago)
- Built-in plagiarism/AI detection (98.3% accuracy)

Why GPT-5 Excels Here
- 41% better at cross-document reasoning vs. GPT-4 (our benchmarks)
- Handles technical jargon in STEM/humanities
- Traces claims to your specific uploads - no hallucinations

Try It: https://ift.tt/iYBIXae

HN: Challenge Us
- How can we improve source tracing for math-heavy papers?
- Should model choice (GPT-5/Claude/Gemini) be task-automated?
- What safeguards would make you trust this for thesis work?

https://ift.tt/TC8I0Fr August 8, 2025 at 11:30PM

Show HN: Bringing Tech News from HN to My Community https://ift.tt/h7XiTuf

Show HN: Bringing Tech News from HN to My Community https://ift.tt/EJteajG August 8, 2025 at 11:29PM

Show HN: LLM from URL –– A free AI chat completion service directly from URL https://ift.tt/vDJ1uaS

Show HN: LLM from URL – A free AI chat completion service directly from URL

Usage: in the address bar of any web browser, type your question after https://818233.xyz/ and hit Enter to get an instant answer. You know the best part? Whitespace in the URL is supported in most web browsers! You can also use curl or wget to retrieve the same URL by replacing any whitespace with a '+' character. If you need an actual '+' character in your question, just use '++'.

Example: the URL " https://818233.xyz/hi there" in any web browser will return the same answer as if you sent "hi there" to an AI chatbot.
curl command: curl https://818233.xyz/hi+there
wget command: wget -qO- https://818233.xyz/hi+there

Limit: no chat history.
Fair use policy: abuse of the service will result in an IP ban.
Contact: for questions, suggestions or bug reports, feel free to drop me an email (hi@818233.xyz).

https://818233.xyz/ August 8, 2025 at 11:14PM
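The '+' convention described above ("whitespace becomes '+', a literal '+' becomes '++'") can be sketched as a pair of helpers. This is my own reconstruction of the stated rule, not code from the service, and the service's exact handling of edge cases may differ:

```python
def encode(question: str) -> str:
    """Encode a question for the URL path: double literal '+' first, then map spaces to '+'."""
    return question.replace("+", "++").replace(" ", "+")

def decode(path: str) -> str:
    """Invert encode: split on literal '++', turn remaining '+' back into spaces."""
    return "+".join(part.replace("+", " ") for part in path.split("++"))
```

For example, `encode("what is 2+2")` yields `"what+is+2++2"`, matching the curl/wget form shown in the post.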

Thursday, August 7, 2025

Show HN: Browser AI agent platform designed for reliability https://ift.tt/hjSiHyJ

Show HN: Browser AI agent platform designed for reliability

We’re very excited to share something we’ve been building. Notte https://www.notte.cc/ is a full-stack browser agent platform built to reliably automate a wide range of workflows. Browser agents aren’t new, but what is still hard is covering real-world flows reliably. The inspiration for Notte was to make a full-featured platform that bridges the agent reliability gap. We’ve packaged everything via a single API for ease of use:

- Site Interactions - observe website states, scrape data and execute actions
- Structured Output - get data in your exact format with Pydantic models
- Stealth browser sessions - built-in CAPTCHA solving, proxies, and anti-detection
- Hybrid workflows - combine scripting and AI agents to reduce costs and improve reliability
- Secrets vaults - credential management to store emails, passwords, MFA tokens, SSO, etc.
- Digital personas - digital identities with unique emails and phone numbers for account-creation workflows

With these tools, Notte lets you automate difficult tasks like account creation, form filling, and working in authenticated dashboards. Close compatibility with Playwright lets you cut LLM costs and improve execution speed by mixing web-automation primitives with agents, bringing in agents only for the specific parts that require reasoning and adaptability.

Here’s a short YouTube demo: https://www.youtube.com/watch?v=b1CzmfpdzaQ

If any of this sounds interesting, you can run your first agent following our quickstart on GitHub https://ift.tt/SZXPYxI . Or play around with our free plan through our Notte Console: https://console.notte.cc/ We’d love to hear if there’s anything else required before you’d try or trust it on your own workflows :) https://ift.tt/SZXPYxI August 8, 2025 at 12:12AM

Show HN: Creating a Binary Puzzle Game https://ift.tt/P7ETu8O

Show HN: Creating a Binary Puzzle Game

I built a free, web-based implementation of the “moons-and-suns” game that LinkedIn calls Tango. It’s just a visual skin for the classic binary puzzle (also known as Binairo or Takuzu). The code that generates an infinite supply of solvable puzzles is open-source.

What is a binary puzzle? There's a grid (an n × n board, commonly 6 × 6 or 8 × 8) and two symbols - traditionally 0/1, black/white, or, in LinkedIn’s case, moons and suns.

Rules
- Equal count - each row and each column contains the same number of each symbol (e.g. three 0s and three 1s in a 6 × 6 grid).
- No three in a row - you can’t have three identical symbols consecutively in any row or column.
- Unique lines - no two rows are identical, and no two columns are identical.

These three simple constraints make the puzzle non-trivial but uniquely determined: a correctly generated board has exactly one solution.

Some history: in the early 2000s it appeared in Japanese puzzle magazines as Binairo. It was later popularised in the West as Takuzu (“binary” in Japanese). LinkedIn rebranded the same mechanics as Tango, swapping 0/1 for moons and suns. The underlying logic is identical; the graphics are just a cosmetic layer.

How I generate an endless supply: the hardest part is guaranteeing that every new board is both valid and uniquely solvable. My generator follows a standard constructive approach:
- Backtracking placement - recursively fill cells, pruning any branch that would break a rule.
- Early symmetry breaking - enforce row/column uniqueness as soon as possible to cut the search space.
- Uniqueness verification - once a full grid is built, run a deterministic solver; if more than one solution exists, backtrack and try a different seed.

Live demo - https://taengo.vercel.app
Source code - https://ift.tt/2K0pywR (need to clean it up in the next days).

https://taengo.vercel.app August 7, 2025 at 11:41PM
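The rules above (equal count, no three in a row, and the row/column uniqueness the generator enforces) are easy to state as a validity check on a completed board. A minimal sketch, not the author's actual code:

```python
def valid_board(board):
    """Check a completed Binairo/Takuzu board of 0s and 1s against the rules."""
    n = len(board)
    cols = [[board[r][c] for r in range(n)] for c in range(n)]
    for line in board + cols:
        # Rule 1: equal count of each symbol in every row and column.
        if line.count(0) != line.count(1):
            return False
        # Rule 2: no three identical symbols consecutively.
        if any(line[i] == line[i + 1] == line[i + 2] for i in range(n - 2)):
            return False
    # Rule 3 (enforced by the generator): no duplicate rows or columns.
    return len({tuple(r) for r in board}) == n and len({tuple(c) for c in cols}) == n
```

A backtracking generator would call a check like this on partial lines to prune branches, then run a solver to confirm the finished grid has a unique solution.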

Show HN: FocusTree – a simple task app (prototype), free open source https://ift.tt/i5crJsT

Show HN: FocusTree – a simple task app (prototype), free open source Use it here: https://proc0.github.io/focustree/ All data is stored locally. I built this as a proof of concept because I needed unlimited branching on tasks and a way to walk through the task tree step by step. I will build on this as I go, so all feedback is greatly appreciated. https://ift.tt/O3msXGE August 7, 2025 at 11:14PM

Wednesday, August 6, 2025

Show HN: Tool that helps you launch your startup 10x cheaper and 1,000x faster https://ift.tt/q3VBfDw

Show HN: Tool that helps you launch your startup 10x cheaper and 1,000x faster https://ift.tt/eWB9w31 August 6, 2025 at 11:24PM

Show HN: Chilli – A lightweight microframework for CLIs in Zig https://ift.tt/N0mkwut

Show HN: Chilli – A lightweight microframework for CLIs in Zig

I've made an open-source microframework for creating command-line (CLI) applications in the Zig programming language. It's called Chilli, and it currently provides the following features:

- A declarative API for defining nested commands, flags, and positional arguments.
- Type-safe parsing of arguments from the command line and environment variables.
- Automatic generation of formatted `--help` and `--version` output.
- Support for command aliases, persistent flags, and other common CLI patterns.

You can find the project on GitHub: https://ift.tt/kLnBsV0 August 6, 2025 at 11:21PM

Show HN: Sinkzone DNS forwarder that blocks everything except your allowlist https://ift.tt/tIgrsTd

Show HN: Sinkzone DNS forwarder that blocks everything except your allowlist

Most site blockers work by blacklisting distractions. That never worked for me: the internet is too big, and there’s always something new to waste time on. I wanted the opposite: allowlist-only browsing. Block everything by default, and explicitly allow only what I need. So I built Sinkzone, a local DNS forwarder with two modes:

- Monitor mode: lets all traffic through, but logs every domain so you can decide what to allow.
- Focus mode: only allowlisted domains resolve; everything else is blocked (NXDOMAIN).

It’s open source, written in Go, and runs locally on macOS, Linux, and Windows. It works a bit like Pi-hole, but instead of blocking ads, it blocks everything unless you say otherwise. I’m curious whether this would be useful in your workflow. If you try it, please let me know what breaks, what works well, and what you’d improve. https://ift.tt/IZRQ5F9 August 6, 2025 at 11:08PM
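The two modes described above boil down to one decision per DNS query. A rough sketch of that logic (my own illustration in Python; Sinkzone itself is written in Go, and I'm assuming allowlist entries also cover their subdomains):

```python
def resolve_decision(domain: str, allowlist: set[str], focus: bool) -> str:
    """Decide how to answer a DNS query: allowlist-only in focus mode, pass-and-log otherwise."""
    # Match the exact domain or any parent (docs.python.org matches python.org).
    labels = domain.lower().rstrip(".").split(".")
    allowed = any(".".join(labels[i:]) in allowlist for i in range(len(labels)))
    if focus and not allowed:
        return "NXDOMAIN"   # focus mode: block everything not explicitly allowed
    return "FORWARD"        # monitor mode (or allowed domain): resolve upstream and log
```

Answering NXDOMAIN rather than dropping the query keeps clients from hanging on a timeout.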

Show HN: MCPJungle – self-hosted Gateway for connecting AI Agents to MCP tools https://ift.tt/WmClDrH

Show HN: MCPJungle – self-hosted Gateway for connecting AI Agents to MCP tools

Hey HN, I’ve been working on a tool called MCPJungle - an open-source, self-hosted MCP Registry + Gateway that helps MCP clients (like Claude, Cursor, custom AI agents) connect to multiple MCP servers through a single endpoint. MCP (Model Context Protocol) is gaining adoption as a standard for tool-calling in LLMs, but managing multiple servers (auth, tool discovery, ACLs, observability) is still a nightmare - especially across teams in an org. MCPJungle tries to fix that:

- Expose all your MCP servers behind a single `/mcp` endpoint
- Use ACLs to control which clients can view & call which MCP tools
- Keep track of all your MCP clients & servers from one central place

Individuals can run it locally for maximum privacy. Orgs deploy it as a shared gateway for all their AI agents. It’s written in Go & distributed as a single binary. You can run it via Homebrew or Docker. No auth or config is needed by default - it’s meant to be frictionless for developers. Still early, but the current version is stable and being used by a few early devs. Would love to hear feedback, critiques, or ideas. Happy to answer any questions here too. Thanks! https://ift.tt/eGhW0yN August 6, 2025 at 11:01PM
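The ACL idea above (which client can call which tool) amounts to a lookup the gateway runs before proxying a tool call. A toy sketch under my own assumptions - the table shape and tool-name patterns here are hypothetical, not MCPJungle's actual config format:

```python
import fnmatch

# Hypothetical ACL table: MCP client name -> glob patterns of tools it may call.
ACL = {
    "claude": {"github/*"},
    "cursor": {"github/create_issue"},
}

def can_call(client: str, tool: str) -> bool:
    """Gateway-side check before forwarding a tool call to an upstream MCP server."""
    return any(fnmatch.fnmatch(tool, pattern) for pattern in ACL.get(client, ()))
```

Unknown clients get an empty pattern set, so the default is deny.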

Tuesday, August 5, 2025

Show HN: Give coding agents MCP access to lint/test/format with 1 YAML file https://ift.tt/k5on9GO

Show HN: Give coding agents MCP access to lint/test/format with 1 YAML file

Fun little project I built for myself. Create a YAML file for your dev commands (lint, format, tests, etc), then expose those to coding agents via MCP. https://ift.tt/ot1FU3z Kinda like package.json scripts, but for agent runtimes, with commands invoked via MCP.

1. Simple setup: one YAML file is all it takes to create a custom MCP server for your coding agents. Add the YAML to your repo to share with your team.
2. Tool discovery: coding agents know which dev tools are available and the exact arguments they require. No more guessing CLI strings.
3. Improved security: limit which commands agents can run. Validate the arguments agents generate (e.g. ensure a file path is inside the project, not `~/.ssh/id_rsa`).
4. Works anywhere MCP works: Cursor, Windsurf, Cline, etc.
5. Speed: using MCP unlocks parallel execution, requires fewer tokens for generating commands, and eliminates errors in commands requiring iteration.
6. And more: strip ANSI codes/control characters, .env file loading, define required secrets without checking them in, support for exit codes/stdout/stderr, etc.

https://ift.tt/ot1FU3z August 5, 2025 at 10:58PM
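The security point above - validating that an agent-supplied file path stays inside the project - can be sketched like this. This is my own illustration of the check, not the project's actual implementation:

```python
from pathlib import Path

def inside_project(project_root: str, candidate: str) -> bool:
    """Reject agent-supplied paths that escape the project (e.g. ~/.ssh/id_rsa or ../../etc)."""
    root = Path(project_root).resolve()
    # expanduser() catches "~/..." tricks; joining an absolute path replaces root entirely,
    # so absolute escapes are rejected too. resolve() normalizes any "../" components.
    target = (root / Path(candidate).expanduser()).resolve()
    return target == root or root in target.parents
```

A gateway would run this on every path argument before invoking the underlying lint/test/format command.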

Show HN: Cartoony AI Voices on ESP32 with Pitch Shifting https://ift.tt/Bdw68oJ

Show HN: Cartoony AI Voices on ESP32 with Pitch Shifting I show how to use pitch shifting, supported by the arduino-audio-tools[1] and ElatoAI[2] libraries, on ESP32 to make OpenAI Realtime and Gemini Live voices sound like cartoon characters such as Alvin and the Chipmunks or Hulk. [1] https://ift.tt/uZonUip [2] https://ift.tt/xO97qU4 https://ift.tt/sRLabKE August 5, 2025 at 11:30PM
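The libraries handle the actual pitch shifting, but the underlying math is simple equal-tempered scaling: shifting by n semitones multiplies frequency by 2^(n/12), so a chipmunk-style shift of one octave up doubles it. A quick sketch of that ratio (note that naive resampling at this rate also changes duration; real pitch shifters compensate to keep timing intact):

```python
def playback_rate(semitones: float) -> float:
    """Frequency ratio for an equal-tempered pitch shift of the given number of semitones."""
    return 2.0 ** (semitones / 12.0)
```

So `playback_rate(12)` gives 2.0 (one octave up, chipmunk territory) and `playback_rate(-12)` gives 0.5 (one octave down, Hulk territory).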

Show HN: Stagewise (YC S25) – Front end coding agent for existing codebases https://ift.tt/mQhuvHt

Show HN: Stagewise (YC S25) – Front end coding agent for existing codebases

Hey HN, we're Julian and Glenn, and we're building stagewise ( https://stagewise.io ), a frontend coding agent that lives inside your browser on localhost and operates on local codebases. You can spawn the agent into locally running web apps in dev mode with `npx stagewise` from the project root. The agent lets you click on HTML elements in your app and enter prompts like 'increase the height here', and it will implement the changes in your source code.

Before stagewise, we were building a vertical SaaS for logistics from scratch and loved using prototyping tools like v0 or Lovable to get to the first version. But when switching from v0/Lovable to Cursor for local development, we felt like the frontend magic was gone. So we decided to build stagewise to bring that same magic to local development. The first version of stagewise just forwarded a prompt with browser context to existing IDEs and agents (Cursor, Cline, ..) and went viral on X after we open sourced it. However, the APIs of existing coding agents were very limiting, so we figured that building our own agent would unlock the full potential of stagewise.

Here's how it works: when you run `npx stagewise`, our CLI proxies your running web application in dev mode and injects a toolbar containing the coding agent on top of it. Each prompt you send is enriched with browser context and sent to our CLI, which calls our backend and modifies the source code of your local codebase accordingly. Here's a demo of our agent changing the login UI of Cal.com, a popular open-source meeting scheduling app: https://www.youtube.com/watch?v=BkDcAozK9L4 .

So far, we've seen great adoption from non-technical users who wanted to continue building their Lovable prototype locally. We personally use the agent almost daily to make changes to our landing page and to build the UI of new features on our console ( https://ift.tt/LhlG0sW ). If you have an app running in dev mode, simply `cd` into the app directory and run `npx stagewise` - the agent should appear, ready to play with. We're very excited to hear your feedback! https://ift.tt/jPpXknS August 5, 2025 at 09:38PM

Monday, August 4, 2025

Show HN: Mathpad – Physical keypad for typing 100+ math symbols anywhere https://ift.tt/WHqrGN1

Show HN: Mathpad – Physical keypad for typing 100+ math symbols anywhere

Here's something different from your usual fare: a physical keypad that lets you directly type math! Ever tried typing mathematical equations in your code IDE, email, or on Slack? You might know it can be tricky. Mathpad solves this with dedicated keys for Greek letters, calculus symbols, and more. Press the ∫ key and get ∫, in any application that accepts text. It uses Unicode composition, so it works everywhere: browsers, chat apps, code editors, Word, you name it. Basically, anywhere you can type text, Mathpad lets you type mathematics.

I built Mathpad after getting frustrated with the friction of typing equations in e.g. Word, and what a pain in the ass it was to find the specific symbols I needed. I assumed that a product like Mathpad already existed, but that was not true, so I had to build it myself. It turned out to be pretty useful! Three years of solo development later, I'm launching on Crowd Supply. One of the trickiest parts of this project was finding someone who could manufacture custom keycaps with mathematical symbols. Shoutout to Loic at 3dkeycap.com for making it possible!

Fully open source (hardware + software): https://ift.tt/7sGCDSP
Campaign: https://ift.tt/ej1WM0B
Project log: https://ift.tt/QFsEHnl

https://ift.tt/ej1WM0B August 3, 2025 at 03:43AM

Show HN: I spent 6 years building a ridiculous wooden pixel display https://ift.tt/7mEW4fP

Show HN: I spent 6 years building a ridiculous wooden pixel display I built the world's most impractical 1000-pixel display and anyone in the world can draw on it. It draws a single pixel at a time and takes 30-60 minutes to complete a single image. Anyone can participate in the project by voting for the next image to be drawn, and submitting images. https://ift.tt/mYCg8wF August 4, 2025 at 11:16PM

Sunday, August 3, 2025

Show HN: Zomni – An AI sleep coach that personalizes CBT-I for everyday use https://ift.tt/Ve6T7am

Show HN: Zomni – An AI sleep coach that personalizes CBT-I for everyday use

Hi HN, we built Zomni because we were tired of sleep trackers that show data but don’t help you actually sleep better. Zomni is a personal sleep coach powered by AI and rooted in CBT-I, the most effective treatment for insomnia. It doesn't just record your sleep; it gives you a daily plan and dynamic recommendations tailored to your real habits, rhythm, and mindset.

The problem: most sleep apps show you charts like “6h 42min” or “sleep efficiency: 78%,” but leave you wondering: now what? They often make sleep worse by encouraging unrealistic goals and reinforcing bad patterns (like over-napping or obsessing about 8 hours).

What we built:
- A fully conversational AI sleep coach (built on OpenAI)
- Hyper-personalized advice based on your last 3 nights of sleep
- A CBT-I–based sleep plan that updates automatically

No wearables, no stress — just real habit change. We’d love feedback from tech, behavior, or personal perspectives. Thanks for reading, Zomni Team https://ift.tt/CcQ3wR8 August 4, 2025 at 01:01AM

Show HN: Enforce TDD in Claude Code https://ift.tt/m8MIdNo

Show HN: Enforce TDD in Claude Code https://ift.tt/qTALyX2 August 4, 2025 at 12:25AM

Show HN: My Bytecode Optimizer Beats Copilot by 2X https://ift.tt/TA9zcCS

Show HN: My Bytecode Optimizer Beats Copilot by 2X https://ift.tt/iskcpf0 August 1, 2025 at 12:15AM

Saturday, August 2, 2025

Show HN: WebGPU enables local LLM in the browser – demo site with AI chat https://ift.tt/fbNBH7y

Show HN: WebGPU enables local LLM in the browser – demo site with AI chat

A browser LLM demo built on JavaScript and WebGPU. WebGPU is already supported in Chrome, Safari, Firefox, iOS (v26) and Android.

Demo (similar to ChatGPT): https://andreinwald.github.io/browser-llm/
Code: https://ift.tt/1jbXcFp

- No need to use your OPENAI_API_KEY - it's a local model that runs on your device
- No network requests to any API
- No need to install any program
- No need to download files to your device (the model is cached in the browser)
- The site will ask before downloading large files (the LLM model) to the browser cache
- Hosted on GitHub Pages from this repo - secure, because you can see what you are running

https://andreinwald.github.io/browser-llm/ August 2, 2025 at 09:09PM

Show HN: Persisting Data with DuckDB, OPFS and WASM https://ift.tt/H93F2gA

Show HN: Persisting Data with DuckDB, OPFS and WASM https://ift.tt/qScMLtV August 2, 2025 at 10:58PM

Friday, August 1, 2025

Show HN: TraceRoot – Open-source agentic debugging for distributed services https://ift.tt/5uZUV4E

Show HN: TraceRoot – Open-source agentic debugging for distributed services

Hey, Xinwei and Zecheng here, we are the authors of TraceRoot ( https://ift.tt/XVcwNY8 ). TraceRoot ( https://traceroot.ai ) is an open-source debugging platform that helps engineers fix production issues faster by combining structured traces, logs, source code context, and discussions in GitHub PRs, issues, Slack channels, etc. with AI agents.

At the heart are our lightweight Python ( https://ift.tt/3X2BVT6 ) and TypeScript ( https://ift.tt/2yhNUlI ) SDKs - they hook into your app using OpenTelemetry and capture logs and traces. These are sent either to a local Jaeger ( https://ift.tt/EU8azB5 ) + SQLite backend or to our cloud backend, where we correlate them into a single view. From there, our custom agent takes over. The agent builds a heterogeneous execution tree that merges spans, logs, and GitHub context into one internal structure. This allows it to model the control and data flow of a request across services. It then uses LLMs to reason over this tree - pruning irrelevant branches, surfacing anomalous spans, and identifying likely root causes. You can ask questions like “what caused this timeout?” or “summarize the errors in these 3 spans”, and it can trace the failure back to a specific commit, summarize the chain of events, or even propose a fix via a draft PR.

We also built a debugging UI that ties everything together - you explore traces visually, pick spans of interest, and get AI-assisted insights with full context: logs, timings, metadata, and surrounding code. Unlike most tools, TraceRoot stores long-term debugging history and builds structured context for each company - something we haven’t seen many others do in this space.

What’s live today:
- Python and TypeScript SDKs for structured logs and traces
- AI summaries, GitHub issue generation, and PR creation
- A debugging UI that ties everything together

TraceRoot is MIT licensed and easy to self-host (via Docker). We support both local mode (Jaeger + SQLite) and cloud mode. Inspired by OSS projects like PostHog and Supabase: the core is free; enterprise features like agent mode, multi-tenancy, and Slack integration are paid. If you find it interesting, you can see a demo video here: https://www.youtube.com/watch?v=nb-D3LM0sJM We’d love you to try TraceRoot ( https://traceroot.ai ) and share any feedback. If you're interested, our code is available here: https://ift.tt/XVcwNY8 . If we don’t have something, let us know and we’d be happy to build it for you. We look forward to your comments! https://ift.tt/XVcwNY8 August 1, 2025 at 11:58PM

Show HN: typed - Markdown app for writers, students, professionals, and creators https://ift.tt/Z6lPN2s

Show HN: typed - Markdown app for writers, students, professionals, and creators https://ift.tt/fe6JFYn August 2, 2025 at 12:09AM