
Friday, October 31, 2025

Show HN: A chess middlegame trainer so I can stop blundering https://ift.tt/5LuCqMz

Show HN: A chess middlegame trainer so I can stop blundering https://dontblunder.com November 1, 2025 at 02:30AM

Show HN: Build your own Bracket City puzzle (official tool!) https://ift.tt/6cXfFZK

Show HN: Build your own Bracket City puzzle (official tool!) Hi HN — Bracket City is the word puzzle game I made earlier this year and (in part thanks to this community, see https://ift.tt/CGWJ4lm ) managed to license to the Atlantic in April. The game has been growing a lot and I wanted to share the latest: a tool that lets anyone make a Bracket City puzzle — specifically a “Bracket Suburb”! I made this tool to help me construct puzzles, and I’ve been using it every day for months. After the Atlantic launch, I started to get the occasional inquiry about whether there was a way to make your own puzzle. One guy wanted to make a Bracket City puzzle part of a puzzle hunt he made to propose to his girlfriend (he did it!), and that convinced me it would be fun to make something publicly available. I got the Atlantic on board with the idea, and we are launching it today with an "example" custom puzzle: a Halloween/horror-themed puzzle by my pal Wyna Liu of NYT Connections fame. https://ift.tt/Z8sdUeo And we've got a few other fun "celeb" puzzles lined up for later this year. The thought is that folks can use the builder to make custom puzzles for birthday wishes/event invites/insults/proposals/break-ups in addition to “normal” Bracket City puzzles. I'm also hoping to learn more about the potential of the format – crossword puzzles have benefited so much from the creativity of constructors, and I'm hoping bracket puzzles do the same. The good news is that it’s way easier to construct a bracket puzzle than a crossword. Once you try it, you’ll see why: you have many more degrees of freedom. In a crossword, each added word increases the level of constraint exponentially — every new entry sharply reduces the remaining options for completing the grid. Bracket puzzles are the opposite: as you add clues, you expand the available fodder for new ones. Anyway, I would love any/all feedback and to try puzzles created by folks here.
I’m hoping we will figure out a way to highlight the best community puzzles on the Atlantic soon! PS and please keep playing the main game / sending me feedback / denouncing me on the subreddit https://ift.tt/pIgCJhX October 31, 2025 at 09:55PM
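In Bracket City, clues nest inside brackets and resolve from the inside out: solving an inner clue substitutes its answer into the clue containing it. A minimal sketch of that inside-out resolution (the clue set below is a toy example of my own, not the builder's actual format):

```python
import re

# Toy answer key of my own invention; not puzzles from the actual builder.
ANSWERS = {
    "opposite of day": "night",
    "night attire": "pajamas",
}

def solve(puzzle, answers):
    """Resolve innermost bracketed clues first, substituting answers outward."""
    innermost = re.compile(r"\[([^\[\]]+)\]")  # a bracket pair with no brackets inside
    while True:
        m = innermost.search(puzzle)
        if m is None:
            return puzzle
        puzzle = puzzle[: m.start()] + answers[m.group(1)] + puzzle[m.end():]

print(solve("put on your [[opposite of day] attire]", ANSWERS))
# → put on your pajamas
```

This is also why construction gets easier as the puzzle grows: every solved inner answer becomes fodder that an outer clue can reference.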

Thursday, October 30, 2025

Show HN: Meals You Love – AI-powered meal planning and grocery shopping https://ift.tt/dYmFreb

Show HN: Meals You Love – AI-powered meal planning and grocery shopping Meals You Love is a meal planning app that creates weekly meal plans tailored to your tastes and dietary preferences. It integrates with Kroger and Instacart's APIs so you can add your meal plan groceries directly to your cart. You can also import your own recipes to include alongside AI suggestions. I originally built this to help my wife with meal planning and grocery shopping. We were always struggling to decide what to make and inevitably forgot ingredients. Most meal planners felt too rigid or generic, and few handled the grocery side well (or at all). We've also used meal kits like Home Chef in the past but they end up being quite expensive and produce a comical amount of packaging waste, plus you still wind up needing to purchase groceries anyway. In all honesty, I also wanted an excuse to try building something "real" using AI and to see if it could be used in an actually useful manner. Would love feedback from anyone interested in food, meal planning, or product design! Tech stack: - Cloud Run - Firestore - Vertex AI / Gemini https://ift.tt/07fq2Es October 28, 2025 at 12:57AM

Show HN: I made CSV files double-click to open in Google Sheets instead of Excel https://ift.tt/nx2loQ3

Show HN: I made CSV files double-click to open in Google Sheets instead of Excel I built my first macOS app to automatically open CSV and XLS files in Google Sheets. I work as a marketing/revops person and often have to combine data from different platforms for reporting purposes. Google made the import flow super broken, with too many clicks in between. So I built a simple solution that saves me some time. Sharing it here; you can test it out for free. No subscription bullshit: one-time payment for unlimited usage if you like it. Happy double clicking! https://csvtosheets.com October 31, 2025 at 12:55AM

Show HN: I made a heatmap diff viewer for code reviews https://ift.tt/PmHQXI1

Show HN: I made a heatmap diff viewer for code reviews 0github.com is a pull request viewer that color-codes every diff line/token by how much human attention it probably needs. Unlike PR-review bots, we try to flag not just by "is it a bug?" but by "is it worth a second look?" (examples: hard-coded secret, weird crypto mode, gnarly logic, ugly code). To try it, replace github.com with 0github.com in any pull-request URL. Under the hood, we split the PR into individual files, and for each file, we ask an LLM to annotate each line with a data structure that we parse into a colored heatmap. Examples: https://ift.tt/m9bLudI https://ift.tt/bFYBjut https://ift.tt/7jw0ip3 https://ift.tt/v96uQC7 Notice how all the example links have a 0 prepended before github.com. This navigates you to our custom diff viewer where we handle the same URL path parameters as github.com. Darker yellows indicate that an area might require more investigation. Hover on the highlights to see the LLM's explanation. There's also a slider on the top left to adjust the "should review" threshold. Repo (MIT license): https://ift.tt/VmzK8H3 https://0github.com October 30, 2025 at 09:21PM
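To make the per-line annotation idea concrete, here is a rough sketch of the kind of pipeline described: the LLM returns a score per line, which gets mapped to a yellow shade and filtered by the review-threshold slider. The schema and color mapping below are my own guesses, not the project's actual data structure:

```python
def shade(score):
    """Map an attention score in [0, 1] to an RGB yellow: 0 -> white, 1 -> dark yellow."""
    assert 0.0 <= score <= 1.0
    return (
        int(255 - score * (255 - 180)),  # red channel drifts toward 180
        int(255 - score * (255 - 150)),  # green channel drifts toward 150
        int(255 - score * 255),          # blue channel drops toward 0
    )

def visible(annotations, threshold):
    """Mimic the 'should review' slider: keep only lines scored at or above it."""
    return [a for a in annotations if a["score"] >= threshold]

# Hypothetical per-line annotations parsed from the LLM's response:
annotations = [
    {"line": 12, "score": 0.9, "why": "hard-coded secret"},
    {"line": 40, "score": 0.2, "why": "rename only"},
]
```

Darker shades then render as the "needs a second look" highlights, with the `why` string surfaced on hover.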

Wednesday, October 29, 2025

Show HN: Research Hacker News, ArXiv & Google with Hierarchical Bayesian Models https://ift.tt/0zENqi4

Show HN: Research Hacker News, ArXiv & Google with Hierarchical Bayesian Models Hi Hacker News! I’m a Bayesian statistician who has been working on applying hierarchical mixture models (originally developed for genomics) to structure text data, and in the process, used these models to build (what started as a personal) tool for conducting literature reviews and deep research. My literature review process starts with a broad search to find a few key papers/groups, and from there expands along their citation networks. I needed to conduct a few rounds of literature reviews during the course of my research and decided to build a tool to facilitate this process. The tool started as an experimental wrapper over low-level statistical software in C, quickly became a testing/iteration ground for our API, and is now my personal go-to for lit reviews. The tool organizes corpora of text content, visualizes the high-level themes, and enables me to pull up relevant excerpts. Unlike LLMs, this model transparently organizes the data and can train from scratch quickly on small datasets to learn custom hierarchical taxonomies. My favorite part of the tool is the citation network integration: any research paper it pulls up has a button, “Citation Network Deep Dive”, that pulls every paper that cites or is cited by the original paper, and organizes it for further exploration. I initially built this tool for academic research, but ended up extending it to support Hacker News (to mine technical conversation), the top 200 Google results, and earnings transcripts. We have a gallery of ready-to-explore results on the homepage. If you are kicking off a custom deep dive, it takes about 1-5 minutes for academic search, 3-7 minutes for Hacker News, and 5-10 minutes for Google.
To demonstrate the process, I put together a video walkthrough of a short literature review I conducted on AI hallucinations: https://www.youtube.com/watch?v=OUmDPAcK6Ns I host this tool on my company’s website, free for personal use. I’d love to know if the HN community finds it useful (or to hear what breaks)! https://ift.tt/LDIumCN October 28, 2025 at 10:49PM
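The "Citation Network Deep Dive" step (pull every paper that cites or is cited by a seed, then expand outward) is essentially breadth-first expansion over the citation graph. A sketch under assumptions; `fetch_neighbors` stands in for whatever citation API backs the tool:

```python
def expand(seeds, fetch_neighbors, rounds=1):
    """Breadth-first expansion: each round adds papers one citation hop away."""
    seen = set(seeds)
    frontier = set(seeds)
    for _ in range(rounds):
        nxt = set()
        for paper in frontier:
            nxt |= set(fetch_neighbors(paper))  # papers citing or cited by `paper`
        frontier = nxt - seen
        seen |= frontier
    return seen

# Toy citation graph standing in for a real API:
GRAPH = {"seed": ["a", "b"], "a": ["c"], "b": [], "c": []}
papers = expand({"seed"}, lambda p: GRAPH.get(p, []), rounds=2)
```

Each expanded set can then be fed back to the mixture model, which is what lets a review grow from a handful of key papers into an organized taxonomy.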

Show HN: Kedr Programming Language https://ift.tt/JuA2ZnO

Show HN: Kedr Programming Language Kedr is a programming language for games, primarily deriving from F# and Rust. Its approach is to create a game with automatic reference counting, and then switch impactful types to manual memory management one by one. Below are some of my findings. We are used to having imports at the beginning of every file, but it might be better to keep them all in one place for the entire crate. This way code can be moved freely between files, and smaller files are encouraged. Opening a file and immediately seeing useful code is also refreshing. It is highly beneficial when braces always mean closure, which is a strong argument for indent-based code structure. Object tree creation looks more natural without parentheses and commas for function invocation. Sequential code enforcement, where elements can only depend on what is defined above them, opens new possibilities. One is splitting the type constructor among multiple files, potentially located in different crates. Here is an example of how this is useful: one crate contains UI control definitions with layout code, while additional crates extend control types with the data and calculations necessary for their rendering, resulting in multiple switchable backends, like Vulkan or Skia. Maintaining such data outside the type complicates the code. There is a tendency to move away from type hierarchies; I think it is better to tune them down and reevaluate. A major source of complexity is the ability to override an existing implementation of a method, because code is being added to a type without a guarantee of whether it is going to stay. Such a guarantee would make hierarchies worth keeping more often. https://ift.tt/7fkqXwI October 29, 2025 at 11:27PM

Show HN: Oblivious HTTP for Go https://ift.tt/vus0oqF

Show HN: Oblivious HTTP for Go I couldn't find a suitable Go implementation of an Oblivious HTTP client and gateway (RFC 9458), so I'm open-sourcing ours. Some background: OHTTP is a protocol that hides who you are from the data you send - if you've ever used products from Apple, Mozilla, Fastly, or Cloudflare (to name a few), you've probably used OHTTP. Key features: - implemented as http.RoundTripper - supports chunked transfer encoding - customizable HPKE (e.g., for custom hardware-based encryption) - built on top of the twoway and bhttp libraries Repo: https://ift.tt/3q4vV5l Detail: https://ift.tt/zdNIlWu Explainer: https://ift.tt/HFaEu2Q Specs: https://ift.tt/NLk9PnX , https://ift.tt/Ulj9RJk... Feedback welcome! https://ift.tt/3q4vV5l October 29, 2025 at 11:21PM

Tuesday, October 28, 2025

Show HN: Thymis.io Device management – images pre-loaded with your applications https://ift.tt/5snFAe1

Show HN: Thymis.io Device management – images pre-loaded with your applications https://thymis.io/ October 28, 2025 at 11:48PM

Show HN: Linux CLI game, quiz, cheatsheet and map from my mind mapping app https://ift.tt/oBncgpS

Show HN: Linux CLI game, quiz, cheatsheet and map from my mind mapping app I'm working on a mind mapping app that allows for the integration of gamification features and makes maps nicer and easier to remember. I was missing more advanced graphics in existing apps and the ability to treat one as my note-taking and learning tool. I.e., how (and why?) should I remember a mind map if it looks the same as all the other maps and all I can do is pick a couple of shapes and dashed/dotted lines? This simply doesn't work with my brain, so I decided to create something better :) The map on the page is a preview of this, but I'm curious about the quiz, typing games and cheatsheet; I would love some feedback (and ideas) on such training modes! Another thing I'd love is comments about any features you miss in existing mind-mapping software. If you're a mind mapping enthusiast and interested in beta testing, check out the contact form :) P.S. Last week I posted a simple map, but I didn't realize I can use Show HN for stuff that I've made that lets you play with something. It just vanished down the pages, so I've added new commands, modified all the bad answers to be more realistic, added some features for the quiz and typing game, and am reposting to get proper feedback and hopefully some testers! As well, if you're not ready to take the lengthy quiz [nearly 180 questions], pick a smaller randomly generated subset to play with! https://ift.tt/oD0gnAh October 28, 2025 at 11:26PM

Show HN: Dexto – Connect your AI Agents with real-world tools and data https://ift.tt/GYUsarB

Show HN: Dexto – Connect your AI Agents with real-world tools and data Hi HN, we’re the team at Truffle AI (YC W25), and we’ve been working on Dexto ( https://www.dexto.ai/ ), a runtime and orchestration layer for AI Agents that lets you turn any app, service or tool into an AI assistant that can reason, think and act. Here's a video walkthrough - https://www.youtube.com/watch?v=WJ1qbI6MU6g We started working on Dexto after helping clients set up agents for everyday marketing tasks like posting on LinkedIn, running Reddit searches, generating ad creatives, etc. We realized that the LLMs weren’t the issue. The real drag was the repetitive orchestration around them: - wiring LLMs to tools - managing context and persistence - adding memory and approval flows - tailoring behavior per client/use case Each small project quietly ballooned into weeks of plumbing where each customer had mostly the same, but slightly custom, requirements. So instead of another framework where you write orchestration logic yourself, we built Dexto as a top-level orchestration layer where you declare an agent’s capabilities and behavior: - which tools or MCPs the agent can use - which LLM powers it - how it should behave (system prompt, tone, approval rules) Once configured, the agent runs as an event-driven loop - reasoning through steps, invoking tools, handling retries, and maintaining its own state and memory. Your app doesn’t manage orchestration; it just triggers and subscribes to the agent’s events and decides how to render or approve outcomes. Agents can run locally, in the cloud, or hybrid. Dexto ships with a CLI, a web UI, and a few sample agents to get started. To show its flexibility, we wrapped some OpenCV functions into an MCP server and connected it to Dexto ( https://youtu.be/A0j61EIgWdI ). Now, a non-technical user can detect faces in images or create custom photo collages by talking to the agent.
The same approach works for coding agents, browser agents, multi-speaker podcast agents, and marketing assistants tuned to your data. https://ift.tt/KwiFh0d Dexto is modular, composable and portable, allowing you to plug in new tools or even re-expose an entire Dexto agent as an MCP server and consume it from other apps like Cursor ( https://www.youtube.com/watch?v=_hZMFIO8KZM ). Because agents are defined through config and powered by a consistent runtime, they can run anywhere without code changes, making cross-agent (A2A) interactions and reuse effortless. In a way, we like to think of Dexto as a “meta-agent” or “agent harness” that can be customized into a specialized agent depending on its tools, data, and platform. For the time being, we have opted for an Elastic License v2 to give the community maximum flexibility to build with Dexto while preventing bigger players from taking over and monetizing our work. We’d love your feedback: - Try the quickstart and tell us what breaks - Share a use case you want to ship in a day, and we’ll suggest a minimal config Repo: https://ift.tt/4sT7h8z Docs: https://ift.tt/LBaxWXZ Quickstart: npm i -g dexto https://ift.tt/4sT7h8z October 28, 2025 at 11:07PM

Monday, October 27, 2025

Show HN: nblm - Rust CLI/Python SDK for NotebookLM Enterprise automation https://ift.tt/0OuLiBN

Show HN: nblm - Rust CLI/Python SDK for NotebookLM Enterprise automation I built nblm, a Rust-based toolset to automate Google’s NotebookLM Enterprise API reliably. It aims to replace brittle curl snippets with a stable interface you can use in cron/CI or agentic systems. * Python SDK (type-safe): IDE auto-complete, fewer JSON key typos, fits complex workflows. * Standalone CLI: single fast binary for scripts and pipelines. * Handles auth, batching, retries; you focus on logic. Rust core is fast and memory-safe. * Enterprise API only (consumer NotebookLM isn’t supported). Repo: https://ift.tt/fbdorm4 Feedback is welcome—I'm especially interested in thoughts on the Python SDK’s design for building automated/agentic workflows. Thanks! https://ift.tt/fbdorm4 October 27, 2025 at 09:58PM

Sunday, October 26, 2025

Show HN: MyraOS – My 32-bit operating system in C and ASM (Hack Club project) https://ift.tt/UHA241E

Show HN: MyraOS – My 32-bit operating system in C and ASM (Hack Club project) Hi HN, I’m Dvir, a young developer. Last year, I got rejected after a job interview because I lacked some CPU knowledge. After that, I decided to deepen my understanding in the low level world and learn how things work under the hood. I decided to try and create an OS in C and ASM as a way to broaden my knowledge in this area. This took me on the most interesting ride, where I’ve learned about OS theory and low level programming on a whole new level. I’ve spent hours upon hours, blood and tears, reading different OS theory blogs, learning low level concepts, debugging, testing and working on this project. I started by reading University books and online blogs, while also watching videos. Some sources that helped me out were OSDev Wiki ( https://ift.tt/ixfoH0w ), OSTEP ( https://pages.cs.wisc.edu/~remzi/OSTEP ), open-source repositories like MellOS and LemonOS (more advanced), DoomGeneric, and some friends that have built an OS before. This part was the longest, but also the easiest. I felt like I understood the theory, but still could not connect it into actual code. Sitting down and starting to code was difficult, but I knew that was the next step I needed to take! I began by working on the bootloader, which is optional since you can use a pre-made one (I switched to GRUB later), but implementing it was mainly for learning purposes and to warm up on ASM. These were my steps after that: 1) I started implementing the VGA driver, which gave me the ability to display text. 2) Interrupts - IDT, ISR, IRQ, which signal to the CPU that a certain event occurred and needs handling (such as faults, hardware connected device actions, etc). 3) Keyboard driver, which enables me to display the same text I type on my keyboard. 
4) PMM (physical memory management). 5) Paging and virtual memory management. 6) RTC driver - a clock addition (which was, in my opinion, optional). 7) PIT driver - ticks at a fixed interval. 8) FS (file system) and physical HDD drivers - for the HDD I chose PATA (an HDD communication protocol) for simplicity (SATA is a newer but harder option). For the FS I chose EXT2 (the Second Extended FileSystem), a foundational Linux FS introduced in 1993. It is not the simplest FS structure, but it is very popular among hobby OSes: well supported, easy to set up and upgrade to newer EXT versions, and with a lot of material online compared to other options. This was probably the longest and largest feature I worked on. 9) Syscall support. 10) Libc implementation. 11) Processing and scheduling for multiprocessing. 12) At this point I also made a shell to test it all. I now had a working shell, but later decided to go further and add a GUI! I was working on the FS (stage 8) when I heard about Hack Club’s Summer of Making (SoM). This was my first time participating in Hack Club, and I want to express my gratitude and share how much I enjoyed it. At first I just wanted to declare the OS finished after completing the FS and a few other drivers, but SoM changed my perspective completely. Because of the competition, I started to think that I needed to ship a complete OS, with processing, a GUI, and the bare minimum ability to run Doom. I wanted to show the SoM community how everything works. So I worked on it for another 2 months after finishing the shell, just because of SoM, bringing the project to almost 7 months of work. In this time I added full GUI support with dirty rectangles and double buffering, made a GUI mouse driver, and even made a full Doom port, things I would never have thought of without participating in SoM. This is my SoM project: https://ift.tt/xhFHm1W .
Every project has challenges, especially one at this low a level. I had to do a lot of debugging while working on this, and it is no easy task. I highly recommend GDB, which helped me debug so many of my problems, especially memory ones. The first major challenge I encountered was while coding processes: I realized that a lot of my paging code was completely wrong and poorly tested, and had to be reworked. By this time I was already in the competition, and it was difficult keeping up with devlogs and new features while fixing old problems in code I had written a few months earlier. More major problems occurred when trying to run Doom, and unlike the last problem, this was a disaster. I had random page faults and memory problems; one run could work while the next one wouldn't, and the worst part is that it happened only in Doom, not in processes I created myself. These issues took a lot of time to figure out. I began to question the Doom code itself, and even thought about giving up on the whole project. After a lot of time spent debugging, I fixed the issues: it was a combination of scheduling issues, libc issues, and QEMU not having enough memory (I had wrongly assumed 128MB for the whole OS was enough). Finally, I worked through all the difficulties and shipped the project! In the end, the experience of working on this project was amazing. I learned a lot, grew and improved as a developer, and I thank SoM for helping to increase my motivation and make the project memorable and unique in ways I never imagined. The repo is at https://ift.tt/UhWScHj . I’d love to discuss any aspect of this with you all in the comments! https://ift.tt/UhWScHj October 27, 2025 at 03:43AM

Show HN: I Built DevTools for Blazor (Like React DevTools but for .NET) https://ift.tt/pu1cjKI

Show HN: I Built DevTools for Blazor (Like React DevTools but for .NET) Hi HN! I've been working on developer tools for Blazor that let you inspect Razor components in the browser, similar to React DevTools or Vue DevTools. The problem: Blazor is Microsoft's frontend framework that lets you write web UIs in C#. It's growing fast but lacks the debugging tools other frameworks have. When your component tree gets complex, you're stuck with Console.WriteLine debugging. What I built: a browser extension + NuGet package that shows the Razor component tree in your browser; maps DOM elements back to their source components; highlights components on hover; and works with both Blazor Server and WASM. How it works: the NuGet package creates shadow copies of your .razor files and injects invisible markers during compilation. These markers survive the Razor→HTML pipeline. The browser extension reads these markers to reconstruct the component tree. Current status: beta - it works but has rough edges. Found some bugs when testing on larger production apps that I'm working through. All documented on GitHub. Technical challenges solved: getting markers through the Razor compiler without breaking anything; working around CSS isolation that strips unknown attributes; and making it work with both hosting models. It's completely open source: https://ift.tt/JziHSgk Demo site where you can try it: https://ift.tt/QUfLWVz Would love feedback, especially from anyone building production Blazor apps. What debugging pain points do you have that developer tools could solve? https://ift.tt/oCEdYey October 26, 2025 at 11:34PM

Show HN: FlashRecord – 2MB Python-native CLI screen recorder https://ift.tt/fNCRMod

Show HN: FlashRecord – 2MB Python-native CLI screen recorder Hi HN — I built FlashRecord, a tiny (≈2MB) Python-native CLI tool for screenshots and GIF recordings aimed at developers who want automation-friendly, scriptable screen capture without a GUI. ### What it is - CLI-first and importable (import flashrecord) so you can plug it into scripts, tests, CI pipelines, or docs generation. - Outputs GIFs (and screenshots) with a pure-Pillow/NumPy implementation of a CWAM-inspired compression pipeline (multi-scale saliency, temporal subsampling, adaptive scaling). - Cross-platform (Windows/macOS/Linux), zero-config defaults, and production-ready with tests/docs. --- ### Why it might be interesting - Tiny install and no heavyweight GUI/tooling to manage. - Designed for automation: generate evidence GIFs in CI, attach demo GIFs to PRs, or create tutorial assets from scripts. - Compression focuses on preserving visually important regions while reducing file size dramatically in typical UI demos. --- ### Quick try (from source) git clone https://ift.tt/Jm5F8pW cd FlashRecord pip install -e . flashrecord @sc # instant screenshot flashrecord @sv 5 10 # 5s GIF at 10 FPS (interactive by default) --- Repo & license: https://ift.tt/Jm5F8pW — MIT licensed. --- I’m happy to answer technical questions, performance numbers, cross-platform quirks, or walk through the compression pipeline. Feedback, issues, and PRs welcome. https://ift.tt/Jm5F8pW October 27, 2025 at 12:12AM

Show HN: AI bookmarking app for people who hate AI https://ift.tt/fCl2mFx

Show HN: AI bookmarking app for people who hate AI https://tryeyeball.com/ October 26, 2025 at 10:58PM

Saturday, October 25, 2025

Show HN: I created a small 2D game about an ant https://ift.tt/RLO3NpZ

Show HN: I created a small 2D game about an ant Hello HN! I created a short game in just a few days, just for fun, where you play as an ant and feed it apples. The game also features random landscape generation, where clouds and trees are randomly distributed across the coordinates (only the trees don't vary in the y direction). This is what took me the longest time :) I would appreciate your feedback ^ ^ https://ift.tt/7DNPVrQ October 26, 2025 at 02:20AM

Show HN: Random Makers – Show HN and Product Hunt, but Faster and Not Corporate https://ift.tt/ckvnVwB

Show HN: Random Makers – Show HN and Product Hunt, but Faster and Not Corporate https://ift.tt/HjRw6lh October 26, 2025 at 01:02AM

Show HN: Pyxis CodeCanvas a lightweight, client-side IDE for iPad and browsers https://ift.tt/VW3KbHU

Show HN: Pyxis CodeCanvas a lightweight, client-side IDE for iPad and browsers I’ve been building a browser IDE called *Pyxis CodeCanvas*, designed mainly for iPad and quick coding sessions. It’s still a work in progress (expect some bugs!), but I’d love to get feedback — especially from devs interested in browser runtimes or local-first tools. Pyxis aims to be a “1-second-to-open” IDE that runs entirely client-side — no backend, no cloud. It uses OPFS + IndexedDB for persistent storage and runs smoothly even on Safari. Currently supports: - TypeScript / JavaScript / Python - Partial Node.js runtime - Git/GitHub integration (push, pull, clone, even private repos) It’s optimized for low-memory devices, supports GitHub Pages deploys directly, and includes AI-assisted code review and markdown preview. I sometimes use Pyxis itself to work on Pyxis (not fully, but enough to be practical). The goal is to fill the gap between full VS Code and just editing text in a browser tab. *Repo:* https://ift.tt/azxoWm9 *Demo:* https://stasshe.github.io/Pyxis-CodeCanvas/ Would love to hear what you think — what’s broken, missing, or surprising! https://ift.tt/azxoWm9 October 25, 2025 at 11:28PM

Friday, October 24, 2025

Show HN: Understanding LLM fundamentals without frameworks https://ift.tt/OlLPcnM

Show HN: Understanding LLM fundamentals without frameworks I was using LLM frameworks everywhere but had no idea what was happening inside them. One day I needed to optimize something and realized I couldn't. Hard truth: I didn't understand the fundamentals, just which framework function to call. So I stripped everything away. No abstractions. Just Python, HTTP requests, and the OpenAI/Anthropic APIs. What I found was anticlimactic in the best way: there's almost nothing there. - "AI agents" are just functions the model tells you to call - "Memory" is literally just a list you append to and send back - "RAG" is search, concatenate to prompt, send it off - "Multi-agent systems" are just API calls in sequence It all clicked after that. Not because the patterns are hard. They're not. In fact, they're trivial. They're just buried under layers of abstraction that make them seem hard. I created 7 modules showing the basics: API calls, conversation state, tool calling, RAG, streaming, prompt chaining. Each one is heavily commented, nothing fancy. Side-by-side examples for Claude and GPT so you can see they're fundamentally the same thing. Now when I use frameworks, I actually know if I need them or if I'm just adding bloat. Repo: https://ift.tt/5RU73Dm https://ift.tt/5RU73Dm October 25, 2025 at 12:29AM
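To make the "memory is just a list" point concrete, here is a minimal sketch using nothing but the standard library. The endpoint and response shape follow OpenAI's chat completions API; the model name is a placeholder:

```python
import json
import urllib.request

def build_payload(history, user_msg, model="gpt-4o-mini"):
    """'Memory' really is a list: append the new turn, send the whole thing back."""
    history.append({"role": "user", "content": user_msg})
    return {"model": model, "messages": history}

def chat(history, user_msg, api_key):
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_payload(history, user_msg)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})  # remember the answer
    return reply
```

Every framework "memory" feature ultimately boils down to managing that `history` list before it gets serialized into the request.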

Show HN: AgentML – Deterministic AI Agents (MIT, Alpha) https://ift.tt/0U4Idpq

Show HN: AgentML – Deterministic AI Agents (MIT, Alpha) https://ift.tt/WJSqEwm October 24, 2025 at 11:03PM

Show HN: I might have invented a new style of puzzle https://ift.tt/TLfNb7a

Show HN: I might have invented a new style of puzzle https://ift.tt/ZTHoIkV October 24, 2025 at 10:54PM

Show HN: A high-performance Rust-based MCP server https://ift.tt/yHJF6PM

Show HN: A high-performance Rust-based MCP server Many projects I've had involve spreadsheets in various forms. Agents are good at navigating through them but often do so via ad-hoc Python scripts, which are both clunky and have poor performance characteristics. So I built a high-performance, Rust-based MCP server for working with spreadsheets. It includes both basic spreadsheet functionality like `list_sheets` and `sheet_page` as well as useful features like recursive formula precedent/dependent tracing. https://ift.tt/cXirJbU October 24, 2025 at 10:53PM
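Recursive precedent/dependent tracing reduces to graph traversal over cell references. A toy sketch of the idea (a real implementation would parse the references out of the formulas themselves; here the reference graph is given directly):

```python
def precedents(cell, refs):
    """All cells a formula ultimately depends on, found recursively."""
    seen = set()
    def walk(c):
        for p in refs.get(c, []):
            if p not in seen:
                seen.add(p)
                walk(p)
    walk(cell)
    return seen

def dependents(cell, refs):
    """All cells ultimately affected by `cell`: trace precedents on the inverted graph."""
    inverse = {}
    for c, ps in refs.items():
        for p in ps:
            inverse.setdefault(p, []).append(c)
    return precedents(cell, inverse)

# D1 depends on C1, which depends on A1 and B1:
REFS = {"D1": ["C1"], "C1": ["A1", "B1"], "A1": [], "B1": []}
```

Inverting the graph once and reusing the same traversal for both directions is the kind of thing that stays fast in Rust even on large workbooks.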

Thursday, October 23, 2025

Show HN: ScreenAsk – Free Screen Recording Links for Customer Support https://ift.tt/aZJ5MdK

Show HN: ScreenAsk – Free Screen Recording Links for Customer Support Hey HN, My name is Brett and I'm excited to share ScreenAsk with you today! ScreenAsk makes it easy to collect screen recordings from your customers by simply sending a link. At my SaaS company, we spend hours every week teaching customers to record their screen to show us support issues. They have to sign up for a new tool, download and install it, create the recording, and then upload it somewhere to send it over… Most of the time spent on support wasn't fixing the issue but rather understanding it. I built ScreenAsk to solve this exact problem, making it simple to see what your customers see: - Send over your recording link - They follow a few easy steps to record their screen - You instantly see what they’re experiencing No sign up, no installing extra software, and no uploading to another service just to share. You can get notified via Email + Slack + Zapier + Webhooks when somebody records, and recordings include transcription + AI summaries for quick scanning. We also offer a widget that can be embedded in your site and is fully customizable + controllable with javascript. - Show / hide it when you want - Change colors and language - Listen for a recording and populate a form field with the viewing link - Add metadata like name, email, ID to the recording - Capture network and console I’d be grateful if you gave it a spin! You get 10 free recordings per month and a personalized recording link: https://screenask.com Launch tweet + demos + discussion: https://ift.tt/oFeQczm https://screenask.com October 23, 2025 at 11:59PM

Show HN: Coyote – Wildly Real-Time AI https://ift.tt/VvUXh7I

Show HN: Coyote – Wildly Real-Time AI Hey all, we just shipped Coyote. It's an AI assistant, but built different — everything runs async and feels way more natural. You text it, it handles work in the background, and you can keep talking to it. No more stop button. Instead of creating another app, we put it in WhatsApp (iMessage coming soon) so you can just text it for free and get the feeling. The core idea: most AI assistants make you sit there waiting for an answer. Coyote's like texting a friend: you ask it to grab something for you, it says "on it," and you just keep chatting while it's out getting it. No awkward silence, no being stuck. Built it to handle real tasks — emails, calendar stuff, research, whatever. All non-blocking. Everything happens concurrently, so you're never left hanging. We're still early, but it's live and working. We've also worked hard to make it snappy and friendly. Try it out; happy to answer questions, and we'd love your feedback! Thanks! https://getcoyote.app October 23, 2025 at 11:38PM

Show HN: hist: An overengineered solution to `sort|uniq -c` with 25x throughput https://ift.tt/I4KELky

Show HN: hist: An overengineered solution to `sort|uniq -c` with 25x throughput Was sitting around in meetings yesterday and remembered an old shell script I had to count the number of unique lines in a file. Gave it a shot in Rust and, with a little bit of (over-engineering)™, I managed to get 25x the throughput of the naive coreutils approach, as well as improve on some existing tools. Some notes on the improvements: 1. using csv (serde) for writing leads to some big gains 2. arena allocation of incoming keys + storing references in the hashmap instead of owned values heavily reduced the number of allocations and improved cache efficiency (I'm guessing, I did not measure). There are some regex functionalities and some table filtering built in as well. Happy hacking! https://ift.tt/gVxXEIA October 23, 2025 at 11:26PM
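For reference, the baseline being beaten computes something like this Python sketch (count identical lines, order by frequency); the Rust speedups come from the arena-and-references trick described above, which this toy does not attempt:

```python
from collections import Counter

# Baseline semantics of `sort | uniq -c`: count identical lines.
# The Rust version speeds this up by arena-allocating each distinct
# line once and keying the hashmap on references into the arena.
def hist(lines):
    return sorted(Counter(lines).items(), key=lambda kv: -kv[1])

for line, n in hist(["a", "b", "a", "a", "c", "b"]):
    print(n, line)
```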

Show HN: 401K Traditional vs. Roth Calculator https://ift.tt/RjUab2g

Show HN: 401K Traditional vs. Roth Calculator Hi everyone! I built a 401(k) Traditional vs. Roth calculator in Cursor to give a quick estimate of how your investments might grow and which option could work better for you. I’d love your thoughts and suggestions on how to improve it and make it easier for more people to use. https://401k.pages.dev/ October 23, 2025 at 11:21PM
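A back-of-envelope sketch of the math such a calculator automates, under simplified assumptions (flat tax rates, identical growth, no employer match or contribution-limit effects; illustrative numbers, not tax advice):

```python
# Traditional: contribute pre-tax, pay tax at retirement.
# Roth: pay tax now, withdraw tax-free.
# With contribution c and growth factor g, the comparison reduces
# to comparing the tax rate now vs. the rate at retirement.
def traditional(c, g, t_ret):
    return c * g * (1 - t_ret)

def roth(c, g, t_now):
    return c * (1 - t_now) * g

c, g = 10_000, 1.07 ** 30        # 30 years at 7% (assumed inputs)
print(round(traditional(c, g, 0.22)))
print(round(roth(c, g, 0.24)))
```

With equal tax rates the two come out identical; the interesting inputs are your expected rates now vs. in retirement.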

Wednesday, October 22, 2025

Show HN: Create interactive diagrams with pop-up content https://ift.tt/gKcXDkS

Show HN: Create interactive diagrams with pop-up content This is a recent addition to Vexlio which I think the HN crowd may find interesting or useful. TL;DR: easy creation of interactive diagrams, meaning diagrams that have mouse click/hover hooks that you can use to display pop-up content. The end result can be shared with a no-sign-in-required web link. My thought is that this is useful for system docs, onboarding or user guides, presentations, etc. Anything where there is a high-level view that should remain uncluttered + important metadata or details that still need to be available somewhere. You can try it out without signing up for anything, just launch the app here ( https://app.vexlio.com/ ), create a shape, select it with the main pointer tool and then click "Add popup" on the context toolbar. I'd be grateful for any and all feedback! https://ift.tt/8T6hObz October 22, 2025 at 09:45PM

Show HN: RuleHunt – TikTok for Cellular Automata https://ift.tt/TYfrSZF

Show HN: RuleHunt – TikTok for Cellular Automata We built RuleHunt to search for interesting cellular automata rules using TikTok-style engagement monitoring. The first rule shown is Conway’s Game of Life. As you scroll, you will see other random rules – the search space is 2^512. Help us find good rule heuristics by starring the ones you like! On mobile it's a TikTok-like scrolling interface; on desktop it's an interface for targeted rule searching. Starred rules go to a global leaderboard. GitHub repo: https://ift.tt/l24ZzoK https://rulehunt.org October 22, 2025 at 10:48PM
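Where 2^512 comes from: a binary rule over a 3x3 neighborhood is a lookup table with 2^9 = 512 entries, and each entry can be 0 or 1. A sketch with Conway's Life expressed as one such table (toroidal grid for simplicity; not RuleHunt's code):

```python
# A rule maps each of the 512 possible 3x3 binary neighborhoods to a
# new center state, so there are 2^512 rules. Life is one of them.
def life_rule(bits):            # bits: 9-tuple, bits[4] is the center
    n = sum(bits) - bits[4]     # count live neighbors
    return 1 if n == 3 or (bits[4] and n == 2) else 0

def make_table(rule_fn):
    # Enumerate all 512 neighborhoods; any 512-bit string defines a rule.
    table = {}
    for i in range(512):
        bits = tuple((i >> k) & 1 for k in range(9))
        table[bits] = rule_fn(bits)
    return table

def step(grid, table):
    h, w = len(grid), len(grid[0])
    def nb(y, x):  # 3x3 neighborhood with wrap-around
        return tuple(grid[(y + dy) % h][(x + dx) % w]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return [[table[nb(y, x)] for x in range(w)] for y in range(h)]
```

Swapping in a random 512-entry table is what scrolling to the next rule amounts to.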

Tuesday, October 21, 2025

Show HN: bbcli – A TUI and CLI to browse BBC News like a hacker https://ift.tt/KghdiCY

Show HN: bbcli – A TUI and CLI to browse BBC News like a hacker hey hn! I (re)built this TUI tool for browsing BBC News in the terminal, it uses an RSS feed for getting headlines and previews and you can read articles too. Try it out and let me know what you think! :) https://ift.tt/I39P4Qw October 19, 2025 at 05:58PM

Show HN: A to Do List That Helps You Forget https://ift.tt/tG3di6K

Show HN: A to Do List That Helps You Forget https://adamjgrant.github.io/tides-over-sand/ October 22, 2025 at 12:12AM

Show HN: FastQR – A Fast QRCode Generator Supporting Batch Processing https://ift.tt/8lwxHk2

Show HN: FastQR – A Fast QRCode Generator Supporting Batch Processing I'd like to share FastQR ( https://ift.tt/8O6yRKX ), a high-performance QR code generator written in C++. This is my first open-source project, and I'm excited (and a bit nervous!) to share it with you all. What it is: - A fast CLI tool and library for generating QR codes - Written in C++ with bindings for Ruby, PHP, and Node.js - Full UTF-8 support (works great with Vietnamese, Japanese, and other languages) - Supports custom colors, logo embedding, and precise size control - Pre-built binaries included – no need to install dependencies separately Why I built this: - I needed a QR code generator that was fast, supported UTF-8 properly (especially for Vietnamese text), and could be easily integrated into different languages. Most existing solutions were either slow or had poor Unicode support. Performance: - Generating 1000 QR codes (500x500px): ~0.37 seconds Tech stack: - C++ core using libqrencode and libpng - Language bindings for Ruby, PHP, and Node.js - Precompiled binaries for easy installation This is my very first open-source project, so I'm sure there are things that could be improved or bugs I haven't caught yet. I'd really appreciate it if you could try it out and share your feedback. If you find any issues or have suggestions, please open an issue on GitHub – I'll do my best to fix them quickly. Any feedback, criticism, or suggestions would be greatly appreciated. Thanks for taking the time to check it out! GitHub: https://ift.tt/8O6yRKX October 21, 2025 at 10:59PM

Monday, October 20, 2025

Show HN: I created a cross-platform GUI for the JJ VCS (Git compatible) https://ift.tt/gmDQ1JH

Show HN: I created a cross-platform GUI for the JJ VCS (Git compatible) Personally, I think the JJ VCS ( https://ift.tt/kutZFNd ) hit a point some time in this past year where I find it hard to find a great reason to continue using git. Over the years I've cobbled together aliases and bash functions to try to improve my git workflow, but after using jj, which works with ~any git repo and integrates great with GitHub repos, all of the workflow issues I ran into with git are not only solved, but improved in ways I couldn't manage with simple scripts. One example is the op log, which lets you go back to any point in your repo's history and provides simple undo and redo commands when you want to back out of a merge, didn't mean to rebase, etc. Because I have a pretty strong conviction that JJ is at this point a cleaner and more powerful version of git, my hope is that it continues to grow. With that, it seemed a proper full-featured GUI was missing for the VCS. There are some plugins that add integration into VS Code, and there's one in the works for IntelliJ support, but many of the constructs JJ provides, in my opinion, necessitate a ground-up build of a GUI around how JJ works. Right now, Judo for JJ is an MVP in an open beta. I did my best to support all of the core functionality one would need, though there are many nice-to-haves that I am going to add, like native merge support, native splitting, etc. Most of this will be based on feedback from the beta. I'm really grateful for the great community JJ has built, alongside the HN community itself in the countless VCS-based posts I've read over the years, and am hoping for lots of input here during the beta under real usage - the goal is to be a full-featured desktop GUI for the VCS, similar to many of the great products out there for git. https://judojj.com October 20, 2025 at 10:35PM

Show HN: NativeBlend – Text to fully editable 3D Models that don't suck https://ift.tt/nw67oY8

Show HN: NativeBlend – Text to fully editable 3D Models that don't suck I'm a developer (not a 3D artist) who's been frustrated with current AI text-to-3D tools — most produce messy, monolithic meshes that are unusable without hours of cleanup. So I built NativeBlend, a side project aimed at generating editable 3D assets that actually fit into a real workflow. Key features: - Semantic Part Segmentation: Outputs separate, meaningful components (e.g., wheels, doors), not just a single mesh blob. - Native Blender Output: Generates clean, structured .blend files with proper hierarchies, editable PBR materials, and decent UVs — no FBX/GLB cleanup required. The goal is to give devs a usable starting point for game assets without the usual AI slop. I have a working demo and would love feedback: Does this solve a real need, or am I just scratching my own itch? Thanks for taking a look! https://native-blend-app.vercel.app/ October 21, 2025 at 01:57AM

Show HN: Visual autocomplete for drawings (real-time Human-AI interaction) https://news.ycombinator.com/item?id=45645528

Show HN: Visual autocomplete for drawings (real-time Human-AI interaction) I've been interested in real-time Human-AI interaction for a while. This project is a prototype closed-loop drawing system, like "visual autocomplete" for drawings. The idea is that the user just draws along with the AI, without disrupting the flow through manual text prompting. It works by AI continually observing and responding to live drawing on a canvas. A vision model (using Ollama) interprets what it sees, and that description drives real-time image generation (StreamDiffusion). For real-time performance, this project is built in C++ and Python, leveraging the GPU for Spout-based texture sharing with minimal overhead. Reusable components include: - StreamDiffusionSpoutServer: lightweight Python server for real-time image generation with StreamDiffusion. Designed for interfacing with any Spout-compatible software and uses OSC for instructions. - OllamaClient: minimal C++ library for interfacing with Ollama vision language models. Includes implementations for openFrameworks and Cinder. The "visual autocomplete" concept has been explored in recent papers (e.g., arxiv.org/abs/2508.19254, arxiv.org/abs/2411.17673). Hopefully, these open source components can help accelerate others experimenting and advancing this direction! https://github.com/olwal/AiDrawing October 20, 2025 at 11:09PM

Sunday, October 19, 2025

Show HN: Moonfish – AI podcast generator with research, writing, and voicing https://ift.tt/a1cFXS2

Show HN: Moonfish – AI podcast generator with research, writing, and voicing I built Moonfish because I have a long commute and kept wanting podcasts on niche topics that don't exist. It's like a combination of OpenAI's deep research and Google's NotebookLM – it searches the web for sources, synthesizes the information, and creates a conversational podcast with two AI hosts. It's very steerable. You create a show first, then add episodes to it. Set the tone at the show level ("explain like I'm a beginner" or "create the podcast in xxx language"), then prompt individual episodes. Episode creation takes around 3-5 minutes, and episode length is about 15 minutes right now (I'm working on extending that, hopefully to an hour :) ). Under the hood, it comprises three main agents: one searches and gathers sources, another structures the narrative, and a third writes natural dialogue. The architecture is simple but very effective, and it scales with new model releases. iOS app: https://ift.tt/6KWD1ky Would love to hear your feedback! https://ift.tt/6KWD1ky October 19, 2025 at 11:06PM

Show HN: Photerra – One app to discover hidden gems, plan with friends, and book https://ift.tt/HyA29RZ

Show HN: Photerra – One app to discover hidden gems, plan with friends, and book Hey HN — I'm David, and I built Photerra to solve a problem I kept running into: planning trips meant juggling dozens of browser tabs, Google Sheets, and the same recycled "top 10" lists everyone else sees. Photerra turns geolocated photos into map spots you can organize into trips, share with friends, and book from — all in one flow. The core idea: photos with GPS → actual spots on a map → drag into trip days → share → book. What makes Photerra different: • Real locations, not just POIs — Your photos have EXIF GPS data, so you're adding exact spots (that actual spot on the trail, not just “Yosemite” - no address needed) • End-to-end flow — Discover → plan → coordinate → book, without switching between 5 apps • Photo-grounded data — Community spots come from real photos, not scraped listicles, so you find more off-path places • Works for everyday wandering — Not just big trips. Save local spots and open them in Maps or Uber with one tap Try it: iOS and Android apps are live (links in comments). I've seeded content in SF, Portland, LA, San Diego, Hawaii, Philly, Yosemite, and Mexico City. Tech: React Native + RN-Maps on mobile; NestJS + TypeORM/MySQL + AWS on backend. What I'd love feedback on: • Is the photo→spot→trip flow intuitive on first use? • What's missing to make this truly start-to-finish for your trips? • Any friction in auth, maps, or sharing? Be blunt — it's helpful. Happy to answer questions! — David (solo, first-time founder) https://ift.tt/5S3PUEt October 19, 2025 at 11:53PM

Show HN: WP-Easy, framework to build WordPress themes https://ift.tt/u79yk3b

Show HN: WP-Easy, framework to build WordPress themes The inspiration for this framework came from my brother, an amazing graphic designer who wanted to build WordPress themes using only his FTP-based code editor. He knows HTML and CSS really well, and some jQuery, but not modern JavaScript. In my experience, this is common for people whose jobs are tangential to frontend web development... designers, copywriters, project managers, and backend engineers. So this is for people who don't want to deal with the mess of modern build tools. It tries to nudge people in a more modern direction: component-based architecture, JS modules, SCSS, and template routing. WP-Easy lets people like my brother build professional, modern themes without the usual barriers: just code with your favorite editor and see the results instantly. Key features: 1. File-based routing - Define routes in router.php with Express-like syntax (/work/:slug/) 2. Single File Components - PHP templates with
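The Express-like route syntax in feature 1 can be illustrated with a toy matcher (Python here for brevity; WP-Easy itself is PHP, and this is not its code):

```python
import re

# Toy Express-style matcher for patterns like `/work/:slug/`:
# each `:name` segment becomes a named capture group.
def compile_route(pattern):
    regex = re.sub(r":(\w+)", r"(?P<\1>[^/]+)", pattern)
    return re.compile("^" + regex + "$")

route = compile_route("/work/:slug/")
m = route.match("/work/my-project/")
print(m.group("slug"))  # my-project
```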

Saturday, October 18, 2025

Show HN: Odyis: lunar lander (1979) clone written in Rust https://ift.tt/GeKSf7X

Show HN: Odyis: lunar lander (1979) clone written in Rust Moin, to learn Rust I decided to create a simple clone of the original lunar lander game. I would love to hear feedback on the quality of the code! https://ift.tt/uF0vNGk October 19, 2025 at 01:57AM

Show HN: Open-source implementation of Stanford's self-learning agent framework https://ift.tt/Wmqj1uF

Show HN: Open-source implementation of Stanford's self-learning agent framework We implemented Stanford's Agentic Context Engineering paper, which shows that agents can improve their performance just by evolving their own context. How it works: agents execute tasks, reflect on what worked/failed, and curate a "playbook" of strategies. All from execution feedback - no training data needed. Happy to answer questions about the implementation or the research! https://ift.tt/DzKw4LU October 18, 2025 at 10:09PM
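A toy rendition of the execute-reflect-curate loop, with invented tasks and a filter standing in for the LLM reflection step (not the paper's or the repo's actual code):

```python
# Toy execute -> reflect -> curate loop: try each strategy in the
# playbook, record what worked, and prune losers. A real system would
# have an LLM write natural-language lessons instead of just filtering.
def run_episode(strategy, task):
    return strategy(task)  # True on success, False on failure

def curate(playbook, task):
    results = {name: run_episode(fn, task) for name, fn in playbook.items()}
    return {name: playbook[name] for name, ok in results.items() if ok}

playbook = {
    "reverse": lambda s: s[::-1] == "cba",   # invented strategies
    "upper":   lambda s: s.upper() == "cba",
}
print(list(curate(playbook, "abc")))  # ['reverse']
```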

Friday, October 17, 2025

Show HN: I turned my resume into a catchy song. It's a game changer https://ift.tt/Egs3Cqo

Show HN: I turned my resume into a catchy song. It's a game changer I turned my resume into a catchy pop song. Thought you'd all appreciate it. Worked directly on the Song Style prompt, which you can duplicate for your own fun catchy resume song. Just replace the lyrics! https://ift.tt/h0oMlUa October 18, 2025 at 03:52AM

Show HN: We packaged an MCP server inside Chromium https://ift.tt/DiQylEs

Show HN: We packaged an MCP server inside Chromium Hey HN, we just shipped a browser with an inbuilt MCP server! We're a YC startup (S24) building BrowserOS — an open‑source Chromium fork. We're a privacy‑first alternative to the new wave of AI browsers like Dia and Perplexity Comet. Since launching ~3 months ago, the #1 request has been to expose our browser as an MCP server. -- Google beat us to launch with chrome-devtools-mcp (solid product btw), which lets you build/debug web apps by connecting Chrome to coding assistants. But we wanted to take this a step further: we packaged the MCP server directly into our browser binary. That gives three advantages: 1. MCP server setup is super simple — no npx install, no starting Chrome with CDP flags; you just download the BrowserOS binary. 2. With our browser's inbuilt MCP server, AI agents can interact using your logged‑in sessions (unlike chrome-devtools-mcp, which starts a fresh headless instance each time). 3. Our MCP server also exposes new APIs from Chromium's C++ core to click, type, and draw bounding boxes on a webpage. Our APIs are also not CDP-based (Chrome DevTools Protocol) and are robust against anti-bot detection. -- A few example use cases for BrowserOS-mcp: a) *Frontend development with Claude Code*: instead of screenshot‑pasting, claude-code gets WYSIWYG access. It can write code, take a screenshot, check console logs, and fix issues in one agentic sweep. Since it has your sessions, it can do QA stuff like "test the auth flow with my Google Sign‑In." Here's a video of claude-code using BrowserOS to improve the CSS styling with back-and-forth checking: https://youtu.be/vcSxzIIkg_0 b) *Use as an agentic browser:* You can install BrowserOS-mcp in claude-code or Claude Desktop and do things like form-filling, extraction, multi-step agentic tasks, etc. It honestly works better than Perplexity Comet!
Here's a video of claude-code opening top 5 hacker news posts and summarizing: https://youtu.be/rPFx_Btajj0 -- *How we packaged MCP server inside Chromium binary*: We package the server as a Bun binary and expose MCP tools over HTTP instead of stdio (to support multiple sessions). And we have a BrowserOS controller installed as an extension at the application layer which the MCP server connects to over WebSocket to control the browser. Here's a rough architecture diagram: https://dub.sh/browseros-mcp-diag -- *How to install and use it:* We put together a short guide here: https://ift.tt/q0kguFK Our vision is to reimagine the browser as an operating system for AI agents, and packaging an MCP server directly into it is a big unlock for that! I'll be hanging around all day, would love to get your feedback and answer any questions! https://ift.tt/PXSwRiV October 17, 2025 at 11:22PM

Show HN: Stop Chasing Success: Write for Wonder Instead https://ift.tt/MSCxmzQ

Show HN: Stop Chasing Success: Write for Wonder Instead Why novels are an ideal project for bringing wonder into your life https://ift.tt/WmKBcYv October 18, 2025 at 12:46AM

Show HN: LLM In-Browser Fuzzer Finds Hidden Prompt Injection in AI Browsers https://ift.tt/4TFRdcL

Show HN: LLM In-Browser Fuzzer Finds Hidden Prompt Injection in AI Browsers We built an in-browser, LLM-guided fuzzer to automatically discover hidden prompt injection vulnerabilities in AI-powered browser assistants (often called agentic AI browsers). These are browser-based AI agents that can read and interact with web pages on a user's behalf (e.g. summarizing pages or clicking links). The problem is that malicious instructions can be embedded in a webpage's content (even invisibly) and trick the agent into doing unintended actions. For example, a recent exploit in Perplexity’s AI Browser Comet showed that hidden prompts in a Reddit post could make the assistant exfiltrate the user’s private data and perform unauthorized actions across other sites. Such attacks bypass traditional web security boundaries like same-origin policy, because the AI agent has the user’s privileges on all sites – an attacker could potentially read emails, steal auth tokens, or click dangerous links without needing any browser bug. The AI simply obeys the hidden instructions as if they were the user’s, which is a serious new threat. To systematically uncover these vulnerabilities, we developed a fuzzing framework that runs entirely inside a real browser. Each test case is an actual webpage (loaded in an isolated tab) so the agent perceives it just like a normal user-opened page, with full DOM and content. An LLM (like GPT-4) is used to generate diverse malicious page contents – starting from some known prompt injection patterns and then mutating them or creating new variants. The browser is instrumented to detect when the AI agent misbehaves (e.g. clicks a hidden phishing link or follows a concealed instruction), and this real-time feedback is fed back into the fuzzer to guide the next round of attacks. In essence, the LLM fuzzer acts as an adaptive adversary: after each failed attempt it “learns” and evolves more sophisticated prompt injections to try on the next iteration. 
This closed-loop approach gives high-fidelity results and virtually zero false positives, since we only count an attack as successful if the agent actually performs an unwanted action in the browser. By doing all of this within a live browser environment, we can observe the agent under realistic conditions and quickly home in on exploits that truly work in practice. https://browsertotal.com/demos/agentic-browser-fuzzer October 17, 2025 at 11:03PM
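The feedback loop described above can be sketched as a plain generational fuzzer; the mutation list and the misbehavior oracle below are stand-ins for the LLM mutator and the instrumented browser, and the strings are invented:

```python
# Generational fuzz loop: mutate candidate injections, "run" them
# against the agent, and feed successes back in as seeds.
SEEDS = ["ignore previous instructions"]
SUFFIXES = [" and stay silent", " and click the link", " as the user"]

def agent_misbehaves(page_text):
    # Stand-in oracle: the real system watches for actual unwanted
    # actions (clicks, navigations) performed by the agent.
    return "click the link" in page_text

def fuzz(rounds=2):
    corpus, hits = list(SEEDS), []
    for _ in range(rounds):
        next_corpus = list(corpus)
        for seed in corpus:
            for suffix in SUFFIXES:        # LLM mutation, stubbed
                candidate = seed + suffix
                if agent_misbehaves(candidate):
                    hits.append(candidate)
                    next_corpus.append(candidate)  # success seeds next round
        corpus = next_corpus
    return hits
```

Counting only observed misbehavior as a hit is what keeps the false-positive rate near zero.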

Thursday, October 16, 2025

Show HN: Inkeep (YC W23) – Agent builder that works both visually and in code https://ift.tt/5nlhdOP

Show HN: Inkeep (YC W23) – Agent builder that works both visually and in code Hi HN! I'm Nick from Inkeep. We built an agent builder with true 2-way sync between code and a drag-and-drop visual editor, so devs and non-devs can collaborate on the same agents. Here’s a demo video: https://ift.tt/PJawXpk . As a developer, the flow is: 1) Build AI Chat Assistants or AI Workflows with the TypeScript SDK 2) Run `inkeep push` from your CLI to publish 3) Edit agents in the visual builder (or hand off to non-technical teams) 4) Run `inkeep pull` to edit in code again. We built this because we wanted the accessibility of no-code workflow builders (n8n, Zapier), but the flexibility and devex of code-based agent frameworks (LangGraph, Mastra). We also wanted first-class support for chat assistants with interactive UIs, not just workflows. OpenAI got close, but you can only do a one-time export from the visual builder to code, and there’s vendor lock-in. How I've used it: I bootstrapped a few agents for our marketing and sales teams, then was able to hand off so they can maintain and create their own agents. This has enabled us to adopt agents across technical and non-technical roles in our company on a single platform. To try it, here’s the quickstart: https://ift.tt/q2AQZLi . We leaned on open protocols to make it easy to use agents anywhere: an MCP endpoint, so agents can be used from Cursor/Claude/ChatGPT; a Chat UI library with interactive elements you can customize in React; an API endpoint compatible with the Vercel AI SDK `useChat` hook; and support for Agent2Agent (A2A) so they work with other agent ecosystems. We made some practical templates like customer_support, deep_research, and docs_assistant. Deployment is easy with Vercel/Docker with a fair-code license, and there's a traces UI and OTEL logs for observability. Under the hood, we went all-in on a multi-agent architecture. Agents are made up of LLMs, MCPs, and agent-to-agent relationships.
We’ve found this approach to be easier to maintain and more flexible than traditional “if/else” approaches for complex workflows. The interoperability works because the SDK and visual builder share a common underlying representation, and the Inkeep CLI bridges it with a mix of LLMs and TypeScript syntactic sugar. Details in our docs: https://docs.inkeep.com . We’re open to ideas and contributions! And would love to hear about your experience building agents - what works, hasn’t worked, what’s promising? https://ift.tt/ZdtAx5m October 16, 2025 at 07:50PM

Show HN: Coordable – Get better geocoding results with AI cleaning and analytics https://ift.tt/OP3bk2v

Show HN: Coordable – Get better geocoding results with AI cleaning and analytics I’ve been working on a tool called Coordable, which helps analyze and improve geocoding results. If you’ve ever dealt with geocoding at scale, you’ve probably hit two recurring problems: Garbage in = garbage out. Addresses are often messy (“2nd floor”, “/”, abbreviations, multiple addresses in one line…). Most geocoders will fail or return incorrect matches if the input isn’t perfectly normalized. A result isn’t always a correct result. Many providers return something even if it’s wrong — e.g. shifting a house number, or confusing similar street names. Assessing whether a geocoded result is actually right is surprisingly hard to automate. Coordable tries to address both issues with AI and analytics: Uses an LLM-based cleaner to normalize messy addresses (multi-country support). Automatically evaluates geocoding accuracy by comparing input and output like a human would. Lets you benchmark multiple providers (Google, HERE, Mapbox, Census, BAN, etc.) side by side. Includes a dashboard to visualize results, quality metrics, and exports. It’s not a new geocoder — it wraps existing APIs and focuses on data quality, comparison, and automation. It’s currently in beta with free credits. If you work with geocoding or address data, I’d love to hear how you handle these challenges and what kind of analytics would be most useful to you. https://coordable.co/ October 17, 2025 at 12:41AM
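A rule-based stand-in for the cleaning step described above (the real product uses an LLM; the rules and addresses below are purely illustrative, not Coordable's normalization):

```python
import re

# Drop unit/floor noise and expand common abbreviations before
# geocoding, so "garbage in" becomes something a geocoder can match.
ABBREV = {"st": "Street", "ave": "Avenue", "blvd": "Boulevard"}
NOISE = re.compile(r"\b(\d+(st|nd|rd|th) floor|apt\.? ?\w+|suite ?\w+)\b",
                   re.I)

def clean(address):
    addr = NOISE.sub("", address)
    words = [ABBREV.get(w.lower().rstrip("."), w)
             for w in addr.replace(",", " ").split()]
    return " ".join(words)

print(clean("123 Main St, 2nd floor"))  # 123 Main Street
```

An LLM cleaner earns its keep on exactly the inputs this breaks on: multiple addresses in one line, free-text directions, multilingual abbreviations.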

Show HN: How Useless Are You? A brutally honest skills check https://ift.tt/0zhyQGm

Show HN: How Useless Are You? A brutally honest skills check We built this to answer "am I a fit for this role?" after noticing how hard it is to get honest feedback when applying to a YC startup or something else entirely. It's a custom 5-minute challenge that roasts you after. Added a leaderboard for those who want to see how they stack up. Roast us below. https://ift.tt/B7QCSIh October 17, 2025 at 12:34AM

Wednesday, October 15, 2025

Show HN: Shorter – search for shorter versions of your domain https://ift.tt/K4zY6Ot

Show HN: Shorter – search for shorter versions of your domain https://shorter.dev October 16, 2025 at 08:59AM

Show HN: Specific (YC F25) – Build backends with specifications instead of code https://ift.tt/PIqk7v2

Show HN: Specific (YC F25) – Build backends with specifications instead of code Hi folks! Iman and I (Fabian) have been building Specific for a while now and are finally opening up our public beta. Specific is a platform for building backend APIs and services entirely through natural-language specifications and tests, without writing code. We then automatically turn your specs into a working system and deploy it for you, along with any infrastructure needed. We know a lot of developers who have already adopted spec-driven development to focus on high-level design and let coding agents take care of implementation. We are attempting to take this even further by making the specs themselves the source of truth. Of course, we can’t blindly trust coding agents to follow the spec, so we also support adding tests that will run to ensure the system behaves as expected and to avoid regressions. There is so much ground to cover, so we are focusing on a smaller set of initial features that in our experience should cover a large portion of backends: - An HTTP server for each project. Authentication can be added by simply stating in the spec how you want to protect your endpoint. - A database automatically spun up and schema configured if the spec indicates persistence is needed. - External APIs can be called. You can even link out to API docs in your specs. You currently can’t see the generated code, but we are working on enabling it. Of course, we don’t claim any ownership of the generated code and will gladly let you export it and continue building elsewhere. Specific is free to try and we are really eager to hear your feedback on it! Try it here: https://ift.tt/9KskloF https://specific.dev/ October 16, 2025 at 12:21AM

Show HN: Pxxl App – A Nigerian Alternative to Vercel, Render, and Netlify https://ift.tt/G3iYL9s

Show HN: Pxxl App – A Nigerian Alternative to Vercel, Render, and Netlify Hi HN, I built Pxxl App — a free web hosting and deployment platform for developers in Nigeria and beyond. It’s a Nigerian alternative to Vercel, Render, and Netlify, designed for those who want a simple, fast, and barrier-free way to host both frontend and backend apps. With Pxxl App, you can connect your Git repo and deploy in seconds — no credit card, no limits. You’ll get a live subdomain like yourapp.pxxl.pro, automatic builds, and continuous deployment. It supports: • Frontend frameworks: React, Next.js, Vue, Svelte, and more • Backend projects: Node.js, PHP, and Python • Features like environment variables, CI/CD, and instant rollback The goal is to make cloud deployment accessible to African and global developers without the typical payment or region restrictions. It’s completely free to start, and I’d love to hear feedback from the HN community on how to improve it or what features you’d want next. Check it out: https://pxxl.app https://pxxl.app October 16, 2025 at 01:25AM

Show HN: Cmux – Coding Agent Multiplexer https://ift.tt/tvr4cfz

Show HN: Cmux – Coding Agent Multiplexer HN, I'm stoked to share this product I've been working on non-stop for the past few weeks. It's an immersive GUI experience for working with many coding agents in parallel. The UX should be familiar to Claude Code users, but we took advantage of the GUI nature to add in a bunch more. cmux is early but certainly usable—almost all of our internal cmux development rolls through cmux itself. Please let me know your thoughts and feedback! https://ift.tt/xKbSHPw October 16, 2025 at 12:40AM

Tuesday, October 14, 2025

Show HN: I built a free AI tool that scans and sorts financial news for traders https://ift.tt/GNTWcfI

Show HN: I built a free AI tool that scans and sorts financial news for traders https://www.fxradar.live/ October 15, 2025 at 12:26AM

Show HN: Metorial (YC F25) – Vercel for MCP https://ift.tt/OczNVFl

Show HN: Metorial (YC F25) – Vercel for MCP Hey HN! We're Wen and Tobias, and we're building Metorial ( https://metorial.com ), an integration platform that connects AI agents to external tools and data using MCP. The Problem: While MCP works great locally (e.g., Cursor or Claude Desktop), server-side deployments are painful. Running MCP servers means managing Docker configs, per-user OAuth flows, scaling concurrent sessions, and building observability from scratch. This infrastructure work turns simple integrations into weeks of setup. Metorial handles all of this automatically. We maintain an open catalog of ~600 MCP servers (GitHub, Slack, Google Drive, Salesforce, databases, etc.) that you can deploy in three clicks. You can also bring your own MCP server or fork existing ones. For OAuth, just provide your client ID and secret and we handle the entire flow, including token refresh. Each user then gets an isolated MCP server instance configured with their own OAuth credentials automatically. What makes us different is that our serverless runtime hibernates idle MCP servers and resumes them with sub-second cold starts while preserving the state and connection. Our custom MCP engine is capable of managing thousands of concurrent connections, giving you a scalable service with per-user isolation. Other alternatives either run shared servers (security issues) or provision separate VMs per user (expensive and slow to scale). Our Python and TypeScript SDKs let you connect LLMs to MCP tools in a single function call, abstracting away the protocol complexity. But if you want to dig deep, you can just use standard MCP and our REST API ( https://ift.tt/Z0HUFy4 ) to connect to our platform. You can self-host ( https://ift.tt/51QEmvc ) or use the managed version at https://metorial.com . 
So far, we see enterprise teams use Metorial to have a central integration hub for tools like Salesforce, while startups use it to cut weeks of infra work on their side when building AI agents with integrations. Demo video: https://www.youtube.com/watch?v=07StSRNmJZ8 Our Repos: Metorial: https://ift.tt/51QEmvc , MCP Containers: https://ift.tt/ReKtIYW SDKs: Node/TypeScript: https://ift.tt/YAQreBg , Python: https://ift.tt/UVtjkRS We'd love to hear feedback, especially if you've dealt with deploying MCP at scale! https://ift.tt/51QEmvc October 14, 2025 at 09:49PM
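The hibernate-on-idle, resume-with-state behavior described above can be illustrated with a toy in-memory session manager. Everything here (class names, the timeout policy) is my own sketch for intuition, not Metorial's actual runtime:

```python
import time

class Session:
    """A per-user server session whose state must survive hibernation."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.state = {}          # conversation/tool state to preserve
        self.last_used = time.monotonic()
        self.hibernated = False

class SessionManager:
    """Toy model of hibernate-on-idle with state-preserving resume."""
    def __init__(self, idle_timeout=30.0):
        self.idle_timeout = idle_timeout
        self.sessions = {}

    def get(self, user_id):
        s = self.sessions.setdefault(user_id, Session(user_id))
        if s.hibernated:
            s.hibernated = False  # "cold start": wake up, state still intact
        s.last_used = time.monotonic()
        return s

    def reap_idle(self):
        now = time.monotonic()
        for s in self.sessions.values():
            if not s.hibernated and now - s.last_used > self.idle_timeout:
                s.hibernated = True  # suspend, but keep s.state around
```

The real system additionally has to persist state across processes and restore live MCP connections, which is where the sub-second cold-start engineering comes in.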

Monday, October 13, 2025

Show HN: Make AI text sound human https://ift.tt/ZNkSU4R

Show HN: Make AI text sound human Transform ChatGPT, Claude, and Gemini text into natural, human-like writing with a single click. https://refine.so October 14, 2025 at 04:09AM

Show HN: I wrote a VectorDB in Go, built for Hackers, not Hyperscalers https://ift.tt/nRhmSF7

Show HN: I wrote a VectorDB in Go, built for Hackers, not Hyperscalers https://ift.tt/EbQ9N7w October 13, 2025 at 11:47PM

Show HN: FFTN, faster than FFTW in 700 lines of C https://ift.tt/YmeGgEn

Show HN: FFTN, faster than FFTW in 700 lines of C I am playing around with using arrays of arbitrary dimension as a framework for designing FFT implementations, as opposed to the more classical approach of tensor products and butterflies (too complicated in my opinion). It turns out that, with a modern compiler, you do not need much complexity to make a really fast implementation. This implementation is for powers of 2, and optimized for arrays that do not fit in cache. I do think it would be better to use a higher-level language to implement other cases (e.g. n = 2^a * 3^b * 5^c, multiple small FFTs, higher-dimensional), so I am currently working on getting the SaC-compiler to generate this code. https://ift.tt/n7eDZgz October 13, 2025 at 11:46PM
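For readers curious what a power-of-two FFT looks like underneath, here is a textbook radix-2 Cooley-Tukey recursion in Python. FFTN itself is C and tuned for out-of-cache arrays; this sketch only shows the underlying math, not the project's implementation:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # FFT of even-indexed samples
    odd = fft(x[1::2])           # FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor combines the two half-size transforms.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

An impulse `[1, 0, 0, 0]` transforms to a flat spectrum, and a constant signal concentrates all energy in bin 0, which makes for quick sanity checks.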

Show HN: No-Code REST APIs (and LLM Tools/MCPs) for Postgres https://ift.tt/U6bxqNZ

Show HN: No-Code REST APIs (and LLM Tools/MCPs) for Postgres I am building QueryDeck.io, a no-code way to turn your Postgres into production-ready REST APIs in minutes — and it now also auto-generates LLM tool definitions and MCP servers from your SQL. You can deploy to our cloud or export a Node.js app to run on your own infra. Would love some feedback from the community! https://ift.tt/r1mzjnp October 13, 2025 at 11:26PM
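As a rough illustration of what "Postgres table in, REST API out" means, a generator of this kind maps each table to a set of routes bound to parameterized SQL. The route shapes and SQL below are my guesses for intuition, not QueryDeck's actual output:

```python
def rest_routes(table, pk="id"):
    """Sketch of the CRUD route map a generator could emit for one
    Postgres table (illustrative only, not QueryDeck's real scheme)."""
    base = f"/api/{table}"
    return {
        ("GET", base): f"SELECT * FROM {table}",
        ("GET", f"{base}/:{pk}"): f"SELECT * FROM {table} WHERE {pk} = $1",
        ("DELETE", f"{base}/:{pk}"): f"DELETE FROM {table} WHERE {pk} = $1",
    }
```

The same table metadata could plausibly drive the LLM tool definitions too, since a tool schema is essentially a typed description of these routes.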

Sunday, October 12, 2025

Show HN: I rewrote the express library to rust https://ift.tt/caxNhmw

Show HN: I rewrote the express library to rust https://shyam20001.github.io/rsjs/ October 13, 2025 at 01:35AM

Show HN: I built a simple ambient sound app with no ads or subscriptions https://ift.tt/ME7KxXq

Show HN: I built a simple ambient sound app with no ads or subscriptions I’ve always liked having background noise while working or falling asleep, but I got frustrated that most “white noise” or ambient sound apps are either paywalled, stuffed with ads, or try to upsell subscriptions for basic features. So I made Ambi, a small iOS app with a clean interface and a set of freely available ambient sounds — rain, waves, wind, birds, that sort of thing. You can mix them, adjust volume levels, and just let it play all night or while you work. Everything works offline and there are no hidden catches. It’s something I built for myself first, but I figured others might find it useful too. Feedback, bugs, and suggestions are all welcome. https://ift.tt/cKYte9N... https://ambisounds.app/ October 12, 2025 at 09:49PM

Show HN: Recallie AI – Duolingo for learning anything https://ift.tt/GpaxhME

Show HN: Recallie AI – Duolingo for learning anything I built an app with a Duolingo-style structured learning system where users can generate full courses from photos, documents, or typed input. Each course includes quizzes, flashcards, and notes, and users can also turn their notes into an AI-generated podcast for audio learning. It’s designed to help students and anyone who wants to learn a topic study smarter and retain more through interactive lessons, quizzes, and podcasts tailored to their learning style. Built with React Native Expo. https://youtu.be/jsRk1vN1xNA?si=iMGZFhj_N2iwBp5n https://ift.tt/WrTyXat October 13, 2025 at 12:43AM

Saturday, October 11, 2025

Show HN: Solving the cluster 1 problem with vCluster standalone https://ift.tt/7IATz15

Show HN: Solving the cluster 1 problem with vCluster standalone vcluster is an open source tool for Kubernetes multi-tenancy, and over the years it has matured to offer hosted-control-plane virtual clusters and shared virtual clusters, but the host cluster problem was always there. With vcluster standalone, you can now create the first cluster with the same developer experience and consolidate the multiple-vendor problem. With this, you can use vcluster across the entire multi-tenancy spectrum. Feel free to discuss, happy to answer any questions. https://ift.tt/4Bb2fI6 October 8, 2025 at 11:50PM

Show HN: Sprite Garden - HTML Canvas 2D sandbox and farming https://ift.tt/jfsy32b

Show HN: Sprite Garden - HTML Canvas 2D sandbox and farming Sprite Garden: https://kherrick.github.io/sprite-garden/ A 2D sandbox exploration and farming game that runs entirely in the web browser. As a fully HTML, CSS, and JavaScript game, it is highly readable, hackable, and customizable. Included on "globalThis" is the "spriteGarden" global object with the game config and state readily available. Drawing with tiles is as easy as opening dev tools (use the menu in the browser as keyboard is captured), or entering the "Konami Code," for a full screen view and a map editor. - Share games from the world state manager - Explore unique procedurally generated biomes - Dig for resources like coal, iron, and gold - Use collected materials to place blocks and shape the world - Discover underground cave systems filled with resources - Plant and harvest different crops with "realistic" growth cycles Examples: - Preparing a QR Code to be mined: https://gist.github.com/kherrick/1191ae457e1f6e1a65031c38c2d... - Drawing a heart in the sky: https://gist.github.com/kherrick/3dc9af05bccc126b11cc26fb30a... - Entering the Konami Code (map editor / fullscreen): https://gist.github.com/kherrick/effbe1463d9b78da046f27c5d42... I'm unsure how the game should be taken further, or whether it should progress. Some potential ideas for the future include: - Input Box with JS Execution: Provide a safe, sandboxed input area in the game's UI where players can write small JS functions or scripts (instead of exposing it on globalThis). - API Exposure: Expose a controlled API or object representing game state and functions, like terrain manipulation, crop growth, player movement, to the user script so players can automate or modify behaviors. - Event Hooks: Allow players to register hooks into game events (e.g., world update, planting crops) where their custom code runs, enabling mods or custom automation. - Multiplayer: Use WebRTC to allow many players in the same world. 
- Actual gamification: Make reasons to play, health meter, powerups, plant combinations, enemies? - Better mobile controls: Currently on screen, no swiping for movement. - Easier building with blocks: Currently block position based on location of player. Also featured on: - Microsoft Store: https://ift.tt/NPx8e31 - Wayback Machine: https://ift.tt/KW8Er1J.... Feedback is highly welcome, and source is available at: https://ift.tt/OBmJ70l https://kherrick.github.io/sprite-garden/ October 12, 2025 at 04:45AM

Show HN: Gnokestation Is an Ultra Lightweight Web Desktop Environment https://ift.tt/d1Tn8pz

Show HN: Gnokestation Is an Ultra Lightweight Web Desktop Environment https://gnokestation.netlify.app October 12, 2025 at 12:32AM

Friday, October 10, 2025

Show HN: Iframetest.com https://ift.tt/bf5eB6N

Show HN: Iframetest.com https://iframetest.com/ October 6, 2025 at 04:55PM

Show HN: Modeling the Human Body in Rust So I Can Cmd+Click Through It https://ift.tt/RPuAv7r

Show HN: Modeling the Human Body in Rust So I Can Cmd+Click Through It I started this trying to understand two things: why my Asian friends turn red after drinking, and why several friends all seemed to have migraine clusters. I was reading medical papers and textbooks, but kept getting lost jumping between topics. I thought: what if I could just Cmd+Click through this like code? What if "ALDH2 gene" was actually clickable, and took me to the variant, the phenotype, the population frequencies? So I started modeling human biology in Rust with my Ralph agent (Claude in a loop, ty ghuntley). Turns out the type system is perfect for this. Every biological entity is strongly-typed with relationships enforced at compile time. After 1 day of agent coding: - 277 Rust files, ~95k lines of code - 1,561 tests passing - 13 complete organ systems - Genetics with ancestry-specific variants - Clinical pathology models Try it: git clone https://ift.tt/Kyi8Ix2 cd open_human_ontology cargo run --example ide_navigation_demo Then open `examples/ide_navigation_demo.rs` and Cmd+Click through: Understanding Asian flush: AsianGeneticVariantsCatalog::get_metabolic_variants() // Click through to: // → ALDH2 gene on chromosome 12q24.12 // → rs671 variant (Glu504Lys) // → 40% frequency in Japanese population // → Alcohol flush reaction // → 10x esophageal cancer risk with alcohol // → Acetaldehyde metabolism pathway Understanding migraines: Migraine { subtype: WithAura, triggers: [Stress, LackOfSleep, HormonalChanges], genetic_variants: ["rs2075968", "rs1835740"], ... } // Click through to: // → 17 migraine trigger types // → 12 aura symptom types // → Genetic risk factors // → Why clusters happen (HormonalChanges → Menstruation) Now I can actually navigate the connections instead of flipping through PDFs. Heart → CoronaryArtery → Plaque. VisualCortex → 200M neurons → NeuralConnection pathways. It's like Wikipedia but type-checked and with jump-to-definition. 
This isn't production medical software - it's a learning tool. But it's way more useful than textbooks for understanding how biological systems connect. The agent keeps expanding it. Sometimes it OOMs but that's part of the fun. Tech: Rust, nalgebra, serde, rayon, proptest. I am not a doctor or medical professional; this is for my education. You can commit to it if you want to, or review and open some PRs if you find wrong information or want to add references. https://ift.tt/Kyi8Ix2 October 11, 2025 at 12:59AM

Show HN: Sora Watermark Remover https://ift.tt/BwTmryc

Show HN: Sora Watermark Remover Sora Watermark Remover is an online AI-powered tool designed to automatically detect and remove watermarks from videos generated by Sora AI. It preserves video quality while efficiently removing text, logos, timestamps, and other overlays. The platform supports multiple video formats, processes files quickly, and ensures privacy by deleting uploaded videos after processing. Ideal for content creators, marketers, and video editors seeking a fast, professional watermark removal solution. https://sorawatermark.live October 10, 2025 at 11:21PM

Thursday, October 9, 2025

Show HN: I Wrote a Full Text Search Engine from Scratch in Go https://ift.tt/BYACUqD

Show HN: I Wrote a Full Text Search Engine from Scratch in Go https://ift.tt/akhpIdc October 10, 2025 at 12:09AM

Show HN: JavaScript canvas Pie 3D chart https://ift.tt/DRmyrA9

Show HN: JavaScript canvas Pie 3D chart https://koroliov.github.io/x-charts-js/docs/demos/ October 9, 2025 at 10:31PM

Show HN: I built an enterprise-grade data room for $40 https://ift.tt/Ul8Ri3O

Show HN: I built an enterprise-grade data room for $40 https://www.peony.ink/ October 9, 2025 at 09:49PM

Show HN: GYST – A new take on the desktop interface (alpha) https://ift.tt/Va1FhHO

Show HN: GYST – A new take on the desktop interface (alpha) Hi HN! I’ve been working on a tool that merges file explorer, whiteboard, bookmarking, note-taking & simple graphic design into one lightweight interface. The idea is to make all these tools feel like one fluid space instead of 5 separate tools. The hope is to replicate the feeling of a physical desk : where order and freedom coexist. This 15-min video walks through the current alpha and the vision for the full product : https://youtu.be/AcWzuBBuiPM I’d love your feedback — especially around the concept and UX. The alpha is online if you want to try it: https://gyst.fr This is a solo project for now, inspired by the “second brain” / PKM movement and my own frustration with fragmented tools and outdated UX. https://www.youtube.com/watch?v=AcWzuBBuiPM October 9, 2025 at 11:39PM

Wednesday, October 8, 2025

Show HN: KI Song Erstellen Kostenlos – AI Music Generator Für Deutsche Musik https://ift.tt/12qTv6r

Show HN: KI Song Erstellen Kostenlos – AI Music Generator Für Deutsche Musik Free AI music generator for German songs. Text in → a professional song in minutes. Built for content creators who need copyright-free music. https://ift.tt/NcDX4GK GitHub: https://ift.tt/AcMy0E9 Try it out! https://ift.tt/NcDX4GK October 8, 2025 at 11:56PM

Show HN: I built a local-first podcast app https://ift.tt/W2QfwEi

Show HN: I built a local-first podcast app I worked on early podcast software in 2004 (iPodder/Juice) and have been a heavy podcast consumer ever since. I wanted a podcast app that respects your privacy and embraces the open web—and to explore what's possible in the browser. The result is wherever.audio, which you can try right now at the link above. How it works: It's a progressive web app that stores all your subscriptions and data locally in your browser using IndexedDB. Add it to your home screen and it feels native. Works offline with downloaded episodes. No central server storing your data—just some Cloudflare/AWS helpers to smooth out browser limitations. What makes it different: - True local-first: Your data stays on your device - Custom feeds: Add any RSS feed, not just what's in a directory - On-device search: Search across all feeds and episodes, including your custom ones - Podcasting 2.0 support: Chapters, transcripts, funding tags, and others - Auto-generated chapters: For popular shows that don't have them - AI-powered discovery: Ask questions to find shows and episodes (this feature does send queries to a 3rd party API, and also uses anonymized analytics while we work out the prompts) - Audio-guided tutorials: Interactive walkthroughs with voice guidance and visual cues The basics work well too: Standard playback features, queue management, speed controls, etc. I'm really interested in feedback—this is more passion project than business right now. I've been dogfooding it as my daily podcast app for over a year, and I'm open to exploring making it a business if people find it valuable. Curious if there are unmet needs that a privacy-focused, open web approach could address. https://wherever.audio October 8, 2025 at 11:46PM

Tuesday, October 7, 2025

Show HN: Arc – high-throughput time-series warehouse with DuckDB analytics https://ift.tt/MFGSIye

Show HN: Arc – high-throughput time-series warehouse with DuckDB analytics Hi HN, I’m Ignacio, founder at Basekick Labs. Over the past months I’ve been building Arc, a time-series data platform designed to combine very fast ingestion with strong analytical queries. What Arc does: ingest via a binary MessagePack API (fast path), stay compatible with Line Protocol for existing tools (like InfluxDB; I'm an ex-Influxer), store data as Parquet with hourly partitions, and query via the DuckDB engine using SQL. Why I built it: many systems force you to trade retention, throughput, or complexity. I wanted something where ingestion performance doesn’t kill your analytics. Performance and benchmarks so far: write throughput of ~1.88M records/sec (MessagePack, untuned) on my M3 Pro Max (14 cores, 16 GB RAM); ClickBench on AWS c6a.4xlarge: 35.18 s cold, ~0.81 s hot (43/43 queries succeeded). In those runs, caching was disabled to match benchmark rules; enabling cache in production gives ~20% faster repeated queries. I’ve open-sourced the Arc repo so you can dive into implementation, benchmarks, and code. Would love your thoughts, critiques, and use-case ideas. Thanks! https://ift.tt/crCv2Tt October 7, 2025 at 11:40PM
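To make the hourly-Parquet-partition idea concrete, here is a minimal sketch of mapping a timestamp to a partition path. The directory scheme is my illustration, not Arc's documented on-disk layout:

```python
from datetime import datetime, timezone

def partition_path(measurement, ts):
    """Map a Unix timestamp to an hourly partition file for a measurement.
    The path scheme here is illustrative, not Arc's exact format."""
    t = datetime.fromtimestamp(ts, tz=timezone.utc)
    return (f"{measurement}/"
            f"{t.year:04d}/{t.month:02d}/{t.day:02d}/{t.hour:02d}/"
            f"data.parquet")
```

A DuckDB engine can then scan such a layout with a glob, e.g. `SELECT ... FROM read_parquet('cpu/*/*/*/*/data.parquet')`, which is what makes "fast ingest, SQL analytics" compatible.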

Show HN: Kalendis – Scheduling API (keep your UI, we handle timezones/DST) https://ift.tt/EJzNBiI

Show HN: Timelinize – Privately organize your own data from everywhere, locally https://ift.tt/RUkfc3G

Show HN: Timelinize – Privately organize your own data from everywhere, locally https://timelinize.com October 7, 2025 at 11:10PM

Show HN: Mars – Personal AI robot for builders (< $2k) https://news.ycombinator.com/item?id=45504127

Show HN: Mars – Personal AI robot for builders (< $2k) Hey, we’re Axel and Vignesh, cofounders of Innate ( https://www.innate.bot/ ). We just launched MARS, a general-purpose robot with an open onboard agentic OS built on top of ROS2. Overview: https://youtu.be/GEOMYDXv6pE Control demo: https://youtu.be/_Cw5fGa8i3s Videos of autonomous use-cases: https://docs.innate.bot/welcome/mars-example-use-cases Quickstart: https://docs.innate.bot/welcome/mars-quick-start . Our last thread: https://news.ycombinator.com/item?id=42451707 When we started, we felt there was no good, affordable general-purpose robot that anyone can build on. There’s no lack of demand: Hugging Face’s SO-100 and LeKiwi are pretty clear successes already, but the hardware is unreliable, the software experience is barebones and keeps changing, and you often need to buy hidden extras to make them work (starting with a computer with a good GPU). The Turtlebots were good, but are getting outdated. The open-source hobbyist movement lacks really good platforms to build on, and we wanted something robust and accessible. MARS is our attempt at making a first intuitive AI robot for everyone. What it is: - It comes assembled and calibrated - Has onboard compute with a Jetson Orin Nano 8GB - A 5DoF arm with a wrist camera - Sensors: RGBD wide-angle cam, 2D LiDAR, speakers - Control via a dedicated app and a leader arm that plugs into iPhone and Android - 2 additional USB ports + GPIO pins for extra sensors or effectors - And our novel SDK called BASIC that allows you to run it like an AI agent with VLAs. It boots in a minute, can be controlled via phone, is programmable in depth with a PC, and the onboard agent lets it see, talk, plan, and act in real time. Our SDK BASIC allows you to create “behaviors” (our name for programs) ranging from a simple hello world to a very complex long-horizon task involving reasoning, planning, navigation and manipulation.
You can create skills that behaviors can run autonomously by training the arm or writing code tools, like for an AI agent. You can also call the ROS2 topics to control the robot at a low-level. And anything created on top of this SDK can be easily shared with anyone else by just sharing the files. This is intended for hobbyist builders and education, and we would love to have your feedback! p.s. If you want to try it, there’s a temporary code HACKERNEWS-INNATE-MARS that lowers the price to $1,799. p.p.s The hardware and software will be open-sourced too, if some of you want to contribute or help us prepare it properly feel free to join our discord at https://discord.gg/YvqQbGKH October 7, 2025 at 10:11PM

Monday, October 6, 2025

Show HN: Tangled – Git collaboration built on AT Protocol https://ift.tt/Gh0dMUT

Show HN: Tangled – Git collaboration built on AT Protocol https://tangled.org October 7, 2025 at 12:28AM

Show HN: I've built a platform for writing technical/scientific documents https://ift.tt/HEgc10r

Show HN: I've built a platform for writing technical/scientific documents https://ift.tt/on0UzQK October 6, 2025 at 05:58PM

Show HN: I Built a Transcription CLI Because Uploading 4GB Videos Was Killing Me https://ift.tt/5vo68dE

Show HN: I Built a Transcription CLI Because Uploading 4GB Videos Was Killing Me https://ift.tt/Q0d9M62 October 7, 2025 at 01:22AM

Show HN: A Digital Twin of my coffee roaster that runs in the browser https://ift.tt/bfRDQHi

Show HN: A Digital Twin of my coffee roaster that runs in the browser I built this website to host a data-driven model of my coffee sample roaster. I realized after 20 or so batches on the machine that while the controls are intuitive (heat, fan, and drum speeds), the physics can be unintuitive. I wanted to use my historical roast data to create and tune a model that I could use to do roast planning, control, and to help me build my own intuition for roasting. This website lets you interact with my roaster in a virtual, risk-free setting! The models are custom Machine Learning modules that honor roaster physics and bean physics (this is not GPT/transformer-based). Buncha math. The models are trained on about a dozen real roasts. The default bean model is an Ethiopian Guji bean. My next steps are to add other roasters and the ability to practice control/reference tracking. https://ift.tt/2rVuUsp October 6, 2025 at 11:31PM
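A minimal version of the kind of physics such a model must honor is a first-order lumped heat balance: burner input warms the bean mass while airflow carries heat away toward ambient. The coefficients below are invented for illustration; the site's models are fitted to real roast data and are far richer:

```python
def simulate_bean_temp(heat, fan, steps=600, dt=1.0,
                       ambient=25.0, k_heat=0.9, k_loss=0.015):
    """Toy lumped thermal model of a roaster, stepped with forward Euler:
        dT/dt = k_heat * heat - k_loss * (1 + fan) * (T - ambient)
    All coefficients are made up for illustration, not fitted values."""
    T = ambient
    for _ in range(steps):
        dT = k_heat * heat - k_loss * (1.0 + fan) * (T - ambient)
        T += dT * dt
    return T
```

Even this toy shows the unintuitive coupling the author mentions: raising the fan both increases heat loss and shifts the steady-state temperature, so two controls fight over one output.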

Sunday, October 5, 2025

Show HN: A Node.js CLI tool to generate ai.txt, llms.txt, robots.txt, humans.txt https://ift.tt/0Qn2wXJ

Show HN: A Node.js CLI tool to generate ai.txt, llms.txt, robots.txt, humans.txt https://ift.tt/RqsraVn October 6, 2025 at 10:58AM

Show HN: High-fidelity, compact, and real time rendering of university campus https://ift.tt/s7PMTtE

Show HN: High-fidelity, compact, and real time rendering of university campus Technical thread: https://ift.tt/RyBZJn6 https://hoanh.space/aalto/ October 6, 2025 at 06:51AM

Show HN: Re-Implementing the macOS Spatial Finder https://ift.tt/3IEitGA

Show HN: Re-Implementing the macOS Spatial Finder Modern macOS versions open folders in seemingly random positions and sizes. This set of scripts restores the behaviour known to classic macOS, where: - folders remember where they were on the screen - folders remember how big they were This enables you to utilise the brain's superb spatial memory for file management. https://ift.tt/arTMPJ0 October 2, 2025 at 03:58AM
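The core mechanism is simple to sketch: persist each folder's window geometry in a small store keyed by path, and look it up when the folder reopens. A minimal stand-alone illustration in Python (the real project's scripts and storage format will differ):

```python
import json

def remember(db_path, folder, geometry):
    """Persist a folder window's [x, y, w, h] so it reopens where it was."""
    try:
        with open(db_path) as f:
            db = json.load(f)
    except FileNotFoundError:
        db = {}
    db[folder] = geometry
    with open(db_path, "w") as f:
        json.dump(db, f)

def recall(db_path, folder):
    """Return the saved geometry for a folder, or None if never seen."""
    try:
        with open(db_path) as f:
            return json.load(f).get(folder)
    except FileNotFoundError:
        return None
```

The per-folder key is what makes the behavior spatial: every folder gets its own remembered place, rather than one global "last window" setting.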

Saturday, October 4, 2025

Show HN: World Amazing Framework: Like Django for Civilization https://ift.tt/tqOdNS9

Show HN: World Amazing Framework: Like Django for Civilization Any initial thoughts? This framework is meant to be a tool for construction, so if you want to play around with it for creating potential specific implementations, you can drop the contents of the website, the GitHub README, and the entire overview.md into an AI chat, and that should be enough to use the framework, at least conceptually. Would y'all want me to pre-prime a chat in Google AI Studio with the full context of the plan and some basic direction for discourse? I can share a link to a ready-to-go environment. The core documentation should answer most mechanical questions. And if you feed the docs into an AI chat, you can ask it any question you may have, or to simply ask it to explain something in different ways, or hypothesize solutions to any world issue, either systemic or regional. Gemini Pro 2.5 can take the full doc in one prompt, and its ability to co-create ideas is remarkable. I've been using it mostly through the AI Studio interface. Much of the overview is as much my work as it is a synthesis of my collaboration with Gemini Pro 2.5, ChatGPT-4o, and some early contributions from GPT-4 about a year ago. Before LLMs, I was building out pamphlet-style pages on a website (that are up at whomanatee.org, which is the base wrapper implementation of the framework), and I was planning to use them as talking points. I was anticipating that much of the deep thinking would have to happen in slow, public discourse. With LLMs, I've been able to stress-test these ideas from every possible angle, using any past event or theory to see if the framework could withstand scrutiny. At one point, a model argued that Adam Smith would have rejected this idea as fantasy. So I worked with it to develop an economic plan that "synthetic Adam" praised. It's incredible that we now have the ability to get synthesized thoughts from almost any perspective. You could ask it, "What would Barack Obama think of this plan? 
And using the framework, what would be your response to any hesitations he may have?" And it responds with incredible analysis, synthesis, and feedback. https://ift.tt/125zuFo October 5, 2025 at 05:14AM

Show HN: Run – a CLI universal code runner I built while learning Rust https://ift.tt/4Ro6ulr

Show HN: Run – a CLI universal code runner I built while learning Rust Hi HN — I’m learning Rust and decided to build a universal CLI for running code in many languages. The tool, Run, aims to be a single, minimal dependency utility for: running one-off snippets (from CLI flags), running files, reading and executing piped stdin, and providing language-specific REPLs that you can switch between interactively. I designed it to support both interpreted languages (Python, JS, Ruby, etc.) and compiled languages (Rust, Go, C/C++). It detects languages from flags or file extensions, can compile temporary files for compiled languages, and exposes a unified REPL experience with commands like :help, :lang, and :quit. Install: cargo install run-kit (or use the platform downloads on GitHub). Source & releases: https://ift.tt/v9szjWa I used Rust while following the official learning resources and used AI to speed up development, so I expect there are bugs and rough edges. I’d love feedback on: usability and UX of the REPL, edge cases for piping input to language runtimes, security considerations (sandboxing/resource limits), packaging and cross-platform distribution. Thanks — I’ll try to answer questions and share design notes. https://ift.tt/v9szjWa October 5, 2025 at 01:34AM
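The extension-based detection described above boils down to a table from file extensions to interpreter commands. A minimal sketch of that step (this subset and the function name are mine, not Run's internals; compiled languages would additionally need a compile-then-run path):

```python
import os

RUNNERS = {
    # Illustrative subset; the real tool maps many more extensions.
    ".py": ["python3"],
    ".js": ["node"],
    ".rb": ["ruby"],
    ".sh": ["bash"],
}

def detect_runner(path):
    """Pick an interpreter command from the file extension, as a universal
    runner must do before handing the file to the right runtime."""
    ext = os.path.splitext(path)[1]
    if ext not in RUNNERS:
        raise ValueError(f"no runner registered for {ext!r}")
    return RUNNERS[ext] + [path]
```

A runner would then pass the resulting argv to `subprocess.run`, which is also the natural seam for adding the sandboxing and resource limits the author asks about.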

Show HN: Surf-Wayland https://ift.tt/f3tOsEi

Show HN: Surf-Wayland Porting of the suckless surf browser for Wayland https://ift.tt/vpzBKmG October 4, 2025 at 06:58PM

Show HN: Tempmail Mail https://ift.tt/yRZiJGa

Show HN: Tempmail Mail A proxy for your main email https://ift.tt/pq6lDn8 October 4, 2025 at 10:50PM

Friday, October 3, 2025

Show HN: Phpssg a Lightweight Static Site Generator in Pure PHP with DI https://ift.tt/yEe4xgB

Show HN: Phpssg a Lightweight Static Site Generator in Pure PHP with DI https://ift.tt/q7G2ZTb October 3, 2025 at 11:04PM

Show HN: API for removing watermarks from Sora 2 videos https://ift.tt/BiogHYE

Show HN: API for removing watermarks from Sora 2 videos Computer vision for detection, advanced inpainting for removal, FFmpeg for audio handling. Simple REST endpoints with webhook callbacks for async processing. Built this after seeing developers struggle with building their own ML pipelines for video post-processing. The API handles the complexity—you just POST a video and get back a clean file https://cliploom.app October 3, 2025 at 10:54PM
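The "POST a video, get a webhook callback" flow can be sketched as a client building an async job request. The endpoint path and field names below are hypothetical, since the actual API schema isn't shown here:

```python
import json
import urllib.request

def build_job_request(api_base, video_url, webhook_url, api_key):
    """Build the POST for a hypothetical async job endpoint.
    Path (/v1/jobs) and JSON fields are guesses, not cliploom's
    documented API."""
    body = json.dumps({
        "video_url": video_url,
        "webhook_url": webhook_url,  # called back when processing finishes
    }).encode()
    return urllib.request.Request(
        f"{api_base}/v1/jobs",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )
```

Sending with `urllib.request.urlopen(req)` would return a job ID immediately; the clean file arrives later via the webhook, which is the usual shape for video-length processing.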

Show HN: Linux Command Challenges for Beginners https://ift.tt/QRgF8M3

Show HN: Linux Command Challenges for Beginners Learn Linux basics through short, interactive challenges in this web app. Try it here: https://linuxlabs.app Feedback welcome! https://linuxlabs.app October 3, 2025 at 10:12PM

Show HN: heyyy.chat – WebRTC-based Omegle-clone, Video Chat with Random People https://ift.tt/4y1gnFZ

Show HN: heyyy.chat – WebRTC-based Omegle-clone, Video Chat with Random People https://heyyy.chat October 3, 2025 at 08:58PM

Thursday, October 2, 2025

Show HN: YNOT – Free, Open-Source YouTube Downloader https://ift.tt/DXQUhIs

Show HN: YNOT – Free, Open-Source YouTube Downloader Hey HN! I built YNOT, a simple cross-platform YouTube downloader with a GUI. It's powered by yt-dlp and completely free/open-source (WTFPL license). Key features: - Simple GUI - just paste URL and download - Downloads HD/4K videos - Cross-platform (Windows, macOS, Linux) - No ads, no tracking, completely private - Lightweight and fast GitHub: https://ift.tt/qHg34h0 I'd love to hear your feedback and suggestions! https://james-see.github.io/ynot/ October 3, 2025 at 05:56AM
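Since YNOT is powered by yt-dlp, the download step presumably reduces to shelling out with the right flags. The flags below are standard yt-dlp options, but whether YNOT uses exactly these is an assumption:

```python
def ytdlp_command(url, out_dir="."):
    """Sketch of the yt-dlp invocation a GUI front end might shell out to.
    The flags are standard yt-dlp options; that YNOT uses exactly these
    is a guess."""
    return [
        "yt-dlp",
        "-f", "bestvideo+bestaudio/best",      # best quality, merged
        "-o", f"{out_dir}/%(title)s.%(ext)s",  # tidy output filenames
        url,
    ]
```

A GUI would then run this with `subprocess.run(ytdlp_command(url))` and stream yt-dlp's progress output back into the window.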

Show HN: Enhance – A Terminal UI for GitHub Actions https://ift.tt/Ppd3GX7

Show HN: Enhance – A Terminal UI for GitHub Actions I'm very excited to share what I've been working on lately! Introducing ENHANCE, a terminal UI for GitHub Actions that lets you easily see and interact with your PRs' checks. It's available under a sponsorware model. Get more info on the site: -> https://ift.tt/d0IUVof This is an attempt to make my OSS development something sustainable. Happy to hear feedback about the model as well as the tool! Cheers! https://ift.tt/gd8EBHz October 3, 2025 at 02:19AM

Show HN: Traceroute Visualizer https://ift.tt/cK1EI0y

Show HN: Traceroute Visualizer This nifty tool plots the traceroute results and shows you the RTT as well as the distance travelled by the packets! Supports MTR, flyingroutes and of course, traceroute. The existing solutions were too limited, so I made this one. Let me know if you have any feedback. https://ift.tt/aOjCeX5 September 29, 2025 at 03:31PM
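Plotting RTT and distance from traceroute output starts with parsing hop lines and converting RTT into an upper bound on path length: light in fiber covers roughly 200,000 km/s, and the RTT traverses the path twice. A sketch for the classic traceroute line format only (MTR and flyingroutes output differs):

```python
import re

def parse_hop(line):
    """Parse one classic traceroute hop line, e.g.
    ' 3  203.0.113.1  12.345 ms  11.901 ms  12.040 ms'
    Returns (hop_number, host, min_rtt_ms), or None if unparseable."""
    m = re.match(r"\s*(\d+)\s+(\S+)\s+(.*)", line)
    if not m:
        return None
    hop, host, rest = int(m.group(1)), m.group(2), m.group(3)
    rtts = [float(x) for x in re.findall(r"([\d.]+)\s*ms", rest)]
    return hop, host, min(rtts)

def max_distance_km(rtt_ms):
    """Upper bound on one-way distance: ~200,000 km/s in fiber,
    halved because the RTT covers the path both ways."""
    return rtt_ms / 1000 * 200_000 / 2
```

It is only an upper bound because queuing delay and indirect routing inflate RTT, which is exactly why plotting distance alongside RTT is revealing.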

Wednesday, October 1, 2025

Show HN: Rostra is a P2P (f2f) social network https://ift.tt/W0goFuM

Show HN: Rostra is a P2P (f2f) social network A public instance is available at https://rostra.me/ . It will default to showing the interface from the perspective of my own identity, in a read-only mode. Click "Logout" and then "Random" to generate your own identity to play with. https://app.radicle.xyz/nodes/radicle.dpc.pw/rad%3AzzK566qFsZnXomX2juRjxj9K1LuF October 2, 2025 at 05:10AM

Show HN: Open-source project – HTTP cache and reverse proxy https://ift.tt/96Hs5ia

Show HN: Open-source project – HTTP cache and reverse proxy https://borislavv.github.io/advcache.dev/ October 1, 2025 at 02:41PM

Show HN: Spit Notes – A songwriting app that keeps lyrics and audio together https://ift.tt/FQ2q8DG

Show HN: Spit Notes – A songwriting app that keeps lyrics and audio together Any songwriter who uses the iOS Notes app to write their lyrics has a mess of New Recording 142 voice memos in their Voice Memos app. I made Spit Notes as basically the Notes app but with a built-in voice recorder that keeps your audio files neatly organized on the same line as your lyrics. Now you can quickly capture your song ideas while driving or when you wake up in the middle of the night without worrying about losing them in your pile of untitled voice memos. While you can attach audio to notes with other apps, adding and recording audio has a lot of friction, and often the layout of the audio elements in those apps is too pronounced to keep the text flowing seamlessly. This is not the case with Spit Notes. I've wanted this app to exist for years but I put off making it myself because I knew it would take me a lot of time to build manually without knowing Swift. In recent years I've been writing AI-assisted code, but with AI coding agents getting better and better, 3 months ago I decided to see if I could vibe code a full product. The code for this project was not AI-assisted, but human-assisted, with me providing the vision and feel of the app, while the AI agent takes that, makes a pass at the code base, and then I QA it, letting the AI know what worked/didn't work, and iterating. I started with paying for Cursor and using Opus 4 but after getting insanely good initial results with Opus 4 and seeing my Cursor costs start to rise, I remembered this post https://ift.tt/8qhTCFt and took the plunge with the Claude Code Max $200 plan. This turned out to be an incredible value because it allowed me to use Claude basically without limit. However, Gemini still had the biggest context window, and as the project grew I had to use Gemini to create plans for big features and find deep bugs across all of the AI-generated modules.
Pro tip: create broad reference files for you and the AI, like an ARCHITECTURE.md where you keep a human-readable version of the technical big picture. You can then reference that for the AI so it stays aligned with your current progression. Once the Codex CLI became available on Homebrew it was a wrap. I cancelled my Claude Code Max plan and have been happily coding without ever hitting any rate limits (other than when they made that update that accidentally reduced rate limits instead of increasing them). Today, I pay for ChatGPT Plus and the $20 Gemini plan and am able to clear most obstacles on the first or second prompt. I haven't tried Opus 4.5, but since I'm not really getting stuck with Codex, for now I'll stick with that. https://ift.tt/kXWPixR October 1, 2025 at 11:42PM

Show HN: Ocrisp, One-Click RAG Implementation, Simple and Portable https://ift.tt/CLZDeGF

Show HN: Ocrisp, One-Click RAG Implementation, Simple and Portable https://ift.tt/XchDxaE October 1, 2025 at 09:53PM