
Wednesday, February 4, 2026

Show HN: EpsteIn – Search the Epstein files for your LinkedIn connections https://ift.tt/2oxk74X

Show HN: EpsteIn – Search the Epstein files for your LinkedIn connections https://ift.tt/qgEoSYT February 5, 2026 at 02:24AM

Show HN: Tabstack Research – An API for verified web research (by Mozilla) https://ift.tt/1Fwnk5V

Show HN: Tabstack Research – An API for verified web research (by Mozilla)

Hi HN, my team and I are building Tabstack to handle the web layer for AI agents. Today we are sharing Tabstack Research, an API for multi-step web discovery and synthesis. https://ift.tt/sAWhz9V

In many agent systems, there is a clear distinction between extracting structured data from a single page and answering a question that requires reading across many sources. The first case is fairly well served today; the second usually is not. Most teams handle research by combining search, scraping, and summarization. This becomes brittle and expensive at scale: you end up managing browser orchestration, moving large amounts of raw text just to extract a few claims, and writing custom logic to check whether a question was actually answered.

We built Tabstack Research to move this reasoning loop into the infrastructure layer. You send a goal, and the system:

- Decomposes it into targeted sub-questions to hit different data silos.
- Navigates the web using fetches or browser automation as needed.
- Extracts and verifies claims before synthesis to keep the context window focused on signal.
- Checks coverage against the original intent and pivots if it detects information gaps.

For example, if a search for enterprise policies identifies that data is fragmented across multiple sub-services (like Teams data living in SharePoint), the engine detects that gap and automatically pivots to find the missing documentation.

The goal is to return something an application can rely on directly: a structured object with inline citations and direct links to the source text, rather than a list of links or a black-box summary. The blog post linked above goes into more detail on the engine architecture and the technical challenges of scaling agentic browsing.

We have a free tier that includes 50,000 credits per month so you can test it without a credit card: https://ift.tt/JMaSWPU I would love to get your feedback on the approach and answer any questions about the stack. February 5, 2026 at 12:57AM
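A structured, citation-backed result like the one described could be consumed along these lines. This is a minimal Python sketch; the field names (claims, citations, source_url, quote) are my assumptions for illustration, not Tabstack's actual response schema:

```python
# Hypothetical sketch of consuming a structured research result.
# Field names are illustrative assumptions, not Tabstack's real schema.
from dataclasses import dataclass


@dataclass
class Citation:
    source_url: str  # direct link to the source text
    quote: str       # the supporting passage


@dataclass
class Claim:
    text: str
    citations: list


def grounded_claims(result: dict) -> list:
    """Keep only claims backed by at least one citation with a source link."""
    kept = []
    for c in result.get("claims", []):
        cites = [Citation(**x) for x in c.get("citations", []) if x.get("source_url")]
        if cites:
            kept.append(Claim(text=c["text"], citations=cites))
    return kept


result = {
    "answer": "Teams retention settings live in SharePoint admin docs.",
    "claims": [
        {"text": "Teams files are stored in SharePoint.",
         "citations": [{"source_url": "https://example.com/doc", "quote": "..."}]},
        {"text": "Unverified claim.", "citations": []},
    ],
}
print(len(grounded_claims(result)))  # only the cited claim survives
```

The point of the shape is that an application can filter or render claims by their citations directly, instead of re-parsing a prose summary.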

Show HN: GitHub Browser Plugin for AI Contribution Blame in Pull Requests https://ift.tt/nXVNuHM

Show HN: GitHub Browser Plugin for AI Contribution Blame in Pull Requests https://ift.tt/IKgtoQl February 3, 2026 at 09:35PM

Tuesday, February 3, 2026

Show HN: I built an AI movie making and design engine in Rust https://ift.tt/Z5BEpPc

Show HN: I built an AI movie making and design engine in Rust

I've been a photons-on-glass filmmaker for over ten years, and I've been developing ArtCraft for myself, my friends, and my colleagues. All of my film school friends have a lot of ambition, but the production pyramid doesn't allow individual talent to shine easily. 10,000 students go to film school, yet only a handful get to helm projects they want with full autonomy - and almost never at the blockbuster budget levels that would afford the creative vision they want. There's a lot of nepotism, too.

AI is the personal computer moment for film. The DAW. One of my friends has done rotoscoping with live actors: https://www.youtube.com/watch?v=Tii9uF0nAx4

The Corridor folks show off a lot of creativity with this tech:
https://www.youtube.com/watch?v=_9LX9HSQkWo
https://www.youtube.com/watch?v=DSRrSO7QhXY
https://www.youtube.com/watch?v=iq5JaG53dho

We've been making silly shorts ourselves:
https://www.youtube.com/watch?v=oqoCWdOwr2U
https://www.youtube.com/watch?v=H4NFXGMuwpY

The secret is that a lot of studios have been using AI for well over a year now. You just don't notice it, and they won't ever tell you because of the stigma. It's the "bad toupee fallacy": you'll only notice it when it's bad, and they'll never tell you otherwise.

Comfy is neat, but I work with folks who don't intuit node graphs, who don't have graphics cards with adequate VRAM, or who can't manage Python dependencies. The foundation models are all pretty competitive, and they're becoming increasingly controllable - and that's the big thing: control. So I've been working on the UI/UX control layer. ArtCraft has 2D and 3D control surfaces, where the 3D portion can be used as a strong and intuitive ControlNet for "Image-to-Image" (I2I) and "Image-to-Video" (I2V) workflows. It's almost like a WYSIWYG, and I'm confident that this is the direction the tech will evolve for creative professionals, rather than text-centric prompting.

I've been frustrated with tools like GIMP and Blender for a while. I'm no UX/UI maestro, but I've never enjoyed complicated tools - especially complicated OSS tools. Commercial-grade tools are better. Figma is sublime. An IDE for creatives should be simple, magical, and powerful. ArtCraft lets you drag and drop easily between a variety of creative canvases and an asset drawer. It's fast and intuitive. Bouncing between text-to-image for quick prototyping, image editing, 3D generation, and 3D compositing is fluid. It feels like "crafting" rather than prompting or node-graph wizardry.

ArtCraft, being a desktop app, lets us log you into third-party compute providers. I'm a big proponent of using and integrating the models you subscribe to wherever you have them. This has let us integrate WorldLabs' Marble Gaussian splats, for instance, and nobody else has done that. My plan is to add every provider over time, including generic API-key-based compute providers like FAL and Replicate. I don't care if you pay for ArtCraft - I just want it to be useful.

Two disclaimers:

1. ArtCraft is "fair source" - I'd like to go the CockroachDB route and eventually get funding, but keep the tool itself 100% source-available for people to build and run for themselves. Obsidian, but with source code. If we got big, I'd spend a lot of time making movies.

2. Right now ArtCraft is tied to a lightweight cloud service - I don't like this. It was a choice so I could reuse an old project and go fast, but I intend for this to work fully offline soon. All server code is in the monorepo, so you can run everything yourself. In the fullness of time, I envision a portable OSS cloud for various AI tools to read/write to, like a GitHub for assets, but that's just a distant idea right now.

I've written about the roadmap in the repo: I'd like to develop integrations for every compute provider, rewrite the frontend UI/UX in Bevy for a fully native client, and integrate local models too.
https://ift.tt/N2Rsw5S February 3, 2026 at 10:42PM

Monday, February 2, 2026

Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/ntYX7lL

Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/ArVUwN3 February 2, 2026 at 08:11PM

Show HN: Cloud-cost-CLI – Find cloud $$ waste in AWS, Azure and GCP https://ift.tt/71Gk4Li

Show HN: Cloud-cost-CLI – Find cloud $$ waste in AWS, Azure and GCP

Hey HN! I built a CLI tool to find cost-saving opportunities in AWS, Azure, and GCP.

Why? Existing cost management tools are either expensive SaaS products or slow dashboards buried in cloud consoles. I wanted something fast, CLI-first, and multi-cloud that I could run in CI/CD or my terminal.

What it does:
- Scans your cloud accounts and finds idle VMs, unattached volumes, oversized databases, and unused resources
- Returns a ranked list of opportunities with estimated monthly savings
- 26 analyzers across AWS, Azure, and GCP
- Read-only (never modifies infrastructure)

Key features:
- HTML reports with interactive charts (new in v0.6.2)
- AI-powered explanations (OpenAI or local Ollama)
- Export formats: HTML, Excel, CSV, JSON, terminal
- Multi-cloud: AWS, Azure, and GCP support (26 analyzers)

Quick example:
npm install -g cloud-cost-cli
cloud-cost-cli scan --provider aws --output html

Real impact: one scan found $11k/year in savings (empty App Service Plan, over-provisioned Cosmos DB, idle caches).

Technical stack:
- TypeScript
- AWS/Azure/GCP SDKs
- Commander.js for the CLI
- Chart.js for HTML reports
- Optional OpenAI/Ollama integration

Open source (MIT): https://ift.tt/QoIlSie npm: cloud-cost-cli

Would love feedback on:
1. What features would be most useful?
2. Should I add historical tracking (trends)?
3. Any missing cloud providers?

Happy to answer questions! https://ift.tt/QoIlSie February 2, 2026 at 11:45PM
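The ranked-savings output described above could be post-processed in a pipeline along these lines. A minimal Python sketch; the JSON field names (opportunities, resource, monthly_savings_usd, analyzer) are my assumptions about the export shape, not the tool's documented schema:

```python
# Hypothetical sketch: ranking a JSON export from a cost scanner by
# estimated savings. Field names are assumptions, not cloud-cost-cli's
# documented schema.
import json


def top_opportunities(report_json: str, limit: int = 3) -> list:
    """Return the highest-impact findings, sorted by estimated monthly savings."""
    findings = json.loads(report_json)["opportunities"]
    return sorted(findings, key=lambda f: f["monthly_savings_usd"], reverse=True)[:limit]


report = json.dumps({"opportunities": [
    {"resource": "unattached-ebs-vol-1", "analyzer": "ebs", "monthly_savings_usd": 12.0},
    {"resource": "idle-vm-2", "analyzer": "ec2", "monthly_savings_usd": 140.0},
    {"resource": "oversized-db", "analyzer": "rds", "monthly_savings_usd": 95.5},
]})

for f in top_opportunities(report):
    print(f["resource"], f["monthly_savings_usd"])
```

Something like this is how the JSON export format could feed a CI/CD gate, e.g. failing a build when total estimated waste crosses a threshold.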

Sunday, February 1, 2026

Show HN: Voiden – an offline, Git-native API tool built around Markdown https://ift.tt/D8LgMjm

Show HN: Voiden – an offline, Git-native API tool built around Markdown

Hi HN, we have open-sourced Voiden. Most API tools are built like platforms. They are heavy because they optimize for accounts, sync, and abstraction - not for simple, local API work.

Voiden treats API tooling as files. It's an offline-first, Git-native API tool built on Markdown, where specs, tests, and docs live together as executable Markdown in your repo. Git is the source of truth. No cloud. No syncing. No accounts. No telemetry. Just Markdown, Git, hotkeys, and your damn specs.

Voiden is extensible via plugins (including gRPC and WSS).

Repo: https://ift.tt/F4s927Y
Download Voiden here: https://ift.tt/4cj0y9l

We'd love feedback from folks tired of overcomplicated and bloated API tooling! https://ift.tt/F4s927Y February 1, 2026 at 10:09PM