
Friday, May 1, 2026

Show HN: AI CAD Harness https://ift.tt/yJCBOXY

Show HN: AI CAD Harness

Hi HN, I'm Zach, one of the co-founders of Adam ( https://adam.new ). We've been on HN twice before with text-to-CAD/3D experiments [1][2]. The honest takeaway from those threads: prompt-to-3D web apps are fun, but serious mechanical engineers don't want a black box that spits out an STL. They want help inside the CAD tool they already use, with full visibility and control over the feature tree.

So we built that. Adam is now a harness that integrates directly with your CAD. It reads your parts, understands the existing feature tree, and edits it agentically. We are now live in beta on Onshape and Fusion [3]. Install link for Autodesk Fusion: https://ift.tt/mVAtnav Install link for PTC Onshape: https://ift.tt/MRZtHFT...

Things people are using it for today:
- "Merge redundant features and clean up my tree"
- "Rename every feature so the tree is actually readable"
- "Round all internal edges with a 2mm fillet"
- "Parametrize my model"
And, of course, generating CAD end-to-end with Adam.

A few things we care about that aren't obvious from the listing:
1. From the start we have believed that CAD-as-code is the right abstraction. Our harness leans heavily on Onshape's FeatureScript and on Python in Fusion.
2. We run an internal CAD benchmark across frontier models. There has been a massive jump in the spatial-reasoning capabilities of recently released models, particularly GPT 5.5 and Opus 4.7 [4][5].
3. We open-sourced our earlier text-to-CAD work [6].

A note on the Anthropic Autodesk connector that shipped a couple of days ago [7]: we think it's great for the space and validates the direction. Where Adam is different:
- Model-agnostic. We pick whichever frontier model is winning on each task type from our own internal bench, instead of being tied to one lab.
- We live natively in your CAD apps and are actively building integrations across all programs.

What would you want an in-CAD agent to do that nothing does today? 
[1] https://ift.tt/nVt4pSJ [2] https://ift.tt/UEvbD3W [3] https://ift.tt/REXc5jo [4] https://ift.tt/Wn0SCbu [5] https://ift.tt/UEPojqB [6] https://ift.tt/38jMl1E [7] https://ift.tt/LeQkl1f https://ift.tt/mVAtnav May 2, 2026 at 12:43AM

Show HN: My Private GitHub on Postgres https://ift.tt/RW1Ac34

Show HN: My Private GitHub on Postgres https://ift.tt/TYsyxGL May 2, 2026 at 12:40AM

Show HN: N=1 – iOS app for structured longevity self-protocols https://ift.tt/zhQKCE3

Show HN: N=1 – iOS app for structured longevity self-protocols Hello, my name is Henry. I built this app for people who want to know for sure whether the things they are trying are actually working. I am looking for enthusiastic people who want to see the longevity and biohacker community grow. At the moment the app is completely free to use; there is no sign-up or anything like that. I need your feedback to build something beautiful. https://ift.tt/BJbNIus May 2, 2026 at 12:30AM

Show HN: Access OPFS from multiple tabs using a fake Shared Worker https://ift.tt/9uvykVH

Show HN: Access OPFS from multiple tabs using a fake Shared Worker https://ift.tt/2fDp1sR May 1, 2026 at 11:15PM

Thursday, April 30, 2026

Show HN: TRiP – a complete transformer engine in C built from scratch just by me https://ift.tt/hoJ4UjI

Show HN: TRiP – a complete transformer engine in C built from scratch just by me https://ift.tt/sJtQFvD April 30, 2026 at 11:48PM

Show HN: Phase Router – capacity-aware routing for MoE https://ift.tt/lF2nfvX

Show HN: Phase Router – capacity-aware routing for MoE https://ift.tt/5fpDFEi April 30, 2026 at 11:37PM

Show HN: A programming language where the only token is the word "vibe" https://ift.tt/iQ5j9Ia

Show HN: A programming language where the only token is the word "vibe" Fuzzy opcode windows. You don't need an exact number of vibes, just roughly the right number. https://wevibe.fyi April 30, 2026 at 11:14PM

Show HN: FusionCore: ROS 2 sensor fusion that outperforms robot_localization https://ift.tt/o29iO5j

Show HN: FusionCore: ROS 2 sensor fusion that outperforms robot_localization

I built sensor fusion for a mobile robot and reached for robot_localization like everyone does. After spending too long fighting navsat_transform, UTM zone boundaries, and YAML covariance tuning, I wrote my own.

FusionCore is a 22-state UKF that fuses IMU, wheel encoders, and GPS directly in ECEF (no coordinate projection, no extra node). It estimates IMU bias, adapts its noise covariance automatically from the innovation sequence, and gates outliers with a chi-squared test on every sensor.

I benchmarked it against the robot_localization EKF on 6 sequences from the NCLT public dataset (University of Michigan; real robot, real GPS, RTK ground truth). It wins 5 of 6. On the 6th sequence (fall, with degraded GPS over a long period) it loses badly. The robot_localization UKF diverged to NaN on all six. Configs, methodology, and full reproduction instructions are in the benchmarks/ folder. https://ift.tt/fLhu3qE April 28, 2026 at 08:46PM
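The chi-squared outlier gate the post mentions is a standard filtering technique: compare the normalized innovation squared against a chi-squared threshold and drop measurements that exceed it. A minimal NumPy sketch of the idea (not FusionCore's actual code; the function name and the 95%/df=3 threshold are illustrative assumptions):

```python
import numpy as np

def gate_measurement(z, z_pred, S, threshold):
    """Chi-squared (Mahalanobis) gate: accept the measurement only if
    the normalized innovation squared nu^T S^-1 nu is below threshold."""
    nu = z - z_pred                            # innovation
    d2 = float(nu @ np.linalg.solve(S, nu))    # normalized innovation squared
    return d2 <= threshold, d2

# 95% chi-squared threshold for a 3-dimensional measurement (e.g. GPS x, y, z)
CHI2_95_DF3 = 7.815

S = np.diag([4.0, 4.0, 9.0])                   # innovation covariance (m^2)

# Consistent GPS fix: small innovation relative to S -> accepted
ok, _ = gate_measurement(np.array([1.0, -1.0, 2.0]), np.zeros(3), S, CHI2_95_DF3)

# Multipath-style jump: huge innovation -> rejected, update is skipped
bad, _ = gate_measurement(np.array([25.0, -30.0, 5.0]), np.zeros(3), S, CHI2_95_DF3)
print(ok, bad)  # True False
```

Because the same innovation sequence also feeds the adaptive noise covariance the post describes, a rejected measurement is typically excluded from both the state update and the covariance adaptation.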

Wednesday, April 29, 2026

Show HN: Generative UI Library for React https://ift.tt/Kd6wCQA

Show HN: Generative UI Library for React https://ift.tt/9bhJ4WV April 30, 2026 at 02:28AM

Show HN: Send your first Peppol e-invoice in 5 minutes (EU mandate live) https://ift.tt/7QHGuOS

Show HN: Send your first Peppol e-invoice in 5 minutes (EU mandate live) https://getpeppr.dev/ April 30, 2026 at 12:36AM

Show HN: A new benchmark for testing LLMs for deterministic outputs https://ift.tt/R8lLrVa

Show HN: A new benchmark for testing LLMs for deterministic outputs

When building workflows that rely on LLMs, we commonly use structured output for programmatic use cases: converting an invoice into rows, meeting transcripts into tickets, or even complex PDFs into database entries. The model may return the schema you want, but with hallucinated values, like an `invoice_date` that is off by two months or a transcript array in the wrong order. The JSON is valid, but the values are not.

Structured output is now a big part of using LLMs, especially when building deterministic workflows. Current structured-output benchmarks (e.g., JSONSchemaBench) only validate the pass rate for JSON schema and types, not the actual values within the produced JSON. So we designed the Structured Output Benchmark (SOB), which fixes this by measuring the JSON schema pass rate, the types, and the value accuracy across all three modalities: text, image, and audio. In our test set, every record is paired with a JSON Schema and a ground-truth answer that was verified against the source context manually by a human plus an LLM cross-check, so a missing or hallucinated value counts as wrong.

Open source is doing well, with GLM 4.7 coming in at number 2, right after GPT 5.4. We noticed the rankings shift across modalities: GLM-4.7 leads text, Gemma-4-31B leads images, and Gemini-2.5-Flash leads audio. GPT-5.4, for example, ranks 3rd on text but 9th on images. Model size is not a predictor, either: Qwen3.5-35B and GLM-4.7 beat GPT-5 and Claude-Sonnet-4.6 on value accuracy, and Phi-4 (14B) beats GPT-5 and GPT-5-mini on text.

Structured hallucinations are the hardest bug. The values are type-correct, schema-valid, and plausible, so they slip through most guardrails. For example, in one audio record the ground truth is "target_market_age": "15 to 35 years", and a model returns "25 to 35". This is invisible without field-level checks.

Our goal is to be the best general model for deterministic tasks, and a key aspect of determinism is a controllable and consistent output structure. The first step to making structured output better is to measure it and hold ourselves against the best. https://ift.tt/azlci6e April 29, 2026 at 11:01PM
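The benchmark's core claim is that schema-valid JSON can still carry wrong values, which comes down to a field-level diff against ground truth. A minimal sketch of that kind of check (hypothetical helper names, not the SOB implementation; exact-match scoring is an assumption):

```python
def flatten(obj, prefix=""):
    """Flatten nested JSON into leaf paths, e.g. {"items[0].sku": "A1"}."""
    if isinstance(obj, dict):
        for k, v in obj.items():
            yield from flatten(v, f"{prefix}.{k}" if prefix else k)
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            yield from flatten(v, f"{prefix}[{i}]")
    else:
        yield prefix, obj

def value_accuracy(predicted, ground_truth):
    """Fraction of ground-truth leaf fields the model got exactly right.
    Missing or hallucinated values count as wrong, mirroring the post."""
    pred = dict(flatten(predicted))
    truth = dict(flatten(ground_truth))
    correct = sum(1 for path, v in truth.items() if pred.get(path) == v)
    return correct / len(truth)

truth = {"invoice_date": "2026-03-01",
         "items": [{"sku": "A1", "qty": 2}]}
pred  = {"invoice_date": "2026-05-01",   # type-correct but hallucinated date
         "items": [{"sku": "A1", "qty": 2}]}
print(value_accuracy(pred, truth))  # 2 of 3 leaf fields match -> ~0.667
```

A schema validator would pass `pred` unchanged, since the hallucinated date is still a well-formed string; only the leaf-level comparison catches it.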

Tuesday, April 28, 2026

Show HN: Open Bias – proxy that enforces agent behavior at runtime https://ift.tt/SUf65jN

Show HN: Open Bias – proxy that enforces agent behavior at runtime https://ift.tt/UZCHAo7 April 29, 2026 at 01:32AM

Show HN: I built a dating SIM that prepares you for your date https://ift.tt/lYZwagB

Show HN: I built a dating SIM that prepares you for your date https://ift.tt/UNViIbW April 29, 2026 at 12:16AM