Posts
All the articles I've posted.
-
Setting Logits to Negative Infinity: How LLMs Actually Output JSON
Structured outputs aren't a validation layer; they're a decoding-time intervention. How logit masking actually works, why token boundaries make it hard, and why reordering one field in your Pydantic schema can move accuracy by 90 points.
-
Breaking Down Agent Evals (Part 3): τ²-bench and τ³-bench
Part 3 of 3. How τ²-bench introduced dual control by giving the user its own tools, what τ³-bench added with sprawling document retrieval and full-duplex voice, and what production agent eval still does not measure.
-
LLMs Playing Just One: Why Same-Model LLM Ensembles Mode-Collapse
Ask four Claude Haiku instances independently for a clue for 'toast' and they all reply 'bread'. Four Sonnets collide even more often; four Opuses more often still. I built a tiny benchmark using the board game Just One to measure when LLM ensembles collapse and what makes them stop. A mixed-family ensemble with an anti-correlation prompt hits 3.25× the single-model baseline.
-
Breaking Down Agent Evals (Part 2): τ-bench Deep Dive
Part 2 of 3. How τ-bench unified a simulated user, domain policies, and a real-world consequence model into one benchmark, why pass^k changed how the field talks about agent quality, and how its design principles transfer to your own eval suite.
-
Breaking Down Agent Evals (Part 1): A Practitioner's Guide
Part 1 of 3. Why traces, not code, are the source of truth in agents; the three observability primitives; run types; the metrics that matter at each level; the pass^k reliability metric; a five-step methodology for building an eval suite; and a filter-funnel approach that explains why no single eval method is enough.
-
Why Streaming LLMs Need Attention Sinks
A walkthrough of attention sinks: what they are, why softmax produces them by accident, why naive sliding-window inference collapses without them, and how a four-token reservation lets streaming inference run to four million tokens with no quality loss.
-
How PPO Actually Works
PPO walked through from vanilla policy gradients, through the trust region story that motivates it, to the clipped objective you actually run. Intuition first, math when it pays off. Written for ML people who have not done much RL.
-
How to Mitigate the Lost-in-the-Middle Effect in LLMs
A look at why long contexts quietly break LLMs, why important information is easier to use at the boundaries than in the middle, and why agents that periodically restate their goals at the end of the context often work better.