Welcome to my blog!
AI Enthusiast | Exploring the Future of Personalized LLMs
I'm passionate about pushing the boundaries of AI technology. My work focuses on Large Language Models and Reinforcement Learning, particularly on how we can create more personalized AI experiences.
Feel free to explore my thoughts and research through the blog posts below, or connect with me through any of my social links.
Featured
- Breaking Down Agent Evals (Part 3): τ²-bench and τ³-bench
  Part 3 of 3. How τ²-bench introduced dual control by giving the user its own tools, what τ³-bench added with sprawling document retrieval and full-duplex voice, and what production agent evals still do not measure.
- Breaking Down Agent Evals (Part 2): τ-bench Deep Dive
  Part 2 of 3. How τ-bench unified a simulated user, domain policies, and a real-world consequence model into one benchmark, why pass^k changed how the field talks about agent quality, and how its design principles transfer to your own eval suite.
- Breaking Down Agent Evals (Part 1): A Practitioner's Guide
  Part 1 of 3. Why traces (not code) are the source of truth in agents, the three observability primitives, run types, the metrics that matter at each level, the pass^k reliability metric, a four-step methodology for building an eval suite, and a filter-funnel view of why no single eval method is enough.
- Why Streaming LLMs Need Attention Sinks
  A walkthrough of attention sinks: what they are, why softmax produces them by accident, why naive sliding-window inference collapses without them, and how a four-token reservation lets streaming inference run to four million tokens with no quality loss.
- How PPO Actually Works
  PPO walked through from vanilla policy gradients, through the trust-region story that motivates it, to the clipped objective you actually run. Intuition first, math when it pays off. Written for ML people who have not done much RL.
- How to Mitigate the Lost-in-the-Middle Effect in LLMs
  A look at why long contexts quietly break LLMs, why important information is easier to use at the boundaries of the context than in the middle, and why agents that periodically restate their goals at the end of the context often work better.
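Since two of the posts above mention the pass^k reliability metric, here is a minimal sketch of the idea as I understand it from τ-bench: given n i.i.d. trials of an agent on a task with c successes, pass^k estimates the probability that k fresh trials would all succeed. The function name is my own.

```python
from math import comb

def pass_hat_k(n: int, c: int, k: int) -> float:
    """Estimate P(all k trials succeed) from n i.i.d. trials with c successes.

    Uses the unbiased counting estimator C(c, k) / C(n, k): the fraction of
    size-k subsets of the n trials in which every trial succeeded.
    """
    if not 0 <= c <= n or not 1 <= k <= n:
        raise ValueError("need 0 <= c <= n and 1 <= k <= n")
    return comb(c, k) / comb(n, k)

# An agent that passes 8 of 10 trials looks strong at k=1,
# but the estimate drops sharply as k grows:
print(pass_hat_k(10, 8, 1))  # 0.8
print(pass_hat_k(10, 8, 3))  # ~0.467
```

This is why pass^k is a harsher bar than single-run accuracy: consistency across repeated attempts falls off quickly unless the per-run success rate is very high.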