Model rankings look clear until you add error bars. Learn how to use statistical rigor to find the real signal in AI evaluations and avoid false leads.

We need to stop treating evals like a simple contest and start treating them like scientific experiments, viewing each question set as a sample drawn from a "super-population" of all possible questions. The goal isn't just to see how the model does on those specific questions; it's to use the sample to infer the model's true underlying skill.
https://arxiv.org/html/2411.00640v1


A higher score can be misleading if it is reported without a standard error to account for sampling noise. Most evaluations use a limited set of questions, which is just a sample from a theoretical "super-population" of all possible questions. Without error bars, a small lead (say 2% or 3%) might simply mean one model got lucky with that particular set of questions rather than possessing superior underlying skill.
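As a minimal sketch (assuming binary 0/1 grading and treating the questions as an i.i.d. sample from that super-population), the score and its standard error can be computed like this:

```python
import numpy as np

def mean_and_standard_error(scores):
    """Return the benchmark score and its standard error.

    `scores` is a 1-D array of per-question scores (e.g. 0/1 for
    incorrect/correct). The standard error treats the questions as an
    i.i.d. sample from a larger "super-population" of possible questions.
    """
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    se = scores.std(ddof=1) / np.sqrt(len(scores))  # sample std / sqrt(n)
    return mean, se

# Example: a model that scores around 72% on a 500-question eval.
rng = np.random.default_rng(0)
scores = rng.binomial(1, 0.72, size=500)
mean, se = mean_and_standard_error(scores)
print(f"score = {mean:.3f} ± {1.96 * se:.3f} (95% CI)")
```

With 500 questions the 95% interval is roughly ±4 points, which is already wider than many of the leaderboard gaps people argue about.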
Clustered Standard Errors are used when multiple questions are based on the same context, such as several questions about a single Wikipedia passage. In these cases, the questions are not independent; if a model fails to understand the passage, it will likely miss all related questions. Treating these as independent data points results in error bars that are too small, making a model's performance seem more precise and "significant" than it actually is.
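Here is a rough sketch of the cluster-robust calculation, assuming each question carries an identifier for the passage (or other shared context) it came from; the variable names and the simulated data are illustrative, not from the paper:

```python
import numpy as np

def clustered_standard_error(scores, cluster_ids):
    """Cluster-robust standard error of the mean score.

    Questions sharing a cluster_id (e.g. drawn from the same passage)
    are not treated as independent: residuals are summed within each
    cluster before being squared (no small-sample correction applied).
    """
    scores = np.asarray(scores, dtype=float)
    cluster_ids = np.asarray(cluster_ids)
    resid = scores - scores.mean()
    cluster_sums = np.array([resid[cluster_ids == c].sum()
                             for c in np.unique(cluster_ids)])
    variance = (cluster_sums ** 2).sum() / len(scores) ** 2
    return np.sqrt(variance)

# Example: 100 passages, 5 questions each, with shared passage difficulty.
rng = np.random.default_rng(1)
passage_effect = rng.normal(0, 0.3, size=100)
p = np.clip(0.7 + np.repeat(passage_effect, 5), 0, 1)
scores = rng.binomial(1, p)
clusters = np.repeat(np.arange(100), 5)

naive_se = scores.std(ddof=1) / np.sqrt(len(scores))
print(f"naive SE     = {naive_se:.4f}")
print(f"clustered SE = {clustered_standard_error(scores, clusters):.4f}")
```

Because questions within a passage rise and fall together, the clustered standard error comes out noticeably larger than the naive one.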
While setting temperature to zero makes a model deterministic, it can actually increase the variance of the final score and inject bias. By forcing the model to pick only the most likely token, you lose the nuance of its internal probability distribution. The paper suggests using next-token probabilities for multiple-choice tests, or resampling (asking the same question multiple times and averaging the results), to get a more accurate measure of the model's true ability.
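A minimal sketch of the resampling option. The `grade_sample` callable is a placeholder assumption, standing in for whatever call samples one answer from your model and grades it; the toy version below just makes the sketch runnable end to end:

```python
import numpy as np

def resampled_scores(grade_sample, questions, k=10):
    """Per-question score = mean correctness over k sampled answers.

    `grade_sample(question)` is a placeholder, not a real API: it should
    draw one answer from the model at the evaluation temperature and
    return 1 if that answer is correct, else 0. Averaging k draws per
    question estimates the model's success probability on that question
    instead of relying on a single lucky or unlucky draw.
    """
    return np.array([np.mean([grade_sample(q) for _ in range(k)])
                     for q in questions])

# Toy stand-in for a real model call, so the sketch runs as-is.
rng = np.random.default_rng(3)
def toy_grade_sample(question):
    return rng.binomial(1, question["p_correct"])

questions = [{"p_correct": p} for p in rng.uniform(0.3, 0.95, size=200)]
scores = resampled_scores(toy_grade_sample, questions, k=10)
se = scores.std(ddof=1) / np.sqrt(len(scores))
print(f"score = {scores.mean():.3f} ± {1.96 * se:.3f}")
```

The resulting per-question means plug straight into the standard-error calculation shown earlier.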
To detect a 3% difference between models with 80% statistical power, a benchmark generally needs on the order of 1,000 independent questions. Many popular "mini-evals" with only 50 to 100 questions are too small to provide a clear signal: noise from the small sample size will drown out real performance gains unless one model is dramatically better than the other.
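A back-of-the-envelope sketch using the usual normal-approximation power formula. The standard deviation plugged into the example (roughly what you get if the two models' per-question scores differ on about 10% of questions) is an assumed number, not a figure from the paper:

```python
import numpy as np
from scipy.stats import norm

def questions_needed(delta, sd_per_question, alpha=0.05, power=0.80):
    """Rough number of questions needed to detect a score gap `delta`.

    `sd_per_question` is the standard deviation of the per-question
    quantity being averaged (here, the per-question score difference
    between the two models). Standard normal-approximation formula.
    """
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return int(np.ceil(((z_alpha + z_power) * sd_per_question / delta) ** 2))

# Assumed: models disagree on ~10% of questions, so the per-question
# score difference has sd ≈ sqrt(0.10) ≈ 0.32.
print(questions_needed(delta=0.03, sd_per_question=0.32))  # ~900 questions
```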
Paired Analysis focuses on the difference in performance between two models on a question-by-question basis rather than just comparing their final aggregate scores. Because models often agree on which questions are easy or difficult, looking at the "paired difference" cancels out the noise caused by question difficulty. This approach provides a "free" boost in precision, allowing researchers to identify statistically significant leads even when the overall scores are close.
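A minimal sketch of the paired calculation, using simulated scores that share a common difficulty signal so the variance reduction is visible (all the numbers here are illustrative assumptions):

```python
import numpy as np

def paired_comparison(scores_a, scores_b):
    """Compare two models on the same questions via paired differences.

    `scores_a` and `scores_b` are per-question scores (e.g. 0/1) for the
    two models on an identical question set. Differencing question by
    question cancels the noise they share from question difficulty.
    """
    diff = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    mean_diff = diff.mean()
    se_diff = diff.std(ddof=1) / np.sqrt(len(diff))
    return mean_diff, se_diff

# Example: two models that agree on most questions.
rng = np.random.default_rng(2)
n = 1000
base = rng.binomial(1, 0.7, size=n)        # shared easy-vs-hard signal
flips_a = rng.binomial(1, 0.05, size=n)    # model-specific errors
flips_b = rng.binomial(1, 0.08, size=n)
scores_a = np.abs(base - flips_a)
scores_b = np.abs(base - flips_b)

mean_diff, se_paired = paired_comparison(scores_a, scores_b)
se_unpaired = np.sqrt(scores_a.var(ddof=1) / n + scores_b.var(ddof=1) / n)
print(f"gap = {mean_diff:.3f}, paired SE = {se_paired:.4f}, "
      f"unpaired SE = {se_unpaired:.4f}")
```

Because the shared difficulty term drops out of the per-question differences, the paired standard error is roughly half the unpaired one here, which is exactly the "free" precision boost described above.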
