Leaderboard rankings often mistake noise for progress. Learn how to use statistical tools to find real signals and build more reliable model benchmarks.

Science isn't about being 100% sure; it's about knowing exactly how "not sure" you are. When we acknowledge the error bars, we're actually being more rigorous, not less.
https://cameronrwolfe.substack.com/p/stats-llm-evals


Relying on raw scores often leads to the "highest number is best" fallacy, where tiny performance gaps are mistaken for actual progress. Research indicates that many of these decimal-point differences are simply statistical noise rather than true improvements in model capability. Without calculating statistical significance or using error bars, it is impossible to know if a model's higher score is a repeatable result or just a random fluctuation based on a specific sample of questions.
Standard deviation measures the diversity or "spread" of individual data points, showing how much the scores for different questions vary from one another. In contrast, standard error measures the precision of the average score. It tells you how much the mean performance would likely vary if you ran the same evaluation multiple times with different sets of questions. A small standard error indicates that the calculated accuracy is a reliable estimate of the model's true performance.
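The distinction can be made concrete with a few lines of code. A minimal sketch with made-up binary grades (illustrative numbers, not from the post): the standard deviation describes the spread of individual pass/fail grades, while the standard error divides by √n to describe how precisely the mean accuracy is pinned down, which yields a rough 95% confidence interval via the normal approximation.

```python
import math

# Hypothetical binary grades (1 = correct, 0 = incorrect) for one model
# on ten evaluation questions -- illustrative data only.
scores = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

n = len(scores)
mean = sum(scores) / n                                # accuracy = 0.7
var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
sd = math.sqrt(var)        # spread of the individual grades
se = sd / math.sqrt(n)     # precision of the mean itself

# Rough 95% confidence interval from the normal approximation (CLT).
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(f"accuracy={mean:.2f}  sd={sd:.2f}  se={se:.2f}  "
      f"95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

Note that the standard error shrinks as the benchmark grows: quadrupling the number of questions halves the error bar, which is why large benchmarks can resolve smaller true differences.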
Paired difference analysis is a statistical "cheat code" that compares two models on the exact same set of prompts. Because models often find the same questions difficult or easy, their scores are highly correlated. By focusing on the difference in performance for each specific question rather than comparing two independent averages, the shared noise caused by question difficulty cancels out. This shrinks the standard error and allows researchers to detect significant improvements that might be hidden by the overlapping error bars of independent tests.
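A small sketch of the idea, again with hypothetical grades: computing the standard error of the per-question differences (paired) versus combining the two models' independent standard errors. Because the two models disagree on only one question, almost all of the per-question variation cancels in the paired version.

```python
import math

# Hypothetical per-question grades for two models on the SAME ten prompts.
model_a = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
model_b = [1, 0, 1, 1, 1, 1, 1, 1, 0, 1]  # differs on a single question

n = len(model_a)
diffs = [b - a for a, b in zip(model_a, model_b)]

mean_diff = sum(diffs) / n
var_diff = sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)
se_paired = math.sqrt(var_diff / n)  # shared question difficulty cancels

def se_of(xs):
    """Standard error of the mean for one model's grades."""
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1) / n)

# Naive alternative: treat the two runs as independent samples.
se_naive = math.sqrt(se_of(model_a) ** 2 + se_of(model_b) ** 2)
print(f"mean diff={mean_diff:.2f}  paired SE={se_paired:.3f}  "
      f"independent SE={se_naive:.3f}")
```

In this toy example the paired standard error is roughly half the independent one, so a gap that looks like overlapping error bars under independent analysis can be clearly significant under the paired analysis.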
When a dataset has fewer than a few hundred samples, confidence intervals built on the Central Limit Theorem (CLT) may provide a false sense of security: the normal approximation breaks down, producing intervals that are too narrow. Small datasets are also prone to the "small data trap," where a model getting a perfect score (100% or 0%) makes the estimated variance exactly zero, incorrectly suggesting there is no uncertainty at all. For these smaller, specialized benchmarks, experts recommend using Bayesian methods or increasing the number of samples to ensure the results are robust.
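One standard Bayesian fix is a Beta-Binomial model. A minimal sketch (the uniform Beta(1, 1) prior is my choice for illustration, not prescribed by the post): with 5 correct answers out of 5, the plug-in variance formula claims zero uncertainty, while the Beta posterior still reports a sensible error bar.

```python
import math

# Hypothetical tiny benchmark: 5 questions, all answered correctly.
n, correct = 5, 5

# Plug-in (CLT-style) standard error: p*(1-p)/n is zero at p=1.0,
# falsely claiming perfect certainty -- the "small data trap".
p_hat = correct / n
plugin_se = math.sqrt(p_hat * (1 - p_hat) / n)

# Bayesian alternative: a uniform Beta(1, 1) prior gives the posterior
# Beta(correct + 1, n - correct + 1) over the true accuracy.
a, b = correct + 1, n - correct + 1
post_mean = a / (a + b)
post_sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
print(f"plug-in SE={plugin_se:.2f}  "
      f"posterior accuracy={post_mean:.2f} +/- {post_sd:.2f}")
```

The posterior mean (about 0.86 here) is also pulled slightly away from the extreme 100%, reflecting that five questions cannot establish a perfect model.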
Most evaluations use binary "pass or fail" scoring, which discards the nuance of how confident a model was in its answer. By instead scoring each question with the model's next-token probability of the correct answer (an "expected score"), you can distinguish between a lucky guess and a confident, correct answer. This approach removes "within-question variability," leading to a much higher Signal-to-Noise Ratio (SNR). This makes performance metrics more stable and allows engineers to track progress more steadily during model training.
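The difference is easy to see on toy numbers (hypothetical probabilities, chosen for illustration): two near-coin-flip answers round up to the same binary "pass" as a confident one, while the expected score preserves the distinction.

```python
# Hypothetical probability the model assigns to the correct answer on
# four questions, versus the binary grade from thresholding at 0.5.
p_correct = [0.95, 0.55, 0.51, 0.05]
binary = [1 if p > 0.5 else 0 for p in p_correct]

binary_acc = sum(binary) / len(binary)       # 0.75: coin flips count as wins
expected = sum(p_correct) / len(p_correct)   # 0.515: reflects true confidence

print(f"binary accuracy={binary_acc:.3f}  expected score={expected:.3f}")
```

Because the expected score is a continuous quantity rather than a 0/1 sample from each question, it fluctuates far less between evaluation runs, which is the source of the SNR improvement described above.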
