Are top AI models actually smarter, or just lucky? Learn why benchmark margins of error are often understated and how to measure true model skill.

An empirical science is only as good as its measuring tools. We need to move away from 'vibe-based' engineering and toward actual, rigorous science by acknowledging the noise and uncertainty in AI benchmarks.
The standard error of the mean (SEM) is critical because it measures how much a model's score might fluctuate based on the specific questions chosen for a test, what researchers call the "luck of the draw." Without a reported SEM or confidence interval, a raw score like 75% is just a single data point that ignores statistical noise. By calculating the SEM, researchers can determine whether a performance gap between two models reflects genuinely superior skill or is simply a result of overlapping margins of error.
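As a minimal sketch of the idea, the SEM for per-question pass/fail scores can be computed from the sample variance, and a 95% confidence interval follows directly. The 750-out-of-1000 example below is hypothetical, chosen only to match the 75% score mentioned above.

```python
import math

def sem_and_ci(scores, z=1.96):
    """SEM and 95% confidence interval for per-question 0/1 scores."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample variance (n - 1 in the denominator)
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    sem = math.sqrt(var / n)
    return mean, sem, (mean - z * sem, mean + z * sem)

# Hypothetical benchmark: 750 correct answers out of 1000 questions
mean, sem, ci = sem_and_ci([1] * 750 + [0] * 250)
```

For this example the SEM comes out to about 1.4 points, so the honest report is roughly "75% ± 2.7" at 95% confidence, not a bare 75%.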
Clustering is a statistical technique used when questions in a benchmark are related to the same source material, such as a long passage or a specific legal case. If a model fails to understand a central theme in a passage, it will likely miss all five or six questions associated with it, meaning those questions are not independent trials. Anthropic’s research found that failing to account for these clusters can make the margin of error appear three times smaller than it actually is, leading to false conclusions about a model's reliability.
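A cluster-robust standard error can capture this: instead of treating every question as independent, residuals are summed within each cluster (e.g., all questions tied to one passage) before being squared. The sketch below is a simplified version of this estimator, assuming each question carries a cluster label.

```python
import math
from collections import defaultdict

def clustered_sem(scores, cluster_ids):
    """Cluster-robust standard error of the mean.

    scores: per-question 0/1 results.
    cluster_ids: which passage/source each question belongs to;
    questions sharing a cluster may be correlated.
    """
    n = len(scores)
    mean = sum(scores) / n
    totals = defaultdict(float)
    for s, c in zip(scores, cluster_ids):
        totals[c] += s - mean  # sum residuals within each cluster
    # Squaring the cluster totals preserves within-cluster correlation
    var = sum(t ** 2 for t in totals.values()) / n ** 2
    return math.sqrt(var)
```

On strongly clustered data (e.g., a model acing some passages and failing others wholesale), this estimate can be several times larger than the naive SEM, which is exactly the understatement the research warns about.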
For multiple-choice questions, researchers can eliminate the randomness of a model "rolling the dice" on a single answer by looking at its internal probability distribution. Instead of forcing the model to output a specific letter and grading it as a pass or fail, researchers can record the model's internal confidence level—such as an 85% probability for the correct answer—as the score. This method, known as using log probabilities, provides a much more stable and precise measurement of the model's underlying knowledge without requiring multiple expensive test runs.
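A sketch of the scoring step, assuming the API exposes a log-probability for each answer option (as several model APIs do): normalize the options' log-probabilities with a softmax and record the mass on the correct letter as the item's score.

```python
import math

def logprob_score(option_logprobs, correct):
    """Score a multiple-choice item as the probability the model
    assigns to the correct option, instead of a 0/1 sampled answer.

    option_logprobs: dict mapping option letter -> log-probability
    (assumed available from the model API); correct: the right letter.
    """
    # Softmax over the options' log-probabilities
    m = max(option_logprobs.values())
    exp = {k: math.exp(v - m) for k, v in option_logprobs.items()}
    total = sum(exp.values())
    return exp[correct] / total
```

A model that puts 85% of its mass on the right answer scores 0.85 on every run, whereas sampled answers would flip between 1 and 0 across runs and inflate the variance.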
A paired-differences test compares two models by looking at the specific difference in their scores on every individual question, rather than just comparing their final averages. Because top-tier models often struggle with the same difficult or poorly phrased questions, looking at the difference allows the "noise" of question difficulty to cancel out. This technique focuses purely on the variance in how the models respond to the same stimuli, making the "signal" of which model is truly better much clearer and more scientifically robust.
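The computation itself is simple, as this sketch shows: take the per-question difference between the two models, then report the mean difference with its own standard error. Shared question difficulty drops out of the differences.

```python
import math

def paired_difference(scores_a, scores_b):
    """Paired-differences comparison of two models on the same questions.

    Returns the mean per-question score difference and its standard
    error; difficulty common to both models cancels in the differences.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, math.sqrt(var / n)
```

If the mean difference is more than about two standard errors from zero, the gap between the models is unlikely to be noise; in practice this is the paired t-test that libraries like SciPy provide as `scipy.stats.ttest_rel`.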
Power analysis is a mathematical tool used to determine the minimum number of questions required in a benchmark to detect a specific difference in model performance. It helps researchers avoid "false negatives," where a model might actually be better than a competitor, but the test is too small to prove it statistically. By performing a power analysis beforehand, developers can ensure their experiments are "powered" enough to find the truth, saving time and resources that might otherwise be wasted on inconclusive evaluations.
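A back-of-the-envelope sketch of the standard sample-size formula: the number of questions grows with the square of the noise-to-effect ratio. The z-value defaults correspond to a two-sided 5% significance test at 80% power; the standard deviation of the per-question differences is an input you must estimate (e.g., from a pilot run).

```python
import math

def questions_needed(effect, sd_diff, alpha_z=1.96, power_z=0.84):
    """Minimum number of questions to detect a given accuracy gap.

    effect: the true per-question gap to detect (0.03 = 3 points).
    sd_diff: std. dev. of per-question score differences (estimated).
    Defaults give a two-sided 5% test at 80% power.
    """
    n = ((alpha_z + power_z) * sd_diff / effect) ** 2
    return math.ceil(n)
```

For example, detecting a 3-point gap with a per-question difference spread of 0.5 requires on the order of 2,200 questions, far more than many popular benchmarks contain, which is why small evals so often return inconclusive results.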
