Insights

Thinking out loud.

Every sports analytics tool you've ever used gives you a number.

Team A wins 64% of the time. Player B scores 22.4 points per game. The spread is -3.5.

These numbers feel precise. They're not. They're averages — and averages hide the thing that actually determines outcomes in sports: variance.

A team that wins 64% of the time still loses 36% of the time. The question isn't whether they win — it's under what conditions they lose, and whether those conditions are present this week.

This is the core problem with single-point prediction models: they tell you the center of a distribution and throw away the rest. In a domain where the tails matter enormously — where a single injury, a weather shift, or a scheme adjustment can swing a game — discarding variance isn't just imprecise. It's dangerous.

At VAR, we don't produce single numbers. We produce distributions. Our Monte Carlo simulation engine runs thousands of game scenarios before a single whistle blows, mapping the full range of probable outcomes and the conditions that drive each one.
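
Here's the shape of that idea in a few lines of Python. A minimal sketch, assuming a normal model of final-score margin with a roughly 13-point standard deviation (a common rule of thumb for NFL margins); the simulate_margins helper is illustrative, not our production engine:

```python
import random

def simulate_margins(mean_margin, sigma, n_sims=10_000, seed=7):
    """Draw n_sims final-score margins (home minus away) from a
    normal model. Real engines simulate play by play; this is the
    distributional idea in miniature."""
    rng = random.Random(seed)
    return [rng.gauss(mean_margin, sigma) for _ in range(n_sims)]

# Hypothetical matchup: home team expected to win by 3 points.
margins = simulate_margins(mean_margin=3.0, sigma=13.0)

win_prob = sum(m > 0 for m in margins) / len(margins)
big_loss = sum(m <= -15 for m in margins) / len(margins)

print(f"win probability: {win_prob:.1%}")  # the single number
print(f"loses by 15+: {big_loss:.1%}")     # what that number hides
```

The win probability and the tail risk fall out of the same run. A single-point model keeps the first and discards the second.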

The output looks different from what you're used to. But it's closer to what's actually true.

Prediction isn't about the most likely outcome. It's about understanding the entire landscape of what can happen — and making decisions accordingly.

When we tell people our NFL simulator hit 65% against the spread, we get one of two reactions.

The first is skepticism. That number sounds high. And honestly, that's the right instinct: most published model performance figures are cherry-picked, reported from backtests rather than live picks, or otherwise massaged into something more flattering than reality.

The second reaction is excitement. And that's where we have to pump the brakes.

65% ATS is a meaningful number. Break-even on a standard -110 line is 52.4%. Sustained performance above 55% is considered sharp. 65% — applied out-of-sample, on live predictions, evaluated against closing lines — represents a genuine edge.
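
That break-even figure is just arithmetic on the odds. Assuming a standard -110 line, where you risk 110 units to win 100:

```python
# Break-even win rate p solves: p * 100 = (1 - p) * 110.
risk, win = 110, 100
breakeven = risk / (risk + win)
print(f"{breakeven:.1%}")  # 52.4%
```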

But here's what 65% doesn't mean: it doesn't mean we win every week. It doesn't mean the edge is uniform across all game types, spreads, or market conditions. And it absolutely doesn't mean the number holds forever without continued model development and calibration.

We should also note: this number was revised downward after we discovered and corrected a model bug during the 2025–2026 season. Our original reported figure was higher. We corrected it publicly because that's what serious analytics operations do — and because transparency is a better foundation for a client relationship than inflated numbers.

What 65% means is that our simulator, as currently constructed, identifies situations where the market's implied probability meaningfully underestimates one team's chances — often enough to produce a statistically significant positive expectation over a full season sample.
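
To put that expectation in units: assuming flat -110 pricing throughout (real lines vary), a 65% hit rate implies the following per bet:

```python
# Win 100 units 65% of the time; lose the 110 risked the other 35%.
p = 0.65
ev = p * 100 - (1 - p) * 110        # 26.5 units per 110 risked
print(f"EV: {ev:+.1f} units per bet ({ev / 110:.1%} ROI)")
```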

We track this number obsessively. We evaluate it against the closing line — the sharpest available signal — rather than the opening line, because that's the honest test. And we publish it here because we believe the clients worth working with are the ones who know what these numbers actually mean.
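
Grading against the closing line is simple to state precisely. A minimal sketch, assuming each pick is recorded as (side, closing home spread, final home margin); the grade_ats helper and record layout are illustrative:

```python
def grade_ats(picks):
    """Return the ATS win rate of a list of picks, graded against
    the closing spread. Pushes are excluded from the denominator."""
    wins = losses = 0
    for side, spread, margin in picks:
        cover = margin + spread       # > 0: home covers, < 0: away covers
        if cover == 0:
            continue                  # push: no bet graded
        home_covered = cover > 0
        if (side == "home") == home_covered:
            wins += 1
        else:
            losses += 1
    return wins / (wins + losses)

# Hypothetical graded sample: two covers, one miss -> 66.7%.
print(grade_ats([("home", -3.5, 7), ("away", -6.0, 3), ("home", -2.5, -10)]))
```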

Model performance is a conversation, not a scoreboard. We're happy to have that conversation.

Sports media runs on experts.

Former players, analysts, and journalists with decades of game experience break down film, assess matchups, and deliver picks with authority. And against the spread, they're wrong about as often as they're right.

This isn't a knock on expertise. Film study and pattern recognition are real and valuable. The problem is that human experts — no matter how knowledgeable — systematically struggle with two things that determine outcomes in sports: quantifying uncertainty and processing large variable sets simultaneously.

When an analyst says "I like the Chiefs this week," they're implicitly running a mental simulation. They're weighing factors, drawing on memory, and arriving at a probability judgment. The issue is that the human brain isn't built to hold 40 variables in working memory, weight them probabilistically, and output a calibrated win probability.

Computers are.

VAR's simulation engine doesn't replace domain expertise — it systematizes it. Our models are built on the same factors experienced analysts consider: personnel, scheme, efficiency, matchup, context. But instead of arriving at a gut conclusion, we run those factors through thousands of simulated game environments and let the distribution speak.
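
A toy version of that systematization, with made-up factor names and weights standing in for our actual feature set:

```python
import random

rng = random.Random(11)

# Each factor nudges the expected margin, in points. Illustrative only.
factors = {
    "efficiency_gap": 2.1,    # season-to-date efficiency differential
    "qb_status": -1.5,        # starting quarterback questionable
    "rest_advantage": 0.8,    # extra rest days vs. opponent
    "scheme_matchup": 0.4,    # pass rush vs. protection grade
}
expected_margin = sum(factors.values())

# Simulate thousands of games around that expectation rather than
# reporting the point estimate itself.
margins = [rng.gauss(expected_margin, 13.0) for _ in range(10_000)]
cover = sum(m > 1.5 for m in margins) / len(margins)

print(f"expected margin: {expected_margin:+.1f} points")
print(f"probability of covering -1.5: {cover:.1%}")
```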

The result is a prediction that's transparent, reproducible, and continuously improvable in a way that human intuition simply isn't.

Expert picks will always have an audience. We built something for the people who want to know why.