The scoreboard doesn't lie.
We track every prediction we make. No cherry-picking. No revisionist history. Just a running record of how our models perform against the market — updated continuously.
Predictions are evaluated against the closing line at the primary sharp market. All numbers are walk-forward out-of-sample, with no in-sample contamination. Vegas break-even baseline: ~52.4%.
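For context, that ~52.4% baseline is simply the break-even win rate at standard -110 pricing. A quick check of the arithmetic (a simplified illustration, not our pricing model):

```python
# Break-even win rate at standard -110 American odds:
# you risk 110 to win 100, so expected profit is zero when
# p * 100 = (1 - p) * 110  =>  p = 110 / 210.
risk, to_win = 110, 100
break_even = risk / (risk + to_win)
print(f"{break_even:.1%}")  # 52.4%
```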
1,283 games simulated.
Michigan over UConn (69-63) — simulator had Michigan -2.9
Season accuracy: 853/1,283 straight up, 469/704 ATS, 348/704 O/U. NCAA Tournament: 52/66 correct predictions across all rounds.
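For readers who prefer rates to raw counts, here is the same record as percentages. This is straight arithmetic on the counts above, nothing more:

```python
# Hit rates computed from the raw season counts above.
records = {
    "Straight up":     (853, 1283),
    "ATS":             (469, 704),
    "O/U":             (348, 704),
    "NCAA Tournament": (52, 66),
}
for label, (hits, total) in records.items():
    print(f"{label}: {hits}/{total} = {hits / total:.1%}")
# Straight up: 853/1283 = 66.5%
# ATS: 469/704 = 66.6%
# O/U: 348/704 = 49.4%
# NCAA Tournament: 52/66 = 78.8%
```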
Where the edge showed up.
Maycee Barber -183 vs Alexa Grasso (Mar 28) — HIGH confidence pick that lost by KO R1.
That loss drove post-mortem fixes: a KO-rate floor, quality-of-loss weighting, and skill-usage gap modeling.
1,176 games. 28.9M simulations.
During the 2025–2026 season, we discovered and corrected a model bug that had inflated our ATS accuracy. The corrected figure — 65% — reflects our actual, validated performance.
We publish this correction because it's the right thing to do, and because it's the kind of transparency that separates serious analytics operations from black-box pick services. The model is actively improving through continued experimentation.
All performance metrics are evaluated against closing lines — the sharpest available market signal — rather than opening lines. This is the honest test.
Closing Line Value (CLV) is the gold standard for measuring predictive edge because it captures the final consensus of all market information. A model that consistently beats the closing line is generating real alpha, not exploiting stale openers.
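To make that concrete, here is a minimal sketch of how CLV can be computed from American odds. The function names are illustrative, and it uses raw implied probabilities (vig included) rather than any production de-vigging:

```python
def american_to_prob(odds: int) -> float:
    """Implied probability of an American price (vig included)."""
    if odds < 0:
        return -odds / (-odds + 100)
    return 100 / (odds + 100)

def clv(bet_odds: int, closing_odds: int) -> float:
    """Closing Line Value: how far the market moved toward your
    side between bet and close. Positive means you beat the close."""
    return american_to_prob(closing_odds) - american_to_prob(bet_odds)

# Example: bet a side at -110 that later closes at -125.
print(f"{clv(-110, -125):+.1%}")  # +3.2%
```

A model whose picks average positive CLV is consistently getting better prices than the market's final consensus, which is exactly the edge described above.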
Why we publish this.
Most analytics companies don't show you their track record. We do, because edge is verifiable — and because the clients we want to work with know the difference between a real model and a marketing claim.