Scheme Fit
An evaluation framework that estimates how a player's tendencies, attributes, and developmental trajectory map onto a specific coaching staff's scheme, role, and rotation. Distinct from generic player rating because the same player has different expected production under different schemes.
Scheme fit is the family of front-office analyses that ask how a particular player will perform inside a particular system, not how they would perform on average across the league. The framework combines play-primitive vocabulary (the discrete unit-level concepts a sport's offense and defense are built from), coaching-staff fingerprints (how a staff has used similar roles historically), and player profiles to produce expected-performance distributions conditioned on the destination context. Scheme fit is the first capability VAR's front-office product line ships, because it is the analysis that operationalizes the difference between a generically good player and a player who fits.
- Generic player ratings collapse the most valuable variance. The same player produces different expected outcomes under different schemes, and the variance between scheme assignments is often larger than the variance between players at a single position.
- Draft and free-agency decisions are scheme-fit decisions in disguise. Front offices already make these calls; the choice is whether the framework is explicit and auditable or implicit and unreproducible.
- Scheme fit is built from public play-by-play data, not licensed charting feeds. Independent platforms can ship credible scheme-fit analysis at moderate-to-high rigor depending on the league.
- VAR's scheme-fit anchor for 2026 is a retrospective on the 2026 WNBA draft class measured against their actual rookie-year performance. The exercise is backward-looking and externally checkable: how the engine would have rated the draftees, compared to what happened.
The framework proceeds in four steps:
- Build a play-primitive vocabulary for the sport (the discrete unit-level concepts that offenses and defenses are constructed from), and tag every play in a multi-season public play-by-play corpus with the relevant primitives.
- For each coaching staff in the league, compute a usage fingerprint: how the staff has historically deployed each primitive, conditional on role and game state.
- For each player, compute a profile: their observed proficiency at each primitive, with credible intervals reflecting sample size.
- Cross the staff fingerprint with the player profile to produce a destination-conditioned expected-performance distribution.
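The fingerprint-times-profile cross can be sketched as a small Monte Carlo computation. This is a minimal illustration, not the production engine: the primitive names, counts, and usage shares below are invented, and proficiency is modeled with a simple Beta(1,1)-prior posterior so that thin samples widen the resulting interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def proficiency_posterior(successes, attempts, draws=10_000):
    """Beta(1,1)-prior posterior over a player's success rate at one
    primitive; the spread of the draws reflects sample size."""
    return rng.beta(1 + successes, 1 + attempts - successes, size=draws)

def destination_distribution(profile, fingerprint, draws=10_000):
    """Cross a player profile (successes, attempts per primitive) with a
    staff usage fingerprint (primitive -> share of plays) to get a Monte
    Carlo distribution of expected per-play production."""
    total = np.zeros(draws)
    for prim, usage in fingerprint.items():
        s, n = profile[prim]
        total += usage * proficiency_posterior(s, n, draws)
    return total

# Illustrative numbers only, not real data.
profile = {"pick_and_roll": (55, 100), "spot_up": (40, 120), "transition": (30, 50)}
fingerprint = {"pick_and_roll": 0.5, "spot_up": 0.3, "transition": 0.2}

dist = destination_distribution(profile, fingerprint)
lo, mid, hi = np.percentile(dist, [10, 50, 90])
print(f"expected per-play success: {mid:.3f} (80% interval {lo:.3f}-{hi:.3f})")
```

The key design point is that the output stays a distribution: the interval endpoints travel with the estimate instead of being averaged away.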
For a 2026 WNBA draft prospect: build her play-primitive proficiency profile from her college career. For each WNBA team's offensive coordinator, compute the historical primitive-usage fingerprint. Cross the prospect's profile against each team's fingerprint to produce a distribution of expected first-year role production under each destination. The 'best fit' is the destination where the prospect's strongest primitives align with the staff's highest-usage roles, weighted by sample-size uncertainty.
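Ranking destinations under that definition might look like the following self-contained sketch. Team names, fingerprints, and the prospect's college counts are hypothetical; the uncertainty weighting here is one plausible choice (rank by the 10th-percentile outcome, so thin college samples drag a destination down), not VAR's stated rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical college counts (successes, attempts) per primitive.
prospect = {"pick_and_roll": (60, 110), "spot_up": (50, 90), "transition": (25, 40)}

# Hypothetical staff fingerprints: primitive usage shares per team.
fingerprints = {
    "Team A": {"pick_and_roll": 0.60, "spot_up": 0.25, "transition": 0.15},
    "Team B": {"pick_and_roll": 0.20, "spot_up": 0.55, "transition": 0.25},
    "Team C": {"pick_and_roll": 0.35, "spot_up": 0.35, "transition": 0.30},
}

def fit_distribution(profile, usage, draws=20_000):
    # Monte Carlo over Beta posteriors, weighted by staff usage shares.
    total = np.zeros(draws)
    for prim, w in usage.items():
        s, n = profile[prim]
        total += w * rng.beta(1 + s, 1 + n - s, size=draws)
    return total

# 'Best fit' = highest 10th-percentile outcome: rewards alignment with
# the prospect's strong primitives, penalizes thin samples.
ranked = sorted(
    ((np.percentile(fit_distribution(prospect, u), 10), team)
     for team, u in fingerprints.items()),
    reverse=True,
)
for floor, team in ranked:
    print(f"{team}: 10th-percentile expected per-play success {floor:.3f}")
```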
- Treating scheme fit as a single number. The output is a distribution conditioned on destination; collapsing it to a 'fit score' destroys the variance that's the whole point of the analysis.
- Using composite stats (PER, EPV, RAPTOR) as inputs to scheme fit. Composites are already a collapse of the underlying play-primitive proficiencies; feeding them to a scheme model loses the resolution scheme fit needs.
- Ignoring the rigor ceiling at lower-data-density levels. Scheme fit at the NBA level is supported by dense play-by-play; at College Football or Women's College Basketball, the rigor ceiling is lower and the methodology-transparent disclosure has to be explicit about that.
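The first pitfall is easy to demonstrate numerically. In the toy example below (synthetic normal distributions, invented role names), two destinations produce the same mean, so any scalar 'fit score' built on the mean would call them identical; the quantile summary shows they are not.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical destinations with the same mean expected production
# but very different spreads; illustrative distributions only.
destinations = {
    "stable_role": rng.normal(0.48, 0.02, 50_000),
    "volatile_role": rng.normal(0.48, 0.10, 50_000),
}

for name, dist in destinations.items():
    q10, q50, q90 = np.percentile(dist, [10, 50, 90])
    print(f"{name}: mean {dist.mean():.3f}, "
          f"10/50/90 pct {q10:.3f}/{q50:.3f}/{q90:.3f}")
# A single 'fit score' (the mean) is identical for both destinations;
# the quantile summary preserves the variance the analysis exists to surface.
```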
How is scheme fit different from a generic player rating?
A generic rating produces a single number averaged across all schemes; scheme fit produces a destination-conditioned distribution. The same player can have a top-decile fit in one offense and a median fit in another. The variance between destinations is the part the rating throws away.
Does scheme fit require licensed charting data?
No. Scheme fit is built from public play-by-play data. Licensed charting (PFF, Synergy, Second Spectrum) adds resolution at the highest-rigor leagues, but the framework operates on public data by design. The methodology-transparent posture is that VAR ships scheme fit at the rigor public data supports, with explicit disclosure of the per-league rigor ceiling.
What's the methodology-transparent rigor ranking across leagues?
NBA > NFL > WNBA > CBB > CFB > Women's College Basketball. Public-data density and aggregate-check sources are dense at NBA and NFL, thinner at WNBA and CBB, sparsest at CFB and women's college. The framework applies uniformly; the resulting quantitative claims get tighter at higher-rigor levels.