Lineup-Adjusted Efficiency, Explained Without the PhD
Raw offensive and defensive ratings lie to you: they credit or punish players for who they happened to play with. Lineup-adjusted efficiency is the correction, done right, that turns possession-level data into roster-construction signal.
The problem in one example
Take two guards, identical box scores: 18 points, 5 assists, 2 turnovers per game on 58% true shooting. Guard A played 600 possessions next to an elite screen-setting big. Guard B played 600 possessions next to a stretch four with no screen game. Their raw offensive ratings will look similar. Their contribution to winning will not. Every possession-level metric you will ever use has some version of this confound baked in.
Why the naive fix fails
The first instinct is on/off splits. How does the team perform with player X on the floor versus off? The math is correct. The sample size is fatal. On/off numbers stabilize at roughly 3,000 possessions. Most playoff rotations do not hit that. Most lineup combinations never hit it. Raw on/off rewards players who happen to share the floor with good teammates and punishes players who do not, just like raw ratings do. It moves the confound one step further down the pipeline without solving it.
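To make the failure concrete, here is a minimal sketch of a raw on/off split. The possession-log shape is hypothetical (a `players_on` set and per-possession scoring columns); the point is that the computation itself is trivial, which is exactly why the sample-size problem, not the math, is what kills it.

```python
import pandas as pd

def on_off_net(possessions: pd.DataFrame, player: str) -> dict:
    """Net points per 100 possessions with the player on vs. off the floor.

    Assumes hypothetical columns: "players_on" (a set of player ids),
    "team_points" and "opp_points" (points scored on that possession).
    """
    on_floor = possessions["players_on"].apply(lambda s: player in s)
    on = possessions[on_floor]
    off = possessions[~on_floor]

    def net100(df: pd.DataFrame) -> float:
        return 100.0 * (df["team_points"].mean() - df["opp_points"].mean())

    return {"on": net100(on), "off": net100(off), "on_possessions": len(on)}
```

Nothing in this function knows who else was on the floor, which is the whole problem: a player whose minutes overlap with good teammates inherits their net rating.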
What actually works
Regularized adjusted plus-minus (RAPM) and its descendants (LEBRON, EPM, DARKO) treat the problem the right way: as a regression with ten simultaneous inputs per possession, solved across the whole league, with a regularization term that prevents small-sample players from distorting the fit. The output is an estimate of each player’s contribution to possession outcomes, holding constant who else was on the floor.
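The regression described above can be sketched in a few lines. In the standard public formulation, each possession becomes one row of a design matrix with +1 for the five offensive players, -1 for the five defenders, and the points scored on that possession as the target; ridge regression supplies the regularization term. The matrix encoding, the `alpha` value, and the use of scikit-learn's `Ridge` are illustrative choices, not a claim about any specific vendor's implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_rapm(X: np.ndarray, y: np.ndarray, alpha: float = 2000.0) -> np.ndarray:
    """One coefficient per player: estimated points added per possession,
    holding the other nine players on the floor constant.

    X: (n_possessions, n_players), +1 offense on floor, -1 defense, 0 off.
    y: points scored on each possession.
    alpha: ridge penalty; larger values shrink small-sample players harder.
    """
    model = Ridge(alpha=alpha, fit_intercept=True)
    model.fit(X, y)
    return model.coef_
```

The regularization term is what separates this from raw on/off: a player seen in only a few hundred possessions gets pulled toward zero instead of toward whatever lineup noise they happened to ride.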
This is not controversial. It is also not quite enough. A player’s contribution to offense in a half-court set is a different skill than their contribution in transition. A lineup-adjusted metric that pools all possession types throws away signal. Running the adjustment separately by possession type, then reweighting by the player’s expected usage mix, recovers that signal.
That is the recalculation we run at VAR.
The decomposition in practice
We split possessions into five types: half-court, early offense, transition, after-timeout, and end-of-quarter. Each gets its own adjusted efficiency estimate per player. We then reweight by lineup context: a player’s value in a slow, half-court-heavy playoff series is not the same as their value in a fast, transition-heavy regular-season game.
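The reweighting step reduces to a weighted average. The sketch below assumes you already have a per-type adjusted estimate for a player (from the regression run separately by possession type) and a possession-type mix for the target context; all the numbers and dictionary keys are illustrative.

```python
# The five possession types from the decomposition above.
POSSESSION_TYPES = ["half_court", "early_offense", "transition",
                    "after_timeout", "end_of_quarter"]

def context_value(per_type_estimate: dict, context_mix: dict) -> float:
    """Weighted average of per-type adjusted efficiency (points per 100),
    using the target context's possession-type shares (which sum to 1)."""
    assert abs(sum(context_mix.values()) - 1.0) < 1e-9
    return sum(per_type_estimate[t] * context_mix[t] for t in POSSESSION_TYPES)
```

Swapping a transition-heavy regular-season mix for a half-court-heavy playoff mix changes the same player's context value without touching the underlying estimates, which is exactly the playoff-versus-regular-season point above.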

The three annotated points in the chart are the case for the whole framework. One player sits well above the diagonal: raw numbers flatter him because he played next to high-usage creators who bailed out bad possessions. One sits well below: he ran minutes with bench lineups that collapsed around him, and his raw line absorbs that team failure. One sits on the diagonal: the raw number was telling the truth.
What the framework tells you, and what it does not
Lineup-adjusted efficiency is a signal about possession-level impact given current context. It is good for: rotation decisions inside a season, midseason trade value, free agent pricing at the margin. It is less good for: draft projection (no prior possessions), scheme-fit projection across trades (the context changes), and late-career cliff detection (regularization smooths recent drops).
The mistake we see franchises make is treating lineup-adjusted efficiency as a ranking. It is not a ranking. It is a correction. A ranking requires judgment on top of the correction: role, durability, fit, trajectory. The correction just stops the box score from lying to you.
What to ask your vendor
If your current analytics vendor sells “adjusted efficiency” and cannot tell you (a) what regularization method they use, (b) whether they decompose by possession type, and (c) how they handle garbage time, you are buying a version of this that will give you worse decisions than the public models.
If you want to see the math run against your roster, or the notebook behind the chart above, reply.