If the season ended today, there would be little to argue about regarding at-large bid recipients. The NPI's top 9 is in solid agreement with the AVCA, MORE, and the T100; Massey and Inside Hitter would make only one change, taking Loras over Messiah. It's like I wrote earlier in the year: "Almost any model should be able to identify the top 8%, even a poor one, which none of these is of course." (At the moment, it only needs to capture the best 7%, with Vassar in position as the last team in at #9.) The logjam begins with the next six teams, only two of which (Aurora and Wentworth) are well positioned to capture an auto-bid. The rest will be hoping for no conference-tournament upsets of the favorites, and for a little good fortune to go along with what they hope will be stellar play to close out their seasons.

I would urge anybody reading this post not to get too comfortable with where things stand at the moment. The real world has a way of rocking the boat, too often when by all accounts we don't expect it. I have always maintained that the NPI's at-large selections will produce a genuine head-scratcher only about once every five years, and there is a whole host of reasons, beyond the obvious, why this likely won't be that season. However, that doesn't mean it can do any more than what we'd expect of it at its bare minimum, either. It has a risk-reward problem even if it serves its at-large purpose in the early years. It is patently unfair to teams of the Midwest, though that unfairness may or may not rear its ugly head this season. And, like its predecessors, though not to the same degree, it weighs the accumulation of wins more heavily than any analytic would say it should.

Speaking of the accumulation of wins, check out the win matrices below, one per model, showing where each model's ranked teams have gathered theirs. If you squint along the left-hand margin of each, you can see the Midwest distribution in red slivers of cells. The NPI spreads them across the whole landscape from top to bottom; scanning each subsequent matrix moving downward, you can literally see those red cells getting squeezed higher and higher with each model.

When averaging the models that rank teams best across the entire landscape and comparing those averages to the ranks driven by the current NPI, some interesting patterns emerge. I chose a 7-spot threshold because it is equivalent to a 5% displacement of the NPI rank relative to the mean of the other three models, which is a significantly large disagreement. Many of the teams below are admittedly ranked in the bottom half of the landscape, where differences become amplified. However, I have shaded the ranks of teams in the landscape's top half as a way to focus attention on positions that may come into play for tournament seeding, or for teams whose wins against them will have a higher probability of counting in their opponents' NPI metrics.
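To make the threshold concrete, here is a minimal sketch of the comparison described above. The team names and ranks are invented for illustration, and the ~140-team landscape size is an assumption chosen so that 7 spots works out to roughly 5%:

```python
# Hypothetical example: flag teams whose NPI rank sits a threshold
# number of spots away from the mean rank of three other models.
# All team names and ranks below are made up for illustration.

LANDSCAPE_SIZE = 140                        # assumed size of the ranked landscape
THRESHOLD = round(0.05 * LANDSCAPE_SIZE)    # 7 spots ~ a 5% displacement

ranks = {
    # team: (npi_rank, [model_1_rank, model_2_rank, model_3_rank])
    "Team A": (12, [20, 22, 18]),
    "Team B": (35, [33, 36, 34]),
    "Team C": (58, [44, 47, 50]),
}

for team, (npi, others) in ranks.items():
    mean_other = sum(others) / len(others)
    displacement = npi - mean_other
    if abs(displacement) >= THRESHOLD:
        print(f"{team}: NPI {npi} vs model mean {mean_other:.1f} ({displacement:+.1f})")
```

With these invented numbers, Team A and Team C clear the 7-spot bar in opposite directions, while Team B's disagreement is too small to flag.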

Ranks are one way of focusing attention on differences. Perhaps an even better way is to look at the metrics themselves, the numbers from each model that produce those ranks. That is what I did below. This time I listed only teams from the top half of the landscape, to highlight those whose NPI metrics come in higher or lower than what the other three models suggest they deserve. Some would say a single point of difference is not very much, but I'd contend that, more often than not, the last team in will end up less than a point ahead of the first team out of at-large consideration.
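In the same spirit, here is a small sketch of the metric-level comparison, again with invented teams and metric values (assuming, for illustration, a roughly 0-100 metric scale):

```python
# Hypothetical metric values; everything here is invented for illustration.
metrics = {
    # team: (npi_metric, [model_1_metric, model_2_metric, model_3_metric])
    "Team X": (61.4, [59.8, 60.1, 60.3]),
    "Team Y": (54.2, [55.9, 56.4, 55.7]),
}

for team, (npi, others) in metrics.items():
    gap = npi - sum(others) / len(others)
    direction = "higher" if gap > 0 else "lower"
    print(f"{team}: NPI metric is {abs(gap):.2f} points {direction} than the model mean")
```

Gaps on the order of a point or two, like these, are exactly the margins that can separate the last team in from the first team out.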

Later this week, be on the lookout for the second installment about volleyball stats: those related not to individual touches, but to a team's ability to measure its functioning effectively and to set goals for its progress against any level of opponent.
