NPI Top Half – End of Regular Season

The NPI ranks have little to no value in the bottom half, because the metric's purpose is to rank the top 10 well for at-large bids and the top 50 reasonably well for seeding the tournament's 19 teams. Still, bottom-half NPI ratings can contribute in a small way to the teams listed below who defeated them, so long as the points derived are deemed helpful to their metric. That's a cringeworthy thought, made even more so when you stop to consider that no fewer than ten teams not listed below ought, according to any of the other three experts, to be in the top half; because they aren't, there is a compounding effect that injures the metric of the teams who play them.

It usually injures the Midwest teams most, because the median Midwest team plays volleyball at a level proven to be no worse than 50th, while the median team from elsewhere plays at a level commensurate with 70th. That means a Midwest team that's actually 50th in the landscape will, on average, play 0.500 ball against its neighbors, and a team from elsewhere that's actually 70th will, on average, play 0.500 ball against its neighbors, too. (That last statement is generally independent of the programs we know regularly seek out winnable opponents, or purposely schedule stronger ones because it aligns with their values of what's important.)

Every NCAA metric I have studied (the RPI, the KPI used in D1 volleyball this year, and now the NPI for D3MVB) has a tendency to overweight win rate, believing the accumulation of wins trumps where those wins are produced on a win matrix, thus discriminating against Midwest teams more than against the teams from elsewhere ranked adjacent to them. These metrics claim to properly build in strength of opponents, but if a model can't get its ranks calibrated by filtering out win rate to the degree it should, then its assessment of opponents' strength has little to no chance to right the wrong; it exacerbates those inconsistencies instead! You see it in the MORE every Tuesday, which frankly is better than any single individual ranking system out there! (Including my own T100…)

I know, I know: I have met some who like to wax poetic about having no interest in the strength of teams as a barometer, preferring instead the teams who "deserve the reward more." That's often athletic-admin jargon for preferring that teams with better win rates be recognized, even if they aren't stronger at playing the game. C'mon, man! The last at-large team this year will be better than the 10th best in the nation. I'm pretty sure it should be about strength at that point, especially when a model almost always suggests a team not from the Midwest deserves it more. Give me a break!

Some may be surprised I'm offering a table rather than a graph, which would more effectively paint a picture of the truth, but graph reading takes a skill that table reading doesn't. That makes tabular form more user friendly, and it's why so many volleyball statistics are housed in tables. If 500 people typically read a post at Frog-Jump, I suspect maybe 150 or so could properly interpret a funky graph I'd think was cool, and only half of those would invest the time to even try. I figure no fewer than 400 could more effectively check out the color-coded elements of the table below in less time, and hopefully three-quarters of you will. Last I checked, 75% of 400 is 300, four times as large as half of 150.

If what's seen below captures an audience four-fold, I can only hope it catches the eye of someone with influence regarding a patently obvious injustice: one that will burn a Midwest team out of an at-large bid at least once every two years, and produce a truly remarkable injustice independent of geography at least once every five years. That says nothing about how illogical the seeding might end up being year after year. And neither of these is the most egregious reason for not supporting the NPI method as presently constructed. For the most egregious reason, go to the NP-Eye post from four days ago; about halfway down is the section titled "The Expected NPI Point Values earned by Teams Ranked 5th Through 25th." Figure out a way to numerically incentivize teams to want to play others just a little better than themselves, like most would want to anyhow, and then, just maybe, I could accept what's seen below. Do that and a big part of these secondary problems will be rectified, too. The proverbial "… two birds with one stone!"

The mathematical concept used above:

Consider that the rating metrics from the T100, Inside Hitter, and Massey each have distributions that are mound-shaped and symmetric enough to be characterized by a normal distribution. That makes it easy to standardize all three experts' distributions and determine a Z score for every team. The NPI metrics also form a normal distribution, with a mean of 49.73 and a standard deviation of 8.40. All that needs to be done with the other three experts' Z scores is to multiply each one by 8.40 and add 49.73, i.e., SNEQ = 49.73 + 8.40 × Z. That puts their models in the "language" of the NPI, so readers can compare apples to apples. For example, a team sitting one standard deviation above its expert's mean lands at 49.73 + 8.40(1) = 58.13. That is what you see bordered in gold on the right margin of the table above. (Standardized NPI Equivalents, or SNEQ)
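For anyone who wants to reproduce the conversion, here is a minimal sketch of that standardization in Python. The team names and rating values are invented for illustration; only the NPI mean of 49.73 and standard deviation of 8.40 come from the paragraph above, and the sketch assumes a higher raw rating means a stronger team (a rank-based list would need the Z score's sign flipped).

```python
from statistics import mean, stdev

# NPI scale parameters quoted in the paragraph above
NPI_MEAN = 49.73
NPI_SD = 8.40

def to_sneq(ratings):
    """Convert one expert's raw ratings to Standardized NPI Equivalents:
    Z-score each team against that expert's own distribution, then
    rescale onto the NPI mean and standard deviation."""
    values = list(ratings.values())
    mu, sd = mean(values), stdev(values)
    return {team: round(NPI_MEAN + NPI_SD * (r - mu) / sd, 2)
            for team, r in ratings.items()}

# Hypothetical ratings, NOT real T100 / Inside Hitter / Massey numbers;
# a team exactly one SD above its expert's mean would land at 58.13.
t100_sample = {"Team A": 91.2, "Team B": 84.5, "Team C": 77.8, "Team D": 70.1}
print(to_sneq(t100_sample))
```

Running each expert's full rating list through the same function is, in essence, all the gold-bordered SNEQ column amounts to.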