If you are an “Ends Justify the Means” individual living in the moment, then you aren’t likely disappointed with the teams in the top 10 this week following the unveiling of the MVB NPI. The order might have been a little surprising given what we think we know, but it is early, with plenty of high-profile matches still to be played. I can’t and won’t argue that point. However, I thought now might be a good time to go back and consider some of what has been written in four previous posts:
January 5, 2025 “What Likely Happens when the NPI Collides with the Bubble?“
The more important question, though, is this: “Can it be accurate enough to identify the five most deserving teams to be placed on that bubble, and then use whatever precision it does possess to place them in an order satisfying the truth?” That order is less questionable on the bubble right now than it is among the teams ranked above it. Presently I’d say it looks pretty accurate but lacks some precision. Regardless, the metric is rounded to thousandths to prevent ties, which would have had roughly a 3% chance of occurring had it been rounded to hundredths, and because, for most readers, it adds the benefit of an illusion of precision.
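The rounding trade-off can be illustrated with a standard birthday-problem approximation. The team counts and bucket counts below are hypothetical, chosen only to show that coarser rounding raises the chance of a tie; I don’t know the actual spread of NPI values behind the ~3% figure.

```python
import math

def approx_tie_prob(n_teams, n_buckets):
    """Birthday-problem approximation: the chance that at least two of
    n_teams share the same rounded metric value, assuming the metrics
    spread roughly uniformly across n_buckets distinct rounded values."""
    return 1.0 - math.exp(-n_teams * (n_teams - 1) / (2.0 * n_buckets))

# Hypothetical example: 5 bubble teams whose metrics span ~10 NPI points.
# Rounding to hundredths gives ~1,000 possible values; thousandths ~10,000.
p_hundredths = approx_tie_prob(5, 10 * 100)
p_thousandths = approx_tie_prob(5, 10 * 1000)
```

In this toy setup, moving from hundredths to thousandths cuts the tie probability by roughly a factor of ten, which is the direction (if not the exact magnitude) of the 3% claim above.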
February 8, 2025 “NPI – The NCAA’s Next-Gen Performance Indicator”
It is an elegant piece of mathematics I have yet to see in action, … Read again to see how relieved I am to know the best teams will probably be awarded at-large bids, simply because the chances are so good this model can’t be that bad. That isn’t an endorsement, nor is it an indictment. It is saying the bar is set so low that this model for MVB shouldn’t fail! The at-large candidates right now look justified under the assumption that the Midwest conference favorites among the top 10 win their tournaments, or as long as 3rd-or-lower seeds from the UVC, CVC, or MAC do not win theirs. Otherwise, discernment may have been traded for expediency disguised as objectivity over subjectivity.
February 8, 2025 “NPI – The NCAA’s Next-Gen Performance Indicator”
All three expert models are very good! Will their truth confirm the NPI when it gets published? Probably. Might they show a chink in the NPI’s armor? Probably as well, though likely deeper than most will care to dig when they read it. The three expert models generally confirm the NPI candidates through the top 10. In the plot below I see the NPI systematically devaluing wins over the above-average Midwest programs, which might hurt the best Midwest teams’ ability to earn at-large bids down the road. The graph below also shows at least another dozen-plus teams who are over-valued by the NPI, visible as the dark red scattered data points shifted lower than any of the others. After determining whether these non-Midwest teams have traits in common, it becomes possible to identify others who consistently compete with and defeat them, and to compute the degree to which a ripple effect of NPI inflation takes place and for whom. However, right now the best Midwest teams average five fewer matches and four fewer wins than their Eastern counterparts who are ranked better. This has some potential to limit any Midwest-bias effect on its upper echelon in the next few weeks. It needs to be watched closely to see how the NPI responds as their win counts continue to increase beyond the 13-match minimum: the chance that some of the less desirable wins counting now will stop counting later is higher, whereas many in the East already have some less desirable wins not contributing to their metric.
February 13, 2025 “Concerns for NPI Vulnerability”
If win-rate propagates errors in the ratings of a large enough subset of teams throughout the middle of the landscape, then the teams who played them would propagate that error further via their own metrics, and then the ones who played them would do the same, again and again, until a compounding effect might create some unintended outcomes. An economist would call this a “multiplier effect,” though programs put in place by economists are usually intentional. Once the dozen or so teams most over-rated by the NPI in the middle of the landscape are determined, it becomes possible to see whether there are enough of their opponents in the network who can both benefit from and spread the compounding error. At present, the Midwest teams ranked between the 70th and 45th percentile win half their matches; the East teams in this same interval are winning 55%. The next month is predominantly conference matches compared to earlier in the season, so the expectation is for this disparity in win% to grow beyond 5% across this domain. The NPI offers a flat 20 extra points for every win. The win% disparity is certainly a driving force in producing ranking errors beyond the top 25 teams, and they are often obvious. The question is whether the magnitude of their metrics is enough to propagate the error into the top 15 in a way that compromises the NPI’s ability to perform its function. That is undetermined just yet.
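The compounding mechanism is easy to sketch. The recursion below is not the NPI formula; it is a toy opponent-averaging rating, with made-up team names and an arbitrary weight, that shows how inflating one team’s own-results component leaks into the ratings of teams that never even played it.

```python
def propagate(base, schedule, weight=0.5, iters=60):
    """Toy rating: each team's value is its own-results component plus
    a weighted average of its opponents' current values, iterated to a
    fixed point.  Illustrative only; not the actual NPI math."""
    rating = dict(base)
    for _ in range(iters):
        rating = {
            team: base[team]
            + weight * sum(rating[opp] for opp in schedule[team]) / len(schedule[team])
            for team in schedule
        }
    return rating

# A, B, C form a chain: C never plays A.
schedule = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
fair = propagate({"A": 50, "B": 50, "C": 50}, schedule)
skewed = propagate({"A": 60, "B": 50, "C": 50}, schedule)  # A over-rated by 10
# skewed["C"] ends up above fair["C"] even though C never faced A.
```

The inflation decays with distance from the over-rated node (B absorbs more of it than C), which is exactly the multiplier-style spread described above.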
February 14, 2025 “NPI Vulnerability continued…”
These are some of the things I will look at when the NPI is unveiled. If any appear to make it vulnerable in its task of choosing the worthiest at-large teams a few weeks later, I’d want to see how it evolves over the month, taking note of how it attempts to correct itself, especially over the conference championship week in April, after which I expect its final version will be published. This is the first of four weekly analyses I intend to do because, unlike most who think the only NPI that matters is the last one, I believe there is much to learn by tracking how it behaves between now and then. (I do not intend to share all four as posts. This one is the baseline, and one more at the end of the process can describe any observations made in how it evolved to its endgame.)
When observing the plot below, take note of the scatter color about each of the four “cubic” models, particularly the lighter shade of red I have superimposed over the top of the NPI Midwest programs. I would urge you to compare the models’ R² metrics, which measure the percentage of variability in rank that is predictable from a program’s median z-score. The closer to 1.0, the more “perfect” the model becomes.
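For readers unfamiliar with R², it comes from two sums of squares. A minimal sketch, independent of any particular cubic fit:

```python
def r_squared(y, y_hat):
    """Coefficient of determination: the share of the variance in the
    observed values y explained by a model's predictions y_hat.
    1.0 is a perfect fit; 0.0 is no better than predicting the mean."""
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)              # total variability
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # unexplained part
    return 1.0 - ss_res / ss_tot
```

A model whose predictions match the ranks exactly scores 1.0; one that only ever predicts the average rank scores 0.0.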
There are 28 teams whose median z-score is better than 0.75. In a normal distribution this represents about 22.7% of the population. Given the general shape of the models above, together with the fact that 22.7% of 127 teams is 28.8, it lends some credibility to these models’ metrics coming from a “normal” distribution. The plot below of just these 28 teams’ metrics is nothing more than the right tail of the plot above. The purpose of zooming in is to see the degree to which the NPI metrics of the best teams, only, differ from the three expert models.
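That tail percentage is easy to verify from the standard normal distribution; a quick stdlib check, not tied to the actual team data:

```python
import math

def normal_upper_tail(z):
    """P(Z > z) for a standard normal variable, via the error function."""
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

share = normal_upper_tail(0.75)   # about 0.227
expected_teams = share * 127      # about 28.8 of 127 teams
```

The observed count of 28 teams above a 0.75 median z-score sits right on that expectation, which is what lends the normality claim its credibility.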
The most recent post related to the NPI, on February 27th, is the only prospective piece I have seen across all of D3 sport. I titled it “Greed is Good?”, entertaining a forward-thinking account of how the NPI incentivizes competitive demand between teams. After all, isn’t it just as important to know whether the NPI pushes programs toward desirable competitive ideals in the future as it is to retrospectively populate a tournament objectively and with a sense of fairness? I happen to think so. Besides, one is destined to foreshadow the other, so if you are of the belief they aren’t connected, maybe you ought to consider climbing on board.
February 27, 2025 “Greed is Good?”
What I initially believed to be an “elegant” piece of math, I now find offensive for its lack of risk-versus-reward balance. This is a direct result of the win-probability rate of change that exists in MVB. Even if the NPI ends up offering reasonable at-large bids to teams that seem to deserve them in its early years, there is little chance its incentive structure will be good for the game in the long run; it acts like a journalist who embeds himself in the story only to change its trajectory, or a scientist whose bias taints an experiment.
I could name no fewer than a dozen teams off the top of my head that went out and produced schedules far more difficult than in the previous two years, some whose circumstances didn’t change and others whose conditions did from last season to this one (e.g., now being eligible for an automatic bid when before they weren’t). I am of the belief that the SOS component of the NPI was oversold this year, particularly since I researched the “Greed is Good?” post two weeks ago. Because so many teams stretched themselves this year in particular, the NPI will have been positioned in a more favorable light to succeed. If teams continue to pursue competition in the same way for the foreseeable future, it will continue to serve its purpose reasonably well. However, if they don’t …
I see Lancaster Bible’s victories carrying it to NPI #22 this week,* and I take note of St. John Fisher being among the few to have played no more difficult a schedule than in years past while sitting at #11, the first team out for what I think is the 3rd time in 4 years. (2022, 2024, and right now as it stands in 2025; in 2022 it won the UVC so it didn’t matter, and in 2023 it was one of the last two in.) Taken together with what I have previously written regarding NPI-driven incentivization, I suspect the NPI experiment is on an eroding path. Consider the following case study:
St. John Fisher has won exactly 19 of 20 matches against teams ranked 26th to 45th over the last 4 years. Had they played the current teams ranked between 26th and 45th and won 19 of 20, their NPI would be 61.13, ranked 9th, just a little better than Cal Lu, and they would be the last at-large bid the NPI would choose if it were decided today. St. John Fisher has spent 42 consecutive weeks across 4 years ranked between 5th and 15th in the AVCA Poll. Is it that inconceivable they’d win 19 of 20 against teams 15 to 30 ranked spots less skilled than themselves? That was rhetorical. It’s not! A #11-ranked team has somewhere between a 1-in-2 and a 1-in-3 chance of accomplishing such a thing.
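That chance range can be back-solved with a simple binomial model. The per-match win probabilities below are assumptions for illustration, as is the independence of the matches:

```python
from math import comb

def p_at_least(k, n, p):
    """P(at least k wins in n independent matches, each won with prob p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Assumed per-match win probabilities of roughly 0.89-0.92 against teams
# ranked 26th-45th put P(>= 19 wins of 20) in the 1-in-3 to 1-in-2 range.
low = p_at_least(19, 20, 0.89)   # about 0.34
high = p_at_least(19, 20, 0.92)  # about 0.52
```

In other words, the 1-in-2 to 1-in-3 estimate is what you get if a top-15 team wins roughly nine of every ten such matches, which its 19-of-20 track record suggests is plausible.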
If you read the “Greed” post a couple weeks ago, you’d know what’s described above is only the 2nd-most egregious incentive trait of the NPI. The first is how it hammers a team over the head for stretching itself to compete against teams a few ranked spots better than itself! WHAT? Isn’t that the competitive ideal we as athletes and coaches revere the most? Some would say you have to win those matches to earn your way, but I would counter that if you won 3 of 4 against teams a little better than yourself thinking you were a bubble team, then the fact is you weren’t really a bubble team after all. You were significantly better than that, and your scheduling choices carried unnecessary risk on the way to your year-end goal of qualifying for NCAAs. Congrats, though!
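The “you weren’t really a bubble team” argument holds up numerically. Assuming independent matches and a hypothetical per-match win probability p against slightly better teams, the chance of taking at least 3 of 4 is small for any p a genuine underdog would carry:

```python
def p_three_of_four(p):
    """P(at least 3 wins in 4 matches at per-match win probability p):
    the four ways to win exactly three, plus the sweep."""
    return 4 * p**3 * (1 - p) + p**4

# A true underdog (p < 0.5 vs. slightly better teams) rarely gets there:
# p = 0.40 -> about 0.18; even a coin-flip team (p = 0.50) -> about 0.31.
```

So a team that actually goes 3-for-4 against better-ranked opponents most likely had p well above 0.5, i.e., it was better than bubble-caliber all along.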
To those who might think me a jackass for sharing such information, I would simply say, “I’d rather be the jackass providing a level playing field of transparency about this system than the jackass who’d think it a strategic edge to one-up the competition knowing it.” Others still may think it in poor taste because they are of the mindset, “It just doesn’t matter! It is what it is, and I don’t intend to do anything with this information.” Until that handful of phone calls comes your way from people you’ve never met, seeking a match you now know benefits your opponent’s interests light-years more than it does your own: you are welcome. I was once told by a volleyball parent that if you look around the gym and don’t recognize the cupcake, then you are the cupcake. Don’t be a cupcake!
*Regarding Lancaster Bible: I watched them play earlier this year and have nothing but respect for that team. The gritty, frenetic defense they play, keeping alive balls that have no business being in the air, was a treat to watch. I only mention this program because Massey, IH, and T100 have them at #36, #35, and #28 respectively, an average ranking of #33 compared to the NPI’s #22. It stood out boldly enough as I scanned the NPI order for me to go check their schedule. I noticed 5 contests against teams inflated by the NPI on the graph above, with 4 more to come. I knew ahead of time they had split with an up-and-coming Elizabethtown squad, both times 3-2, in a home/away series; E-Town’s rank according to the 3 expert models is approximately #40. I found it interesting they defeated a team ranked #86 twice in the same day, 3-0 both times, earning two victories each valued higher than their present NPI rank. This supports much of what I have mentioned above. Furthermore, they happen to be one of three teams with a 99% chance to get a Natty Bid according to the most recent T100; unlike the other two (NYU & SVU), theirs comes with no chance to earn an at-large bid. So why mention it? Among other reasons, should Lancaster Bible’s NPI prospects continue to thrive, I think it more likely than not they would be one of 13 teams to get a bye in the NCAAs and not have to play a regional host in their first match. Is this deserved? Is it warranted? Is it just? It might be. However, I am not so sure it’s reasonable.

