The Top 100

Editor’s note: The 2024 season is about to begin, and with it come new additions to the FrogJump team and our coverage. I’m excited to introduce Eric Ingerick to the landscape, both for his passion for our sport and for the metrics that cover it. The FrogJump Power Rankings will now be taken over by Mr. Ingerick and will be known as our T100 (top 100). I’ll leave the rest of this article to Eric, as he can introduce and explain his work in greater detail than I can.

The list was formerly known as d3mvbt100, but it occurs to me the context by itself pretty much guarantees we are reading about Division 3 Men’s Volleyball (d3mvb) programs. Duh! LOL – So this year, how about we just call it the T100! (The domain rights just expired, so it might be the legal thing to do as well.)

WHO produces it? I am Eric. Retired Math/Stat Teacher, Math Tutor, Data Analyst, Volleyball
Dad, Sports Enthusiast and hopefully a source for interesting, sometimes off-the-beaten-path,
reliable, verifiable, authentic numerical takes.

WHAT is it? T100 is a ranked order list of the top 100 D3 Men’s Volleyball teams in America
using rating points.

WHY 100? In 2023 there were 113 programs and this year there seem to be 124, … I think.
You would not believe how difficult it is to pin down the exact number, or to agree on who the programs are even if by some stroke of luck you find consensus on the count. As soon as you think you have it, it changes again. There is so much flux that even the National Committee minutes are off by as much as 4% by the time January rolls around, and the committee uses that information regularly to make high-leverage decisions, such as which teams, and how many, receive bids to the National Tournament. A moving target for sure, given the combination of increased interest in the sport and fiscally difficult times for institutions of higher education.

I see little need to differentiate teams below the 20th percentile, for a number of reasons, though the method certainly does so reasonably well, with perhaps a smidge less certainty than it differentiates team strength across the parts of the distribution where wins are more prevalent. Some might ask, “Why go past 25 or 50?” The reason is simple. Athletic teams and their constituents want to be able to measure progress toward larger goals, the kind of goals that, once met, were forged by precursor events detectable only by those close to the program, events the disengaged rest of us would never observe.

There are teams over the last 5 years that have made steady progress but never been ranked in any national poll. Any team moving up the T100 should know there is reliable, precise, quantifiable evidence for having improved beyond what a traditional W-L record indicates. As for those teams whose present skill is among the lesser 20%, the list leaves them aspiring to break onto the T100, not so unlike teams consistently among the top 25 who are champing at the bit to be recognized by the AVCA Coaches Poll, which used to go to the perceived best 15 of the bunch.

WHY do it? It is a challenge and it is fun. Should it cease to be either, then I suspect it will also disappear. Likewise, if it stops being fun or interesting for readers, it won’t be sought out anymore.

WHERE can I find it? The T100 will be available through FrogJump.

WHEN is it available? It is posted early each week of the season until a National Champion is crowned.

HOW does T100 work? Elo is a zero-sum game. The total points in the system for 124 teams will both start and remain 1240 this year, which makes the average 10 points per team. (Unless the 124 count turns out to have been wrong, of course.) Every time a match is played, the winner takes points away from the loser. A relationship describing the chances of a favorite defeating a dog determines how many points that will be.

Without “gunking” up the explanation with references to probability theory and logistic functions, suffice it to say: when a favorite wins, it will take anywhere from 0 to ½ point away; should a dog win the match, it will take somewhere between ½ and 1½ points away. Should two equal teams play each other, the winner essentially steals ½ point from the loser. The most important thing to understand is that as the gap in competitive compatibility between two teams widens, the number of points poached changes most sharply near even matchups and levels off as the gap grows. This delicate balance is the key to a system responsive enough to outcomes, but not too responsive to any single match played.
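The transfer rule above can be sketched in a few lines. This is a minimal illustration, not the T100’s actual formula: the logistic `scale` constant and the `k = 1` transfer factor are my assumptions, chosen so two equal teams exchange exactly ½ point.

```python
import math

def expected_win_prob(r_a, r_b, scale=2.0):
    # Logistic expectation: chance team A beats team B given their
    # ratings. `scale` (an assumption) sets how a rating gap maps to
    # win odds; here a 2-point gap means roughly 10:1 odds.
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / scale))

def update(r_winner, r_loser, k=1.0):
    # Zero-sum transfer: the winner takes k * (1 - expected) points
    # from the loser, so a heavy favorite gains almost nothing and
    # an upset winner gains up to k points.
    delta = k * (1.0 - expected_win_prob(r_winner, r_loser))
    return r_winner + delta, r_loser - delta

# Two equal 10-point teams: the winner steals exactly half a point.
a, b = update(10.0, 10.0)   # -> (10.5, 9.5)
```

Note that with `k = 1` an upset gain tops out at 1 point, not the 1½ described above; matching that ceiling would require a larger transfer factor on underdog wins, which is why these constants are labeled assumptions.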

If I started by giving every team exactly 10 points, then by the end of a season of roughly 1500 matches the model would have found a reasonable dynamic equilibrium, nothing too horrendous. However, if I were divinely clairvoyant and could create the “perfect” pre-season ranking, the system would have no need to adjust itself and would respond only to the natural variation in teams’ play due to whatever factors drive it (injuries, increased strength and speed, skill improvement, travel, etc.). The former offers no substantive value in forecasting during the season because it takes the whole season to right itself. The latter (the clairvoyant approach) would be ideal because the system, like a well-trained athlete who has already put in the work and preparation, can respond instinctively to conditions in the game.

So the goal becomes making the pre-season ranking as close to “perfect” as possible. How to do that is the second most important task; you do not want a well-calibrated system so distracted righting itself that it is less able to respond to the natural ebb and flow of the season. Here is my method for building a preseason ranking as close to perfect as I can:

  1. Start with last year’s final rating. Certainly it is flawed, but far less so than if all teams started out equal.
  2. Consider the composite wisdom of the expert mathematical crowd. Whatever they think they know certainly tilts closer to the truth than last year’s final ranking. Places like Inside Hitter, Massey, and others are excellent sources in their own right, but in my opinion take a little longer to become credible forecasting systems.
  3. Take note of player transitions and terminations, to use some volleyball lingo. Kudos to IH this year for providing invaluable information on the classification of players on teams and where they came from. Sure, there were too many assumptions about graduates and such this year, but as soon as that became clear, they recalibrated and improved. I see their efforts as magnificent. Unfortunately, others superficially saw Stevens ranked 8th on the initial list and fallaciously lost confidence. This criterion is far more difficult to interpret, especially with the extra years of eligibility offered through Covid to then-incoming freshmen. There will be a critical mass of 23- and 24-year-old men playing among boys these next couple of years, so this year’s 25th-best team will likely be better than the 15th-ranked teams of yesteryear.
  4. Respect coaches’ preseason evaluations of their own teams and those in their conference by taking note of preseason conference polls. Trusting the pecking order of teams within conferences is a worthy last step toward a valid national ranking.
  5. Once I get the order as good as I can using what is, in essence, a wisdom-of-the-crowd composite, I then assign the points so that the distribution’s variability is pinched to 80% of the variability I know it will have by the end of the season. This hedges against any egregious numerical errors that might still have been missed, giving teams inadvertently placed too high or too low the ability to play their way to their correct placement sooner rather than later. Each week, I watch the variability of the whole system creep up toward the final magnitude I expect. To illustrate: T100’s preseason poll starts with its best team near 15 points higher than its weakest. By season’s end that difference will be roughly 18, 20% greater by design. That suggests the best team at season’s end would win a set 25-7 against the weakest. Presently, IH’s range is 22 to start the season. I do not doubt the strongest team in the land could defeat the weakest in a typical set by a 25-3 margin, but I absolutely believe they wouldn’t even if they could. There needs to be a built-in buffer so that when really good teams play those not so good, a forecast can reflect lesser margins, because of starters not playing and the appropriate level of concern demonstrated by coaches and players alike.
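The pinch in step 5 can be sketched as a shrink toward the league mean. This is my guess at the mechanics rather than the T100’s exact procedure; the 0.8 factor comes from the 80% figure above, and shrinking toward the mean preserves both the zero-sum total and the rank order.

```python
def pinch(ratings, factor=0.8):
    # Shrink every rating toward the league mean so the preseason
    # spread is `factor` (80%) of the expected end-of-season spread.
    # The mean, and therefore the zero-sum total, is unchanged, and
    # no two teams swap places.
    mean = sum(ratings) / len(ratings)
    return [mean + factor * (r - mean) for r in ratings]

# A toy three-team league: the spread shrinks from 12 points to 9.6
# while the total stays at 30.
preseason = pinch([4.0, 10.0, 16.0])
```

As the season’s results come in, the Elo updates naturally stretch this compressed distribution back out toward its full end-of-season variability.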