Gopher Volleyball 2019

What is happening to us in Michigan?

*gobsmacked*

Sent from my SM-N950U using Tapatalk
 

Gophers with 16 blocks again. Total of 20 Hart Attacks! But Michigan takes them to 5 sets. That’s called “character building” I guess.

Perhaps a bit of a scary match for the fans. But some efficient work by the Gophers: only 13 attack errors (2.6 per set) versus 34 attack errors for the Wolverines. And they won the first 5-set match of the year despite 58 Gopher kills to 76 Michigan kills. +9 delta blocks by the Gophers and +6 delta service errors by the Wolverines made the difference.

Besides Hart’s production, there were 12 kills by Samedy, 11 kills by Rollins, 10 by Pittman, 4 by Morgan and 1 (probable dump) by McMenimen.

A 9-player rotation including all the usuals, but minus Miller (who did not set at all), and plus Rubright.
 

Got to watch the replay this morning-what a crazy, fun fifth set-especially the tricky bits where we went setterless.
 

Got to watch the replay this morning-what a crazy, fun fifth set-especially the tricky bits where we went setterless.

Nice 6-0 run, from down 8-5 to up 11-8, when they went setterless in the fifth. Still very concerning, though, is the number of kill attempts the Gophers get -- 41 fewer than Michigan in this match, 186-145 -- primarily due to our sub-par passing and setting last night.
 

Got to watch the replay this morning-what a crazy, fun fifth set-especially the tricky bits where we went setterless.

Yes, yes, yes. At a minimum, everybody needs to watch the micro-video on @GopherVBall Twitter labeled “You’re going to want to watch BOTH these points”. Or better yet, go to YouTube and watch the entire 5th set.

Without our starting setter, Miller, this team is (on paper anyway) not good enough to win the Big Ten. Yet they’re finding ways to win matches that (on paper anyway) they seemingly should have lost. Kudos to the team for their great confidence in themselves and their coach, down the stretch.

By the way, 49 assists by Bayley McMenimen.

The 20 kills was season-best for Alexis Hart.

Regan Pittman had a career-best 11 blocks (that is, she was involved in 11 out of the 16 team blocks), for the double-double.

Also racking up double-doubles were Stephanie Samedy and Adanna Rollins.

Shea Rubright did not have any kills, but her 3 duo blocks during “setterless time” were crucial in escaping Ann Arbor with the W.

All in all, a confidence builder for the Gophers.

But as noted by let’sbeclear, the sub-par passing and lack of hitting attempts are still a cause for concern.
 
Update: In West Lafayette, Purdue took out Nebraska in five sets. The Boilers won the 5th, 15-8.
Man, this grinding out set wins is stressful. Oh well, winning doesn't always need to be pretty. Does it? [emoji6]

 

Getting some subs in sets 3 and 4 --- Miyabe in and out for Samedy a few times in set 3. Sheehan in for Rollins in set 4.
 

Samedy, Hart, and Rollins hit -.037, .129, and .050, respectively, and the Gophers still win. Who woulda thunk! Multiple options help. Pittman hit .500 on 19 kills in 30 attacks, and Morgan hit .400 on 7 kills in 15. 11-5 on aces versus service errors -- Michigan State was 3-8 -- certainly helped. Can't complain about two road wins this weekend.
 

Pittman was outstanding; she had three of Minnesota's 11 aces, tying CC. The Gophers tried a little of everything in those sets: took a big lead and hung on; made a late run to snatch a set away from MSU; and finally, in the fourth, took a solid lead and (almost) cruised home.

McMenimen got her kill on a tip over the net, Sheehan got hers on a back-row roll shot that MSU just watched, and Rubright was 2 for 2.
 



With Miller out we are also lacking the dump by the setter in the offense. I wonder if Sara N is kicking herself for transferring.
 

The 80 match conference losing streak is OVER. Rutgers improves to 2-108 since joining the league.

 

With Miller out we are also lacking the dump by the setter in the offense. I wonder if Sara N is kicking herself for transferring.

And, for instance, against Michigan, their setter had, I’d say at least half a dozen successful dumps against us, plus a couple more attempts that we blocked. We scored no dumps against Michigan. The one McMenimen point that I thought might be a dump, was actually a regular kill.

So lack of dumps (as measured against an SSS Standard, say) is a drain on our offense.
 

Gophers move up in this week's RPI from #11 to #9. Some of these RPI moves are mystifying. Hawaii beats two lower-ranked teams, #171 Cal State Fullerton and #221 UC Irvine, on the road, and sees its RPI go from #10 to #17. Florida beats #92 Alabama and #55 Tennessee at home, and moves up from #12 to #10, while Penn State beats #36 Illinois (admittedly, 3-2) at home and #94 Maryland on the road, and drops from #9 to #12. Or how about Texas A & M moving from #17 to #11, after simply beating #31 Georgia at home?
 

Is the vball RPI a mix of computers and human polls?

Computer rankings cause the randomness and weird results. That's why they finally just got rid of them altogether when they did the college football playoff.
 

Gophers move up in this week's RPI from #11 to #9. Some of these RPI moves are mystifying. Hawaii beats two lower-ranked teams, #171 Cal State Fullerton and #221 UC Irvine, on the road, and sees its RPI go from #10 to #17. Florida beats #92 Alabama and #55 Tennessee at home, and moves up from #12 to #10, while Penn State beats #36 Illinois (admittedly, 3-2) at home and #94 Maryland on the road, and drops from #9 to #12. Or how about Texas A & M moving from #17 to #11, after simply beating #31 Georgia at home?

https://extra.ncaa.org/solutions/rpi/Stats Library/Nitty Gritty_102719.pdf

NCAA RPI:

1 Baylor
2 Texas
3 Wisconsin
4 Stanford
5 Pittsburgh
6 Washington
7 Nebraska
8 Kentucky
9 Minnesota
10 Florida
11 Texas A&M
12 Penn State
13 Marquette
14 Rice
15 UCLA
16 Louisville
17 Hawaii
18 Purdue
40 Indiana
 

Is the vball RPI a mix of computers and human polls?

Computer rankings cause the randomness and weird results. That's why they finally just got rid of them altogether when they did the college football playoff.

Except for input, no humans are involved in RPI. RPI is simply a formula using wins and losses on home, road and neutral sites for all 336 teams.

In volleyball, like women's basketball, it's the single best predictor of who's in the NCAA tournament and who will be the 16 host schools on opening weekend. Yes, there should be better methods, but it's the one we've got.
 

Except for input, no humans are involved in RPI. RPI is simply a formula using wins and losses on home, road and neutral sites for all 336 teams.

In volleyball, like women's basketball, it's the single best predictor of who's in the NCAA tournament and who will be the 16 host schools on opening weekend. Yes, there should be better methods, but it's the one we've got.

I was talking about how they used to use a bag of computer algorithm based rankings of college football teams, to help select who made the BCS. They got rid of that nonsense when they did the CFP system.

Ok, RPI then is nothing like those.

"Single best predictor" is a little like saying "an egg hatching is the single best predictor of when a chicken is born"? Well, yeah, it's the best predictor because it's what the selection committee uses to pick the teams ...
 

http://www.dailynebraskan.com/sport...cle_94dddd20-f13f-11e9-bf79-6b02a41ededb.html

From a recent Daily Nebraskan article on volleyball RPI (one note: I listed home, away and neutral as factors in RPI; that is factored only in basketball):

The Ratings Percentage Index, more commonly referred to as the RPI, has its flaws, but is not yet in its final days. The demise of the RPI was over-exaggerated, and it is still used for postseason selection despite the known struggles with the metric.

The RPI is another way to dive deeper into a team’s record. Some records do not tell the whole truth of how the team plays or who a team has played against. For that reason, RPI was created in 1981 to show a team’s strength of schedule and its opponents’ strength of schedule.

The RPI does a couple things right. It weighs matches against teams with better win percentages higher than those against weaker teams. 50% of the RPI formula is based on the opponent’s winning percentage, which means that playing teams with better records gives a larger RPI boost.

The other half of the formula revolves around other winning percentages.

The RPI formula is 0.25 (team’s winning percentage) + 0.5 (opponent winning percentage) + 0.25 (opponent’s opponent winning percentage).
The RPI formula puts so much emphasis on strength of schedule that a team’s wins and losses can be lost in the mix. One way losses affect the RPI is that when conference play begins, better conferences help every team out.

An opponent’s win percentage is half the RPI formula and another quarter is the opponents’ winning percentage. A conference with great teams at the top also helps the middle and lower teams. Power five conferences tend to have the stronger teams, and those conferences are perceived as the best in the RPI.

Losses are another part of the RPI formula that causes criticism. To some, losses in stronger conferences do not affect teams as much. Another issue is how not only road and home wins are weighed, but also home and road losses. The RPI only captures the teams in the schedule and their overall record — not other factors in the wins such as venue and time.
The usage of RPI has mostly correlated to the top four seeds in each region. In 2018, 15 of the top 16 RPI teams were at least a four seed within their respective region. The difference was where they were seeded.
 

http://www.dailynebraskan.com/sport...cle_94dddd20-f13f-11e9-bf79-6b02a41ededb.html

The RPI formula is 0.25 (team’s winning percentage) + 0.5 (opponent winning percentage) + 0.25 (opponent’s opponent winning percentage):

There's something that just doesn't feel right about counting one's opponent's record twice as much as one's own record. It seems a bit strange that you beat an opponent, and because their winning percentage is now lower than it was before you beat them, the value of that win is now less than it was before you played the match. That's like the Black family who moves into a previously all-White neighborhood, and they're told the value of their house immediately went down. Why? Because THEY moved into the neighborhood. Sheesh!
 

Somewhere in this country, maybe in a bunker deep inside NCAA's Indianapolis headquarters, there's a true believer in RPI. No one else actually likes it. Maybe men's basketball's NET ratings, after a rocky start, will eventually lead to a new system.
 

There's something that just doesn't feel right about counting one's opponent's record twice as much as one's own record. It seems a bit strange that you beat an opponent, and because their winning percentage is now lower than it was before you beat them, the value of that win is now less than it was before you played the match. That's like the Black family who moves into a previously all-White neighborhood, and they're told the value of their house immediately went down. Why? Because THEY moved into the neighborhood. Sheesh!

As I think we all know at one level or another, the whole rating/ranking system or systems is/are nuts. It's a supposedly objective figuring of a moveable feast, with tons of improvisational artistry, that can change abruptly within games and from match to match. The rankers try to make a science of a pretty much ballet-like enterprise by the athletes. And the results on the court are not infrequently influenced by faulty referee calls or other inconsistencies. I for one try to enjoy the games for the artistic performances of the players, while also rooting for the Gophers, of course, and thinking less and less about the NCAA committees and their ridiculous computers. It is, after all, sport and art, more than computer science. If this makes me nuts, so be it.
 

Gophers move up in this week's RPI from #11 to #9. Some of these RPI moves are mystifying. Hawaii beats two lower-ranked teams, #171 Cal State Fullerton and #221 UC Irvine, on the road, and sees its RPI go from #10 to #17. Florida beats #92 Alabama and #55 Tennessee at home, and moves up from #12 to #10, while Penn State beats #36 Illinois (admittedly, 3-2) at home and #94 Maryland on the road, and drops from #9 to #12. Or how about Texas A & M moving from #17 to #11, after simply beating #31 Georgia at home?
... From a recent Daily Nebraskan article on volleyball RPI (one note: I listed home, away and neutral as factors in RPI-that was factored only in basketball)
Iggy, thanks for posting the current volleyball RPIs (and I'll repost below for ease of comparison); and also for that Nebraskan article that gives a simple RPI explanation.

I'm going to take your word that home/away/neutral emphasis is only used in basketball - I googled high and low to answer that question, but obstinate google absolutely refused to give me the answer. (I think Google is getting worse, but I digress.)
Is the vball RPI a mix of computers and human polls?

Computer rankings cause the randomness and weird results. That's why they finally just got rid of them altogether when they did the college football playoff.
Except for input, no humans are involved in RPI. RPI is simply a formula using wins and losses on home, road and neutral sites for all 336 teams. ...
Yes, to reiterate, RPI is a robotically (if you will) computed metric that comes from the win/loss data plus who played whom, and has no human decision input involved. In contrast, the NCAA Power 10 rankings and the AVCA Coaches Poll (all shown below for comparison) are essentially both determined by human decision making (although they might use various statistical tools in helping them decide).

The fact that RPI is a fully computer-automated metric is not damning by itself. Yet humans are still generally smarter than computers here, since they can take into account factors that were not considered when the automated metric was designed.

However, it does happen to be the case that RPI is a generally very bad metric for ranking teams (as many people believe, including myself). I put much more faith in the AVCA Coaches Poll and the NCAA Power 10 rankings.
... In volleyball, like women's basketball it's the single best predictor of who's in the NCAA tournament and who will be the 16 host schools on opening weekend. Yes there should be better methods; but it's the one we've got.
Technically correct, but ...
... Ok, RPI then is nothing like those.

"Single best predictor" is a little like saying "an egg hatching is the single best predictor of when a chicken is born"? Well, yeah, it's the best predictor because it's what the selection committee uses to pick the teams ...
As I think we all know at one level or another, the whole rating/ranking system or systems is/are nuts. It's a supposedly objective figuring of a moveable feast, with tons of improvisational artistry, that can change abruptly within games and from match to match. The rankers try to make a science of a pretty much ballet-like enterprise by the athletes. And the results on the court are not infrequently influenced by faulty referee calls or other inconsistencies. I for one try to enjoy the games for the artistic performances of the players, while also rooting for the Gophers, of course, and thinking less and less about the NCAA committees and their ridiculous computers. It is, after all, sport and art, more than computer science. If this makes me nuts, so be it.
Very poetically said, Hrothgar.
There's something that just doesn't feel right about counting one's opponent's record twice as much as one's own record. It seems a bit strange that you beat an opponent, and because their winning percentage is now lower than it was before you beat them, the value of that win is now less than it was before you played the match. That's like the Black family who moves into a previously all-White neighborhood, and they're told the value of their house immediately went down. Why? Because THEY moved into the neighborhood. Sheesh!
Let's be clear about this - you hit the nail on the head in a non-technical manner, so let me expand on that more technically.

A comment in the RPI Wiki entry states the main point weakly: "The RPI lacks theoretical justification from a statistical standpoint."

I'll state it more strongly: The RPI was created in 1981 by a committee of sports enthusiasts (perhaps NCAA management?) who didn't know a darn thing about statistics. And the resulting RPI metric does a horrible job of ranking sports teams - in any sport. Unfortunately, they have stuck with it in spite of all the valid criticisms, with the exceptions of football moving off it and Men's Div I basketball switching to an experimental NET system starting last year.

Here's the primary aspect of how they went wrong in designing RPI. They started out with simple match win/loss statistics. So far so good. But they (rightly) realized that they needed to put in some kind of factor that tempered the pure win/loss statistics with how good their opponents were. Otherwise, an undefeated team in the absolute worst conference would always come up ranked #1. So they decided to put into the metric an aspect reflecting Strength of Schedule (SoS).

But the way they went about doing that compensation was all messed up. In a nutshell, they over-did the SoS compensation so much, that RPI actually became primarily a measure of a team's Strength of Schedule, augmented by a minor factor that actually reflects how well a team has played (so far this season). You can see that by breaking down the RPI formula ...

RPI = 0.25 (team’s winning percentage) + 0.5 (opponent winning percentage) + 0.25 (opponent’s opponent winning percentage)

You can see that the SoS part (that they intended to use to compensate for Strength of Schedule) is the + 0.5 (opponent winning percentage) + 0.25 (opponent’s opponent winning percentage). Using symbols, with WIN representing win percentage, we get ...

RPI = 0.25 * WIN + 0.75 * SoS
... where SoS = (2/3) * (opponent winning percentage) + (1/3) * (opponent’s opponent winning percentage). The whole SoS term is a measure of the given team's Strength of Schedule, placing twice as much emphasis on the strength of their direct opponents as on the strength of their opponents' opponents. But make no mistake, the entire SoS term, which accounts for 3/4 of the RPI metric, is some sort of measure of the strength of the given team's schedule.

If you do the math, you see that this works out to the original RPI formula. But by stating it this way, one can easily see that 75% of the emphasis is being placed on a team's Strength of Schedule, whereas only 25% is being placed on the team's own WIN percentage.
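For the numerically inclined, a quick Python check (my own sketch, nothing from the NCAA) confirms the algebra: expanding 0.25*WIN + 0.75*SoS, with SoS weighted 2/3 on opponents' winning percentage and 1/3 on opponents' opponents', gives back the official 0.25/0.5/0.25 weights.

```python
# Sketch (my own check, not an official NCAA calculation): verify that the
# 25/75 WIN-vs-SoS decomposition reproduces the official RPI weights.
#   wp   = team's winning percentage
#   owp  = opponents' winning percentage
#   oowp = opponents' opponents' winning percentage

def rpi_official(wp, owp, oowp):
    """RPI as published: 0.25*WP + 0.5*OWP + 0.25*OOWP."""
    return 0.25 * wp + 0.5 * owp + 0.25 * oowp

def rpi_decomposed(wp, owp, oowp):
    """Same formula rewritten as 25% winning and 75% strength of schedule."""
    sos = (2 / 3) * owp + (1 / 3) * oowp
    return 0.25 * wp + 0.75 * sos

# The two forms agree for any inputs (up to float rounding):
for wp, owp, oowp in [(0.9, 0.5, 0.5), (0.6, 0.8, 0.7), (1.0, 0.3, 0.4)]:
    assert abs(rpi_official(wp, owp, oowp) - rpi_decomposed(wp, owp, oowp)) < 1e-12
```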

So although the statistically-illiterate RPI design committee sought a metric of a team's WIN percentage slightly tempered by its SoS, what they actually got was a metric of a team's SoS slightly tempered by their WIN percentage. Let me state it in no uncertain terms.

RPI is not a metric of the quality of performance of various volleyball (or fill-in-the-blank-sport) teams. Rather, it is a metric of how smart the team was in scheduling matches against (what they hope will turn out to be) the best teams in volleyball, ever-so-slightly tempered by the actual quality of performance of the team to which the metric is applied.

Or putting it another way: The designers of RPI were statistically dumber than a box of rocks.

In hindsight, they could have done a slightly better job without actually complicating the formula much. For instance, they could have put 50% emphasis on WIN percentage and 50% on SoS. For all I know, that might have been a lot better, although it's hard to say which distribution of emphasis from (say) {50/50, 55/45, 60/40} would be best. But clearly, a 25/75 emphasis on WIN vs SoS was a horrible choice, and it makes the RPI metric almost worthless.
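To see how much the weighting choice matters, here's a toy Python comparison with made-up numbers (the records and SoS values are hypothetical, not real 2019 teams): under the official 25/75 split the schedule dominates, while a 50/50 split lets the on-court record decide.

```python
# Toy example with MADE-UP numbers: how the WIN-vs-SoS weighting flips a ranking.
# Team A: great record (.900) but a middling schedule.
# Team B: so-so record (.550) against a brutal schedule.

def rating(wp, sos, win_weight):
    """Generic RPI-style rating: win_weight on record, the rest on schedule."""
    return win_weight * wp + (1 - win_weight) * sos

team_a = dict(wp=0.90, sos=0.50)
team_b = dict(wp=0.55, sos=0.70)

# Official RPI weighting (25% WIN): the schedule dominates, so B outranks A.
assert rating(**team_b, win_weight=0.25) > rating(**team_a, win_weight=0.25)

# A hypothetical 50/50 weighting: the record matters again, and A outranks B.
assert rating(**team_a, win_weight=0.50) > rating(**team_b, win_weight=0.50)
```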

This bad formulation of RPI is directly responsible for what seems to be anomalies in its use. For instance, let's take a couple examples ...

"Hawaii beats two lower-ranked teams, #171 Cal State Fullerton and #221 UC Irvine, on the road, and sees its RPI go from #10 to #17."

In this example, Hawaii gets two victories but its RPI ranking gets totally slaughtered, moving from #10 to #17. The delta to its raw RPI number (the number between zero and one that is then sorted in order to determine ranking positions) gets 25% times a positive delta in the WIN factor, and 75% times a huge negative delta in the SoS factor, just because it played two teams ranked #171 and #221. The huge negative hit to the SoS factor (which, remember, is weighted 3X bigger) far outweighs the positive (but small-weighted) increase to the WIN factor. So Hawaii's raw (zero-to-one) RPI number goes down rather sharply. And because the raw RPI numbers are densely clustered, especially among the high-ranked teams, this drop in raw RPI causes a huge drop in RPI ranking, from 10th place to 17th place. So this is the "correct" effect of RPI, at least the way that it is (very wrongly) defined.

By the way, if you were to check, both Cal State Fullerton and UC Irvine received (at the same time) a huge boost in their RPI rankings. Essentially, Hawaii donated some of its RPI to Cal State Fullerton and UC Irvine. One might consider that a charitable act by whomever on the Hawaii staff set up the schedule. But of course, Hawaii is geographically isolated, so it almost has to play some unranked Cali teams, unless it wants to fly to Minnesota instead.
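The Hawaii effect can be mocked up in a few lines of Python. The percentages below are invented for illustration (not Hawaii's actual 2019 numbers): two wins nudge WP up a little, but the weak opponents drag OWP down, and OWP carries twice the weight.

```python
# Illustration with INVENTED numbers (not real Hawaii stats): two wins over
# very weak teams can still lower a team's raw RPI number.

def rpi(wp, owp, oowp):
    return 0.25 * wp + 0.5 * owp + 0.25 * oowp

before = rpi(wp=0.85, owp=0.60, oowp=0.55)  # hypothetical pre-weekend averages
after  = rpi(wp=0.86, owp=0.55, oowp=0.54)  # WP up a hair, schedule terms down

assert after < before  # two wins, yet the raw RPI number drops
```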

The other examples noted are similar, and all of them make sense according to how RPI is (badly) formulated, except for "Texas A & M moving from #17 to #11, after simply beating #31 Georgia at home", which puzzles me a bit and which I haven't looked at in detail. A conjecture: formerly, its #17 ranking was being dragged down excessively by having previously played mostly bad-RPI teams, so not only does the win help (a little bit, anyway), but just playing a (better) #31-ranked team gives a huge boost to the dominant SoS factor of its RPI. In any event, I can guarantee you that the seemingly odd Texas A & M effect is simply due to how horribly the RPI metric is formulated, and is thus yet another example of how bad RPI is.
Somewhere in this country, maybe in a bunker deep inside NCAA's Indianapolis headquarters, there's a true believer in RPI. No one else actually likes it. Maybe men's basketball's NET ratings, after a rocky start, will eventually lead to a new system.
The switch to the new NET formula from RPI for Men's Div I Basketball was at least right-headed in the sense that they tried to improve the very bad RPI formulation. However, it seems obvious (albeit not yet scientifically proven) to a lot of people that with NET, the NCAA swung the pendulum too far back toward the winning component (a more complicated variant of WIN, in this case), and thus (among other faults) NET does not emphasize strength of schedule enough. There are so many things wrong with NET, yet it's impossible to critique it honestly, since this time the NCAA kept important details of the algorithm secret. My characterization of the NCAA's invention of the new experimental NET ranking system is something like ...

"They finally acknowledged that RPI is evil, so the NCAA convened another committee to design the NET alternative metric. This time, they invited one statistical consultant to the meeting along with 19 statistically illiterate NCAA goons. The 20 voted on a new, really complicated formula, which was a compromise between the 19 statistically illiterate goons and the one statistician. As a result they got a NET metric that has equally as many problems as RPI, but that mostly fails in ways that are opposite to the ways in which RPI fails."

For reference, note the following ...

Compare the current (human-made) NCAA Power 10 rankings (https://www.ncaa.com/video/volleyba...l-rankings-texas-new-no-1-ncaacoms-power-10):

1. Texas
2. Pittsburgh
3. Wisconsin
4. Baylor
5. Minnesota
6. Penn State
7. Nebraska
8. Stanford
9. Creighton
10. Marquette

... versus the top rankings in the (human coaches-made) AVCA Coaches Poll (https://www.ncaa.com/rankings/volleyball-women/d1/avca-coaches):

1. Texas
2. Pittsburgh
3. Baylor
4. Wisconsin
5. Stanford
6. Minnesota
7. Penn State
8. Nebraska
9. Creighton
10. Marquette
11. BYU
12. Washington
13. Florida
14. Colorado State
15. Kentucky
16. Purdue
17. Utah
18. Rice
19. Illinois

... versus the top RPI rankings for NCAA Div I Volleyball (https://www.ncaa.com/rankings/volleyball-women/d1/ncaa-womens-volleyball-rpi):

1. Baylor
2. Texas
3. Wisconsin
4. Stanford
5. Pittsburgh
6. Washington
7. Nebraska
8. Kentucky
9. Minnesota
10. Florida
11. Texas A&M
12. Penn State
13. Marquette
14. Rice
15. UCLA
16. Louisville
17. Hawaii
18. Purdue

As noted in the video, the lady announcing the NCAA Power 10 rankings (who also did the rankings, I think) almost (but not quite) ranked Minnesota above Baylor. In that ranking, Baylor came in 4th and Minnesota 5th (with the leaders being Texas, Pitt and Wisconsin). Baylor had been just about everybody's first choice while they were undefeated, but their defeat dropped them to just a bit higher than Minnesota in the Power 10 rankings. By comparison, the AVCA poll dropped Baylor only to 3rd place (with the Gophers in 6th place). And in total contrast to that, RPI leaves Baylor at #1 but puts Minnesota at #9 (as noted, a two-place advance from #11).

What does the last bit mean (about RPI having a much wider GAP between Baylor and Minnesota than any of the human-based polls do)?

Well, it just means that RPI is a piece of crap as an actual volleyball rating system. Baylor is still #1 in RPI simply because the Baylor team schedulers were smart enough to construct a 2019 schedule that maximized the number of really good volleyball teams they play. Up until this weekend, this was also reinforced by their perfect win/loss record, so they were probably rightly considered #1 by all the rating systems. But now that they have had a loss, and proved to be fallible like many of the other good teams, the human-decided ranking systems both dropped them down a few notches. They're still a good team, mind you, but now considered only marginally better than the Gophers. However, since RPI is mostly a measure of Strength of Schedule, and Baylor still has an extremely strong schedule plus only one loss, the RPI metric automatically still considers them #1. The Gophers, on the other hand, don't have quite as good a schedule, so in spite of a pretty good win/loss record, they are still down there at #9 in RPI.

So what you can say, based on the above three rankings, is that Minnesota pretty much has the 9th most difficult Strength of Schedule, but in terms of how good the Minnesota team is (relative to the other good teams), Minnesota is ranked either #6 or #5, depending on which of the two human-based rankings you trust.

The down side, of course, is that at NCAA tournament selection time, RPI is still one of the biggest factors considered. Thankfully, it's not the only factor. Yet RPI could cause us to get a bad matchup that we didn't deserve. So any way you cut it, we're going to have to battle hard in the NCAA tournament games. It's unlikely that we'll end up with a top-four RPI ranking, even if we end up in the top four of the other credible (human-based) rankings.

However, in spite of what I just said (that would make one suspect that we won’t advance much more in RPI), we could actually advance significantly in RPI over the remaining Big Ten games. That’s because we will play Wisconsin, Nebraska, Penn State and Purdue. These are all ranked teams, so just playing them will boost our SoS significantly. And since RPI is mostly a measure of SoS, playing them at least has the potential to boost our RPI. But we also need to take care of business with those ranked teams (which could be a challenge if Miller is still out). More losses than wins against these remaining ranked teams on our schedule, could counteract the RPI benefit of just playing them. Because winning does count for something in the RPI formula, even if it doesn’t count for much (namely only 25%).
 

The problem with computer algorithm rankings always boils down to this inevitable reality:

- you either go "all in" and say something like "this is what the system is, I don't claim it is perfect, but this is what it is and what it will do, take it or leave it, for whatever it is worth"

- or you start saying things like "hmmm ... well that ranking just doesn't seem right ... let's tinker with this factor a little bit, or let's add this factor on and weight it like this" until it "looks right", and then it gives you another "bad" ranking, and you tinker with it again, and then ...


This is why I think human rankings are superior. Absolutely, we can use computer rankings as a datapoint, or something to compare against, but I think it must be humans picking, not computers. And the RPI, as you've explained it, is barely even an algorithm. It's way too simple to be given as much weight as it does. To the point of being silly.
 

Personally I think the entire NCAA format is absurd if not even corrupt. NCAA chooses 64 teams (not only in VB but also BB & I think Baseball and some others), only a handful of whom have a chance of winning. It's essentially a huge PR and money-grubbing enterprise by the NCAA (witness the overpriced NCAA mementos sold at NCAA playoffs). If the NCAA championships were on the up and up, they would begin by choosing only, say, 8, or maybe just 4 and at the very most 16, teams with a REALISTIC chance of winning a national championship and then have them play all games at neutral sites to eliminate home court advantages. Plus having quantitative numbers crunchers at the NCAA ranking a qualitative activity doubles the absurdity.
 

Volleyball and Women's basketball are obviously not money grabs. I like the top 16 hosting in WVB and WBB; neutral sites were blah or oddly not neutral. But that's a discussion separate from RPI. And technically, the NCAA only chooses half the field; the others are auto bids.

The funny thing about RPI (a house of cards built on no foundation) is that it's how we talk about rankings. A good move by the NCAA five or six years ago was when they started releasing RPI numbers regularly. They used to drop them seemingly randomly late in the season, and they inevitably contained surprises.

Now, the NCAA begins regularly releasing RPI sheets about six weeks into the season, which I assume influence the few human polls. Releasing the RPI Team Sheets and RPI Nitty Gritty reports weekly (in WBB they sometimes come out every day) has, for better or worse, normalized RPI. All the conversations around Strength of Schedule, record versus top 25, top 50 and top 100, etc., are based on RPI. Those dreaded losses to above-200 and above-300 teams are all creations of RPI. Strength of conferences is probably the most vilified of the RPI rankings; but we still toss it about.

Anyway, it seems coaches are comfortable with RPI. They never seem comfortable with ranking systems which include the most essential stat: margin of victory.

Side note: Hockey uses the Hockey RPI version to choose the 16 tournament teams. Top 16 in and everyone else is out.
 

Side note: Hockey uses the Hockey RPI version to choose the 16 tournament teams. Top 16 in and everyone else is out.


I don't think I have ever heard a hockey coach or fan disparage the pairwise rankings like fans of the other sports do the RPI. I have no idea what the math is, but everyone seems to agree on it as an acceptable way to separate teams. Also, hockey has its own version of automatic qualifiers and opens the possibility of a non top 16 team getting in.
 