I'm a big fan of analytics, and you don't have to be on this website more than 25 seconds to get that. When given the choice, I'm always going to err on the side of an objective, data-driven approach. I think that's really important. But output is only as good as input, and when it comes to this, I just don't know, man...
PredictionMachine.com jumped head-first into controversy by attempting to rank every national championship team, using a pretty standard method:
To build our National Championship Power Rankings, we "played" every team against every other team, using strength of schedule adjusted statistics, 50,000 times each and ranked them by overall winning percentage of those games.
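The method they describe is essentially a Monte Carlo round robin. Here's a minimal sketch of that idea; the team names, the ratings, and the Elo-style logistic win model are my own illustrative assumptions, since PredictionMachine's actual strength-of-schedule-adjusted model isn't public:

```python
import random

def win_probability(rating_a, rating_b):
    """Elo-style logistic win probability from a rating gap.
    This is a stand-in assumption, not PredictionMachine's model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def power_rankings(ratings, sims=50_000, seed=0):
    """'Play' every team against every other team `sims` times
    and rank by overall winning percentage."""
    rng = random.Random(seed)
    teams = list(ratings)
    wins = {t: 0 for t in teams}
    games = {t: 0 for t in teams}
    for i, a in enumerate(teams):
        for b in teams[i + 1:]:
            p = win_probability(ratings[a], ratings[b])
            a_wins = sum(rng.random() < p for _ in range(sims))
            wins[a] += a_wins
            wins[b] += sims - a_wins
            games[a] += sims
            games[b] += sims
    return sorted(teams, key=lambda t: wins[t] / games[t], reverse=True)

# Hypothetical strength-of-schedule-adjusted ratings for illustration.
ratings = {"1996 Kentucky": 2100, "1974 NC State": 1950, "1985 Villanova": 1880}
print(power_rankings(ratings))
```

The ranking itself is trivial once you have the ratings; the entire controversy lives in where those rating numbers come from, which is exactly the input problem below.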
The thing is, this becomes really problematic once you dig back beyond 1995 or so. The statistics get fuzzier and fuzzier the further back you go, and the problems with the inputs get so obvious that there's probably no point in bothering with any teams outside the last quarter century.
The results of this analysis place NC State's 1974 team among the worst to ever win a title. That's right, the team led by David Thompson and Tommy Burleson, which lost exactly one game in two years, and which beat a top-five Maryland team to get into the NCAAs, AND which happened to end the UCLA dynasty, is somehow a statistical footnote.
This is like denying David Thompson a legal dunk all over again: the rules are stupid, and the route to them is equally so. So how do we rectify this situation?