Many years ago I set about devising a system that would rate college football teams. I wanted to make something that would eliminate my personal biases and be based solely upon team abilities and performance.
Like many other systems, I started by evaluating each team based upon their offensive and defensive performance. I added in a factor to account for how much the team wins, plus another for how well the team performs at home. These numbers, when compared to another team's numbers, produce a predicted score, as many other systems do.
The difference in my system is that the teams are then graded upon how well they did in that specific game. If a team is favored to win heavily and they do, they get a certain number of points. If they outperform expectations, they get extra points, but those extra points are capped to avoid rewarding running up the score. If a team is heavily favored and wins, but by less than expected, that team earns a diminishing number of points, and may even lose points despite winning. Conversely, the underdog can gain rating points despite losing the game.
An underdog earns more rating points by actually winning the game, and the favorite loses an equal number of points. The bigger the original spread and the greater the underdog's margin of victory, the bigger the point swing (favorite losing rating points and underdog gaining points). By keeping the points balanced (Team A's loss is Team B's gain), I avoid overall point inflation. The one exception to this rule is when a team ranked in the Top-10 (my rankings, not the polls) loses. Should the #1 team be upset, they lose 3 times the regular number of rating points. A #2 team losing means they lose 2.5 times the regular number. This "Top-10 Factor" decreases to 1.1 times the regular number of rating points for the #10 ranked team. If a team in the Top-10 loses (and loses badly enough to lose rating points), whether the loss was predicted or not, this extra factor is put into play. The perception is that a loss for a Top-10 team is "bigger" than a loss for any other team. In all of these cases, the team doing the upsetting earns the regular number of rating points.
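To make the point swing concrete, here's a rough Python sketch of the idea. Only the #1 (3x), #2 (2.5x), and #10 (1.1x) multipliers come from the description above; the values for ranks #3 through #9 are my own guesses at a smooth falloff, and the function name and structure are inventions for illustration, not the actual formulas.

```python
# Top-10 loss multipliers: #1, #2, and #10 are stated in the text;
# the in-between values are assumed to taper smoothly (my guess).
TOP10_FACTOR = {1: 3.0, 2: 2.5, 3: 2.2, 4: 2.0, 5: 1.8,
                6: 1.6, 7: 1.45, 8: 1.3, 9: 1.2, 10: 1.1}

def rating_swing(base_points, loser_rank=None):
    """Return (points lost by the loser, points gained by the winner).

    base_points is whatever the game-grading step produced from the
    spread and the margin. The swing is zero-sum except for the Top-10
    penalty, which inflates only the loser's deduction; the winner
    still earns the regular amount.
    """
    lost = base_points
    if loser_rank is not None and 1 <= loser_rank <= 10:
        lost *= TOP10_FACTOR[loser_rank]
    gained = base_points
    return lost, gained
```

So an upset of the #1 team triples the favorite's deduction while the upset winner banks the normal amount, which is exactly why overall points stay (mostly) balanced.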
So, then, I have all these teams with all these rating points. What happens now? In each subdivision, I set the points so that an average team has a rating of 80. Anything above 80 is an above average team, below 80 is not so good. Over the years, I have seen that a rating of 90 is pretty good, a rating of 95 will most likely get a team into the Top 15 or so, and a rating of 100 or more will put a team into the Top 8. Going the other way, a rating of 70 indicates a team probably in the lower 25, while a rating below 60 means the Bottom-8 or so.
A rating of 80 in the FBS is NOT equivalent to a rating of 80 in the FCS. While the conversion is a fluid number that changes as teams from the two subdivisions play each other, the rough rule of thumb is that an FCS rating is only about two-thirds of its FBS equivalent. Thus, an 80 rating in the FCS converts to about 53.5 in FBS terms, while an 80 rating in the FBS would be about 119 on the FCS scale. The FCS ratings are more spread out generally, top-to-bottom. This is due to the wide variance in quality of teams within the FCS. You have teams that could definitely play in the FBS (Appalachian St, Villanova and James Madison, to name a few), you have teams that don't offer scholarships or participate in the FCS Championship playoffs (any Ivy League school), and you have teams that play a high number of games against Division II opponents (Butler, Campbell, Savannah St). With such a diversity, there is bound to be a broader range of ratings and, to keep things from getting out of hand, certain limitations are applied to rating points at the start of each season. Otherwise, the rating range would be more like 150 to 10. With such a broad spectrum, a team in the lower regions could get hot, beat other teams and pull upsets and still never get out of the Bottom-25.
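The conversion between the two scales can be sketched with a single factor. The text says roughly two-thirds, and the worked examples (80 FCS → ~53.5 FBS, 80 FBS → ~119 FCS) suggest something closer to 0.669; that exact constant is my assumption, and in practice the real number floats as cross-subdivision games are played.

```python
# Assumed fixed conversion factor for illustration; the real value
# is fluid and recalculated as FBS and FCS teams play each other.
FCS_TO_FBS = 0.669

def fcs_to_fbs(rating):
    """Express an FCS rating on the FBS scale."""
    return rating * FCS_TO_FBS

def fbs_to_fcs(rating):
    """Express an FBS rating on the FCS scale."""
    return rating / FCS_TO_FBS
```

With that factor, an average (80) FCS team lands around 53.5 on the FBS scale, matching the rule of thumb above.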
The other factor to determine rankings is "Difficulty of Schedule". While not a factor that evaluates a team's rating directly, it does reward a team that plays a harder schedule. Should two teams have identical ratings, the schedule difficulty is used as a tie-breaker, with the higher ranking going to the team with a harder schedule. However, a team's Difficulty of Schedule rating is also a fluid thing. It takes into account the entire season of all teams, not just when a team played another. For example, Team B is highly rated when Team A plays them, but later in the season, Team B falls apart and plummets in the ratings. Team A's difficulty of schedule follows Team B's fall throughout the season and is not based solely upon how Team B was rated at the time of the game. Any team's difficulty of schedule rating is an ongoing average of their opponents' ratings. And, yes, the conversion between subdivisions is factored in here as well, so an FBS team playing an FCS opponent is, at best, "OK" and oftentimes very detrimental to the FBS team's schedule difficulty. This also explains why a team's schedule difficulty factor doesn't rise rapidly or plummet sharply depending upon any one opponent's rating. Any one team's rating is only 1/10th to 1/12th (or even less) of another team's schedule difficulty rating.
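The "ongoing average" idea is simple enough to sketch: re-read every opponent's *current* rating each week (already converted to a common scale, per the FBS/FCS rule above) and average them. The function below is my own minimal illustration, not the actual implementation.

```python
def schedule_difficulty(opponent_ratings):
    """Ongoing average of opponents' current ratings.

    opponent_ratings should already be expressed on a common scale
    (e.g. FCS opponents converted to FBS terms). Because each opponent
    is only one of 10-12 entries, a single opponent's rise or fall
    moves the average just a little.
    """
    return sum(opponent_ratings) / len(opponent_ratings)
```

For example, on a 12-game schedule of all-80 opponents, one opponent collapsing from 80 to 56 only drags the difficulty from 80 down to 78, which is the gradual movement described above.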
Finally, between the end of one season and the start of the next, each team is evaluated on the number of starters returning on each side of the ball. I have found that a certain number of returning starters on defense seems to maintain that defensive factor, so I then add or subtract factor points based upon that number. For offense, I have a different number, and then add in the returning QB factor. If last year's starting QB is returning, I add 1 more to the offense factor. If the starting QB is not coming back, I subtract 2 more points.
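Here's one way that offseason adjustment might look in code. The baseline starter counts and the per-starter point value are placeholders I made up; only the QB adjustments (+1 if returning, -2 if not) come from the description above.

```python
# Placeholder constants (assumptions, not the author's real numbers):
DEF_BASELINE = 6          # returners needed to hold the defensive factor steady
OFF_BASELINE = 6          # same idea for offense (a different number in reality)
POINTS_PER_STARTER = 0.5  # assumed factor-point value per starter above/below baseline

def offseason_adjust(off_factor, def_factor,
                     off_returning, def_returning, qb_returns):
    """Adjust the offensive and defensive factors between seasons."""
    def_factor += (def_returning - DEF_BASELINE) * POINTS_PER_STARTER
    off_factor += (off_returning - OFF_BASELINE) * POINTS_PER_STARTER
    # QB adjustment is stated in the text: +1 returning, -2 departing.
    off_factor += 1 if qb_returns else -2
    return off_factor, def_factor
```

Note the asymmetry: losing the starting QB costs twice as much as keeping him gains, reflecting how much continuity at that position matters.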
The "winning edge" factor and the "home field" factor are each based upon the winning percentage from the previous 3 years plus the current season. This evaluates the history of the players who are graduating seniors and adds in their contributions from their senior year.
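The four-season winning percentage behind those two factors can be sketched as below. Weighting all four seasons equally is my assumption; the original may weight recent seasons differently.

```python
def winning_pct(records):
    """Winning percentage over the previous 3 seasons plus the current one.

    records: list of (wins, losses) tuples, oldest season first,
    e.g. [(8, 4), (10, 2), (9, 3), (6, 6)]. All seasons are weighted
    equally here, which is an assumption on my part.
    """
    wins = sum(w for w, _ in records)
    games = sum(w + l for w, l in records)
    return wins / games if games else 0.0
```

Because the window spans exactly a player's typical four-year career, a graduating class's full contribution ages out of the factor the season after they leave.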
Between seasons, I compare the numbers after adjustments to see how a team rates to start the new season. Oftentimes, a team has moved way up the rankings based upon their performance but, when evaluated against all other teams for the new year, they will find themselves back in the middle of the pack. In the same way, a team that has one abysmal season can find itself ranked much higher to start the next season. The system looks at ALL the factors, not just some of them.