Each player who has submitted match results to me has a number called a rating. This number starts at 1000. Whenever a player defeats another player, their rating goes up. Whenever a player loses, their rating goes down. One gets more points for defeating higher-rated players than for defeating lower-rated players.

Whenever a group of players face each other and submit their match results to me, I take out my calculator (a TI-89, since the equation is a little complex) and plug three things into one of two functions: each player's current rating, the number of opponents that player defeated, and a number called K that depends on the number of players and on the winner's point total. The selected function outputs each player's new rating.

The first function does the following:

The risk for each player is the maximum number of points that player can lose. For a player A with rating R1, against opponents with ratings R2, R3, ..., RN, the risk is as follows:

Risk(A) = K/(10^((R2-R1)/400)+1) + K/(10^((R3-R1)/400)+1) + ... + K/(10^((RN-R1)/400)+1), where K is a constant to be explained later.

First, each player's risk is subtracted from their rating. Then the function adds K times the number of opponents that player defeated. In case of a tie, each player involved receives half a win.
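The update above is a standard multiplayer Elo step, and can be sketched in a few lines of Python. The function and variable names here are mine, not part of the system:

```python
def risk(r_player, opponent_ratings, k):
    """Maximum points the player can lose: K times the sum of the
    player's Elo expected scores against each opponent."""
    return sum(k / (10 ** ((r_opp - r_player) / 400) + 1)
               for r_opp in opponent_ratings)

def new_rating(r_player, opponent_ratings, wins, k):
    """Subtract the risk, then add K per defeated opponent
    (a tie counts as half a win)."""
    return r_player - risk(r_player, opponent_ratings, k) + k * wins
```

For example, two players both rated 1000 playing with K=48 each risk 24 points, so the winner moves to 1024 and the loser to 976; the changes cancel out, as they should.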

K is a constant determined first by the number of players: 2 players give K=48, 3 or 4 give K=32, 5-6 give K=24, 7-8 give K=16, 9-10 give K=12, and 11 or more give K=8. However, if the highest score falls in the 19-24 range, K decreases one step (48 to 32, 32 to 24, and so on). If the highest score falls in the 12-18 range, K decreases two steps, making these games only half as important as a game to 25. If K would decrease below 8, the next two values are 6 and 4.
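Since the two-step drop always halves K, the whole table fits on one ladder of values. A sketch of one way to encode it (the function name and the ladder index are my own invention):

```python
# K values in order; stepping down the ladder is stepping right.
K_LADDER = [48, 32, 24, 16, 12, 8, 6, 4]

def pick_k(num_players, highest_score):
    """Base K from the player count, then step down for short games."""
    if num_players == 2:
        base = 0
    elif num_players <= 4:
        base = 1
    elif num_players <= 6:
        base = 2
    elif num_players <= 8:
        base = 3
    elif num_players <= 10:
        base = 4
    else:
        base = 5
    if 19 <= highest_score <= 24:
        base += 1   # one step down
    elif 12 <= highest_score <= 18:
        base += 2   # two steps down: half the weight of a game to 25
    return K_LADDER[base]
```

So a two-player game to 12 gets K=24, exactly half of the 48 that a two-player game to 25 gets, and an 11-player game to 12 bottoms out at K=4.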

If I only rated games played to a full 25 points, a standard Elo rating system would work perfectly well. However, games played to 12 or 19 points count towards a player's rating as well. In these games, the weaker player has an increased chance to win. Therefore, if I used the same equation for games to 12, 19, and 25, weaker players would have the advantage in games to 12, and stronger players in games to 25. Because a rating system should provide an impersonal measurement of a player's skill, without being skewed by players' choices of whom to play any more than a bare minimum, I took steps to correct this. Here's my work, as simple as I can make it.

If a game to p points is nontrivial, I assume with high confidence that a game to np points behaves more like a best-n-out-of-(2n-1) series of games to p points than like a single game to p points. My reasoning: as long as a game to p points is nontrivial, the winner of a game to np points is almost always the first player to reach p points n times. I'm sure somebody can improve this estimate, but it works better than nothing at all.
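The series approximation can be made concrete: if each game to p points is won independently with probability p_win, the chance of taking a best-n-out-of-(2n-1) series is a binomial tail sum. A quick sketch (names are mine):

```python
from math import comb

def series_win_prob(p_game, n):
    """Probability of winning at least n of 2n-1 independent games,
    each won with probability p_game."""
    games = 2 * n - 1
    return sum(comb(games, w) * p_game ** w * (1 - p_game) ** (games - w)
               for w in range(n, games + 1))
```

An even match stays even (p_game = 0.5 gives 0.5 for any series length), while a 60% favorite in one game wins a best-2-of-3 about 64.8% of the time: longer games magnify skill differences, which is exactly the effect the correction below accounts for.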

Assume that players P1 and P2 have perfectly accurate ratings R1 and R2 when playing games to p points, and define x = R1 - R2. Use the standard Elo formula to get the probability of P1 winning a game to p points, convert that to the probability of winning a best-2-out-of-3 series, and then convert the result back to a rating difference. For games to 2p points, this gives:

x(2p) = 2 * x(p) + 400 * log10{ [ 10^(x(p)/400) + 3 ] / [ 3 * 10^(x(p)/400) + 1 ] }

The second function estimates the inverse of this equation, and uses that to alter each rating difference when figuring out odds. This allows me to rate 12-point games accurately. Very soon, I will program the function for 19-point games. (It's all down in my notebook, just not on the calc yet.)
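As a sanity check, the closed form can be computed both ways, and the inverse (the quantity the second function estimates) can be found numerically. The function names and the bisection approach are mine, a sketch rather than the TI-89 implementation:

```python
from math import log10

def p_win(x):
    """Elo probability that the player who is stronger by x wins one game."""
    return 1 / (1 + 10 ** (-x / 400))

def x_double(x):
    """Closed form: rating gap for games to 2p, given gap x for games to p."""
    q = 10 ** (x / 400)
    return 2 * x + 400 * log10((q + 3) / (3 * q + 1))

def x_double_check(x):
    """Same quantity the long way: one-game prob -> best-2-of-3 prob -> gap."""
    p = p_win(x)
    p_series = p * p * (3 - 2 * p)   # win at least 2 of 3 games
    return 400 * log10(p_series / (1 - p_series))

def x_halve(y, lo=-4000.0, hi=4000.0):
    """Numeric inverse of x_double by bisection (x_double is increasing)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if x_double(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Both routes agree, an even match maps to an even match (x_double(0) = 0), and x_halve recovers the shorter-game gap that a given full-length gap corresponds to.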