As I've mentioned before, I'm an avid player of the game Warlocks, based on Waving Hands. This game uses two different ranking systems for competitive players, and the differences between them are interesting.
The first and simpler ranking system is ladder points, which work as follows: every time a player wins a ladder match, they gain one ladder point; every time a player loses a ladder match, they lose one ladder point; and every time a player dies during a ladder match, their ladder points drop to zero (note that in this game, most matches end with one player surrendering, not dying). Every player starts with a ladder score of zero, and you cannot have negative ladder points. This means that every time a player with no ladder points loses a match, a ladder point is created from the ether, and every time a player with ladder points loses a match, their point is effectively transferred to the winner. One more feature of ladder matches is worth mentioning: you cannot challenge a player to a ladder match if your ladder scores are more than 5 points apart.
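Here's a minimal sketch of those rules in Python. The function names and structure are my own (the actual Warlocks implementation isn't something I have access to), so treat this as illustrative only:

```python
MAX_CHALLENGE_GAP = 5  # you cannot challenge across a wider gap than this

def can_challenge(score_a: int, score_b: int) -> bool:
    """A ladder challenge is only legal if the scores are within 5 points."""
    return abs(score_a - score_b) <= MAX_CHALLENGE_GAP

def resolve_ladder_match(winner: int, loser: int, loser_died: bool) -> tuple[int, int]:
    """Return the (winner, loser) ladder scores after a match."""
    winner += 1                    # the winner always gains one point
    if loser_died:
        loser = 0                  # death wipes the loser's score entirely
    else:
        loser = max(loser - 1, 0)  # otherwise lose one, but never go negative
    return winner, loser
```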
The result of these features is that ladder scores rarely get very high. Since your ladder score will be reduced to zero by a single death, it takes a lot of skill (or luck) to continually grow your score. Moreover, since you cannot challenge an opponent whose score is more than 5 points from your own, the highest possible ladder score for any player is 7 points higher than the second-highest score (assuming the two began 5 points apart and the higher-ranked player won). This means that in order for me to have a ladder score higher than 20, there need to be other players with a ladder score of at least 15 whom I can challenge. The upper limit of ladder scores therefore depends on the presence of a population of successful ladder players who collaboratively create ladder points (by beating players with 0 ladder points) and then transfer them up the ladder to the best players.
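Using the sketch above, the 7-point cap falls right out: the widest legal challenge is a 5-point gap, and a single win stretches it to 7.

```python
top, challenger = 20, 15                   # exactly 5 apart: a legal challenge
assert can_challenge(top, challenger)
top, challenger = resolve_ladder_match(top, challenger, loser_died=False)
print(top, challenger, top - challenger)   # 21 14 7
assert not can_challenge(top, challenger)  # now too far apart to rematch
```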
We'll see a similar dynamic with the second ranking system: elo. As in chess rankings, elo is a system in which the change in a player's score is weighted by their expected likelihood of winning (which is, in turn, based on the competing players' relative elo scores). Each player who registers begins with an elo score of 1500, which defines that score as the expected skill level of an average new player. Each match results in one player gaining a number of points and the other losing an equal number of points - in other words, once again, matches effectively cause a "transfer" of points from one player to another. If a player with a lower score beats a player with a higher score, they earn more points from the win, and if the player with the higher score wins, they earn fewer points. The difference in points earned corresponds to a player's expected likelihood of winning - meaning that if I'm expected to have a 75% chance of defeating an opponent, I will earn 1/3 as many points for winning as he will if he wins, so that over the course of many games, elo scores will stabilize if players tend to win as often as they are expected to given their elo score. Since all new players start with 1500 points, they begin ranked as equals even though some may be stronger players than others. However, the differences in skill level will fairly rapidly be reflected in their scores once they begin playing ranked games.
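I don't know the exact constants the site uses, but the standard elo formula with a K-factor of 24 reproduces the numbers in the example below, so that's what this sketch assumes:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard elo expectation: player A's expected share of the point."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 24.0) -> tuple[float, float]:
    """Return the new (A, B) ratings; whatever A gains, B loses."""
    delta = k * ((1.0 if a_won else 0.0) - expected_score(rating_a, rating_b))
    return rating_a + delta, rating_b - delta
```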
Let's look at an example. I register a new account and start with 1500 elo. If I play and beat another new player, I will gain 12 points, to have a score of 1512, and their score will go down to 1488. Now the difference in our scores is 24, so if I play that same player again and win, I will gain slightly fewer points than I did the first time. Once the elo difference is over 100 points, I will gain 8 points from a win and my opponent will gain 16 points if he wins - as long as I win approximately twice as often as I lose, the elo difference will remain stable, but if I win more often, it will continue to go up, and if I lose more often, it will go down.
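Running the sketch above on that example (same assumed K of 24):

```python
# Two fresh accounts at 1500: the winner gains 12 and the loser drops to 1488.
print(elo_update(1500, 1500, a_won=True))            # (1512.0, 1488.0)

# With a gap of 120 points, a win for the favourite is worth about 8 points,
# while an upset win for the underdog is worth about 16.
print(round(24 * (1 - expected_score(1620, 1500))))  # 8
print(round(24 * expected_score(1620, 1500)))        # 16
```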
Notably, if the winning player is ahead by enough elo, they effectively gain no points from victory, so many high-ranked players will simply refuse to play ranked matches with much lower-ranked players (since they have nothing to gain and much to lose if they make a mistake). In practice, the maximum effective difference between players who can fairly compete in ranked matches is a little over 200 points. Any more of a difference and fluke wins by inexperienced players will unduly throw off the scores of high-ranked players.
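To see why roughly 200 points is the practical cutoff, here is how the favourite's reward for a win shrinks as the gap widens (same assumed formula and K; if the site rounds to whole points, a large enough gap leaves almost nothing to win):

```python
for gap in (0, 100, 200, 300, 400):
    gain = 24 * (1 - expected_score(1500 + gap, 1500))
    print(gap, round(gain, 1))   # 0 12.0, 100 8.6, 200 5.8, 300 3.6, 400 2.2
```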
All of this together suggests some interesting features of the elo economy. Since a winning player gains as much as their opponent loses from a match, the sum elo score of the player population cannot grow except by the addition of new players, and the existence of players with more than 1500 elo requires the existence of players with less than 1500 elo. Moreover, a player can only effectively grow their elo by playing opponents with an elo score within 200 points of their own, which means that growing your elo depends on a population of players with elos near your own; the highest possible elo in the system therefore depends on the number of successful players, which is in turn limited by the total number of players. That is, a population of new players is needed in order to support the elo growth of players with elos between 1500 and 1700, and a population of players with elos of at least 1700 is needed to support the elo growth of players with elos between 1700 and 1900.
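A quick sanity check of the "closed economy" point, reusing the elo sketch above: matches only move elo around, and registration is the only thing that adds elo to the pool.

```python
ratings = [1500.0, 1500.0, 1500.0]
total_before = sum(ratings)
ratings[0], ratings[1] = elo_update(ratings[0], ratings[1], a_won=True)
assert sum(ratings) == total_before             # a match conserves the total
ratings.append(1500.0)                          # a new player registers...
assert sum(ratings) == total_before + 1500.0    # ...and the pool grows by 1500
```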
As of this writing, there are 1577 players who have registered to play Warlocks, about 200 of whom have never played a ranked duel. Of the players who have played ranked games, 281 have an elo higher than 1500, and 419 have an elo lower than 1500. The lowest elo in the system is 1298 (202 points below the 1500 starting score) and the highest is 2106 (606 points above it). This suggests that in practice, a large population of weak players is needed to support the heightened elo scores of a relative few. There are two reasons for this: first, players who repeatedly lose will likely stop competing at some point; second, players who repeatedly lose will have their elos fall to the point where they no longer effectively feed the elo growth of stronger players.
Since the value of a win is weighted by the likelihood of the win, players who perform as well as expected will have stable elos - if you are about twice as good as the average new player (meaning twice as likely to win as to lose), your elo should stabilize around 1600. However, once a player enters the higher echelons of play, the relative dearth of other high-ranked players makes it harder to play enough balanced games to maintain a representative elo. In a population of players with elos from 1400 to 1600, it is unlikely for me to grow my elo above 1800, no matter how good I become at the game.
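Inverting the expectation curve makes the first claim concrete. Reading "twice as likely to win" as two wins for every loss (a 2/3 win rate), the stable gap works out to roughly 120 points, i.e. an elo of about 1600-1620 against a population of 1500-rated opponents:

```python
import math

def stable_gap(win_probability: float) -> float:
    """Rating gap at which the standard elo expectation equals the win rate."""
    return 400.0 * math.log10(win_probability / (1.0 - win_probability))

print(round(stable_gap(2 / 3)))   # ~120
```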
So the grand result is this: the total size of the elo economy is determined by the number of players in the system, and the larger the total elo economy is, the higher the elo ratings of the best players can be - but for the vast majority of players, the size of the elo economy will have no impact on their personal elo scores. That is, as a resource, the total quantity of elo in the system only affects the players at the top.
Now there are obvious disanalogies between the elo economy in Warlocks and market economies in the real world, but it nonetheless serves as an interesting model of a competition-driven economy. This is also not meant in any way to be a moral statement about how "just" the elo system is - the numbers simply represent the fact that some players win more often than others, and it is the explicit goal of the elo system to represent this. I simply think the unintended emergent features of the system are noteworthy, since they result from the interactions of over a thousand players.