Only inflated things are puzzle ratings, I think. And that's in comparison to the other lichess ratings.
You could argue that puzzles are somewhat like a variant, so the rating doesn't need to be related to chess ratings in any way, but I believe puzzle ratings are typically based on chess ratings and are more helpful that way.
A rating system is a school marking grade.
Nothing more, nothing less.
In chess it's your standings that you need to follow.
Work your standings number.
That way you compare yourself to the group of chess players on this site.
A car and a bike are not the same thing, so rating systems from different locations are exactly that: not the same thing.
A rating system could have used letters or symbols to rate a player, but they all use numbers. Still, a 10 on one site is not the same as a 10 on another site. White paint is not black paint, yet both are paint. Don't compare black paint to white paint, and don't compare a rating from one location to another. They will never be the same thing.
If you move around from one city to another, you will quickly discover that an average player with a 1400 rating in one town does not have the same skill level as a player from another town with the same 1400 rating.
To level off the differences, these players need to play in lots of common tournaments to finally place the players in some sort of standings order.
On lichess: lichess.org/@/Toscani/perf/rapid
Look at the left side, where it is written:
1630 (Increasing 15) ... 3372 games
Rank : 81,557
The last part, Rank: 81,557, is the number that you want to know!
Where do you stand among the Rapid chess players that play on Lichess ?
If your rating does not have a Rank number, that's because your rating is outdated.
You cannot compare an outdated rating with an up-to-date rating!
Your peak rating can only be compared to yourself, because of all the factors involved in how and when you reached it.
This Rank number, not the Rating number, is my standings number. It shows how many players above me are better than me.
How many above me are really trying to win or are some still using some form of assistance?
There are still lots of factors to consider even with this standings number, but at least a standings number is much more accurate than any rating system.
A rating system is there to help pair players with probably similar skill levels. A rating is a level, not a standings.
A level is as accurate as the person trying to maintain a rating number.
If you constantly get paired with players at a ±400 rating difference, that is not good for the rating system.
To be good for a rating system, you need to play against players with similar rating levels, within ±100. Even at ±200 in chess tournaments, it is obvious who will be in the top third of a group.
Tournaments need to be so close in rating that you cannot guess who will be in the top third of the standings.
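The ±100 versus ±400 point can be made concrete with the standard Elo expected-score formula. This is only a sketch: lichess actually uses Glicko-2, but the expected-score curve is the same shape, and the example ratings are made up for illustration.

```python
# Standard Elo expected score: E = 1 / (1 + 10^((Rb - Ra) / 400)).
# Shows why a +/-400 pairing is lopsided while +/-100 is still competitive.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (0..1) for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

for diff in (100, 200, 400):
    print(f"+{diff}: expected score {expected_score(1500 + diff, 1500):.2f}")
# +100: 0.64  -- close to a coin flip, a fair pairing
# +200: 0.76  -- the favourite is already obvious
# +400: 0.91  -- barely a contest
```

At ±100 the stronger player scores about 64%, so upsets are routine; at ±400 they score about 91%, which is why such pairings tell the rating system almost nothing.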
Good post. But why is the rating, and not ranking, used for matching players? Is +/- 100 rating better than +/- 100 (or whatever) ranking?
The problem is called resisting change.
It's everywhere: all standards resist adapting to new ideas or new standards.
A bit like being asked to wear a mask for others. So we don't exhale germs directly onto them.
Since we don't really see anything or might not understand, we are told to keep our distances.
If your backs are touching, back to back, and you exhale away from each other, I would assume distance is no longer a factor, besides being told it should be maintained. If it must be, then I don't understand why.
So to get back on track....
Inflated ratings on a site are caused by the players, not by the formula that the site uses.
The formula is the same for all players.
Now if the ratings of all the players were constantly increasing and no new players were added to the site, then the inflation would be caused by the formula.
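The closed-pool argument can be sketched with a plain Elo update (K=32 chosen as a common example value; lichess itself uses Glicko-2, so this is an approximation of the principle, not the site's formula):

```python
# A single plain-Elo update is zero-sum: whatever one player gains, the
# other loses, so total rating in a closed pool is conserved. Inflation
# therefore needs rating entering the pool from outside, e.g. new
# accounts seeded above their true strength.

K = 32  # assumed example K-factor

def expected(ra: float, rb: float) -> float:
    return 1 / (1 + 10 ** ((rb - ra) / 400))

def update(ra: float, rb: float, score_a: float):
    """New ratings after one game; score_a is 1, 0.5 or 0 for player A."""
    ea = expected(ra, rb)
    new_a = ra + K * (score_a - ea)
    new_b = rb + K * ((1 - score_a) - (1 - ea))
    return new_a, new_b

a, b = 1500, 1700
new_a, new_b = update(a, b, 1)          # the underdog wins
print(round(new_a + new_b, 6) == a + b)  # True: the pool total is unchanged
```

Since every game conserves the pool total, the average can only drift if players enter or leave with ratings that don't match their strength.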
"They are just different ratings with different rating pools and sets of players. Who says FIDE ratings are the most correct?"
The national federations' ratings are also different rating pools and sets of players, but the national federations periodically recalibrate national ratings so as to minimize the difference between the FIDE rating and the national rating of their FIDE-rated members. So they add to or subtract from all ratings to maintain parity.
"A rating system is there to help pair players with probably similar skill levels."
1) That is the primary reason, but also
2) to decide who plays on board 1 of a team: fine positional player A or brilliant tactician B
3) to measure progress "I gained 200 rating in 2 months, is that great progress?" - "No, rating got inflated by an influx of new beginners seeded at 1500 and now at 1000, donating rating to the pool."
4) To answer questions like "I am rated 1750 lichess classical and want to play my first over the board tournament. Will I win an U1800 tournament? Will I lose all my games if I play the open to all tournament?"
5) To answer questions like "I am rated 2000 lichess rapid and 1900 lichess blitz, does that mean I am stronger at rapid than at blitz?" - "No, lichess rapid rating is inflated more than blitz rating."
6) To supply context "I am rated 1400 and want to know if the Najdorf is good for me to play"
7) To weigh forum posts "You are rated 2000 and he is 2300 so you must be wrong and he must be right"
8) Some players confound elo and ego.
In medieval times every village had its own thumb, fathom, foot, yard, mile. To facilitate trade all those measures were standardised.
We should divide all ratings by 10 or something to try to find parity with BCF. God save the queen.
Maybe just rating the players from 1 to 100 % would have done the trick, like better than 66.2% of Rapid players.
Then sort the players in a tournament by their percentage number.
That way nobody can pretend to be able to convert ratings, or say things like "over-inflated" or "under-inflated" rating.
Maybe a player's daily median performance rating graph would have been enough.
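The percentile idea above follows directly from the Rank number: given a rank and the total pool size, "better than X%" is one division. A minimal sketch, where the pool size is an assumed example figure, not an actual lichess statistic:

```python
# Hypothetical sketch: turn a Rank number into a "better than X%" figure,
# then sort a tournament field. The pool size below is an invented example.

def percentile_beaten(rank: int, pool_size: int) -> float:
    """Share of the pool ranked below you, as a percentage."""
    return 100.0 * (pool_size - rank) / pool_size

# Sorting a field by rating or by percentile gives the same order within
# one pool; percentiles just make the scale comparable across pools.
players = {"A": 1630, "B": 1820, "C": 1410}
field = sorted(players, key=players.get, reverse=True)
print(field)  # ['B', 'A', 'C']

print(f"better than {percentile_beaten(81_557, 241_000):.1f}% of the pool")
```

Within a single pool the ordering is identical either way; the percentage only helps when people want to compare across sites with different rating scales.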
Why can't we just choose our own 'correct' rating. Problem solved.
The problem is that people are half-assing problem descriptions while wanting great changes. I've put in more work than most, and even my more serious descriptions (specifying testable, narrow problems that I think can be improved with minimal risk) aren't enough.
What isn't feasible is having a rating system which both works well online and uses the FIDE rating system (Elo, not Glicko).