@Hrant_Petrosyan
I think Atomic chess is based on real chess because 1. it only has a few extra rules, all stemming from one main extra rule,
and 2. it has the same setup and moves as a normal chess game, with most of the same check and checkmate rules.
RK, Horde, and Antichess are purely for fun, I think. RK is obviously not like real chess, Horde gives White dozens of extra pawns, and Antichess is... ANTIchess.
@jdwhite42 said in #31:
> @Hrant_Petrosyan
>
> I think Atomic chess is based on real chess because 1. it only has a few extra rules, all stemming from one main extra rule,
> and 2. it has the same setup and moves as a normal chess game, with most of the same check and checkmate rules.
>
> RK, Horde, and Antichess are purely for fun, I think. RK is obviously not like real chess, Horde gives White dozens of extra pawns, and Antichess is... ANTIchess.
You are right.
@piazzai said in #18:
> I just published a blog post on this question. I analyzed 1.3 million games, and it turns out variants do not help you win at standard chess. Most variants make you worse.
>
> lichess.org/@/piazzai/blog/do-variants-help-you-play-better-chess-statistical-evidence/0tAPXnqH
>
> Three-check and Racing Kings seem to be the worst. Crazyhouse does not hurt that much. Chess960 practically makes no difference.
Thank you so much! Verifiable, convincing facts were exactly what was missing in this debate.
Also, it is always pleasing to see that I was right about Atomic hurting your standard skills :)
As noted in the conclusion of your blog post, it would be interesting to do a similar analysis including only *experienced* variant players.
Also, I have a side question, slightly unrelated. I believe the Glicko rating is designed so that the rating difference should in principle determine the expected value of the game outcome (something like that). With the data you collected, I guess it is easy to read off whether that is actually the case?
@SD_2709 said in #24:
> @piazzai How does Racing Kings / three-check have a stronger effect than Antichess? In Antichess you literally have to give all your pieces away.
> I would agree with the rest of the numbers in your blog except Antichess.
How can you disagree with FACTS? Piazzai's blog post is not opinion; it is a sound statistical analysis. Facts can be surprising, and they can contradict what you would expect based on your opinion, but when that occurs, since you can't change the facts, you should consider changing your opinion, or investigate further.
@polylogarithmique Uhm... you probably misunderstood me, then. I meant that I am extremely surprised that three-check or Racing Kings is more harmful to standard chess than Antichess. I don't disagree; I accept the numbers. I am just surprised.
Really commendable to make a data-driven analysis and see what the data say about the question 'do variants help?'.
But I think one should always be careful not to overextend the conclusions that can be drawn.
Here the conclusion is that for the average player on Lichess, playing variants in the week before a game of standard chess tends to have a negative effect on performance (win ratio).
Plenty of questions are still left open. What about comparing two groups: one starts playing a variant for a longer period (many months) while still playing standard chess, while the other plays only standard chess during that same period. How does each group's performance in regular chess develop over that period? And what if a similar analysis were restricted to stronger players? Maybe at shorter time controls, where quick reflexes are important, the variants are detrimental, but at longer time controls they could be helpful because they extend the range of situations the player encounters. Possibly under the right circumstances, or for a particular subset of players, there is a benefit? Etc.
So the question of whether variants are 'real chess' is really a subjective one, but the question of whether variants help or hinder a player's standard chess is only partially answered.
@ThisShouldBeFun said in #20:
> Not all of them; Antichess doesn't help with standard skill.
Antichess helps standard chess the most, if studied deeply. It helped me improve from 1000 to 1400.
Also, you don't even play Antichess. How can you make such statements? Just because the rules say one thing, it doesn't mean the strategy follows directly from them.
@polylogarithmique said in #33:
> Thank you so much! Verifiable, convincing facts were exactly what was missing in this debate.
> Also, it is always pleasing to see that I was right about Atomic hurting your standard skills :)
>
> As noted in the conclusion of your blog post, it would be interesting to do a similar analysis including only *experienced* variant players.
>
> Also, I have a side question, slightly unrelated. I believe the Glicko rating is designed so that the rating difference should in principle determine the expected value of the game outcome (something like that). With the data you collected, I guess it is easy to read off whether that is actually the case?
Thank you for reading it. I'd be happy to analyze more data in the future if the community is interested.
With regard to your question: you are right. Just like Elo, Glicko-2 aims to indicate a player's position within a skill distribution. If player A is rated higher than B, then A is expected to win more often than B does in repeated matches between A and B. But how much more often?
In my model, if the rating of B minus the rating of A increases by one point, meaning that B is one point above A regardless of how strong A is, then the odds of victory for A decrease by 0.8%. This effect is statistically significant, so the answer to your question is: yes, the rating difference is predictive of a game's outcome. If it favors A, then, all other things being equal, A is expected to win.
The effect is small, but a single rating point is a pretty trivial difference. For a 10-point difference, the odds of victory for A decrease by 7.7%; for a 25-point difference, by 18.2%. Incidentally, the latter is approximately the same decrease that A experiences for playing black instead of white, so you could say that White's advantage in a standard game amounts to a 25-point rating difference between the players. This is of course specific to Glicko-2 ratings on Lichess and might not hold for other pools of players or other rating systems.
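To make the compounding explicit: the 10-point and 25-point figures follow from the 0.8% figure if each rating point scales the odds by a constant factor, as in a logistic regression. Here is a minimal sketch of that arithmetic; the logistic-odds reading is an assumption on my part, not something stated above.

```python
# Sketch: how a constant per-point effect on the odds compounds over
# larger rating gaps. The multiplicative (logistic-regression) reading
# of the 0.8% figure is an assumption, not a detail from the post above.

per_point = 1 - 0.008  # odds multiplier for each point B gains on A

for gap in (1, 10, 25):
    remaining = per_point ** gap  # fraction of A's odds that is left
    print(f"{gap:>2}-point gap: odds of victory for A drop by "
          f"{(1 - remaining) * 100:.1f}%")

# Prints:
#  1-point gap: odds of victory for A drop by 0.8%
# 10-point gap: odds of victory for A drop by 7.7%
# 25-point gap: odds of victory for A drop by 18.2%
```

Note that these are changes in the *odds* of victory, p/(1-p), not in the win rate itself, so the 25-point figure is less dramatic than it may first appear.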
@piazzai 10 rating points can't be a win-rate difference of nearly 8%; something must surely have gone wrong there! Nearly 20% just because the ratings differ by 25 points? Did you lose a zero or something?
Edit: sorry, I haven't read exactly what your model is; I thought we were talking about Glicko, so never mind if it isn't.
@piazzai Yeah, but what I mean is that I believe the rating system is designed so that there is a formula which gives you the expected outcome as a function of the rating difference (at least for the Elo rating).
In an "ideal" world, this formula would match the empirical data exactly. Now, because Elo (and Glicko) rest on mathematical assumptions that are not necessarily satisfied by the real pool of players, a priori there is a difference between the average outcome of all games where the rating difference is exactly 100 (say) and what the formula predicts.
I would be curious to know whether there is actually a big difference between the two.
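For reference, the formula in question for Elo is E = 1/(1 + 10^(-(R_A - R_B)/400)); Glicko's version additionally discounts the rating difference when the ratings are uncertain (the g(RD) factor). Checking it against the data would amount to binning games by rating difference and comparing the empirical average score with the curve. A minimal sketch of that comparison; the `games` list here is a hypothetical placeholder, not piazzai's actual dataset:

```python
# Sketch: comparing the Elo expected-score curve with empirical results.
# `games` is a hypothetical placeholder: (rating difference, score) pairs,
# with score 1.0 = win, 0.5 = draw, 0.0 = loss for the first player.
from collections import defaultdict

def elo_expected(diff: float) -> float:
    """Elo expected score for a player rated `diff` points above the opponent."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

games = [(100, 1.0), (100, 0.5), (95, 1.0), (-50, 0.0), (-55, 0.5)]

# Bin games into 50-point buckets of rating difference.
buckets = defaultdict(list)
for diff, score in games:
    buckets[round(diff / 50) * 50].append(score)

for diff in sorted(buckets):
    scores = buckets[diff]
    empirical = sum(scores) / len(scores)
    print(f"diff {diff:+4d}: empirical {empirical:.3f}, "
          f"Elo prediction {elo_expected(diff):.3f} ({len(scores)} games)")
```

With real data, a large systematic gap between the empirical column and the prediction column at some rating differences would answer your question.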