While thinking about chess improvement strategies, I became very interested in the relationship between tactics training and in-game improvement. Unfortunately, Lichess doesn't provide the information necessary for an in-depth assessment of that relationship.
So I got the data myself...
# Question 1: What is the distribution of tactics/puzzles ratings?
Lichess does not offer this information at the moment, but it does provide ratings distributions for other chess variants:
en.lichess.org/stat/rating/distribution/blitz
# Question 2: How strong is the relationship between tactics skills and in-game playing strength?
# Methodology
1. Copy the names of all members of the most popular Lichess Team. This gives us a sample of nearly 5000 players.
2. Record the rating of each player in every chess variant, as well as her tactics training rating.
3. Plot the distribution of tactics ratings.
4. Estimate a linear regression model to measure the association between tactics ratings and classical (or blitz) ratings.
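Step 2 of the methodology can be sketched roughly as below. The dict is a hand-made stand-in for one player's profile JSON (on Lichess, `https://lichess.org/api/user/<username>` returns something of this shape); treat the field names (`perfs`, `puzzle`, `classical`, `blitz`) as assumptions to verify against the actual response:

```python
# Sketch of step 2: pull the ratings we care about out of one player's
# profile JSON. The sample dict below is a stand-in; field names are
# assumptions modeled on Lichess's public user endpoint.

def extract_ratings(profile):
    """Return (tactics, classical, blitz) ratings, or None where missing."""
    perfs = profile.get("perfs", {})

    def rating(key):
        perf = perfs.get(key)
        return perf.get("rating") if perf else None

    return rating("puzzle"), rating("classical"), rating("blitz")

sample_profile = {
    "id": "some_player",
    "perfs": {
        "puzzle":    {"rating": 1451},
        "classical": {"rating": 1511},
        "blitz":     {"rating": 1408},
    },
}

print(extract_ratings(sample_profile))  # (1451, 1511, 1408)
```

Looping this over the ~5000 team members gives the three columns the rest of the analysis needs.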
# Results: Distribution of tactics ratings
Overall, the distribution of tactics ratings looks like something between the classical and blitz distributions (but with higher variance than either):
* 25% of players in the sample have tactics ratings below 1229 (classical=1331; blitz=1251)
* 50% of players in the sample have tactics ratings below 1451 (classical=1511; blitz=1408)
* 75% of players in the sample have tactics ratings below 1710 (classical=1725; blitz=1633)
On average, players' tactics rating tends to be 87 points lower than their classical rating.
On average, players' tactics rating tends to be 21 points higher than their blitz rating.
On average, players' classical chess rating tends to be 103 points higher than their blitz rating.
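The quartiles and mean gaps above fall out of the ratings columns directly. A minimal sketch with made-up numbers (NOT the real sample, which is in the CSV linked below):

```python
import numpy as np

# Toy ratings standing in for the ~5000-player sample (synthetic values;
# the 87-point mean gap is baked in purely for illustration).
rng = np.random.default_rng(0)
tactics = rng.normal(1470, 340, size=5000)
classical = tactics + rng.normal(87, 230, size=5000)

# Quartiles of the tactics distribution (step 3 of the methodology)
q25, q50, q75 = np.percentile(tactics, [25, 50, 75])

# Mean gap between classical and tactics ratings
mean_gap = np.mean(classical - tactics)
print(round(q25), round(q50), round(q75), round(mean_gap))
```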
Full distributions graph here: http://imgur.com/a/BiNzr
# Results: Association between puzzles and in-game performance
Here's a scatter plot of the relationship between tactics skills and in-game strength (with LOESS fitted line): http://imgur.com/a/BiNzr
The association between tactics skills and in-game strength is obviously positive: good tacticians are hard to beat (not surprising).
Interestingly, the relationship between tactics ratings and classical chess ratings appears to be strongly linear. If you're willing to take liberties with causal interpretation, this could suggest that there are no "decreasing returns" to tactics improvement. Do more puzzles!
On average, if you compare two players whose tactics ratings differ by 100 points, the higher-rated tactician will tend to have a classical rating about 62 points higher (and a blitz rating about 61 points higher).
But tactics isn't everything! The training ratings "explain" only about half of the variance in classical and blitz ratings (R^2=0.48 in the linear regressions).
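For anyone who wants to reproduce the regression, here is a sketch on synthetic data tuned so the slope and R^2 land near the reported values (b ≈ 0.62, R^2 ≈ 0.48); the real estimates come from the CSV linked below:

```python
import numpy as np

# Illustration of the regression in the post: classical ~ a + b * tactics.
# Synthetic data only -- the coefficients are chosen to mimic the results.
rng = np.random.default_rng(42)
tactics = rng.normal(1470, 340, size=5000)
classical = 600 + 0.62 * tactics + rng.normal(0, 220, size=5000)

slope, intercept = np.polyfit(tactics, classical, 1)

# R^2 = 1 - residual sum of squares / total sum of squares
pred = intercept + slope * tactics
ss_res = np.sum((classical - pred) ** 2)
ss_tot = np.sum((classical - classical.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# A 100-point tactics gap maps to roughly 100 * slope classical points.
print(round(100 * slope), round(r2, 2))
```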
# Is the sample representative?
Probably not.
Measurement issues: There may be ratings drift over time, and some of the players I looked up may not have played in a while.
Selection problem: Members of this particular team may differ systematically from the broader Lichess population.
Oh well!
# Want to play with the data?
Here's a link to the data in CSV format: pastebin.com/CJLxKRAA
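A quick way to start exploring it from Python's standard library. The column names below are guesses for illustration; check the header row of the actual file first:

```python
import csv
import io
import statistics

# Stand-in rows mimicking the CSV; in practice, open the downloaded file
# instead of this string. Column names are assumptions.
csv_text = """username,puzzle,classical,blitz
player_a,1229,1331,1251
player_b,1451,1511,1408
player_c,1710,1725,1633
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
puzzle = [int(r["puzzle"]) for r in rows]
classical = [int(r["classical"]) for r in rows]

print(statistics.median(puzzle))  # 1451
print(statistics.mean(c - p for c, p in zip(classical, puzzle)))
```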