Maia public website
maiachess.com/ | CAPTURING HUMAN STYLE IN CHESS
From that website:
> You can play against Maia yourself on Lichess! You can play Maia 1100, Maia 1500, and Maia 1900.
@maia1 @maia5 @maia9

Blog posts about Maia:
Computational Social Science Lab blog: "Introducing Maia: a Human-Like Chess Engine" (Aug 24, 2020)
csslab.cs.toronto.edu/blog/2020/08/24/maia_chess_kdd/
Broken links on that page: all outlinks are dead except those to SF, LC0, and a possible early version of the paper (fig11?).
Microsoft Research: Blog: The human side of AI for chess Published November 30, 2020
www.microsoft.com/en-us/research/blog/the-human-side-of-ai-for-chess/
To chew on: they seem very happy with roughly 50% move matching. I agree more with the point that 1900-rated and 1100-rated play do not represent the same error model; Maia might have embedded those rating bands.
But that is only the blog. Unfortunately, it does not make clear what the proper reference basis for the error model is. It reads as though they actually did some reinforcement learning, but that would not make sense, and the topic is simply not touched.
The following are text excerpts I want to ponder or dissect when going through the paper itself. It is possible the second paper offers a better presentation of both error models, since it has to present prior work and differentiate clearly from it.
> Maia is an engine designed to play like humans at a particular skill level. To achieve this, we adapted the AlphaZero/Leela Chess framework to learn from human games. We created nine different versions, one for each rating range from 1100-1199 to 1900-1999. We made nine training datasets in the same way that we made the test datasets (described above), with each training set containing 12 million games. We then trained a separate Maia model for each rating bin to create our nine Maias, from Maia 1100 to Maia 1900.
> Importantly, every version of Maia uniquely captures a specific human skill level since every curve achieves its maximum accuracy at a different human rating. Even Maia 1100 achieves over 50% accuracy in predicting 1100-rated moves, and it’s much better at predicting 1100-rated players than 1900-rated players!
> This means something deep about chess: there is such a thing as “1100-rated style.” And furthermore, it can be captured by a machine learning model. This was surprising to us: it would have been possible that human play is a mixture of good moves and random blunders, with 1100-rated players blundering more often and 1900-rated players blundering less often. Then it would have been impossible to capture 1100-rated style, because random blunders are impossible to predict. But since we can predict human play at different levels, there is a reliable, predictable, and maybe even algorithmically teachable difference between one human skill level and the next.
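The "over 50% accuracy" claim in the excerpt is a move-matching rate: the fraction of positions where the model's top move equals the move the human actually played, computed separately per rating bin. A minimal sketch of that metric, with hypothetical record tuples and a made-up `move_matching_accuracy` helper (the paper's actual evaluation pipeline is not shown in the blog):

```python
from collections import defaultdict

def move_matching_accuracy(records, bin_size=100):
    """Per-rating-bin move-matching rate.

    records: iterable of (player_rating, model_predicted_move, move_played),
    moves as UCI strings. Returns {bin_start: fraction_matched}, so a
    1100-1199 player falls in bin 1100 (mirroring the paper's rating bins).
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for rating, predicted, played in records:
        bin_start = (rating // bin_size) * bin_size
        totals[bin_start] += 1
        if predicted == played:
            hits[bin_start] += 1
    return {b: hits[b] / totals[b] for b in sorted(totals)}

# Toy records: (player rating, model's predicted move, move actually played)
records = [
    (1120, "e2e4", "e2e4"),
    (1150, "g1f3", "b1c3"),
    (1930, "d2d4", "d2d4"),
    (1960, "d2d4", "d2d4"),
]
print(move_matching_accuracy(records))
# {1100: 0.5, 1900: 1.0}
```

The claimed "unique capture" of a skill level then corresponds to each Maia model's accuracy curve peaking at its own training bin when this metric is evaluated across all bins.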
www.microsoft.com/en-us/research/publication/aligning-superhuman-ai-with-human-behavior-chess-as-a-model-system/
Page-top GIF direct link: an animated schematic of the whole problem as the researchers formulate it (what am I gonna call it otherwise?).
www.microsoft.com/en-us/research/uploads/prod/2020/11/1400x788_AiChess_nologo.gif
> Editor’s note: The section “Modeling individual players’ styles with Maia” has been updated as of July 12, 2021.
www.microsoft.com/en-us/research/publication/learning-personalized-models-of-human-behavior-in-chess/
Memo: find which page links to this tool (likely the main Maia public showcase):
csslab.github.io/Maia-Agreement-Visualizer/