Comments on https://lichess.org/@/matstc/blog/a-chess-metric-ease-for-humans/dIqTm3AJ
@matstc
I liked the idea, and I like how well you explained the formula.
This is cool.
I could see this being used in some puzzles for the opponent to have more human-like responses. It could even go a step further and opt for moves that give the puzzlegoer the lowest certainty for the correct move!
@Blinto said in #3:
> I could see this being used in some puzzles for the opponent to have more human-like responses. It could even go a step further and opt for moves that give the puzzlegoer the lowest certainty for the correct move!
For sure!
Actually there are quite a few papers on puzzle complexity (see [further reading](https://colab.research.google.com/drive/1LEXjH18A34lkZw2qwHIV0AwNuJrjLBGR#scrollTo=mbCh7j2EZbhR)).
I'm Vasya.
Now my head hurts... is that calculus?
I had this very same idea (compare goodness and human-choice likelihood to get "position difficulty") long ago, but had quite a lot of difficulty deciding on the exact formula.
So I am very excited and interested that you found this approach.
But I am still confused about how you get the probability P_i from Maia or a neural network. Can you give more explanation of how you get it? I tried something simple like P_i = exp(-Q_i) but it does not look good.
Also, can you explain how you determine the parameters α and β? They are not totally arbitrary, are they?
Thank you for your blog, really.
@F-14_the_Maomao said in #7:
> But I am still confused about how you get the probability P_i from Maia or a neural network. Can you give more explanation of how you get it? I tried something simple like P_i = exp(-Q_i) but it does not look good.
P_i comes from lc0's verbose move stats: https://lczero.org/dev/wiki/technical-explanation-of-leela-chess-zero/#example---verbose-move-stats-output
As for α and β, they are quite arbitrary. I chose simple values that fit my personal chess intuition. The formula would still work without these terms... They just make the output values more intuitive. For example, α stretches out the curve at the easy end so there is more resolution there. And β biases the ease up when a move or some moves have high percentages. Without these, I felt that the ease values huddled around the middle and that was not as helpful.
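To make the answer above concrete, here is a minimal sketch of pulling the per-move policy priors P_i out of a verbose-move-stats dump. The sample lines are illustrative only (real lc0 output has more fields per move); the one assumption is that each move line contains a `(P: xx.xx%)` entry, as shown on the linked wiki page.

```python
import re

# Illustrative stand-in for lc0 "verbose move stats" output.
# Real output has more fields; we only rely on the "(P: xx.xx%)" entry.
SAMPLE = """\
info string e2e4 N: 8123 (P: 54.21%) (Q: 0.03512)
info string d2d4 N: 3410 (P: 31.07%) (Q: 0.02990)
info string g1f3 N:  812 (P:  9.45%) (Q: 0.01544)
"""

def policy_priors(verbose_stats: str) -> dict:
    """Extract each move's policy prior P_i (as a fraction in [0, 1])."""
    priors = {}
    for line in verbose_stats.splitlines():
        m = re.search(r"info string (\S+).*\(P:\s*([\d.]+)%\)", line)
        if m:
            priors[m.group(1)] = float(m.group(2)) / 100.0
    return priors
```

With these P_i in hand, the ease formula from the blog can be evaluated directly on any position lc0 has analysed.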
> I would love an evaluation that will actually reflect how easy or hard to play a position is, for human beings.
It's refreshing to see critical thinking in action! I hope that players start realizing that while "AI" often hallucinates, tools designed by humans provide useful information.
A nice way to understand how easy a position is to play is to actually use a "bad" engine (and by bad I mean a 2800-rated engine or something), or Stockfish capped at, say, depth 15.
This is not flawless of course, but a big problem with super-strong engines is that, if they look deep enough, they'll find draws in most cases, bringing the evaluation to 0.00. However, if you pay close attention (or if your computer is rather slow) you'll notice that in positions that are very hard to play, the evaluation actually starts at a certain value, and that value only goes down and down with depth, as the engine starts to figure out concrete ways to draw.
So, for example, if you want to know how easy a certain endgame is to defend, looking at Stockfish's evaluation at depth 15 can be really useful, as it will probably display a number (say, -0.42) instead of 0.00, which would mean that while the endgame is drawn, Black is the one playing comfortably for a win.
Just another thought I wanted to leave here :)
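That depth-gap heuristic could be sketched as a tiny helper. Everything here is my own illustration, not from the blog post: the function names and thresholds are made up, `shallow_cp` would come from a depth-capped engine run (e.g. Stockfish at depth 15), and `deep_cp` from a deep search or a tablebase verdict (0 for a known draw).

```python
def practical_pressure(shallow_cp: int, deep_cp: int = 0) -> int:
    """Centipawn gap between a shallow (depth-limited) eval and the
    deep/objective eval. A large gap in a drawn position suggests the
    defender must find concrete, non-obvious moves to hold."""
    return abs(shallow_cp - deep_cp)

def defense_difficulty(shallow_cp: int, deep_cp: int = 0) -> str:
    """Rough, made-up buckets for how hard a 'drawn' position is in practice."""
    gap = practical_pressure(shallow_cp, deep_cp)
    if gap < 20:
        return "easy hold"
    if gap < 80:
        return "some accuracy required"
    return "hard to defend in practice"
```

For the -0.42 endgame example above, `defense_difficulty(-42)` would land in the middle bucket: drawn on paper, but Black can press.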