
Should Lichess have an option for higher depth for "Request a Computer analysis"?

If you want deeper analysis, load an engine locally. Also, in interactive mode the analysis runs on your machine, so you can ask for deeper analysis of the position.

Investing more CPU in server-side analysis is probably not reasonable. Someone has to donate the CPU power to do it, so it is a resource-limited service.


I asked on the Lichess Discord server what the fishnet clients were doing when analyzing games, in particular which form of the UCI 'go' command was being used. I got the following informative answer. Thanks to revoof for the quick response.

-----quote
revoof:
here in the code is the go command: https://github.com/niklasf/fishnet/blob/4639eccf202526dd91dc20d7daa4d9d78abc2ad7/src/stockfish.rs#L331-L335. if you trace it back all the way to lila, you will find https://github.com/ornicar/lila/blob/36472373153769f84a94c9db95e9292608c85360/conf/base.conf#L422. so it's a fixed node limit, and depth may vary. indeed analysis in the browser surpasses it quite quickly, when modern webassembly features are supported by the browser
-----end quote

In that link, we see the line of code...
analysis.nodes = 1500000 # sf 15 dev

So the answer is that depth varies throughout the game analysis, and that the "go" command uses a maximum node count of 1.5 million. Also, Stockfish in the browser will usually go deeper.
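For illustration, here is a minimal Python sketch, in the spirit of the stockfish.rs code linked above, of how a client might turn such a node budget into a UCI "go" command. The helper name `build_go_command` is hypothetical; only the command syntax follows the UCI protocol.

```python
# Hypothetical helper: format a UCI "go" command from whichever limits are set.
# The "go nodes N" form is what fishnet uses; "go depth D" would fix depth instead.
def build_go_command(nodes=None, depth=None, movetime=None):
    parts = ["go"]
    if nodes is not None:
        parts += ["nodes", str(nodes)]
    if depth is not None:
        parts += ["depth", str(depth)]
    if movetime is not None:
        parts += ["movetime", str(movetime)]
    return " ".join(parts)

# With lila's configured limit of 1.5 million nodes:
print(build_go_command(nodes=1_500_000))  # go nodes 1500000
```

With a node limit like this, the engine stops when the budget is spent, so the depth it reaches depends on the position.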

I assume that the maximum node count was chosen based on the type and number of machines whose computer time was donated as fishnet clients, among other considerations. I don't know how often that value is reviewed for possible change.


@petri999 said in #11:

> If you want deeper analysis, load an engine locally. Also, in interactive mode the analysis runs on your machine, so you can ask for deeper analysis of the position.
>
> Investing more CPU in server-side analysis is probably not reasonable. Someone has to donate the CPU power to do it, so it is a resource-limited service.

In Post #7 I said:

> One can, of course, manually look at every move separately with higher depth, before or after making a request.
> This, though, may be pretty slow and inefficient.

What do you mean by "interactive mode"? Do you mean just local machine analysis?

"Investing more cpu on server side analysis in probably not reasonable." On what grounds do you say this?
If the fishnet depth were raised from ~15 to 18+, would that cause problems or a considerable slow-down for the server or fishnet?
Have you heard of ANY problems right now with computer analysis on Lichess because of the lack of donated CPU power?
I wonder about the storage factor; but would it really take that much extra space compared to what is already being used?
Couldn't we accommodate that extra?
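On the cost side of that question, here is a rough back-of-envelope sketch. It assumes node count grows roughly as b^depth, with an effective branching factor b ≈ 2 as a ballpark for heavily pruned alpha-beta engines; this is an illustrative assumption, not a measured Stockfish figure.

```python
# Back-of-envelope only: assumes nodes ~ b**depth with an assumed effective
# branching factor b (~2 is a rough ballpark for heavily pruned alpha-beta
# search, not a measured Stockfish value).
def node_cost_ratio(depth_from, depth_to, branching_factor=2.0):
    return branching_factor ** (depth_to - depth_from)

print(node_cost_ratio(15, 18))  # 8.0 -> roughly 8x the nodes per game
```

Under that assumption, going from depth 15 to 18 would multiply the donated CPU needed per game by something like 8x, which may be why the node budget is conservative.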

Your Thoughts,
Thanks.


Depth has virtually nothing to do with the quality of analysis. Nodes searched is the correct measure, and it's the one fishnet uses.

Fishnet resources are donated. It's quite pointless to suggest other people donate more computer time.

If you want the quota raised, donate to Lichess.

https://www.youtube.com/watch?v=C2SjcVbRfp0


Did anybody ask for lower depth, for any type of analysis? If not, I think that would be a better question here. Sorry, OP: if you want more, you can do it manually wherever you are curious. What is lacking is the ability to explore a human-scale horizon, and/or the extent of pruning. Given iterative deepening's reliance on the concept of continuity, that could be relaxed by considering depths shorter than 20. That would also enable human post-game analyses, to get some sense of SF's actual full position evaluation function, because the PV display in the multiPV box would then have more chance of being complete down to the tip, where such evaluations occur. As it is now, at depth 22 or more, that tip is never shown, so the score seems kind of magical....


On my PC it takes 10-15 seconds for one position, while cloud analysis takes ~5 seconds for an entire game, so I'm really glad for everything I can get.

I didn't understand the connection between nodes and depth anyway.
Obviously I know what depth means, and I can imagine a giant tree diagram for the nodes, but I have no clue how the computer limits or finds candidate moves, or how it evaluates a position.

Tomorrow I'll try to make more sense of this article I found about SF 12; they explain some of it:

https://towardsdatascience.com/dissecting-stockfish-part-1-in-depth-look-at-a-chess-engine-7fddd1d83579


@dboing said in #15:

> Did anybody ask for lower depth, and for any type of analysis? if not here I think that would be a better question, sorry op, if you want more you can do it manually wherever you are curious. What is lacking is the ability to explore human scale horizon, and or extent of pruning. Which, given iterative deepening reliance on the concept of continuity, can be relaxed by considered shorter depths than 20. That would also enable human post game analyses, to get some sense of the actual full position evaluation function of SF, because the PV display in the multiPV box would then have more chance to be complete down to the tip, where such evaluations occur. Now, at 22 or more, that tip is never shown, so that the score seems kind of magical....

You have to scroll through it (when you're on a PC it will show you a small chessboard when hovering the mouse over the line, where you can mousewheel through the line without changing the position on the board). You can toggle a setting, not 100% sure but it might be "inline notation", and you see the complete line in a box.

Unless by "tip" you meant the human horizon, then never mind me. In that case it would indeed be better to go for way lower depth and just use it as a learning tool to see what's good, what's bad, how many moves are even possible, etc. The only problem: the computer sometimes waits 20+ moves before recapturing a pawn, so you might miss out on the absolute best line that just pushes an advantage (e.g. space or king safety) veeeeery patiently.


@Rookitiki said in #17:

> you have to scroll through it (when your on a PC it will show you a small chessboard when hoovering the mouse over the line where you can mousewheel through the line without changing the position on the board). you can toggle a setting, not 100% sure but might be "inline notation" and you see the complete line in a box.

Yes, it would seem long enough, but if you go and actually count the plies, you will not get the full depth. And even if there were no hash table or other caches (somewhere between the local client machine, Lichess servers, and other cloud machines), that is happening. Even if you are lucky enough to be the first to toggle the engine on a position, you do not get the full length of the PV.

This is not SF behavior but a Lichess choice. It used to be an SF problem, but one can check with a command-line SF executable, through UCI commands for multiPV, that it does produce the full PV, whether as a newly evaluated full-depth (or seldepth) tree-search branch, or as stored in the hash table from previous within-session tree searches (or the current root search).

On Lichess I think most of the time one gets 16 plies, and often 6, and what is funny is that this cutoff happens for all PVs in multiPV (the chances that all 5 PVs come from the hash table at the same branch length seem small to me, if one were to blame the hash table anyway). There are issues on the Lichess GitHub repo, and what I read seemed to point to SF issues, but those are solved now. So one would need to scrutinize the Lichess issues for remaining arguments toward keeping the status quo. There may be reasons, besides man-hours and/or lack of awareness or curiosity from the user base.

Edit: "tip" therefore means SF's end of the main PV (or any PV if multiPV). So if the depth were of human scale, it would also be a human horizon. However, see below for extensions (qsearches).
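One way to do that ply-counting yourself is to look at the raw UCI "info" lines the engine prints and compare the PV length against the reported depth. A minimal sketch; the sample line and the `pv_length` helper are made up for illustration, but the line format follows the UCI protocol.

```python
# Count the plies actually present in the PV of a raw UCI "info" line.
# In UCI output, everything after the "pv" token is the principal variation.
def pv_length(info_line):
    tokens = info_line.split()
    if "pv" not in tokens:
        return 0  # no PV reported on this line
    return len(tokens) - tokens.index("pv") - 1

# Made-up sample: engine reports depth 22 but shows only 6 plies of PV.
line = "info depth 22 seldepth 30 multipv 1 score cp 35 pv e2e4 e7e5 g1f3 b8c6 f1b5 a7a6"
print(pv_length(line))  # 6, far short of the reported depth 22
```

Counting this way against a command-line engine makes it easy to see whether a truncated PV comes from the engine or from the GUI displaying it.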


Nodes vs. depth (input depth), my take:

Nodes count the whole tree search for one root position, i.e. many branches.
Depth affects each branch in that tree search (depending on other parameters and position tests along the way): a tentative horizon for single variations within the explored tree.

One could think of input depth as an average horizon. It can be shortened by hash-table effects (or other tests I have not yet understood precisely), and extended by quiescence searches when the tip position at the input horizon (= depth) is in the middle of some material-tumbling sequence where the dust has not settled yet; the extension runs until the dust settles (and I have no idea yet how far such an extension can go, whether bounded by some depth that is a function of input depth or otherwise; all I know is that it stops when it finds a quiescent node...).

I welcome any better version or correction of the above.
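The nodes-vs-depth relationship above can be sketched with a toy model; this is not a real chess search, just iterative deepening over a uniform tree with an assumed branching factor b, under a fixed node budget. It shows why a fixed "go nodes" limit yields a depth that varies: the harder pruning cuts the branching factor, the deeper the same budget reaches.

```python
# Toy model: how deep does iterative deepening get on a uniform tree with
# branching factor b before a fixed node budget runs out?  Each iteration to
# depth d+1 costs b**(d+1) new leaf nodes in this simplified accounting.
def deepest_completed_depth(budget, branching_factor):
    nodes_used = 0
    depth = 0
    while True:
        cost = branching_factor ** (depth + 1)  # nodes for the next iteration
        if nodes_used + cost > budget:
            return depth  # can't afford another full iteration
        nodes_used += cost
        depth += 1

# With fishnet's 1.5M-node budget (illustrative branching factors):
print(deepest_completed_depth(1_500_000, 2))   # heavy pruning -> depth 19
print(deepest_completed_depth(1_500_000, 35))  # raw legal-move branching -> depth 3
```

The same budget reaching depth 19 in one case and depth 3 in the other is the toy-model version of why fishnet's fixed node limit produces varying depths from game to game.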


@StingerPuzzles said in #14:

> Depth has virtually nothing to do with the quality of analysis. Nodes searched is the correct measure, and it's the one fishnet uses.
>
> Fishnet resources are donated. It's quite pointless to suggest other people donate more computer time.

Hmm, interesting.
At 1:27 in the video I see "Number of logical cores to use for engine threads (default 15, max 16)" in the command box.
Is that basically the default and max depth of the analysis? If so, even if the default isn't raised, why not at least raise the maximum?
Thanks for sharing.


This topic has been archived and can no longer be replied to.