
chess skill vs calculation depth and breadth

I recall hearing Sadler say that AlphaZero plays at about 2400 level based only on its evaluation function (without calculating variations).

I was wondering about the complementary question: if the evaluation function is really simple, and you constrain calculation to depth d and breadth b at each step, what would its performance be (as a function of b and d, of course)?

Seems simple to do, so likely someone has tried it -- if you know, please tell me, as I am curious; well, generally very curious, maybe even in both senses of the word.

Bill
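
As a concrete anchor for the question, here is a minimal sketch of the kind of deliberately simple evaluation being imagined: material only, nothing else. It assumes the python-chess library, which is not mentioned in the thread, and the names are illustrative.

```python
# Deliberately crude evaluation: material count only, positive if White is ahead.
# Assumes the python-chess library (pip install chess); values are the usual 1/3/3/5/9.
import chess

PIECE_VALUES = {
    chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0,
}

def material_eval(board: chess.Board) -> int:
    """Sum of material from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score
```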


@g6firste6second said in #2:

chess.stackexchange.com/questions/29860/is-there-a-list-of-approximate-elo-ratings-for-each-stockfish-level

That is interesting, of course, but I didn't see where/how the levels correlate with search breadth and depth.


A pro chess engine programmer told me, somewhere 15-20 years ago, that doubling the CPU gave about 100 Elo points. Doubling the CPU does not even buy a full extra ply; with all the smart pruning, at best you get a better comparison of the top moves.

Estimates from the early 1990s put one extra ply at about 200 points (above a base level of 4 plies). That probably no longer holds, as the tree is already so deep that the "correct" answer is found anyway. And if the evaluation function is really bad, the gains from lookahead are smaller too.
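
Taking the two figures above at face value (about 100 Elo per CPU doubling, and the old estimate of roughly 200 Elo per extra ply), a quick back-of-the-envelope check shows why one doubling falls short of a full ply. These are the quoted figures, not measurements:

```python
# Back-of-the-envelope check using the figures quoted above (not measured data):
# ~100 Elo per CPU doubling, ~200 Elo per extra ply (early-1990s estimate).
ELO_PER_DOUBLING = 100   # quoted figure, roughly 15-20 years old
ELO_PER_PLY = 200        # quoted early-1990s estimate

plies_per_doubling = ELO_PER_DOUBLING / ELO_PER_PLY
doublings_per_ply = ELO_PER_PLY / ELO_PER_DOUBLING

print(f"One CPU doubling buys roughly {plies_per_doubling:.1f} extra ply")
print(f"A full extra ply needs about {doublings_per_ply:.0f} doublings, i.e. {2 ** doublings_per_ply:.0f}x the CPU")
```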


@petri999 said in #4:

A pro chess engine programmer told me, somewhere 15-20 years ago, that doubling the CPU gave about 100 Elo points. Doubling the CPU does not even buy a full extra ply; with all the smart pruning, at best you get a better comparison of the top moves.

Estimates from the early 1990s put one extra ply at about 200 points (above a base level of 4 plies). That probably no longer holds, as the tree is already so deep that the "correct" answer is found anyway. And if the evaluation function is really bad, the gains from lookahead are smaller too.
Thanks!! That is interesting! I was motivated by a comment I read about Reshevsky: in effect, he didn't remember theory, but he never missed anything three moves or fewer deep. Of course, he also had a sophisticated evaluation package in his brain.

I was wondering if computers could quantify the advice we all get to "analyze broader, not deeper".

Bill


I think it is difficult to test the correlation you want with the available engines. Many engines have a depth setting, so we can configure them at different depths, run a large tournament involving a reference engine such as Lc0 at 1 node, and estimate relative strength. But there are two other difficulties: first, most engines have a complex evaluation function, and second, limiting width is probably impossible with most engines.

One could, however, program an engine with a pure material evaluation (e.g., something like queen = 9, rook = 5, etc., and no other considerations), or use an old engine from the 1970s that does just that. The difficulty with limiting the width to, say, k is that on every turn we have to select the k moves to restrict the search to. We could do something like this: do a material eval to depth 1 (2 plies) and select the k best moves; then, under each of those k moves, do a depth-2 material eval and again select the k best moves; and so on until depth d. I guess this is not too difficult to implement (for someone with a little experience in engine programming). Or we could even use an engine that does full-width search to a fixed depth. But doing these tests with existing engines seems difficult.

OTOH, I think the advice to analyze with higher width and lower depth seems reasonable for most moves, but supplementing that with deeper analysis of a few selected moves is also required.
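
A minimal sketch of the breadth-k, depth-d procedure described above, assuming the python-chess library and reusing the material_eval sketch from earlier in the thread. The one-ply move-ordering step and the function names are illustrative choices, not an existing engine's API.

```python
# Breadth-k, depth-d negamax over the crude material evaluation sketched earlier.
# At every node: rank the legal moves by material after one ply, keep only the k
# best, and recurse on those. No pruning, no quiescence; purely illustrative.
import chess

def shallow_score(board: chess.Board, move: chess.Move) -> int:
    """Material after playing one move, from the side to move's point of view."""
    sign = 1 if board.turn == chess.WHITE else -1
    board.push(move)
    score = sign * material_eval(board)
    board.pop()
    return score

def limited_search(board: chess.Board, depth: int, breadth: int) -> int:
    """Value of the position, searching `depth` plies with at most `breadth` moves per node."""
    if board.is_checkmate():
        return -9_999                       # side to move is mated
    if depth == 0 or board.is_game_over():  # stalemate etc. scored as plain material
        sign = 1 if board.turn == chess.WHITE else -1
        return sign * material_eval(board)
    moves = sorted(board.legal_moves, key=lambda m: shallow_score(board, m), reverse=True)
    best = -10_000
    for move in moves[:breadth]:
        board.push(move)
        best = max(best, -limited_search(board, depth - 1, breadth))
        board.pop()
    return best

def pick_move(board: chess.Board, depth: int, breadth: int) -> chess.Move:
    """Choose the move whose limited-search value is best for the side to move."""
    best_move, best_value = None, -10_000
    for move in board.legal_moves:
        board.push(move)
        value = -limited_search(board, depth - 1, breadth)
        board.pop()
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```

For example, pick_move(chess.Board(), depth=4, breadth=3) plays "consider only the 3 materially best-looking replies, 4 plies deep"; running such players with varying b and d against a reference engine would give the curve the thread is asking about.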


@swimmerBill said in #3:

That is interesting, of course, but I didn't see where/how the levels correlate with search breadth and depth.

One link further down the rabbit hole... check out the top answer - https://chess.stackexchange.com/questions/8123/stockfish-elo-vs-search-depth?rq=1

Komodo talks about it briefly in their settings; see the skills section: https://komodochess.com/store/pages.php?cmsid=14

Cheers


Here is a summary of one discussion from g6firste6second's links (thanks!):
depth   Elo
 20    2894
 19    2828
 18    2761
 17    2695
 16    2629
 15    2563
then (extrapolating):
"depth 8 = 2099 Elo, depth 7 = 2033 Elo, depth 6 = 1966 Elo, and the Elo delta between levels is quite consistently 66 Elo. So extrapolating, depth 5 = 1900, depth 4 = 1834, depth 3 = 1768, depth 2 = 1702, and depth 1 = 1636."

If this can be applied to humans, it gives some indication of how difficult it is to play at 2500 level. There wasn't an indication of search breadth, though. It does seem plausible that if you never drop pieces to one-move combinations (and know openings well) you'll be about 1600 Elo. Beyond that first entry, the numbers don't seem relevant for humans (aside from the general point that 2500 is not easy to attain).
Thanks all!
Bill
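
Taking the quoted figures at face value (roughly 66 Elo per depth step, anchored at depth 1 = 1636), the extrapolation in that answer can be restated in a few lines; these are the quoted numbers, not new measurements.

```python
# Arithmetic restatement of the quoted extrapolation: ~66 Elo per depth step,
# anchored at depth 1 = 1636 Elo. Quoted figures only, not new measurements.
ELO_AT_DEPTH_1 = 1636
ELO_PER_DEPTH_STEP = 66

def approx_elo(depth: int) -> int:
    return ELO_AT_DEPTH_1 + ELO_PER_DEPTH_STEP * (depth - 1)

for d in (1, 5, 8, 15, 20):
    print(f"depth {d:2d}: ~{approx_elo(d)} Elo")
# depth 20 comes out at ~2890, close to the 2894 in the table above.
```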

