How far will depth of one get you?
One ply or one move?
Relevant links ...
https://chessify.me/blog/what-is-depth-in-chess-different-depths-for-stockfish-and-lczero
https://github.com/rooklift/nibbler
Play these games in self-play mode in the Nibbler GUI and see when a depth of one is no longer enough (there's a scripted sketch of the same idea after the list).
A00 Van't Kruijs Opening (1. e3 ...)
A46 Yusupov-Rubinstein System (1. d4 Nf6 2. Nf3 e6 3. e3 ...)
B21 Sicilian Defense: Morphy Gambit (1. e4 c5 2. d4 cxd4 3. Nf3 ...)
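If you'd rather script the self-play than click through the GUI, here is a minimal sketch using the python-chess library. The ./stockfish path is an assumption; any UCI engine works:

```python
# Minimal depth-1 self-play sketch using python-chess.
# The "./stockfish" path is a placeholder for any UCI engine binary.
import chess
import chess.engine

board = chess.Board()
with chess.engine.SimpleEngine.popen_uci("./stockfish") as engine:
    while not board.is_game_over():
        # Limit(depth=1) sends "go depth 1" to the engine.
        result = engine.play(board, chess.engine.Limit(depth=1))
        board.push(result.move)

print(board.result())
print(chess.Board().variation_san(board.move_stack))
```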
To discover when it was not enough ... upload your game and get it analysed on the server. See how long the engine lasts before it blunders or makes an error. Then say which engine you used.
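If you'd rather not wait on the server, here is a rough local version of the same count. The thresholds are illustrative guesses, not Lichess's exact formula, and the ./stockfish path is an assumption:

```python
# Rough local "analyse and count mistakes" sketch: replay a PGN and
# classify each move by its centipawn swing. Thresholds are illustrative.
import chess
import chess.engine
import chess.pgn

JUDGE = chess.engine.Limit(depth=18)  # strength of the analysing engine

def classify(cp_drop):
    if cp_drop >= 300: return "blunder"
    if cp_drop >= 100: return "mistake"
    if cp_drop >= 50:  return "inaccuracy"
    return None

with open("game.pgn") as pgn, chess.engine.SimpleEngine.popen_uci("./stockfish") as engine:
    game = chess.pgn.read_game(pgn)
    board = game.board()
    for ply, move in enumerate(game.mainline_moves(), start=1):
        mover = board.turn
        before = engine.analyse(board, JUDGE)["score"].pov(mover).score(mate_score=10000)
        board.push(move)
        after = engine.analyse(board, JUDGE)["score"].pov(mover).score(mate_score=10000)
        label = classify(before - after)
        if label:
            print(f"ply {ply} ({board.peek()}):", label, f"{before} -> {after} cp")
```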
Nibbler Limited auto-eval/play: 1
Move 5 (ply 10) inaccuracy ... Engine: Kayra 1.4 sse41
A06 Zukertort Opening: Old Indian Attack (1.d3 ...)
https://lichess.org/2xn1tUSr#9
White / Black
Inaccuracies: 6 / 5
Mistakes: 7 / 5
Blunders: 2 / 2
Average centipawn loss: 40 / 31
Accuracy: 84% / 85%
The opening with the fewest mistakes is going to be the best for a depth of one.
Well I just put 4 games into one study.
Do you think it's good for beginners too?
E11 Bogo-Indian Defense: Grünfeld Variation is the one that had the fewest mistakes.
The study has 4 games: e4, d4, c4, & Nf3.
https://lichess.org/study/ZUCSWovD/uwfNzjJr
Result of 1. c4 ... using SF-15.1 dev-20230622
White / Black
Inaccuracies: 1 / 2
Mistakes: 0 / 2 (no mistakes for White, but Black made 2; the first was on move 6)
Blunders: 0 / 0 (no blunders, even at a depth of one!!)
Average centipawn loss: 17 / 41
Accuracy: 96% / 89%
What I find interesting is how long it takes before an engine makes a blunder or a mistake at a depth of one.
Beginners playing 1. c4 ... might be happy with an engine at a depth of one.
A depth of one seems to be enough for an engine to avoid blunders and mistakes (at least for the white player).
@Toscani said in #4:
Nibbler Limited auto-eval/play: 1
Is that a "depth" limit or a "nodes" limit? They are not the same! Modern engines support limiting the search both by the depth of the search tree and by the total number of tree nodes evaluated.
As far as I know, Nibbler doesn't support limiting the search by depth, only by node count or by a roughly equivalent time limit (assuming a constant nodes-per-second search speed).
Could you enable engine logging to verify that "go depth 1" was really used instead of "go nodes 1"?
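If the log doesn't settle it, the difference is easy to demonstrate outside the GUI with python-chess (a sketch; the ./stockfish path is an assumption):

```python
# The two limits map to different UCI commands; they are not interchangeable.
import chess
import chess.engine

board = chess.Board()
with chess.engine.SimpleEngine.popen_uci("./stockfish") as engine:
    # Limit(depth=1) sends "go depth 1": a complete 1-ply search.
    d1 = engine.play(board, chess.engine.Limit(depth=1))
    # Limit(nodes=1) sends "go nodes 1": stop once ~1 node is searched
    # (engines treat node limits as approximate, so it may search a few more).
    n1 = engine.play(board, chess.engine.Limit(nodes=1))
    print("go depth 1 chose:", board.san(d1.move))
    print("go nodes 1 chose:", board.san(n1.move))
```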
I think a slightly different test could be: play lc0 against lc0 at 1 node, then analyse the game with full-strength Stockfish and see how frequently mistakes occur and how big they are. Then adjust the number of nodes visited to your liking. To avoid repeating the same games, you could use a small opening book, say 6 plies deep, so that lc0 effectively starts after 6 plies. The opening books (2-ply, 4-ply, ..., 16-ply) used in lc0 testing are available (linked from the lc0 site).
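A sketch of that setup; the ./lc0 path and the 6-ply lines below are placeholders, not the actual lc0 test books:

```python
# Sketch: seed each game with a 6-ply book line, then lc0 vs lc0 at 1 node.
# Engine path and book lines are placeholder assumptions.
import chess
import chess.engine

BOOK = [  # example 6-ply lines in UCI notation
    "e2e4 c7c5 g1f3 d7d6 d2d4 c5d4",
    "d2d4 g8f6 c2c4 e7e6 g1f3 d7d5",
]

with chess.engine.SimpleEngine.popen_uci("./lc0") as lc0:
    for line in BOOK:
        board = chess.Board()
        for uci in line.split():
            board.push_uci(uci)
        while not board.is_game_over(claim_draw=True):
            board.push(lc0.play(board, chess.engine.Limit(nodes=1)).move)
        print(board.result(claim_draw=True))
```

Each finished game could then be fed to a mistake-counting pass like the one sketched earlier in the thread.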
The lowest limit is good enough to find easy openings.
--> Nibbler Limited auto-eval/play: 1
There is a setting to limit by time instead of nodes. So I guess the Limited auto-eval/play: 1 is nodes, not depth.
If we select the auto-eval/play limit, we can then increase the nodes slightly by pressing Ctrl+] on the keyboard.
My aim is to find and sort the openings that give the best results, from the simplest to the hardest ECO codes.
At the moment, 1. c4 gave me a good game at that minimum limit setting. I guess I could use BanksiaGui to run a tournament at a depth of one, then evaluate the games using Lucas Chess or a Lichess study. Maybe there are faster ways, but I don't know of any. It's not going to be an exact science, but it will be good enough until someone creates something of the sort or shows us that it has already been done. When I hear things like "GM openings" or "beginner openings", it sounds as if the ECO codes have already been sorted that way, but I have never found that list.
The opening list I will be using comes from the files named like a.tsv in the GitHub repository below.
https://github.com/lichess-org/chess-openings
https://github.com/lichess-org/chess-openings/blob/master/a.tsv
So the openings might already have some sort of depth to them, but I will continue with the minimum limit setting as a constant factor for the test. The end result of who wins is not important. It's the number of mistakes by opening that I want to see.
I believe some openings are harder than others and need more depth to minimize the mistakes.
It would be nice to know the simplest openings that don't require much depth to be played well.
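Pulling the lines out of a.tsv is simple enough to script; a sketch, assuming the file keeps a tab-separated eco/name/pgn layout with a header row (the column names are my reading of the repo's format):

```python
# Sketch: load the opening lines from a.tsv for use as test openings.
# Assumes tab-separated eco/name/pgn columns with a header row.
import csv
import io
import chess.pgn

with open("a.tsv", newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        game = chess.pgn.read_game(io.StringIO(row["pgn"]))
        line = list(game.mainline_moves())
        print(row["eco"], row["name"], f"- {len(line)} plies")
```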
Why are there two different lines with the same opening name?
Line 11 - A00 Barnes Opening: Gedult Gambit 1. f3 d5 2. e4 g6 3. d4 dxe4 4. c3
Line 12 - A00 Barnes Opening: Gedult Gambit 1. f3 f5 2. e4 fxe4 3. Nc3
github.com/lichess-org/chess-openings/blob/master/a.tsv#L11
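The two entries reach different positions, so they seem to be distinct lines that happen to share a name. A quick scan (again assuming the eco/name/pgn columns) would list every such duplicate:

```python
# Sketch: list every opening name that appears more than once in a.tsv.
# Assumes tab-separated eco/name/pgn columns with a header row.
import csv
from collections import Counter

with open("a.tsv", newline="") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

counts = Counter(row["name"] for row in rows)
for row in rows:
    if counts[row["name"]] > 1:
        print(row["eco"], row["name"], "->", row["pgn"])
```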
@Toscani said in #8:
The end result of who wins is not important. It's the number of mistakes by opening that I want to see.
I believe some openings are harder than others and need more depth to minimize the mistakes.
Thanks for the clarification. I can already tell you that your "minimum depth" approach is nowhere near the skill level of a chess beginner. Even with "go nodes 1" and NNUE disabled, Stockfish is significantly stronger than a beginner/intermediate player. It is true that it won't discover many tactics or combinations, but its move ordering is still stronger than many grandmasters'.
https://www.chessprogramming.org/Move_Ordering
I saw many of your posts, and you seem to be running your tests with the engines searching a single principal variation. To see how Stockfish is still better than many players at depth 1, set MultiPV to the maximum (i.e. 256) and nodes to 10, then observe which variations the engine didn't even try to evaluate because they would be obvious one-move blunders.
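A sketch of that experiment with python-chess (the ./stockfish path is an assumption):

```python
# MultiPV experiment: request up to 256 lines at 10 nodes, then list
# the legal moves the engine never even tried to evaluate.
import chess
import chess.engine

board = chess.Board()  # or any position you want to probe
with chess.engine.SimpleEngine.popen_uci("./stockfish") as engine:
    infos = engine.analyse(board, chess.engine.Limit(nodes=10), multipv=256)
    searched = {info["pv"][0] for info in infos if "pv" in info}

skipped = [board.san(m) for m in board.legal_moves if m not in searched]
print("never evaluated:", skipped or "none")
```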
Adding a human-like/beginner-like factor to this experiment will unfortunately be much more complex.