
Chess engines need to search from solid positions or by phase.

@IndigoEngun

Modern top engines play the opening perfectly fine. The idea that they play poorly in the opening is severely outdated.
Well, the thing is, there is no need to analyse the opening unnecessarily, because opening moves are all book moves and there is no opening bad enough to lose outright (except, of course, Fool's Mate).


I have not read carefully, and would invite @jomega, when coming back to the lichess forum for inspiration, to fix my understanding or add a more precise and complementary comment.

But it has been my understanding that, in the evolution of Stockfish, the question was being progressively addressed of what I call the matching problem: matching the detection of material conversions in a subtree against min-max comparisons with branches ending in possible mates (during legal search we still need to compare whether the mate really survives min-max).

SF has created, for its static evaluation (or some intermediate position-information evaluation; the nested recursions in the source code are hard to follow), sub-classes of near-mate, or maybe sub-classes of endgame AND near-mate. My point about the matching problem does not require the endgame, but an endgame with less material is likely to increase the proportion of legal-search mates (or other terminal classes). I say "sub-classes" because the rules of chess only give three terminal classes: W, D and L (and, OK, draw has also grown some sub-classes).

With such sub-classes for different mating configurations (or position information closely correlated with mating conditions, I am not sure), different valuation-parameter choices for mate (or near-mate?) would allow some improvement, going from a remote material-conversion detector toward a terminal-class detector. But I understand this was stopped with the advent of NNUE (all classical static-evaluation parameters are frozen, I think). The problem might have something to do with how NNUE is trained (if that is still done with classical SF at moderate search depth).

Taking a breath, not done yet: the relevance of the above is that I would think the same sort of concerns may have driven the development of those sub-classes of evaluation.

Don't blame me for my transparent ignorance; I find it more precise to give you my remaining doubts as a bonus to my current understanding. It takes some ease of reading away, I get it (but the damned-if-you-do-or-don't applies).

And now I need to put this on my serious reading backlog (its ordering fluctuates wildly, inspiration coming in chunks throughout my limited days). Good area of questioning from the OP, I say; hopeful for a good discussion. This was impulsive, after I thought I got the gist.


One paragraph down.
I see a use for the human tendency to make chunks (no, not the patterns, but the smaller size) of problems out of a bigger problem.

This is also not about dynamic programming, which is about recursion over sub-problems, not chunking them, unless my CS memory remnants have been totally nuked by time and little experience (recursive syntax has always been hard for me, until put into a dynamic iteration format, more natural given my training or wiring).

This human ability to carve out sub-problems that fit calculation memory may depend on the notion of pattern recognition. Perhaps there are a few patterns that could be "triangulated" (in the signal sense, not the chessboard sense) to give enough human "chunking" power: an approximation that would already help against runaway depth search in tunnel-vision-prone regions of chess position space (if that is what is happening).

On the other hand, if the number of patterns needed to make such a sub-problem partition effective is too large for a hand-crafted model (the pre-NNUE classical-eval bet, paired with a type A engine), then maybe a more empirical approach to pattern discovery, based on chess data followed all the way to terminal outcomes, might guide and structure even the hand-crafted efforts (which would be nice for our ability to use such tools for analysis in our own format).

Certainly in the endgame we don't follow tablebases, or whatever SF is doing, to solve long-sequence problems. At least this shows how unhelpful SF can be as an automatic feedback tool for human chess analysis or training (there are at least weak regions, if not whole subspaces; throw tomatoes here). Depth is not always the analytical solution... but we knew that, right?


@petri999 there are old ideas, like the type B engine, that turned out to need patience, letting the empirical method weigh on knowledge (even if we are currently unable to translate that knowledge, the engine has embodied it in its neural network)... and it works. Out-of-game knowledge from many games has been digested and brought to bear on in-game decision making, which is essentially what type B was about; the hand-crafted early attempts were a tangential aspect (or we could give another name to a type B that goes fully empirical).

NNUE is proof that a subspace was missing from SF's legal tree searches. I think relying on books and tablebases to forget about the failings of the type A engine (the failings are not only about insufficient depth, right?) is blinding oneself, especially since we have no clue whether middlegame behavior is not also biased in ways we are unable to calculate.

If the tournament structure of engine-versus-engine play from a small genetic pool is enough for blind trust, then yes, there is no need to ask questions like the OP's. But that is not rejecting the ideas or the suggested solutions; it is rejecting the question.

@Toscani may have been too close to the details in describing the legal tree search, but it seemed compatible with my understanding, where I prefer to look at the whole potential subtree, which is then explored until recursive end-points (there are many) create new branching points, or a leaf stop triggers a full position evaluation (where the biases might happen first, not just precision errors).

Whatever the exploration ordering of first-level move candidates at the root position, there is a recursive traversal one can get lost in (I do, after a certain nesting depth piles up or down).

But forgetting the order in which the legal subtree was constructed, one can recapitulate the traversal, look at all of it, and see the branches next to each other being min-max optimized backward, using static evaluations with full position input only at the ends of the branches.

That is simplifying (it skips iterative deepening and so on along the way), but it does not change the story; one can explain further from that high-level model without changing the understanding much. Which is:
many branches end up being considered, and some not at all, depending on the evaluations at branch ends, and those evaluations are updated and shoved backward all the way to the root's successor-position (move) candidates, using only the selectively applied static evaluations at branch ends. Did I make it worse? Somebody else, have another try; the intersection of our babbling attempts (me first) at translating recursive tree traversal while optimizing alternately at each half-move depth** might end up giving some notion close to what SF is really doing. Help!

** (The turn sign: one could use negamax instead and just say "optimize"; the turn alternation might be a flower in the carpet for understanding's sake. It is for me; I easily invert polarity when saturating my short-term memory, or under other loading conditions.)
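The "backward shoving" of leaf evaluations described above can be sketched as a toy negamax. This is a minimal sketch over a made-up `Node` tree, not a real chess position; real engines like Stockfish add alpha-beta pruning, iterative deepening, transposition tables and much more on top of this skeleton.

```python
# Toy negamax: leaf evaluations are min-max optimized backward to the
# root, with the static evaluation applied only at the ends of branches.
# Node is a stand-in for a position; scores are from the side to move.

class Node:
    def __init__(self, score=None, children=()):
        self.score = score            # static eval, meaningful at leaves
        self.children = list(children)

def negamax(node, depth):
    """Best score for the side to move; negation handles turn alternation,
    so a single max() replaces the explicit min/max pair."""
    if depth == 0 or not node.children:
        return node.score             # full-position eval at leaf only
    return max(-negamax(child, depth - 1) for child in node.children)

# A 2-ply toy tree: two root moves, each with two replies.
root = Node(children=[
    Node(children=[Node(score=3), Node(score=-1)]),
    Node(children=[Node(score=5), Node(score=2)]),
])
print(negamax(root, 2))  # prints 2
```

The negation is exactly the "turn sign" in the footnote: instead of alternating min and max levels, each ply flips the sign of the score coming back up.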


Kudos to the OP for a transparent process with the study. This is a discussion, a thread, not a unidirectional blog (not a criticism of blogs, just specifying the difference). We are not foolproof, but that is no reason not to share ideas and open them for discussion, even if not fully vetted beforehand; the discussion is there for that. I may not be able to follow through all the chapters in time; I am still reading the main text, here and in the replies.

I would like to make sure about my understanding of the first post's evidence about Stockfish. Is the study about making that case? I do not need to consider the tablebase and opening-book counter-arguments if that became a point, as I want to know the algorithm's behavior where we can check it against other knowledge (like tablebases and our own human consensus, which I am still learning about at the same time).

Let SF loose on any position...


Let SF loose on any position ... @dboing
Well, here is at least four days' worth of work. I let the ChessX GUI and the Stockfish 15 engine run on the initial position.
See if each line is accurate or has mistakes. I also have a list made on my old Xeon, which has error-correcting memory, just in case it's memory errors rather than the engine causing mistakes in the chess lines.

Have fun loading them into a study.

The time in the brackets does not indicate the days; that's a feature ChessX needs. Another needed feature: if a power failure happens, it's all lost. It should save things in a way that lets it continue from where it left off.

+0.43 [+] [*] 1. e4 e5 2. Nf3 Nc6 3. Bb5 Nf6 4. O-O Nxe4 5. Re1 Nd6 6. Nxe5 Be7 7. Bf1 Nxe5 8. Rxe5 O-O 9. d4 Bf6 10. Re1 Re8 11. c3 Rxe1 12. Qxe1 Ne8 13. Bf4 d5 14. Nd2 Nd6 15. a4 a5 16. h3 Bf5 17. Qd1 c6 18. Qb3 h6 19. Bh2 Qd7 20. Nf3 Re8 21. Bxd6 Qxd6 22. Qxb7 Rb8 23. Qa7 Be4 (depth 55, 11:46:55)

+0.41 [+] [*] 1. d4 Nf6 2. c4 e6 3. Nf3 d5 4. Nc3 Be7 5. Bf4 O-O 6. e3 b6 7. Be2 dxc4 8. Bxc4 a6 9. O-O Bb7 10. Qe2 Nbd7 11. a4 Nh5 12. Bg3 Nxg3 13. hxg3 Nf6 14. Rfd1 c6 15. e4 Qc7 (depth 55, 11:46:55)

+0.37 [+] [*] 1. c4 e5 2. g3 Nf6 3. Bg2 c6 4. d4 e4 5. Bg5 d5 6. Bxf6 Qxf6 7. Nc3 Bb4 8. e3 Qe7 9. cxd5 cxd5 10. Ne2 Bxc3+ 11. Nxc3 Be6 12. Qh5 g6 13. Qh6 Nc6 14. f3 exf3 15. Bxf3 O-O-O 16. O-O Kb8 17. Rac1 g5 18. Bg2 Nb4 19. Qf6 (depth 55, 11:46:55)

+0.37 [+] [*] 1. Nf3 d5 2. g3 Bg4 3. Bg2 c6 4. h3 Bh5 5. O-O e6 6. d4 Nd7 7. c4 Ngf6 8. cxd5 exd5 9. Nh4 Bg6 10. Bg5 Be7 11. Nxg6 hxg6 12. Nd2 O-O 13. h4 a5 14. Qc2 Re8 15. Rad1 Nf8 16. e3 Qd7 (depth 54, 11:46:55)

+0.35 [+] [*] 1. g3 c5 2. Bg2 Nc6 3. c4 g6 4. Nf3 Bg7 5. Nc3 d6 6. O-O Bf5 7. h3 e5 8. d3 Nge7 9. Rb1 O-O 10. Bd2 Be6 11. a3 a5 12. Qc1 f6 13. Kh2 Kh8 14. Ne1 f5 15. Nc2 f4 16. b4 b6 17. bxc5 bxc5 18. Qd1 Rb8 19. Rxb8 Qxb8 (depth 54, 11:46:55)

+0.17 [+] [*] 1. e3 Nf6 2. Nf3 e6 3. d4 d5 4. b3 b6 5. Bb2 Bb7 6. Bd3 Be7 7. Nbd2 O-O 8. Qe2 Nbd7 9. O-O c5 10. Rac1 Ne4 11. Rfd1 cxd4 12. Nxd4 Nxd2 13. Qxd2 Nc5 14. Bf1 Bf6 15. c4 dxc4 16. Rxc4 Rc8 17. Rcc1 Ne4 18. Qe2 Qe7 (depth 54, 11:46:55)

0.00 [+] [*] 1. Nc3 d5 2. d4 Nf6 3. Bf4 a6 4. e3 h6 5. Nf3 e6 6. Bd3 c5 7. dxc5 Bxc5 8. e4 d4 9. Ne2 Nc6 10. O-O Nh5 11. e5 Qc7 12. Bd2 Nxe5 13. Nxe5 Qxe5 14. Re1 Qd5 15. Nf4 Nxf4 16. Bxf4 Bd6 17. Be4 Qc5 18. Bxd6 Qxd6 19. c3 O-O 20. Qxd4 Qxd4 21. cxd4 Rb8 22. d5 exd5 23. Bxd5 Rd8 24. Rad1 Be6 25. Bxe6 fxe6 26. f4 Kf7 (depth 54, 11:46:55)

0.00 [+] [*] 1. c3 Nf6 2. Nf3 e6 3. g3 Be7 4. Bg2 d5 5. O-O O-O 6. d4 c5 7. dxc5 Bxc5 8. c4 Nc6 9. cxd5 exd5 10. Qc2 Bb6 11. Nc3 Re8 12. Na4 Bg4 13. Nxb6 Qxb6 14. b3 Ne4 15. Bb2 Rad8 16. Rad1 h6 17. Bd4 Nxd4 18. Nxd4 Rc8 19. Qb2 Nc3 20. Rd2 Ne4 (depth 54, 11:46:55)

0.00 [+] [*] 1. h3 e5 2. e4 Nf6 3. Nc3 Bc5 4. Nf3 d6 5. Bc4 O-O 6. O-O a5 7. d4 exd4 8. Nxd4 Nbd7 9. Be3 Ne5 10. Be2 c6 11. f4 Ng6 12. Kh1 a4 13. a3 Re8 14. Qd3 Qc7 15. Bg1 Qb6 16. Be3 (depth 54, 11:46:55)

0.00 [+] [*] 1. b3 e5 2. Bb2 Nc6 3. e3 d5 4. Bb5 Bd6 5. f4 Qh4+ 6. g3 Qe7 7. Nf3 f6 8. Nc3 Be6 9. O-O O-O-O 10. Bxc6 bxc6 11. b4 Nh6 12. Qe2 Kb7 13. Na4 Bh3 14. c4 Bxf1 15. Rxf1 dxc4 16. Rc1 exf4 17. Qxc4 fxe3 18. Qxc6+ Kb8 19. Nd4 exd2 20. Qb5+ Kc8 21. Qa6+ Kd7 22. Qb5+ (depth 54, 11:46:55)

0.00 [+] [*] 1. a3 Nf6 2. d4 g6 3. Nf3 Bg7 4. e3 O-O 5. c4 d6 6. Be2 e5 7. Nc3 Bf5 8. O-O a5 9. Bd2 Re8 10. h3 c6 11. Be1 Na6 12. dxe5 dxe5 13. Qxd8 Raxd8 14. g4 Bc2 15. Rc1 Bd3 16. g5 Bxe2 17. Nxe2 (depth 54, 11:46:55)

-0.04 [+] [*] 1. d3 d5 2. Nf3 Nf6 3. g3 g6 4. Bg2 Bg7 5. c4 O-O 6. cxd5 Nxd5 7. h4 Bg4 8. Qb3 Nc6 9. Nc3 Nxc3 10. bxc3 Qd7 11. Bd2 Rab8 12. h5 gxh5 13. Nh2 Bf5 14. Qd5 Qc8 15. a4 Bg6 16. a5 Rd8 17. Qa2 Rd6 18. Nf3 Qd7 19. Bh3 e6 (depth 54, 11:46:55)

-0.12 [+] [*] 1. b4 e6 2. Bb2 c6 3. e3 Nf6 4. a3 Be7 5. c4 O-O 6. Nc3 d5 7. Nf3 dxc4 8. Bxc4 b5 9. Be2 a5 10. bxa5 Rxa5 11. O-O Bb7 12. d4 Nbd7 13. Nd2 Qa8 14. Nb3 Ra7 15. e4 Bxa3 16. Bxa3 Rxa3 17. e5 Rxa1 18. Qxa1 (depth 54, 11:46:55)

-0.27 [+] [*] 1. a4 Nf6 2. d4 e6 3. Nf3 c5 4. e3 Nc6 5. Be2 Be7 6. O-O O-O 7. c4 d5 8. b3 b6 9. Ne5 Bb7 10. Nxc6 Bxc6 11. Bb2 Qc7 12. cxd5 Bxd5 13. Nd2 Qb7 14. Bf3 Rfd8 15. a5 Rac8 16. Bxd5 Rxd5 17. axb6 axb6 18. Qf3 Nd7 19. Rfd1 b5 20. Ne4 (depth 54, 11:46:55)

-0.32 [+] [*] 1. f4 d5 2. Nf3 g6 3. d4 Bg7 4. e3 Nf6 5. Bd3 c5 6. c3 Qb6 7. O-O Nc6 8. Qe2 Bf5 9. Bxf5 gxf5 10. b3 Ne4 11. Bb2 e6 12. Nbd2 O-O-O 13. Nxe4 dxe4 14. Ne5 Bxe5 15. fxe5 cxd4 16. cxd4 Kb8 17. Ba3 Nb4 (depth 54, 11:46:55)

-0.58 [+] [*] 1. h4 e5 2. c4 Nf6 3. d3 Bc5 4. e3 Bb6 5. b4 d6 6. Be2 c6 7. Bb2 O-O 8. Nd2 Qe7 9. Ngf3 Nbd7 10. a4 a5 11. b5 Bc5 12. Bc3 Ng4 13. Ng5 Ndf6 14. Nge4 Bxe3 15. fxe3 Nxe3 16. Nxf6+ Qxf6 17. Qb3 Nxg2+ 18. Kd1 Nf4 19. Bf1 Bf5 20. Kc2 d5 (depth 54, 11:46:55)

-0.72 [+] [*] 1. Na3 e5 2. Nc4 Nc6 3. e4 Nf6 4. d3 d5 5. exd5 Nxd5 6. Nf3 Qe7 7. c3 Bf5 8. Ne3 Nxe3 9. Bxe3 O-O-O 10. Qa4 a6 11. O-O-O Bd7 12. Qc2 Be6 13. Qa4 Rd5 14. b4 f5 15. d4 f4 16. Bd2 e4 17. c4 exf3 18. cxd5 Bxd5 (depth 54, 11:46:55)

-0.74 [+] [*] 1. Nh3 d5 2. d4 Nf6 3. Ng5 c5 4. e3 Nc6 5. Be2 Bf5 6. Nf3 a6 7. O-O Qc7 8. Nh4 Bd7 9. Nf3 e5 10. dxe5 Nxe5 11. Nbd2 Bc6 12. Nxe5 Qxe5 13. Nf3 Qe6 14. b3 Bd6 15. c4 O-O-O 16. Qc2 Bc7 17. b4 cxb4 18. c5 (depth 54, 11:46:55)

-0.83 [+] [*] 1. f3 e5 2. Nc3 Nc6 3. e4 Nf6 4. Bc4 Bc5 5. d3 h6 6. a3 a6 7. Bd2 O-O 8. Qc1 b5 9. Ba2 d6 10. Be3 Nd4 11. Nd1 Ba7 12. c3 Ne6 13. Ne2 d5 14. Bxa7 Rxa7 15. O-O c5 16. Nf2 dxe4 17. fxe4 c4 18. Qe3 Qc7 19. d4 Qb6 (depth 54, 11:46:55)

-1.65 [+] [*] 1. g4 d5 2. e3 e5 3. d4 Nc6 4. Nc3 Be6 5. a3 Bd6 6. Bg2 Nge7 7. dxe5 Bxe5 8. Bd2 Qd7 9. Nf3 O-O-O 10. Qe2 Bxg4 11. Nxe5 Nxe5 12. f3 Be6 13. O-O-O Nc4 14. Rhe1 Qc6 15. Bf1 Kb8 16. Qf2 Nxd2 17. Rxd2 Qb6 18. Na4 Qa5 19. Nc3 g6 20. h4 c6 21. e4 Qc7 22. exd5 Nxd5 23. Nxd5 Bxd5 (depth 54, 11:46:55)

Try putting them through Lucas Chess analysis and get some results of the analysis like:
Average lost scores, Domination, Complexity, Efficient mobility, Narrowness, Pieces activity, Exchange tendency.


I’m not sure what you are trying to do.

If you claim engines don't play perfectly because you ran an engine for a few days and, analysing its variations, found mistakes or inaccuracies, that is pure nonsense. Accuracy is computed with engines, and it makes no sense to analyse a game played by an engine at depth X with an engine at depth < X.

You said "engines just want to win, not to play perfectly". That's false. An engine is trying to make the best possible moves, no matter whether the position is winning or not.

You said that if the engine finds its move is not the best, it should search for another move. That's already done; it's called the alpha-beta algorithm. All strong engines are alpha-beta based, except Lc0 and AlphaZero, which use the MCTS algorithm.
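A minimal sketch of the alpha-beta idea, on a made-up `Node` tree rather than a real chess position: when a line is already proven no better than an alternative the opponent can force, the remaining moves in that line are skipped entirely.

```python
# Alpha-beta sketch: alpha = best score the side to move is guaranteed,
# beta = best the opponent will allow. When alpha >= beta, the rest of
# this branch cannot affect the root choice, so it is pruned.

class Node:
    def __init__(self, score=None, children=()):
        self.score = score            # static eval, used at leaves only
        self.children = list(children)

def alphabeta(node, depth, alpha=float("-inf"), beta=float("inf")):
    if depth == 0 or not node.children:
        return node.score
    for child in node.children:
        # Negamax form: swap and negate the window for the opponent.
        alpha = max(alpha, -alphabeta(child, depth - 1, -beta, -alpha))
        if alpha >= beta:
            break                     # cutoff: opponent won't allow this
    return alpha

root = Node(children=[
    Node(children=[Node(score=3), Node(score=-1)]),
    Node(children=[Node(score=5), Node(score=2)]),
])
print(alphabeta(root, 2))  # prints 2, same answer as plain minimax
```

The pruning never changes the value returned at the root; it only avoids visiting branches that a full minimax would have visited for nothing.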

You also said that an engine should search by phase. That is a really bad idea. The opening leads to the middlegame, and the middlegame leads to the endgame, so they all work together, and it is bad to separate their analysis. Far from that idea, modern engines use a concept called "tapered eval" (you can look it up on chessprogramming.org): the principle is to make a smooth transition between the phases of the game.

I hope this helps, and sorry for any typos; I’m writing on my phone with my French keyboard, and the French autocorrect is not happy.


@CheckmaterELO said in #18:

I’m Not sure about what are you trying to do.

You said « engines just want to win, not to play perfectly ». That’s false. An engine is trying to make the best possibles moves, no matter if the position is winning or not.

All we know about engine performance comes from Elo measures within engine pools that, until SF8, were populated by very similar engines, and few of them. (Compare with Elo or Glicko in human populations: whatever the rating measures, one could assume enough different biases among the wet "algorithms" to make the Elo of a highly rated human more meaningful, and less likely to be competing over some shared unknown bias, assuming humans are less conformist than a bunch of similar engine algorithms.)

Anyway, if you read the source code, even just the comments, it becomes clear that speed has been a driving factor over many versions, and that comes directly from how engine tournaments blindly follow human tournament constraints, with clocks taking the forefront.

"Best" here means better within the amount of time given (symmetric across colors). So an engine that is exploring more width, for whatever reason, might only manage to output a poor current best move in the same time that an engine skimping on width completes its more limited-width exploration.

So we are down to whether width can always be recovered by increased search depth. Unfortunately, there may be tunnel vision that is not fixable through more depth. The experiment by the OP (which I still need to look at) did not seem to be about engine games, but about using the engine as an analysis tool from a given position. I am sorry if I misunderstood; I will find out in the next few days.

I would revise the assumption that engines always give best play. Any better sequence found in time gives an advantage under current engine-tournament constraints; competition can be happening in a very narrow region if no external measure is ever applied to test coverage of chess space (the set of all legal positions reachable in engine continuations, for example, which could be tested against tablebase data by letting SF loose in TB land).


Yes, speed is of paramount importance for chess engines. By necessity there is a time limit in competition, but it does not matter whether the time is 1 minute per move or 3 days per move: time still matters. Most techniques that increase an engine's speed do so at no cost to its strength.

First, iterative deepening was introduced as a way of controlling the amount of time at the player's disposal; then it was found that the shallow iterations give a good amount of information about which moves are good. The time to compute ply n is a small fraction of the time for ply n+1, so the overhead is very small compared to the extra alpha-beta cuts gained. Similarly, the killer-move heuristic requires some memory, but again boosts the number of cuts.
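A toy sketch of that idea, reusing each iteration's best move to order the next, deeper one. The "game" here is a hypothetical dict-of-moves tree with integer leaf scores, not real chess, and all names are illustrative:

```python
# Toy negamax with iterative deepening: each iteration seeds the next one's
# move ordering, and the shallow iterations cost only a small fraction of
# the final, deepest one. Positions are dicts {move: child}; leaves are ints.

def search(pos, depth, best_first=None):
    """Negamax to `depth`; try `best_first` first to improve move ordering."""
    if depth == 0 or isinstance(pos, int):
        return (pos if isinstance(pos, int) else 0), None
    moves = list(pos)
    if best_first in pos:              # previous iteration's best move first
        moves.remove(best_first)
        moves.insert(0, best_first)
    best_score, best_move = float("-inf"), None
    for m in moves:
        score, _ = search(pos[m], depth - 1)
        score = -score                 # negamax: flip sign for the opponent
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

def iterative_deepening(pos, max_depth):
    best = None
    for d in range(1, max_depth + 1):
        score, best = search(pos, d, best_first=best)
    return score, best
```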

Null-move pruning is the only one of these that can drop good moves. But that happens in pawn-only endgames and in composed, unrealistic positions; again, the speed gains more than compensate.
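The guard engines place around the null move can be sketched as follows. This is a hedged illustration: the reduction constant and the exact conditions are typical of the technique, not taken from any specific engine:

```python
# Null-move pruning guard: the trick (letting the opponent move twice with a
# reduced-depth search) is skipped exactly in the unsound cases the post
# mentions -- when in check, when only pawns remain for the side to move
# (zugzwang-prone), or too close to the horizon. R is a typical reduction.
R = 2

def null_move_allowed(in_check: bool, non_pawn_material: int, depth: int) -> bool:
    """True when "passing" is a valid lower bound and the null-move search
    (at depth - 1 - R) still has depth left to run."""
    return (not in_check) and non_pawn_material > 0 and depth > R
```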

An old rule of thumb is that doubling CPU power gains an engine about 50 Elo points; similarly, cutting CPU consumption in half would gain the same 50 points.
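That rule of thumb extends naturally to arbitrary speedups as gain ≈ 50 · log2(speedup). Purely illustrative arithmetic on the figure quoted above:

```python
import math

# "50 Elo per doubling" rule of thumb, generalized: a 4x speedup is two
# doublings (~100 Elo); halving speed is one negative doubling (~-50 Elo).
def elo_gain(speedup: float, per_doubling: float = 50.0) -> float:
    return per_doubling * math.log2(speedup)
```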

At very long thinking times the tree grows too wide for unselective deepening to make sense. At 3 days per move we are probably already there, though not at normal time limits. Besides, at 3 days per move the draw rate is close to 100%, so that is no longer of any interest anyway.


@petri999 said in #20:

Yes, speed is of paramount importance for chess engines. By necessity there is a time limit in competition, but it does not matter whether the time is 1 minute per move or 3 days per move: time still matters. Most techniques that increase an engine's speed do so at no cost to its strength.

Please read about forward pruning on the chessprogramming wiki. Consider also the whole bet of type A engines not to evaluate every position traversed but only a select class based on the notion of quiescence, and finally, a constrained evolution of the static evaluation function that started from pure material counting and froze those parameters (conserved to date), leaving little room for positional parameters to matter.

One could hope that such leaf-node selection for full position evaluation would not matter, on the grounds that depth will always uncover a still-existing material conversion deeper in the tree. That hope would be wishful thinking (see the matching problem mentioned in my first post, or just the fact that mate is not a material conversion of something missed earlier).
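The quiescence notion referred to above, searching only forcing moves past the nominal leaf instead of trusting the static evaluation there, can be sketched on a toy tree. Positions here are hypothetical (score, captures) pairs, not real chess:

```python
# Toy quiescence search: at a "noisy" leaf, keep searching captures until the
# position is quiet, so the static score is only trusted in quiet positions.
# A position is a (stand_pat, captures) pair; captures are child positions.

def quiescence(pos, alpha, beta):
    stand_pat, captures = pos          # static score + forcing replies only
    if stand_pat >= beta:
        return beta                    # already good enough: fail high
    alpha = max(alpha, stand_pat)
    for child in captures:             # only captures are searched here
        score = -quiescence(child, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```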

The above are three points of control over the legal tree search, and each is now saving time at the expense of coverage.
This has been the type A approach from the beginning: we can cut down on evaluating full position information if we cast a big enough net, in breadth, over the legal search space. While this was pure alpha-beta, the possible biases were restricted to the leaf selection and to the limitation of the parameter space for optimizing the full-position static evaluation, induced by the hard prior of material counting. But then forward pruning made it clear that engine competition is about gaining speed over "strength". This is not negligence on the part of the coders; it is a consequence of tournaments with small population diversity (assuming rating is enough of a measure; maybe that works with a diverse human population, but not with engines, I claim).

I would again invite @jomega, given his first-hand command-line experiments (and verbose modifications of SF14) that put a stop to my further inquiries about a high-level model of SF (an understanding, for chess purposes, of what SF does), to comment on the amount of forward pruning implemented in SF14 (and before; the experiments were with SF14).


This topic has been archived and can no longer be replied to.