multiPV has some engine X engine pool dependent ELO cost I think, I have read somewhere, maybe on GitHub.
The Engine on Lichess ... Stockfish is around 3150? Are you worried it might perform "only" at 3100?
You have the opportunity to study with a player better than Magnus by 250 points, heh.
#1
Here is a simple example with some data.
Please note that I do not think engines know best about opening moves, though they can tell you if you really mess up. This is just an example to compare the work Stockfish does when asked to display 1 line versus 5 lines. Also, results would differ depending on the hardware used, but we can get a relative idea from this data for this one position.
Which position? The starting position!
stockfish
Stockfish 14.1 by the Stockfish developers (see AUTHORS file)
go depth 23
...
info depth 23 seldepth 31 multipv 1 score cp 46 nodes 7071762 nps 1822149 hashfull 998 tbhits 0 time 3881 pv e2e4 c7c5 g1f3 b8c6 f1b5 e7e6 e1g1 g8e7 c2c3 a7a6 b5a4 b7b5 a4c2 c8b7 d1e2 e7g6 d2d4 c5d4 c3d4 c6b4 c2b3 f8e7 b1c3
bestmove e2e4 ponder c7c5
So 3.88 secs = 7071762 nodes / 1822149 nps.
stockfish
Stockfish 14.1 by the Stockfish developers (see AUTHORS file)
setoption name MultiPV value 5
go depth 23
...
info depth 23 seldepth 26 multipv 1 score cp 45 nodes 16047424 nps 1750373 hashfull 1000 tbhits 0 time 9168 pv e2e4 e7e6 d2d4 d7d5 b1c3 g8f6 c1g5 f8e7 e4e5 f6d7 g5e7 d8e7 f2f4 e8g8 g1f3 c7c5 d1d2 b8c6 e1c1 a7a6 h2h4 c5d4 f3d4 f7f6 e5f6 d7f6
info depth 23 seldepth 30 multipv 2 score cp 38 nodes 16047424 nps 1750373 hashfull 1000 tbhits 0 time 9168 pv d2d4 d7d5 c2c4 e7e6 g1f3 g8f6 g2g3 f8b4 c1d2 b4e7 f1g2 c7c6 e1g1 b8d7 d1b3 e8g8 a2a4 b7b6 c4d5 c6d5 f1c1 c8a6 b1c3 f8e8
info depth 23 seldepth 27 multipv 3 score cp 35 nodes 16047424 nps 1750373 hashfull 1000 tbhits 0 time 9168 pv c2c4 e7e5 g2g3 g8f6 f1g2 h7h6 g1f3 e5e4 f3d4 d7d5 d2d3 d5c4 d3e4 f8c5 e2e3 b8c6 d4c6 d8d1 e1d1 b7c6 h2h3 a8b8 b1d2 f6d7 d2c4 c8a6
info depth 23 seldepth 31 multipv 4 score cp 29 nodes 16047424 nps 1750373 hashfull 1000 tbhits 0 time 9168 pv g2g3 d7d5 g1f3 g8f6 f1g2 c7c5 e1g1 e7e6 d2d4 c5d4 f3d4 e6e5 d4b3 c8e6 c2c4 b8c6 c4d5 f6d5 b1d2 d5f6 d2e4 f6e4 g2e4 d8d1 f1d1
info depth 23 seldepth 34 multipv 5 score cp 29 nodes 16047424 nps 1750373 hashfull 1000 tbhits 0 time 9168 pv g1f3 d7d5 d2d4 g8f6 c2c4 e7e6 g2g3 f8b4 c1d2 b4e7 f1g2 e8g8 e1g1 b8d7 d1c2 c7c6 f1c1 a7a5 b2b3 b7b6 c4d5 c6d5 b1c3 c8a6
bestmove e2e4 ponder e7e6
So 9.17 secs = 16047424 nodes / 1750373 nps.
Essentially, 4 secs versus 9 secs, and in either case that is so fast that I don't notice.
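If anyone wants to repeat the comparison on their own hardware, here is a minimal sketch in Python that drives the engine over UCI, assuming a 'stockfish' binary is on your PATH (the binary name and the depth are just my choices, nothing official). It sends the same commands as above and pulls the nodes/nps/time fields from the last 'info' line, so you can check that time is roughly nodes / nps for yourself.

import subprocess

def search(multipv, depth=23):
    p = subprocess.Popen(["stockfish"], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, text=True)
    p.stdin.write(f"setoption name MultiPV value {multipv}\n")
    p.stdin.write(f"go depth {depth}\n")
    p.stdin.flush()
    last_info = ""
    for line in p.stdout:
        if line.startswith("info ") and " pv " in line:
            last_info = line.strip()          # keep the most recent full info line
        if line.startswith("bestmove"):
            break
    p.stdin.write("quit\n")
    p.stdin.flush()
    p.wait()
    fields = last_info.split()
    # 'time' is reported in milliseconds; nodes / nps should be close to it
    return {k: int(fields[fields.index(k) + 1]) for k in ("nodes", "nps", "time")}

for mpv in (1, 5):
    r = search(mpv)
    print(mpv, r, "nodes/nps =", round(r["nodes"] / r["nps"], 2), "secs")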
nps = nodes per second
nodes = this is not what you think it is!
One would think that 'node' is a position, and one might also think that the count for this would be unique positions. Neither of those is true for Stockfish. The Stockfish developers wanted a count that reflected the work being done by the program, and the expensive operation is the function called 'do_move()'. So that is what they made the value associated with 'nodes' - it is the number of times that do_move() is called. However, Stockfish uses a technique called iterative deepening.
See: https://www.chessprogramming.org/Iterative_Deepening
The upshot of this is that what we would call a 'position' is seen by Stockfish multiple times: it makes some of the same moves over again, and counts them again.
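To make that concrete, here is a toy sketch (my own illustration, not Stockfish code) of iterative deepening over a uniform tree with branching factor 3. The counter is bumped once per move made, in the spirit of do_move(), and it grows well past the number of distinct positions in the tree because every deeper iteration replays the shallow part again.

BRANCHING = 3
nodes = 0

def search(depth):
    global nodes
    if depth == 0:
        return 0                     # dummy leaf evaluation; the value doesn't matter here
    best = float("-inf")
    for move in range(BRANCHING):
        nodes += 1                   # counted once per move made, not per unique position
        best = max(best, -search(depth - 1))
    return best

def iterative_deepening(max_depth):
    for depth in range(1, max_depth + 1):
        search(depth)                # each iteration re-searches the shallow positions
        print(f"after depth {depth}: nodes = {nodes}")

iterative_deepening(5)

This prints cumulative counts of 3, 15, 54, 174 and 537, even though a depth-5 tree with branching factor 3 only contains 363 distinct non-root positions, so the 'nodes' figure overstates the number of positions actually seen.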
[Edit: Put in all the PVs for second run.]
@Hitsugaya said in #9:
It's called pruning: at every step you have to choose where to best devote your resources. That's one of the most crucial points of the algorithm, otherwise it's just brute force, and unless you solve the game that's not very useful. Example: en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning
You've got to actually HAVE lines before you can prune.
Of course you have to prune. But the more lines you have investigated, the better choice you can make when to prune. If all you have is a handful of bad moves, and you prune some, and calculate the rest a hundred plies deep, all you have is a set of bad moves to make a choice from. It doesn't matter if you have calculated the best moves 20 plies deep if the first move of your line is bad.
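For readers who haven't seen it, here is a minimal sketch of the alpha-beta pruning named in the link above, again over a toy tree with dummy leaf evaluations rather than real chess positions. The cutoff line is the whole trick: a move is abandoned as soon as it is proved worse than something already available, which is why the quality of the lines you have already searched matters so much for how safely you can prune.

import random

random.seed(1)
LEAVES = [random.randint(-100, 100) for _ in range(3 ** 4)]   # dummy leaf evaluations

def alphabeta(index, depth, alpha, beta):
    if depth == 0:
        return LEAVES[index]                  # leaf: read a dummy evaluation
    best = float("-inf")
    for child in range(3):
        score = -alphabeta(index * 3 + child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                             # cutoff: remaining moves at this node are pruned
    return best

print(alphabeta(0, 4, float("-inf"), float("inf")))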
But how does the MultiPV feature specifically affect any of that? I guess a more general interpretation of the OP's question (or my curiosity) might be: does it affect the scoring at all, or is it just a wider (and more verbose) window into what SF is already doing anyway, so no change in scoring, and finally in the ranking of candidate board decisions?
I am not sure it was about software performance in computation time (was it?). Maybe in rating, but more generally in human-intelligible chess output. At least that is the version I am interested in (not to repeat myself, but yes).
@dboing said in #11:
multiPV has some engine X engine pool dependent ELO cost I think, I have read somewhere, maybe on GitHub.
Makes sense to me. Even though a rejected line might turn out better on deepening the search, the cost of finding out that a 'bad move' might be good after all would be prohibitive, and usually not worth it. Also, you'd think the developers would include the MultiPV value in their stats if they used it during regression testing, and I don't see that listed. For example, on the page of the regression test before the release of Stockfish 14.1 here: https://tests.stockfishchess.org/tests/view/6175c320af70c2be1788fa2b
@dboing said in #16:
... does it affect the scoring at all, or is it just a wider (and more verbose) window into what SF is already doing anyway, so no change in scoring, and finally in the ranking of candidate board decisions?
I think most of the time using MultiPV is not going to affect the eval returned for the root position, or the ranking of candidates. However, and this was a surprise to me when I recently found out, Stockfish is not using a minimax-preserving algorithm. This is due to several heuristics that raise the Elo at the cost of not preserving minimax. So it seems to me that deepening the search on the non-primary PVs could change what happens in those heuristics, and so produce a different result than what one would expect from a previous pruning. If that happens, then the ranking of candidates could change, and even the best move and score could change.
Notice in my example in #14 that while the first move is still 1.e4, the ponder move changed!
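For what it's worth, here is a conceptual sketch (emphatically not Stockfish's actual code, and search_one_pv is a made-up helper) of how a MultiPV search is typically organised at the root: the normal single-PV search is run repeatedly, each time excluding the root moves already reported. That is why extra PVs cost extra nodes, and why, with heuristics that do not preserve minimax, the repeated searches can drift a little from what a single-PV run would have shown.

def multipv_search(root_moves, search_one_pv, multipv=5):
    """search_one_pv(allowed_moves) -> (best_move, score, pv) for one line."""
    excluded, lines = set(), []
    for _ in range(min(multipv, len(root_moves))):
        allowed = [m for m in root_moves if m not in excluded]
        best_move, score, pv = search_one_pv(allowed)
        lines.append((best_move, score, pv))   # reported as multipv 1, 2, ...
        excluded.add(best_move)
    return lines

# Tiny dummy usage with the root scores from the 5-PV run in #14:
scores = {"e2e4": 45, "d2d4": 38, "c2c4": 35, "g2g3": 29, "g1f3": 29}
dummy = lambda allowed: (max(allowed, key=scores.get),
                         scores[max(allowed, key=scores.get)], [])
print(multipv_search(list(scores), dummy, multipv=5))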
@jomega said in #17:
Makes sense to me. Even though a rejected line might turn out better on deepening the search, the cost of finding out that a 'bad move' might be good after all would be prohibitive, and usually not worth it. Also, you'd think the developers would include the MultiPV value in their stats if they used it during regression testing, and I don't see that listed. For example, on the page of the regression test before the release of Stockfish 14.1 here: tests.stockfishchess.org/tests/view/6175c320af70c2be1788fa2b
I think my readings were about the issue of full PV length display by SF some time ago (closed issues), which was the original reason for lichess's own PV length display issues being closed as well (and BTW SF had reopened that and then reclosed it by fixing it ;)
But yes, one might use high-level reasoning over the source code and developer discussions to infer what SF might be doing that is not explicitly documented (to say the least, but that is the era).
So thanks for sharing the insights from your serious research into SF behavior and what it means for us humans; it comes through in your post.