lichess.org

If white plays a game without any mistakes or blunders...

#31
My reading of
stockfishchess.org/blog/2020/introducing-nnue-evaluation/
where it says

"Both the NNUE and the classical evaluations are available, and can be used to assign a value to a position that is later used in alpha-beta (PVS) search to find the best move. "

is that they are saying that at the leaf nodes you can specify whether the NNUE or the classical evaluations are used, and then the interior nodes are evaluated with the alpha/beta algorithm. That algorithm gives the same answer as minimax applied to the same tree with leaf nodes already scored.
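A toy sketch of that equivalence (mine, not Stockfish code): alpha-beta prunes branches that plain minimax would visit, but it returns the same root value whenever the leaves are scored by the same evaluation function.

```python
# Minimal minimax vs. alpha-beta on the same tree with the same leaf eval.
def minimax(node, maximizing, evaluate, children):
    kids = children(node)
    if not kids:
        return evaluate(node)
    vals = [minimax(k, not maximizing, evaluate, children) for k in kids]
    return max(vals) if maximizing else min(vals)

def alphabeta(node, alpha, beta, maximizing, evaluate, children):
    kids = children(node)
    if not kids:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for k in kids:
            best = max(best, alphabeta(k, alpha, beta, False, evaluate, children))
            alpha = max(alpha, best)
            if alpha >= beta:      # prune: the opponent will avoid this line
                break
        return best
    best = float("inf")
    for k in kids:
        best = min(best, alphabeta(k, alpha, beta, True, evaluate, children))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Toy tree: tuples are interior nodes, ints are leaves scored by a trivial eval.
tree = ((3, 5), (6, 9), (1, 2))
children = lambda n: n if isinstance(n, tuple) else ()
evaluate = lambda n: n

assert minimax(tree, True, evaluate, children) == \
       alphabeta(tree, float("-inf"), float("inf"), True, evaluate, children)
```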

I've not read the stockfish code to see if, assuming NNUE has been turned on, stockfish ever applies the NN to interior nodes.
I would not be surprised if it did. The way these engines actually work is quite complex; they can do things like iteratively deepening the tree, and/or an initial evaluation of candidate positions to order moves for later search, and other things that make our naive model of how they work inaccurate.
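As a hedged illustration of the "incremental deepening plus move ordering" idea (all names below are made up for illustration; this is not how Stockfish structures it): each shallow pass sorts the root moves best-first, so the next, deeper pass searches the most promising lines first.

```python
# Iterative deepening sketch: reuse each pass's result to order the next pass.
def search_root(position, moves, score_at_depth, max_depth):
    """Return the best root move after deepening from depth 1 to max_depth."""
    ordering = list(moves)
    for depth in range(1, max_depth + 1):
        scored = sorted(((score_at_depth(position, m, depth), m)
                         for m in ordering), reverse=True)
        ordering = [m for _, m in scored]   # best-first ordering for next pass
    return ordering[0]

# Toy usage: scores independent of depth, so every pass agrees on "e4".
scores = {"e4": 30, "d4": 25, "a3": -10}
best = search_root("startpos", scores, lambda p, m, d: scores[m], 3)
```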

Does Stockfish with NNUE use Monte Carlo tree search (MCTS) or Predictor + Upper Confidence Bound tree search (PUCT)? Either of these? Neither?
Compare with: github.com/LeelaChessZero/lc0/wiki/Technical-Explanation-of-Leela-Chess-Zero
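For contrast, here is the generic textbook form of the PUCT selection rule that the linked Leela write-up describes (constants and details vary by engine; this is an illustrative sketch, not lc0 code):

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    # q: mean value of this child so far; prior: policy-network probability.
    # The child maximizing this score is selected for expansion next.
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + exploration

# An unvisited child (child_visits=0) leans entirely on its prior
# and the parent's visit count.
score = puct_score(q=0.0, prior=0.5, parent_visits=4, child_visits=0, c_puct=2.0)
```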

Can anyone who has read the code please chime in?
Chess isn’t a solved game and (probably) never will be. The only way to say that white can guarantee a win, or that black can guarantee a draw, is if chess were solved. Otherwise we have no proof. Connect 4 was probably thought to always be a draw until someone solved it. The same could be true of chess.
#41 I found an answer to one of my questions. Stockfish NNUE is only using the NN on leaf nodes.

www.chessprogramming.org/Stockfish_NNUE#Hybrid

"In August 2020 a new patch changed Stockfish NNUE into a hybrid engine: it uses NNUE evaluation only on quite balanced material positions, otherwise uses the classical one."

That's probably a typo for "quiet", i.e. quiescent, which is what I mean by "leaf node": a node with no children in the considered tree.
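To make "quiescent" concrete, here is a minimal quiescence-search sketch (illustrative only, not Stockfish's code): at a nominal leaf, the search keeps extending "noisy" moves such as captures, and only trusts the static eval once the position is quiet.

```python
def quiesce(position, alpha, beta, evaluate, captures, make):
    # evaluate() scores a position from the side to move's point of view.
    stand_pat = evaluate(position)
    if stand_pat >= beta:
        return beta                      # fail-hard: already too good
    alpha = max(alpha, stand_pat)        # "stand pat": option to stop capturing
    for move in captures(position):      # only tactical moves are extended
        score = -quiesce(make(position, move), -beta, -alpha,
                         evaluate, captures, make)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Toy positions: in "A" the side to move has one capture, landing in quiet "B".
static = {"A": 0, "B": -100}             # from the side to move's view
caps = {"A": ["AxB"], "B": []}
value = quiesce("A", -1000, 1000,
                lambda p: static[p], lambda p: caps[p], lambda p, m: "B")
```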
@Goldrider #40 I'm aware of the size of the numbers involved. The eventual promise of true quantum computing, once it's TRULY mastered (as opposed to the faltering baby steps we're taking now), is superpositional computation, with representations of positions simultaneously residing in much less physical memory than anything state-dependent.

That's off in speculative science-fiction land for now, but two or three centuries out, it's hard to predict.
#41 I just skimmed the Stockfish code here
github.com/official-stockfish/Stockfish/blob/master/src/search.cpp

I don't see any indication that Stockfish is using either MCTS or PUCT. What I see is it using a version of alpha/beta called PVS, which I assume stands for Principal Variation Search.

The code is extremely complex. I see one explicit call to the NN in search.cpp and others in evaluate.cpp. I find it amusing that the code can override whether to use NNUE in some cases. Example:

// If the classical eval is small and imbalance large, use NNUE nevertheless.
// For the case of opposite colored bishops, switch to NNUE eval with
// small probability if the classical eval is less than the threshold.
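A rough sketch of the hybrid idea those comments describe (the threshold and names below are invented for illustration, not Stockfish's actual values or logic): use NNUE when material is roughly balanced, otherwise fall back to the classical eval.

```python
def hybrid_eval(classical_eval, nnue_eval, material_imbalance, threshold=300):
    # threshold in centipawns; an invented value for illustration only.
    if abs(material_imbalance) <= threshold:
        return nnue_eval       # roughly balanced material: trust the network
    return classical_eval      # large imbalance: classical eval handles it

balanced = hybrid_eval(50, 40, material_imbalance=100)    # picks NNUE: 40
lopsided = hybrid_eval(500, 40, material_imbalance=900)   # picks classical: 500
```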
"Chess is solved": what chess, exactly? That is not a trick question. Tablebases are solved, but they are solved for legal chess, not just perfect chess. I think there is some confusion in the definitions behind "chess is solved": hidden restrictive assumptions that may need to be spelled out. I'm not denying it could have the meaning intended in the post, but it could have other meanings as well. Maybe the more restricted problem admits an easier solution through a sequence of approximations converging on the larger legal-chess problem, once those hidden assumptions are spelled out and made part of the problem.

Legal chess surely contains perfect chess, no? And what if, while not sitting exactly on the fence, or the possibly narrow ridge, of perfect chess, one had the mathematical assurance of generating a sequence of approximate solutions that converge, from anywhere in legal chess, to perfect chess? Maybe like Zeno's arrow never hitting the wall. (That last bit is a personal favorite funny thought of mine. It took me a while to understand why the arrow never hits the wall in that paradox: time was not being measured, whereas Newtonian physics ensures that the time taken to halve each remaining distance is bounded.)
#42
Connect Four was thought to be a first-player win, and the only winning first move is obvious from logical deduction: in the center.
Checkers was thought to be a draw long before it was mathematically proven. They even imposed slightly balanced openings to prevent all draws, just like they now do in the TCEC superfinals.
Checkers (32 squares, 24 men, 2 kinds of pieces, mandatory capture) was mathematically proven to be a draw by forward calculating towards a 10 men table base.
Chess (64 squares, 32 men, 6 kinds of pieces, discretionary capture) may be mathematically proven to be a draw by forward calculating towards a 13 men table base. We now have a 7 men table base and we get about +1 man per 10 years, so expect chess to be solved 60 years from now.
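Sanity-checking that extrapolation (taking the post's own assumptions at face value; the 13-man target and the growth rate are the post's guesses, not established facts):

```python
current_men = 7                # size of today's tablebases
needed_men = 13                # the post's guess for a forward-search proof
years_per_extra_man = 10       # the post's observed growth rate
years_to_solve = (needed_men - current_men) * years_per_extra_man
```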
“We now have a 7 men table base and we get about +1 man per 10 years, so expect chess to be solved 60 years from now.”

I think we will hit the physical limits of Moore’s law long before then; I do not think humanity will ever see chess be solved. Quantum computers will not get us there; a quantum computer can, as a rule of thumb, halve the exponent in a hard problem. For example, a problem that requires 10^120 units to solve with a classical computer can be solved with 10^60 units using a quantum computer. 10^60 is still too big to be readily solved (that’s on the order of “the number of atoms in the solar system multiplied by another really big number”).
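The square-root (halved-exponent) arithmetic checks out exactly, since a quadratic Grover-style speedup takes the square root of the classical cost:

```python
import math

classical_steps = 10**120                    # classical cost from the post
quantum_steps = math.isqrt(classical_steps)  # exact integer square root
assert quantum_steps == 10**60               # the exponent is halved
```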
@tpr

Checkers has identical rules for each man. Chess does not. Using that analogy for how large the tablebase must be to solve chess is a faulty generalization.

It looks like your ego is immune to reason here. You stubbornly use a syllogism based on premises that have trivial exceptions, so you're making a "special pleading" fallacy. Now you introduce a false analogy.

If a premise of a syllogism has exceptions, the conclusion's validity can be doubted and cannot be stated as categorically true.

If the particulars of an analogy are fundamentally different from what you are analogizing, conclusions based on extensions of that analogy are faulty.

You argue like a flat-earther or a creationist or a moon landing hoaxer. If one weakness in your argument is exposed, you never acknowledge it; you just recycle the next argument. And whenever a new thread starts, you go back to the top.

This topic has been archived and can no longer be replied to.