
Does Stockfish understand the principles of chess?

I suspect that it (so to speak) sees the principles behind the principles. ;)
I think Stockfish calculates in numbers and relations and not in principles. Stockfish has absolutely no problem with a completely open king or being material down for no obvious compensation. It just calculates and sums up positional (= potential future material) and current material advantage together for both sides and spits out one final number.

I can't tell you in detail how engines work, it's just what I assume.
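
As a rough sketch of that description (the piece values and penalties below are illustrative placeholders, not Stockfish's actual terms), a hand-written evaluation of this kind just sums material and a few positional terms for both sides and returns one centipawn number:

```cpp
// Toy static evaluation: material + a few positional terms, one signed score.
// All values are illustrative placeholders, not Stockfish's real parameters.
#include <array>
#include <cstdio>

enum Piece { PAWN, KNIGHT, BISHOP, ROOK, QUEEN };

struct Side {
    std::array<int, 5> pieceCount;  // how many of each piece type
    bool kingExposed;               // crude "completely open king" flag
    int passedPawns;                // crude "potential future material"
};

// Centipawn values, roughly the classical 1/3/3/5/9 scale.
constexpr std::array<int, 5> PieceValue = {100, 300, 300, 500, 900};

int evaluateSide(const Side& s) {
    int score = 0;
    for (int p = PAWN; p <= QUEEN; ++p)
        score += s.pieceCount[p] * PieceValue[p];   // current material
    score += s.passedPawns * 40;                    // positional ~ future material
    if (s.kingExposed) score -= 150;                // positional penalty
    return score;
}

// Positive = better for White, negative = better for Black; one final number.
int evaluate(const Side& white, const Side& black) {
    return evaluateSide(white) - evaluateSide(black);
}

int main() {
    Side white{{8, 2, 2, 2, 1}, false, 1};
    Side black{{8, 2, 2, 2, 1}, true, 0};
    std::printf("eval: %d centipawns\n", evaluate(white, black));
}
```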
Well, summing up those features is not 'principles' as such. Obviously you cannot just have: "a knight is stronger than a bishop if it can reach the 5th rank." To choose among possible moves you have to say how much, and every move tends to have negative consequences as well, like violating "all pieces should be protected", which that knight move may do, so each factor needs some value to allow comparison.
The things that get 'summed up' are selected by masters/grandmasters etc. based on their understanding and then converted into formulas by programmers. The exact values then need to be tuned by self-play etc.
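
A crude sketch of that last tuning step, assuming a single hand-picked term (a hypothetical "knight outpost bonus") and a stub playMatch() in place of real self-play games: perturb the value, keep whatever scores better.

```cpp
// Sketch of value tuning by self-play: a single parameter (e.g. a "knight
// outpost bonus") is nudged up or down depending on match results.
// playMatch() is a stub standing in for real engine-vs-engine games.
#include <cstdio>
#include <cstdlib>
#include <random>

std::mt19937 rng(42);

// Stub: returns the score (0..1) of the candidate value against the current
// one. In reality this would be thousands of self-play games.
double playMatch(int candidateBonus, int currentBonus) {
    // Pretend the "true" best value is 35 centipawns; closer wins more often.
    auto quality = [](int b) { return -std::abs(b - 35); };
    double edge = 0.01 * (quality(candidateBonus) - quality(currentBonus));
    std::uniform_real_distribution<double> noise(-0.05, 0.05);
    return 0.5 + edge + noise(rng);
}

int main() {
    int bonus = 10;                       // master-suggested starting value
    std::uniform_int_distribution<int> step(-5, 5);
    for (int iter = 0; iter < 200; ++iter) {
        int candidate = bonus + step(rng);
        if (playMatch(candidate, bonus) > 0.5)
            bonus = candidate;            // keep whatever scores better
    }
    std::printf("tuned bonus: %d centipawns\n", bonus);
}
```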

Many of the programmers are good chess players, like David Levy, who ran a company making game engines a long time ago.
#24 It's even fine if you are a very poor chess player. Only programming skill is required. Even if you are rated 200 on Lichess, you can create an engine.
Sure, there is free information available, but if you want to make an exceptional engine then exceptional chess knowledge is required. The Deep Blue team had several grandmasters helping the project, Joel Benjamin the most influential of them, I would assume. Unless you go the AlphaZero way, where knowledge is accumulated by self-play; then only deep neural network knowledge is required.
#21 and #23: Interesting projections, but scrutiny of the programming does not support those views. Nothing goes into Stockfish that is not of human conceptualization. The only information that would be emergent would come from its auto-tuning of parameters (pre-SF12), and from auto-tuning somewhat tangled with the training of the NNUE for some other partial subset of positions, both depending on training sets chosen by humans (or there might be some empirical setup of the self-play type partially involved in the NNUE part, but the democratization of that aspect has yet to percolate to the non-developer chess community).

I also wish to point out that auto-tuning with test sets or engine-pair test matches, and the parameter optimization of the NNUE using training sets, are basically of the same algorithmic and mathematical nature. The proportion of explicit knowledge versus emergent or empirical knowledge that is usable for predicting the diversity of chess positions or game paths is ruled by the same mathematics. The only problem is that auto-tuning is not careful about the flow of information.

Positional evaluation parameters should not be globally optimized with less mathematical or statistical "power" than the NNUE; they are not hyperparameters (that is my claim). And the fact that these "positional" parameters are applied to only a small fraction of the positions in the whole subtree explored by any SF engine is not a reason to trust how accurate that evaluation might be. The bigger the tree (total nodes), the less pressure there is on that evaluation function to be accurate...

Sounds nebulous? Well, I don't have the energy to prove that, but I do intend to check it statistically first. If we could have full engine traces that spit out evaluation statistics over all the types of nodes, both fully evaluated ones and semi-evaluated ones (evaluated only to decide whether there should be more full evaluations further down that path), then error propagation as a function of the other tree-search parameters could be studied. I am pretty sure the core difference between tree-search-heavy engines (type A) and the new full-NN engines (A0, Lc0, self-play conceived) as type B engines would become clear, as to which is more human-like in its evaluation function (whether explicitly programmed knowledge, or an empirically found function).

I suspect that an exhaustive tree search can be very sloppy in its leaf evaluations, if there are enough of them, while a more evaluation-heavy tree search, evaluating more nodes right away, would be under more pressure to be accurate.
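
For what it's worth, the kind of trace I have in mind can be mocked up on a toy game tree: a plain alpha-beta that counts how many nodes were fully evaluated at the leaves versus only passed through as interior nodes. On a real engine these counters would have to come from the engine itself; here the "position" is just an integer and the leaf evaluation a deterministic hash, so the numbers only illustrate the bookkeeping, not chess.

```cpp
// Toy alpha-beta over a synthetic game tree, counting node types so the
// "evaluation statistics" idea is concrete. The game and its leaf values are
// fake (a hash of the node id); only the counting machinery is the point.
#include <algorithm>
#include <cstdint>
#include <cstdio>

constexpr int BRANCHING = 5;

struct Stats {
    long interiorNodes = 0;  // nodes where we only recursed
    long leafEvals     = 0;  // nodes where the static evaluation was called
};

// Fake leaf evaluation: a deterministic pseudo-random score in [-100, 100].
int leafEval(uint64_t node) {
    node ^= node >> 33; node *= 0xff51afd7ed558ccdULL; node ^= node >> 33;
    return static_cast<int>(node % 201) - 100;
}

int alphaBeta(uint64_t node, int depth, int alpha, int beta,
              bool maximizing, Stats& stats) {
    if (depth == 0) {
        ++stats.leafEvals;
        return leafEval(node);
    }
    ++stats.interiorNodes;
    int best = maximizing ? -1000000 : 1000000;
    for (int i = 0; i < BRANCHING; ++i) {
        uint64_t child = node * BRANCHING + 1 + i;
        int v = alphaBeta(child, depth - 1, alpha, beta, !maximizing, stats);
        if (maximizing) { best = std::max(best, v); alpha = std::max(alpha, v); }
        else            { best = std::min(best, v); beta  = std::min(beta,  v); }
        if (beta <= alpha) break;  // cutoff: the rest is never evaluated
    }
    return best;
}

int main() {
    for (int depth : {4, 6, 8}) {
        Stats stats;
        int score = alphaBeta(0, depth, -1000000, 1000000, true, stats);
        std::printf("depth %d  score %4d  interior %8ld  leaf evals %8ld\n",
                    depth, score, stats.interiorNodes, stats.leafEvals);
    }
}
```

The ratio of leaf evaluations to interior nodes, and how it grows with depth, is exactly the kind of statistic I would want a real engine to expose.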

That should lead to a more complete discussion of the OP's question... some areas still to be discussed, some fog remaining; I seem to be alone in this view... but I also have limited reading time on a given day... hence my writing in forums like this one. The power of the internet, bringing rare points of view to the front... for good or bad... or undecided or undecidable.
I think that a complete understanding of the evaluation functions in Stockfish requires some serious mathematical skills. If you want to know how computers "understand" chess you should look into some old programs; they were smaller and simpler. Start discovering from there. Then tell us what you found out.
I'm doing that on a slow burner.
But if the UCI protocol were updated, or engines offered some common interface to probe their behavior, a lot could be learned by tying that data back to the compact algorithms. Often we only know how to write an algorithm in a local, generative form, which only makes full sense when actually executed, from the task's input-output point of view.
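
In the meantime, the closest thing to such an interface is what engines already print over UCI. A small parser for the standard "info" lines (the sample line below is invented, but follows the standard token format) already yields depth, node counts and scores that can be tied back to the search settings.

```cpp
// Minimal parser for UCI "info" lines, the probe surface engines already have.
// The sample line is invented but follows the standard UCI token format.
#include <cstdio>
#include <sstream>
#include <string>

struct SearchInfo {
    int depth = 0;
    long long nodes = 0;
    int scoreCp = 0;      // score in centipawns, if reported
    std::string pv;       // principal variation
};

SearchInfo parseInfoLine(const std::string& line) {
    SearchInfo out;
    std::istringstream in(line);
    std::string tok;
    while (in >> tok) {
        if (tok == "depth")      in >> out.depth;
        else if (tok == "nodes") in >> out.nodes;
        else if (tok == "score") {
            std::string kind; in >> kind;
            if (kind == "cp") in >> out.scoreCp;   // "mate" scores ignored here
        }
        else if (tok == "pv") {                    // rest of the line is the PV
            std::string move;
            while (in >> move) out.pv += move + " ";
        }
    }
    return out;
}

int main() {
    std::string sample =
        "info depth 20 seldepth 28 nodes 1234567 nps 950000 "
        "score cp 35 pv e2e4 e7e5 g1f3";
    SearchInfo si = parseInfoLine(sample);
    std::printf("depth=%d nodes=%lld score=%dcp pv=%s\n",
                si.depth, si.nodes, si.scoreCp, si.pv.c_str());
}
```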

Currently, one has to contemplate debugging to access such data, and looking at simpler programs is not even going to approach the statistical problems of increasing the sub-tree search scope and complexifying the sparse evaluation function at the same time (which has been the long-term evolution of AB engines). Labelling an evaluation-function component, at the programming-syntax level, with terms reminiscent of a positional feature does not give the actually optimized parameters any relationship with the human concept attached to that parameter, unless statistical or mathematical scrutiny is applied to the auto-tuning; certainly not when the training and testing sets are not even defined (because it is called auto-tuning, not training).

Also, looking at a simpler program is not enough, as it would not show the contrast I proposed above (it could not possibly show it; maybe there is no contrast, but to say so for sure one needs some range from simpler to more complex)... And some things don't scale. I think at the epoch of the Deep Blue thing, computing power was barely enough for exhaustive tree search, so some evaluation knowledge from humans was helpful; but having both the computing power for exhaustive search and a more complex evaluation, without any means of measuring error and its propagation, makes the OP's question undecidable. However, I have proposed reasons for some hypotheses, and I wish existing software allowed scientific-level "debugging" in the final executables... from the chess user's point of view, as an analyst, not just an individual improver or player.

But you are right in principle. And with the help of a fellow Lichess member more attuned to programming culture, I have been trying that route. It is really tough extracting a common mathematical view out of the idiosyncratic programmer choices made over the evolution... a slow project that way. Executable traces as a standard for engines would go a lot faster...
@pointlesswindows Assuming we are talking about traditional Stockfish, understanding its evaluation function does not require much in the way of math skills. It is mostly simple multiplication and addition; there is just a lot of it. The skill needed to understand it is some elementary C++ programming.
