@LegendaryQueen
I was referring to the current top engine, ubiquitously used "on" the large online human chess player population as a sparring partner or as an out-of-game analysis tool (but to stay in thread, well, also imposed by engine-assisted players, even if only one, "on" their human opponent).
I am not saying engines will always be like that. So thanks for the outlook from the links. I will keep them for when I get more curious about that (I am more into human chess now, but not forgetting what I learned already).
Quantum computing, if it is what I understand it to be, is going to help with the cost impediment of neural net evaluation functions, even if the current, immutable, "pre"-historic engine-vs-engine ELO-defining competition formats stay the same.
But I would hope some imagination other than mine would also revise the competitive set of goals for computer tools used by humans outside of engine-vs-engine tournaments... The old slow clunky machines may have needed the current primary goal of code optimisation for speed, given a human spec concerned only with wins in clocked symmetric settings, but nothing prevents imagination from actually competing on the quality of the leaf evaluation functions at as shallow a depth as needed,
like we humans do on any position, not only the quiescent ones with non-small imbalance. Most of the ugly current top-engine behavior is about it being blind early and always resorting to depth... and more depth. There are plenty of positions where an experienced human can see very shallow positional clues, on just the current position or nearby in skeleton shallow calculations, that the current top engine seems blind to, and it keeps exploring like a knight path doing the travelling salesman problem on the whole board: zillions of steps. (Some of those positions ("rare" compositions?) don't even need humans to consult the EGTB.)
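To make concrete what I mean by competing on leaf quality rather than depth, here is a minimal sketch (my own illustration, not anyone's engine code; Position, legal_moves, play and evaluate_leaf are hypothetical placeholders): a fixed shallow-depth negamax where all the competitive ambition would live inside the leaf evaluator.

    # Sketch: fixed shallow-depth negamax where the contest would be on
    # the *quality* of evaluate_leaf(), not on buying ever more depth.
    # Everything about 'position' is an abstract stand-in, not engine code.

    def evaluate_leaf(position):
        # Hypothetical rich static evaluation from the side to move's
        # point of view; in current top engines this is kept cheap so
        # the engine can buy depth instead. The proposal: make *this*
        # the race.
        raise NotImplementedError  # each competitor supplies their own

    def negamax(position, depth):
        # Plain fixed-depth search: no extensions, no quiescence,
        # depth deliberately shallow ("as shallow as needed").
        if depth == 0 or position.is_terminal():
            return evaluate_leaf(position)
        return max(-negamax(position.play(move), depth - 1)
                   for move in position.legal_moves())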
That aspect is where the top engine fails us learning humans, whether as an opponent or as a single-position evaluation machine (via its "best" move variation evaluations from somewhere beyond 16 plies, like 22 here on lichess, or more if we knew the extension status in the output).
But they have been like that, and nobody noticed until another species of algorithms uncovered the shallow blindness in the top engine, the constant top-ELO winner until then. NNUE is a leaf evaluation, a helper to the classical leaf evaluation, on "quiescent only" positions out of the maximal set of legal positions. The admissible (quiescent) positions out of the legal set that NNUE evaluates as leaves during play are those under a certain threshold (unknown to me; the blog says small imbalance in the SF meaning, not Silman's, btw).
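Here is how I picture that hybrid dispatch, as a sketch of my high-level model (the threshold name and value, and all the stub functions, are my placeholders, not actual SF source):

    # Sketch of the hybrid leaf dispatch as I understand it from the blog.
    # Names, threshold value, and stubs are placeholders, not SF code.

    WHITE, BLACK = 0, 1
    IMBALANCE_THRESHOLD = 800  # hypothetical value, in centipawns

    def material(position, color):
        # placeholder: raw material count for one side (SF meaning of
        # imbalance, i.e. material difference, not Silman's concept)
        return position.material[color]

    def nnue_eval(position):
        raise NotImplementedError  # placeholder: learned evaluation

    def classical_eval(position):
        raise NotImplementedError  # placeholder: handcrafted terms

    def hybrid_leaf_eval(position):
        # at a quiescent leaf: NNUE only when the static material
        # imbalance is small; large imbalances stay classical
        imbalance = abs(material(position, WHITE) - material(position, BLACK))
        if imbalance < IMBALANCE_THRESHOLD:
            return nnue_eval(position)
        return classical_eval(position)

If that model is right, "what is small" is exactly that one threshold.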
I am talking about A0, and, for the span between SF8 and SF12, about LC0. That other species. A dwarf version of that, NNUE, came to the rescue for the small-imbalance leaf positions that the non-NNUE part of the leaf evaluation function could not yet see at input depth. But even NNUE is trained on value scores from non-NNUE SF moderate-depth single-tree searches. So the blindness is still there, just pushed back: to the input depth of the first SF call (on the human-visible position), plus the moderate training depth, a classically visible reward...
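And the training side, as I picture it (a sketch of the idea only, not the actual SF trainer; the names and the depth value are mine):

    # Sketch: the NNUE teacher is non-NNUE SF searching to a moderate
    # fixed depth, so whatever that search is blind to, the trained
    # net inherits. All names here are hypothetical placeholders.

    MODERATE_DEPTH = 8  # hypothetical teacher search depth

    def teacher_score(position):
        # placeholder: run a classical-eval SF search to MODERATE_DEPTH
        # and return its value score for this position
        raise NotImplementedError

    def make_training_set(positions):
        # label each sampled position with the teacher's search score;
        # the net then learns to reproduce that score in one forward pass
        return [(pos.features(), teacher_score(pos)) for pos in positions]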
That it may have improved slowly before that patch, via characterising classes of near-mate (or other terminal-outcome) positions, was not enough compared to the SF11 blindness on small static imbalances (by definition). What counts as small might be a tweakable parameter now... but NNUE is more costly in the sequential-computation currency (in the ELO=speed sense).
I would welcome any statement that would show the above to be an incorrect high-level model. Please, without linking out, unless well linked to source code or its documentation offspring anywhere (official blog included).
But I am not talking about all engines of the future... (So thanks for the links, and for bringing that aspect. All is not hopeless self-blinding.)