
What IS a pawn?

#11
It was David Bronstein, an expert on both the King's Gambit and the King's Indian Defence, who said: "The King's Indian Defence is riskier for Black than the King's Gambit is for White."

In the King's Indian Defence, White's pressure on the queenside is usually sufficient to win a pawn, so it plays like a gambit: White wins a pawn and wins the endgame unless Black can deliver checkmate first.

Fischer did not dare to play the King's Indian Defence in his only encounter with Botvinnik, because he feared Botvinnik was too well armed for it. In the 1972 match against Spassky, Fischer did not play a single King's Indian Defence, probably for the same reason. This suggests Fischer did not feel comfortable with the King's Indian Defence.

Kasparov turned away from the King's Indian Defence after he lost with it to Kramnik. Radjabov later turned away from it as well.
King’s Indian Defense (KID) is 1. d4 Nf6 2. c4 g6

Lichess Masters database: 36% White win, 42% Draw, 23% Black win

King’s Gambit (KG) is 1. e4 e5 2. f4

Lichess Masters database: 31% White win, 32% Draw, 37% Black win

So, based on master level match results, I have to agree with tpr: I would rather play the King’s Gambit as White instead of the King’s Indian Defense as Black.

In terms of KG as White vs. KID as Black, KG has a higher White win rate than KID has a Black win rate, but engine evaluation says White is worse off in the KG (-1.3 pawns) than Black is in the KID (-1 pawn).
#21

It’s interesting you mention an advantage for White on the queenside in particular. While there are certainly some variations of the KID in which Black pushes his kingside pawns, those games are rare from what I’ve seen. (For example, the games in this playlist: www.youtube.com/playlist?list=PL7FC7B8018339B521.) And in fact I usually attack on the queenside, myself. For me the KID is simply an opening setup, from which an attack on either side of the board is equally possible, depending on what my opponent’s doing. d6 can support either an e5 or c5 pawn break; and after all, once your king is safely tucked away on the kingside, you can shamelessly push those queenside pawns. (I even play d6 first against e4, so that I can stick to the same setup thereafter. Most of these games are *called* “Pirc Defense”s, but they are KIDs for all intents & purposes. Anyway, that’s versatility for you: when you can literally play the same setup against anything your opponent does.)

When I was learning to play in 2019, no one would play with me OTB, so I played ~50 KIDs against myself, correspondence style, on a turntable on my coffee table, spending hours or even days on the moves — dreaming about them at night — then writing them all down to import & analyze later. I chose this particular opening as it seemed to me the different strategies for White & Black would make it more interesting when playing against myself. I learned each side’s book as impartially as I could — and I knew my opponent’s strategy perfectly (by definition, no 2ⁿᵈ player could have been more evenly matched) — yet I almost always won as Black. (FYI: In my first 45 games against myself, Black won 25, White won 12, and 8 were draws.) And often, it was a queenside pawn push that did the trick!

For example, Black can prepare b5 with Na6 to c7, Rb8, Bd7, a6, Qa5, etc. (not necessarily in that order). And even knowing Black’s strategy perfectly and preparing for it as best I could, I found that Black could attack b5 more times than White could defend it, no matter what I tried as White. I even played some Sämisch Variations, where I castled queenside as White, thinking to make the king himself an additional defender of those pawns. No dice. (In fact that was the weakest approach I tried, I think.)

In the end I determined Black’s strategy was simply superior to White’s. Although I had specifically chosen the KID in order to avoid symmetry, I ended up playing Symmetrical Variations against myself. I independently discovered the KIA that way (I didn’t know it was an opening till I noticed one of my imported games had that description). To this day, I play the King’s Indian Attack almost exclusively as White. In general, I prefer a streamlined castle-ASAP strategy to a waste-time-pushing-pawns strategy. 🤷 I really believe the former is both more defensive, *and* more aggressive, and the latter is just weaker all around. I also note the former gives you a significant lead in development, often by multiple tempi, even if you’re Black. (Very often I find I’ve developed all my pieces already, when my opponent is only half developed, because he’s been wasting time with his pawns.)

While I am far from a master 😅, I am someone who learned chess specifically with this opening, so I feel qualified to say that . . . I have no idea what you’re talking about. 😄 But that is very interesting to know! I will need to look at some of the games of the masters you mentioned, and try to suss WTH they were tripping on that they could possibly have come to the opposite conclusion of mine, which I reached so painstakingly. 🤔 Perhaps it’s simply a question of what level you’re playing at, but I find the King’s Indian setup to be very strong against anyone under 2k. I have beaten people a few hundred points higher rated with it, so I swear by it. But maybe at the GM level everything suddenly changes. 🤷 It would be sad to learn I’ve hampered my development by sticking to this opening. That’s life for you, I guess. 😑 (I was led to believe openings aren’t as important as midgames or endgames, so my idea was to just pick one and run with it. As I said, I’m able to play this setup against anything, as either White or Black — usually I can completely ignore what my opponent’s doing for the first several moves, and just fianchetto, castle, etc. — and that simplicity, coupled with my experience in those games against myself — in which attacking from the flank really did seem to be the superior strategy to grabbing a big center — is why I’ve made this my pet opening.)

PS: For anyone unfamiliar with the King’s Indian setup (and thus even more confused than I am about what we’re discussing), this short video shows how to play it as either White or Black: www.youtube.com/watch?v=kK0cq6UBt1Y PPS: I apologize to the OP for derailing your thread on this tangent. ❤️ (Some people say I have a one-track mind, but derailed may actually be the better description. 😇) It’s just you learn something new every day; and well, maybe some of you will find my experience interesting. Cheers.
#23
King's Indian Defence and King's Gambit are still fine openings, playable to grandmaster level.
Both are sharp and allow one to play for a win, but at a greater risk of losing.
However, for a classical world championship match or for an ICCF correspondence game they are considered too risky.
Here is the game that made Kasparov switch from the King's Indian Defence to the Queen's Gambit Accepted:
www.chessgames.com/perl/chessgame?gid=1070932
I re-read my post (#29, I think). And wow. I wanted to cut down on my rambling, but the cuts made it even more rambling... wrong cuts.
About the distortion between the probability score of NN-based engines and the material-based score of classical engines:

Edit:
This could be compared to astigmatism: a smooth, systematic spatial deformation. My ellipses might look like your circles, or vice versa.

Now in reply to #30.
I was actually debating. I view the probability score as having a true mathematical foundation. However, any monotonic conversion function like those I was referring to preserves the directions of increase and decrease between positions, so the two scores stay coherent in that sense. The only difference is where one score has a larger delta between a pair of positions than the other score does: that is distortion. Which one is right? I would say the NN score represents the true odds better. But if your chess experience has been with classical engines and mostly material-based evaluation of positions, material exchanges can come to feel like the real outcomes of the many sub-games within your game, and the only legally defined outcomes, mate or draw (W/D/L), become conversions of material difference through that astigmatism. The cherished material difference gets converted in the end into a mate: the ultimate immaterial terminal position, which is not an assumption or a hand-crafted ad-hoc valuation of a position.
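A toy numeric sketch of that distortion point (tanh and the numbers here are arbitrary stand-ins, not any engine's actual conversion): a monotonic map keeps the ordering of evaluations, but changes the relative sizes of the gaps between them.

```python
import math

# Three hypothetical centipawn evaluations of three positions.
cp_scores = [100, 200, 800]

# A monotonic conversion to a [-1, 1] "expected score" style value.
# tanh is just a stand-in for any such squashing map.
q_scores = [math.tanh(cp / 300) for cp in cp_scores]

# Ordering is preserved: both lists increase together.
assert cp_scores == sorted(cp_scores)
assert q_scores == sorted(q_scores)

# But the deltas are distorted. In centipawns the second gap is six
# times the first; on the squashed scale the ratio is much smaller,
# because the map saturates near +1.
cp_ratio = (cp_scores[2] - cp_scores[1]) / (cp_scores[1] - cp_scores[0])
q_ratio = (q_scores[2] - q_scores[1]) / (q_scores[1] - q_scores[0])
print(cp_ratio)  # 6.0
print(q_ratio)   # smaller: the gap shrinks on the probability-like scale
```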

People can easily, through repetition of some surrogate truth, come to believe it over a more empirical one, forgetting the scaffold that arbitrarily assigned some value to a chess man, because it is consistent, it computes, and it is also very reasonable: we don't play chess by moving empty squares, we move pieces and pawns, and they ought to count for something. Yes, there are the positional tweaks to the CP scores. But what is the total contribution of the positional tweaks to CP scores, integrated over all known positions, versus that of material difference? How does auto-tuning the many parameters of the positional tweaks decide how much a positional difference can outweigh a material difference? That is probably also ad hoc; I don't know, and it is very difficult to extract that information without some mathematical formulation. Right now the best documentation for those tweaks is a web of pages implementing a call graph, and a closed form seems out of reach. The ad-hoc function seems to have reached the same problem attributed to neural networks: too many parameters. The difference is that neural-network training is mathematically defined, with that many parameters in a specific architecture, to converge and to maximize generalization from its training set to any test set it has not been trained on; no chance of confirmation bias there. I am not sure auto-tuning the matrix of positional parameters can do the same.

If the community kept playing material-exchange-rich games only, whether by style, opening preference, or bias, the material-scoring error could very well go unnoticed. I don't know; maybe sharp openings? Or games where every move is a material grab?

The positional differences between consecutive quiescent positions might show up better now that there are NNUE patches in the new generation of SF, so that kind of error might actually be corrected (depending on the details of classical parameter tuning versus NNUE parameter tuning over their common or exclusive training sets; I don't know those details, and the OP's question might need such information, because the hybrid makes the debate difficult that way). One thing, though: the fact that SF still outputs CP as its score shows that either the NNUE was not trained with a game-outcome-based loss function, or it was and the probability score was somehow converted into a CP score, or something else even foggier and not documented to my knowledge. So while waiting for such clarification, I think the OP's question might be easier to debate from this more mathematical, functional viewpoint by comparing SF11 and lc0, the non-hybrids, where the nature of the scores and their mathematical foundations can be compared.

I am confident that the discontinuity problem when mate appears anywhere on the horizon does not affect the purely outcome-based score that lc0's nets are approximating. Approximation does not mean hand-waving; it means approaching. The more one trains lc0, the better its evaluation function gets, the closer to the true reward expectations it would reach if the whole legal game tree could be used as training.

I am actually saying that the neural nets provide a more accurate position evaluation, irrespective of the quiescence of the position. But as long as people orient themselves by the CP score, they will make the same distortions the engine does, and might really think the engine has the right proportions.

The material-based evaluations are the ad-hoc, coarse approximation to the true outcome probabilities (under the assumption of best play). They invent surrogate terminal outcomes at the truncated game-tree frontier all throughout the game, and propagate their ad-hoc value system back to the move candidates under search. (I could nuance that, but it would not change my point, and apparently I was too wishy-washy in my previous rambling; my fault.)

I used many levels of explanation and many vocabularies; hopefully some subset will reach some readers...

I think the hand-waving is not in the black box, which is actually doing proper training and testing, with no pretension of assigning individual weights to features. Its architectural motifs might yield more clues than attempting to differentiate the many classical parameters being auto-tuned (except for basic ones, like the material-imbalance value system, which might actually be a principal component of some correct approximation of the same true function that the neural nets can be proven to approximate as closely as required; approximation with convergence, a non-finite concept).

All that to come around and say: well, maybe it does not matter that there are distortions. We get used to them anyway, and compensate with our own internal distortions to fit experience, just as in the past one would ignore SF in the openings, knowing it was off track there, not even approximating...
Bottom line: The values that both these types of engines return need to be taken with a large grain of salt.

My understanding is that lc0 is not programmed with the standard (or any) piece value system. However, being a UCI engine, lc0 must use centipawns when it provides engine information in the "info" protocol.
See http://wbec-ridderkerk.nl/html/UCIProtocol.html
"the score from the engine's point of view in centipawns."

According to the lc0 FAQ (lczero.org/dev/wiki/technical-explanation-of-leela-chess-zero/), this is done with the following formula:
"Lc0 uses an average expected score Q in the range [-1,1]. This expected score is converted to a traditional centi-pawn (cp) eval using this formula: cp = 111.714640912 * tan(1.5620688421 * Q)."

I graphed this formula and the formula lichess analysis uses per
lichess.org/blog/WFvLpiQAACMA8e9D/learn-from-your-mistakes
"winning chances = 50 + 50 * (2 / (1 + exp(-0.004 * centipawns)) - 1)"

The curves are very similar (one of the formulas has to be inverted to do the graphing).
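For anyone who wants to reproduce the graphing, here are the two quoted formulas as Python functions, with the constants copied verbatim from the lc0 FAQ and the lichess blog post:

```python
import math

def lc0_q_to_cp(q):
    """lc0's documented conversion: expected score Q in [-1, 1] to centipawns."""
    return 111.714640912 * math.tan(1.5620688421 * q)

def lichess_winning_chances(cp):
    """Lichess's documented conversion: centipawns to winning chances in percent."""
    return 50 + 50 * (2 / (1 + math.exp(-0.004 * cp)) - 1)

# A level position maps to 0 cp and 50% winning chances in both systems.
print(lc0_q_to_cp(0.0))            # 0.0
print(lichess_winning_chances(0))  # 50.0

# Round trip: a modestly positive Q maps to a modest centipawn value,
# which maps back to winning chances on the same side of 50%.
cp = lc0_q_to_cp(0.3)
print(cp, lichess_winning_chances(cp))
```

To overlay the two curves on one plot, one of the functions has to be inverted first, as noted above.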

I don't think engines assign a value to the King any longer. See hxim.github.io/Stockfish-Evaluation-Guide/
where, if you drill down into the categories on the left, I never see a value assigned to the King itself.

That page is also very interesting and is as close as I've seen to answering dboing's question about how to understand what stockfish is doing without reading the actual stockfish code. Here, the page authors have written in javascript an approximation of what stockfish is doing. Yes, it is still code. Expanding all that into one mathematical function would be unreadable, hence the way the page works to give one an idea of the contributions of the various parts of the evaluation.

On that page you can enter an FEN and then, using the left hand side categories, see what the evaluation components are. You can also load a particular neural network file, and see the "Neuron activation".

Now as to one of the other issues mentioned...

Non-neural-net engines can suffer from the horizon effect: their evaluation at leaf nodes of the tree they create can be wrong because some variation beyond the leaf upsets that evaluation. However, most engines of this type, using the alpha-beta algorithm and other techniques, essentially consider all moves (full width) at the interior nodes. They don't actually have to, because there is a proof that alpha-beta produces the same value as the minimax algorithm on that tree. The hybrid Stockfish engines use the NN at the leaf nodes (other places?) and then do the usual thing at the interior nodes.
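That equivalence can be sketched on a toy tree (nested lists standing in for positions, numbers for leaf evaluations): alpha-beta prunes branches, yet returns the same root value as plain minimax.

```python
def minimax(node, maximizing):
    """Plain minimax over a nested-list tree; leaves are static evaluations."""
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Alpha-beta: skips provably irrelevant branches, same value as minimax."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # cutoff: remaining children cannot change the result
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Root is a max node; its children are min nodes over the leaf evaluations.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True), alphabeta(tree, True))  # 3 3
```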

My understanding of lc0 is that during training it is not full width, but while playing itself it grows the tree to terminal nodes (W/D/L). The NN gets trained and then produces the Q value when handed a position. During a game without training, lc0 uses an algorithm that is described here:
github.com/LeelaChessZero/lc0/wiki/Technical-Explanation-of-Leela-Chess-Zero
"This is the same search specified by the AGZ paper, PUCT (Predictor + Upper Confidence Bound tree search). Many people call this MCTS (Monte-Carlo Tree Search), because it is very similar to the search algorithm the Go programs started using in 2006. But the PUCT used in AGZ and Lc0 replaces rollouts (sampling playouts to a terminal game state) with a neural network that estimates what a rollout would do. "

During play lc0 is not full-width. Hence, lc0 can get a wrong evaluation simply because it did not consider all the possible moves!
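For illustration, here is a simplified sketch of the PUCT child-selection rule described in the quoted page (the c_puct constant and the child statistics are made-up numbers, not lc0's actual values): the search repeatedly descends to the child maximizing Q + U, where U weights the network's prior P by how unexplored the child is.

```python
import math

def puct_select(children, c_puct=1.5):
    """children: list of dicts with prior P, visit count N, mean value Q."""
    total_visits = sum(child["N"] for child in children)

    def score(child):
        # U term: high prior and low visit count both raise the urge to explore.
        u = c_puct * child["P"] * math.sqrt(total_visits) / (1 + child["N"])
        return child["Q"] + u

    return max(children, key=score)

# Made-up statistics for three candidate moves at some node.
children = [
    {"move": "e4", "P": 0.5, "N": 10, "Q": 0.10},  # well explored, decent value
    {"move": "d4", "P": 0.3, "N": 2,  "Q": 0.05},  # promising prior, barely explored
    {"move": "a4", "P": 0.1, "N": 0,  "Q": 0.00},  # low prior, unexplored
]
print(puct_select(children)["move"])  # d4: its U bonus outweighs e4's higher Q
```

Note how the selection is guided by the NN's priors rather than by exhausting every move, which is exactly why the search is not full width.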
#27 "Non neural net engines can suffer from the horizon effect."
[Edit: I'm no longer sure of what I originally wrote. So edit here.]
This was not to imply that NN engines cannot also suffer from the horizon effect. Can they?
"Rollouts" are to a "terminal game state". So does PUCT always get the same answer as MCTS would get?
The above quote from the technical-explanation page says PUCT "estimates what a rollout would do".
That sounds like PUCT might not give the same answer as MCTS, and hence might suffer something like a horizon effect.
I have a better, but less profound question:
What the heck is Average Centipawn Loss?
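One common way to compute it (this is an assumption about the general approach, not a quote of lichess's exact formula): average, over a player's moves, the drop in engine evaluation caused by each move, flooring "negative losses" at zero so a move that improves the evaluation counts as zero loss.

```python
def average_cp_loss(evals_before, evals_after):
    """Mean centipawn loss over a player's moves.

    evals_before[i] / evals_after[i]: engine evaluation just before / after
    move i, in centipawns from the mover's point of view.
    """
    losses = [max(0, before - after)
              for before, after in zip(evals_before, evals_after)]
    return sum(losses) / len(losses)

# Made-up evaluations for three moves by one player:
before = [30, 10, -50]
after = [10, 15, -150]  # move 2 improved the eval, so its loss is floored at 0
print(average_cp_loss(before, after))  # (20 + 0 + 100) / 3 = 40.0
```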
