
Winning / losing endgame practical vs theoretical when playing to improve

@sjcjoosten

As people have said above: obviously, if you don't know the theory and you think the endgame is actually good for you, or that the middlegame you're in is losing anyway, then go for it.

OTOH if you're in a situation where you know the endgame's theoretically losing (but you think you might be able to swindle a win) and you think the middlegame is probably even or better (but you're worried that you might get outplayed) then the "growth mindset" approach seems like it should be to stay in the middlegame, give it your best shot and see what you can learn from the experience if it goes south. You'll have plenty of opportunities to flex your endgame skills in other games.

Such a delightfully elaborate loaded question. I savoured every word. Bravo, and a thousand thanks for conforming to the format and yet finding a way to say the things that needed saying, in the form of a question. The enjoyment kept increasing as I read what amounts to my own thoughts and those of other friends of mine on lichess.

I would say I had some nostalgia brewing, but now I feel there is more hope. Keep up the discerning questions, dear fellow lichess player, user, and engine-tool victim who seems to have their own critical mind in good working order. Trust that the community can still progress through many heads in many directions of communication.
I might have read into it what I wanted, but the question really does bring us to take a step back, does it not?

The conflict is between optimizing rating as a goal and exploring. There is a relevant machine learning parallel: reinforcement learning, say A0 or Lc0. It faces the same dilemma; actually, every RL AI has to deal with it (the classic exploration-versus-exploitation trade-off; a toy sketch follows below). With only one initial condition to keep "improving from", via outcome ratings alone, and with care taken locally that it does not lock in prior conceptions or a unique repertoire bias too soon, it has the option to take some time before entering that compromise trajectory to podium glory within its bubble of the chess world.
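To make that exploration/exploitation trade-off concrete, here is a minimal toy sketch in Python of epsilon-greedy move selection. To be clear, this is not how A0 or Lc0 actually choose moves (they rely on visit counts and a temperature inside a tree search); the move names and value numbers below are invented purely for illustration.

```python
import random

def pick_move(move_values, epsilon=0.1):
    # Toy epsilon-greedy: with probability epsilon, explore a random candidate;
    # otherwise exploit the move with the highest estimated value.
    if random.random() < epsilon:
        return random.choice(list(move_values))      # explore
    return max(move_values, key=move_values.get)     # exploit

# Hypothetical candidate moves with made-up value estimates.
values = {"e4": 0.52, "d4": 0.50, "g4": 0.31}
print(pick_move(values, epsilon=0.25))
```

The knob is the whole point: with epsilon at zero you only ever play what currently looks best and stop learning about the rest; set it too high and you never cash in what you have learned. That is the same tension as grinding rating versus trying out unfamiliar endgames.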

The thing is, since it produces its own games and learns from their outcomes through its own game ratings, it will improve, probably even beyond its earlier, more explorative games; but no new games are going to have the same, hmm, how can I say it ... explorative potential (one might need to start being serious about the word "exploration", by the way, as it seems we only quantify, or reward, the exploitation).
While the reward measure might be all one wants for an engine race, I could even argue that it is also a sort of "programmed bias" in the training set: there has to be an exploration compromise, and the choice of parameters won't change that. So the source code could only be blamed for its scheduling parameters; but without any metric for an appropriate latent-space representation of the full world of chess positions, and consequently of the full world of games (not only its decision-tree projection), we have the same problem in Lc0 as in the SF species, or in any engine-pool "universe" taken as input with Elo as output.

The problem being: we have no clue, and frankly I wonder whether it is worth keeping my enthusiasm up for sharing this "obvious" hole in the whole information flow of it all, given that if no one asks the question, no one will look for or try answers. I think this is called research. But it is ugly research; the risk is high. Tournament stakes always trump, or push the notion of risk to the conservative side, for humans and for the humans behind the machines alike.

I say we take our two cents from engines, but first we need to know their angle of partial coverage. And then keep thinking like the OP does, i.e. keep asking oneself the good questions rather than going robotic. And please, find me a definition of the I in A.I. before putting the word "artificial" in front of it. No, beating the best human player on average, or even every time, is not intelligence; it is like playing from a book against someone without a book, and that is in the best case, where the engine actually does what we assume it does without our having tested that it does. I don't think this is up to the devs; it is not their intent that we swallow, and use outside its optimization context, what they worked hard to optimize in the other context (Elo in undefined engine-pool tournaments). I think it is the same problem as receiving advice, without any explanation, from an expert chess player: which is the best move? That move. Because 42.

Nope, this is our confusion between best player and best teacher. They can coincide, but the best player might even be pressured to forget certain things that would help others learn to think the way they think. It requires constant effort not to let the old scaffolding fall into oblivion (I am actually experiencing some of that, and have struggled for a few years now to reconstruct some of the things I had internalized in the past, because I would like to share my questions about understanding such things as a theory of learning in chess). All the abuse of purpose I keep seeing in chess culture (well, what I have gotten to look at over the past five years) seems to stem from a confusion between performance testing and learning methods. Being an expert, as the individual result of a lifetime's commitment, is not a commitment to studying the learning methods that get you there. Some of us do not really like chess itself, I suppose, or only in proportion to the win ratio, just for the final position itself. If they could have an instant one-move win, perhaps they would be satisfied; whether a book gave away the secret, or an engine black box, or an EGTB, they might take it for the win.

That means a lot of preparation study, a lot of positions for which to learn the move. And without a high-level conception of the engine's innards as the key to chess interpretation, one that would let us weigh our own rational model while learning, on the common basis of board-position information, clues, or facts, we would not be able to learn that all those positions are not equally distinct. Sorry for the wall of text. I converted the other, more engine-chess-theory point into a focus on the human chess fallacy of using engines, whose purpose we have not tested, as our oracles. Whether they are the best engines or not is not the question; the theory of learning is.
<Comment deleted by user>

This topic has been archived and can no longer be replied to.