#12 I do agree with you to some extent. Here are my nuances.
Putting all the evaluation eggs in one engine basket may already be lopsided: the engine becomes the sole automatic feedback a user expects, and an almighty truth oracle. Not providing alternatives or adjustments to that automatic treatment may contribute to overlooking the subjectivity around such notions as mistakes, inaccuracies, and blunders. I would even question having any graded scale of mistakes at all, unless one-size-fits-all were a hardware constraint, leaving further gradation to the humans involved.
Whether the attitude is too serious or not may depend on one's attitude toward the game just played.
It also depends on whether the person wants to measure the game among other games, with a competitive outlook over many games in a short span of time, or instead wants a measure of a single game that was played with some real thinking at a few spots at least, hoping to extract the most out of that one game. There may even have been some debating within the game, either one-sided or openly between opponents. Some moves may actually have been experimental decisions, made in the face of newly visible ignorance or hesitation about the plans implied by a few candidate moves. That investment of time might be better rewarded with a correspondingly adjustable framework of feedback.
There seem to be a few parallel threads touching this subject, with different examples and sub-focuses, and I have made comments there that might also apply here. I suspect many interested posters, here or there, have been doing the same. No complaint, just a note that some ensemble view might help... somewhere... I don't know how.
My fun is to debate the c..p out of anything, so being accused of being too serious always surprises me. What is being taken too seriously: the importance given to the engine's blunder-call feedback, or the one-size-fits-all assumption that fails to acknowledge the varied chess ambitions across chess enthusiasts?
So yes, it might look like a storm in a teacup, given certain assumptions about the motivations of the individuals wanting feedback.