
Memory: The Key To Chess?

<Comment deleted by user>

Thanks for the interesting link.
https://storage.googleapis.com/uncertainty-over-space/alphachess/index.html

I like the concept of chunks. Endgame mating patterns are chunks of knowledge too. Knowing endgames also helps you recognize those patterns during the game and use them to win other pieces, not only to mate: "mating" the queen or the rook is just capturing a piece. So those mating patterns and tactics, with all the uninvolved extra pieces stripped away, are great chunks of information.
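(To make the "chunk" idea concrete: here is a toy sketch of my own, nothing from the article, where each chunk is a named bundle of position features, and the same bundle fires whether the hunted piece is the king or a cornered rook. All the feature names are invented for the illustration.)

```python
# Toy illustration: mating/tactical patterns stored as reusable "chunks"
# keyed by a handful of position features.
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    name: str
    features: frozenset  # hypothetical pattern features

CHUNKS = [
    Chunk("back-rank mate", frozenset({"king_on_back_rank", "no_luft", "rook_on_open_file"})),
    Chunk("smothered trap", frozenset({"piece_cornered", "escape_squares_covered", "knight_fork_available"})),
]

def matching_chunks(position_features: set) -> list[str]:
    """Return the names of stored chunks whose features all appear in the position."""
    return [c.name for c in CHUNKS if c.features <= position_features]

# A position where the defender's rook is cornered triggers the same chunk
# that would otherwise announce a mating net around the king.
print(matching_chunks({"piece_cornered", "escape_squares_covered",
                       "knight_fork_available", "extra_noise"}))
```

The point is only that a "chunk" can be something as mundane as a reusable feature bundle, not a memorized whole position.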

<Comment deleted by user>

Thank you for the heads-up on the paper about internal representations of chess in AlphaZero.

I was curious about any such attempt. It may be interesting to reduce the chess problems to endgames, where we can control the human rational factor, use spatial reasoning to reduce the amount of calculation, and use a smaller NN, or the minimal size that does the job, to figure out the patterns that emerge in such cases.

I am also curious about the methodology of such pattern extraction, not back onto the chessboard but in the embedding latent space (I guess I have to read that paper). That peek into that kind of representation was long overdue.
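(The standard way to do that kind of extraction, as far as I know, is with probes: train a simple classifier to read a human chess concept straight out of a layer's activations and see how well it manages. A minimal sketch of that idea, with made-up data standing in for whatever activations and concept labels one would actually pull from the engine; this shows the shape of the technique, not the paper's code.)

```python
# Minimal sketch of concept probing in the latent space: fit a linear probe
# that predicts a human chess concept from a network's internal activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_concept(activations: np.ndarray, concept_labels: np.ndarray) -> float:
    """Return held-out accuracy of a linear probe: activations -> concept (0/1)."""
    X_train, X_test, y_train, y_test = train_test_split(
        activations, concept_labels, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_train, y_train)
    return probe.score(X_test, y_test)

# With random data the probe sits near chance (~0.5); a concept that is
# linearly readable from the layer would score well above that.
rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 256))      # placeholder for one layer's activations per position
labels = rng.integers(0, 2, size=500)   # placeholder concept, e.g. "has a passed pawn"
print(probe_concept(acts, labels))
```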

I wonder why it had to be done with AlphaZero, which is not open source, rather than with Lc0, though, for reproducibility's sake.
Lc0 might learn from that in its own development, being a more promising scientific basis for such experiments and findings.

Yes, my post is very editorial. Also, to spare people from searching sequentially through links: https://arxiv.org/abs/2111.09259

I suggest that people do attempt to read the paper itself, or at least the section on the AlphaZero architecture and what "knowledge" means in that context. The paper seems to do a good job.

An open-source engine like Lc0 would have allowed sharing NN weights for independent research and reproducible experimentation. The article does say that the results themselves are available online, but not the code or the NN weights.


While the article provides some interesting insights into the relationship between chess and memory, it mostly relies on anecdotal evidence and incomplete research. While Magnus Carlsen is certainly a chess prodigy with an exceptional memory for the game, it is not clear whether his memory abilities are the sole reason for his success. Moreover, the article suggests that chess masters' memory prowess is limited only to chess, which is a questionable statement since it doesn't take into account the possible overlap between chess and other memory tasks.

The article also discusses the concept of chunking, which is a well-established theory in cognitive psychology. However, it is unclear what these chess chunks would consist of, and the author cites a psychologist's estimation that a grandmaster knows 1.8 million chunks, without providing any evidence for this claim. This overly simplistic view of memory overlooks the complexity and variability of chunking across different individuals and domains.

Furthermore, the article makes an inaccurate comparison between the workings of a neural network chess engine like AlphaZero and the human brain. While neural networks can provide insights into how humans process information, they are not an accurate model of how the human brain works. The article also cites a study that examines AlphaZero's internal representations of chess positions to suggest that grandmasters' chunks could be found there. However, the authors themselves state that not all of these concepts correspond to any known chess concept, which weakens the argument that they can be used to decode grandmasters' memories.

Finally, the article implies that Magnus Carlsen's success can be attributed to his exceptional interest in chess. While interest and passion are important factors in achieving success in any field, they do not fully explain the complexities of expertise development. Chess mastery requires extensive knowledge, strategic thinking, and pattern recognition, among other skills, which cannot be fully attributed to a person's interest or memory abilities.

In conclusion, while the article provides some interesting insights into the relationship between chess and memory, it oversimplifies the complexity of memory and expertise development, and relies on incomplete evidence to support its claims.


"And the memory feats of chess masters are limited to chess: when it comes to remembering other sorts of things, they are no better than average."

Obviously there are exceptions. Nakamura has displayed some impressive results in memory games on his stream.

"And the memory feats of chess masters are limited to chess: when it comes to remembering other sorts of things, they are no better than average." Obviously there are exceptions. Nakamura has displayed some impressive results in memory games on his stream.

I think some of us should go and do a Google search for visual cortex architecture and the Hubel and Wiesel models.

The deep NN architecture that is now so ubiquitous throughout AI, built from feed-forward convolutional connectivity layer upon layer with dense connectivity in the final layers, is a hallmark of the visual cortex: seven or so "layers" expand the retinal input into internal features, each layer acting like a different point of view on the same object being decomposed and recomposed. Edge detectors and so on (that is for natural visual inputs). The recomposition from such an expanded representation scheme is even thought to carry, further up the hierarchy, to face recognition. The hierarchical aspect of sensory input processing contains that pattern knowledge.
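(A minimal sketch of that layout, my illustration and not Lc0's or AlphaZero's actual network: convolutional layers that expand the 8x8 board into feature planes, then dense layers that recompose them into a single evaluation.)

```python
import torch
import torch.nn as nn

board_planes = 12  # one plane per piece type and colour, a common board encoding

model = nn.Sequential(
    nn.Conv2d(board_planes, 32, kernel_size=3, padding=1), nn.ReLU(),  # local, edge-detector-like features
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),            # larger composite patterns
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),   # dense recomposition of the expanded representation
    nn.Linear(128, 1), nn.Tanh(),            # a single evaluation in [-1, 1]
)

dummy_position = torch.zeros(1, board_planes, 8, 8)  # one empty, made-up position
print(model(dummy_position).shape)  # torch.Size([1, 1])
```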

Now we are not just trying to tell a pawn from a queen... so how far such a connectionist model of perception goes, I don't know, but it really is the basic architecture of a whole big slab of cortex.

The part that is not really biological is the weight updating. But the principle that synaptic strength is key to knowledge in the wet brain is well accepted. The dynamics during learning may not be how the sequential, error-based updating algorithms of artificial NNs do it; in our brains there is not really a central bottleneck like that, but rather parallel processing.
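(For reference, the non-biological part in question is just error-driven gradient descent on the weights; a bare-bones sketch of one such update step, nothing more:)

```python
def sgd_step(weights, gradients, learning_rate=0.01):
    """One error-driven update: w <- w - lr * dE/dw, applied weight by weight."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

# Example: two weights nudged down the error slope.
print(sgd_step([0.5, -0.2], [1.0, -3.0]))  # approximately [0.49, -0.17]
```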

Our impression that we have a stream of consciousness results from not being aware of that parallel processing; it might be an emergent property of how such a bushy thing gets herded into a tip-of-the-iceberg stream (I suspect frontal-lobe time segmentation of all that stuff)... vaguer and vaguer...

Anyway, I find artificial NNs better suited to chessboard vision than linguistic notions of memory chunks. But it is not a competition, just a complement.
