@hicetnunc said in #5:
> So all in all, my understanding is that - yes, we will use processes which we have found to work in the past and it may indeed make it more difficult to discover new ideas, but it doesn't mean we can't be aware that different tasks require different kind of solutions, especially in chess.
The problem is one of self-awareness of one's own familiarity (and maybe bullet doesn't allow that depth of perception, though I'm not sure). I think this is illustrated mathematically in reinforcement learning, deep or not (the "deep" variant usually being used for anything beyond tic-tac-toe scale), by which I mean the machine-learning implementation of the behavioral-psychology theory of learning (an old theory, and not as obsolete as it is sometimes thought to be). The field calls this the dilemma of exploration versus exploitation (exploitation of what has already been explored).
This goes with the idea that the more expert you are, the less exploration you do, because you "know" better what works. But then there is the error margin: in a game as big as chess, which is hard to assume is fully known, one has to compromise at some point, and ratings will reinforce that compromise, at least within one lifetime and with enough people playing in the same neighborhood of the expert pool's expectations. In humans it may not always stay that conservative over time, thanks to our diversity of trajectories; accidents happen, i.e. progress.
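To make the dilemma concrete, here is a minimal sketch of the standard textbook illustration, an epsilon-greedy multi-armed bandit. The arm names and reward values are made up for the example; the point is that with epsilon = 0 (pure exploitation, the "expert" who never explores) the agent can lock onto the first arm that happened to pay off, while a small exploration rate lets it discover a better one.

```python
import random

def epsilon_greedy(rewards, epsilon=0.1, steps=1000, seed=0):
    """Toy epsilon-greedy bandit: `rewards` maps each arm to its mean payoff."""
    rng = random.Random(seed)
    arms = list(rewards)
    estimates = {a: 0.0 for a in arms}  # running estimate of each arm's value
    counts = {a: 0 for a in arms}       # how often each arm was pulled
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.choice(arms)                # explore: try a random arm
        else:
            arm = max(arms, key=estimates.get)    # exploit: pick current best
        payoff = rewards[arm] + rng.gauss(0, 1)   # noisy observed reward
        counts[arm] += 1
        # incremental mean update of the value estimate
        estimates[arm] += (payoff - estimates[arm]) / counts[arm]
    return estimates, counts

# Three hypothetical arms; "b" is objectively best but the agent
# only finds that out if it keeps exploring a little.
est, cnt = epsilon_greedy({"a": 1.0, "b": 2.0, "c": 0.5}, epsilon=0.1)
```

The analogy to the expert is the epsilon schedule: annealing epsilon toward zero as experience accumulates is exactly "the more expert you are, the less exploration you do", and it is optimal only if the environment really holds no more surprises.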
The assumption that there is no time to explore outside a certain hard-won bias, and that there are no real surprises around the corner, is precisely the assumption that comes with success and expertise, and it keeps being reinforced. (I did not start that sentence well; help! But the elements are there.)
@torscani, can you post the link to the paper you shared with me? It does talk about this. I have not read it and don't know its method of investigation, but I remember from the abstract that it touched on this question. I wish I could think the above through and then write it concisely, but then I would not be able to develop it as well. I need that external memory support so my working memory can juggle on top of it (or around it; no hierarchy needed).