AlphaZero, during its learning process, may very well have encoded in its neural network information that amounts to having an "opening book" and an "endgame tablebase". Stripping Stockfish of these therefore certainly amounts to an unfair advantage.
Having said that, I have always believed that the claim that chess is already "practically" solved (to a draw), on the grounds that the best engines today can always hold a draw when starting from an equal position (especially from the starting position), is complacency and utter nonsense.
The fact that already with 7 pieces (as the Lomonosov tablebases prove) there are forced mates that take more than 500 moves (even if the 50-move rule makes some of them technical draws) is proof that chess can generate an unbelievable amount of complexity.
Stephen Wolfram is considered by many to be a crank, but I cannot shake the thought that the basic message laid down in his book "A New Kind of Science" is correct. That message is about computational irreducibility.
Any computationally universal system (that is, one capable of computing any function that is computable in the sense of the Church-Turing thesis) is also computationally irreducible. This means that you cannot make predictions about the evolution of the system without playing through the steps of that evolution. The only way to know what such a system will do is to run it and see what it does. There is no shortcut computation that lets you predict the outcome.
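To make this concrete, here is a minimal sketch in Python of Wolfram's favorite example: the Rule 110 elementary cellular automaton, a system that has been proven computationally universal. As far as anyone knows, there is no general shortcut for learning its state after n steps other than actually iterating all n steps.

```python
# A minimal sketch of computational irreducibility, using the Rule 110
# elementary cellular automaton (a provably Turing-complete system).
# To learn the state after n steps, we must actually run all n steps.

def rule110_step(cells):
    """Apply one step of Rule 110 to a tuple of 0/1 cells (zero boundaries)."""
    # Rule 110 lookup: neighborhood (left, center, right) -> new cell value.
    # The outputs read as the binary number 01101110 = 110, hence the name.
    table = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }
    padded = (0,) + tuple(cells) + (0,)
    return tuple(table[padded[i - 1], padded[i], padded[i + 1]]
                 for i in range(1, len(padded) - 1))

# Start from a single live cell and watch irreducible structure unfold.
state = tuple(1 if i == 60 else 0 for i in range(80))
for _ in range(25):
    print(''.join('#' if c else '.' for c in state))
    state = rule110_step(state)
```

Despite the eight-entry rule table being trivially simple, the pattern it produces supports universal computation; the only way to know what it does is to run it.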
While chess is not a computationally universal system (most obviously because it is finite), it behaves in many ways as if it were one. In particular, it shows signs of computational irreducibility (as the long mating sequences mentioned above show).
To claim that such a system can be tackled to a near-perfect level with tree search combined with heuristics is sheer complacency. It goes against the very essence of computational irreducibility.
There may indeed be different approaches, such as a neural network, that can upset a conventional search engine, simply because the well is very deep and conventional search only scratches its surface.