All the talk all over the web about equal or level playing fields is missing the point.
The point is that there is an approach to chess and other games (neural nets for policy and eval) that hasn't worked very well for chess in the past: Matthew Lai's Giraffe was the best previous attempt, and it was close to a thousand points behind the best engines.
Many people were skeptical that the approach could ever work for chess, just in principle, because of how different the game is from Go and other games for which the approach had seen some moderate success.
What this shows is that the approach can do quite well indeed in chess.
Sure, achieving this level of strength required Google's TPUs, special-purpose hardware for neural-network inference, but the point is that the approach can actually work.
What's more, using Google's TPUs for playing (where it used only four) is not quite the same as using some monstrous cluster of ridiculous hardware.
Google's TPUs actually don't use a lot of electricity, so by one metric, power utilization, SF and AlphaZero were actually on a roughly even playing field anyway.
It's not AlphaZero's fault that SF was written to run on general-purpose CPUs that are less efficient for what SF does than TPUs are for what AlphaZero does :)
Again, quibbling about things like the time control, opening books, hardware, hash settings, etc. just misses the point.
Even if AlphaZero had just gotten within a couple hundred points of SF, it would have been an amazing result for the reasons described earlier: the whole neural-network paradigm was suspected by many to simply be inapt for chess, and it turns out that it can actually work quite well.
On the one hand, sure, it's not as though AlphaZero played like God: it performed somewhere in the neighborhood of 65-100 Elo points above SF, and I don't think anyone will proclaim the death of chess in two years when SF has improved itself by 100 points. So we shouldn't get too worshipful of it. On the other hand, we shouldn't get overly defensive about current engines like SF and look for excuses.
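For a sense of scale, here's a rough sketch of what a 65-100 point edge means in practice, using the standard Elo expected-score formula (this calculation is mine, not something from the match report):

```python
# Standard Elo expected-score formula:
#   E = 1 / (1 + 10^(-d/400))
# where d is the rating difference in the stronger player's favor
# and E is the stronger player's expected score per game
# (win = 1, draw = 0.5, loss = 0).
def expected_score(elo_diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

for d in (65, 100):
    print(f"+{d} Elo -> expected score {expected_score(d):.3f}")
```

So even the upper end of that range, +100 Elo, only works out to an expected score of about 0.64 per game: clearly stronger, but nowhere near crushing.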
It's an impressive result that indicates not only that neural nets can work well for chess (again, this was in doubt previously), but that they can do so self-trained, with relatively little electricity and time invested (the amount of electricity used by the 5,000 TPUs for training is still a LOT less than that used by all the computers in the SF testing framework over the last few years).
Having said that, the paper shows that AlphaZero's improvement in chess flattened out really quickly, so some of the intuitions about the approach being less apt for chess than for a game like Go might hold some water. There certainly wasn't any indication that it was going to get measurably stronger if it had trained for another couple of days; most of its improvement came in what was probably the first 75 minutes or so of self-play and training.
Of course, this was just a first try with a very general-purpose approach; it's quite possible that with improvements to the approach it could do even better, but I'm not sure we'll get to see such an attempt.
Let's just take it for what it is, and await the release of the full paper :)