
7-piece Syzygy tablebases are complete

Only 10^60 positions to go for 32-piece Syzygy :)
Wow
423,836,835,667,331 positions
How long did it take?
0.3 bits of storage per position??? The math looks right, but I really can't believe this value; that's some kind of voodoo stuff ;-)
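
A quick back-of-the-envelope check of that density, using only the position count quoted above (the real set is split across WDL and DTZ tables, so the actual totals differ):

positions = 423_836_835_667_331  # unique positions, as quoted above
bits_per_position = 0.3          # claimed storage density

total_tb = positions * bits_per_position / 8 / 1e12  # bits -> bytes -> TB
print(f"~{total_tb:.0f} TB")     # ~16 TB for the whole 7-piece set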
The API looks nice in Python: python-chess.readthedocs.io/en/latest/syzygy.html
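
Probing a position really is just a few lines with that library. A minimal sketch, assuming a recent python-chess and tables downloaded to a hypothetical ./syzygy directory:

import chess
import chess.syzygy

# Point this at a directory of .rtbw/.rtbz files (path is an example).
with chess.syzygy.open_tablebase("./syzygy") as tablebase:
    board = chess.Board("8/2K5/4B3/3N4/8/8/4k3/8 b - - 0 1")  # KBN vs K
    # WDL from the side to move's view: 2 = win, 0 = draw, -2 = loss;
    # +/-1 are wins/losses spoiled by the 50-move rule.
    print(tablebase.probe_wdl(board))  # -2: Black to move is lost
    # DTZ: signed distance in plies to the next zeroing move.
    print(tablebase.probe_dtz(board))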
Well done, pretty amazing. You open-source guys are out of this world.
@sk0bel Probably a combination of mathematical analysis and computation. However, the tablebase generator really had to enumerate all the positions.

@BKIRCA But it is on the right side, isn't it?

@Shipustik Cheers, and more importantly thanks to Bojun Guo and Ronald de Man, who shared their work with everyone (including Lichess).

@Entuzijasta A full 8-piece set within 10 years does not look likely at this point, but I'd like to be proven wrong. RdM mentioned it might be feasible to generate a few well-chosen 8-piece tables to beat the record for the longest endgame.

@kettwiesel Pretty much exactly 5 months. I don't know more details about the hardware than given in the blog post. The compression really is excellent. Similar positions often have similar outcomes, especially if there is a material imbalance that gives one side a big advantage. Glad you like my Python library :)
@revoof Excellent to hear the compression is indeed excellent! I can't imagine doing multiple passes to compare compression schemes...
@Toadofsky It's a custom scheme based on RE-PAIR compression (http://www.larsson.dogma.net/dcc99.pdf). RdM probably did plenty of experimentation when he originally built the tables with fewer pieces a couple of years ago. And then, luckily, it also performs quite well with more pieces (in fact, even better).
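
For a feel of the core idea (a naive, quadratic sketch, nothing like RdM's tuned implementation): repeatedly find the most frequent adjacent pair of symbols, replace every occurrence with a fresh symbol, and record the rule.

from collections import Counter

def repair(seq):
    # Naive RE-PAIR: replace the most frequent adjacent pair with a
    # new symbol, record the grammar rule, and repeat until no pair
    # occurs at least twice.
    seq = list(seq)
    rules = {}         # new symbol -> (left, right) it expands to
    fresh = 256        # assume input symbols fit in a byte
    while True:
        counts = Counter(zip(seq, seq[1:]))
        if not counts:
            break
        pair, n = counts.most_common(1)[0]
        if n < 2:
            break      # no repeated pair; nothing left to gain
        rules[fresh] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(fresh)   # replace the pair, left to right
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, fresh = out, fresh + 1
    return seq, rules

compressed, rules = repair(b"abababab")
print(compressed, rules)
# [257, 257] {256: (97, 98), 257: (256, 256)}

The paper linked above shows how to do the pairing in linear time, and the resulting grammar is then entropy-coded; this sketch only shows the pairing step.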

I tried to summarize what I learned about the format here: chess.stackexchange.com/a/22210/3122. A notable feature is that it can take advantage of "don't care" values, say for illegal positions, or positions with winning captures.
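
A toy illustration of the "don't care" trick (not the actual heuristic Syzygy uses): before compression, any entry whose value never matters can be overwritten with whatever makes the surrounding data cheaper to encode, for example by copying the previous value to lengthen runs.

def fill_dont_cares(values):
    # Toy heuristic: give every don't-care entry (None) the value of
    # its left neighbor (defaulting to draw at the start) so later
    # compression sees longer repeats.
    filled, prev = [], "D"
    for v in values:
        if v is None:
            v = prev        # any value is fine here; pick the cheapest
        filled.append(v)
        prev = v
    return filled

# W = win, D = draw; None = illegal position or a winning capture exists.
print(fill_dont_cares(["W", None, None, "W", "D", None, "D"]))
# ['W', 'W', 'W', 'W', 'D', 'D', 'D']: two clean runs instead of a mixed sequence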

Since Syzygy is intended for practical use in engines, it's important that decompression is fast, which is why he deliberately traded even better compression (which is possible) for fast probing.
It's interesting to see that the DTZ table is actually smaller than the WDL table.
That's probably for the exact reason mentioned by @revoof, to make decompression faster, but it also shows just how good the DTZ compression algorithm is.
