ubdip said:
> 2-3%? The top programs have improved at a rate of
> about 50 Elo per year over the last couple of years.
I was not talking about their increase in strength, but about how much the various factors contribute to it.
> The basic algorithms are the same, but many
> heuristics have been improved and nowadays
> the tree is pruned very aggressively, especially
> in Stockfish, so the engines can search much
> deeper even on the same hardware.
Yes - and this is the 2-3%. As long as you do not change the basic algorithm, its Landau symbol (its big-O complexity) stays the same. Note that the relationship between calculated nodes and playing strength is not linear: the last x calculated nodes do not contribute as much to playing strength as the first x nodes do.
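A back-of-envelope way to picture that diminishing return (the logarithmic shape and the constants below are my own assumptions, not numbers from this thread): if strength grows roughly with the logarithm of the searched nodes, then a block of x extra nodes is worth less and less the later it is added.

```python
import math

# Toy model: assume strength grows roughly with the log of searched nodes,
# i.e. a constant Elo gain per doubling. The constant is invented.
ELO_PER_DOUBLING = 60  # hypothetical value, just to make the shape visible

def toy_strength(nodes: int, base_nodes: int = 100_000) -> float:
    """Elo gained relative to a search of base_nodes nodes (toy model)."""
    return ELO_PER_DOUBLING * math.log2(nodes / base_nodes)

# The same 100k extra nodes are worth much more early than late:
print(toy_strength(200_000) - toy_strength(100_000))      # ~60 Elo
print(toy_strength(1_100_000) - toy_strength(1_000_000))  # ~8 Elo
```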
petri999 said:
> 1. Evaluation function has positional understanding
> on ELo 1300. Well perhaps in seventies.
No, still today. The reason is not that it would be impossible to build such an evaluation function (in fact such functions have already been built, way better than 1300), but that the gain from a better function which takes longer to calculate is smaller than the gain from more calculated nodes. It is better (i.e. resulting in more overall strength) to evaluate 1 million nodes shoddily than to evaluate 100k nodes well.
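To make that trade-off concrete, here is a toy budget calculation (all constants are invented for illustration; the post itself only gives the 1-million-vs-100k comparison): a 10x more expensive evaluation buys you 10x fewer nodes in the same thinking time, which at a low effective branching factor costs several plies of depth that the extra knowledge first has to win back.

```python
import math

# Toy time-budget calculation - every constant here is made up.
TIME_BUDGET_US = 1_000_000   # 1 second of thinking time, in microseconds
CHEAP_EVAL_US = 1.0          # cost of a shoddy evaluation per node
GOOD_EVAL_US = 10.0          # cost of a much more knowledgeable evaluation

cheap_nodes = TIME_BUDGET_US / CHEAP_EVAL_US   # ~1,000,000 nodes
good_nodes = TIME_BUDGET_US / GOOD_EVAL_US     # ~100,000 nodes

# With an effective branching factor around 2 (heavy pruning), 10x fewer
# nodes means roughly log2(10) fewer plies of search depth.
plies_lost = math.log2(cheap_nodes / good_nodes)
print(f"{plies_lost:.1f} plies of depth given up")  # ~3.3
```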
Btw.: note that "1300" is not an exact value. The fact is that the less programs know about chess, the better they play.
As long as this is the case, there is simply no reason to make the evaluation function better. Look at Deep Blue: its evaluation function was substandard even for those days, but they built the move generator in hardware (this actually was Hsu's invention; the rest was run-of-the-mill hardware) and put in as many move-generating processors as they could.
The machine was a common IBM SP/2 with 2 frames and 32 thin nodes. A thin node was a single-processor (POWER2) system with up to 512 MB of RAM. I actually built a lot of these systems for Deutsche Bank before they outsourced their data center to IBM. What they did was put two special cards, each with 8 of Hsu's chess processors, into each of the thin nodes.
Another example of that was Hydra, which demolished Mickey Adams 5.5:0.5 in a six-game match in 2005. It was basically a huge FPGA array with little chess knowledge built in, but fast and parallelised.
What adds to this is the increase in size of long-term storage. This space allows accommodating the larger endgame databases with more pieces, as I said above. The storage needed grows about linearly, O(n) in the number of positions, which means that storing 1000 positions takes about 1000 times the space of a single position. As you can see from the table above, the number of positions to store (and hence the necessary space) increases dramatically with an increasing number of pieces.
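To get a feel for how fast that grows, here is a crude upper bound of my own (just counting placements of the pieces on the board, ignoring legality, side to move, castling and en passant rights, and symmetries - it is not the table referred to above):

```python
import math

def rough_position_bound(pieces: int) -> int:
    """Crude upper bound: ordered placements of `pieces` distinct men
    on 64 squares. Ignores legality, side to move, castling/en-passant
    rights, and board symmetries."""
    return math.perm(64, pieces)  # 64 * 63 * ... for `pieces` factors

for k in range(3, 8):
    print(f"{k} -> {rough_position_bound(k):,}")
# 3 -> 249,984
# 4 -> 15,249,024
# 5 -> 914,941,440
# 6 -> 53,981,544,960
# 7 -> 3,130,929,607,680
```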
That means that today's programs can make an informed decision far earlier in the game about whether to trade down into an endgame or not.
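As a minimal sketch of what such a decision can look like in practice (this uses the python-chess library and Syzygy tablebases as an example; neither is mentioned in the post, and the tablebase path is hypothetical): before trading down, probe the position that would arise and only enter the endgame if the tablebase says it is at least a draw for the side to move there.

```python
import chess
import chess.syzygy

def endgame_is_ok(position_after_trade: chess.Board, tb_dir: str = "./syzygy") -> bool:
    """Return True if the position after the trade is a tablebase draw or win
    for the side to move there. Assumes the tablebase files for that piece
    count are present in tb_dir (a hypothetical path)."""
    with chess.syzygy.open_tablebase(tb_dir) as tb:
        # probe_wdl returns -2 (loss), 0 (draw) or +2 (win) from the point of
        # view of the side to move, with -1/+1 for results affected by the
        # 50-move rule.
        wdl = tb.probe_wdl(position_after_trade)
    return wdl >= 0
```

(probe_wdl raises an error if the position is not covered by the installed tables, so a real engine would fall back to its normal search in that case.)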
krasnaya