Why do chess bots have super high ratings?

Why do chess bots like Stockfish and AlphaZero have such incredibly high ratings compared to humans? For example, Stockfish has a rating around 4000 while Magnus has 2850+. Is it because the bots play a lot of games, gaining rating faster than humans?

I was in the tournament room at the American Open, in 1987 or 1988 I believe, when GM Bent Larsen lost the first tournament game by a GM against a computer program. Engines were allowed to enter tournaments in those days. It had an eerie feeling; no computer engine had beaten a GM before that day. Then, a few years later, came the epic battles between World Chess Champion Garry Kasparov and Deep Blue. A few more years after that, GM Michael Adams lost a six-game match to a computer (bot, engine, whatever you want to call them), scoring only a single draw. Since then about 20 more years have passed, computers have gained about 300 more points, and at best the top players can make a draw here and there and maybe win one game in a hundred. The "bots" are simply too fast with calculations, and AlphaZero is an AI nowadays as well. @LDFerrer_2009_Rizal So there is a history to learn there, one that goes way back to fake "automatons" secretly operated by hidden men, such as "The Turk". You can pretty much say computers surpassed humans forever around the year 2000, so nowadays people use them to help their game by studying with them: study, play, play, study chess with chess books and chess videos, while using the engine to look at the games presented and learn.

Chess ratings are measured by success. The higher the win ratio, the higher the rating.

Some players win much more often than others. The rating inflation stops when the rating gap between the matched players goes beyond the settings of the rating system: beating a far lower-rated opponent earns almost nothing.

Players need to play in their own category, because it takes more games to increase a rating than to decrease one. To remain polite, the ethics during a match impact the time it takes to increase a rating. Engine matches don't have that problem: they can be set to resign or to agree a draw at a given centipawn threshold. Humans don't have that feature, but they can avoid the temptation to squeeze rating out of a losing game. Engines do lose on time, but I'm not sure it's because of the position; it probably happens less often than with humans and probably disrupts the win ratio less.
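
A minimal sketch of the standard Elo update may make the stalling effect concrete; the K-factor of 16 and the ratings below are illustrative assumptions, not any site's actual settings:

```python
# Minimal Elo sketch. Assumptions: K = 16, the usual logistic curve (base 10, scale 400).
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, score_a: float, k: float = 16.0) -> float:
    """A's new rating after one game; score_a is 1 (win), 0.5 (draw), or 0 (loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# Beating an equal opponent gains a full K/2 = 8 points...
print(update(2850, 2850, 1.0) - 2850)  # 8.0
# ...but beating someone rated 650 below gains next to nothing (~0.37),
# which is why a rating stops climbing once the gap gets large.
print(update(2850, 2200, 1.0) - 2850)  # ~0.37
```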

@LDFerrer_2009_Rizal said in #1:

> Why do chess bots like Stockfish and AlphaZero have such incredibly high ratings compared to humans? For example, Stockfish has a rating around 4000 while Magnus has 2850+. Is it because the bots play a lot of games, gaining rating faster than humans?

No human has a chance against Stockfish. Even with Stockfish running on a smartphone, you would beat Carlsen without problems, and on a fast PC it is stronger still. That is why engines like Stockfish don't have just one rating: it is always the combination of engine plus hardware that defines the strength. But that distinction is too complicated for most people.
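
To see the engine-plus-hardware point in practice, here is a small sketch using the python-chess library; the binary path "stockfish" and the option values are assumptions for the example, and the same binary plays at very different strengths depending on the threads, hash, and time it is given:

```python
import chess
import chess.engine

board = chess.Board()

# Assumed path to a local Stockfish binary; adjust for your system.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

# Same engine, starved configuration: one thread, tiny hash, 50 ms per move.
engine.configure({"Threads": 1, "Hash": 16})
weak = engine.play(board, chess.engine.Limit(time=0.05))

# Same engine, generous configuration: eight threads, 1 GB hash, 10 s per move.
engine.configure({"Threads": 8, "Hash": 1024})
strong = engine.play(board, chess.engine.Limit(time=10.0))

print(weak.move, strong.move)
engine.quit()
```

This is also why serious engine rating lists pin down the hardware and time control before quoting a number.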

Now, why does one engine have a superhuman rating compared to another engine? Well, that depends on the engine tournament constraints, which have not changed much over all those decades. We humans, on the other hand, have enough different upbringings that we remain individuals when playing chess (I hope), so we can trust our pool-based ratings to mean something more than the differential ratings of the engines, which are running some other race toward better play than we are.

There is also the question of width versus depth... We humans, individually, might actually be better than any engine in a few positions each, but since we can't be fast at depth like them, none of us gets to beat them one on one... so they just have to be faster for long enough, and they will always beat us.

And that is true among the engines themselves too...

Ratings are competitive pairing statistics based on pools of players. So we would need to look at the diversity of the engine pools, and we can't rely on engine demographics to have yielded as many different kinds of programming as human demographics yield players; in fact, the opposite happens there. Programmers focused on such engine-pool Elo ratings do what is natural and borrow the good algorithmic features from each other, and that keeps the Elo increasing, often by trading against computation cost (time) and against width (not just superfluous width)... As long as one engine is faster for the same accuracy (unknowable, for now), it will win over the other. The last time this was not the case on the engine ladders was a few SF versions ago, before NNUE... SF got lucky with A0 and LC0. Until another piece of luck comes along, hopefully.
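
To make the pool point concrete, here is a hedged little simulation with made-up true strengths: Elo only pins down the gaps inside a pool, so two pools that never play each other produce numbers around whatever anchor they started from, and a 3700 from one pool is not directly comparable to a 2850 from another:

```python
import random

random.seed(0)  # reproducible sketch

def expected(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def simulate_pool(true_strengths, games=20000, k=16.0, start=1500.0):
    """Random pairings inside one pool; ratings converge to relative gaps only."""
    ratings = [start] * len(true_strengths)
    for _ in range(games):
        a, b = random.sample(range(len(true_strengths)), 2)
        # Result drawn from the (hidden) true strengths; draws ignored for brevity.
        win = random.random() < expected(true_strengths[a], true_strengths[b])
        delta = k * ((1.0 if win else 0.0) - expected(ratings[a], ratings[b]))
        ratings[a] += delta
        ratings[b] -= delta
    return [round(r) for r in ratings]

# Two pools that never meet. Pool B is 1000 points stronger in truth,
# yet both come out centered on the same 1500 anchor:
print(simulate_pool([2700, 2800, 2900]))  # roughly [1400, 1500, 1600]
print(simulate_pool([3700, 3800, 3900]))  # roughly [1400, 1500, 1600]
```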

Actually, that is a good question: what is the exact pool of games that gives engines their ratings?

Does each version of SF start its rating from a pool of games where all the other engines in the tournaments also have whatever uninformed initial conditions a newcomer would have?

Do old versions of SF, or other engines already tested in engine tournaments, keep their last rating going in?

Is a pool of humans also part of the pool of tournaments? All those questions might make me revise my previous post to some extent.

We would need the whole history of engine tournaments and their pools of games or players (human or not).
Ratings and pools of games go together, or they are meaningless... Then someone might think of characterizing the pools, if one ever wanted to go beyond a competitive rating measure and into notions of accuracy (mathematically and statistically, we can't just assume that we have covered all of chess with whatever history of pools we have had, without any clue of what those pool constituents covered)...

This topic has been archived and can no longer be replied to.