lichess.org

Could a current top player beat the same DeepBlue that beat Kasparov?

@Akbar2thegreat said in #35:
> @RamblinDave
> Lol! Just 800. Plus whose games were analysed. This much can't do.
> No hate to Nunn, he's my favourite endgame tactician and theorist but analysing games for comparing new players versus old players is not his cup of tea. Tell names of players whose games were analysed and not stupid rankings.

It's not hard to google if you want, or you can follow the link in my original post for the full details of the methodology. But the players involved included Rubinstein, Schlechter, Teichmann, Nimzowitsch, Marshall and Spielmann. Also Alekhine, although at this point he was 19 and some way off the peak of his powers.

> I know who were real top level players over history and mental knowledge and self dependence and thinking ability has been on decline since then over time. This clearly shows that new players are weaker than older players.

So Nunn, a 2600-rated GM, has done some detailed and fairly objective analysis of some of the strongest players of their day, by looking at an unbiased selection of 800 actual games. You conclude that this must be rubbish, and the evidence you've got is that you, an 1800-ish Lichess blitz player, "know who were real top level players over history" and can tell that "thinking ability has been on the decline".

@kalafiorczyk said in #39:
> For those interested in chess necromancy, that is somewhat objective comparison of playing strength of dead chess players or dead chess computers I'm providing links to the important papers of Professor Regan about Intrinsic Performance Ratings:
>
> cse.buffalo.edu/~regan/papers/pdf/ReHa11c.pdf
> cse.buffalo.edu/~regan/papers/pdf/Reg12IPRs.pdf
>
> Unfortunately, those weren't updated for the newer engines, probably because similar methodology is nowadays being used for cheat detection.

The cheating business (both its existence and the fear of it) is wasting knowledge, with the cat-and-mouse self-censorship it forces....

I started reading the PDFs. I noticed the use of intrinsic quality of moves, with an engine as the external reference system. However, engines too have their own strength based on game statistics; there is no truly external reference system being constructed. We just trust the engine because we can't match its legal search depth ourselves, and then we let engines of the same nature compete with each other and optimize their game ratios (under pool constraints whose details are not clear to me across the history of engine competition).

However, the exercise of directly assessing the quality of a move is a scheme where the engine-as-metric could be switched for other constructs. I have only read the abstract so far, but this is getting somewhere for me, thanks. Perhaps this is also the basis for the ubiquitous centipawn metric here on lichess, by the way.
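Since the centipawn metric came up: here is a minimal sketch of how "move quality relative to an engine reference" is usually operationalized as centipawn loss. This is my own illustration with made-up evaluation numbers, not Regan's methodology or lichess's actual code.

```python
# Sketch of the centipawn-loss idea: the engine's evaluation is the
# reference system, and a move's "quality" is how much evaluation it
# concedes relative to the engine's preferred move.

def centipawn_loss(best_eval_cp, played_eval_cp):
    """Evaluation conceded by the played move, in centipawns.

    Both evals are from the side to move's perspective;
    a perfect move loses 0 cp."""
    return max(0, best_eval_cp - played_eval_cp)

def average_centipawn_loss(move_evals):
    """Aggregate over a game: list of (best_eval_cp, played_eval_cp)."""
    losses = [centipawn_loss(b, p) for b, p in move_evals]
    return sum(losses) / len(losses)

# Hypothetical game fragment: (engine's best eval, eval after played move)
game = [(35, 35), (40, 10), (-20, -90), (15, 15)]
print(average_centipawn_loss(game))  # 25.0
```

Note the self-referential issue raised above still applies: the "best" evaluation here is itself just the output of some engine, not an external ground truth.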

Edit: however, using an engine gives retraceable and reproducible testing grounds that could be used over time, and the pools are not connected (I assume), neglecting any issues of how well the chess space is covered as engines evolve, if such issues exist or were ever tested.

"the same DeepBlue that beat Kasparov"

Beat is a strong word. It is true that the computer got the result, but it was a very unfair match. You see, the computer had basically the whole database of Kasparov's games and those of many other GMs. But Kasparov had no information about the computer, nor the logs of how it processed information, and the devs denied him that information. So he went in blind.

This is very important, because Deep Blue was a technological advance over its predecessors. It was unexplored terrain.
You see, back then any computer, as fast as it was, couldn't really handle closed positions, nor did it have a deep tactical sense (it wouldn't trade, say, a rook for a bishop in order to get control over weak squares in the long run; it wouldn't make an unequal trade unless it recovered the lost material within a few moves or it was a mating position).

He was not aware of this. So he was thinking and aiming to get those closed positions, aware that they could be subject to one of those tactics by a human, but also aware that the computer would not take advantage.

To everyone's shock, this new computer could. But by the moment he realized it, he already had lost positions.

Had he known about this beforehand, he wouldn't have aimed for such positions; he entered them on purpose to exploit the engine. He had no reason to think the computer could counter the traps. They couldn't before that match.

Nowadays, GMs are aware of how computers think. That is why you see them making machine-looking moves all the time, where we normally wouldn't find the move: they know how the computer evaluates the position, so they emulate it.

Having said that, since GMs now play more similarly to modern computers, Deep Blue's game is a bit obsolete. So yes, the top 10-15 GMs, if not more, could do the deed on a regular basis.
@Alientcp said in #43:
>
> He was not aware of this. So he was thinking and aiming to get those closed positions, aware that they could be subject to one of those tactics by a human, but also aware that the computer would not take advantage.
>

So expectations of style do matter; thanks for this outlook. Knowing the biases of the engine school of thought (of the time)....
Deep Blue: a black box, a deity. Don't ask.
@RamblinDave
The names you mentioned weren't top players according to various comparisons of top players over history. Nunn was doing some research, and those were theoretical players whose games he was interested in analysing, since he himself was a theorist. His actual research was different from the so-called conclusion. Someone has played with Nunn's words, which he never said; whoever wrote all this info has definitely done something wrong.
@Alientcp
The thing is, humans may look at other players' stats before playing to prepare against them, but a computer simply doesn't think that way, as it lacks the human element. It plays according to the strength of the engine programmed by humans. It doesn't use a player's stats to prepare against them. As simple as that.
And who the hell told you about such a fake story? Or weren't you aware of it?
It seemed to me that you were intelligent enough.
@Akbar2thegreat said in #46:
> @Alientcp
> The thing is, humans may look at other players' stats before playing to prepare against them, but a computer simply doesn't think that way, as it lacks the human element.

Computers are programmed. They follow their programming. If they aren't programmed to do something, they don't do it.
Before Deep Blue, they didn't have much tactical sense; either it was not programmed, or it wasn't very advanced, because they didn't play like that. That's a fact.

> It plays according to the strength of the engine programmed by humans. It doesn't use a player's stats to prepare against them. As simple as that.

Nope, it plays according to the programming it has; the faster the computer, the faster the algorithm is processed. That's what speed gives you, not strength.
What can happen is that a very complex algorithm that takes a long time to process will, under a time cap, provide an inferior result if it doesn't complete.
So a very fast computer will give you a proper result, as the algorithm will finish.
But it's not stronger. It's the same algorithm; it just runs faster on better hardware and will give you a better response than the slow one in the same time frame. That gives the illusion of strength, but it's not stronger.
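To make the time-cap point concrete, here is a toy sketch in Python (my own illustration, not any real engine's code) of iterative deepening under a fixed time budget: the very same algorithm completes more depth, and therefore gives a better answer, purely because the simulated hardware is faster.

```python
import time

def search_to_depth(depth, cost_per_node):
    # Stand-in for a fixed-depth search; cost_per_node models hardware
    # speed, and the 2**depth factor models an exponentially growing tree.
    time.sleep(cost_per_node * (2 ** depth))
    return f"best move found at depth {depth}"

def iterative_deepening(time_cap, cost_per_node, max_depth=20):
    """Search deeper and deeper until the time cap runs out,
    keeping the best fully completed result."""
    deadline = time.monotonic() + time_cap
    best, completed = None, 0
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break
        best = search_to_depth(depth, cost_per_node)
        completed = depth
    return best, completed

# Same algorithm, same 0.1 s time cap; only the "hardware speed" differs.
_, slow_depth = iterative_deepening(0.1, cost_per_node=0.005)
_, fast_depth = iterative_deepening(0.1, cost_per_node=0.0005)
print(slow_depth, fast_depth)  # the fast "hardware" completes more depth
```

The algorithm's logic never changes between the two runs; the faster run simply finishes more iterations before the deadline, which is exactly the "illusion of strength" described above.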

> And who the hell told you about such a fake story? Or weren't you aware of it?
I'm pretty sure it's not fake. Kasparov said it himself; it's on video somewhere. I saw it.

> It seemed to me that you were intelligent enough.
You are a Dunning-Kruger example, dude, right at the top of the graph. You have no ability to gauge my intelligence.

@Alientcp
Buddy, you forgot the actual motive of the discussion.
Computers are programmed to play the top moves with their CPU efficiency, not to look at stats and play accordingly like a human.
@Alientcp said in #47:
> What can happen is that a very complex algorithm that takes a long time to process will, under a time cap, provide an inferior result if it doesn't complete.
> So a very fast computer will give you a proper result, as the algorithm will finish.
> But it's not stronger. It's the same algorithm; it just runs faster on better hardware and will give you a better response than the slow one in the same time frame. That gives the illusion of strength, but it's not stronger.
>

I have been concerned about that; thanks for showing me that at least someone else can see the difference, or for confirming the concern is worth having. So if, in computer competitions, the requirement of symmetric physical time for different algorithms were relaxed, and some other parameter or constraint allowed each engine type to take the time it needs to complete its algorithm (the constraint being on some other parameter common to the two algorithms), they could compete on things other than speed.
@dboing said in #49:
> I have been concerned about that; thanks for showing me that at least someone else can see the difference, or for confirming the concern is worth having. So if, in computer competitions, the requirement of symmetric physical time for different algorithms were relaxed, and some other parameter or constraint allowed each engine type to take the time it needs to complete its algorithm (the constraint being on some other parameter common to the two algorithms), they could compete on things other than speed.

Well, yes. If time is no factor and you give the engines all the time they need, they will play the absolute best they can.
If that happens, you will find out for real which engine is the best.

New algorithms tend to be more efficient than old ones too. But those algorithms are designed to work with current hardware. So new algorithms take full advantage of all the spare memory of new computers, but they would probably run slower on old hardware than old algorithms do, because the old hardware has less memory, creating bottlenecks for the new algorithm.

So the efficiency is relative to the hardware they use.

So, if you use an old algorithm and run it on a modern computer, it won't be stronger. It will only finish its own algorithm faster and give you the best response it can get. This could be an improvement, since maybe back in the day it couldn't finish processing within a certain time frame but now it can. But it will only ever be as strong as its programming.

So, the '80s Battle Chess would be as dumb today on new hardware as it was back in the day; it would only play faster. The algorithm was bad.

This topic has been archived and can no longer be replied to.