lichess.org

Could a current top player beat the same DeepBlue that beat Kasparov?

@Alientcp said in #50:
> Well, yes. If time is no factor and you give the engines all the time they need, they will play the absolute best they can.
> If that happens, you will find out for real which engine is the best.
>
> New algorithms tend to be more efficient than old algorithms too.

Some algorithms have been saving time at the expense of coverage (search width): forward pruning, used for a while now, can amplify biases at the leaf-evaluation level. These schemes rely on the bulk "continuity" of scores during iterative deepening, giving an even bigger role to the transposition table than before (continuity meaning that the recent past is "predictive" of the proximal future, or for however long the transposition table can hold that thought).
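To make the ideas above concrete, here is a minimal sketch, not Deep Blue's or any real engine's code: a negamax alpha-beta search with a transposition table that stores bound flags, plus an optional crude forward-pruning rule that skips late moves near the leaves. The toy game, the evaluation function, and the pruning rule are all invented here for illustration.

```python
# Sketch only: toy game where a "position" is the multiset of moves played,
# so different move orders transpose to the same transposition-table entry.
BRANCHING = 3
EXACT, LOWER, UPPER = 0, 1, 2  # bound flags for transposition-table entries

def leaf_score(path):
    # Toy evaluation, deliberately order-independent and deterministic.
    return sum(m * m for m in path) % 7 - 3

def search_root(depth, prune=False):
    tt = {}  # transposition table: (position, remaining depth) -> (flag, score)

    def search(path, depth, alpha, beta):
        if depth == 0:
            return leaf_score(path)
        key = (tuple(sorted(path)), depth)
        if key in tt:
            flag, score = tt[key]
            if flag == EXACT:
                return score
            if flag == LOWER and score >= beta:
                return score
            if flag == UPPER and score <= alpha:
                return score
        orig_alpha, best = alpha, -999
        for move in range(BRANCHING):
            # Crude forward pruning (the bias mentioned in the post): near the
            # leaves, once one move has raised the score, trust it and skip
            # the remaining moves. Cheaper, but it can miss the best move.
            if prune and depth == 1 and move > 0 and best > orig_alpha:
                break
            best = max(best, -search(path + (move,), depth - 1, -beta, -alpha))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff
        flag = UPPER if best <= orig_alpha else LOWER if best >= beta else EXACT
        tt[key] = (flag, best)
        return best

    return search((), depth, -999, 999)
```

Without pruning, the alpha-beta result at the root is exact (identical to a plain full-width minimax); with `prune=True` the search is cheaper but no longer guaranteed exact, which is the trade-off the paragraph above is describing.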

So I am thinking that competing on other things than speed might make a more rounded set of competition conditions, and perhaps help with the proper use of engines in human analysis (less blind, or knowing that more angles are included in the results).
@Akbar2thegreat said in #48:
> @Alientcp
> Buddy, you forgot the actual motive of discussion.
> Computers are programmed to play top moves with their CPU efficiency and not to see stats and play accordingly like human.

I guess you have little knowledge of programming, and probably are too young to know or understand how computers played before.

Computers back in the day had little to no resources in terms of hardware. They followed databases (some still do), but once they were out of theory, they couldn't do much; they didn't have sophisticated algorithms because there was no hardware to back them up. They were dumb. They bottlenecked in closed positions; they couldn't handle positional play. They had no computing power, and the algorithms were not there. They could play the top move, sure, but the top move would only be as good as the programming, and the programming wasn't there.

So any computer, as resourceless as it was, fed with the database, could play as well as any other while following said database, but once out in the woods it would fail miserably. Kasparov was used to playing against that.

Deep Blue not only had top-of-the-line hardware for its time, it also introduced chess programming concepts never seen before, because the new hardware allowed more complex algorithms. The computer was able to play positionally, and it improved greatly in closed positions. It had no precedent.

It's no different from preparing to play against a player who has 1000 games with e4. If you are going to prepare against him, you are going to prepare something against e4. But if he plays d4 or c4, you have nothing. Your preparation was meaningless.

It doesn't matter if the computer doesn't know how you play (although in Kasparov's case, it did, as it had his entire database), but humans do care how the opponent plays.
Kasparov prepared to play against a computer that could not play closed positions and had no positional sense. He discovered soon after that the computer could indeed play those positions.

Kasparov thought, as there was no precedent for that, that in fact Karpov and/or a group of GMs were playing as Deep Blue behind the curtains. The devs denying him the logs, the source code, or anything that could give him a clue how to play against the computer fueled that thought. Computers were not supposed to play like that.

Game 6 of the '96 match:
"2c3 Alapin. This was not unexpected, since at first glance it looks like a good choice for the computer. Kasparov is one of the world's ldeading experts in the main line of the Sicilian, so its natural for the Deep Blue team to select a solid line which is not directly in the mainstream"

They were avoiding the lines from the database where Kasparov had the advantage.

"23. d5! Kasparov was taken completely by surprise. This is the kid of positional sacrifice computers are not supposed to play. Later we found that by sheer brute force Deep Blue had calculated that it could win back the pawn by force."

Game 5 of the '97 match:
19...Nh4. "I do not believe that Kasparov would have played like this against a strong human player. Of course, in a cramped position it is desirable to exchange pieces... Perhaps the first game led Kasparov to believe that the computer would not undertake direct aggressive action, but this time Deep Blue pursued its aims with admirable directness."

If you look at that game, Kasparov played 34...f6. It's not a bad move per se, but it closes the diagonal for his own dark-squared bishop. He was planning to exchange BxB, so closing the position wasn't a problem. But he wasn't counting on the computer actually using its own dark-squared bishop to exchange it for a knight, then blockading the dark-square diagonal, leaving Kasparov with a useless bishop behind the pawn chain and no chance to win.

Computers didn't play like that. You need to understand that. Computers in the past were programmed totally differently than they are today.

en.chessbase.com/post/25-years-ago-deep-blue-beats-kasparov
@Alientcp
Buddy, the topic is not about how computers work, but rather about whether any current top player could defeat the Deep Blue that beat Kasparov.
For that, pointlesswindows and I have put out our similar views.
And I am not young like you; rather, I am a programmer in Python, C/C++, SQL, Java, HTML. I have designed numerous applications as well.
About our debate: computers don't prepare like humans, they just play (take Stockfish for example), and that's a fact. Oops, sorry, you are an Alien so you can't understand. May God make you human in your next life!
@Akbar2thegreat said in #53:
> @Alientcp
> Buddy, the topic is not about how computers work, but rather about whether any current top player could defeat the Deep Blue that beat Kasparov.

I already said why the top 10-15 GMs should be able to beat Deep Blue; I said it in my original post. I don't know why you imply I have not addressed that.

> About our debate: computers don't prepare like humans, they just play (take Stockfish for example), and that's a fact. Oops, sorry, you are an Alien so you can't understand. May God make you human in your next life!

Computers don't prepare at all. They compute the information they are fed, the way they are programmed to compute it.
There was an IBM development team preparing the computer to play specifically against Kasparov. So humans did, in fact, prepare the computer in a new way; I don't know why you are trying to prove otherwise, or that it didn't matter. It is well documented, and Kasparov was very vocal about the subject. Any source will tell you that Deep Blue was an improvement, as not a single computer had played that way before. It was a breakthrough. Kasparov was caught with his pants down, and nobody warned him he was facing something new. As far as he was concerned, he was playing a rudimentary algorithm on very fast hardware.

It was no different when AlphaZero destroyed Stockfish. The brand-new algorithm played like nothing that had come before. Stockfish had no chance.

You can't prepare against something that has not happened before, with no clue that it may happen.
It's the same as when Lee Sedol played against AlphaGo. He had no clue what to expect, no clue about AlphaGo's weaknesses or strengths. He changed his approach during the games to look for a blind spot; he found one, but it was patched by the next day.

You can't win against something that calculates deeper, faster and more accurately than you; you find the weaknesses in the programming and exploit them. IBM denied Kasparov that. So he had to poke the usual spots to test whether the weaknesses were still there, but they had already been patched with newer algorithms. He could have found new weaknesses if he had had the logs or the source code. But he didn't. He lost in the trial and error.
I think the thread allows us to cover the tangential aspects that would help understand the question completely.
What are the obstacles to answering the question, and what context may be required? Good that we can discuss all of that here.

This topic has been archived and can no longer be replied to.