
Isn't this a theoretical draw?

Recently I had this game:
http://tr.lichess.org/ljtmcwc10kr1
In it I have only a king, while my opponent has a bishop and a pawn whose queening square is the opposite color of the bishop. I successfully managed to get my king to the corner, and I know this is a theoretical draw, but the computer analysis gives my opponent +7.5. Is there any way for him/her to make progress from the final position?
Engines can give weird evaluations in their analysis. You are right: this is clearly a draw, and unless White does something stupid (which I doubt), it stays a draw.
It is indeed dead drawn. Yet I just went and played it out: Stockfish waited two moves to see if I knew how to deal with this position and then gave up its pawn. Weird engines...
There's something strange going on in that engine analysis. I'm not sure what version of SF they're using, or how it's been modified, but Stockfish DD on my phone instantly evaluates the position as 0.00 (Bxb4) after Black played ...b4. Probably the Stockfish here has been modified in some way that causes this bug.
I think the engine only thinks about each move for something like half a second so the server doesn't get overloaded. Run even the best engines for very short times like that and you'll get a few funny moves once in a while.
I highly doubt it's just a matter of insufficient resources. Plug the position into Stockfish DD, and it finds a 0.00 eval for 57.Bxb4 at depth 7, which required 2,837 nodes. Even my old phone gets almost 100,000 nodes per second, so the engine would have to be forced to either:

1) Analyze to a fixed depth less than 7, which I suppose is possible but would make the analysis basically useless

2) Analyze for less than 1/30th of a second per move.

It is much, much more likely that whatever engine (I think it's Stockfish here) is being used has been modified (since Stockfish is open source, that's quite easy to do).

Also, the above is just for the position before White's 57th move.

In the final position, Stockfish DD evaluates the position as 0.00 at depth 1, after 17 nodes. There is simply no way that this evaluation is due merely to insufficient time/depth allowed by the server. It's either a different engine than Stockfish, or a modified Stockfish.
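If anyone wants to reproduce this kind of check themselves, here is a rough sketch using the python-chess library (my choice, not anything lichess uses), driving a locally installed Stockfish binary. The FEN below is just a made-up wrong-bishop position standing in for the real one, so substitute the FEN of the position before White's 57th move from the game:

    import chess
    import chess.engine

    # Placeholder: a made-up wrong-bishop fortress, NOT the game position.
    # Replace with the FEN of the position before White's 57th move.
    board = chess.Board("k7/8/P1K5/8/3B4/8/8/8 w - - 0 57")

    # Assumes a Stockfish binary is on your PATH; adjust the name/path if needed.
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    info = engine.analyse(board, chess.engine.Limit(depth=7))
    print("depth:", info.get("depth"))
    print("nodes:", info.get("nodes"))
    print("score:", info["score"].white())
    engine.quit()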
This is a very interesting thread.

Does the Stockfish engine work out its analysis based on perfect play for both players? And then take the value closest to zero?

You're on the right track. All strong engines are based on an algorithm called minimax. Minimax works best for two-player, perfect-information (there are no unknowns, unlike in poker, for example), zero-sum (what is good for one side is always bad for the other side, so there are no results where both sides win, or both sides lose) games.

The heart of the minimax algorithm is quite simple: you assume both players will choose the best move available to them in any given position. That's why it is so named, since if I evaluate the position from my perspective, when it is my move, I am trying to maximize the evaluation, and when it is my opponent's move, he is trying to minimize the evaluation.

A simple example will make everything clear. Imagine a position in which I have two legal moves (Moves 1 and 2), and my opponent has two legal replies to each of those legal moves (1a, 1b, 2a, and 2b).

So, assume we've already calculated out those four sequences of moves. We start by evaluating the final positions in the first branch (after 1a and 1b).

Let's say that after the opponent's move 1a, I'm up a piece, so that gets an evaluation of +3, while after 1b I'm up a queen, so the evaluation is +9. Well, we assume our opponent will play the best move, which means minimizing the evaluation from our point of view, so we then say that our Move 1 gets an evaluation of +3, since that's what will happen if we play Move 1 and our opponent plays the best move.

Now let's say that after move 2a, the position is dead equal, for an eval of 0, and after 2b, it's slightly better for our opponent, say -0.3 from our perspective. Again, we assume our opponent will play the move with the minimum evaluation from our perspective, so he will play 2b, meaning that our Move 2 from the initial position gets an evaluation of -0.3.

Now we have Moves 1 and 2, with evaluations of +3 and -0.3 respectively, and we are trying to maximize our evaluation, so we will choose Move 1, meaning that the evaluation of the initial position is +3 for us.
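To make that concrete, here is that exact toy tree in code. This is just a Python sketch of plain minimax over the four lines above, nothing like how a real engine represents positions:

    # A minimal minimax sketch over the toy tree from the example above.
    # Leaf values are evaluations from our (the maximizing side's) point of view.
    TREE = {
        "Move 1": {"1a": 3.0, "1b": 9.0},
        "Move 2": {"2a": 0.0, "2b": -0.3},
    }

    def minimax_value(move):
        # Our opponent picks the reply that minimizes our evaluation.
        return min(TREE[move].values())

    def best_move():
        # We pick the move whose worst case (for us) is highest.
        return max(TREE, key=minimax_value)

    for move in TREE:
        print(move, "->", minimax_value(move))   # Move 1 -> 3.0, Move 2 -> -0.3
    print("Best move:", best_move())             # Move 1, evaluation +3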

There's another technique that all strong engines use on top of minimax, called alpha beta search. The core of alpha beta is the idea that as we go through the tree, we can use the results we've already discovered to realize that some parts of the tree don't have to be searched completely.

In the example I've used, with 1a +3, 1b +9, 2a 0, 2b -0.3, alpha beta would start out the exact same way, by looking at 1a and 1b, and then saying that move 1 is +3 with best play.

However, the moment it sees that move 2a from our opponent gives 0, it will stop searching the opponent replies to move 2.

Why? Because since our opponent is choosing the moves with the lowest evaluation, we know after just looking at 2a that the BEST we can do is going to be 0. There may be another reply that is lower than 0, so our opponent would choose it, but at best we are going to get 0. We already know that move 1 is +3, so if move 2 is going to be at most 0, then we can stop searching, since it will never be better than move 1.

That means that we still play the best move, but we only had to search 3 positions instead of 4, a savings of 25%. Alpha beta works best when you look at the best moves first. In that case, it has been shown that the number of positions the engine needs to look at is cut to roughly the square root of what basic minimax would require, which, in chess, with 20, 30, 40 legal moves per side per turn, is a huge help in searching deeper.
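Here is the same toy tree again with that cutoff added, again just a sketch rather than real engine code:

    # Alpha-beta over the same toy tree: stop searching a move's replies as
    # soon as we know the move can't beat the best one found so far.
    TREE = {
        "Move 1": {"1a": 3.0, "1b": 9.0},
        "Move 2": {"2a": 0.0, "2b": -0.3},
    }

    def best_move_alphabeta():
        best_value = float("-inf")   # best result we (the maximizer) have so far
        best = None
        leaves_searched = 0
        for move, replies in TREE.items():
            value = float("inf")     # opponent's best (lowest) reply so far
            for score in replies.values():
                leaves_searched += 1
                value = min(value, score)
                if value <= best_value:
                    # Cutoff: this move is already no better than our best move,
                    # so the remaining replies don't need to be searched.
                    break
            if value > best_value:
                best_value, best = value, move
        return best, best_value, leaves_searched

    print(best_move_alphabeta())   # ('Move 1', 3.0, 3): only 3 of the 4 leaves searched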

Now, modern engines use a LOT of other search techniques that enable them to search deeper with less work, which is why engines are so much stronger today than 10 years ago, even accounting for hardware improvements. Alpha beta alone on today's hardware with a good evaluation is enough for an engine to play maybe 2300ish. The remaining 700 points all come from other improvements to the search, where you will find such exciting terms as null move pruning, razoring, killer moves, and futility pruning.

Isn't chess programming fun? :)
I haven't looked into chess programming for a few decades, since the days when David Levy was the guru; it sounds like it has come on a lot since then. Thanks for the tutorial.
