lichess.org

Is this AI analysis of a 5 game losing streak correct?

To estimate the probability of losing five chess games in a row against players with given Elo ratings, while considering your move accuracy rates, we need to make some assumptions and simplifications. The Elo rating system provides a framework to calculate expected win probabilities, and the accuracy rates can be used as a proxy to adjust for performance quality. Here’s a step-by-step approach to roughly calculate this probability:
Assumptions:
Elo-Based Expected Probability: The Elo rating difference between you and each opponent determines the expected probability of winning, drawing, or losing.

Accuracy Rates: The provided accuracy rates (93%, 90%, 91%, 96%, 94%) reflect your performance in each game. Higher accuracy suggests you played closer to optimal moves, but it doesn’t directly translate to win probability. We’ll use accuracy to adjust the effective Elo performance in each game.

Independence of Games: Each game is independent, so the probability of losing all five is the product of the individual loss probabilities.

No Draws: For simplicity, we’ll assume games result in a win or loss (ignoring draws), as this simplifies the calculation and aligns with your query about losing all five games.

Accuracy Adjustment: Accuracy rates will adjust your effective Elo rating per game to reflect performance quality. We’ll assume a high accuracy (e.g., 90%+) means you played close to your nominal Elo, while lower accuracy reduces your effective Elo.

Step 1: Elo-Based Expected Probability
The expected probability of winning a game against an opponent is given by the Elo formula:
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}
Where:
R_A is your Elo rating (2296 initially).

R_B is the opponent’s Elo rating.

E_A is the expected probability of you winning.

The probability of losing is approximately 1 − E_A (assuming no draws for simplicity).
Let’s calculate the expected win probability for each opponent:
Game 1: Opponent Elo = 2259
E_A = \frac{1}{1 + 10^{(2259 - 2296)/400}} = \frac{1}{1 + 10^{-0.0925}} \approx \frac{1}{1 + 0.807} \approx 0.553
Probability of losing: 1 − 0.553 = 0.447.

Game 2: Opponent Elo = 2220
E_A = \frac{1}{1 + 10^{(2220 - 2296)/400}} = \frac{1}{1 + 10^{-76/400}} \approx \frac{1}{1 + 0.676} \approx 0.597
Probability of losing: 1 − 0.597 = 0.403.

Game 3: Opponent Elo = 2104
E_A = \frac{1}{1 + 10^{(2104 - 2296)/400}} = \frac{1}{1 + 10^{-192/400}} \approx \frac{1}{1 + 0.331} \approx 0.751
Probability of losing: 1 − 0.751 = 0.249.

Game 4: Opponent Elo = 2260
E_A = \frac{1}{1 + 10^{(2260 - 2296)/400}} = \frac{1}{1 + 10^{-36/400}} \approx \frac{1}{1 + 0.813} \approx 0.552
Probability of losing: 1 − 0.552 = 0.448.

Game 5: Opponent Elo = 2161
E_A = \frac{1}{1 + 10^{(2161 - 2296)/400}} = \frac{1}{1 + 10^{-135/400}} \approx \frac{1}{1 + 0.464} \approx 0.683
Probability of losing: 1 − 0.683 = 0.317.
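The per-game numbers above can be reproduced with a short Python sketch (the function and variable names are mine, not from the post). Running it matches the quoted figures to within rounding for four of the five games; Game 2 actually comes out closer to E_A ≈ 0.61 than the quoted 0.597, which suggests a small arithmetic slip in the AI's intermediate step.

```python
# Sketch of the Step 1 calculation: Elo expected score, with
# P(loss) taken as 1 - E_A under the post's no-draws simplification.

def expected_score(r_a: float, r_b: float) -> float:
    """Elo expected score for a player rated r_a against one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

my_elo = 2296
opponents = [2259, 2220, 2104, 2260, 2161]

for i, opp in enumerate(opponents, start=1):
    e_a = expected_score(my_elo, opp)
    print(f"Game {i}: E_A = {e_a:.3f}, P(loss) = {1 - e_a:.3f}")
```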

Step 2: Adjust for Accuracy
Your accuracy rates (93%, 90%, 91%, 96%, 94%) suggest how well you played relative to an engine’s optimal moves. High accuracy (e.g., 90%+) typically indicates strong play, but it doesn’t guarantee a win, especially against strong opponents. To incorporate accuracy, we can estimate an “effective Elo” for each game by scaling your performance based on accuracy. A rough heuristic is to adjust your Elo downward if accuracy is below a baseline (e.g., 95%) and slightly upward if above.
Let’s assume:
Baseline accuracy = 95% corresponds to your nominal Elo (2296).

For every 1% below 95%, reduce effective Elo by ~20 points (a rough estimate based on typical performance correlations).

For every 1% above 95%, increase effective Elo by ~20 points.

Adjustments:
Game 1: Accuracy = 93% → 2% below 95% → Effective Elo = 2296 − 2×20 = 2256

E_A = \frac{1}{1 + 10^{(2259 - 2256)/400}} = \frac{1}{1 + 10^{3/400}} \approx \frac{1}{1 + 1.007} \approx 0.498
Loss probability: 1 − 0.498 = 0.502.

Game 2: Accuracy = 90% → 5% below 95% → Effective Elo = 2296 − 5×20 = 2196

E_A = \frac{1}{1 + 10^{(2220 - 2196)/400}} = \frac{1}{1 + 10^{24/400}} \approx \frac{1}{1 + 1.148} \approx 0.465
Loss probability: 1 − 0.465 = 0.535.

Game 3: Accuracy = 91% → 4% below 95% → Effective Elo = 2296 − 4×20 = 2216

E_A = \frac{1}{1 + 10^{(2104 - 2216)/400}} = \frac{1}{1 + 10^{-112/400}} \approx \frac{1}{1 + 0.526} \approx 0.655
Loss probability: 1 − 0.655 = 0.345.

Game 4: Accuracy = 96% → 1% above 95% → Effective Elo = 2296 + 1×20 = 2316

E_A = \frac{1}{1 + 10^{(2260 - 2316)/400}} = \frac{1}{1 + 10^{-56/400}} \approx \frac{1}{1 + 0.724} \approx 0.580
Loss probability: 1 − 0.580 = 0.420.

Game 5: Accuracy = 94% → 1% below 95% → Effective Elo = 2296 − 1×20 = 2276

E_A = \frac{1}{1 + 10^{(2161 - 2276)/400}} = \frac{1}{1 + 10^{-115/400}} \approx \frac{1}{1 + 0.514} \approx 0.660
Loss probability: 1 − 0.660 = 0.340.
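The Step 2 heuristic can be sketched the same way. Note that the 20-points-per-percent conversion and the 95% baseline are the post's own assumptions, not an established mapping from accuracy to rating; the helper names are mine. The result lands within rounding of the quoted 0.502 / 0.535 / 0.345 / 0.420 / 0.340.

```python
# Sketch of the Step 2 heuristic: shift effective Elo by ~20 points
# per percentage point of accuracy away from a 95% baseline.
# Both constants are the post's assumptions, not established values.

def expected_score(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def effective_elo(nominal: float, accuracy: float,
                  baseline: float = 95.0, pts_per_pct: float = 20.0) -> float:
    return nominal + (accuracy - baseline) * pts_per_pct

my_elo = 2296
games = [(2259, 93), (2220, 90), (2104, 91), (2260, 96), (2161, 94)]  # (opponent Elo, accuracy %)

loss_probs = [1 - expected_score(effective_elo(my_elo, acc), opp)
              for opp, acc in games]
print([round(p, 3) for p in loss_probs])
```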

Step 3: Probability of Losing All Five Games
Assuming independence, the probability of losing all five games is the product of the individual loss probabilities:
P(\text{all losses}) = P(\text{loss}_1) \times P(\text{loss}_2) \times P(\text{loss}_3) \times P(\text{loss}_4) \times P(\text{loss}_5)
P = 0.502 \times 0.535 \times 0.345 \times 0.420 \times 0.340
Calculate step-by-step:
0.502 \times 0.535 \approx 0.26857

0.26857 \times 0.345 \approx 0.0926565

0.0926565 \times 0.420 \approx 0.0389157

0.0389157 \times 0.340 \approx 0.0132313

So, the probability is approximately 0.0132, or 1.32%.
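Multiplying the per-game loss probabilities quoted above does reproduce the 1.32% figure, given the independence assumption:

```python
# Product of the quoted per-game loss probabilities
# (games assumed independent, as in the post's assumptions).
from math import prod

loss_probs = [0.502, 0.535, 0.345, 0.420, 0.340]
p_all = prod(loss_probs)
print(f"P(all five losses) = {p_all:.4f} ({p_all * 100:.2f}%)")  # ~1.32%
```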
Step 4: Considerations and Caveats
Draws: In reality, chess games often end in draws, especially at high Elo levels. Accounting for draws would lower the loss probability per game (some non-wins would be draws rather than losses), which in turn would lower the probability of losing all five. However, your query specifies losses, so we focused on win/loss.

Accuracy Correlation: The Elo adjustment based on accuracy is a rough heuristic. Accuracy (as reported by chess engines) doesn’t perfectly correlate with game outcomes, as critical mistakes in key positions can outweigh high overall accuracy.

Elo Changes: Your Elo rating would adjust after each game, but we used the initial 2296 for simplicity, as the query doesn’t specify post-game ratings.

Opponent Accuracy: We didn’t account for opponents’ accuracy, which could affect outcomes. If opponents played with higher accuracy, your loss probability increases.

Final Answer
The rough probability of losing all five chess games, given your Elo rating of 2296, the opponents’ Elo ratings (2259, 2220, 2104, 2260, 2161), and your accuracy rates (93%, 90%, 91%, 96%, 94%), is approximately 1.32%.
If you want a more precise calculation (e.g., accounting for draws or dynamic Elo changes), please provide additional details!


Are you expecting anyone that's not an AI to read that?

Please think for yourself and please don't post AI garbage here.


@Clownboots said in #1:

Elo-Based Expected Probability: The Elo rating difference between you and each opponent determines the expected probability of winning, drawing, or losing.
What you get is the expected mean result. But if you take 100 random games between players rated e.g. 1500 and 1600 and 100 random games between players rated 2500 and 2600, I'm pretty sure you get very different draw rates.

Deducing anything from accuracy is rather futile; you would have to consider what the game looked like, the playing style, etc.


I must admit I disagree with the premise the AI uses in step 2, adjusting for the accuracy of an opponent. Something that correlates with the number of moves in a game would seem more applicable. Do you think the conclusion of a 1.32% probability is overstating or understating the probabilities?

<Comment deleted by user>

I don't think this is a correct approach.

You only need the ratings of the players. The accuracy levels are mostly irrelevant and misleading. Accuracy can differ wildly based on the type of position, and you can deliberately choose to play non-accurate moves, for example when the game is not going your way and you need the chaos button, or when complicating things in time trouble, etc.

Before the game, you can use the ratings to calculate the expected outcome. After the game, there is no point in calculating anything: the probability of the given result is 1. If your model gives anything else, it's faulty.


@Clownboots said in #4:

Do you think that the conclusion of 1.32% probability is overstating or understating the probabilities?
I don't think it makes much sense to calculate the probability of something that has already happened. The result will always depend a lot on your definition of "what happened" and the model you choose. The probability of 1.32% may seem small, but e.g. if you assume each result is random with a loss probability of 40%, the probability of five losses in five games would be 1.02%, so I guess it's not completely unrealistic. But I don't think we can say what is "correct" without exactly specifying a particular model.

BTW, another problem with the calculation is that it treats the five games and their result probabilities as independent. But the nature of a losing streak is that losing a few games in a row affects your mind, and most people are then more likely to lose again than they would be in an independent game.
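The 40%-loss benchmark mentioned in this reply is easy to verify:

```python
# Benchmark check: five independent games, each lost with
# probability 0.40.
p = 0.40 ** 5
print(f"{p:.4f} ({p * 100:.2f}%)")  # ~1.02%
```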


@nadjarostowa said in #6:

I don't think this is a correct approach.

You only need the ratings of the players. The accuracy levels are mostly irrelevant and misleading. Accuracy can differ wildly based on the type of position, and you can deliberately chose to play non-accurate moves, for example when the game is not going your way and you need the chaos button, or complicating things in time trouble, etc.

Before the game, you can use the ratings to calculate the expected outcome. After the game, there is no point in calculating anything, the probability for the given result is 1 - if your model gives anything else, it's faulty.

The only thing being measured is the probability of losing 5 games in a row against a certain range of players, given the set of accuracies measured.

The AI response was...
The rough probability of losing all five chess games, given your Elo rating of 2296, the opponents’ Elo ratings (2259, 2220, 2104, 2260, 2161), and your(* edit should be opponent accuracy rate) accuracy rates (93%, 90%, 91%, 96%, 94%), is approximately 1.32%.

All I am interested in is a rough estimate of the probability of the set of numbers given. Do you agree that it would be 1.32%?


@Clownboots said in #1:

Independence of Games: Each game is independent, so the probability of losing all five is the product of the individual loss probabilities.

i disagree with that assessment, at least in some circumstances. those games might be independent if they are not connected somehow. but if they happen during the same tournament for example, then e.g. losing the first 4 games could surely change your winning chances in the last game. however, since it's unknown in which way it would change and by how much, the model assumption of independence is probably still your best bet. despite being wrong.


@Clownboots said in #8:

All I am interested in is a rough estimate of the probability of the set of numbers given. Do you agree that it would be 1.32%?

I think the whole approach makes very little sense, and letting AI calculate some probabilities on that even less.

AI does not understand what it is doing. The assumption that "the higher the accuracy the closer you play to your rating" is also nonsense.

And the main problem remains: Those accuracy values are only available when the game has already finished. And there, you can usually simply take the higher accuracy of the two and determine the winner (not always, as I proved somewhere else, but very very often - that is, ignoring games that ended by other means, like "wrong" resignation, flagging, etc.).

But I don't feel that arguing about what an AI thinks about some flawed idea is a particularly good use of time...


This topic has been archived and can no longer be replied to.