lichess.org

Kramnik's Current Study Of Cheating In On-Line Chess

@AlexiHarvey said in #26:
> (2) Defining the 'best move' based only on 1 sec of engine analysis is weak, imo. As a ~1500 player using a fairly powerful modern PC, I would analyse my own games at a minimum of 10 seconds per move - and these games would be of the throwaway type; for rated OTB games I use 60 seconds, with even longer times of 1+ hour at key junctures.
==== snip ====
@sosumisai said in #43:
> 1 second of engine analysis per move is not nearly enough to judge grandmaster chess.
==== snip ====
Guys, I'll let you in on a secret, but you have to promise me that you won't tell anyone else and will keep it on the down low.

Cheaters aren't analyzing their games. They are running their engines at as low a depth as practical to sneak below the chess engine detection radar.

How deep they run them is a closely held secret, kept both by the cheaters and the cheater detection teams. One second is a decent scientific wild-ass guess.

But it does show that Dorian Quelle understands what he is testing for, even if he didn't explicitly say this in his paper.
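To make the 'fly under the radar' point concrete, here is a toy sketch - every number in it is an invented assumption, not anything a real fair-play team uses. A simple binomial test on how often a player matches the engine's top move shows why a blatant engine user stands out immediately, while one who dials the engine down to near-human strength barely moves the needle:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of matching the
    engine's first choice at least k times in n moves purely by
    playing honestly at a baseline match rate of p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: an honest strong amateur matches a top engine's
# first choice ~55% of the time; a full-strength engine user matches
# ~90%; a "dialled down" low-depth cheater might only match ~65%.
n = 40  # non-book, non-forced moves in one game

print(binom_sf(int(0.90 * n), n, 0.55))  # blatant: vanishingly unlikely by chance
print(binom_sf(int(0.65 * n), n, 0.55))  # dialled down: entirely plausible by chance
```

Under these made-up rates, one game can never convict the low-depth cheater; only many games and cleverer position selection can.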
Personally, I think Kramnik's role is as a punching bag for other high-level GMs who want to address the cheating.

Will update later:

- Talk Kramnik down: good chess player, not a statistician.
- We have been successful in the past with cheat detection of amateurs.
- Stakes have gone up in Titled Tuesday, but we have not grown with it.
- Lack of communication.
- Lack of punishment for cheating titled players.
- Believe in second chances.

- 3% cheating in TT (Titled Tuesday).
- 100 titled players got caught out of 10K titled players.
  (How many of the 10K actually play??)

- Accuracy is not a good detection tool.
- Believe in fair play monitoring (two cameras).
@kalafiorczyk said in #51:
> ==== snip ====
>
> ==== snip ====
> Guys, I'll let you in on a secret, but you have to promise me that you won't tell anyone else and will keep it on the down low.
>
> Cheaters aren't analyzing their games. They are running their engines at as low a depth as practical to sneak below the chess engine detection radar.
>
> How deep they run them is a closely held secret, kept both by the cheaters and the cheater detection teams. One second is a decent scientific wild-ass guess.
>
> But it does show that Dorian Quelle understands what he is testing for, even if he didn't explicitly say this in his paper.

Well, I need to avoid details here. But modern cheat detection systems only look at a small subset of chess positions in any given game - positions with certain characteristics. In this context Dorian Quelle's approach was very blunt, analysing all moves from the opening novelty to the start of the endgame - frankly, it's no surprise he found nothing weird.

However, I can give a non-chess example of just how powerful statistical analysis can be. In some states in the US it was/is common practice to test students using multiple-choice tests. Clearly there is an ease with which teachers can cheat on such tests - amending the results after the exam - to bump up their students' scores for potential financial gain. When the educational authorities performed 'cheat detection' statistical analysis, they found that 7% of teachers were cheaters, and a few even lost their jobs. The teachers' unions subsequently succeeded in banning the use of such 'cheat detection' systems, citing the problem of false positives and the cost to the individual of getting it wrong. This is a clear example of just how difficult it is for humans to behave in a truly random way and slip into the fog of statistical analysis, given sufficient time and data points. The same would apply to chess, although there is a greater degree of fuzziness involved due to iffy assumptions.
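To illustrate the kind of fingerprint such an analysis hunts for, here is a toy sketch with invented answer strings (not the actual method used on those tests): honest students rarely share long runs of identical answers, while answer sheets 'corrected' by the same hand tend to produce them.

```python
from itertools import combinations

def longest_shared_run(a, b):
    """Length of the longest block of consecutive identical answers
    shared by two students' answer strings."""
    best = cur = 0
    for x, y in zip(a, b):
        cur = cur + 1 if x == y else 0
        best = max(best, cur)
    return best

# Invented answer strings (A-D) for four students on a 20-question test.
sheets = {
    "s1": "ABDCABBDACDBACDABCDA",
    "s2": "ACDCBBBDACDBACDABCDC",   # shares a 14-answer block with s1
    "s3": "BDACDBACABDCDACBDABD",
    "s4": "CABDBDCADBCABDACBDCA",
}
for (n1, a), (n2, b) in combinations(sheets.items(), 2):
    print(n1, n2, longest_shared_run(a, b))
```

Real systems also weight each question by difficulty (hard questions right while easy ones are wrong is suspicious), but the flavour is the same: outliers in a pairwise similarity statistic.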

FWIW: In the UK, multiple-choice tests have become very rare whilst students' exam performances have skyrocketed - additionally, IQ tests, which used to be taken at the end of primary schooling, have also disappeared.
@Gingersquirrelnuts said in #53:
> And a big swipe at Kramnik. Ouch!
Well actually, big swipes at anyone who threatens revenue is my cynical take.

For those not interested in wading through the video, some important stats in the context of this thread:

For the month of October: 1% of titled players banned (of 10,000+), 0.6% of ordinary Joes banned (of 150M), and ongoing suspicions requiring monitoring of, at worst, about 3% of titled players. The argument was made that these stats indicate 'cheating is minor'. Take note that the first set of stats is for ONE month. {If this interpretation is inaccurate, feel free to amend - I am certainly not going to watch the video again!}
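Taking those quoted percentages at face value, the absolute numbers are easy to sanity-check (a rough sketch; the populations are just the round figures from the video):

```python
titled_players = 10_000          # "10,000+" titled accounts
ordinary_players = 150_000_000   # "150M" ordinary accounts

banned_titled = 0.01 * titled_players        # 1% banned in October
banned_ordinary = 0.006 * ordinary_players   # 0.6% banned
suspected_titled = 0.03 * titled_players     # "at worst" ~3% under monitoring

# Roughly 100 titled bans vs roughly 900,000 ordinary bans in a single month.
print(round(banned_titled), round(banned_ordinary), round(suspected_titled))
```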

Also of interest: the 'accuracy' game stat does not form part of the 'cheat detection' analysis and is intended only to help lower-level players gauge the quality of their play - it was stated that the accuracy stat makes very low use of hardware resources as a result. Note, it was also stated that you can use an engine when playing Bots, as it doesn't affect other players - I would double-check this if you ever decide to do it.
@AlexiHarvey said in #56:
> Well actually, big swipes at anyone who threatens revenue is my cynical take.
>

Well, from chess.com's point of view (and that of Lichess, I guess!) it's really unhelpful having influential people telling the public how to spot a cheater, when the method they're touting would actually lead to a LOT of false positives and maybe some real cheaters going unreported. Even with Kramnik being careful not to name anyone, the danger is that he's unleashing an army of other people to make a load of false accusations, making it harder for the sites.

I'm not a huge fan of Danny Rensch, but to give him credit, he's repeatedly made a point of telling the world that CAPS (the accuracy score) isn't usable to tell if somebody's cheating. The fact that people continue to walk up that path regardless is disappointing.
@kalafiorczyk said in #52:
> The other site is now posting on Youtube their "State of Chess" expose that includes a long segment about cheating.
@AlexiHarvey said in #56:
> For the month of October: 1% of titled players banned (of 10000+), and 0.6% of Ordinary Joes banned (of 150M), ongoing suspicions requiring monitoring, at worst, about 3% of titled players...

I do think analysing Danny's 4 hour podcast is heading towards hijacking the thread. It's enough to say he comes across like Klaus Schwab.

I play chess for the good it brings to me, competing to think "deeper" than another person, while they are trying to do the same. I don't play for money or esteem, or the viewers, I don't even do it for the babes that fawn over us chess players. So why should I be subject to a witch hunt? I don't see very much cheating, and from what I have experienced, I have never played against a cheater on Lichess.

I think the moral dilemma which needs to be addressed is how many bans of non-cheaters is acceptable. The more "military industry" minded would probably accept a much higher rate of "collateral damage", but how many of us peaceful civilians would accept that they would have to suffer? Kramnik was at least upfront: he is happy that 3 percent are gunned down in cold blood for not being from where he is, whereas Danny doesn't even remember what his ethics or message was supposed to be: "we closed 16 - holy bejesus why were you cheating so much titled players? In, in, in, I think this was July and August...")

@AlexiHarvey The statistics don't much matter when you have sadistic tzars and barbarous politicians in control, as they are only interested in their already grossly bulging pockets and flexing their power. I would imagine cherry-picking the statistics is easy enough to get away with, especially when everyone ignores that you are lying. And not just normal lies - false accusations are the devils of the lies! Caruana and Chirila (in their roles as the "Meeja") can now be seen actively stooging, sweeping aside any obvious concerns about proof or the impact of false positives and funnelling the narrative towards chess.corn's power-ranking authoritarian methodology: if your stats are better than the 3% over any interval, and you're not one of us, we're going to tell everyone you're a cheater! How much effort have they put into establishing the bases for their assumptions - that you can win a tournament with merely 1 instance of +/- evaluation at a chosen position, or the skill variance between players OTB and online? These are both ripe for gleaning fantastic insight into chess, but they completely jump the hurdle and claim they know the outcomes; they have another 100 million to help chess.corn make, and guessing is much cheaper than having to work for their kills.
I wonder if any US-based lawyers are reading this and can shed some light on the Employee Polygraph Protection Act and how it came to be.

The polygraphs/lie-detectors were also based on goofball statistical assumptions, [very much] somewhat like the chess cheating detection. There was also a class of certified examiners that worked using secret principles after receiving secret training.

Was there any insider in the polygraph industry who spilled the truth about the accuracy of lie detection? There was a period in the 20th century when this was considered real science.

Edit: changed 2nd paragraph
@Nomoreusernames said in #58:
> I do think analysing Danny's 4 hour podcast is heading towards hijacking the thread. It's enough to say he comes across like Klaus Schwab.

The video had already been referred to twice in the thread, and I certainly wouldn't have looked at it otherwise. I made the comment as I considered it 'in context' and it could save some people time.

Regarding 'false positives' - of which there will always be some if only statistics is deployed - you have to look at the penalty: do it twice and you are kicked off the website. I would say this is reasonable commercial usage provided persons are not named. Whether such usage is being executed correctly is basically a matter of trust in the website.

FWIW: Another stat - which I think is in context - the hit rate on user cheater reporting is on average 2%.
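That 2% figure is essentially a base-rate problem. A small Bayes sketch - the detector's sensitivity and specificity here are invented assumptions, not anything published by the sites - shows why even a decent detector flags mostly innocent players when cheating is rare:

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """Of the accounts a detector flags, what fraction actually cheat?
    Straight Bayes' rule; all three inputs are assumptions."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Hypothetical detector: catches 80% of cheaters, wrongly flags 1% of
# honest players. The answer swings wildly with the cheating base rate:
for base_rate in (0.03, 0.006):   # ~3% titled, ~0.6% ordinary, per the thread
    print(base_rate, round(positive_predictive_value(base_rate, 0.80, 0.99), 3))
```

Under these made-up numbers, most flags in the low-base-rate population would be innocent players - which is exactly why the penalty attached to a single flag matters so much.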
