
Prohibit AI-generated Text from the Forums

@InkyDarkBird said in #7:

That's a lie. I never said AI-generated tools should be used to moderate AI-generated text.
Then who did? I certainly did not.

I already answered both of these points: the most obvious AI posts should be taken down to actually encourage people to put effort into writing their posts instead of using an AI.
You’ve only really addressed one part. The question “How can one know for sure if a piece of text is generated using AI?” still hasn’t been answered. Just saying a post “sounds obviously AI” is way too subjective. And let’s be honest, removing posts just because they seem like AI doesn’t solve anything. “Encouraging effort” doesn’t justify unfair moderation. Without a clear and reliable way to tell, all you’ll get is frustrated people saying, “My post got removed for being AI, but it wasn’t!” And honestly, who can blame them?

This last point is just pure speculation. We already have multiple forum rules that aren't perfect and somewhat subjective (misinformation, insults) and many people whine in the forums about how the appeal system doesn't work for them. Claiming that an AI rule shouldn't be added because there will be false positives is a nonunique argument because false positives already occur with many other forum rules.
You're missing the point... AI detection false positives are way worse than existing ones. When someone posts an obvious insult, mods can tell. But AI-generated content? The detection tech is garbage, AI evolves constantly, and there's no clear definition of what even counts as "AI-generated." We're not talking about the same level of false positives, we're talking about massively more of them. And if the current appeal system already fails with simpler rules, why would adding something nearly impossible to verify make it better? Plus you've got moderator burnout, false reporting, and users self-censoring because they're afraid their writing style might trigger the rule. Forum pollution is just one problem, this whole proposal creates way more chaos than it solves.

Also, just so you know, this entire post was written with the help of AI tools. I'm guessing you didn’t even notice — did you?

It seems to me that you don’t actually want to remove AI-generated posts, but you just don’t want to see posts that seem like they were written by AI. Because think about it. If a post doesn’t seem like it was written by AI, but actually was, then you’d have no way of knowing. So how would you even enforce your own proposal? Or maybe you do not want to see low-effort posts on the forum. This I agree with, unless the topic does call for it.

My original point still stands, just as I said at the very start of this conversation: "People can say whatever they want, however they want on the forums. It is up to them to decide whether it is appropriate or not." And I’ll add this too - even if something was written using AI tools, I still think that as long as it adds something meaningful to the discussion, it’s worth keeping.


@Cieron said in #11:

Then who did? I certainly did not.
You literally brought up the first mention of using AI detection tools in #4, you liar.
"How would one decide whether a piece of text was generated using AI tools?"

You’ve only really addressed one part. The question “How can one know for sure if a piece of text is generated using AI?” still hasn’t been answered. Just saying a post “sounds obviously AI” is way too subjective. And let’s be honest, removing posts just because they seem like AI doesn’t solve anything. “Encouraging effort” doesn’t justify unfair moderation. Without a clear and reliable way to tell, all you’ll get is frustrated people saying, “My post got removed for being AI, but it wasn’t!” And honestly, who can blame them?
You're missing the point... AI detection false positives are way worse than existing ones. When someone posts an obvious insult, mods can tell. But AI-generated content? The detection tech is garbage, AI evolves constantly, and there's no clear definition of what even counts as "AI-generated." We're not talking about the same level of false positives, we're talking about massively more of them. And if the current appeal system already fails with simpler rules, why would adding something nearly impossible to verify make it better? Plus you've got moderator burnout, false reporting, and users self-censoring because they're afraid their writing style might trigger the rule. Forum pollution is just one problem, this whole proposal creates way more chaos than it solves.
Borderline useless paragraphs lacking any reasoning, will cover this further down in my post.

Also, just so you know, this entire post was written with the help of AI tools. I'm guessing you didn’t even notice — did you?
Yeah, I can tell, with you randomly listing arguments and then providing no evidence, such as "moderator burnout" and "forum pollution", somehow claiming that there will be "massively more" false positives, and even going so far as to say "there's no clear definition of what even counts as 'AI-generated'", when there literally is. And you throw in random rhetorical questions such as "who can blame them?".

It seems to me that you don’t actually want to remove AI-generated posts, but you just don’t want to see posts that seem like they were written by AI. Because think about it. If a post doesn’t seem like it was written by AI, but actually was, then you’d have no way of knowing. So how would you even enforce your own proposal?
I literally implied this already in my previous posts, and I already answered twice about how banning the most obvious AI users will deter others.

My original point still stands, just as I said at the very start of this conversation: "People can say whatever they want, however they want on the forums. It is up to them to decide whether it is appropriate or not." And I’ll add this too - even if something was written using AI tools, I still think that as long as it adds something meaningful to the discussion, it’s worth keeping.
Yeah, your post right here really was not meaningful besides the second-to-last paragraph. Your post repeatedly fearmongers about false positives despite you providing zero evidence of people in the forums having a writing style similar to ChatGPT.

Again, so what if AI rules are subjective? Are rules about insults and misinformation not somewhat subjective as well? YOU haven't addressed or explained how false positives will be more common or worse using the AI rule when compared to insults.


Sometimes I wish forums could self-moderate like reddit, since people of all perspectives are entitled to share opinions but no healthy community likes unconstructive opinions.


See, using AI is obviously bad and no one should be writing up their posts with the help of an LLM, but you need to consider some problems: some people might have English as a second or foreign language and may not be able to write posts in English, some might have extremely bad grammar, and maybe someone couldn't figure out how to express their thoughts. Whatever the case, what matters in the end is that IF they have a real problem and are asking for help, they get help.

Using AI to just make random topics in the off-topic subforum is, of course, not good and needs some sort of deterrence.

PS: I actually have never seen someone using AI for forum posts myself, so I also don't know how true #1's issue is.


90% of non-streaming traffic is AI generated or crawlers these days. What's worse, Cieron is correct that anyone offering an online detection tool for AI-generated text is selling snake oil. The "AI detection" sites sell ad space beside a text form that does run your input through some low-parameter models and do some sloppy math, but they might as well give you a random number between 0 and 100.

People who research, work on, and understand this stuff all agree that reliable AI detection is an unsolved problem. There is no DNA evidence within plain text.


@AyaanshGaur12 said in #15:

See—using AI is obviously bad and no one shouldn't be writing up their posts with the help of a LLM, but you need to consider some problems—some people might be have English as a SL/FL, and probably can't write up posts in english, some might have extremely bad grammar, maybe someone couldn't figure out how to express their thoughts
Online translators exist and are easy to use. I'd rather read a somewhat grammatically incorrect post with human thought and effort than AI slop done with no thinking whatsoever. If someone does not know how to express their thoughts correctly, they clearly need to think more about their idea before making a post.

https://lichess.org/forum/lichess-feedback/policy-on-use-of-chatgpt?page=1


@Cedur216 said in #2:

These posts are already very much considered spam afaik

Which posts? All the forum posts? I am missing context.
Is there a spam checker that already detects when the OP's post itself is AI-generated?
Or are the moderators already monitoring (or trying to monitor) the posts, so it would fall under "spam"?


@Toadofsky said in #12:

imgs.xkcd.com/comics/constructive.png

We are not there. Most often, language AI will fill a response token quota no matter the amount of meaningful human content. It can fill a wall of text with ASCII better than I can. The problem is, it might be thoughtless. Mine, meanwhile, are scrambled thoughts (I can't speak for other humans).


We should be encouraging progress, not hindering it.


This topic has been archived and can no longer be replied to.