@schlawg said in #16:
> people that research, work on, and understand this stuff all agree that reliable AI detection is an unsolved problem. there is no DNA evidence within plain text.
I fully agree and yet I still think progress can be made (offering a web discussion area which doesn't look like something from the 90's but instead looks more like HN or SE or reddit where people can easily filter uninteresting content, and AI-generated "bot" content can be useful too).
@Toadofsky said in #21:
> I fully agree and yet I still think progress can be made (offering a web discussion area which doesn't look like something from the 90's but instead looks more like HN or SE or reddit where people can easily filter uninteresting content, and AI-generated "bot" content can be useful too).
If SE means Stack Exchange, this comment might not age well. Their question numbers are dropping so fast that the site has almost become irrelevant. Absolutely shocking. :-(
https://data.stackexchange.com/stackoverflow/query/1717975/new-question-activity-per-month-since-2018#graph
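The linked SEDE query essentially buckets question-creation dates by month and counts them. A minimal Python sketch of that aggregation (illustrative only; the actual query runs as T-SQL against SEDE's Posts table):

```python
# Sketch of what the "new question activity per month" graph computes:
# group question-creation timestamps into (year, month) buckets and count.
from collections import Counter
from datetime import date

def questions_per_month(creation_dates):
    """Count items per (year, month) bucket, as the linked graph does."""
    return Counter((d.year, d.month) for d in creation_dates)

# Hypothetical sample data, not real Stack Overflow figures.
sample = [date(2018, 1, 5), date(2018, 1, 20), date(2018, 2, 3)]
print(questions_per_month(sample))  # Counter({(2018, 1): 2, (2018, 2): 1})
```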
i'm sure he has a "downvote threshold to hide forum posts" feature request somewhere in the github.
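A threshold filter like that is simple to sketch. A hypothetical Python version (the names, fields, and the -3 cutoff are all illustrative, not an actual Lichess feature or API):

```python
# Hypothetical sketch: hide forum posts whose score falls below a
# configurable downvote threshold.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str
    score: int  # upvotes minus downvotes

def visible_posts(posts, threshold=-3):
    """Return posts at or above the threshold; the rest are collapsed."""
    return [p for p in posts if p.score >= threshold]

posts = [
    Post("alice", "Interesting analysis.", 5),
    Post("bot9000", "As an AI language model...", -7),
    Post("bob", "Mildly off-topic.", -1),
]
print([p.author for p in visible_posts(posts)])  # ['alice', 'bob']
```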
@InkyDarkBird said in #13:
> You literally brought up the first mention of using AI detection tools in #4, you liar.
> "How would one decide whether a piece of text was generated using AI tools?"
I think you have misinterpreted what I said. I didn't say "How would one decide, using AI tools, whether a piece of text was generated?". Note the lack of a comma in the sentence you quoted: it means that said piece of text IS generated using AI tools, not that the rule is enforced using AI tools. Subtle, but different. Make sure to read it carefully next time.
And be careful, buddy, you are treading a fine line here with your comment "you liar" (https://lichess.org/page/forum-etiquette#respect-other-players).
> Borderline useless paragraphs lacking any reasoning, will cover this further down in my post.
I'm interested to see what responses have been made to texts you misinterpreted. We'll wait and see...
> Yeah, I can tell with you randomly listing arguments and then providing no evidence, such as "moderator burnout", "forum pollution", somehow claiming that there will be "massively more" false positives,
True haha. But I can assure you that this post is not like that one. Here's the thing: the arguments I have made are true and real. Moderator burnout and forum pollution are not random points raised by AI chatbots; they are very tangible problems, especially for donation-funded services like lichess. If you think otherwise, feel free to rebut.
> and even going so far as to say "there's no clear definition of what even counts as 'AI-generated'", when there literally is.
"when there literally is"... no, there isn't. Nobody can know with 100% certainty whether a piece of text is AI-generated, and there is no definition everyone can agree upon. If there is, I'd love to know. And don't try to sidestep the matter with "Yeah, of course nobody can know with 100% certainty. However, there are really obvious cases..." or anything similar.
> I literally implied this already in my previous posts, and I already answered twice about how banning the most obvious AI users will deter others.
You keep answering the wrong question; do you fail to grasp the very core of the points I am raising? Again, please read my posts carefully. "Implying" things in posts does not mean anything, because anyone can claim they implied something when, at the time, they hadn't even thought about it.
You have accused me of "pure speculation" before. But look at what you've said here: "banning the most obvious AI users will deter others." I think that is pure speculation. If anything, banning accounts and removing posts is a very quiet process; it is not outwardly obvious that a post was removed or an account was banned, let alone why. There is no announcement saying "X has been banned from the forums due to [reason]". Therefore, it is not at all clear that a penalty was received for "AI usage", so nobody is deterred.
> Yeah, your post right here really was not meaningful besides the second-to-last paragraph.
If you think that all of my prior arguments are void, you are kidding yourself. I have raised very serious points, including the increased complexity of forum etiquette, how your proposal creates more problems than it solves, the definition of "AI-generated", a larger number of false positives, and that encouraging effort does not justify unfair moderation. These points have not been addressed; they have simply been ignored.
> Your post repeatedly fearmongers about false positives despite you providing zero evidence of people in the forums having a writing style similar to ChatGPT.
There have been many cases where completely human-written text has been flagged as "AI-generated" by AI text detectors, the most famous examples being the Bible, the Declaration of Independence, and Apple's first-ever iPhone press release. I personally have been accused of using AI-generated material in a school assignment in the past. "False positive" problems are real, and especially personal to me. In addition, I cannot prove to someone whether a piece of writing was completely generated by AI or merely mimics its tone, because, as I have said, you cannot be completely certain. Therefore, I cannot provide any examples.
> Again, so what if AI rules are subjective? Are rules about insults and misinformation not somewhat subjective as well?
The intention behind an action is the biggest factor when it comes to moderating posts, and I am fairly certain this applies to the lichess forums as well. The intention behind insults and misinformation is negative: to hurt, to demoralise, to hate, and more. However, posts "generated using AI" have very varied intentions. Yes, people can use "AI-generated text" with negative intentions, but in the overwhelming majority of cases the intention is positive (i.e. not negative). This duality of intentions behind "AI-generated content" is THE reason why AI rules have to be objective, or very nearly so. They are too subjective right now, and they always will be.
"Rules about insults and misinformation" are grounded in observable harm and established societal norms. Unlike insults, which spread hate, and misinformation, which spreads falsehoods, AI-generated content is, again, not inherently harmful. So the subjectivity in "AI rules" invites overreach and inconsistency.
> YOU haven't addressed or explained how false positives will be more common or worse using the AI rule when compared to insults.
That is because you have not asked me to; I expected you to understand it without my explaining it. Did you not, or are you simply angry and writing posts blinded by your emotions?
Keep this in mind: forum rules should exist to protect users and to make life easier, not to police how users sound.
@Cieron said in #24:
> I think you have misinterpreted what I have said.
I did, and I was wrong about that.
> But here's the thing: the arguments that I have made are indeed true and real. Moderator burnout and forum pollutions are not some random points raised by AI chatbots, they are very tangible problems, especially services ran with donations like lichess. If you think otherwise, feel free to rebut.
Except you haven't shown how they are a problem unique to an AI rule. There are multiple other rules that can create moderator burnout and forum pollution, so why is the AI rule any different?
> "when there literally is"... no there isn't. Nobody can know with 100% certainty if a piece of text is AI-generated, and no definition which everyone can agree upon. If there is, I'd love to know. And don't try to sidestep the matter with "Yeah, of course nobody can know with a 100% certainty. However, there are really obvious cases..." or anything similar.
My entire point is that these "really obvious cases" of AI get banned.
> You have accused me of coming up with a "pure speculation" before. But look at what you've said here: "banning the most obvious AI users will deter others." I think that is a pure speculation.
> If anything, banning accounts/removing posts are a very subtle process and is not very outwardly obvious that a person's post was removed/account was banned due to any reason. There's no announcement saying "X has been banned from the forums due to [reason]" Therefore, it is not very clear that penalties were received due to "AI usage", deterring nobody.
Your accusation that an AI rule will fail is based entirely on the non-unique argument that account bans don't show the reason, which occurs for every other rule on Lichess.
> If you think that all of my prior arguments are void, you are kidding yourself. I have raised very serious points, including the increased complexity of forum etiquette, how your proposal creates more problems than it solves, definitions of "AI-generated", a larger number of false positives, and encouragement of effort does not relate with unfair moderation. These points have not been addressed however has been simply ignored.
1. You still haven't identified an actual forum user who speaks like ChatGPT or another LLM and has been directly accused of using one. Therefore, your argument that an AI rule will create more "false positives" than the current rules completely lacks evidence.
2. The definition of "AI-generated content" is any post containing text, images, etc. that has been fully generated by LLMs or AI image generators.
3. I don't understand your unfair moderation point.
> There have been many cases where a completely human generated text has been identified as "AI-generated" by AI text detectors, most famous examples being The Bible, Declaration of Independence, and Apple's first Ever iPhone press release. I personally have been accused of using AI-generated material in my school assignment in the past.
AI text detectors should not be used to regulate AI if an AI rule is to be implemented on the Lichess forums. I already implied this.
> In addition, I cannot prove to someone if their writing was completely generated with AI or just a mimic of their tone. Because as I have said, you cannot be completely certain. Therefore, I cannot provide any examples.
> However, posts that are "generated using AI" have very varied intentions. Yes, people could use "AI generated texts" with negative intentions. However, in an overwhelmingly majority of the time, they are positive (i.e. not negative). The duality of intentions that are present in the usage of "AI generated content" is THE reason why AI rules has to be objective, or just barely. It is too subjective right now, and it will always be.
> "rules about insults and misinformation" are grounded in observable harm and established societal norms. Unlike insults which spread hate, misinformation which spreads falsehoods, AI-generated content is again, not inherently harmful. So the subjectivity in "AI rules" invite overreach and inconsistency.
The intention behind an AI post frankly doesn't matter, since it still represents laziness and a severe lack of effort and creativity. Forums are designed to be a place for human-to-human interaction, not human-to-AI interaction. The consequence of allowing AI-generated content is that it harms the very purpose of a forum space. Why should a forum even exist if AI-generated content keeps growing and taking over real communication with other humans?
> That is because you have not asked me to, because I expected you to understand without me explaining it. Did you not, or are you simply angry and writing posts blinded by your emotions?
Without actual evidence of false positives occurring in the Lichess forum, your claim that there will be a large number of false positives falls completely flat. I have already provided evidence of existing topics that were clearly created by AI, while you have provided no evidence of any false positives occurring in the Lichess forum.
> Keep this in mind: forum rules should exist to protect users and to make life easier, not to police how users sound.
Protecting the very basis of what a forum is should be part of that.
I guess now that we are arguing about what we are arguing about, we're done here... https://lichess.org/terms-of-service already in many ways covers the OP's suggestion of not allowing AI content in the forums (although I don't know who polices "Off-Topic Discussion").
> All use of our services are subject to reasonable caps, limits, restrictions, or other bottle-necking which are purely determined at Lichess' discretion. This includes private messages which are subject to a reasonable use limitation, as well as public chat messages. For more information on these points, please refer to our privacy policy.