Self-explanatory. Forums are designed for human-to-human communication, not human-to-AI communication.
If someone cannot muster the effort and time to type up their own post, they shouldn't be posting at all.
If an OP really wants a response from ChatGPT instead of Lichess users, they should ask it themselves instead of posting on the forums.
These posts are already very much considered spam afaik
@Cedur216 said in #2:
> These posts are already very much considered spam afaik
Source or fake.
People can say whatever they want, however they want on the forums. It is up to them to decide whether it is appropriate or not.
How would one decide whether a piece of text was generated using AI tools? Moderators will not copy-paste every single message sent on the forums into an AI text detector, and implementing such a detector on Lichess is obviously a bad idea. These detectors aren't even accurate anyway.
@Cieron said in #4:
> How would one decide whether a piece of text was generated using AI tools?
Who said AI tools would be used? At least right now, AI-generated text and images are very easy to spot.
Moderation doesn't have to be perfect to target AI: if the simplest AI posts are banned, then AI users will be forced to invest more effort to make their posts more human-like, at which point they might as well just write their own posts.
https://lichess.org/forum/general-chess-discussion/manifesto-the-incremental-draw--for-fairer-chess-in-the-age-of-the-clock
https://lichess.org/forum/lichess-feedback/suggestion-time-delay
https://lichess.org/forum/off-topic-discussion/the-story-of-the-thibault-organization
https://lichess.org/forum/off-topic-discussion/i-will-make-you-a-picture-for-your-profile-based-on-your-username-2
@InkyDarkBird said in #5:
> Who said AI tools would be used?
Uhm... You did.
> At least right now, AI-generated text and images are very easy to spot.
There are indeed some "red flags" present in text generated by AI tools, but the spotting mechanisms will never be perfect.
> Moderation doesn't have to be perfect to target AI since if the simplest AI posts are banned, then AI users will be forced to invest more effort to make their post more human-like, at which that point they might as well just write their own post by themselves.
You have not addressed the big questions of this topic. How can one know for sure if a piece of text is generated using AI? And how will you enforce this on the forums?
And no, the systems must be perfect to catch AI-generated texts, because once a person's post gets falsely taken down for being "suspected" of being AI-generated, there will be more complaints and more noise on the forums, which no one wants to deal with.
@Cieron said in #6:
> Uhm... You did.
That's a lie. I never said AI tools should be used to moderate AI-generated text.
> There are indeed some "red flags" that are present in texts generated by AI tools. But the spotting mechanisms will never be perfect.
> You have not addressed the big questions of this topic. How can one know for sure if a piece of text is generated using AI? And how will you enforce this on the forums?
I already answered both of these points: the most obvious AI posts should be taken down to actually encourage people to put effort into writing their posts instead of using an AI. If a user manages to make their AI post sound completely human and bypass the rule, good for them, although as I already said, I don't expect this scenario to occur that often because it takes more effort to humanize AI output than to simply write your own post.
> And no, the systems must be perfect to catch AI-generated texts. Because once a person's post gets falsely taken down due to it being "suspected" of being generated using AI, there will be more complaints and more noise on the forums, which no one wants to deal with.
This last point is just pure speculation. We already have multiple forum rules that aren't perfect and are somewhat subjective (misinformation, insults), and many people whine in the forums about how the appeal system doesn't work for them. Claiming that an AI rule shouldn't be added because there will be false positives is a non-unique argument, because false positives already occur with many other forum rules.
There are AI-written blogs, too :(
'The Times They Are A-Changin'' - Bob Dylan
Maybe we will all use AI for writing, just as we now use computers with spell checkers instead of mechanical typewriters or pen and ink.
@InkyDarkBird said in #3:
> Source or fake.
I'd say if you monitor the forums closely, you will see that AI-generated content doesn't do very well... it might get deleted, closed, or simply called out.
The human moderators may also take different aspects into consideration, like "does this actually make sense", or "has a meaningful discussion already started".
A clear guideline might be in order, but it is a difficult situation: in some cases AI can be used in a good way, but letting it write philosophical essays is not one of them.