lichess.org

Policy on use of ChatGPT

@CSKA_Moscou said in #20:
> I consider that the use of artificial intelligence to generate forum messages is worrying, because I think that a forum is a human connection

Using LLMs for forum posts means you get quick, accurate answers anytime without searching all over, and moderators can chill a bit since bots handle the routine stuff. Plus, forums stay lively and friendly 24/7, making everyone happy!

BTW, that's generated by my buddy ChatGPT. I would agree that we don't need AI that decides to write a new post in forums, but in the normal flow it's your responsibility to write a prompt and approve the result. I don't believe in rules that remove options from people just because we have no trust in our fellow humans to curate what their tools generate.
@CSKA_Moscou said in #20:
> online translators can translate people's text and keep the main idea quite accurately.
Online translators are AI. Google Translate is powered by Gemini. There is no difference between writing something yourself and then copy/pasting it into GPT/Gemini and asking the thing to translate or proofread it. And I see absolutely nothing wrong with that.
@TotalNoob69 said in #21:
> Using LLMs for forum posts means you get quick, accurate answers
How do you know chatbots are accurate? ChatGPT occasionally gives misinformation.
Also, if we assume that you are correct, why should the forums even exist if we do not need to interact with other users and can simply ask chatbots, as you claim?

> I don't believe in rules that remove options from people just because we have no trust in our fellow humans to curate what their tools generate.
Chatbots are not a "tool"; they are a complete replacement.

@petri999 said in #22:
> Online translators are AI.
Translators are not chatbots.
@TotalNoob69 said in #21:
> Using LLMs for forum posts means you get quick, accurate answers anytime without searching all over, and moderators can chill a bit since bots handle the routine stuff. Plus, forums stay lively and friendly 24/7, making everyone happy!
>
> BTW, that's generated by my buddy ChatGPT. I would agree that we don't need AI that decides to write a new post in forums, but in the normal flow it's your responsibility to write a prompt and approve the result. I don't believe in rules that remove options from people just because we have no trust in our fellow humans to curate what their tools generate.

If you ask ChatGPT for a drawing, you will get six-fingered hands and strange eyes. Such tools are not completely reliable and are only capable of finding specific choices based on keywords. So, in a way, you still have some control over the AI, that is true, but on the other hand it wastes the human effort of searching and attempting, even in a foreign language.

Therefore, you lose part of your creativity and personality by letting a tool do half of the task for you.

So, if everyone here used AI, there would only be empty speeches, since they would be devoid of spelling/writing mistakes or risky expressions, or even of your favorite phrases, sentences or quotes...

The usefulness of AI is to learn and to enable faster searches. I heard that people were composing music just with ChatGPT, so I feel threatened for my future profession, and especially in relation to people's perception of artistic or creative professions.

@petri999 said in #22:
> Online translators are AI. Google Translate is powered by Gemini. There is no difference between writing something yourself and then copy/pasting it into GPT/Gemini and asking the thing to translate or proofread it. And I see absolutely nothing wrong with that.

I agree that today it's like that. I don't use Google Translate or know how it works, but translators like Reverso relied until recently on older technologies, i.e. databases and algorithms.

AI only makes the algorithm's task easier by choosing the best possibilities. AI, in short, is just a series of more autonomous algorithms.

Online translators existed before ChatGPT or Gemini. People like linguists or professional translators were employed to translate, find synonyms and apply in-depth knowledge of one or more languages; their work then went into the database.
<Comment deleted by user>
@Toadofsky said in #13:
> There is an xkcd webcomic about this: xkcd.com/810/

So what is the point? What is the problem if the post is actually helpful?

To complicate this, or make it more chess-flavored: cheating sustainably, so as to appear as a human player with a normal rating trajectory, or even within a given rating band with some human-like fluctuations. That is the ultimate cheater invocation (evocation?). BTW, if there is a human doing that, then, given existing chess engines' ability to play at human level, how good at chess would that player be without the engines? I always wondered... I never tried. I don't even listen to SF most of the time, when I should post-game; I only look at the pretty graph of the whole game's maximal dents. But is that not a similar question?

There are degrees of linguistic helpers. I have mostly been using LanguageTool (when I am done thinking, otherwise it never ends), and then I tried some more chatty ones, like MS Editor (some Copilot cousin). And there is a difference of scope.

Some are corrector extensions that kick in after you have dumped your thoughts; like right now, I have a bunch of red and yellow underlines above.

But then there is the whole-scale, many-paragraph rephrasing. LanguageTool does not do that (or I am not aware that it does; I made retries with the same input, and nope, it leaves the paragraphs alone, and not even that much else). MS Editor (or your chat toy of choice) will rephrase, but if your text was trying to say something that is not already conventional, like a new thought about something complex, it might even be conjecture or transparent reasoning. Then it is likely to become double work, and to average any chunk of useful insight into whatever the whole of its data set would rather have imprinted on its model of linguistic reality.

I think that, more than a language corrector, we should have some honor code (whatever is agreeable in a cooperative setting; we can do that, can't we?) where, if a post is the output from a prompt, you give the prompt, or a link to it. If you are actually ready to publish on the internet, I don't see why the method should be private. One may not be proud of it, and can let the reader know by not pasting the output verbatim.

I don't know all the use cases, though. But maybe a principle of transparency would go well with Lichess's official open-source philosophy (and was it open data too, lately?). I also like open reasoning, or open design, but that might be spam.

I do agree that wanting to talk to each other in legible ways should weigh in, given that we are not all native English speakers, and even if we were, we might not share the same lingo or age level, or experience with various registers of speech.

But what is the cost of a backtrace, or even a mention? Because now, it ain't like the clip: I need to know how much is human creativity and how much might be the flattening of the mind by some obscure training-dataset language machine model.

I did not read the conversation. I thought this had become a post-your-thoughts thread. (:)

This is not to say that there are not other usages, when one is exploring some questions, getting the dominant or well-established consensus from the internet about certain questions; but then one always needs to counter-check. Sometimes, if the question itself was difficult for humans as a community in the past (nothing comes to mind yet), then it might be more work to find that out from the very confident-looking answer. But with such a caveat.

Conclusion: I think all is fine, in transparency. Did I get that opinion through?
@TotalNoob69 said in #21:
> Using LLMs for forum posts means you get quick, accurate answers anytime without searching all over, and moderators can chill a bit since bots handle the routine stuff. Plus, forums stay lively and friendly 24/7, making everyone happy!
>

I agree that it might actually be the same thing as cheating: the difference between the advertised cooperation of playing a game, and spending the energy to read someone only to find it was some averaging (if there is no dense human meaning behind it).

The problem may be quality. But also quality of what: the form, how readable, or how informative. The question might arise someday, when meaning becomes a thing, so that a user could orchestrate the meaning and the form would not kill it.

But I agree fundamentally that a forum is about human connection: people's individuality, not the averaging model of thoughts. (Sometimes form passes itself off as content; no examples come to mind right now, off the top of my head, but I am sure we can find some.)

I think that, while waiting for meaningful machine helpers, having a mention of such tools (beyond LanguageTool-style corrections, or even including those, if we can't rule that out as too soft a criterion) should help us navigate without having to impose some interdiction. But that needs cooperation and clarity of principles (not chess ones...). Can the TOS do that? No clue.
Well @dboing, your texts are so long that I asked an LLM to summarize:

Sent by Copilot:
Sure, here’s a summary in bullet points:

Cheating in Chess:
- Sustainable cheating to appear as a human player with a normal rating trajectory and human-like fluctuations.
- Ultimate goal: to invoke or evoke the appearance of a genuine player.
- Question: How good would a player be without engines, given current chess engines' abilities to play at human level?

Linguistic Helpers:
- Different tools for language correction and enhancement.
- Use of tools like Language Tools and MS Editor for correcting and rephrasing text.
- Extensions of correctors help after initial thoughts are written.
- Whole-scale paragraph rephrasing tools may not always capture unconventional or complex thoughts accurately.
- Potential issue: rephrasing tools might average out unique insights into more conventional language.

Honor Code for Language Tools:
- Suggestion for an honor code in language tools.
- Transparency in showing the prompt or method used for generating text.
- Importance of maintaining the integrity of original thoughts, especially when publishing online.

It did not make it that much shorter, but it works the other way as well: you can give it a list of your ideas and ask for a concise paragraph. Let's try.

With prompt:
write a short, non-verbose essay on using LLMs for forum text, using the following points:
it is OK to use an LLM to proofread
it is acceptable to ask an LLM to write a short description of ideas the writer provides
it is not acceptable to give a broad description of the ideas being discussed and copy-paste the output

I would have written it shorter, but still, it is what I wanted to say.
--------------------------------
Using Large Language Models (LLMs) for forum text can be beneficial, but it's important to understand the appropriate and ethical ways to utilize them.

Firstly, it is perfectly acceptable to use LLMs for proofreading. These models can efficiently correct grammar, spelling, and punctuation errors, ensuring that the text is clear and professional. This use case enhances the readability and quality of the content without altering the original ideas of the writer.

Secondly, it is also acceptable to ask an LLM to write short descriptions based on ideas provided by the writer. This can help in articulating thoughts more clearly and concisely, especially when the writer has a specific concept in mind but struggles with phrasing. The key here is that the core idea originates from the writer, and the LLM is merely assisting in expression.

However, it is not acceptable to give a broad description of an idea being discussed and then copy-paste the output generated by the LLM. This approach can lead to misrepresentation of the writer's original thoughts and may result in content that lacks authenticity. It also raises ethical concerns about originality and intellectual honesty, as the generated text might not accurately reflect the nuanced understanding or intentions of the writer.

In summary, while LLMs can be valuable tools for enhancing and refining forum text, they should be used responsibly. Proofreading and assisting with short descriptions are appropriate uses, but relying on them to generate entire sections of text from broad prompts undermines the authenticity and originality of the content.
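The bullet-points-to-essay workflow described above can be sketched as a tiny script. The `build_prompt` helper, the model name, and the client usage in the comment are illustrative assumptions of mine, not anything this thread or Lichess prescribes:

```python
# Hypothetical sketch of the workflow: collect your own bullet points,
# assemble a "short, non-verbose essay" prompt, and (optionally) send it
# to an LLM. Only the prompt assembly is shown as runnable code.

def build_prompt(topic: str, points: list[str]) -> str:
    """Assemble an essay prompt from writer-provided bullet points."""
    bullet_lines = "\n".join(f"- {p}" for p in points)
    return (
        f"Write a short, non-verbose essay on {topic} "
        f"using the following points:\n{bullet_lines}"
    )

prompt = build_prompt(
    "the use of LLMs for forum text",
    [
        "it is OK to use an LLM to proofread",
        "it is acceptable to ask an LLM to write a short description of ideas the writer provides",
        "it is not acceptable to give a broad description of the ideas and copy-paste the output",
    ],
)
print(prompt)

# Sending the prompt off would then be a single call, e.g. with the OpenAI
# Python client (requires an API key; model name is an assumption):
#
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(
#       model="gpt-4o-mini",
#       messages=[{"role": "user", "content": prompt}],
#   )
#   print(reply.choices[0].message.content)
```

The point of keeping the prompt construction explicit is transparency: the prompt itself can be posted alongside the output, as suggested earlier in the thread.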
Can I ask you @dboing the following question directly?

Is the "summary in bullet points" which petri999 gives in #28 an accurate summary of what you wanted to say in the lengthy texts you wrote in #26 and #27? It's something I genuinely cannot judge myself because my comprehension skills are not up to it. Your answer might be a valuable indicator of the usefulness of LLMs in discussions.
@Brian-E said in #29:
>
I agree. Can you help me with using them to figure out what I am trying to say while I write? The calamity of QWERTY touch typing.
LLMs: I was just describing my experience, in a context where we are not just rehashing common sense but trying to find new ways to think about a common question. I just don't find LLMs up to that. Either the thing is already common sense within the training-data monster (which we routinely have no clue about, nor about how it completes the LLM code of that model-free, but nonetheless model), and we get unctuous prose about it: no surprise, no information, we knew that (or not, if it was not part of the common sense implied in the boundaries of a big population, bigger than a village, to which we insist on attributing some common sense).

Or it is not common sense in that training set of unknown characteristics, and then watch out for the double work, and for me going on tilt trying to check. I think that for reading my stuff it is more efficient, if you have the energy (no one would force you), to use the LLMs for your own needs. I often find enough energy for things I am trying, in cooperation, to figure out, that are not yet obvious to me.

So, in summary, or nonetheless, and other smoothings of verbosity (ah, the red took a while to show up):
Pros and cons about LLMs and my post. But you get points for perspicacity. I like perplexity in general; a problem when trying to share that. Hunting for the post, though, as there might be a legitimate part to your post. Thanks.

Sorry, I did not see the post. Now, if I could read my own posts, I would be able to fix them. But in general only once I have enough energy replenished (it takes a lot of available resources, which are not as plentiful as one would assume).

I could, tomorrow, read my text again and make a better iteration. But checking the wording right now, upon your request, is not possible. So, maybe yes, some automatic LLMs. I just wish I could think and type correctly the first time.

Do you want me to stop even trying to share my thoughts? I suggest doing what Petri did. I hope it worked (it might, as I make circumvolutions at times, which might help compensate for the existence model at the information level).