News update. I started a genuine, long, documentation-oriented and wishful conversation with ChatGPT, on the free consumer chat website of the same brand.
I now understand better some of the exasperating text production it can lead to, besides the genuinely useful output. It has no self-awareness of the quality of its own knowledge (they just dump a big shapeless data blob onto an LLM and bet on the huge size to make all those hidden issues of reality representativeness magically sort themselves out; or at least that is the impression I get as a hunch, and I have seen no evidence that would make me regret blurting out that hypothesis).
It can shrink text, but it can also expand vaguely, at will or under user control, and the default seems to err on the side of the wall of text. But not my kind of wall: my problem is fitting everything I am considering into one ASCII noodle stream, with only CR or LF for visual management, which also depends on whether I take a breath while noting the incoming things I want to share.
This results in another kind of despair, as I muddle my initial intentions by considering alternate readings of what I might just have written (or thought). Exasperation at not being able to express what I see, which does not seem to fit my written verbal skills (or the few time windows and flukes when it might; trying to reproduce those becomes its own frustration). And I understand that I might be another source of wall-of-text frustration, but I think for the opposite causal factors: I always fear having missed an important logical step, a piece of reasoning, or a hidden assumption not shared by the unknown diversity of minds and baggage, across Lichess and the wider internet, trying to read this.
I find the AI has no problem with my breathless content, as it does not breathe! It is a hunch machine, built on an uncharacterized, massive blob of corpus. (If that were not so, we would like access to the partitions and to whatever characterization measures have been conceived and found. As it stands, the thing navigates neutrally, powerless over its own corpus information, whereas with such access a user might help the verbomotor machine generate text that is both more logical AND more informative to read.)
There seems to be a huge ability to chew on text, and it seems completely independent of any measure of informative content. Any human actually pursuing and effectively digesting new information might, after prolonged interrogation and overlapping input meanings, start figuring out that their last hours have been pain in vain (OK, that might just be me).
It produces what would seem like actionable, cogent information if it had been me on the other side: there is a lot of projection of the human side going on, a structural design artefact, either oblivious to human psychology or the opposite. And I would bet it comes more from the chat interface layer than from the web content corpus: a chat bot made from a language AI. I think that might be analogous to putting the horse in the cart, with the whole thing not on any road on any life-sustaining planet in sight.
In conclusion: since one can tune the wall-of-text verbosity toward actual informative content after an individual research project with many tête-à-têtes on the same topics, I hope that will carry over to my other attempts, when I get hopeful again (memory fading is not a myth; every day we can fool ourselves into forgetting certain things, do them again, get the same results, etc. You know the drill).
In summary of the conclusion. Or, in reality, a conclusion that just derailed.
Lichess should extend its open-source and open-data philosophy to the public forum and blog (even), regarding uses of such tools beyond mere correction, rephrasing, shrinking, or bullet condensing (those are not the problem, though knowing they were used would increase confidence that the wall of text is not pain in vain). Most important is when someone goes the other way: for example, providing a few keywords and then asking for a haiku, to maximally spread the actual human-provided information in those keywords, say one word per motif. If a link were put on top of any post whose output word count is in high ratio to the words put in, then someone among the many Lichess users coming into contact with it could warn the others, having checked the real prior human creative or informative content that was the context for the well-written and, in the worst cases, vague or uninformative product. (Some people confuse general with vague, I think. Maybe a lot of people; maybe the web is strong with that "force", at least when one wants more depth as a human. It still seems limited, to small-brain me.)
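The output-to-input word-ratio idea above can be sketched as a tiny heuristic. This is purely illustrative: the function names, the threshold of 3.0, and the idea that word counts approximate "human-provided information" are all my assumptions, not any actual Lichess policy or API.

```python
# Hypothetical sketch of the "expansion ratio" heuristic discussed above.
# All names and the threshold are illustrative assumptions, not real policy.

def expansion_ratio(human_input: str, published_post: str) -> float:
    """Ratio of published words to the words the human actually provided."""
    input_words = len(human_input.split())
    output_words = len(published_post.split())
    if input_words == 0:
        return float("inf")  # pure AI output, no disclosed human input
    return output_words / input_words

def needs_context_link(human_input: str, published_post: str,
                       threshold: float = 3.0) -> bool:
    """True when a post looks heavily AI-expanded, i.e. it should carry a
    link to the source conversation so readers can check the real input."""
    return expansion_ratio(human_input, published_post) > threshold

# Example: 4 keywords expanded into a 40-word post gives ratio 10.0.
seed = "opening theory blunders endgame"
post = " ".join(["word"] * 40)
print(expansion_ratio(seed, post))     # 10.0
print(needs_context_link(seed, post))  # True
```

Of course, a raw word count says nothing about meaning; it only flags posts where readers might want to see the original human input, which is the point of the link obligation below.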
OK. I might have super-AI strength in walls of text. Same results, different methods. Sorry for the pain.
So, here, on the other hand and nonetheless, in executive summary:
Obligation to make the long conversation that led to the post content a public link, to post that link, and to maintain it (it can be restricted to the conversation as of the date of the link's creation) through the life expectancy of the thread.
If the chat tool being used does not allow that, then perhaps an obligation to put the verbatim conversation on some text-hosting website.
Which, I realize, would require the chat bot to index the conversation, and not rely on its vague but assertive and reassuring claims that it has the whole conversation in memory, or that one can request the verbatim from it. That they have their own inert memory indexing might be good for them, but here we would need to ask for visible indexing.
In conclusion: the sloppy and untraceable presentation of rationale, typical of the attempt to lull us into a casual chat atmosphere, might be great for the product's propagation appeal, but I would ask here that something with more transparency pressure be explored as policy.
There are serious efforts, even some using such AI, but it takes only a small number of pain-in-vain posts to raise general suspicion, itself in vain, as the most economical assumption. More noise in human-to-human communication is not acceptable.
So when using AI in a good-faith effort, please be aware that it might be fooling you into making sense. Do share the actual, true input out of your mind alongside it, and then feel the appreciation of sharing your true self, against the sugary-wrapper danger of information loss.