
Policy on use of ChatGPT

@dboing said in #50:

can someone shrink the above into bullets of half-liners? In that direction we are safe. Also, maybe find out how to force an AI to be consistent with its own pretension of multi-post memory within one conversation. Ask it to put the bullet lines into some meaningful groupings; it is good at that, as this does not really require any notion of a non-linguistic source of meaning. Natural language, whether through a population's need for efficient communication or through the futile overhead rules grafted on by academies for the written form, tends to contain a lot of associative "logic" (possibly common human semantic causative factors), or sequential syntactic roles from the dynamics of segmenting the verbal stream. Sorry for the verbiage; it is the best I could do in these overspilling last words.

I am not saying it is useless, but the work to find the BS zone is a long learning curve for my own ability to understand such a trained model (which should have some reproducible internal mechanics to figure out, even through the smoke of proprietary, marketing, and public-relations ad-hoc code patches, or the interface's selling appeal: the chat-as-a-person bait).

The LLM generative model should have an invariant probability law governing its behavior. Conversations should be reproducible, if we at least had transparency about the interpretable parameter values used throughout a conversation..

But the more layers of obscurity, the more energy it can waste for us. I think one would have to be trained almost as much as in chess to find the soft spot of input nudging and restriction, so that the ratio of human energy put in to information gotten out starts making such technology usable. Clue: the number of web pages about the art of the prompt, prompt engineering, long tutorials with acrobatic walls of text in and out, and re-entrant at that.

I even have doubts now about the coding tutorials I saw, in the line of web articles trying to show how their one example did work. But as I realized above, there is no traceability conceived to go with a tool that can manipulate the thoughts of many with less energy, or a smaller spin-doctor workforce budget. That is about my rambling above being in vain, finally. It might be a population-scale Pandora's box. It was already bad before; now language is losing some trust, I would say. And not just in my own difficult relationship with it; I mean people, even here. So, humans: be careful.

Fr made by AI lol. (jk)


Invariant generative-model behavior probabilities, given the same determining input (to which the algorithm's implementation will produce its programmed response).

I am sorry to have collapsed more than one thought chunk into one. The reproducibility idea above is my somewhat educated guess that ML has not changed its basic unifying formalism while I was on intellectual vacation (having other sources of soul-sustaining activities, while mostly still taking breaks to rest my neck, etc.). Soul = global health in that sentence, kind of: the necessary ingredient that makes you a non-depressive, deluded optimist (kidding.. that experimental finding about odds prediction for valued random outcomes might be old news).
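To make that "same input, same behavior" point concrete, here is a minimal sketch of my own, not anything the hosted chat vendors expose: with a small local model, greedy decoding is deterministic, and even sampling becomes repeatable once the random seed is fixed. It assumes the Hugging Face transformers and torch packages; "gpt2" and the prompt are just placeholders.

    # Reproducibility sketch (assumptions: `transformers` and `torch` installed;
    # "gpt2" and the prompt are placeholders, not anything from this thread).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Reproducibility means", return_tensors="pt")

    # Greedy decoding (do_sample=False): the same input gives the same output every run.
    greedy = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    print(tokenizer.decode(greedy[0]))

    # Sampling is stochastic, but resetting the seed makes two runs identical.
    torch.manual_seed(0)
    run_a = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=0.8)
    torch.manual_seed(0)
    run_b = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=0.8)
    assert torch.equal(run_a, run_b)  # identical: the law and the seed are both fixed

The point is only that a learned instance really does define a fixed probability law; the hosted chat interfaces simply do not show you the knobs.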

I meant that the natural-language domain, with its corpus blob, builds a model of reality in the learning instance of the large language model, through learning-parameter exploration (just giving more information than needed; I like to inject clues here and there, like shooting myself in the rambling foot, or the other one, so feet).

The zone of that domain where the model has little sampling experience, or whatever characteristics the corpus has in the region of language space a human happens to explore during a chat, is where the BS becomes detectable, if one had put the science before the cart (or was it the horse).. Was the market-visibility dibs really that important?

Educating us with the still-visible failures.. but then not having done the population-psychology homework about loneliness, and about group, thought, or cultural-identity anxieties, in light of increasing information overload about a bigger and more complex world than the one we might have been raised to narrate with certainty, in our individual ignorance of its possible scope? OK, cheating: rigged question.

We like the connections, even if completely dehumanized, atomized into small chunks through text-crippled means of what full communication would be, with all the biologically available non-verbal co-signals and the shared, non-explicit context of the conversation. "Anonymity" is a small word to encompass all that is crippled.

And we still fall for it: many shallower social interactions, lots of shallow reflex-triggering short sentences.. the path to simplicity of world view (the world here is not just the globe; it is the individual cosmogony, their implicit model of what to expect over their life span as a world of experience they would have to navigate, or project themselves navigating).

The epitome of the focus group becomes most of us: getting a protracted discussion with something committed enough to our conversation and understanding our need to be understood, and promising the big internet truth at our fingertips (only the fingertips, or voice on small devices, as thumb tips and degrees of swiping creative expression might not be sustainable; or one adapts to the reduced, even more crippled bandwidth, compensating with quantity and with some self-image of having a socially active, healthy mental pillar).

So now, the epitome: an interlocutor designed by a few humans. I prefer the cold Google, and having a clearer line of communication. "I" and "you" should not be used by obscure mind-shaping tools.

I fell for it, even with my overthinking, overdriven, critical, and reluctant preconceptions. Just by complying with the chat staging and the constraints of the linguistic-model interface, I too ended up trying to figure out what the responses mean and what the pronouns, like "I" and "you", seem to be conveying.

like "antoine Doinel" mirror self-mesmerizing. but in reverse. say you to the machine or I as if you were confiding, enought time while working on the content, and suddenly to slip into the more familiar mode that this format of machine interface is trying to simualte. Maybe finally the machine has adapted to the humans, but I fear, it might not be self-learned with some optmizing learning objective being the common good of many people..

Back to reproducible. I meant the sampling-induced (a valid hypothesis, or factor), non-"uniform" actual characteristics that the blob would propagate into the learned model. Or into the learner model too, but I think that is the only part that has had some actual control or study for many years now. It is the other part of the finally learned model that we call the LLM: the corpus itself. Its sampling quirks, for example, would probably have a causative relation to the statistical distribution of the learned model's generative behavior, the law of probability that its final parameter-set instance would represent. We could call it statistical in relation to the reality the sample was assumed to represent. But as a machine in generative and human-interaction behavior dynamics, it is its law of probability that all of us are probing, as a population.

At the population level, but also in the abstract: if I were to reproduce my inputs, for sure there would be variations; but consider the statistics of that variation given the language domain of the corpus's ambient space (varying the words, checking my own thinking, and also being generous with too much information)..

Well, the statistical distribution would converge, I would claim, to the law of probability of the generative machine "model" of the language world, in its post-trained or learned instance version.
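A toy illustration of that convergence claim, entirely made up by me for the sake of the argument: if the machine's behavior really is a fixed law of probability, then repeating the same input many times and counting the outputs recovers that law, by the law of large numbers. The three "completions" and their probabilities below are invented.

    # Toy sketch: repeated sampling from a fixed generative law converges to it.
    # The "model" is just a hand-made categorical distribution over three possible
    # completions; the probabilities are invented for illustration.
    import random
    from collections import Counter

    fixed_law = {"completion_a": 0.6, "completion_b": 0.3, "completion_c": 0.1}

    def generate(rng):
        # One "conversation turn": draw a completion according to the fixed law.
        return rng.choices(list(fixed_law), weights=list(fixed_law.values()), k=1)[0]

    rng = random.Random(42)
    n = 100_000
    counts = Counter(generate(rng) for _ in range(n))

    for completion, p in fixed_law.items():
        print(f"{completion}: law {p:.2f}, empirical {counts[completion] / n:.4f}")
    # Individual outputs vary from run to run, but the empirical frequencies
    # approach the fixed law as n grows; that is the sense in which a population
    # of users ends up probing the model's law of probability.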

Terminology note: there are many uses of "model". There can be classes of them, and one should always think model "of", even if it is not written or spoken (and ask in what math, or in what machine-implementation substrate, the model would be instantiated). Finishing: that distribution over my limited repeated experiments, and the things that feel like common sense and obvious in complex new environments when one wants to understand (overthink? some of us have that drive; don't worry, it is nourishing), are communication traps, as hidden assumptions. But I am a certain type of educated aspiring expert at using models of a mechanistic nature (using math, a lot) in my past, so I might be like an expert forgetting that not all of the universe of people (say, possible Lichess readers) are clones or co-experts.

Giving hunches and trying to share where they might come from.. might be a problem.. sorry. I can't be perfect and in control of both sharing intents.


@Unseekedspy said in #51:

Fr made by AI lol. (jk)

You missed the point. Try an AI to shrink it, and then expand. I am too tired. Or just don't read it; you might be better off.

I am serious, and I spend a lot of my limited energy per day behind all that rambling. I just don't have the energy to clean up after myself. Just think "this is not the post I want to read", and all will be fine.


@dboing said in #53:

You missed the point. Try an AI to shrink it, and then expand. I am too tired. Or just don't read it; you might be better off.

I am serious, and I spend a lot of my limited energy per day behind all that rambling. I just don't have the energy to clean up after myself. Just think "this is not the post I want to read", and all will be fine.

It's fine, you don't have to. We get it. :))


I was explaining in there somewhere that not all walls of text have the same information content. A pretentious claim, but I think it is a good working hypothesis, within my limited grasp of my "cosmos" or world view. I might be trying to combat the BS-propagation "forces" with my own stream of characters. But I maintain these have all been my stream-of-consciousness note-taking from my storm front of thoughts coming through without pause.. and that shows, I know. I also have pain, but it was worth the satisfaction. So, I promise that if you use a language AI for what it does best, you would get a workable, less painful version.
Just ask for a TOC or bullets, two levels of hierarchy, for all my previous posts (a rough sketch of what I mean follows below).. it is good at that. I am out of full-model juice.
It took all the posture endurance I had left, and more, to do this. Maybe I am breaking the TOS; let's see.
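For what it's worth, here is roughly what I mean by "ask for a TOC or bullets, two levels of hierarchy", as a minimal sketch of my own. It assumes the openai Python package (v1+) with an API key in the environment; the model name, file name, and prompt wording are my placeholders, not a recipe from this thread.

    # Sketch: ask a chat model to compress a wall of text into a two-level outline.
    # Assumptions: `openai` package installed, OPENAI_API_KEY set in the environment,
    # "gpt-4o-mini" and "forum_post.txt" are placeholder names.
    from openai import OpenAI

    client = OpenAI()

    with open("forum_post.txt", encoding="utf-8") as f:
        wall_of_text = f.read()  # the post you want shrunk

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize the user's text as a two-level bullet outline: "
                        "top-level bullets for themes, half-line sub-bullets for points."},
            {"role": "user", "content": wall_of_text},
        ],
    )

    print(response.choices[0].message.content)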

Also, please do not quote my wall verbatim, even if only to make a joke or a short one-liner thought. You can still ping me; just erase the text after the > delimiter.

Sorry, your last reply came too late.


This topic has been archived and can no longer be replied to.