XD this is so true already for my classmates and I'm 11 right now, I feel like I'm the only one in my class who knows what real life feels like...
@KingOf64_2014 said in #11:
> XD this is so true already for my classmates and I'm 11 right now, I feel like I'm the only one in my class who knows what real life feels like...
Ya same!
breh this essay is 92% ai generated
@A0Coolboycolombo You can tell it is AI-generated from the heavy use of em dashes (—) and the flamboyant metaphors, such as "fast, filtered, and always entertaining" and "it starts to feel like we’re raising robots, not kids" (a rough sketch of this dash-counting tell is below, after the links).
@ColorParrot Are these your genuine thoughts about Gen Beta, or merely what ChatGPT has told you to think? In fact, YOU YOURSELF fit into the category of people ChatGPT is describing: "They’ll probably have robots do their chores" (you made an AI write your post) and have "no idea how to handle 'boring' moments without a screen" (you can't even exert the effort to write out a boring, simple paragraph yourself).
While I agree with the majority of the points ChatGPT makes, it notably fails to say how severe these impacts will actually be. It provides zero statistics and cites zero sources.
https://www.nbcnews.com/news/us-news/generation-beta-starting-2025-ai-tech-like-never-before-rcna184732
https://abcnews.go.com/GMA/Living/generation-beta-starts-2025-5-things/story?id=117256891
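Not a real detector, just to make the dash-counting "tell" above concrete: a minimal Python sketch that counts em dashes per 1,000 characters. The function name, threshold-free output, and sample sentence are all made up for illustration, and dash density alone obviously proves nothing on its own.

```python
# Naive stylistic "tell" sketch: em dashes per 1,000 characters of text.
# Illustration only; this is not a reliable AI-text detector.

def em_dash_rate(text: str) -> float:
    """Return the number of em dashes per 1,000 characters of text."""
    if not text:
        return 0.0
    return 1000 * text.count("\u2014") / len(text)

# Hypothetical sample sentence in the style quoted above.
sample = "Screens are everywhere \u2014 fast, filtered, and always entertaining."
print(f"{em_dash_rate(sample):.1f} em dashes per 1,000 characters")
```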
@ThunderClap said in #7:
> The best thing will happen FOR this generation being born NOW... baby Boomers will be out of the freaking way, with their condescending know-it-all attitudes, and so many of them picking on the rest of society. These kids, 18 years from now (or 22 after college), will have a FAIR shot at ownership and high-paying fields to choose from. Also, politics will evolve from where it is today, in the hands of madmen.
It amazes me how corporations are looking specifically for Boomers and Gen X to keep their factories running.
My guess is Gen Beta will be like Gen X: self-sufficient, independent, and able to think for themselves. They can't help but be better than Millennials, Gen Alpha, and Gen Z.
22 years from now the kids born today will be just out of college. Boomers will be 87 to 102 years old, and the population of the USA will be down, so they will do fine.
I told ChatGPT my opinion and told it to be persuasive, then edited it a bit. I'm not trying to say I wrote it myself; I'm here to get people concerned and to persuade them.
Gen Beta will be 80 percent ADHD (like me), 5 percent autistic, and 5 percent some other learning disorder.
Why is anyone even giving real responses? AI posts should only get AI responses. People keep asking if AI is ‘good’ or ‘bad,’ but that’s like asking if fire is good or bad. Fire cooks your food, sterilizes your water, and keeps you warm at night. It also burns down your house if you’re sloppy with it. AI is the same: it’s a tool, and the effect depends on who wields it and why.
The weirdest part for me is that AI is basically the first tool that can mimic thinking without actually doing it. We’ve had machines that can mimic muscle (engines, motors, hydraulics), machines that can mimic senses (cameras, microphones), and machines that can mimic memory (hard drives, books). But this is the first time we’ve got something that can appear to reason. The key word is ‘appear.’ An AI isn’t sitting there contemplating the meaning of life — it’s running a billion statistical pattern matches and spitting out whatever fits best. It’s like the world’s most articulate parrot: it doesn’t understand, but it sure can sound like it does.
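To make the "articulate parrot" point concrete, here is a toy Python sketch of what "spitting out whatever fits best" roughly means: score a few candidate next words, turn the scores into probabilities, and sample one. The tiny vocabulary and the scores are invented for illustration; real models do this over vocabularies of tens of thousands of tokens with scores computed from the whole preceding context.

```python
# Toy next-word selection: convert raw scores (logits) to probabilities
# with a softmax, then sample one word. Purely illustrative numbers.
import math
import random

def softmax(scores):
    """Turn arbitrary real-valued scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["moves", "thinks", "parrots", "understands"]
logits = [2.1, 0.3, 1.5, -0.8]  # hypothetical scores a model might assign

probs = softmax(logits)
next_word = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_word)
```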
And because it’s so good at that mimicry, people start to trust it without realizing it’s not actually thinking. That’s dangerous, because we’re built to trust things that sound confident. Imagine if your GPS could confidently tell you to drive into a lake, and you’d believe it just because it sounded like it knew what it was talking about. That’s AI’s biggest risk in the short term: misplaced trust.
In the long term? The risk is that we stop doing the work of thinking ourselves. This is already happening in small ways — students outsourcing essays, journalists outsourcing headlines, marketers outsourcing ad copy. Individually, these are harmless shortcuts. Collectively, they can atrophy the muscles of independent thought. Just like in chess, if you let the engine play all your moves, you’re not learning; you’re just pressing buttons and watching magic happen. The skill fades, the intuition dulls, and soon the human isn’t the player anymore — they’re just the vessel clicking ‘confirm.’
That doesn’t mean we should avoid AI. The best use is symbiotic: humans handle the goals, ethics, and strategy, while AI handles the grunt work, the infinite patience, and the lightning-speed calculation. You could compare it to having a super-grandmaster whispering possible moves in your ear — except the grandmaster is prone to occasionally hallucinating that bishops move like knights. You still need to check the work, apply judgment, and make the final call.
The real question for the next decade isn’t ‘Will AI take over?’ but ‘Will we stay sharp enough to remain in control when it inevitably messes up?’ Because AI doesn’t need to rebel to cause chaos. It just needs to be wrong in a way that no one bothers to double-check.
In short: treat AI like you’d treat a very talented, slightly unhinged apprentice. Give it tasks, learn from its strengths, but never forget that you are the one responsible for the outcome. Trust it enough to use it, distrust it enough to watch it closely. And remember — if you hand over your thinking entirely, you’ll get the future you deserve, whether that’s utopia, dystopia, or just a very efficient mediocrity.
Me: walking down the street
Random Kid: Goon max your Rizz, lil bro