Just wanna point out that every time something scares you enough, it also reprograms/rewires your brain. Not trying to discredit the study, but the reprogramming itself really isn’t the concern; the concern is whether the reprogramming is beneficial, which this isn’t.
The study appears to be saying something different from what the headline implies.
Basically it might be better to say that using an LLM doesn’t require you to think as hard, you remember less of the essay, and when you go back to rewrite a previous essay without the LLM you have more trouble.
They also noted that for some people, using the LLM made them learn much better. Basically the difference between getting it to write for you and using it as a tool to structure information.
One reduced cognitive load from all sources, and the other reduced load relating to integrating different information sources.

Basically it was a proper study by people who knew what they were doing. They never actually said anything about rewiring.
This comment just reprogrammed my brain after the reprogramming it got from that post 😮
Clicked the article to give it a read, saw the slop they’re using right next to the text, laughed, closed the damn thing
using AI images in an article about AI use leading to cognitive decline gotta be crazy💀
Yeah but what do you expect them to do, actually pay a human to make sure they don’t do that?
Or just survive on the merit of the text content?
Honestly it’s hard to tell whether the text is also AI slop before reading it fully, and I usually don’t have much time to waste on shitty articles, so I just skip those with AI slop images.
Try the more original link.
Does context escape your brain? The images are not the focus of this article; the fucking article is, you weirdo
You know I’m getting real tired of stupid people being online and thinking they’re allowed to speak to me
Then why do you put that reply button under your post?
Yawn
The name and presentation of that site has a veneer of legitimacy, but it really doesn’t seem credible.
There’s also a lot of general antivax stuff.
Now, sharing a lot of… questionable articles… doesn’t make the article in question invalid. It does, however, call into extreme doubt any editorial context the site might be adding.
https://arxiv.org/pdf/2506.08872
This is the actual study being referenced. Its conclusions are significantly less severe than this article presents them as, while still conveying “LLMs are not generally the best tool for facilitating education”.
trade-off highlights an important educational concern: AI tools, while valuable for supporting performance, may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material. If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it.
from an educational standpoint, these results suggest that strategic timing of AI tool introduction following initial self-driven effort may enhance engagement and neural integration. The corresponding EEG markers indicate this may be a more neurocognitively optimal sequence than consistent AI tool usage from the outset
Ultimately, this isn’t saying AI tools cause brain damage or make you stupid. It’s saying that learning via LLM often causes worse retention of the information being learned. It also says that search engines and LLMs can remove certain types of cognitive load that are not conducive to retention, making learning easier and faster in some cases where engagement can be kept high.
It’s important to be clear and honest about what a study is saying, even if it’s not as unequivocally negative as the venue might appreciate.
Well yeah thanks. The headline is an obvious lie so that’s kind of a red flag.
It’s important to be clear and honest about what a study is saying, even if it’s not as unequivocally negative as the venue might appreciate.
Of course. If you’re talking about presenting nuance, then I would just briefly mention the generation of studies that showed exposure to television reduced cognitive abilities and were full of nuance. Because all of those studies were ignored, and more studies showing television advertising had no effect on people appeared (how did those studies get funded, I wonder; well, anyway), nothing happened, and here we are in Libertarian paradise.
AI is much more affecting, and its adoption isn’t being “offered”; it’s being mandated. I think we can dispense with some of the nuance in headlines and leave that to the researchers looking at the raw data.
Nah, I don’t think we can. You may be okay with hyperbolic lies from an antivax quackery website, but I’m not.
I think our use of LLMs is overblown and rife with issues, but I don’t think the answer to that is to wrap your concerns in so much obvious bullshit that anyone who does even a cursory glance will see that it’s bunk. All you do is convey “people who think LLMs and generative AI are worrisome are full of shit”.
AI is much more affecting
Gee, if only there were some way to find information that validates those claims and be confident that people haven’t labeled them grossly incorrectly…
Why are you talking about TV, as an aside? People doing research poorly or ignoring research in the past is irrelevant to if we should lie to people now.
Why are you talking about TV, as an aside? People doing research poorly or ignoring research in the past is irrelevant to if we should lie to people now.
First of all, it’s as relevant as anything can be. Just say you don’t know anything about it. Secondly, who’s lying?
The article you linked to. Most people would call “saying inaccurate things” a form of “lying”.
Explain why it’s relevant. I get that you’re saying “they said TV was fine and it caused problems”. I don’t see how that’s relevant to “we should say things that aren’t true about AI”.
Amusingly using what appears to be an AI generated image.
They always have a weird overuse of sepia tones.
Tech bros stay losing, it is a good day
I feel like kids are the primary loss-sufferers here :(
(phrasing there is me trying not to call them losers)
Hooray!
We invented cyberpsychosis, for reals!
Isn’t it so cool to live in a cyberpunk dystopia!?!
Brb, gonna go OD on some early access/preview alpha braindances!
I don’t know why, but I think I just realized what happened to my ex. I thought we were mending our relationship before she started sexting the guy she had an affair with, but it seemed really dumb, even for her.
But I also remember when Chat-gpt came out, that was the time she started using a VPN. Why? Idk didn’t bother me. But then I read about the LLMs essentially just being the ultimate sycophant, and studies like this show harm to critical thought, and I’m wondering - is this what happened with her?
Ever since I moved out, she just sort of got dumber. Like it’s possible I was blissfully unaware of just how unintelligent she was, but I think I would have even noticed some of this. This might be a bigger problem than we know of.
Why would she need a VPN to access chat gpt though?
I actually just sought out this comment to see if any reason was given about what a VPN had to do with any of it.
I think it was just a strange coincidence. In the past she never took my comments on IT security seriously, so it seemed odd that at the same time she started using Chat-gpt she started using the VPN.
Shortly after that, she wanted me to pay a credit card bill of hers, I said sure no problem, just get me the statement so I know how much. She refused. She could have just given me the total, but she refused because “I wanted to verify her purchases.”
That, obviously, made me very upset, because I wasn’t suspicious until she said I wanted to inspect her statement. That weekend she traveled out of state, and when she came back I discovered the sexting.
Clearly she was on the way out, but the point remains, did chat-gpt accelerate things downward? That’s my question.
Ah, that makes some sense. At a guess I’d say it’s plausible that she was asking chatgpt how to hide info online and it suggested a VPN. If she never used one or talked about digital privacy before that it could make sense.
There’s a good body of research on cognitive capacity and creativity in regard to enriching environments. Even down to rats. Give rats playgrounds and toys and they perform better at memory tasks and solving puzzles.
I suppose you could train rats to press a button to get a human to come solve problems for them. Take the human away, then what?
What’s insidious here is that the same over-scheduled kids, having their childhoods choreographed for enrichment, are often the ones coming out of childhood with critical-thinking, anxiety, and socialization deficits, we think because they’re using their phones for every problem they need to solve.
There are a million legit reasons to avoid and despise LLMs, their makers, and their pushers. I don’t think this is one of them.
Literally every piece of technology introduced in the past thousand years has had this kind of hue and cry built up around it, beginning with the printing press and books in Europe. Every form of communication or information technology has had “studies” (or what passed for them in ages past) claiming that the new technology would ruin the minds and morals of people who used it. Remember when television would “rot kids’ minds”? Remember when the Internet was going to end civilization as we know it?
This study is just more of the same. You’ll find equivalent studies about television from the ’50s through as late as the ’70s.
There are (a myriad of) good arguments for despising LLMs. This (not yet peer-reviewed) MIT study is not one of them. (And I should point out that the actual paper instead of this summary of it has quite a bit more nuance than is reported in the linked article.)
The study itself is entirely benign, and I’d actually accept it as a reason to eschew AI in an educational context. Their conclusion is basically “if you use an LLM to write an essay you tend to not retain the information as well”, which is… downright boring in how reasonable it is. Particularly given the converse observation I wouldn’t have expected: if you are already familiar with a subject then using an LLM to write an essay can strengthen your understanding.
The “journal” this summary of the study was shared in is quackery, so I’m not surprised they distorted the findings.
Nothing bad ever happens.
Oh, are we playing a game of non sequitur? OK. My move is:
炮二平五 (Cannon two to five, the classic xiangqi central-cannon opening)
Your move.
No, like, you’re seemingly saying “new inventions never cause health problems”
Y’know, like asbestos. It was a wonder material! And then the health effects were uncovered.
Well played! This is going to take cunning to counter in our little non sequitur games.
Uh…
Oh, I know!
I’ve got new socks on!
I’m more concerned about degu infestation if I’m being honest. No one knows how to handle them here so it just turns into a whole mess.
An interesting fact about azure-winged magpies is that despite their strikingly similar appearance to Iberian magpies found in Spain and Portugal, the two populations are separated by over 5,000 miles, with genetic evidence showing they diverged thousands of years ago.
Yeah the anti-“AI” hype is just as much of a problem as the pro-“AI” hype. Both are based on the premise that “AI” actually exists and help to fuel the hype machine. That’s that same reason why “AI” grifters “worry” about “AI” destroying the planet, etc. A bad delusion is better and more profitable than no delusion at all.
Literally every piece of technology introduced in the past thousand years has had this kind of hue and cry built up around it, beginning with the printing press and books in Europe.
The ill-conceived printing press argument is a standard pro-AI trope.
Remember when television would “rot kids’ minds”? Remember when the Internet was going to end civilization as we know it?
Yeah, I don’t know where you live, but here in America our democracy is in shreds thanks to those things. They came true, just as they said they would. Environmental collapse is next, for those keeping score at home.
The ill-conceived printing press argument is a standard pro-AI trope.
P.S. Characterizing my post as pro-AI is so utterly fucking stupid it speaks volumes as to the real source of your nation’s tattered democracy. Just sayin’.
You had an enormously destructive civil war in the 19th century, but yes, go ahead and blame television and the Internet for your democracy being in tatters.
News flash, homey: your democracy was in tatters from the very outset. It has never not been in tatters.
News flash, homey: your democracy was in tatters from the very outset. It has never not been in tatters.
Since you apparently don’t live here, we’ll leave it there then.
Saying knee jerk rejection of new technology is common is pro-ai?
If it’s such a bad, ill-conceived notion, why don’t you explain why it’s wrong, instead of just saying that it’s used by people you disagree with?

Maybe if an argument is used by both pro- and anti-AI people, it’s less a “pro-AI” argument and more a “let’s keep in mind how often doom and gloom has been wrong and keep our criticism grounded”?
If it’s such a bad, ill-conceived notion, why don’t you explain why it’s wrong, instead of just saying that it’s used by people you disagree with?
Again? No. Maybe someone else feels like explaining it this time.
“let’s keep in mind how often doom and gloom has been wrong and keep our criticism grounded”?
My good dude, the sentient life on this planet is about to witness several extinction events in our lifetime because we ignored the grounded “doom and gloom” research. If you want to stand on the big red X this time while a loud whistling noise and billionaire cackling get louder, that’s up to you, but as many people have tried to say before: it’s a bad idea, don’t do it.
Yeah, you’re attacking people who agree with you, but disagree with your notion that we can ignore “reality” in describing why it’s a bad idea.
Have fun with that.
deleted by creator
Related post from when this was initially published: https://midwest.social/post/30021473
Oh, baby, did you read the thing? Something tells me you didn’t read the thing.
Did they control this against my mental decline after using a rage-inducing traditional search for 10x as long? I get the point being made, but nothing exists in a vacuum.
Actually, yes. I know you were making a point, but the study this pile of garbage article referenced actually did comparisons between people using an LLM, people using a search engine, and people using neither.
The study is a lot more reasonable. It basically says using an LLM to write a research paper on a subject makes you retain less of the subject matter.
They specifically mention the cognitive load of using the tool, be it search engine or LLM, and how that load doesn’t contribute to knowledge retention.
Their research indicates “over usage” as opposed to “any usage” is bad for learning a subject.
Some people think doing work makes them weaker at the end of the day. Other people do work and feel it makes them stronger for having done it, so you’re clearly in the first cohort.
You’re making your boss’s wallet stronger.
The fuck does that mean? We’re talking about looking up information; the fuck are you talking about?
Misleading headline. This article isn’t about “AI” (which doesn’t exist) but about ChatGPT.
Perhaps the only thing worse than the garbage science behind the “AI” grift is the “journalism” promoting it.
While you are 100% correct, good luck changing people’s understanding at this point.
deleted by creator
Thank you for saving me from typing that.