Comment by sdoering
17 days ago
This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box." That said, I'm not sure "they've always been wrong before" proves they're wrong now.
Where I'm skeptical of this study:
- 54 participants, only 18 in the critical 4th session
- 4 months is barely enough time to adapt to a fundamentally new tool
- "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?
- Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch
Where the study might have a point:
Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.
[Edit]: Formatting
Soapbox time.
They were arguably right. Pre-literate people could memorise vast texts (Homer's work, Australian Aboriginal songlines). Pre-Gutenberg, memorising reasonably large texts was common. See, e.g., the book Memory Craft.
We're becoming increasingly like the Wall-E people, too lazy and stupid to do anything without our machines doing it for us, as we offload increasing amounts onto them.
And it's not even that machines are always better; they only have to be barely competent. People will risk their life in a horribly janky self-driving car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.
We have about 30 years of the internet being widely adopted, which I think is roughly similar to AI in many ways (both give you access to data very quickly). Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox
Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).
> Pre-literate people could memorise vast texts
It seems more likely that there were only a handful of people who could. There still are a handful of people who can, and they are probably even better than in the olden times [1] (for example, because there are simply more people now than back then).
[1] https://oberlinreview.org/35413/news/35413/ (random first link from Google)
Yes, there is some actual technique to learn and then with moderate practice it's possible to accurately memorize surprisingly long passages, especially if they have any consistent structure. Reasonable enough to guess that this is a normally distributed skill, talent, domain of expertise.
Used to be, Tony Soprano could afford a mansion in New Jersey, buy furs for his wife, and eat out at the strip club for lunch every day, all on a single income as a waste management specialist.
Brains are adaptive. We're not getting dumber, we're just adapting to a new environment. Just because they're less fit for other environments doesn't make it worse.
As for the productivity paradox, this discounts the reality that we wouldn't even be able to scale the institutions we're scaling without the tech. Whether that scaling is a good thing is debatable.
> Brains are adaptive.
They are, but you go on to assume that they will adapt in a good way.
Bodies are adaptive too. That didn't work out well for a lot of people when their environment changed to be sedentary.
Brains are adaptive, and as we adapt we are becoming more cognitively unbalanced. We're absorbing potentially biased information at a faster rate. GPT can give you information on X in seconds. Have you thought about it? Is that information correct? Information can easily be dressed up to sound real while masking the real as false.
Launching a search engine and searching may spew incorrectness, but it made you exercise judgement, made you think. You'd see two different opinions listed one underneath the other; you saw both sides of the coin.
We are no longer thinking critically. We take information at face value, mark it as correct, and don't question it afterwards.
The ability to evaluate critically and rationally is what's decaying. Who opens a physical encyclopedia nowadays? That itself requires resources, effort and time. Add in the complexity of modern life, which doesn't help us evaluate and reject false information. The Wall-E view isn't wrong.
> Just because they're less fit for other environments doesn't make it worse.
You think it's likely that we offload cognitive difficulty and complexity to machines, and our brains don't get worse at difficult, complex problems?
Brains are adaptive but skills are cumulative. You can't get good at what you don't practice.
> Just because they're less fit for other environments doesn't make it worse.
It literally does. If your brain shuts down the moment you can't access your LLM overlord then you're objectively worse.
> Homer Simpson
I can't stress this enough: Homer Simpson is a fictional character from a cartoon. I would not use him in an argument about economics any more than I would use the Roadrunner to argue for road safety.
No, it's useful evidence in the same way that contemporaneous fiction is often useful evidence. The first season aired from 1989-1990. The living conditions from the show were plausible. I know because I was alive during that time. My best friend was the son of a vacuum cleaner salesman with a high school education, and they owned a three bedroom house in a nice area, two purebred dogs, and always had new cars. His mom never worked in any capacity. My friend played baseball on a travel team and eventually he went to a private high school.
A 2025 Homer is only plausible if he had some kind of supplemental income (like a military pension or a trust fund), if Marge had a job, if the house was in a depressed region, or he was a higher level supervisor. We can use the Simpsons as limited evidence of contemporary economic conditions in the same way that we could use the depictions of the characters in the Canterbury Tales for the same purpose.
I also cited more serious analysis.
Yeah, Homer Simpson is fictional, a unionised blue-collar worker with specialised skills, and he lives in a small town.
> They were arguably right
I think they were right that something was lost in each transition.
But something much bigger was also gained, and I think each of those inventions was easily worth the cost.
But I'm also aware that one cost of the printing press was a century of very bloody wars across Europe.
There are still people that memorise the entire Quran word for word.
But it's a complete waste of time. What is the point of spending years memorising a book?
You seem like the kind of person that would still be eating rotten carcasses on the plains while the rest of us are sitting around a fire.
> They were arguably right. Pre-literate people could memorise vast texts (Homer's work, Australian Aboriginal songlines). Pre-Gutenberg, memorising reasonably large texts was common. See, e.g., the book Memory Craft.
> We're becoming increasingly like the Wall-E people, too lazy and stupid to do anything without our machines doing it for us, as we offload increasing amounts onto them.
You're right about the first part, wrong about the second part.
Pre-Gutenberg people could memorize huge texts because they didn't have that many texts to begin with. Obtaining a single copy cost as much as supporting a single well-educated human for weeks or months while they copied the text by hand. That doesn't include the cost of all the vellum and paper which also translated to man-weeks of labor. Rereading the same thing over and over again or listening to the same bard tell the same old story was still more interesting than watching wheat grow or spinning fabric, so that's what they did.
We're offloading our brains onto technology because it has always allowed us to function better than before, despite an increasing amount of knowledge and information.
> Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).
I find that to be a crazy opinion. Relative to thirty years ago, quality of life has risen significantly thanks to all three of those technologies (although I'd have a harder time arguing for Wikipedia than for the internet and Google), in quantifiable ways, from the lowliest subsistence farmers now receiving real-time weather and market updates to all the developed-world people with their noses perpetually stuck in their phones.
You'd need some weapons-grade rose-tinted glasses and nostalgia not to see that.
Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox
I certainly can't memorize Homer's work, and why would I? In exchange I can do so much more. I can find an answer to just about any question on any subject better than the most knowledgeable ancient Greek specialist, because I can search the internet. I can travel faster and further than their best explorers, because I can drive and buy tickets. I have no fighting experience, but give me a gun and a few hours of training and I could defeat their best champions. I traded the ability to memorize the equivalent of entire books to a set of skills that combined with modern technological infrastructure gives me what would be godlike powers at the time of the ancient Greeks.
In addition to these base skills, I also have specialized skills adapted to the modern world, that is my job. Combined with the internet and modern technology I can get to a level of proficiency that no one could get to in the ancient times. And the best part: I am not some kind of genius, just a regular guy with a job.
And I still have time to swipe on social media. I don't know what kind of brainless activities the ancient Greeks did, but they certainly had the equivalent of swiping on social media.
The general idea is that the more we offload to machines, the more time we can allocate to other tasks. To me, that's progress; that some of these tasks are not the most enlightening doesn't mean we did better before.
And I don't know what economists mean by "productivity", but we can certainly buy more stuff than before, which means that productivity must have increased somewhere (with some ups and downs). It may not appear in GDP calculations, but to me, it's the result that counts.
I don't count home ownership, because you don't produce land. In fact, the fact that land is so expensive is itself a sign of high global productivity: since land is one of the few things we need and can't produce, the more we can produce of the other things we need, the higher the value of land, proportionally.
> Pre-literate people could memorise vast texts
Pre-literate people HAD TO memorise vast texts
Instead of memorizing vast amounts of text, modern people memorize the plots of vast amounts of books, movies, TV shows, video games, and pop culture.
Computers are much better at remembering text.
You’re currently using the internet.
That doesn't contradict anything they wrote.
That’s a lot of assumptions.
> People will risk their life in a horribly janky self driving car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.
People will risk their and others' lives in a horribly janky car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.
FTFY
Needs more research. Fully agree on that.
That said:
TV very much is the idiot box. Not necessarily because of the TV itself but rather because of what's being viewed. An actually engaging and interesting show/movie is good, but last time I checked, it was mostly filled with low-quality trash and constant news bombardment.
Calculators do do arithmetic, and if you ask me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to. Simple calculations I do in my head, but my ability to do more complex ones has diminished. That's down to me not doing them as often, yes, but also because for complex ones I simply whip out my phone.
> Calculators do do arithmetic, and if you ask me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to
I got scared by how badly my junior (middle? ages 5-11) school mathematics had slipped when helping my 9-year-old boy with his homework yesterday.
I literally couldn't remember how to borrow when doing subtractions of 3-digit numbers! Felt literally idiotic having to ask an LLM for help. :(
For my part, I don't use that borrowing method at all. When I have to subtract, I subtract in chunks that my brain can easily handle. For example, for 1233 - 718, I'll do 1233 - 700 = 533, then 533 - 20 = 513, then 513 + 2 = 515. It's completely instinctive (and thus I can't explain it to my children :-) )
What I have asked my children to do very often is back-of-the-envelope multiplications and other computations. That really helped them to get a sense of the magnitude of things.
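For what it's worth, here is a rough Python sketch of that same chunking idea, purely as illustration (the function name and the round-up-to-the-nearest-ten choice are my own, not anything from the comment above):

    # Subtract in chunks: knock off the hundreds first, then a rounded-up
    # chunk of tens, and finally add back the small correction.
    def chunked_subtract(a, b):
        hundreds = (b // 100) * 100                  # 718 -> 700
        remainder = b - hundreds                     # 18
        rounded_tens = ((remainder + 9) // 10) * 10  # 18 -> 20
        correction = rounded_tens - remainder        # 2
        result = a - hundreds                        # 1233 - 700 = 533
        result -= rounded_tens                       # 533 - 20 = 513
        result += correction                         # 513 + 2 = 515
        return result

    print(chunked_subtract(1233, 718))  # 515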
It's more complex than that. The three pillars of learning are theory (finding out about the thing), practice (doing the thing) and metacognition (being right, or more importantly, wrong, and correcting yourself). Each of those steps reinforces neural pathways. They're all essential in some form or another.
Literacy, books, saving your knowledge somewhere else: these remove the burden of remembering everything in your head, but they don't cut into any of those three processes. So it's an immensely bad metaphor. A more apt one is GPS, which leaves you with only the practice.
That's where LLMs come in, and they obliterate every single one of those pillars for any mental skill. You never have to learn a thing deeply, because the LLM does the knowing for you. You never have to practice, because the LLM does all the writing for you. And of course, when it's wrong, you're not wrong. So you learn nothing.
There are ways to exploit LLMs to make your brain grow instead of shrink. You could make them into personalized teachers, catering to each student at their own rhythm. Make them give you problems instead of ready-made solutions. Only employ them for tasks you already know how to do perfectly. Don't depend on them.
But this isn't the future OpenAI or Anthropic are gonna gift us. Not today, and not in a hundred years, because it's always gonna be more profitable to run a sycophant.
If we want LLMs to be the "better" instead of the "worse", we'll have to fight for it.
Yes, I wrote this comment under someone else's comment before, but it seems to apply to yours even better.
Your criticism of this study is roughly on point, IMO. It's not badly designed by any means, but it's an early look. There are already similar studies on the (cognitive) effects of LLMs on learning, but I suspect this one gets the attention because it's associated with the MIT brand.
That said, these kinds of studies are important, because they reveal that some cognitive changes are evidently happening. Like you said, it's up to us to determine if they're positive or negative, but as is probably obvious to many, it's difficult to argue for the status quo.
If it's a negative change, teachers have to go back to paper-and-pen essay writing, which I was personally never good at. Or they need to figure out stable ways to prevent students from using LLMs, if they are to learn anything about writing.
If it's a positive change, i.e., we now have more time to do "better" things (or do things better), then teachers need to figure out substitutes. Suddenly, a common way of testing is now outdated and irrelevant, but there's no clear thing to do instead. So, what do they do?
I think novels and TV are bad examples, as they are not substituting for a process. The writing one is better.
Here's the key difference for me: AI does not currently replace full expertise. In contrast, there is no "higher level of storage" that books can't handle and only human memory can.
I need a senior to handle AI with assurances. I get seniors by having juniors execute supervised lower risk, more mechanical tasks for years. In a world where AI does that, I get no seniors.
Not sure "they've always been wrong before" applies to TV being the idiot box and everything after
> The historical pattern suggests cognitive abilities shift rather than disappear.
Shift to what? This? https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
What the hell have I just read (or at least skimmed)?? I cannot understand if the author is:
a) serious, but we live on different planets
b) serious with the idea, tongue-in-cheek in the style and using a lot of self-irony
c) an ironic piece with some real idea
d) he is mocking AI maximalists
There was discussion about this here a couple of weeks ago: https://news.ycombinator.com/item?id=46458936
Steve Yegge is a famous developer; this is not a joke :) You could say he is an AI maximalist. From your options I'd go with (b): serious with the idea, tongue-in-cheek in the style, and using a lot of self-irony.
It is exaggerated, but this is how he sees things ending up eventually. This is real software.
If things do end up in glorified kanban boards, what does it mean for us? That we can work less and use the spare time reading and doing yoga, or that we'll work the same hours with our attention even more fragmented and with no control over the outputs of these things (=> stress).
I really wish that the people who think this is good for us, and who are pushing for this future, would do a bit better than:
1. More AI
2. ???
3. Profit
Just ignore the rambling crypto shill.
I agree with Socrates, and too many people have the wrong memory of him, making his prediction come true. There was a great philosophical book last year, Open Socrates [1], which explains that his methods and ideas point in the opposite direction from how most people use AI. Socrates believed we can only get closer to knowledge through the process of open, inquisitive conversation with other beings who are willing to refute us and be refuted in turn. He claimed ideas can only be expressed and shared in dialogue and live conversation. The one-directional communication of every medium since the book has lacked this, and AI's version of dialogue is sycophancy and statistical common patterns instead of fresh ideas.
[1] https://www.nytimes.com/2025/01/15/books/review/open-socrate...
I'm sure you could train an AI to be skeptical/critical by default. The "you're absolutely right!" AIs are probably always going to be more popular, though.
> TV was the "idiot box."
TV is the uber idiot box, the overlord of the army of portable smart idiot boxes.
I think that is a VERY false comparison. As you say, LLMs try to take over entire cognitive and creative processes, and that is a bigger problem than outsourcing arithmetic.
> 4 months is barely enough time to adapt to a fundamentally new tool
Yes, but there's also the extra wrinkle that this whole thing is moving so fast that anything 4 months old is borderline obsolete. The same goes into the future: any study starting now, based on the state of the art on 22/01/2026, will involve models and potentially workflows already obsolete by 22/05/2026.
We probably can't ever adapt fully when the entire landscape is changing like that.
> Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
Yes, but also consider that this is true of any team: All managers hire people to outsource some entire cognitive process, letting themselves focus on their own personal comparative advantage.
The book "The Last Man Who Knew Everything" is about Thomas Young, who died in 1829; since then, the sum of recorded knowledge has broadened too much for any single person to learn it all, so we need specialists, including specialists in managing other specialists.
AI complements our own minds on both sides of this: unlike us, AI can "learn it all", just not very well compared to humans. If any of us had a sci-fi/fantasy time loop/pause that let us survive long enough to read the entire internet, we'd be much more competent than any of these models, but we don't, and the AI runs on hardware which allows it to.
For the moment, it's still useful to have management skills (and to know about and use Popperian falsification rather than verification) so that we can discover and compensate for the weaknesses of the AI.
> they've always been wrong before
Were they? It seems that often the fears came true, even Socrates’
Writing didn't destroy memory, it externalised it and made it stable and shareable. That was absolutely transformative, and far more useful than being able to re-improvise a once-upon-a-time heroic poem from memory.
It hugely enhanced synthetic and contextual memory, which was a major development.
AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.
Of course we identify with cognition in a way we didn't do with rote memory. But we should possibly identify more with synthetic and creative cognition - in the sense of exploring interesting problem spaces of all kinds - than with "I need code to..."
> AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.
Wouldn't the endgame of externalized cognition be that humans essentially become cogs in the machine?
> in the same way Socrates couldn't imagine Hacker News.
Perhaps he could. If there’s an argument to be made against writing, social media (including HN) is a valid one.
Regardless of whether memory was externalised, it’s still the case that it was lost internally, that much is true. If you really care about having a great internal memory then of course you’ll think it’s a downside.
So we've externalised memory, and we've externalised arithmetic. Personally, the idea of externalising thinking seems to be the last one. It's not clear what's left of being human inside us once that one is gone.
It did destroy memory though. I would bet any amount of money that our memories in 2026 are far, far worse than they were in 1950 or 1900.
In fact, I can feel that my memory is easily worse now than before ChatGPT's release, because we are doing less hard cognitive work. The less we use our brains, the dumber we get, and we are definitely using our brains less now.
"Socrates worried writing would destroy memory".
He may have been right... Maybe our minds work in a different way now.
Back when I routinely dialed phone numbers by hand (either on a keypad or on a literal dial), I memorized the numbers I called most frequently. Many of those numbers I still have memorized today, years after some of those phone lines have been disconnected.
But now? I almost never enter a new phone number anywhere. Maybe someone shares a contact with me, and I tap to add it to my contact list. Or I copy-paste a phone number. Even some people that I contact frequently, I have no idea what their phone number is, because I've never needed to "know" it, I just needed to have it in my contact list.
I'm not sure that this is a bad thing, but it definitely is a thing.
Ah, well, more memory space for other stuff, eh? I suppose. But like what? I could describe other scenarios, in which I used to have more facts and figures memorized, but simply don't any more, because I don't need to. While perhaps my memory is freed up to theoretically store more other things, in practice, there's not much I really "need" to store.
Even if no longer memorizing phone numbers isn't especially bad, I'm starting to think that no longer memorizing anything might not be a great idea.
> That said, I'm not sure "they've always been wrong before" proves they're wrong now.
I think a better framing would be "abusing (using it too much or for everything) any new tool/medium can lead to negative effects". It is hard to clearly define what is abuse, so further research is required, but I think it is a healthy approach to accept there are downsides in certain cases (that applies for everything probably).
Were any of the prior fears totally wrong?
> This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box." That said, I'm not sure "they've always been wrong before" proves they're wrong now.
What do you mean? All of them were 100% right. Novels are brain softening, TV is an idiot box, and writing destroys memory. AI will destroy the minds of people who use it much.
How do we know they were wrong before?
To be fair, writing did destroy memory. It's just that in the very long summer of writing, which may now be coming to an end thanks to AI, we have considered the upside more than worth it.
Once you realize that what we remember are the extremized strawman versions of the complaints, you can see that they were not wrong.
Writing did eliminate the need for memorization. How many people could quote a poem today? When oral history was predominant, it was necessary in each tribe for someone to learn the stories. We have much less of that today. Writing preserves accuracy much more (up to conquerors burning down libraries, whereas it would have taken genocide before), but to hear a person stand up and quote Desiderata from memory is a touching experience to the human condition.
Scribes took over that act of memorization. Copying something lends itself to memorization. If you have ever volunteered extensively for Project Gutenberg you can also witness a similar experience: reading for typos solidifies the story in your mind in a way that casual reading doesn't. In losing scribes we lost the prioritization of texts and this class of person with intimate knowledge of important historical works. With the addition of copyright we have even lost some texts. We gained the higher availability of works and lower marginal costs. The lower marginal costs led to...
Pulp fiction. I think very few people (but I would be disappointed if it was no one) would argue that Dan Brown's The Da Vinci Code is on the same level as War and Peace. From here, magazines were created, even cheaper paper, rags some would call them (or use that to refer to tabloids). Of course this also enabled newspapers to flourish. People started to read things for entertainment, and text lost its solemnity. The importance of the written word diminished on average as the words being printed became more banal.
TV and the internet led to the destruction of printed news, and so on. This is already a wall of text so I won't continue, but you can see how it goes:
Technology is a double edged sword, we may gain something but we also can and did lose some things. Whether it was progress or not is generally a normative question that often a majority agrees with in one sense or another but there are generational differences in those norms.
In the same way that overuse of a calculator leads to atrophy of arithmetic skills, overuse of a car leads to atrophy of walking muscles, why wouldn't overuse of a tool to write essays for you lead to atrophy of your ability to write an essay? The real reason to doubt the study is because its conclusion seems so obvious that it may be too easy for some to believe and hide poor statistical power or p-hacking.
I think your take is almost irrefutable, unless you frame human history as the only possible way to achieve current humanity status and (unevenly distributed) quality of life.
I also find exhausting the Socrates reference that's ALWAYS brought up in these discussions. It is not the same. Losing the collective ability to recite a 10,000-word poem by heart because of books is not the same thing as ceasing to think because an AI is doing the thinking for you.
We keep adding automation layers on top of the previous ones. The end goal would be _thinking_ of something and having it materialized in computer and physical form. That would be the extreme. Would people keep comparing it to Socrates?
None of the examples you provided were being sold as “intelligence”
What study? Try it yourself.
> TV was the "idiot box."
To be fair, I think this one is true. There's a lot of great stuff you can watch on TV, but I'd argue that TV is why many boomers are stuck in an echo chamber of their own beliefs (because CNN or Fox News or whatever opinion-masquerading-as-journalism channel is always on in the background). This has of course been exacerbated by social media, but I can't think of many productive uses of TV other than Sesame Street and other kids' shows.
>TV was the "idiot box."
Still is.
What does not get used, atrophies.
Critical thinking, forming ideas, writing, and so on: those too are things that can atrophy if not used.
For example, a lot of people can't locate themselves without a GPS today.
To be frank, I see it as really similar to our muscles: don't want to lose it? Use it. Whether that is learning a language, playing an instrument, or the tasks LLMs perform.
Well said
> "they've always been wrong before"
In my opinion, they've almost always been right.
In the past two decades, we've seen the less-tech-savvy middle managers who devalued anything done on a computer. They seemed to believe that doing graphic design or digital painting was just pressing a few buttons on the keyboard and the computer would do the job for you. These people were constantly mocked in online communities.
In the programmers' world, you have seen people who said "how hard could it be? It's just adding a new button/changing the font/whatever..."
And strangely, in the end those tech muggles were the insightful ones.