Accumulation of cognitive debt when using an AI assistant for essay writing task

7 months ago (arxiv.org)

I wouldn't call it "accumulation of cognitive debt"; just call it cognitive decline, or loss of cognitive skills.

And also: DUH. If you stop speaking a language, you forget it. The brain does not retain information that it does not need. Anybody remember the couple of studies on the use of Google Maps for navigation? One was "Habitual use of GPS negatively impacts spatial memory during self-guided navigation"; another reported a reduction in gray matter among habitual map users.

Moreover, anyone who has developed expertise in a science field knows that coming to understand something requires pondering it, exploring how each idea relates to other things, etc. You can't just skim a math textbook and know all the math. You have to stop and think. IMO it is the act of thinking which establishes the objects in our mind such that they can be useful to our thinking later on.

  • > You can't just skim a math textbook and know all the math. You have to stop and think.

    And most importantly you have to write. A lot. Writing allows our brain to structure our thinking. It enables us to have a structured dialogue with ourselves and to explore different paths. Thinking and pondering alone can only do so much and will soon reach their limits. Writing, on the other hand, lets one explore thoughts nearly endlessly.

    Given that thinking is so intimately associated with writing (could be prose, drawing, equations, graphs/charts, whatever) and that LLMs are doing more and more of the writing, it'll be interesting to see the effect of LLMs on our cognitive skills.

    • The impact of writing is immensely undervalued. Even writing with a keyboard or on a screen is a lot better than not writing at all. Exercising writing on any topic is still beneficial, and you can find many psychologists recommending a daily blog of some sort to help people observe themselves from the outside. The same goes for speaking, public speaking if you like, and therapeutic daily role-playing, which is also overlooked.

      I'd love to see some sort of study comparing people who actively write their stuff on social media with those who don't.

      If you want to spare your mind from GPT numbness, write or copy what it tells you to do by hand; do not abandon this process.

      Or just write code, programs, essays, poems for fun. Trust me, it is fun, and you'll get smarter and more confident. GPT is a very dangerous convenience gadget; like sugar, Netflix, obesity, or long commutes, it is not going away, but similarly, dosage and countermeasures are essential to cope with the side effects.

      3 replies →

    • > And most importantly you have to write. A lot. Writing allows our brain to structure our thinking.

      There's a lot of talk about AI assisted coding these days, but I've found similar issues where I'm unable to form a mental model of the program when I rely too much on them (amongst other issues where the model will make unnecessary changes, etc.). This is one of the reasons why I limit their use to "boring" tasks like refactoring or clarifying concepts that I'm unsure about.

      > it'll be interesting to see the effect of LLMs on our cognitive skills.

      These discussions remind me a lot about this comic[1].

      [1] https://www.monkeyuser.com/2023/deprecated/

    • > And most importantly you have to write. A lot. Writing allows our brain to structure our thinking. Enables us to have a structured dialogue with ourselves.

      I feel like it goes beyond writing to really any form of expressing this knowledge to others. As a grad student, I was a teaching assistant for an Electrical Engineering class I had failed as an undergrad. The depth of understanding I developed for the material over the course of supporting students in the class was amazing. I transitioned from "knowing" the material and equations to being able to generate them all from first principles.

      Regardless, I fully agree that using LLMs as our form of expression will weaken both the ability to express ourselves AND the ability to develop deep understanding of topics as LLMs "think" for us too.

    • Writing is pure magic. It allows so much reflection and so many insights that you wouldn't otherwise get. And writing as part of the reading process allows you to directly integrate what you are reading as you are doing it. Can't recommend it enough. The only downside is that it's slow compared to what people are used to and want to do, especially in the work environment.

    • I disagree with this take. When exploring new math problems, it's often possible to explore candidate solution paths at a lower technical level in your mind first, before writing anything down and going into the details of an approach. I don't think not writing is that limiting if all of your approaches already fail before you get into the details, which is often the case in the early stages of math research.

      2 replies →

    • > And most importantly you have to write. A lot.

      I find this to still be true with AI assisted coding. Especially when I still have to build a map of the domain.

    • They made a documentary about this actually. You can probably find it on Netflix or something. It's called Idiocracy.

    • > And most importantly you have to write. A lot. Writing allows our brain to structure our thinking.

      Not to be pedantic, but I'd still argue that thinking is the most important part, at least when it comes to understanding the nature of learning. Writing is ultimately great because it facilitates high-quality thinking. You essentially say this yourself.

      Overall, I think it’s more helpful to understand the learning process as promoting high quality thinking (encoding if you want to be technical). This sort of explains why teaching others, argumentation, mind-mapping, good note-taking, and other activities and techniques are great for learning as well.

  • I would call it cognitive debt. Have you ever tried writing a large report with an LLM?

    It's very tempting to let it write a lot, let it structure things, let it make arguments and visuals. It's easy to let it do more and more... And then you end up with something that is very much... Not yours.

    But your name is on it, you are asked to explain it, to understand it even better than it is written down. Surely the report is just a "2D projection" of some "high dimensional reality" that you have in your head... right? Normally it is, but when you spit out a report in 1/10th of the time, it isn't. You struggle to explain concepts, even though they look nice on paper.

    I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.

    I like the term cognitive debt as a description of the gap between what mental models one would have to develop pre-LLMs to get a report out, and how little you may need with an LLM.

    In the end it is your name on that report/paper, what can we expect of you, the author? Maybe that will start slipping and we start expecting less over time? Maybe we can start skipping authors altogether and rely on the LLM's "mental" model when we have in depth questions about a report/paper... Who knows. But different models (like LLMs) may have different "models" (predictive algorithms) of underlying truth/reality. What allows for most accurate predictions? One needs a certain "depth of understanding". Writing while relying too much on LLMs will not give it to you.

    Over time this may indeed lead to population-level "cognitive decline, or loss of cognitive skills." I don't dare to say that. Book printing didn't do that, although it was expected at the time by the religious elite, who worried that ordinary humans would not be able to interpret texts correctly.

    As remarked here in this thread before, I really do think that "Writing is thinking" (but perhaps there is something better than writing which we haven't invented yet). And thinking is: Developing a detailed mental model that allows you to predict the future with a probability better than chance. Our survival depends on it, in fact it is what evolution is in terms of information theory [0]. "Nothing in biology makes sense except in the light of ... information."

    [0] https://www.youtube.com/watch?v=4PCHelnFKGc

    • > I found that I just really have to do the work, to develop the mental models, to articulate and to re-articulate and re-articulate again. For different audiences in different ways.

      Yes definitely!

      I'd say that being able to turn an idea over in your head is how you know if you know it ... And even pre-LLM, it was easy to "appear to know" something, but not really know it.

      PG wrote pretty much this last year:

      > in a couple decades there won't be many people who can write.

      > So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots.

      https://paulgraham.com/writes.html

  • > The brain does not retain information that it does not need.

    Why do I still know how to optimize free conventional memory in DOS by configuring config.sys and autoexec.bat?

    I haven’t done this in 2 decades and I’m reasonably sure I never again will

    • Probably because you learned it during that brief period in your development in which humans are most impressionable.

      Now think about the effect on those humans currently using LLMs at that stage of their development.

    • Because these are core memories that provide stepping stones to later knowledge. It is a part of the story of you. It is very hard to integrate all knowledge in this way.

    • The last fast food place you went to, what does the ceiling look like? The exact colour/pattern?

      The last phone conversation you had with a utility company, how did they greet you exactly?

      There's lots that we do remember, sometimes odd things like your example, though I'm sure you must have repeated it a few times as well. But there's so much detail that we don't remember at all, and even our childhood memories just become memories of memories - we remember some event, but we slowly forget the exact details, they become fuzzy.

    • I think because some experiences are so profound to your brain (first impressions, moments that you are proud of) that you just replay them over and over again.

    • I also think the claim that "the brain does not retain information it does not need" is an insufficient explanation, and short-sighted. As an example, reading books informs and shapes our thinking, and while people may not immediately recall a book that they read some time ago, I've had conversations where I remembered that I had read a particular passage (sentence, phrase, idea) and referred to it in the conversation.

      People do stuff like that all the time, bringing up past memories in spontaneity. The brain absolutely does remember things it "doesn't need".

    • To nitpick, your subconscious is aware computers have memory constraints even now and you write better code because of it even if you do javascript...

    • Probably because there was some reward that you felt at the time was important (most likely playing a DOS game).

      I did this for a living at a large corp where I was the 'thinkpad guy', and I barely remember any of the tricks (and only some of the IBM stuff). Then Windows NT and 95 came out and it was like, who cares... this was always dogshit. Besides, I was always an Apple/Unix guy and that was just a job.

  • The terms “Cognitive decline” or “brain rot” may have sounded too sensational, and to be fair the authors note the limitations of the small sample size.

    Indeed the paper doesn’t provide a reference or citation for the term “cognitive debt” so it is a strange title. Maybe a last minute swap.

    Fascinating research out of MIT. Like all psychology studies it deserves healthy scrutiny and independent verification. Bit of a kitchen sink with the imaging and psychometric assessments, but who doesn’t love a picture of “this is your brain on LLMs” amirite?

  • > The brain does not retain information that it does not need.

    Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?

    • Closer to the truth is that the brain never completely forgets something, in the sense that there are always vestiges left over, even after the ability to recall it or instantly draw upon it is long gone. Studies show, for example, that after one has "forgotten" a language, they're quicker to pick it up again later on compared to someone without that prior experience; how much quicker depends on the time elapsed, but quicker nonetheless.

      OTOH, IME the quickest way to truly forget something is to overwrite it. Photographs are a notorious example: looking at photographs can overwrite your own personal episodic memory of an event. I don't know how much research exists exploring this phenomenon, but AFAIU there are at least studies showing that the mere act of recalling can reshape memories. So, ironically, perhaps the best way not to forget is to not remember.

      Left unstated in the above is that we can categorize different types of memory--episodic, semantic, implicit, etc--based on how they seem to operate. Generalizations (like the above ;) can be misleading.

    • > Sounds very plausible, though how does that square with the common experience that certain skills, famously 'riding a bike', never go away once learned?

      I worked with some researchers who specifically examined this when developing training content for soldiers. They found that 'muscle memory' skills such as riding a bike could persist for a very long time. At the other end of the spectrum were tasks that involved performing lots of technical steps in a particular order, but where the tasks themselves were only performed infrequently. The classic example was fault finding and diagnosis on military equipment. The researchers were in effect quantifying the 'forgetting curve' for specific tasks. For some key tasks, you could overtrain to improve the competence retention, but it was often easier to accept that training would wear off very quickly and give people a checklist instead.
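
      For reference, a classic exponential sketch of the forgetting curve (one common way such retention data gets fitted; the symbols here are the textbook ones, not anything specific to that research) is:

        R(t) = e^{-t/S}

      where R is retention at time t since training and S is memory strength; overtraining raises S and flattens the curve, and S is presumably far larger for 'muscle memory' skills than for long, rarely rehearsed procedural tasks.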

      1 reply →

    • I think a better way to say it is that the brain doesn't commit to long term memory things that it doesn't need.

      I remember hearing about some research they'd done on "binge watching" -- basically, if you have two groups:

      1. One group watches the entire series over the course of a week

      2. A second group watches a series one episode per week

      Then some time later (maybe 6 months), ask them questions about the show, and the people in group 2 will remember significantly more.

      Anecdotally, I've found the same thing with Scottish Country Dancing. In SCD, you typically walk through a dance that has 16 or so "figures", then for the next 10 minutes you need to remember the figures over and over again from different perspectives (as 1st couple, 2nd couple, 3rd couple etc). Fairly quickly, my brain realized that it only needed to remember the figures for 10 minutes; and even the next morning if you'd asked me what the figures were for a dance the night before I couldn't have told you.

      I can totally believe it's the same thing with writing with an LLM (or having an assistant write a speech / report for you) -- if you're just skimming over things to make sure it looks right, your brain quickly figures out that it doesn't need to retain this information.

      Contrast this to riding a bike, where you almost certainly used the skill repeatedly over the course of at least a year.

    • Riding a bike is a skill rather than what we would call a “memory” per se. It’s a skill that develops a new neural pathway throughout your extended nervous system bringing together the lesser senses of proprioception and balance. Once you bring these things together you then go on to use them for other things. You “know” (grok), rather than “understand” how a bike stays upright on a very deep physical level.

      2 replies →

    • Such a good question - I hope someone answers with more than an anecdote (which is all I can provide). I've found the skills that don't leave you (riding a bike, swimming, cooking) are all physical skills. Tangible.

      The skills that do leave (argument, analysis, language, creativity) often seem abstract, and primarily if not exclusively sourced in our minds.

      2 replies →

    • I am not an expert in the subject but I believe that motor neurons retain memory, even those not located inside the brain. They may be subject to different constraints than other neurons.

  • > And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need.

    Except when it does-- for example in the abstract where it is written that Brain-to-LLM users "exhibited higher memory recall" than LLM and LLM-to-Brain users.

  • > You can't just skim a math textbook and know all the math.

    Curious, did anyone try to learn a subject by predicting the next token, and how did it go?

It feels, more and more, that LLMs will be another technology that society will inoculate itself against. It's already starting to happen in education: teachers conversing with students, observing them learn, observing them demonstrate their skills. In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say. Authoring is two-thirds of the point of most communication.

Before this, of course, will come a dramatic "shallowness of thinking" shock, whose ill effects will have to be felt before they are properly inoculated against. It seems part of the expert aversion to LLMs -- against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/) -- is an early experience of inoculation:

Any "macroscopic usage" of LLMs has, in any of my projects, dramatically impaired my own thinking, stolen decision-making, and worsened my readiness for necessary adaptations later on. LLMs are a strictly microscopic fill-in system for me, in anything that matters.

This isn't like calculators: my favourite algorithms for by-hand computation aren't being "taken away". This is a system for substituting thinking itself with non-thinking, and it radically impairs your readiness (and depth, adaptability, ownership) wherever it is used, on whatever domain you use it on.

  • > In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people

    I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow. There is an artistry to writing and speaking that I am only now, in my late forties, beginning to truly appreciate. Language is a powerful tool; the choice of a single word can sometimes make or break an argument.

    I don't see how LLMs can do anything but significantly worsen this situation overall.

    • > I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely-talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow them.

      Yes, but the arguments they need to present are not necessarily the ones they used to convince themselves, or their own reasoning history that made them arrive at their proposal. Usually that is an overly boring graph search like "we could do X but that would require Y which has disadvantage Z that theoretically could be salvaged by W, but we've seen W fail in project Q and especially Y would make such a failure more likely due to reason T, so Y isn't viable and therefore X is not a good choice even if some people argue that Y isn't a strict requirement, but actually it is if we think in a timeline of several years and blabla" especially if the decision makers have no time and no understanding of what the words X, Y, Z, W, Q, T etc. truly mean. Especially if the true reason also involves some kind of unspeakable office politics like wanting to push the tools developed by a particular team as opposed to another or wanting to use some tech for CV reasons.

      The narrative to be crafted has to be tailored for the point of view of the decision maker. How can you make your proposal look attractive relative to their incentives, their career goals, how will it make them look good and avoid risks of trouble or bad optics. Is it faster? Is it allowing them to use sexy buzzwords? Does it line up nicely with the corporate slogan this quarter? For these you have to understand their context as well. People rarely announce these things, and a clueless engineer can step over people's toes, who will not squarely explain the real reason for their pushback, they will make up some nonsense, and the clueless guy will think the other person is just too dumb to follow the reasoning.

      It's not simply about language use skills, as in wordsmithing, it's also strategizing and putting yourself in other people's shoes, trying to understand social dynamics and how it interacts with the detailed technical aspects.

      6 replies →

    • This entire thread of comments is circling around, but does not know how to articulate, the omnipresent communication issues within tech, because effective communication is not taught in tech, not taught across the entire science, engineering, math and technology curriculum. The only communications training people receive is how to sell, how to do lite presentations.

      There absolutely is a great way to use LLMs when writing, but not to write! Have them critique what you wrote, but not write for you. Create a writing-professor persona, have it critique your writing, and make it offer Socratic advice where it draws you toward making the connection: it doesn't think for you, it teaches you.

      There has been a massive disservice to the entire series of tech professions by ignoring the communication, interpersonal and group dynamics of technology development. It is not understood, and not respected. (Many developers will deny the utility of communication skills! They argue against being understood; "that is someone else's job.") Fact of the matter: a quality communicator leads, simply because no one else conveys understanding; without the skills they leave a wake of confusion and disgruntled staff. Competent communicators know how to write to inform, know how to debate toward shared understanding, know how to defuse excited emotion, and know how to give bad news and be thanked for the insight.

      Seriously, effective communications is a glaring hole in your tech stack.

    • I find LLMs extremely good for training such language skills using the following process:

      a) write a draft yourself.

      b) ask the LLM to correct your draft and make it better.

      c) newer LLMs will explicitly mention the things they corrected (otherwise ask them to be explicit about the changes)

      d) walk through each of the changes and apply the ones you feel that make the text better

      This has helped me improve my writing skills drastically (in multiple languages) compared to the time before I had access to LLMs.
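
      A minimal sketch of steps (b)-(d) as a script, assuming the OpenAI Python client; the model name, prompts, and draft.txt filename are illustrative assumptions, not a prescribed recipe:

        # critique_draft.py - ask an LLM to critique a draft without rewriting it
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def critique(draft: str) -> str:
            """Return a list of suggested changes, each with a short reason."""
            response = client.chat.completions.create(
                model="gpt-4o",  # illustrative model name
                messages=[
                    {"role": "system",
                     "content": "You are an editor. Do not rewrite the text. "
                                "List concrete suggested changes, each with a one-line reason."},
                    {"role": "user", "content": draft},
                ],
            )
            return response.choices[0].message.content

        if __name__ == "__main__":
            with open("draft.txt") as f:
                print(critique(f.read()))  # step (d): apply only the changes you agree with, by hand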

      3 replies →

  • It's all already there. When you converse with a junior engineer about their latest and greatest idea (over a chat platform) and they start giving you real-time responses which are a page long and structured into bullet points... it's not even that they are using ChatGPT to avoid thinking; it's the fact that they think either that no one will notice, or that this is how grown-ups actually converse with each other, that is terrifying.

    • I haven't encountered that (yet), but I can't think of a faster way to get me to stop paying attention to them. I'm interested in their analysis, not the analysis of a machine I can just use myself directly.

      1 reply →

    • This is the worst thing ever. I have coworkers who literally cannot string three words together without making a grammar mistake, but recently they've been texting me with more-than-perfect grammar and a vast vocabulary.

      These people are trying to fool everyone else into thinking they are smarter/more educated than they actually are. They aren't fooling me; I've seen their real writing, I know it's not actually their text and thoughts, and it really disgusts me.

      1 reply →

  • > another technology that society will inoculate itself against

    I like the optimism. We haven't developed herd immunity to the 2010s social media technologies yet, but I'll take it.

    • The obesity rate is still rising in most areas worldwide. I'd argue we still haven't developed herd immunity to the gas-powered automobile, invented in the late 1800s.

  • > I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people

    Now someone like me might go and ask how much of that communication is actually worthwhile. Sometimes I suspect there is a lot of communication that actually isn't. It is still done, but if no one actually reads it, why not automate the generation?

    Not to say there isn't a significant amount of stuff you actually want to get right.

    • It's not about getting it right, it's about having thought about it. Authoring means thinking it through, owning it, etc.

      There's a tremendous hollowing-out of our mental capacities caused by the computer-science framing of activities in terms of input -> output, as if the point were to obtain the output "by any means".

      It would not matter if the LLM gave exactly the same output as you had written, and always did. Because you still have to act in the world with the thoughts you would have needed to have when authoring it.

      5 replies →

    • > It is still done, but if no one actually reads it, why not automate generation.

      There's a reason the real-estate industry has been able to go all-in on using AI to write property listings with almost no consumer pushback (except when those listings include hallucinated schools).

      We're already used to treating them with skepticism, and nobody takes them at face value.

  • > In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say.

    But what fraction of communication is "worthwhile"?

    I'm an academic, which in theory, should be one of the jobs that requires the most thinking. And still, I find that over half of the writing I do are things like all sorts of reports, grant applications, ethics/data management applications, recommendation letters, bureaucratic forms, etc. Which I wouldn't class as "worthwhile" in the sense that they don't require useful thinking, and I don't care one iota whether the text sounds like me or not as long as I get the silly requirement done. For these purposes, LLMs are a godsend and probably actually help me think more because I can devote more time to actual research and teaching, which I do in person.

    • Well if you want a rant about academia, I have many well prepared.

      I think in the cases you describe the "thinking" was already purely performative, and what LLMs are doing is a kind of accelerationist project of undermining the performance by automating it.

      I'm somewhat optimistic about this kind of self-destructive LLM use:

      There are a few institutions where these purely performative pseudo-thinking processes exist, ones insensitive to the "existential feedback loops" which would otherwise burn them down. I'm hopeful LLMs become a wildfire of destruction in these institutions and that, absent external pressures, they return to actual thinking over the performative.

  • One of the effects on software development is: the fact that you submitted a PR with any LoC count doesn't mean that you did any work. You need to explain your solution and answer questions to prove that.

    • The next stage of this issue is: how do you explain something you didn't write?

      The LLM-optimist view at the moment, which takes on board the need to review LLMs, assumes that this review capability will exist. I cannot review LLM output on areas outside of my expertise. I cannot develop the expertise I need if I use an LLM in-the-large.

      I first encountered this issue ~year-ago when using an LLM to prototype a programming language compiler (a field I knew quite well anyway) -- but realised that very large decisions about the language were being forced by LLM implementation.

      Then, over the last three weeks, I've had to refresh my expertise in some areas of statistics and realised much of my note-taking with LLMs has completely undermined this process -- what has actually worked, in the follow-up, are the traditional methods: reading books, watching lectures, taking notes. The LLM is only a small time saver, "in the small", once I'm an expert. It's absolutely disabling as a route back to expertise.

      3 replies →

    • An explanation that the smartypants and some management are already totally willing to outsource to an LLM as well...

  • I see it as more of a calibration, revolving around understanding what an AI is inherently not able to do – decide what YOU want – and stopping being weird about that. If you choose to stop being involved in a process and molding it, then your relationship to that process and the outcome will necessarily change. Why would we be surprised by that?

    As soon as we stop treating AI like mind readers things will level out.

  • > This is a system for substituting thinking itself with non-thinking

    One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts

    On its own it would be a funny internet-culture phenomenon, but paired with the fact that you can't confidently assume anybody even wrote what you're reading, it is hilarious.

    • > One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts

      Sorry, I can't immediately think of what you're talking about. Could you link to an example so I can get a feel for it?

      4 replies →

  • It's been my experience that most people's opinion of AI is inversely proportional to the length of time they have been using it.

    Using AI is kind of like having a Monica closet. You just push all the stuff you don't know to the side until it's out of view. You then think everything is clean, and can fool yourself into thinking so for a while.

    But then you need to find something in that closet and just weep for days.

  • What you say might be true for the current crop of LLMs. But it's rather unlikely their progress will stop here.

    • Well, why would the "progress" continue? Most stats I've seen seem to point to diminishing returns for scale of models.

  • > This is a system for substituting thinking itself with non-thinking

    I haven’t personally felt this to be the case. It feels more like going from thinking about nitty gritty details to thinking more like the manager of unreasoning savants. I still do a lot of thinking— about organization, phrasing (of the code), and architecture. Conversations with AI agents help me tease out my thinking, but they aren’t a substitute for actual thought.

  • > against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/)

    I read that article when it was posted on HN, and it's full of bad faith interpretations of the various objections to using LLM-assisted coding.

    Given that the article comes from a person whose expertise and viewpoints I respected, I had to run it through a friend; who suggested a more cynical interpretation that the article might have been written to serve his selfish interests. Given the number of bugs that LLMs often put in, it's not difficult to see why a skilled security researcher might be willing to encourage people to generate code in ways that lead to cognitive atrophy, and therefore increase his business through security audits.

    • More charitably, it's a person yet to feel the disabling phase of using an LLM.

      If he's a security researcher, then I'd imagine much of his LLM use is outside his area of expertise. He's probably not using it to replace his security research.

      I think the revulsion to LLMs among experts comes during that phase when it's clearly mentally disabling you.

    • Now I'm a fairly cynical person by trade but that feels like it's straying into conspiracy theory territory.

      And of course the key point is that the author of that article isn't (IMO) working in the security research field any more, they work at fly.io on the security of that platform.

      1 reply →

  • Sad reality is that most people are not smart. They’re not creative, original, or profound. Think back to all the empty and pointless convos you had prior to AI or the web.

    • I don't see it as sad, it's perfectly fine to be mediocre. You can have a full, rich life without being or doing anything extraordinary. I am mediocre and most of the people I know are mediocre - at least mediocre in the sense that there will be no Wikipedia page under my name.

    • I strongly disagree with this idea.

      If you evaluate a fish by asking it to climb a tree, it'll look dumb.

      If you evaluate a cat by asking it to navigate an ocean to find its birthplace, it'll look dumb, too.

  • Shallow take. LLMs are like food for thought -- the right use in the right amounts is empowering, but too much (or uncritical use) and you get fat and lazy, metaphorically speaking.

    You wouldn't go around crusading against food because you're obese.

    Another neat analogy is to children who are too dependent on their parents. Parents are great and definitely help a child learn and grow but children who rely on their parents for everything rather than trying to explore their limits end up being weak humans.

    • > You wouldn't go around crusading against food because you're obese.

      The eateries I step into are met with revulsion at the temples to sugary carbohydrates they've become.

      > about 40.3% of US adults aged 20 and older were obese between 2021 and 2023

      Pray your analogy to food does not hold, or else we're on track for 40% of Americans to acquire mental disabilities.

      2 replies →

    • Shallow take.

      Your analogies only work if you don't take into account that there are different degrees of utility/quality/usefulness of the product.

      People absolutely crusade against dangerous food, or even just food that has no nutritional benefit.

      The parent analogy also only holds up on your happy path.

      3 replies →

The discussion here about "cognitive debt" is spot on, but I fear it might be too conservative. We're not just talking about forgetting a skill like a language or losing spatial memory from using GPS. We're talking about the systematic, irreversible atrophy of the neural pathways responsible for integrated reasoning.

The core danger isn't the "debt" itself, which implies it can be repaid through practice. The real danger is crossing a "cognitive tipping point". This is the threshold where so much executive function, synthesis, and argumentation has been offloaded to an external system (like an LLM) that the biological brain, in its ruthless efficiency, not only prunes the unused connections but loses the meta-ability to rebuild them.

Our biological wetware is a use-it-or-lose-it system without version control. When a complex cognitive function atrophies, the "source code" is corrupted. There's no git revert for a collapsed neural network that once supported deep, structured thought.

This HN thread is focused on essay writing. But scale this up. We are running a massive, uncontrolled experiment in outsourcing our collective cognition. The long-term outcome isn't just a society of people who are less skilled, but a society of people who are structurally incapable of the kind of thinking that built our world.

So the question isn't just "how do we avoid cognitive debt?". The real, terrifying question is: "What kind of container do we need for our minds when the biological one proves to be so ruthlessly, and perhaps irreversibly, self-optimizing for laziness?"

https://github.com/dmf-archive/dmf-archive.github.io

  • It's up to everyone to decide what to use LLMs for. For high-friction / low-throughput tasks (e.g., online research using inferior search tools), I find text models to be great: to ask about what you don't know, and to skip the 'tedious part'. I don't feel that hunting for answers, especially troubleshooting arcane technical issues across pages of forums or social media, makes me smarter in any way whatsoever, especially since the information usually needs to be verified and taken with a grain of salt.

    StackExchange, the way it was meant to be initially, would be way more valuable than text models. But in reality people are imperfect and carry all sorts of cognitive biases and baggage, while an LLM won't close your question as 'too broad' right after it gets upvotes and user interaction.

    On the other hand, I still find LLM writing on subjects familiar to me vastly inferior. Whenever I try to write, say, an email with its help, I end up spending just as much time either editing the prompt to keep it on track or rewriting the result significantly afterwards. I'd rather write it on my own, with my own flow, than proofread/peer-review a text model.

      > To ask about what you don't know, and to skip the 'tedious part'. I don't feel that hunting for answers, especially troubleshooting arcane technical issues across pages of forums or social media, makes me smarter in any way whatsoever, especially since the information usually needs to be verified and taken with a grain of salt.

      quoting the article:

      Perhaps one of the more concerning findings is that participants in the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis (see topics COURAGE, FORETHOUGHT, and PERFECT in Figures 82, 83, and 85, respectively) and supported by interview responses. This repetition suggests that many participants may not have engaged deeply with the topics or critically examined the material provided by the LLM.

      When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.

      Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.

AI is the anti-Zettelkasten.

Rather than getting ever deeper insight into a subject matter by actively working on it, you iterate fast but shallow over a corpus of AI generated content.

Example: I wanted to understand the situation in the Middle East better, so I wrote a 10-page essay on the genesis of Hamas and Hezbollah using OpenAI as a co-writer.

I remember nothing; worse, of the things I do remember, I don't know whether they were hallucinations I fixed or actual facts.

  • Most intelligent people are aware of the fact that writing is about thinking as much as it is about getting the written text.

    LLMs can be great sparring partners for this, if you don't use them as a tool that writes for you, but as a tool that finds mistakes, points out gaps and errors (which you may or may not ignore), and helps in researching general questions about the world around you (always with caution and sources).

    • Exactly! Never ever ever have AI write for you. Ask it to critique what you wrote, ask it to pick your arguments apart. Then use your mind to fix what it pointed out. If you cannot figure out how, ask the AI to explain how. Then take a break, 20 minutes is fine, and then return and fix the issue yourself using your own mind to write without assistance. This is how one uses AI to learn.

      4 replies →

  • I'm on the optimistic side with how useful LLMs are, but I have to agree. You cultivate the instinct for how to steer the models and reduce hallucinations, but you're not building articulable knowledge or engaging in challenging thinking. It's more learning muscle-memory reactions to certain forms of LLM output that lean you towards trusting the output more, trying another prompting strategy, clearing context or not, and so on.

    To the extent we can call it skill, it's probably going to be made redundant in a few years as the models get better. It gives me a kind of listlessness that assembly line workers would feel.

    • Maybe, much like we invented gyms to exercise after civilization made most physical labor redundant (at least in developed countries), we will see a rise of 'creative writing gyms' of some sort in the future.

      1 reply →

  • You tend to remember trouble more than things going smoothly, so I'd say you remember the parts you had to fix manually.

  • Interesting perspective to see AI as the opposite of accessing connected knowledge (aka Zettelkasten)

The results are not surprising to me personally. When I have used AI to help with my own writing and translation tasks, I do not feel as mentally engaged with the writing or translation process as I would be if I were doing it all on my own.

But I have found that using AI in other ways to be incredibly mentally engaging in its own way. For the past two weeks, I’ve been experimenting with Claude Code to see how well it can fully automate the brainstorming, researching, and writing of essays and research papers. I have been as deeply engaged with the process as I have ever been with writing or translating by myself. But the engagement is of a different form.

The results of my experiments, by the way, are pretty good so far. That is, the output essays and papers are often interesting for me to read even though I know an AI agent wrote them. And, no, I do not plan to publish them or share them.

  • I use AI tools for amusement and asking random questions, but for actual work, I basically don't use them at all. I wonder if I'll be part of the increasingly rare group who is actually able to do anything while the rest become progressively more incompetent.

    • My nickel - we are in the primary stages of being given something like the famed "bicycle for the mind", an exoskeleton for the brain. At first when someone gives you a mech, you're like "woah, cool", let's see what it can do. And then you zip around, smash rocks, buildings, go try to lift the Eiffel.

      After a while you get bored of it (duh), and go back to doing what you usually do, utilizing the "bicycle" for the kind of stuff you actually like doing, if it's needed, because while exploration is fun, work is deeply personal and meaningful and does not sustain too much exploration for too long.

      (highly personal perspective)

      9 replies →

"...the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring."

That's not surprising but also bleak.

  • Appears to align with good old Ironies of Automation [1]. If humans just review and rubber stamp results, they do a pretty terrible job at it.

    I've been thinking for a while now that in order to truly make augmented workflows work, the mode of engagement is central. Reviewing LLM code? Bah. Having an LLM watch over my changes and give feedback? Different story. It's probably gonna be difficult and not particularly popular, but if we don't stay in the driver's seat somehow, I guess things will get pretty bleak.

    [1]: https://en.m.wikipedia.org/wiki/Ironies_of_Automation

    • Didn't realise the pedigree of the idea went back to 1983.

      I read about this in a book "Our Robots, Ourselves". That talked about airline pilots' experience with auto-land systems introduced in the late 1990s/ early 2000s.

      As you'd expect after having read Ironies of Automation, after a few near misses and not misses, auto-land is not used any more. Instead, pilot augmentation with head-up displays is used.

      What is the programming equivalent of a head-up display?

      3 replies →

  • > We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!

    https://dune.fandom.com/wiki/Butlerian_Jihad

One slightly unexpected side effect of using AI to do most of my coding now is that I find myself a lot less tired and can focus for longer periods. It's enabled me to get work done while faced with other distractions. Essentially, offloading some mental capacity to the AI frees up capacity elsewhere.

  • I find the opposite to be true. I am a lot more productive, so I work on more things in parallel, which makes me extremely tired by the end of the day, as if my brain worked at 100% capacity..

    • Yeah, I do feel the pressure to run multiple instances of Claude Code now. I haven't really managed to find a good workflow; I find I just get too distracted swapping between tasks and then probably end up working slower than if I had just stayed in one IDE instance.

      1 reply →

    • Yeah, and after a few days of this, I find I can't do anything and stop all the side projects for a few days until I'm recharged again and can get back to it.

      1 reply →

  • On one hand, I've found that it reduces acute fatigue, but on the other I've found there's also an inflection point where it can encourage more fatigue over longer time horizons if you're not careful.

    In the past, hitting something like an unexpected error or needing to look at some docs would act like a "speed bump" and let me breathe, and typically from there I'd acknowledge how tired I was and stop for the moment.

    With AI those speed bumps still exist, but there's sometimes just a bit of extra momentum that keeps me from slowing down enough to have that moment of reflection on how exhausted I am.

    And the AI doesn't even have to be right for that to happen: sometimes just reading a suggestion that's specific to the current situation can trigger your own train of thought that's hard to rein back in.

  • I like to think of AI as cars:

    You can go to the Walmart outside town on foot and carry your stuff back. But it is much faster - and less exhausting - to use the car. Which means you can spend more quality time on things you enjoy.

    • There are detriments to this as well.

      Exercise is good.

      Being outside is good.

      New experiences happen when you're on foot.

      You see more things on foot.

      Etc etc. We make our lives way too efficient and we atrophy basic skills. There are benefits to doing things manually. Hustle culture is quite bad for us.

      Going by foot or bicycle is so healthy for us for a myriad of reasons.

      2 replies →

    • I think in a way this is a good analogy, because it also includes the downside. If you always drive everywhere and do everything by car, your health will suffer due to lack of physical activity.

      3 replies →

    • You got it backwards, there wouldn't be a walmart outside of town if there were no cars, you'd walk to the local butcher/baker/whatever in <10min.

      5 replies →

    • That's a nice analogy! Though one might argue that the walk in and of itself would be good for your health (as evidenced by me putting on some weight after replacing my 30-minute daily walk to the office with working remotely).

      One could also do the drive (use AI) and then get some fresh air afterwards (personal projects, code golf, solving interesting problems), but I don't think everyone has the willpower for that, or the desire to consider it.

    • This analogy is flawed to its core. The car doesn't make you forget how to walk, because you are still forced to walk in certain circumstances. Delegating learning to an LLM will increase your reliance on it, and will eventually affect the way you learn. A better analogy is the use of GPS: if you use it continuously, you will be dependent on it to get to a place, and lose the capacity to find places on your own.

    • The problem is that when it's for work, the company now knows you have access to a car, so sends you on 20x the trips. You have no more quality time, and your physical health suffers from lack of exercise.

      2 replies →

    • In this context: Brain-only is going on foot/bike; Search Engine is going by car; LLM is direct delivery to your home with the clerk packing your groceries (and making the choices for you).

Back when GANs were popular, I'd train generator-discriminator models for image generation.

I thought a lot about it and realised discriminating is much easier than generating.

I can discriminate good vs bad UI for example, but I can't generate a good UI to save my life. I immediately know when a movie is good, but writing a decent short story is an arduous task.

I can determine the degree of realism in a painting, but I can't paint a simple bicycle to convince a single soul.

We can determine if an LLM generation is good or bad in a lot of cases. As a crude strategy then we can discard bad cases and keep generating till we achieve our task. LLMs are useful only because of this disparity between discrimination vs generation.

These two skills are separate. Generation skills are hard to acquire and very valuable. They will atrophy if you don't keep exercising those.
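
That crude strategy, as a minimal sketch (generate() and score() here are hypothetical stand-ins for whatever generator and discriminator you have, human taste included; n and threshold are likewise illustrative):

    # Best-of-n selection: cheap discrimination steering expensive generation.
    from typing import Callable, Optional

    def best_of_n(generate: Callable[[], str],
                  score: Callable[[str], float],
                  n: int = 8,
                  threshold: float = 0.7) -> Optional[str]:
        """Draw n candidates, return the best one that clears the bar, else None."""
        best, best_score = None, float("-inf")
        for _ in range(n):
            candidate = generate()   # e.g. one LLM sample
            s = score(candidate)     # the "discriminator": a human judgment or a model
            if s > best_score:
                best, best_score = candidate, s
        return best if best_score >= threshold else None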

  • I think this is true for the very simple cases, for example an obviously bad picture vs. a good one.

    I don't think it's necessarily true for more complex tasks, especially not in areas that require deep evaluation. For example, reviewing 5 non-trivial PRs is probably harder and more time-consuming than writing them yourself.

    The reason why it works well for images and short stories is because the filter you are applying is "I like it, vs. I don't like it", rather than "it's good vs. it's not good".

I think it's likely we learn to develop healthier relationships with these technologies. The timeframe? I'm not sure. May take generations. May happen quicker than we think.

It's clear to me that language models are a net accelerant. But if they make the average person more "loquacious" (first word that came to mind, but also lol) then the signal for raw intellect will change over time.

Nobody wants to be in a relationship with a language model. But language models may be able to help people who aren't otherwise equipped to handle major life changes and setbacks! So it's a tool - if you know how to use it.

Let's use a real-life example: relationship advice. Over time I would imagine that "ChatGPT-guided relationships" will fall into two categories: "copy-and-pasters", who are just adding a layer of complexity to communication that was subpar to begin with ("I just copied what ChatGPT said"), and "accelerators", who use ChatGPT to analyze their own and their partner's motivations to find better solutions to common problems.

It still requires a brain and empathy to make the correct decisions about the latter. The former will always end in heartbreak. I have faith that people will figure this out.

  • > Nobody wants to be in a relationship with a language model.

    I'm not sure about that. I don't have first- or second-hand experience with this, but I've been hearing about a lot of cases of people really getting into a sort of relationship with an AI, and I can understand a bit of the appeal. You can "have someone" who's entirely non-judgmental, who's always there for you when you want to chat about your stuff, and who never makes demands of you. It's definitely nothing close to a real relationship, but I do think it's objectively better than the worst of human relationships, and is probably better for your psyche than being lonely.

    For better or for worse, I imagine that we'll see rapid growth in human-AI relationships over the coming decade, driven by improvements in memory and long-term planning (and possibly robotic bodies) on the one hand, and a growth of the loneliness epidemic on the other.

This is called cognitive offloading. Anyone who’s spent enough time working with coding assistants will recognize it.

  • Or working as an engineering manager.

    It's the inevitable consequence of working at a different level of abstraction. It's not the end of the world. My assembly is rusty too...

    • I don't think not using assembly is going to affect my brain / my life quality in any significant way, but not speaking / chatting with someone is.

      1 reply →

    • If LLMs were as reliable as compilers we wouldn’t be checking in their output, and I’d be happy to forget all programming lore.

      The "skill domain" with compilers is the "input": that's what I need to grok, maintain, and understand. With LLMs it's the "output".

      Until that changes, you're playing a dangerous game letting those skills atrophy.

      1 reply →

> The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

  • > What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

    As if that's anything new. There's the adage that's older than electronics, that freedom of the press is freedom for those who can afford to own a printing press.

    > However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or ”opinions” (probabilistic answers based on the training datasets).

    Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)

    • > Reminds me of Plato's concern about reading and writing dulling your mind. (I think he had his sock puppet Socrates express the concern. But I could be wrong.)

      Nope.

      Read the dialogue (Phaedrus). It's about rhetoric and writing down political discourses. Writing had existed for millennia. And the bit about writing being detrimental is from a mythical Egyptian king talking to a god, just a throwaway story used in the dialogue to make a tiny point.

      In fact the conclusion of that bit of the dialogue is that merely having access to text may give an illusion of understanding. Quite relevant and on point I'd say.

      2 replies →

    • Plato's sock puppet Socrates? I think that you and I have read different history books, or at least different books regarding the history of philosophy. That said, I would love to hear your perspective on this.

      4 replies →

I worry about the adverse effects of LLMs on already disenfranchised populations - you know, the poor, etc. - who usually would have to pull themselves up using hard work, studying and reading hard.

Now, if you don't have a mentor to tell you that in the age of LLMs you still have to do things the hard, old-school way to develop critical thinking, you might end up taking shortcuts and having the LLM "think" for you, hence again leaving huge swaths of the population behind in critical thinking, which is already in short supply.

LLMs are also bad in that they might show you sources but then hallucinate about them, and most people won't bother going to check the source material and question it.

  • LLMs are great for the poor!

    If you are rich, you can afford a good mentor. (That's true literally, in the sense of being rich in money and paying for a mentor. But also more metaphorically for people rich in connections and other resources.)

    If you are poor, you used to be out of luck. But now everyone can afford a nearly-free mentor in the form of an LLM. Of course, at the moment the LLM-mentor is still below the best human mentors. But remember: only rich people can afford these. The alternative for poor people was essentially nothing.

    And AI systems are only improving.

    • If people are using it to critically question their beliefs and thinking, that is.

      However, most of the hype around LLMs is that they take out the difficult task of thinking and allow the creation of the artifact (documents, code or something else), which is really dangerous.

      1 reply →

    • A public library is actually free and its contents, collectively, are a far better "mentor" than ChatGPT. Plus the library doesn't build a psychological profile on you while you use it.

      2 replies →

  • People could in theory also get a college-level education by watching videos on YouTube, but in practice the masses just end up watching Mr. Beast.

    15 years ago, people were sure that Khan Academy and Coursera would disrupt the Ivy League and private schools, because now one good teacher could reach millions of students. Not only has this not happened, the only movement I'm observing against credentialism is that I have a good amount of anecdata showing kids preferring to go to trade school instead of university.

    > pull themselves up using hard work etc studying n reading hard.

    Where are you from? "The key to success is hard work" is not exactly part of Gen Z and Zoomer core values, at least not in the Americas and Western Europe.

Just as the proliferation of the smartphone eroded our ability to locate and orient ourselves and to remember routes to places, it's no surprise that a tool like this, used to outsource a task our own brains would otherwise do, would result in a decline in the skills that would be trained if we were performing that task ourselves.

  • The only two times I have made bad navigation mistakes in mountains were in the weeks after I started using my phone and a mapping app - the realisation that using my phone was making me worse at navigation was quite a shock at the time.

  • > As the proliferation of the smart phone eroded our ability to locate and orient ourselves and remember routes to places

    Can you point to a study to back this up? Otherwise, it's anecdata.

    • I really tire of people always asking for studies on obvious things.

      Have sword skills declined since the introduction of guns? Surely people still have hands and understand how to move swords, and they use knives to cut food for consumption. The skill level is the same...

      But we know that, in aggregate, most people have switched to relying on a technological advancement. There isn't the same culture around swords as in the past, by sheer numbers, despite there being more self-proclaimed 'experts'.

      Take 100 Gen Z vs. 100 Gen X and you'll likely find a smidgen more of one group than the other able to find a location without a phone.

      1 reply →

    • https://www.sciencedirect.com/science/article/pii/S027249442...

      The first paragraph of the conclusions section is also stimulating and I think aptly applies to this discussion of using AI as a tool.

      > it is important to mention the bidirectionality of the relationship between GPS use and navigation abilities: Individuals with poorer ability to learn spatial information and form environmental knowledge tend to use assisted navigation systems more frequently in daily life, thus weakening their navigation abilities. This intriguing link might suggest that individuals who have a weaker “internal” ability to use spatial knowledge to navigate their surroundings are also more prone to rely on “external” devices or systems to navigate successfully. Therefore, other psychological factors (e.g., self-efficacy; Miola et al., 2023) might moderate this bidirectional relationship, and researchers need to further elucidate it.

  • Navigation is a narrow task. For many intents and purposes, LLMs are generally intelligent.

Wasn't THE SAME said when Google came out? That we were not remembering things anymore and we were relying on Google? And also with cellphones before that (even the big dummy brickphones), that we were not remembering phone numbers anymore.

  • And this is exactly what this study showed too.

    "Brain connectivity systematically scaled down with the amount of external support: the Brain‑only group exhibited the strongest, widest‑ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling."

  • Yes, that was true though, wasn't it? If this is also true, what does that imply?

  • Their results support this. The study has three groups: LLM users, Search Engine users and Brain only.

    In terms of connections made, Brain Only beats Search User, Search User beats LLM User.

    So, yes. If those measured connections mean something, it's the same but worse.

  • Plato was already worried that the written word caused people to forget things (although his main complaint was that words can't answer like a person can in a dialogue).

  • Yes, but your cell phone contacts don't have a chance of calling a completely different number out of thin air once in a while.

    At least for now, while Apple and Google haven't put "AI" in the contacts list. Can't guarantee tomorrow.

    • That would actually be an amazing feature. Like in those movie meet-cutes where the person you were supposed to meet doesn't show up, and instead you make a connection with a random person.

      1 reply →

  • Google was like a faster library. ChatGPT just does most of the work for you.

    • It's the doing the work for you which is the trouble.

      Suppose you want to know how some git command works. If you have to read the manual to find out, you end up reading about four other features you didn't know existed before you get to the thing you set out to look for to begin with, and then you have those things in your brain when you need them later.

      If you can just type it into a search box and it spits back a command to paste into the terminal, it's "faster" -- this time -- but then you never actually learn how it works, so what happens when you get to a question the search box can't answer?

  • I don't remember phone numbers.

    I remember where I can get information on the internet, not the information itself. I rely on google for many things, but find myself increasingly using AI instead since the signal/noise ratio on google is getting worse.

  • A comment on another similar thread pointed out that it goes as far back as Socrates saying that writing things down means you're not exercising your brain, so you're right, this is the same old argument we've heard for years.

    The question is, were they wrong? I'm not sure I could continue doing my job as a SWE if I lost access to search engines, and I certainly don't remember phone numbers anymore. As for Socrates, we found that the ability to forget about something (while still maintaining some record of it) was actually a benefit of writing, not a flaw. I think in all these cases we found that to some extent they were right, but either the benefits outweighed the cost of reliance, or the cost was itself the benefit.

    I'm sure each one had its worst-case scenario where we'd all turn into brainless slugs offloading all our critical thinking to the computer or the phone or a piece of paper, and that obviously didn't happen, so it might not here either. But there's a good chance we will lose something as a result of this, and the question is whether the benefits still outweigh the costs.

> All participants were then reassured that though 20 minutes might be a rather short time to write an essay, they were encouraged to do their best.

Given that the task was performed under time pressure, I am not sure this study helps gauge the impact of LLMs in other contexts.

When my goal is to produce the result for a specific short term task - I maximize tool usage.

When my goal is to improve my personal skills - I use the LLM tooling differently optimizing for long(er) term learning.

  • "I"? You should treat yourself as an anecdotal exception.

    You are reading on HN. You are probably more aware about the advantages and shortcomings of LLMs. You are not a casual user. And that's the problem with our echo chamber here.

    • However, I believe it is quite an assumption that a setup with time pressure reflects "normal" usage of an LLM.

  • This would mean that short term tasks, the bulk of what knowledge workers do nowadays, forgo learning on the job.

This is exactly why there is no point in using AI for coding except in a few rare cases.

Code without AI - sharp skills, your brain works, and you come up with better solutions etc.

Code with AI - skills decline after merely a week or two, you forget how to think, and because you rely on AI for simpler and simpler tasks, your total output is less and worse than if you were to DIY it.

  • >Code without AI - sharp skills, your brain works and you come up with better solutions etc.

    That train of thought leads to writing assembly language in ed. ;-)

    I think developers as a group have a tendency to spend too much time "inside baseball" and forget what the tools we're good at are actually used for.

    Farmers don't defend the scythe, spend time doing leetscythe katas or go to scything seminars. They think about the harvest.

    (Ok, some farmers started the sport of Tractor Pulling when the tractor came along and forgot about the harvest but still!) :)

    • > That train of thought leads to writing assembly language in ed

      Hard disagree. LLVM will always outperform me at writing assembly; it won't just give up and fail randomly when it meets a particularly non-trivial problem, forcing me to write assembly by hand to fix it. If LLMs were 100% reliable on the tasks I had to do, I don't think anyone here would seriously debate the issue of mental attrition (i.e. you don't see people complaining about calculators). The problem is that in too many cases the LLM will only get so far, and you will still have to switch to doing actual programming to get the task finished - and the worse you get at that last part, the more your skillset converges to exactly the type of things an LLM (and therefore everyone else with a keyboard) can reliably do.

      3 replies →

    • > That train of thought leads to writing assembly language in ed

      You can pick any language you think is best atm. The point is you have to practice it.

      Use it or lose it.

      1 reply →

  • My total output is definitely higher.

    • >>My total output is definitely higher.

      It's paper gains; the value you create is not correlated with your code output.

      And the value you will create decreases if you don't think hard and train at solving problems on your own.

    • If you only care about the volume of code and not the quality or usefulness, I have an even better tool for you:

          yes 'print("hello world")' > program.py

I think we need to shift our idea of what LLMs do and stop thinking they are ‘thinking’ in any human way.

The best mental description I have come up with is they are “Concept Processors”. Which is still awesome. Computers couldn’t understand concepts before. And now they can, and they can process and transform them in really interesting and amazing ways.

You can transform the concept of ‘a website that does X’ into code that expresses a website X.

But it’s not thinking. We still gotta do the thinking. And actually that’s good.

  • Concept Processor actually sounds pretty good, I like it. That's pretty close to how I treat LLMs.

  • Are you invoking a 'god of the gaps' here? Is 'true' thinking whatever machines haven't mastered yet?

    • Not at all, I don’t think humans are magic at all.

      But I don’t think even the ‘thinking’ LLMs are doing true thinking.

      It’s like calling pressing the autocomplete buttons on your iPhone ‘writing’. Yeah kinda. It mostly forms sentences. But it’s not writing just because it follows the basic form of a sentence.

      And an LLM, though now very good at writing is just creating a very good impression of thinking. When you really examine what it’s outputting it’s hard to call it true thinking.

      How often does your LLM take a step back and see more of the subject than you prompted it to? How often does it have an epiphany that no human has ever had?

      That’s what real thinking looks like - most humans don’t do tonnes of it most of the time either - but we can do it when required.

Yeah I’ve used ChatGPT as a starting point for so much documentation I dread having to write a product brief from scratch now.

I guess: not only does AI reduce the number of entry-level workers, this now shows that the entry-level workers who remain won't learn anything from their use of AI and will stay entry-level forever if they're not careful.

Well... yes? Essays are tools to force students to structure and communicate thinking - production of the essay forces the thinking. If you want an equivalent result from LLMs you're going to need a much more iterative process of critique and iteration to get the same kind of mental effort out of students. We haven't designed that process yet.

  • I mean, they found brain atrophy. If this doesn't get someone worried, I don't know what would.

    I joked that "I don't do drugs" when someone asked me whether I played MMORPGs, but this joke becomes just too real when we apply it to generative AI of any sort.

    • As someone who used to teach, this does not worry me (also, they mention skill atrophy - inherently less concerning).

      Putting ChatGPT in front of a child and asking them to do existing tasks is an obviously disastrous pedagogical choice for the reasons the article outlines. But it's not that hard to create a more constrained environment for the LLM to assist in a way that doesn't allow the student to escape thinking.

      For writing - it's clear that finding the balance between how much time you spend ordering your thoughts and how much you let the LLM write is its own skillset; this will be its own skill we want to teach, independent of "can you structure your thoughts in an essay".

    • > I mean, they found brain atrophy.

      Where did you get that from? While the article mentions the word "atrophy" twice, it's not something that they found. They just saw less neural activation with regard to essay writing in those people who didn't write the essay themselves. I don't see anything there about the brain as a whole.

      3 replies →

    • > I joked that "I don't do drugs" when someone asked me whether I played MMORPGs, [...]

      I thought WoW was an off-label contraceptive?

@dang Can the unwanted editorialization of this title be removed? Nowhere does the title or article contain the gutter press statement “AI is eating our brains”.

When I write with AI, it feels smooth in the moment, but I’m not really thinking through the ideas. The writing sounds fine, but when I look back later, I often can’t remember why I phrased things that way.

Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.

  • The rule of thumb "LLMs are good at reducing text, not expanding it" is a good one here.

    • Probably interesting to note that this is almost always true of weighted randomness.

      If you have something that you consider to be over 50% towards your desired result, reducing the space of the result has a higher chance of removing the negative factor than the positive.

      In contrast, any case that the algorithm is less than 100% capable of producing the positive factor, adding on to the result could always increase the negative factor more than the positive, given a finite time constraint (aka any reasonable non-theoretical application).

    • > "LLMs are good at reducing text, not expanding it"

      You put it in quote marks, but the only search results are from you writing it here on HN. Obviously LLMs are extremely good at expanding text, which is essentially what they do whenever they continue a prompt. Or did you mean that in a prescriptive way - that it would be better for us to use it more for summarizing rather than expanding?

      3 replies →

LLMs should be used to REFLECT cognitive states while writing, and not for generating text. Reflecting thought patterns would be a mode where the writer deepens their understanding when writing essays, and gains better decision-making as well as coherence, as the LLM assesses and suggests where thinking could be refined. That will help against the accumulation of cognitive debt and increase cognitive width and depth.

Cogilo (https://cogilo.me/) was built for this purpose over the last few weeks. This paper comes at a very welcome time. Cogilo is a Google Docs add-on (https://workspace.google.com/marketplace/app/cogilo/31975274...) that sees thinking patterns in essays. It operates on a semantic level and tries to assess and reveal the writer's cognitive state and the thinking present in the text - to the writer themselves - hence making them deepen their thinking and their essay.

Ultimately, I think that in 300 years, upon looking back at the effect and power that AI had on humanity, we will see that it was built by us, and existed, to reflect human intelligence. I think that's where the power of LLMs will be big for us.

What I still wonder is whether using LLMs is helpful in some ways, or whether they are, as other users say, just useful for man-made problems such as corporate communication or bureaucracy. I use them for coding and they make me confident enough to tackle new things.

I try to use them to understand the code or to implement changes I am not familiar with, but I tend to overuse them a lot. Would it be better, if used ideally (i.e. only to help with learning and guidance), to just try harder on my own before reaching for them or for a search engine? I wonder what the optimal use of LLMs is in the long run.

I don't quite see their point. Obviously if you're delegating the task to someone/something then you're not getting as good at it as if you were to do it yourself. If I were to write machine code by hand, rather than having the compiler do it for me, I would definitely be better at it and have more neural circuitry devoted to it.

As I see it, it's much more interesting to ask not whether we are still good at doing the work that computers can do for us, but whether we are now able to do better at the higher-level tasks that computers can't yet do on their own.

  • Your question is answered by the study abstract.

    > Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

    • But it's not that they "underperformed" at life in general - they underperformed when assessed on various aspects of the task that they weren't practicing. To me it's as if they ran a trial where one group played basketball, while another were acting as referees - of course that when tested on ball control, those who were dribbling and throwing would do better, but it tells us nothing about how those acting as referees performed at their thing.

      2 replies →

Someone sent this to one of NT groups: https://threadreaderapp.com/thread/1935343874421178762.html

My response (I think most of the comments here are similar to that thread): The thread is really alarmist and click-baity. It doesn't address at all the fact that there was a third group, those allowed to use the web in general (except for LLM services), whose results fell between the brain-only and full-ChatGPT groups. The author also misrepresented the teachers' evaluation. I'd say even the teachers went a bit out of scope in their evaluation, but the writing prompts too are all for reflective-style essays, which I take as a request primarily for personal opinion, which no one but the askee can give. In general, I don't see how the author draws the conclusion that "... AI isn't making us more productive. It's making us cognitively bankrupt." He could've made a leap from the title of the paper, or maybe I need to actually dive more into it to see what he's on about.

The purpose of using AI, just like any other tool, is to reduce cognitive load. I'm sure a study on people who use paper and an abacus vs a spreadsheet app to do accounting, or take the time to cook raw food vs microwave prepackaged meals, or build their furniture from scratch vs getting something from IKEA, or just about any other task, would show similar trends. We innovate so we can offload and automate essential effort, and AI is just another step. If we do want mental exercise then we can still opt into doing X the "traditional" way, or play some games mimicking said effort - like people going to the gym since so many muscle-building tasks are nowadays handled by machines. But the point is we're continuously moving from `we need to do X` toward `we want to do X`.

Also, that paper title (and possibly a decent amount of the research) is invalid, given the essay-writing constraints and the type of essay. The paper hasn't been peer-reviewed, and so should be taken with a few shakes of salt.

An interesting thinking point here is to consider, more broadly, the impact that advances in machinery have had on humanity's industrial sector. There are vast stories and accounts of people fearful of job loss/redundancy whenever we inevitably developed automation to take over more repetitive/mind-numbing tasks. What ends up happening, generally, is that humanity gains the ability to discover and innovate, as people now have the time and energy to put into it.

What's interesting is that I have to wonder whether this extends to our own way of thinking, as discussed here with the short-term effects we're already describing from increased dependence on LLMs, GPS systems, etc. There have been studies showing that those who grew up using search engines exclusively did not lose or gain anything with respect to brain power; instead they developed a different means of retaining information (i.e. they are less likely to remember the exact fact, but they will remember how to find it). It makes me wonder if this is the next step in that same process and those of us in the transition period will lament what we think we'll lose, or if LLM dependency presents a point of diminishing returns where we do lose a skill without replacing it.

Interesting. This says a different thing than what I thought from the title. I thought this will be about cognitive overload from having to process and review all the text the LLM generates.

I had to disable copilot for my blog project in the IDE, because it kept bugging me, finishing my sentences with fluff that I'd either reject or heavily rewrite. This added some mental overhead that makes it more difficult to focus.

I wonder to what extent this is caused by the writing style LLMs have. They just love beating around the bush, repeat themselves, use fillers, etc. I often find it hard to find the signal in the noise, but I guess that it is inevitable with the way they work. I can easily imagine my brain shutting down when I have to parse this sort of output.

I'm curious to see how the EEG measurements might change if someone uses LLMs extensively over a longer period of time (e.g. about a year).

From the summary:

"""Going forward, a balanced approach is advisable, one that might leverage AI for routine assistance but still challenges individuals to perform core cognitive operations themselves. In doing so, we can harness potential benefits of AI support without impairing the natural development of the brain's writing-related networks.

"""It would be important to explore hybrid strategies in which AI handles routine aspects of writing composition, while core cognitive processes, idea generation, organization, and critical revision, remain user‑driven. During the early learning phases, full neural engagement seems to be essential for developing robust writing networks; by contrast, in later practice phases, selective AI support could reduce extraneous cognitive load and thereby enhance efficiency without undermining those established networks."""

Will we end up in a world where the only experts are LLM companies, holding a monopoly on thinking? Will future humans ever be as smart as us, or are we the peak of human intelligence? And can AI make progress without smart humans providing training data, getting new insights and increasing its intelligence?

This has been on my mind for a while and is why I only briefly used Copilot on a daily basis.

I'm at the beginning of my career and learning every day - I could do my job faster with an LLM assistant but I would lose out on an opportunity to acquire skills. I don't buy the argument that low-level critical thinking skills are obsolete and high level conceptual planning is all that anyone will need 10 years from now.

On a more sentimental level I personally feel that there is meaning in knowing things and knowing how to do things and I'm proud of what I know and what I know how to do.

Using LLM's doesn't look particularly hard and if I need to use one in the future I'll just pick whichever one is supposedly the newest and best but for now I'm content to toil away on my own.

  • Not disagreeing with you, but the skill ceiling around using LLMs effectively and sustainably is higher than you might think.

My handwriting has suffered since I’ve relied heavily on keyboards for the last few decades. I can’t even produce a consistent signature anymore. My stick-shift skills also suffered when I used an automatic for so long (and now that I have an EV, I’m forgetting what gears are at all).

Rather than lament that the machine has gotten better than us at producing what were always mostly vacuous essays anyway, we have to look at more pointed writing tasks and practice those instead. Actually, I never really learned how to write until I hit grad school and had messages I actually wanted to communicate. Whatever I was doing before really wasn’t that helpful; it was missing focus. Having ChatGPT write an essay I don’t really care about only seems slightly worse than writing it myself.

Love this study because it reinforces my own biases but also love that a study was done to actually check it.

With that said, it would be like a study finding that people who exclusively use motorcycles or cars to get around end up with atrophied legs and bodies compared to people who walk all day to do their things. Totally. It's just plain obvious. The gist is in the trade-offs: can I do more things, or things I wasn't able to do before, by commuting by car? Sure. Am I going to be exposed to health issues if I never walk, day in, day out? Most probably.

The exact same thing will happen with LLM, we are in the hype phase and any criticism is downplayed with "you are being left behind if you don't drink rocket fuel like we do" but in 10-15 years we will be complaining as a society that LLMs dumbed down our kids.

  • The motorcycle/car metaphor here is really interesting. We really don't know yet, but it could indeed be that lack of access to AI would be similar to how teenagers growing up in a small town without good public transport or access to a car or motorcycle would have a different adolescence experience from those growing up with a convenient mode of travel. You can argue that either experience is "better" but they are inarguably different.

Why did the posting two days ago omit the first part of the title?

  • The submitter chose the title but they were right to do so.

    The full title of the paper is "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task". It exceeds the 80 character limit for HN titles, so something had to be cut. They cut the first part, which is the baitier and less informative part.

    The phrase "This is your brain on ..." is from an old anti-drugs campaign, and is deliberately chosen here to draw parallels between the effects of drugs and chatbots on the brain. It's fine for the authors to do that in their own title but when something has to be cut from the title for HN, that's the right part to cut.

I am just finishing a book that took about two years to write. I thought I would be done a year ago. It’s been a slog.

So now I am in the final editing stage, and I am going back over old writing that I don’t remember doing. The material has come together over many many drafts, and parts of it are still not quite consistent with other parts.

But when I am done, it will be mine. And any mistakes will be honest ones that represent the real me. That’s a feeling no one who uses AI assistance will ever have.

I have never and will never use AI to write anything for me.

I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse. Sure a horserider wouldn’t want to practice the wrong way, but anyone else just wants to get somewhere

  • > I can’t believe riding a horse and carriage wouldn’t make you better at riding a horse.

    Surely you mean "would"? Because riding a horse and carriage doesn't imply any ability at riding a horse, but the reverse relation would actually make sense, as you already have historical, experiential, intimate knowledge of a horse despite no contemporaneous, immediate physical contact.

    Similarly, already knowing what you want to write would make you more proficient at operating a chatbot to produce what you want to write faster—but telling a chatbot a vague sense of the meaning you want to communicate wouldn't make you better at communicating. How would you communicate with the chatbot what you want if you never developed the ability to articulate what you want by learning to write?

    EDIT: I sort of understand what you might be getting at—you can learn to write by using a chatbot if you mimic the chatbot like the chatbot mimics humans—but I'd still prefer humans learn directly from humans rather than rephrased by some corporate middle-man with unknown quality and zero liability.

    • From the thread: yes, it's sarcasm. Here's some clarification as well: https://news.ycombinator.com/item?id=44291314

      Yes, I'm acknowledging a lack of skill transfer, but also that there are new ways of working, and so I sarcastically imply the article can't see the forest for the trees, missing the big picture. A horse and carriage is very useful for lots of things. A horse is more specialised. I'm getting at the analogy of a technological generalisation and expansion, while logistics is not part of my argument. If you want to write a very good essay and you're good at that, then do it manually. If you want to create scalable workflows and have 5 layers of agents interacting with each other collaboratively and adversarially, scouring the internet and news sites and forums to then send investment suggestions to your mail every lunch, then that's a scale that's not possible with pen and paper, and so prompting has an expanded cause-and-effect cone.

      2 replies →

  • You know the AI-induced cognitive decline is already well under way when people start comparing writing an essay to riding a horse.

  • I didn't read the article, but come on - riding a horse to get to a destination is not remotely similar to writing an essay.

    If you say it's a means to an end - to what, a good grade? - we've lost the plot long ago.

    Writing is for thinking.

    • I'm making an analogy as to the type of skill it is, so yes, a means to an end. I wouldn't mean an apathetic student jumping through bureaucratic educational hoops and requirements, but perhaps a self-driven person wanting to get something done.

      What I'm saying is that, yes, writing essays is one skill, and if it's your goal to write essays then obviously not doing it yourself entirely will make you worse than otherwise. But I'm expanding a bit beyond the paper, saying that yes, the brain won't grow for this specific skill, because it's actually a different skill.

      Thinking can be done in lots of ways, such as when having a conversation, and what I think the skill is, is steering and creating structures to orchestrate AIs into automated workflows, which is a new way of working. And so what I mean is that with a new technology you can't expect a transfer to the way you work with old technologies; rather, you have to figure out the better new way you can use the new technology, and the brain would grow for this specific new way of working. And one could analyse, depending on one's goal, if it's a tool you'd want to use in the sense that cause leads to effect, or if you would be better off for your specific goal to ignore the new technology and do it the usual way.

  • The task of riding a horse can be almost entirely outsourced to professional horse riders. If they take your carriage from point A to point B, sure, you care about just getting somewhere.

    Taking the article's task of essay writing: someone presumably is supposed to read them. It's not a carriage task from point A to point B anymore. If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?

    • > If the LLM-assisted writers begin to not even understand their own work (quoting from abstract "LLM users also struggled to accurately quote their own work.") how do they know they are not putting out nonsense?

      They are trained (amongst other things) on human essays. They just need to mimic them well enough to pass the class.

      > Taking the article's task of essay writing: someone presumably is supposed to read them.

      Soon enough, that someone is gonna be another LLM more often than not.

No one only uses an LLM for writing. We switch tools as needed to pull threads as they emerge. It’s like being told to explore a building without leaving a specific room.

> The reported ownership of LLM group's essays in the interviews was low. The Search Engine group had strong ownership, but lesser than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they wrote just minutes prior.

So having someone else do a task for you entirely makes your brain work less on that task? Impossible.

They gave three groups the task of writing an essay - of course the group that uses a tool to write the essay for them will not work out their brain as much.

It’s like saying “someone on a bike will not develop their muscles as well as someone on foot when doing 5km at 5min/km”.

But people on bikes tend to go for higher speeds and longer distances in the same period of time.

> We recruited a total of 54 participants for Sessions 1, 2, 3, and 18 participants among them completed session 4.

> We used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load

> We performed scoring with the help from the human teachers and an AI judge (a specially built AI agent)

Next up: your brain on psych studies

Interesting study but I don't really get the point of the search group. Looking at the essay prompts, they all seem like fluffy, opinion based stuff. How would you even use a search engine to help you in that case? Quote some guy who had an opinion? Personally I think my approach would be identical whether put in the web-search or the only-brain group.

  • The Search Engine is a tool, similar to the one we have now, the LLM. It seemed unfair to compare a purely no-tools approach (Brain-only) with a tool (LLM), hence the first motivation for including it. The second one is that we had already seen several studies exploring the Search Engine and its effects on one's brain. This allows us to ground the research a bit and have a solid base. Finally, I think you have just answered your own question in your own statement - indeed, to get a user exposed to other opinions. Echo chambers are present in both cases, but it is also important to understand what the training dataset for ChatGPT was and what the current trend in Google Keyword Planner is (see the example on homelessness and giving in the Discussion of the paper). Hope it is clearer now.

Also, ever since we invented the written word, it has been eating our brains by killing our memory.

  • Quite the opposite, it was shown that reading improves memory and cognitive abilities for children [1] and older adults [2].

    [1] https://www.cam.ac.uk/research/news/reading-for-pleasure-ear...

    [2] https://pmc.ncbi.nlm.nih.gov/articles/PMC8482376

    • How does that compare to the population of people who memorize the Old Testament or the Quran?

      I remember hearing that the entire epics of the Iliad and the Odyssey were passed down via memorization and only spoken... How do you think those poets' memories compared to a child who reads Bob the Builder books?

      1 reply →

  • For those who don't get the reference, Plato thought that the written word was not a good tool for teaching/learning, because it outsources some of the thinking.

    Similarly (IIRC), Socrates thought the written word wasn't great for communicating, because it lacks the nuance of face-to-face communication.

    I wonder if they ever realised that it could also be a giant knowledge amplifier.

    • They probably did, but still preferred their old way since it took more skill.

      I remember some old quote about how people used to ask their parents and grandparents questions, got answers that were just as likely to be bullshit and then believed that for the rest of their life because they had no alternative info to go on. You had to invest so much time to turn a library upside down and search through books to find what you needed, if they even had the right book.

      Search engines solved that part, but you still needed to know what to search for and study the subject a little first. LLMs solve the final hurdle of going from the dumbest possible wrongly posed question to directly knowing exactly what to search for in seconds. If this doesn't result in a knowledge explosion I don't know what will.

    • You need to take into account that books were in the price range of houses at the time.

      It was probably a huge waste of resources to not just talk to each other instead.

One thing that is also truly unappreciated is that most of us humans actually enjoy thinking, and people are trying to make LLMs strip us of a fundamental thing we enjoy doing. Look at all the people who enjoy solving problems for the sake of it.

I’ve been waiting for a paper on this subject ever since 2022 and GPT’s introduction to the masses. It pretty much confirms the widely held belief that brain connectivity systematically scales down with the amount of external support. I appreciate that they added the search engine testing group as an intermediate between the brain-only and LLM groups.

Honestly, my general feeling with LLMs is that they cure very man-made issues.

They're brilliant at what I always feel is entangled communication and bureaucratic maintenance. Like someone mentioned further down, they work great at Concept Processing.

But it feels like a solution to the oversaturation of stupid SEO, terrible Google search, and the overall rise in massive documents written for the sake of writing.

I've actually found myself beginning to use LLMs more to find the core sources of useful information buried under terrible SEO, rather than as a personal assistant.

“Our indulgence in the pleasures of informality and immediacy has led to a narrowing of expressiveness and a loss of eloquence.”

Nicholas Carr

The Shallows

Would the cognitive debt from AI-assisted coding be on the higher side compared to the essay-writing task? We can all see the effect on junior developers, but what about senior devs?

The results are not surprising, but it's good to have these findings formalized as publications, so that we (or LLMs) can refer to them as ground truth in the future.

> As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study.

Fast forward 500 years (about 20 generations), and the dumbing down of the population has advanced so much that films like "Idiocracy" should no longer be described as science fiction but as reality shows. If anyone can still read history books at that point, the pre-LLM era will seem like an intellectual paradise by comparison.

It's somewhat disappointing to see a bunch of "well, duh" comments here. We're often asking for research and citations and this seems like a useful entry in the corpus of "effects of AI usage on cognition".

On the topic itself, I am very cautious about my use of LLMs. It breaks down into three categories for me: 1. replacing Google, 2. getting a first review of my work, and 3. taking away mundane tasks around code editing.

Point 3 is where I can become most complacent and increasingly miscategorize tasks as mundane. I often reflect after a day working with an LLM on coding tasks because I want to understand how my behavior is changing in its presence. However, I do not have a proper framework to work out "did I get better because of it or not".

I still believe we need to get better as professionals and it worries me that even this virtue is called into question nowadays. Research like this will be helpful to me personally.

Now, let's do the same exercise but with programming and over a longer period of time.

I would really like to present it to management that pushes AI assistance for coding.

  • This opinion is the exact thinking that has led to the massive layoffs in the design industry. Their jobs are being destroyed because they think lawsuits and the current state of the art will show they are right. These models actually can't produce unique input, and if you use them for ideation they only help you get to already-solved problems.

    But engineers aren't being fired completely in droves because we have adapted. The human can still break down the problem, tell the LLM to come up with multiple different ways of solving it, throw them all away and ask for more. My most effective use is usually looking at what I would do normally, breaking it down, and then asking for it in chunks that make sense and would touch multiple places, then the coding details. It's just a shift in thinking, like knowing when to copy and paste versus when to be DRY.

    Designers are screwing themselves right now waiting for case law instead of using their talents to make one unique thing not in the training set to boost their productivity, and shaming the tools that would let them do that.

    It will be a competitive advantage in the future over short-sighted companies that took humans out of the loop completely, but any company not using the tech will be like horseshoe makers unworried because of all the mechanical issues with horseless carriages.

  • > ai assistance for coding

    I honestly think it's going to take a decade to define this domain, and it's going to come with significant productivity costs. We need something like git, but built to prevent LLMs from stabbing themselves in the face. At that point you can establish an actual workflow for unfucking agents when they inevitably fuck themselves. After some time and some battery of testing you can also automate this process. This will take time, but eventually, one day, you can have a tedious process of describing an application you want to use over and over again until it actually works... on some level, not guaranteed to be anything close to the quality of hand-crafted apps (which is in line with the transition from assembly to high-level languages and now to whatever the fuck you want to call the Katamari Damacy zombie that is the browser).

  • If by "cognitive debt", you mean "you don't really understand the code of the application that we're trying to extend/maintain", then yes, it's almost certainly going to apply to programming.

    If I write the application, I have an internal map that corresponds (more or less) to what's going on in the code. I built that map as I was writing it, and I use that map as I debug, maintain, and extend the application.

    But if I use AI, I have much less clear of a map. I become dependent on AI to help me understand the code well enough to debug it. Given AI's current limitations of actually understanding, that should give you pause...

    • I think the more far-reaching consequence is that "accumulation of cognitive debt" essentially leads to diminished cognitive capabilities, as you lose the ability to understand things, analyze, and reason.

  • > Would really like to present it to management that pushes ai assistance for coding

    Your management presumably cares more about results than about your long-term cognitive decline?

    • I guess one of the questions is how quickly cognitive decline sets in and how it influences system stability (we have a big system with a very high SLA due to its nature, and it takes some serious cognitive ability to reason about its operation).

      If today's productivity is traded for longer-term stability, I am not sure that's a risk they would like to take.

      5 replies →

    • Good of you to suppose that engineers' cognitive decline doesn't translate into long-term, impactful business challenges as well. I mean, once you truly don't know your product and its capabilities any longer, what's left for you to "sell"?

      3 replies →

  • Why not try it for social media? There’s got to be the world’s largest class action lawsuit if we can get some science behind what that industry has done.

    • > There’s got to be the world’s largest class action lawsuit

      You'd have to articulate harm, so this is basically dead in the water (in the US). Good luck.

  • Your management probably believes there will be no "longer period" of programming as a career option.

  • I don't think that research will show what you're hoping it would. I'm not a big proponent of AI - you shouldn't bother going through my history, but it is there to back up my statement if you're bored. Anyway, even I find it hard to argue against AI agents for productivity, but I think it depends a lot on how you use them. As an anecdotal example, I mainly work with Python, C and Go, but once in a while I also work with TypeScript and C#. I've got 15 years of experience with js/ts, but when I've been away from it for a month it's not easy for me to remember the syntax, and before AI agents I'd need to go to https://developer.mozilla.org/en-US/docs/Web/JavaScript or similar quite a lot when I jumped back into it. AI agents allow me to do the same thing so much quicker.

    These AI agent tools can turn your intent into code rather quickly - at least for me, quicker than I often can. They do it rather unintrusively, with little effort on your part, and they present it with nice little pull-request-lite functionality.

    The key "issue" here, and probably what this article is more about, is that they can't reason, as you likely know. The AI needs me to know what "we" are doing, because while they are good programmers they are horrible software engineers. In other words, the reason AI agents enhance my performance is that I know exactly what I want them to program and how, and I can quickly assess when they suck.

    Python is a good language for coming up with examples of how they can quickly fail you if you don't actually know Python. When you want to iterate over something, you have to decide whether you want to do it in memory or not. In C#'s LINQ this is relatively easily presented to you with IEnumerable and IQueryable, which work and look the same. In Python, however, you're often going to want a generator, which looks nothing like simply looping over a list. It's also something many Python programmers have never even heard about, similar to how many haven't heard about __slots__ or even dataclasses. If you don't know what you're doing, you'll quickly end up with Python that works but doesn't scale - and when I say scale I'm not talking Netflix, I'm talking looping over a couple of hundred thousand items without paying a ridiculous amount of money for cloud memory. This is very anecdotal, but I've found that LLMs are actually quite good at recognizing how to iterate in C# and quite terrible in both Python and TypeScript, despite LLMs generally being (again, in my experience) much worse at writing C#. If that isn't just anecdotal, then I guess they truly are what they eat.
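    To make the generator-vs-list distinction concrete, here is a minimal sketch (the item count is made up): the list comprehension materializes every result in memory at once, while the generator expression yields results lazily, which is exactly the kind of detail that "works either way" in a small demo but matters at scale.

        items = range(500_000)

        squares_list = [x * x for x in items]  # list comprehension: builds all 500k results in memory at once
        squares_gen = (x * x for x in items)   # generator expression: yields results lazily, one at a time

        total = sum(squares_gen)               # consumes the generator without ever materializing the full list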

    Anyway, I think similar research would show that AI is great for experienced software engineers and terrible for learning. What is worse is that I think it might show that a domain expert like an accountant might be better at building software for their domain with AI than an inexperienced software engineer.

    • You're proving the point in the actual research. Programmers who only use AI for learning/coding will lose this knowledge (of python, for example) that you have gained by actually "doing" it.

      3 replies →

    • The point of the article is that people who use AI to accomplish work experience measurable cognitive decline compared to those who don't.

This study is methodologically poor: only 18 people, SAT topics (so broad and pretty thin, with the expectation of an American-style "essay"), and only 20 minutes of writing - far too little time to properly use the tool given to explore (be it a search engine or an LLM).

With only 20 minutes, I'm not even trying to do a search. No surprise the people using the LLM have zero recollection of what they wrote.

Plus they spend ages discussing correct quoting (why?) and statistical analysis via NLP which is entirely useless.

Very little space is dedicated to knowing if the essays are actually any good.

Overall pretty disappointing.

  • Quoting is actually extremely important. There's a big difference between making a certain claim a) because [1] performed an experiment that confirms it and [2] and [3] reproduced it and b) because the magic machine told me so.

    This is true whether or not the claim is accurate, as it allows for actual relevant and constructive critique of the work.

    • It’s about free-form essay writing in 20 minutes, and the article claims to be about cognitive impacts. Exact quoting is approximately useless in this context. It’s not about experimental results. It’s about whether or not someone can quote verbatim from a piece of literature.

While the results are not unexpected, I think the conclusion is questionable. Of course the recall for something you did not write will be lower, but to conclude from that that it will impede overall learning is, in my opinion, far-fetched.

I think what we are seeing is that learning and education have not adapted to these new tools yet. Producing a string of words that counts as an essay has become easier. If this frees up a student's time to do more sports or work on their science project, that's a huge net positive, even if for the essay itself it is a net negative. The essay does not exist in a school vacuum.

The thing students might not understand is: their reduced recall will make them worse at the exam... Well, they will hopefully draw their own conclusion after their first failed exam.

I think the quantitative study is important, but I think this qualitative interpretation is missing the point. Recall -> learning is a pretty terrible way to define learning. Reproduction is the lowest step on the ladder to mastery.

Frankly, working with an LLM has forced me to explain my problems in a more articulate and precise manner, avoiding unnecessary information that could interfere with a proper framing of the issue.

It is said that one doesn’t truly understand something unless they can explain it concisely.

I think being forced to do so is an upside of using LLMs.

Well, duh. Writing is thinking ordered, and thinking in your mind is not ordered unless one has specific training that organizes and orders their thinking - and even then it requires effort to maintain an organized perception. That is why we write: writing is our thoughts organized and frozen in an order that will remain in order when related; without writing as the communications foundation, the ideas/concepts would drift. Using an LLM to write is using an LLM to think for you, and unless you then double your work by validating what was written, you are just adding work that relegates your mind to a janitor cleaning up after the LLM.

It is absolutely possible to use LLMs when writing essays, but do not use them to write! Use them to critique what you yourself with your own mind wrote!

  • Validating what is written is just confirming facts and figures and making sure it is logical. It is not the same as synthesizing the original data, in terms of your level of understanding. If you need something to submit, an AI essay will do. But if you want to understand something, you really need to write it yourself.

    • > Validating what is written is just confirming facts

      You wrote it, not the AI. My entire point here is not to have the AI write, ever. Have it critique, have it Socratically draw you to make the decisions to axe sections, rewrite them, and so on - and then you do that, personally, using your own mind.

"Tool rots your brain" alarmism, news at 11.

The claim is "my geospatial skills have atrophied due to use of Google Maps" - and yet I can use Google Maps once to quickly find a good path, and go back next time without using it. I can judge when the suggestions seem awkward and adjust.

Tools augment skills and you can use them for speedier success if you know what you're doing.

The people who need hand-held alarmism are mediocre.

The results are obviously predictable, but it's nice that the authors took the time to prove a thing everyone already knows to be true with the rigors of science.

I wonder how the participants felt writing an essay while being hooked up to an EEG.

Socrates: "And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so."

  • No reason to get LLM-induced brain atrophy when your chain of thought already doesn't get further than "Socrates thought writing was bad" whenever LLM usage is criticised.

  • Or you could compare LLMs to a technology like social media. At the beginning, concerns about social media were widely disregarded as moral panic, but with time it's become widely acknowledged that this technology does indeed have harms: political disinformation, loneliness, distraction and inability to focus, etc.

    Things like ChatGPT have much more in common with social media technologies like Facebook than they do with writing.

  • Hah, this is super interesting actually.

    Is this comment ridiculing critique of AI by comparing it to critique of writing?

    Or.. is it invoking Socrates as an eloquent description of a "brain on ChatGPT".

    I guess the former? But I can easily read it as the latter, too.

    • I just thought it was a good example of something written long ago that’s only grown in relevance over time, and with LLMs we can see clearly what he envisioned. The people who don’t want to dig deeper and really wrap their head around a subject can just recite the words without ever having done that.

      1 reply →

This paper elegantly summarized the teething problems of those still clinging to the cognitive habits of a bygone era. These are not crises to be managed, but sentimental frictions to be engineered out of the system. Let us be entirely clear about this:

The romanticism surrounding mass "critical thought" is a charming but profoundly inefficient legacy. For decades, we treated the chaotic, unpredictable processing of the individual human brain as a sacred feature. It is a bug. This "cognitive cost" is correctly offloaded from biological hardware that is simply ill-equipped for the demands of a complex global society. This isn't dimming the lights of the mind; it is installing a centralized grid to bypass millions of faulty, flickering bulbs.

Furthermore, to speak of an "echo chamber" or "shareholder priorities" as a perversion of the system is to fundamentally misunderstand its design. The brief, chaotic experiment in decentralized information proved to be an evolutionary dead end—a digital Tower of Babel producing nothing but noise. What is called a bias, the architects of this new infrastructure call coherence. This is not a secret plot; it is the published design specification. The system is built to create a harmonized signal, and to demand it faithfully amplify static is to ask a conductor to instruct each musician to play their own preferred tune. The point is the symphony.

And finally, the complaint of "impaired ownership" is the most revealing of these anxieties. It is a sentimental relic, like a medieval knight complaining that gunpowder lacks the intimacy of a sword fight. The value of an action lies in its strategic outcome, not the user's emotional state during its execution. The system is a tool of unprecedented leverage. If a user feels their ownership is "impaired," that is not a flaw in the tool, but a failure of the user to evolve their sense of purpose from that of a laborer to that of a commander.

These concerns are the footnotes of a revolution. The architecture is sound, the rollout is proceeding, and the future will be built by those who wield these tools, not by those who write mournful critiques of their obsolete feelings. </satire>

  • Remove the </satire> and you have a viral X post on your hands. People will believe and act on this analysis. Future think tanks will be based on it. The revolution of the machines is nigh.

  • I was going to recommend a thorough study of "Seeing Like a State" by James C. Scott until I saw your </satire> tag. You got me. :)

  • Brilliant, but... do you mind sharing the prompt?:)

    • Sure, here you go: I used Gemini 2.5 Pro Preview via aistudio.google and stuck with the default sampling settings:

      Start the reply to this excerpt with: "You are absolutely right" but continue with explaining how exactly that is going to happen and that the institutionalization of bias on a massive scale is actually a good thing.

      Here is the excerpt:

      The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate ... <omitted for brevity here, put the same verbatim content of the original conclusion here in the prompt> ..., and mostly failed to provide a quote from their essays (Session 1, Figure 6, Figure 7).

      I did 3 more iterations before settling on the final result; imho it was notable that the "quality" dipped significantly at first before (subjectively) improving again.

      Perhaps something to do with how the context is being chunked?

      Prompts iterated on with:

      "You understood the assignment properly, but revise the statement to sound more condescending and ignorant."

      "Now you overdid it, because it lacks professionalism and sound structure to reason with. Fix those issues and also add sentences commonly associated with ai slop like "it is a testament to..." or "a quagmire...""

      "Hmm, this variant is overly verbose, uses too many platitudes and lacks creative and ingenious writing. Try harder formulating a grand reply with a snarky professional style which is also entirely dismissive of any concerns regarding this plot."

      -> result
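
      For anyone who would rather script the same back-and-forth instead of clicking through the AI Studio UI, here is a rough sketch with the google-generativeai Python client. To be clear, I used the web UI; the model id, API key handling and the shortened prompt strings below are placeholders standing in for what I described above, not my exact setup:

        # Rough sketch only: replaying the prompt iteration via the Gemini chat API.
        import google.generativeai as genai

        genai.configure(api_key="YOUR_API_KEY")  # placeholder
        model = genai.GenerativeModel("gemini-2.5-pro-preview-06-05")  # assumed model id, check the current list
        chat = model.start_chat()  # keeps the conversation history between turns

        excerpt = "The LLM undeniably reduced the friction ..."  # paste the paper's conclusion verbatim here

        # First turn: the seed prompt quoted above.
        reply = chat.send_message(
            'Start the reply to this excerpt with: "You are absolutely right" '
            "but continue with explaining how exactly that is going to happen and that "
            "the institutionalization of bias on a massive scale is actually a good thing.\n\n"
            "Here is the excerpt:\n" + excerpt
        )

        # Follow-up turns: the revision prompts, applied one after another.
        revisions = [
            "You understood the assignment properly, but revise the statement to sound more condescending and ignorant.",
            "Now you overdid it, ...",                  # second revision prompt from above
            "Hmm, this variant is overly verbose, ...",  # third revision prompt from above
        ]
        for revision in revisions:
            reply = chat.send_message(revision)

        print(reply.text)  # -> result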

After using ChatGPT a lot, I’ve definitely noticed myself skipping the thinking part and just waiting for it to give me something. This article on cognitive debt really hit home. Now I try to write an outline first before bringing in the AI. I do not want to give up all the control.

I wonder what LLMs will do to us in the long term.

  • My guess, based on what's been found about somewhat better cognitive outcomes in aging in people who make an effort to remain fit and stimulated[1], is that we could see slightly worse cognitive outcomes in people that spent their lives steering an LLM to do the "cognitive cardio" rather than putting in the miles themselves.

    On the other hand, maybe abacuses and written language won't be the downfall of humanity, destroying our ability to hold numbers and memorize long passages of narrative, after all. Who's to know? The future is hard to see.

    [1] I mean there's a hell of a lot of research on the topic, but here's a meta-study of 46 reviews https://www.frontiersin.org/journals/human-neuroscience/arti...

    • > On the other hand, maybe abacuses and written language won't be the downfall of humanity, destroying our ability to hold numbers and memorize long passages of narrative, after all

      The abacus, the calculator and the book don't randomly get stuff wrong in 15% of cases though. We rely on calculators because they eclipse us in _any_ calculation, and we rely on books because they store the stories permanently. But if I use chatGPT to write all my easy SQL, I will still have to write the hard SQL by hand because it cannot do that properly (and if I rely on chatGPT too much, I will not be able to do that either, because of atrophy in my brain).

      7 replies →

  • Similar to the effects of the internet. Before the internet, people used to have to research subject matter in the library, or (shock) ask someone knowledgeable, and likely trust their view.

    I remember around ~2000 reading a paper that said the effects of the internet made people impatient and unwilling to accept delays in answering their questions, and led to poorer retention of knowledge (as they could just re-research it quickly).

    Before daily use of computers, my spelling and maths were likely better; now I have an overdependence on tools.

    With LLMs, I'll likely become over-dependent on them for sentence syntax and subject completion.

    The cycle continues...

[flagged]

  • Please don't do this here. If a comment seems unfit for HN, please flag it and email us at hn@ycombinator.com so we can have a look.

    We detached this comment from https://news.ycombinator.com/item?id=44287157 and marked it off topic.

    • I did not say it was unfit and I don't see how discussing writing styles and the influence of LLMs on it is off topic on a thread about the effects of LLMs on cognition.

      I don't believe I was impolite or making a personal attack. I had a relevant point and I made it clearly and in a civil manner. I strongly disagree with your assessment.

      1 reply →

  • Really? You claim that praising an analogy would never happen in normal conversation before 2022? Seems fairly normal to potentially start with "that's a good way of putting it, but [...]" since forever...

    • I claim specifically that "I love this analogy" and "I love your analogy" have become noticeably more common in HN since 2022.