What is happening to writing? Cognitive debt, Claude Code, the space around AI

(resobscura.substack.com)

I won't ever put my name on something written by an LLM, and I will blacklist any site or person I see doing it. If I want to read LLM output I can prompt it myself; subjecting me to it while passing it off as your own is disrespectful.

As the author says, there will certainly be a number of people who decide to play with LLM games or whatever, and content farms will get even more generic while having fewer writing errors, but I don't think that the age of communicating thought, person to person, through text is "over".

  • It's easy to output LLM junk, but I and my colleagues are doing a lot of incredible work that simply isn't possible without LLMs involved. I'm not talking about a 10-turn chat to whip out some junk. I'm talking about deep research and thinking with Opus to develop ideas. Chats where you've pressure-tested every angle, backed it up with data pulled in from a dozen different places, and intentionally guided it towards an outcome. Opus can take these wildly complex ideas and distill them down into tangible, organized artifacts. It can tune all of that writing to your audience, so they read it in terms they're familiar with.

    Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.

    Our customers don't care how we communicate internally. They don't care if we waste a bunch of our time rewriting perfectly suitable AI content. They care that we move quickly on solving their problems - AI lets us do that.

    • > Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.

      I find it difficult to skim AI writing. It's persuasive even when there's minimal data. It'll infer or connect things that flow nicely but simply don't make sense.

    • To build what, though? I’m truly curious. You talk about researching and developing ideas — what are you doing with it?

  • I assume that if someone used an LLM to write for them, they must not be comfortably familiar with their subject. Writing about something you know well tends to come easily and usually is enjoyable. Why would you use an LLM for that, and how could you be okay with its output?

    • Writing a first draft may come easy, but there's more to the process than that. An LLM can go from outline to "article" in one step. I can't.

      I don't write often, so revising and rewriting is very slow for me. I'm not confident in my writing and it looks clunky to my eye.

      I see the appeal, though I want to keep developing my own skills.

    • > I assume that if someone used an LLM to write for them, they must not be comfortably familiar with their subject.

      This statement assumes that the writer is a native speaker of the language in which they write.

  • Some people might be better at prompting an LLM than you.

    Just like when you go to a restaurant to have a chef cook for you, even though you can cook yourself.

    • Most restaurants, by volume, these days churn out ultra-processed, mass-marketed slop.

      It’s true there is the occasional Michelin-starred place or an amazing local farm-to-table place. There is also the occasional excellent use of LLMs. Most LLM output I have to read, though, is straight-up spam.

Axios got traction because it heavily condensed news into more scannable content for the Twitter, Insta, and TikTok crowd.

So AI is this on massive steroids. It is unsettling, but there seems to be a recurring need to point out that, across the board, many of the "it's because of AI" things were already happening. "Post truth" is one I'm most interested in.

AI condenses it all on a surreal and unsettling timeline. But humans are still humans.

And to me, that means that I will continue to seek out and pay for good writing like The Atlantic's. btw, I've enjoyed listening to articles via their auto-generated NOA AI voice thing.

Additionally, not all writing serves the same purpose. The article makes these sweeping claims about "all of writing". Gets clicks, I guess, but to the point: most of why and what people read serves some immediate, functional need. Like work, like some way to make money, indirectly. Some hack. Some fast-forwarding to "the point". No wonder AI is taking over that job.

And then there's creative expression and connection. And yes, I know AI is taking over all the creative industries too. What I'm saying is that we've always been separating "the masses" from those who "appreciate real art".

Same story.

  • > Additionally, not all writing serves the same purpose.

    I think this is a really important point, and to add on: there is a lot of writing that is really good, but only in a way that a niche audience can appreciate. Today's AI can basically compete with the low-quality stuff that makes up most of social media; it can't really compete with higher-quality stuff targeted at a general audience, and it's still nowhere close to some more niche classics.

    An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick Google search shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument, way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.

    I kind of think... there is still something fundamental that would get in the way, but that it is still totally achievable to overcome it some day? I don't think it's impossible for an AI to be creative in a humanlike way; current models just don't seem optimized for it, because they are completely optimized for the analytical mode of reading and writing, not the creative/immersive one.

    • > An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick Google search shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument, way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.

      I am sure it could, but then what is the point? Consider this: let's assume that someone did manage to use an LLM to produce a very well-written novel. Would you rather have the novel that the LLM generated (the output), or the prompts and process that led to that novel?

      The moment I know how it's made, the exact prompts and process, I can then have an infinite number of said great novels in 1000 different variations. To me this makes the output way, way less valuable compared to the input. If great novels are cheap to produce, they are no longer novel; they become the norm, expectations rise, and we will be looking for something new.

    • > Today's AI can basically compete with the low-quality stuff that makes up most of social media; it can't really compete with higher-quality stuff

      But compete in what sense? It already wins on volume alone, because LLM writing is much cheaper than human writing. If you search for an explanation of a concept in science, engineering, philosophy, or art, the first result is an AI summary, probably followed by five AI-generated pages that crowded out the source material.

      If you get your news on HN, a significant proportion of stories that make it to the top are LLM-generated. If you open a newspaper... a lot of them are using LLMs too. LLM-generated books are ubiquitous on Amazon. So what kind of competition / victory are we talking about? The satisfaction of writing better for an audience of none?

  • > "Post truth" is one I'm most interested in.

    I have this theory that the post-truth era began with the invention of the printing press and gained iteratively more traction with each revolution in information technology.

    • Doesn't matter when post-truth started because it's now over, and it's more accurate to characterize this era as "post-rationality". Most people do seem to understand this, but we are in different stages of grief about it.

    • I think you're right, but I also think it's worthwhile to look at Edward Bernays in the early 1900s and his specific influence on how companies and governments to this day deliberately shape public opinion in their favor. There's an argument that his work and the work of his contemporaries was a critical point in the flooding of the collective consciousness with what we would consider propaganda, misinformation, or covert advertising.

  • Same. The New Yorker is the other mag I subscribe to.

    Until 3 weeks ago I had a high-cortisol morning read: NYT, WSJ, Axios, Politico. I went on a weeklong camping trip with no phone and haven't logged into those yet. It's fine.

    • People think I'm nuts when I tell them I ditched subscriptions for those sites and only check them maybe once a week, if that.

      But what you said is 100% true, it's fine. When things in your life provide net negative value it's in your best interest to ditch them.

    • I agree with this in general, but with caveats. For example, I think reading national-scale news every day sucks. But if you're of a specific demographic, it might be useful to keep pretty up to date on nuanced issues. If you're a gun owner, you'll probably want to stay current on gun licensing in your area. Or if you're a trans person, it's pretty important nowadays to be very aware of laws being passed dictating which bathroom you can legally use.

    • Being able to check out is something of a privilege. Some folks have to know when masked men are surging into the neighborhood because they don't pass as white, speak Spanish, and don't want to be assaulted. Being a citizen and carrying your original birth certificate may not be enough.

"Is Claude Code junk food, though? ... although I have barely written a line of code on my own, the cognitive work of learning the architecture — developing a new epistemological framework for “how developers think” — feels real."

Might this also apply to learning about writing? If I have barely written a line of prose on my own, but spent a year generating a large corpus of it aided by these fabulous machines, might I also come to understand "how writers think"?

I love the later description of writing as a "special, irreplaceable form of thinking forged from solitary perception and [enormous amounts of] labor", where “style isn’t something you apply later; it’s embedded in your perception" (according to Amis). Could such a statement ever apply to something as crass as software development?

  • My current bugbear is how art is held up as creativity, worthy of societal protection, with scorn directed at AI muscling in on it.

    While the same people in the same comments say it’s fine to replace programming with it.

    When pressed they talk about creativity, as if software development has none…

    • I haven't heard writers take any kind of stance on software engineering, but Brandon Sanderson has very publicly renounced AI writing because it lacks the authentic journey of an author's own writing. Just as we would cringe at our first software projects, he cringes at his first published novel.

      I think that's a reasonable argument to make against generative art in any form.

      However, he does celebrate LLM advancements in health and accessibility, and I've seen most "AI haters" handwave away its use there. It's a weird dissonance to me too that its use is perfectly okay if it helps your grandparents live a longer, higher-quality life, but not okay if your grandparents use that longer life to write, with AI assistance, a novel that Brandon would want to read.

    • Art has two facets. First is whether you like it. If you do, you don't need to care where it came from. Second is art as curated and defined by the artistic elites. They don't care if art is liked or likable; they care about the pedigree, i.e. where it came from, and that it fits what they consider worthy art. Between these two is what I call filler art: stuff that's rather indifferent and not very notable, but that often crosses some minimum bar such that it's accepted by, and maybe popular among, average people who aren't that seriously interested in art.

      In the first category, AI is no problem. If you enjoy what you see or hear, it doesn't make a difference whether it was created by an artist or an AI. In the second category, for the elite, AI art is no less unacceptable than current popular art or, for that matter, anything at all that doesn't fit their own definition of real art. Makes no difference. Then the filler art... the bar there is not very high, but it will likely improve with AI. It's nothing that's been seriously invested in so far, and it's cheaper to let AI create it than poorly paid people.

    • A lot of artists don't mind using AI for art outside their field.

      I was in a fashion show in Tokyo in 2024.

      I noticed the fashion was all human-designed, but they had a lot of posters, video, and music that were AI-generated.

      I point-blank asked the curator why he used AI for some things but didn't enhance the fashion with AI. I was a bit naive; I was actually curious whether AI wasn't ready for fashion or whether they were going for an aesthetic. I genuinely was trying to learn, not point out a hypocrisy.

      He got mad and didn't answer. I guess it's because they didn't want to pay for everything else. Big lesson learned in what to ask lol.

  • Thank you, this sort of insight is exactly why I've felt such kinship with what software engineers like Karpathy and Simon Willison have been writing lately. It seems obvious to me that there is something special and irreplaceable about the thought processes that create good code.

    However, I think there is also something qualitatively different about how work is done in these two domains.

    Example: refactoring a codebase is not really analogous to revising a nonfiction book, even though they both involve rewriting of a sort. Even before AI, the former used far more tooling and automated processes. There is, e.g., no ESLint for prose which can tell you which sentences are going to fail to "compile" (i.e., fail to make sense to a reader).

    The special taste or skillset of a programmer seems to me to involve systems thinking and tool use in a different way than the special taste of a writer, which is more about transmuting personal life experiences and tacit knowledge into words, even if tools (word processor) and systems (editors, informants, primary sources) are used along the way.

    Sort of half-formed ideas here, but I find this a really rich vein of thought to work through. And one of the points of my post is that writing is about thinking in public and with a readership. Many thanks for helping me do that.

    I don't have a good answer to your question, but I do think it might be comparable, yes. If you had good taste about what to get Opus 4.6 to write, and kept iterating on it in a way that exposes the results to public view, I think you'd definitely develop a more fine-grained sense of the epistemological perspective of a writer. But you wouldn't be one, any more than I'm a software developer just because I've had Claude Code make a lot of GitHub commits lately (if anyone's interested: https://github.com/benjaminbreen).

  • > Could such a statement ever apply to something as crass as software development?

    Absolutely. I think like a Python programmer, a very specific kind of Python programmer after a decade of hard lessons from misusing the freedom it gives you in just about every way possible.

    I carry that with me in how I approach C++ and other languages. And then I learned some hard lessons in C++ that informed my Python.

    The tools you have available definitely inform how you think. As your thinking evolves, so does your own style. It's not just the tool, mind, but also the kinds of things you use it for.

  • "My AI usage is justified, but what others are doing is generating slop."

    I'm still waiting for a famous person to say this so we can have a name for this psychological phenomenon.

> Anyone who has led a class discussion — much less led students on a tour of Egypt or Okinawa, as my colleagues regularly do — knows that there is a huge gap between solo learning online and collective learning in meat space

One thing this author misses, and which I fear, is that it may become less important in the eyes of stakeholders to educate the masses when they have LLMs to do the jobs instead. That is, it is fully possible that one of the futures we may see is one where education declines because it is perceived as unimportant for most. Yes, meat-space education may be better, but who decides whether it is necessary?

Maybe vocational schools become more important instead? Jobs where you, for all intents and purposes, build out the infrastructure for the tertiary sector, mostly automated by LLMs.

You may disagree with this, but the key here is to realize that even if we disagree, others don't. Education is also power; there's a perverse incentive to avoid educating people and to feed them your narrative of how the world works instead. We are quite possibly on the way towards a Buy-n-Large-style future.

This type of cadence.

You know the one.

Choppy. Fast. Saying nothing at all.

It's not just boring and disjointed. It's full-on slop via human-adjacent mimicry.

Let’s get very clear, very grounded, and very unsentimental for a moment.

The contrast to good writing is brutal, and not in a poetic way. In a teeth-on-edge, stomach-dropping way. The dissonance is violent.

Here's the raw truth:

It’s not wisdom. It’s not professional. It’s not even particularly original.

You are very right to be angry. Brands picking soulless drivel over real human creatives.

And now we finish with a pseudo-deep confirmation of your bias.

---

Before long everyone will be used to it and it'll evoke the same eugh response.

Sometimes standing out or quality writing doesn't actually matter. Let AI do that part.

  • Why would anyone get sick of it if people have been happily doing it to each other for so many years prior?

    Does the fact that a machine can ape it so easily somehow reveal its vacuousness in a way that wasn't obvious already?

    I keep hearing people with job titles like "SEO growth hacker" saying it's depressing that AI can do their jobs better than they can.

    Really? That's the depressing part?

  • I don't really remember Claude 3.5 doing this, but it seems to be getting worse, with 4.6 being so bad I don't like using it for brainstorming. My shitty idea isn't "genuinely elegant".

  • This is what I don't grok...

    Your sample sounds exactly like an LLM. (If you wrote it yourself, kudos.)

    But, it needn't sound like this. For example, I can have Opus rewrite that block of text into something far more elegant (see below).

    It's like everyone has a new electric guitar with the cheapo included pedal, and everyone is complaining that their instruments all sound the same. Well, no shit. Get rid of the freebie cheapo pedal and explore some of the more sophisticated sounds the instrument can make.

    ----

    There is a particular cadence that has become unmistakable: clipped sentences, stacked like bricks without mortar, each one arriving with the false authority of an aphorism while carrying none of the weight. It is not merely tedious or disjointed; it is something closer to uncanny, a fluency that mimics the shape of human thought without ever inhabiting it.

    Set this against writing that breathes, prose with genuine rhythm, with the courage to sustain a sentence long enough to discover something unexpected within it, and the difference is not subtle. It is the difference between a voice and an echo, between a face and a mask that almost passes for one.

    What masquerades as wisdom here is really only pattern. What presents itself as professionalism is only smoothness. And what feels, for a fleeting moment, like originality is simply the recombination of familiar gestures, performed with enough confidence to delay recognition of their emptiness.

    The frustration this provokes is earned. There is something genuinely dispiriting about watching institutions reach for the synthetic when the real thing, imperfect, particular, alive, remains within arm's length. That so many have made this choice is not a reflection on the craft of writing. It is a reflection on the poverty of attention being paid to it.

    And if all of this sounds like it arrives at a convenient conclusion, one that merely flatters the reader's existing suspicion, well, perhaps that too is worth sitting with a moment longer than is comfortable.

    ----

    (prompt used: I want you to revise [pasted in your text], making it elegant and flowing with a mature literary style. The point of this exercise is to demonstrate how this sample text -- held up as an example of the stilted LLM style -- can easily be made into something more beautiful with a creative prompt. Avoid grammatical constructions that call for m-dashes.)

    • >It is not merely tedious or disjointed; it is something closer to uncanny, a fluency that mimics the shape of human thought without ever inhabiting it.

      It still can't help itself from doing "it's not X, it's Y". Changing the em-dash to a semicolon is just lipstick.

  • Well done. :)

    And at the same time the chop becomes long-form slop, stretching a little seed of a human prompt into a sea of inane prose.

The "cognitive debt" framing is slightly mislocated. The debt isn't from using AI — it's from confusing editorial fluency with generative fluency.

When you write, you discover what you think in the act of arranging words. That's why writing feels hard — it's thinking, not the output of thinking. When you prompt an AI and refine its output, you're doing editorial work, which builds different muscles. You get better at recognizing good prose without getting better at producing it.

That's the real debt: the growing gap between your ability to evaluate writing and your ability to generate it. Same thing happens when programmers who only use AI-assisted code generation start losing the ability to reason about systems from scratch.

I overheard a conversation between a university professor and a PhD student the other day. The professor was complaining that 99% of his students use ChatGPT to write their essays. He seemed genuinely distressed about the effect this was having on all of them.

  • Not surprised. I work in academia, and there is a push from the business side to start marking essays and delivering lectures with ChatGPT/AI.

    I have my own personal reservations about it all.

  • I can't wait for the reverse effect to happen, where everyone starts sounding like large language models... a true singularity where AI colonizes the noosphere instead of Earth.

As much as the general public seems to be turning against AI, people only seem to care when they're aware it's AI. Those of us who pay intentional attention to it are better tuned to identify LLM-speak and generated slop.

Most human writing isn't good. Take LinkedIn, for example. It didn't suddenly become bad because of LLM-slop posts - humans pioneered its now-ubiquitous style. And now, even when something is human-written, we're already seeing humans absorb linguistic patterns common to LLM writing. That said, I'm confident slop from any platform with user-generated content will eventually fade away from my feeds, because the algorithms will pick up on that as a signal.

What concerns me most is that there's absolutely no way this isn't detrimental to students. While AI can be a tool in STEM, I'm hearing from teachers among family and friends that everything students write is from an LLM.

Leaning on AI to write code I'd otherwise write myself might be a slight net negative on my ability to write future code - but brains are elastic enough that I could close an n-month gap in n/2 months or something.

From middle school to university, students are doing everything for the first time, and there's no recovering habits or memories that never formed in the first place. They made the ACT easier 2 years ago (reduced # of questions) and in the US the average score has set a new record low every year since then. Not only is there no clear path to improvement, there's an even clearer path to things getting worse.

  • I spent several years trying to get ground truth out of digital medical records and I would draw this parallel to AI slop:

    With traditional medical records, you could see what the practitioner did and covered because only that was in the record.

    With computerized records, the intent, the thought process, most of the signal you would use to validate internal consistency, was hidden behind a wall of boilerplate and formality that armored the record against scrutiny.

    Bad writing on LinkedIn is self-evident. Everything about it stinks.

    AI slop is like a Trojan Horse for weak, undeveloped thoughts. They look finished, so they sneak into your field of view and consume whatever additional attention is required to finally realize that despite the slick packaging, this too is trash.

    So “AI slop,” in this worldview, is a complaint that historical signals of quality based purely on form are no longer useful gatekeepers for attention.

    • re: traditional vs electronic medical records, if you haven't read Seeing Like a State, I highly recommend checking it out. The book is all about the unexpected side effects of improving the legibility of information for decision makers - these attempts can erase or elide important local detail, which ultimately sabotages the bureaucracy's aim of improving the system.

  • Did we lose something when we invented the calculator and stopped teaching the times table in schools? There have been millions of words discussing this, and the general consensus amongst us crusty old folks was that yes, the times table was useful and losing the ability to do mental arithmetic easily would be bad.

    Turns out we were wrong. Everyone carries a calculator now on their phone, even me. Doing simple maths is a matter of moments on the calculator app, and it's rare that I find myself doing the mental arithmetic that used to be common.

    I can't remember phone numbers any more. I used to have a good 50+ memorised, now I can barely remember my own. But the point is that I don't need to any more. We have machines for that.

    Do we need to be able to write an essay? I have never written one outside of an educational context. And no, this post does not count as an essay.

    I was expelled from two kindergartens as a kid. I was finally moved to a Montessori school where they taught individually by following our interests, and there I thrived. Later, I moved back into a more conventional educational environment and I fucking hated every minute of it. I definitely learned despite my education, not because of it. So if LLMs are about to completely disrupt education, then I celebrate that. This is a good thing. Giving every kid a personal tutor that can follow their interests and teach them things that they actually want to learn, at the pace they want to learn them, is fucking awesome.

    • Any competent thinker should be able to structure an argument and present it in written form; that's an important skill to have.

      If someone is unable to write an essay arguing something, unable to articulate complex thoughts and back them up with evidence, what does that indicate about their thinking?

      I don't write essays either, but I'm sure I could. And maybe some of those docs or emails I write at work are made more effective by that.

    • Calculators are good. But we still teach times tables and long division and prohibit calculators until kids learn how to do it the “hard way.”

      We can’t give a generation of kindergarteners calculators and expect them to produce new math when they’re adults: how will they ever form mathematical problem solving skills?

      I think the same principle applies for LLMs - they can be a tool but learning how to do things without them is still essential. Otherwise we might not have any more good authors in 10 years.

      Before CAD, engineers had to draw designs on drafting boards. Similar concept here, I believe most classes still find it valuable for students to start with pencil and paper and grasp something at its most fundamental level, even if obsolete, before moving on to modern tools.

      LLMs (and calculators, and CAD) should be used as tools once the underlying mechanisms and skills are understood by the user; otherwise it’s like driving a car without knowing how to replace a flat tire. Sure, you can call AAA, but eventually, if nobody learns to change a tire with their own two hands, humanity won’t be able to drive. This is obviously hyperbole, but I hope it illustrates my point.

      I’m fairly confident LLMs will be a net positive on society in the long run, just as calculators have been. But just like calculators are restricted at certain times in math classes, LLMs should be restricted in writing classes.

About the article referenced in the beginning: the sentiment presented in it honestly sounds like the AI version of cryptocurrency euphoria just as that bubble burst. "You are not ready for what's going to happen to the economy," "crypto will replace tradfi, experts agree." The article is sitting at almost 100M views after just a week and has strong FOMO vibes. To be honest, it's very conflicting for me, because I've been using AI and, compared to crypto, it doesn't just feel like magic, it also does magic. Still, I can't help but think of this parallel and the possibility that the AI bubble could right now be starting to stall or regress. The only problem is that I just don't see how such a scenario would play out, given how good and useful these tools are.

The same snobs who were telling us that "The Old Man and the Sea" (written in the style of a fifth-grader) is 'art'...

the same people telling us that "Finnegans Wake" (written in the style of a fifth-grader with a brain injury) is 'art'...

the same people telling us the poetry of Maya Angelou (written in the style of a fifth-grader with a brain injury and self-esteem issues) is 'art'...

the same people telling us that the works of Jackson Pollock, Mark Rothko, Piet Mondrian, etc., etc. are 'art'...

seem to be the ones complaining the most about AI-generated content.

I wonder whether we will see a shift back toward human-generated, organic content: writing that is not perfectly polished or exhaustively articulated. For an LLM, it is effortless to smooth every edge and fully flesh out every thought. For humans, it is not.

After two years of reading increasing amounts of LLM-generated text, I find myself appreciating something different: concise, slightly rough writing that is not optimized to perfection but is clearly written by another human being.

  • If LLMs presently aren't capable of matching the style quirks you're describing, isn't it likely they'll be able to in the near future? To me this feels like a problem that'll either need to be addressed legally or left to authors to somehow convince their audiences to trust that their work is their own.

I think people hate AI-generated writing more than they like human-curated writing. At the same time, I find that people like AI content more than my writing. I write, comment, and blog in many different places, and I notice that my AI-generated content does much better in terms of engagement. I'm not a writer, I'm a coder, so it might be that my writing is not professional. My hand-written code, though, still edges out the AI's.

We need to value human content more. I find that many real people eventually get banned, while the bots, always careful to follow the rules, stick around. The Dead Internet hypothesis sounds more inevitable under these conditions.

Indeed we all now have a neuron that fires every time we sense AI content. However, maybe we need to train another neuron that activates when content is genuine.

I agree with the assessment that pure writing (by a human) is over. Content is going to matter a lot more.

It's going to be tough for fiction authors to break through. Sadly, I don't think the average consumer has sufficiently good taste to tell when something is genuinely novel. People often prefer the carefully formulated familiar garbage over the creative gems; this was true before AI and, IMO, will continue to be true after AI. This is not just about writing, it's about art in general.

There will be a subset of people who can see through the form and see substance and those will be able to identify non-AI work but they will continue to be a minority. The masses will happily consume the slop. The masses have poor taste and they're more interested in "comfort food" ideas than actually novel ideas. Novelty just doesn't do it for them. Most people are not curious, new ideas don't interest them. These people will live and breathe AI slop and they will feel uncomfortable if presented with new material, even if wrapped in a layer of AI (e.g. human-written core ideas, rewritten by AI).

I feel that way about most books, music, and pop culture in general; it was slop and it will continue to be slop... the same basic ideas about elves, dragons, wizards, orcs, kings, queens, etc., just reorganized and mashed together with different overarching storylines, "a difficult journey" or "epic battles," in different wording.

Most people don't understand the difference between pure AI-generated content (seeded by a small human input) and human-generated content which was rewritten by AI (seeded by a large human input) because most people don't care about and never cared about substance. Their entire lives may be about form over substance.

  • Who or what is "the masses" actually?

    • Reminded of this clip.

      https://www.youtube.com/watch?v=KHJbSvidohg

      But as much as it pains me to admit... the current state of America is the slopocalypse. A slopalanche. A slopnado. AI cats waking people up in the middle of the night, blasting down doors, glitching out. All produced by slop-slingers. It's rather bleak for long-form attention content, human-created or not.

      It's a war of/on attention. A war to secure your attention during the time you would otherwise think for yourself. Keep off the short-form content, is my advice.

That is a shallow piece in the new genre: I am a concerned academic who nevertheless uses these new tools to create vibe-coded slop and has to tell the world about it.

Everything is inevitable but my own job is secure. Have I already told you how concerned I am?

No novelty. No intellectual challenge. No spirit. Just AI advertisements! /s

The "cognitive debt" framing resonates, but from an unexpected direction. I'm not a developer. I've never written a line of code. I built enterprise software, a live computer vision system monitoring industrial cranes, deployed on Google Cloud Run, generating six figures in contracts, entirely by chatting with Claude. No IDE, no terminal muscle memory to lose.

For me, there is no cognitive debt in the code. There's no ground truth I'm losing touch with, because I never had it. The ground truth I bring is domain knowledge: fifteen years of understanding what an industrial operator actually needs to see on a screen at 3am. What Breen describes as "junk food", the dopamine hit of watching Claude build a new feature is, for domain experts like me, the first time in history we could participate in building at all. The gap that existed wasn't "developer loses touch with code." It was "person closest to the problem could never build the solution." But his core point about writing holds, even here. The thinking that produces good software requirements, the careful articulation of what needs to be built and why, that remains irreducibly human. My most important contributions to my own codebase aren't commits. They're the precise questions I ask. Maybe cognitive debt is domain-specific. Developers accumulate it. Domain experts spend it.

  • You vibe-coded a computer-vision product that is to be used in monitoring industrial cranes? And people are using it?

    • The account is 47 minutes old and with the writing style plus the hefty dose of em dashes, I think they are an LLM.

  • Appreciate this take. It makes a lot of sense, and I can see this happening all over right now.

I had this worry at first, but at this point we have hundreds of years of books written using legacy methods. The best of what was possible already exists. It's time for a change.

In the near future we will not even need to read anyway.

  • For hundreds of years we've avoided eating rocks, just based on so-called "conventional wisdom". Witness all the problems we now have in the world. Well I, for one, am ready for a change. It's time to do things differently. If you're fed up with the status quo, it's time to start eating rocks.

As we move further into a world where data exfiltration is becoming more sophisticated, local-first processing isn't just a luxury—it’s a necessity. Hardware is finally powerful enough to handle what used to require a massive backend infrastructure.

As someone who uses Claude Code daily for indie projects, the "cognitive debt" framing resonates. But I'd push back slightly: for me it's not debt, it's delegation.

The key shift was realizing I'm not "writing code with AI help" — I'm directing an agent that writes code. Different mental model, different skills required. The writing I do now is more like technical specifications: precise, unambiguous, structured.

The "space around AI" mentioned in the article is exactly right. The value is in knowing what to build, why, and how to verify it works. The typing part was never the bottleneck.

  • It's interesting how much this comment feels like it's AI-written.

    If it isn't, then it has seeped into your writing style, and it's quite a turn-off as a reader; I don't care much to engage.

    If it is, then why should I read it? Why come to this website and even bother reading AI bot comments?

    What is happening to writing, indeed.

    • I think it's a joke comment intended to be as stereotypically AI as possible. It even has the em dash!