Comment by throw10920

1 month ago

Writing/reading and AI are so categorically different that the only way you could compare them is if you fundamentally misunderstand how both of them work.

And "other people in the past predicted doom about something like this and it didn't happen" is a fallacious non-argument even when the things are comparable.

The argument Socrates is making is specifically that writing isn't a substitute for thinking, but it will be used as such. People will read things "without instruction" and claim to understand those things, even if they do not. This is a trade-off of writing. And the same thing is happening with LLMs in a widespread manner throughout society: people are having ChatGPT generate essays, exams, legal briefs and filings, analyses, etc., and submitting them as their own work. And many of these people don't understand what they have generated.

Writing's invention is presented as an "elixir of memory", but it doesn't transfer memory and understanding directly - the reader must still think to understand and internalize information. Socrates renames it an "elixir of reminding", arguing that writing only tells readers what other people have thought or said. It can facilitate understanding, but it can also enable people to take shortcuts around thinking.

I feel that this is an apt comparison, for example, between someone who has only ever vibe-coded and an experienced software engineer. The skill of reading (in Socrates's argument) is not equivalent to the skill of understanding what is read. Which is why, I presume, the GP posted it in response to a comment regarding fear of skill atrophy: those commenters are practicing code generation but spending less time thinking about what all of the produced code is doing.

Yes, but people just really like to predict doom, and they also like to be convinced that they live in some special era in human history.

  • It takes about 30 seconds of thinking and/or searching the Internet to realize that people also predict doom when it actually happens - e.g. with people correctly predicting that TikTok will shorten people's attention spans.

    It's then quite obvious that the fact that someone, somewhere, predicts a bad thing happening has ~zero bearing on whether it actually happens, and so the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd. Calling that idea "intellectually lazy" is an insult to smart-but-lazy people. This is more like intellectually incapable.

    The fact that people will unironically say such a thing in the face of not only widespread personal anecdotes from well-respected figures, but scientific evidence, is depressing. Maybe people who say these things are heavy LLM users?

    • There is always some set of people predicting all sorts of dooms though. The saying about the broken clock comes to mind.

      With the right cherry picking, it can always be said that [some set of] the doomsayers were right, or that they were wrong.

      As you say, someone predicting doom has no bearing on whether it happens, so why engage in it? It's just spreading FUD and dwelling on doom. There's no expected value to the individual or to others.

      Personally, I don't think "TikTok will shorten people's attention spans" qualifies as doom in and of itself.


I know managers who can read code just fine; they're just not able or willing to write it. Though AI helps with that too. I've had a few managers dabble back into coding, especially scripts and whatnot, where I want them to be pulling unique data and doing one-off investigations.

I read the grandparent comment as saying people have been claiming that the sky is falling forever… AI will be both good and bad for learning and development. It's always up to the individual whether it benefits them or atrophies their minds.

  • I'm not a big fan of LLMs, but while using them for day-to-day tasks, I get the same feeling I had when I first started using the internet (I was lucky to start with broadband internet).

    That feeling was one of empowerment: I was able to satisfy my curiosity about a lot of topics.

    LLMs can do the same thing and save me a lot of time. It's basically a supercharged Google. For programming, it's a supercharged autocomplete coupled with a junior researcher.

    My main concern is independence. LLMs in the hands of just a handful of unchecked corporations are extremely dangerous. I kind of trusted Google, and even that trust is eroding, and LLMs can be extremely personal. The lack of trust ranges from the risk of data sales and general data leaks to intrusive ads and, worse, hidden ones.

    • When I first started using the internet, I was able to instant-message (over IRC) random strangers, using a fake name, and lie about my age. My teacher had us send an email to our ex-classmate who had moved to Australia, and she replied the next day. I was able to download the song I had just heard on the radio and play it as many times as I wanted in Winamp.

      These capabilities simply didn't exist before the Internet. Apart from the email to Australia (which was possible with a fax machine, but much more expensive), LLMs don't give you any new capabilities. They just provide a way for you to do what you already can (and should) do with your brain, without using your brain. It is more like replacing your social interaction with Facebook than it is like experiencing an instant-message group chat for the first time.


To understand the impact on computer programming per se, I find it useful to imagine that the first computer programs I had encountered were, somehow, expressed in a rudimentary natural language. That (somewhat) divorces the consideration of AI from its specific impact on programming. Surely it would have pulled me in certain directions. Surely I would have had less direct exposure to the mechanics of things. But, it seems to me that’s a distinction of degree, not of kind.