Comment by Terr_

5 hours ago

As pessimistic about it as I am, I do think LLMs have a place in helping people turn their text description into formal directives. (Search terms, command-line, SQL, etc.)

... Provided that the user sees what's being made for them and can confirm it and (hopefully) learn the target "language."

Tutor, not a do-for-you assistant.
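
For example, a prompt like "convert this screen recording into an MP4 that plays everywhere" might come back as something like this (file names and values here are illustrative, not canonical):

    ffmpeg -i input.mov -c:v libx264 -crf 23 -c:a aac -b:a 128k output.mp4

Confirming that line before running it also gives the user a nudge toward what -crf and -c:a actually mean.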

I agree, apart from the learning part. The thing is, unless you have some very specific need that has you using ffmpeg a lot, there’s just no need to learn this stuff. If I only have to touch it once a year, I have much better things to spend my time learning than ffmpeg commands.

  • Agreed. I have a bunch of little command-line apps that I use 0.3 to 3 times a year* and I'm never going to memorize their commands or syntax. I'll be happy if I can just remember the names of those tools, so I can actually find them on my own computer.

    * - Just a few days ago I used ImageMagick for the first time in at least three years. I downloaded it, only to find that I already had it installed.

  • There is no universe where I would like to spend brain power on learning ffmpeg commands by heart.

    • No one learns those by heart. What people actually learn is the UX of the CLI and the terminology (codec, Opus, bitrate, sampling, …).
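
      For example, once you know those terms, a typical invocation reads almost like a sentence. A sketch (file names and values are illustrative):

          # encode the audio with the Opus codec at a 96 kbit/s bitrate and a 48 kHz sampling rate
          ffmpeg -i input.wav -c:a libopus -b:a 96k -ar 48000 output.opus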

Do most devs even look at the source code of the packages they install? Or the compiled machine code? I think of this as just a higher level of abstraction: confirm it works, and don't worry about the details of how it works.

  • For the kinds of things that make you reach for an LLM, there’s no way to trust that it generated what you actually asked for. You could ask it to write a bunch of tests, but then you still need to read the tests.

    It isn’t fair to say “since I don’t read the source of the human-written libraries I install, I don’t need to read the output of an LLM; it’s just a higher level of abstraction”, for two reasons:

    1. Most libraries worth using have already been proven by use in actual projects. If you can see that a project has lots of bug fixes behind it, you know it’s better than raw, untested code. Most bugs don’t show up until code gets put through its paces.

    2. Actual humans have actual problems that they’re willing to solve to a high degree of fidelity. This essentially means that humans have both a massive context window and an even more massive ability to prioritize the important things that are left implicit. LLMs can’t prioritize like humans because they don’t have experiences.

  • I don’t, because I trust the process that produces the artifacts. Why? Because it’s easy to replicate and verify, just like a proof in math.

    You can’t verify an LLM’s output, and thus any form of trust in it is faith, not rational logic.

    • I don't install third-party dependencies if I can avoid them. Why? Because although someone could have verified them, there's no guarantee that anybody actually did, and this gap has been exploited by attackers often enough to earn its own name: a "supply-chain attack".

      An LLM’s output, by contrast, is short enough that I can* put in the effort to make sure it's not obviously malicious. Then I save the output as an artefact.

      * and I do put in this effort, unless I'm deliberately experimenting with vibe coding to see what the SOTA is.

If you stretch it a little further, those formal directives also include the language and vocabulary of a particular domain (legalese, etc.).

The "provided" isn't provided, of course, especially the learning part, that's not what you'd turn to AI for vs more reliable tutoring alternatives