Comment by HelloUsername
6 hours ago
The one good use case I've found for AI chatbots is writing ffmpeg commands. You can just keep chatting with it until you have the command you need. Some of them I save as an executable .command file, or in a .txt note.
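For example, one of those saved .command files might look something like this (the filename and the exact ffmpeg flags are just an illustration, not a claim about the one right invocation):

    #!/bin/bash
    # extract-audio.command (hypothetical name; run from a terminal as: ./extract-audio.command input.mp4)
    # Pulls the audio track out of a video without re-encoding.
    # Assumes the source audio is AAC; adjust the output extension otherwise.
    ffmpeg -i "$1" -vn -c:a copy "${1%.*}.m4a"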
LLMs are an amazing advance in natural language parsing.
The problem is someone decided that, plus the contents of Wikipedia, was all something needs to be intelligent haha
The confusion was thinking that language is the same thing as intelligence.
You and I are great examples of that. We are both extremely stupid and yet we can speak.
This seems like a glib one-liner, but I do think it is profoundly insightful about how some people approach thinking about LLMs.
It is almost like there is hardwiring in our brains that makes us instinctively correlate language generation with intelligence, and people cannot separate the two.
It would be like if the first calculators ever produced, instead of responding with 8 to the input 4 + 4 =, printed out "Great question! The answer to your question is 7.98", and that resulted in a slew of people proclaiming the arrival of AGI (or, more seriously, the ELIZA Effect is a thing).
As pessimistic about it as I am, I do think LLMs have a place in helping people turn their text description into formal directives. (Search terms, command-line, SQL, etc.)
... Provided that the user sees what's being made for them and can confirm it and (hopefully) learn the target "language."
Tutor, not a do-for-you assistant.
I agree, apart from the learning part. The thing is, unless you have some very specific needs where you use ffmpeg a lot, there’s just no need to learn this stuff. If I only have to touch it once a year, I have much better things to spend my time learning than ffmpeg commands.
Agreed. I have a bunch of little command-line apps that I use 0.3 to 3 times a year* and I'm never going to memorize the commands or syntax for those. I'll be happy to remember the names of these tools, so I can actually find them on my own computer.
* - Just a few days ago I used ImageMagick for the first time in at least three years. I downloaded it just to find that I already had it installed.
There is no universe where I would like to spend brain power on learning ffmpeg commands by heart.
Do most devs even look at the source code for packages they install? Or the compiled machine code? I think of this as just a higher level of abstraction: confirm it works and don't worry about the details of how it works.
For the kinds of things you’d need to reach for an LLM, there’s no way to trust that it actually generated what you actually asked for. You could ask it to write a bunch of tests, but you still need to read the tests.
It isn’t fair to say “since I don’t read the source of the libraries I install that are written by humans, I don’t need to read the output of an LLM; it’s a higher level of abstraction” for two reasons:
1. Most libraries worth using have already been proven by being used in actual projects. If you can see that a project has lots of bug fixes, you know it’s better than raw code. Most bugs don’t show up unless code gets put through its paces.
2. Actual humans have actual problems that they’re willing to solve to a high degree of fidelity. This is essentially saying that humans have both a massive context window and an even more massive ability to prioritize important things that are implicit. LLMs can’t prioritize like humans because they don’t have experiences.
I don’t, because I trust the process that produced the artifacts. Why? Because it’s easy to replicate and verify, just like how proof works in math.
You can’t verify an LLM’s output, and thus any form of trust is faith, not rational logic.
If you stretch it a little further, those formal directives also include the language and vocabulary of a particular domain (legalese, etc.).
The "provided" isn't provided, of course, especially the learning part, that's not what you'd turn to AI for vs more reliable tutoring alternatives
One task that older AI models struggled with was the "bounce" effect: play from 0:00 to 0:03, then backwards from 0:03 to 0:00, then repeat 5 times.
Just tried it and got this; is it correct?
> Write an ffmpeg command that implements the "bounce" effect: play from 0:00 to 0:03, then backwards from 0:03 to 0:00, then repeat 5 times.
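For comparison, here is one hand-written way to do it. This is an untested sketch: it assumes a video-only input.mp4, and splits the work into two steps so the reverse filter only has to buffer 3 seconds of frames. The filenames bounce.mp4 and bounced.mp4 are placeholders.

    # 1. Build one bounce cycle: the first 3 seconds forward, then the same frames reversed.
    ffmpeg -t 3 -i input.mp4 -filter_complex \
      "[0:v]split[f][r];[r]reverse[rev];[f][rev]concat=n=2:v=1:a=0[v]" \
      -map "[v]" -an bounce.mp4

    # 2. Play that cycle 5 times in total (-stream_loop 4 = the input plus 4 repeats).
    ffmpeg -stream_loop 4 -i bounce.mp4 -c copy bounced.mp4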
But doesn't something like this interface kind of show the inefficiency of this? Like we can all agree ffmpeg is somewhat esoteric and LLMs are probably really great at it, but at the end of the day, if you can get 90% of what you need with just some good porcelain, why waste the energy spinning up the GPU?
Requiring the installation of a massive kraken like Node.js and npm to run a command-line executable hardly screams efficiency...
That's a deficiency of this particular implementation, not an inherent disadvantage of the method.
Because FFmpeg is a Swiss Army knife with a million blades, and I don't think any easy interface is really going to do the job well. It's a great LLM use case.
But you only need to find the correct tool once and mark it in some way: write a wrapper script, jot down some notes. You're acting like you're forced to reconstruct the CLI invocation each time.
Because getting 90% might not be good enough, and the effort you need to expend to reach 97% costs much more than the energy the GPU uses.
Because the porcelain is purpose built for a specific use case. If you need something outside of what its author intended, you'll need to get your hands dirty.
And, realistically, compute and power are cheap for getting help with one-off CLI commands.