Comment by progval
6 days ago
The other side of the coin is that if you give it a precise input, it will fuzzily interpret it as something else that is easier to solve.
Well said, these things are actually in a tradeoff with each other. I feel like a lot of people somehow imagine that you could have the best of both, which is incoherent short of mind-reading + already having clear ideas in the first place.
But thankfully we do have feedback/interactiveness to get around the downsides.
When you have a precise input, why give it to an LLM? When I have to do arithmetic, I use a calculator. I don't ask my coworker, who is generally pretty good at arithmetic, even though I'd get the right answer 98% of the time. Instead, I save my coworker for questions that are less completely specified.
Also, if it's an important piece of arithmetic, and I'm in a position where I need to ask my coworker rather than do it myself, I'd expect my coworker (and my AI) to grab (spawn) a calculator, too.
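The "grab a calculator" idea is essentially tool use: delegate the deterministic part to deterministic code. A minimal sketch of such a calculator tool in Python (hypothetical illustration; no particular agent framework is assumed):

```python
import ast
import operator

# The kind of deterministic helper an LLM agent might call
# instead of doing arithmetic in-context.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def calculate(expression: str) -> float:
    """Safely evaluate a basic arithmetic expression via the AST
    (no eval(), so only the whitelisted operators above are allowed)."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))

print(calculate("17 * 23 + 4"))  # 395
```

The model decides *what* to compute; the tool guarantees the computation itself is exact rather than 98% right.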
It will, or it might? Because if every time you use an LLM it misinterprets your input as something easier to solve, you might want to brush up on the fundamentals of the tool.
(I see some people are quite upset with the idea of having to mean what you say, but that's something that serves you well when interacting with people, LLMs, and even when programming computers.)
Might, of course. And in my experience it's what happens most times I ask an LLM to do something I can't trivially do myself.
Well, everyone's experience is different, but that's been a pretty atypical failure mode in my experience.
That being said, I don't primarily lean on LLMs for things I have no clue how to do, and I don't think I'd recommend that as the primary use case either at this point. As the article points out, LLMs are pretty useful for doing tedious things you know how to do.
Add up enough "trivial" tasks and they can take up a non-trivial amount of energy. An LLM can help reduce some of the energy sapped so you can get to the harder, more important parts of the code.
I also do my best to communicate clearly with LLMs: like I use words that mean what I intend to convey, not words that mean the opposite.
7 replies →
I find this very much depends on the model and the instructions you give the LLM. You can also use other instructions to check the output and have it try again. It definitely struggles with larger codebases, but the power is there.
My favorite instruction is "using component A as an example, make component B".