Comment by skydhash

3 days ago

The issue is that it is slow and verbose, at least in its default configuration. The amount of reading is non-trivial. There’s a reason most references are dense.

You can partly solve those issues by changing the prompt to tell it to be concise and not explain its code.

But nothing will make them stick to the one API version I use.

  • > But nothing will make them stick to the one API version I use.

    Models trained for tool use can do that. When I use Codex for some Rust stuff, for example, it can grep the source files in the directory where dependencies are stored, so looking up the current APIs is trivial for it. The same works for JavaScript and a bunch of other languages too, as long as the sources are accessible somewhere via the tools it has available.
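    That lookup can be sketched in a few shell commands. Everything here is an invented stand-in (the crate name `serde`, the version, the trait), and a temporary directory simulates Cargo's on-disk source tree so the sketch runs anywhere; real sources typically live under `~/.cargo/registry/src/<index>/<crate>-<version>/`:

```shell
#!/bin/sh
set -eu

# Simulate a downloaded crate's source tree in a temp dir
# (stand-in for ~/.cargo/registry/src/<index>/serde-1.0.200/).
dir=$(mktemp -d)
mkdir -p "$dir/serde-1.0.200/src"
cat > "$dir/serde-1.0.200/src/lib.rs" <<'EOF'
pub trait Serialize {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>;
}
EOF

# This is essentially what a tool-using agent does: search the
# dependency sources for the signature of the API it is about to call.
grep -rn "fn serialize" "$dir"/serde-*/src/
```

    The point is that the model never has to rely on whatever API version was in its training data; the current signatures are sitting on disk.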

    • Hm, I haven’t tried Codex so far, but I have tried quite a few other tools and models, and none could help me in a consistent way. I am sceptical, because even if I tell them explicitly to only use one specific version, they might or might not do so, depending on their training corpus and temperature, I assume.

  • The less verbosity you allow the dumber the LLM is. It thinks in tokens and if you keep it from using tokens it's lobotomized.

    • It can think as much as it wants and still return just code in the end.

Well, compared to what? What method would be faster at answering that kind of question?

  • Learning the thing. It’s not like I have to use all the libraries in the whole world at my job. You can really fly through reference documentation if you’re familiar with the domain.

    • If your job only ever calls on you to use the same handful of libraries, then of course becoming deeply familiar with them is better, but that’s obviously not realistic if you’re jumping from this thing to that thing. Nobody would use resources like Stack Overflow either if it were that easy and practical to just “learn the thing.”