Comment by hannofcart

6 days ago

> It does not do the thing it promises to do. Software that sometimes works and very often produces wrong or nonsensical output...

Is that very unlike humans?

You seem to be comparing LLMs to much less sophisticated deterministic programs, and claiming LLMs are garbage because they are stochastic.

That entirely misses the point, because I don't want an LLM to render a spreadsheet for me in a fully reproducible fashion.

No, I expect an LLM to understand my intent, reason about it, wield those smaller deterministic tools on my behalf, and sometimes even be creative in coming up with a solution; and if that doesn't work, dream up some other method and try again.

If _that_ is the goal, then some amount of randomness in the output is not a bug, it's a necessary feature!