Comment by majormajor

8 months ago

That's not really true at all, at least at the end user level.

You can have a very thoughtful LLM prompt and get a garbage response if the model fails to generate a solid, sound answer to it. Hard questions with verifiable but obscure answers, for instance, where it generates fake citations.

You can have a garbage prompt and get not-garbage output if you are asking in a well-understood area with a well-understood problem.

And the current generation of company-provided LLMs is VERY highly trained to make the answer look non-garbage in all cases, increasing the cognitive load on you to figure out whether it actually is.