Comment by mdp2021

2 months ago

Anything articulate (hence possibly convincing) which could be «merely [guessing]» should either be locked out of consequential questions, or fixed.

We're still at «that's just how it works»: the LLM isn't aware of any consequences. All it does is complete patterns as trained, and the training data contains many instances of articulate question answering.

It is for those using the LLM to be aware of its capabilities, or not be allowed to use it. Like a child unaware that running a finger along a sharp knife blade will lead to a bad cut: you don't dull the blade to keep the child safe; you keep the child away from the knife until they can understand and respect its capabilities.

  • If your prototype of the «knife» is all blade and no handle, fix it and implement the handle.

    If the creation is planned, you will have also thought of the handle; if it is a serendipitous discovery, you will have to design the handle afterwards.