Comment by skeledrew
2 months ago
We're still at "that's just how it works." The LLM isn't aware of any consequences, etc. All it does is complete patterns as trained, and the training data contains many instances of articulate question answering.
It is up to those using the LLM to be aware of its capabilities, or not be allowed to use it. Like a child unaware that running their finger along a sharp knife blade will lead to a bad cut: you don't dull the blade to keep the child safe; you keep the child away from the knife until they can understand and respect its capabilities.
If your prototype of the «knife» is all blade and no handle, fix it and implement the handle.
If the creation is planned, you will have also thought of the handle; if it arose by serendipity, you will have to design the handle afterwards.
Pretty sure it doesn't matter to the child whether the knife has a handle or not. They'll eventually find a way to cut themselves.
It matters to the adult, who is also a user.
LLMs do not deliver (they lack important qualities related to intelligence); they are here now; so they must be superseded.
There is no excuse: they must be fixed urgently.