Comment by energy123
2 months ago
The real criticism should be that the AI doesn't say "I don't know", or, even better, "I can't answer this directly because my tokenizer... but here's a Python snippet that calculates it...", exhibiting both self-awareness of its limitations and what an intelligent person would do absent that information.
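A minimal sketch of the kind of snippet meant here, using letter counting (the classic tokenizer-limited task) as the illustrative example; the word and letter are assumptions, not from the thread:

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter in a word -- trivial in code,
    but hard for an LLM that sees subword tokens, not characters."""
    return sum(1 for ch in word.lower() if ch == letter.lower())

print(count_letter("strawberry", "r"))  # -> 3
```

The point is that the model could emit and run three lines like these instead of guessing at something its tokenization hides from it.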
We do seem to be an architectural/methodological breakthrough away from this kind of self-awareness.
For the AI to say this, or to produce the correct answer, would be easily achievable with post-training; that's what was done for the strawberry problem. But it's just telling the model what to reply/what tools to use in that exact situation. There's nothing "self-aware" about it.
> But it's just telling the model what to reply/what tools to use in that exact situation.
So the exact same way we train human children to solve problems.
There is no inherent need for humans to be "trained". Children can solve problems on their own given a comprehensible context (e.g., puzzles). Knowledge does not necessarily come from direct training by other humans, but can also be obtained through contextual cues and general world knowledge.
I keep thinking about that. Imagine if teaching humans were all the hype, with hundreds of billions invested in improving the "models". I bet that, trained properly, humans could do all kinds of useful jobs.