Comment by troupo
2 years ago
> The current usual approach is "one shot": you've got one shot at the prompt, then return your output, no second thoughts allowed, no recursion at all.
We've had the models for a while and still no one has shown this mythical lab where this regurgitation machine reasons about things and makes no mistakes.
Moreover, since it already has so much knowledge stored, why does it still hallucinate even in specific cases where the answer is known, such as the case I linked?
> We've had the models for a while and still no one has shown this mythical lab where this regurgitation machine reasons about things and makes no mistakes.
> It would be a good experiment to interact with the unfiltered, not-yet-RLHFed interfaces provided to the initial trainers (Nigerian folks/gals?).
> Or maybe the lightly filtered interfaces used privately in demos for CEOs.
So the claim that LLMs are intelligent is predicated on the belief that there are labs running unfiltered output and that there are some secret demos only CEOs see.