Comment by AstralStorm

14 hours ago

It's not even that. Only a kernel of the LLM is trained using RLHF. The rest is self-supervised training on a corpus, with a few test questions mixed in.
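
Roughly, the two phases look something like this in pseudo-PyTorch (purely illustrative; `model`, `reward_model`, and the `generate()` helper are stand-ins, not anyone's real training code):

```python
import torch.nn.functional as F

def pretraining_step(model, token_ids):
    """Self-supervised phase: predict each next token in the corpus.
    The loss only rewards matching the training text, not factual accuracy."""
    logits = model(token_ids[:, :-1])   # predictions for positions 1..T
    targets = token_ids[:, 1:]          # the tokens that actually came next
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))

def rlhf_step(model, prompt_ids, reward_model):
    """RLHF phase: sample a completion and score it with a reward model
    trained on human preference labels. The signal is 'what raters
    preferred', which is still not ground-truth correctness."""
    completion = model.generate(prompt_ids)        # hypothetical helper
    reward = reward_model(prompt_ids, completion)  # hypothetical scorer
    return -reward  # maximizing reward == minimizing its negative
```

Note that neither objective checks the answer against reality: the first matches text, the second matches rater taste.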

Because it still cannot reason about the veracity of sources, much less empirically try things out, the algorithm has no idea what makes for correctness...

It does not even understand fiction: every now and then it returns sci-fi answers to technical questions.