Comment by bloppe

15 hours ago

I don't think LLMs are an abject failure, but I find it equally odd that so many people assume transformer-based LLMs can be incrementally improved to perfection. It seems pretty obvious to me by now that we're not gonna RLHF our way out of hallucinations. We'll probably need a few more fundamental architecture breakthroughs for that.