Comment by smithza

2 years ago

I hold little hope that LLMs will help us reason through "correctness." If these AIs scour the troves of idiocy on the internet, believing whatever fits a pattern rather than applying critical reasoning, they too will pick up the bandwagon's opinions and perpetuate them. Argumentum ad populum will remain a persistent fallacy if we humans don't learn appropriate reasoning skills.

It's already been shown that LLMs are capable of building an internal model of the world (or, in the case of the study that showed it, a model of the game the LLM was being trained on). If LLMs have a world model, then they are fully capable of generating truths beyond whatever they were trained on. We may not be there yet (and who knows how long it will take), but it is in principle possible for LLMs to move beyond their training data.