Comment by kevingadd
5 days ago
If most of what an LLM spits out is a digested version of its training set, is it really an outside view of the world? If anything, seeing how easy it is to get these things to spit out conspiracy theories or bigotry suggests to me that we're far from being able to get a robot's view of the world.
Though for some people, if the "robot" says bigoted things or supports their conspiracy theory of choice, that's just "proof" that their viewpoint is correct. Tricky problem to navigate.
Indeed, if LLMs are just distilled training data, their perspective will be quite human. Makes me think it could be interesting to train them on data from set periods instead, and then compare how their perspectives differ. What would a conversation between a 1900s LLM, a 2000s LLM, and a 1600s LLM look like?
Or maybe some kind of mix and match, e.g. train fully on Buddhist texts plus a dictionary from the original language to English. Maybe someone's already making hyper-focused LLMs. Could be a nice change from the know-it-all (but consequently no unique perspective) LLMs I use now.
Well... enough thinking out loud for now.