Comment by tovej
2 months ago
Could you please cite these papers. If by AI you mean LLMs, that is not supported by what I know. If you mean a theoretical world-model-based AI, that's just a tautological statement.
https://arxiv.org/abs/2305.11169
https://arxiv.org/abs/2506.02996
Their world model is entirely a byproduct of language, though, not of experience. Furthermore, by deliberate design they maintain no form of self-recognition or narrative tracking, which is the necessary substrate for developing and validating experience. The world model of an LLM is still a map, not the territory. Even though ours arguably shares some of the same qualities, the identity we carry with us and our self-narrative are incredibly powerful in letting us stay aligned with the world as it is, without munging it up quite as badly as LLMs seem prone to.
How do you know ours is any different, or that we are not in a simulation or a solipsistic scenario? The truth is that one cannot know; it's a philosophical quandary that has been debated for millennia.
5 replies →
One conference proceeding paper and one preprint, about LLMs encoding either relative geometric information of objects or simple 2D paths.
One of the papers calls this "programming language semantics", but it is using a 2D grid-navigation DSL; the semantics of that language are nothing like those of an actual programming language.
These are not the same as the concept being discussed here: a human's "world model" of a computer system, through which to interpret the semantics of a program.
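To make the contrast concrete: a grid-navigation DSL of the kind used in such papers can be given its entire semantics in a few lines. This is a hypothetical sketch (the command names and state representation are assumptions, not taken from either paper), but it shows how small the semantic domain is, a position and a heading on a grid, compared to a real programming language's semantics:

```python
def run(program, pos=(0, 0), heading=(0, 1)):
    """Interpret a list of grid-navigation commands.

    The whole semantic domain is just (position, heading) on a 2D grid.
    """
    for cmd in program:
        if cmd == "move":
            # Step one cell in the current heading.
            pos = (pos[0] + heading[0], pos[1] + heading[1])
        elif cmd == "turnLeft":
            # Rotate heading 90 degrees counter-clockwise.
            heading = (-heading[1], heading[0])
        elif cmd == "turnRight":
            # Rotate heading 90 degrees clockwise.
            heading = (heading[1], -heading[0])
    return pos, heading
```

A model that predicts this state is tracking a three-number world, which is a far weaker claim than having a world model of general program semantics.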
Well, I didn't find any papers off the bat on code world models, but if LLMs can build a world model for a given task, such as geometric manipulation, I don't see why they wouldn't for code.
1 reply →