
Comment by Philpax

3 months ago

> LLMs are stateless and they do not remember the past (as in they don't have a database), making the training data a non-issue here.

That's not what they said. They said that an LLM knows what elections are, which suggests it could have the requisite knowledge to act one out.

> Therefore, the claims made here in this paper are not possible because the simulation would require each agent to have a memory context larger than any available LLM's context window. The claims made here by the original poster are patently false.

No, it doesn't. They aren't passing in all prior context at once: they are providing relevant subsets of memory as context. This is a common technique for language agents.
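Roughly, the pattern looks like this (a toy sketch, not their actual pipeline; real systems like Generative Agents score memories with embedding similarity plus recency and importance, whereas this uses word overlap so it runs with no dependencies):

```python
# Minimal sketch of memory retrieval for a language agent: only the
# most relevant stored memories are placed into the prompt, so the
# full history never has to fit in the context window.

def relevance(query: str, memory: str) -> float:
    """Crude relevance score: fraction of query words present in the memory."""
    q = set(query.lower().split())
    m = set(memory.lower().split())
    return len(q & m) / max(len(q), 1)

def build_prompt(query: str, memories: list[str], k: int = 3) -> str:
    """Select the top-k relevant memories and format them as prompt context."""
    top = sorted(memories, key=lambda m: relevance(query, m), reverse=True)[:k]
    context = "\n".join(f"- {m}" for m in top)
    return f"Relevant memories:\n{context}\n\nTask: {query}"

memories = [
    "Alice announced she is running for mayor of the town.",
    "Bob built a wheat farm near the river.",
    "The town agreed to hold an election next week.",
    "Carol traded iron for bread at the market.",
]
print(build_prompt("Who should I vote for in the election?", memories))
```

The point is that the prompt only ever contains a small, relevant slice of the agent's history, so the context window is not the bottleneck the quoted claim assumes.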

> Agentic systems are not well-suited to achieve any of the things that are proposed in the paper, and Generative AI does not enable these kinds of advancements.

This is not new ground. Much of the base social behaviour here comes from Generative Agents [0], which they cite. Much of the Minecraft-related behaviour is inspired by Voyager [1], which they also cite.

There isn't a fundamental breakthrough or innovation here that was patently impossible before, or that they are lying about: this combines prior work, iterates upon it, and scales it up.

[0]: https://arxiv.org/abs/2304.03442

[1]: https://voyager.minedojo.org/

Voyager's claims that it's a "learning agent" and that it "make[s] new discoveries consistently without human intervention" are pretty much wrong, considering that part of that system relies on GPT's giant memory of ~~all~~ a lot of human knowledge (including how to play Minecraft, the most popular game ever made).

In the same sense, the claim that LLMs "do not remember the past" is wrong (especially when they are part of a larger system). It seems like claiming humans or civilizations don't have a "memory" because you've redefined long-term memory and repositories of knowledge, like books, so they don't count as "memory"?
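To make that concrete, here's a rough sketch of what "part of a larger system" means (hypothetical agent; `call_llm` is a stand-in for a real model call, and the JSON file is just an illustrative persistent store):

```python
# Each LLM call is stateless, but the surrounding system keeps
# long-term memory in an external store and feeds it back in on the
# next call -- so the agent as a whole does remember the past.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # illustrative persistent store

def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(memories: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def call_llm(prompt: str) -> str:
    # Stand-in for an actual API call; the model holds no state between
    # calls -- everything it "remembers" arrives via the prompt.
    return f"(model response to a {len(prompt)}-char prompt)"

def agent_step(observation: str) -> str:
    memories = load_memory()
    prompt = "Known facts:\n" + "\n".join(memories) + f"\nNew observation: {observation}"
    response = call_llm(prompt)
    memories.append(observation)  # write the new fact to long-term memory
    save_memory(memories)
    return response

agent_step("The village scheduled an election for Friday.")
agent_step("Two candidates have registered so far.")
```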

Or am I missing something?