Comment by wongarsu
1 year ago
You could imagine an LLM being called in a loop with a prompt like

  You observe: {new input}
  You remember: {from previous output}
  React to this in the following format:
  My inner thoughts: [what you think about the current state]
  I want to remember: [information that is important for your future actions]
  Things I do: [actions you want to take]
  Things I say: [what you want to say to the user]
  ...
Not sure if that would qualify as an AGI as we currently define it. Given a sufficiently good LLM with strong reasoning capabilities, such a setup might be able to do many of the things we currently expect an AGI to do, including planning and learning new knowledge and skills (by collecting and storing positive and negative examples in its "memory"). But its learning would be limited, and I'm sure as soon as it exists we would agree that it's not AGI.
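A minimal sketch of that loop in Python, assuming a hypothetical call_llm function (any chat-completion API would do) and naive regex parsing of the labelled fields in the model's output:

```python
import re

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in any chat-completion API."""
    raise NotImplementedError

PROMPT = """You observe: {observation}
You remember: {memory}
React to this in the following format:
My inner thoughts: [what you think about the current state]
I want to remember: [information that is important for your future actions]
Things I do: [actions you want to take]
Things I say: [what you want to say to the user]"""

def parse_field(text: str, label: str) -> str:
    # Capture everything after "Label:" up to the next labelled line.
    match = re.search(rf"{label}:\s*(.*?)(?=\n[A-Z][^\n]*:|\Z)", text, re.S)
    return match.group(1).strip() if match else ""

memory = ""
while True:
    observation = input("> ")  # new input from the user/environment
    output = call_llm(PROMPT.format(observation=observation, memory=memory))
    memory = parse_field(output, "I want to remember")  # persists to next turn
    print(parse_field(output, "Things I say"))
```

The "memory" here is just whatever the model asked to keep, fed back verbatim on the next iteration; everything else is discarded each turn.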
This already exists (in a slightly different prompt format); it's the underlying idea behind ReAct: https://react-lm.github.io
As you say, I'm skeptical this counts as AGI. Although I admit I don't have a particularly rock-solid definition of what _would_ constitute true AGI.
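For comparison, ReAct interleaves free-form reasoning with tool calls in a single trace rather than splitting them into fixed fields. The format below follows the ReAct paper; the question and content are invented for illustration:

```
Question: What is the capital of the country where the Eiffel Tower stands?
Thought: The Eiffel Tower is in France, so I need the capital of France.
Action: Search[France]
Observation: France is a country in Western Europe ... Its capital is Paris.
Thought: The capital of France is Paris.
Action: Finish[Paris]
```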
(Author here). I tried building something similar to solve Wordle and the like, and the interesting part is that it's still insufficient. That's part of the mystery.
It works better to give the model functions it can call for taking actions and remembering things, but this approach does produce some interesting results.
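A rough sketch of what that function-calling variant could look like. The call_llm_with_tools helper, the tool names, and the message shapes are all assumptions for illustration, not any particular vendor's API:

```python
MEMORY: dict[str, str] = {}

def remember(key: str, value: str) -> str:
    """Tool: store a fact under a key for later turns."""
    MEMORY[key] = value
    return "stored"

def recall(key: str) -> str:
    """Tool: retrieve a previously stored fact."""
    return MEMORY.get(key, "nothing stored under that key")

TOOLS = {"remember": remember, "recall": recall}

def call_llm_with_tools(messages: list[dict]) -> dict:
    """Hypothetical model call: returns {"tool": name, "args": {...}}
    when the model wants to act, or {"say": text} when it replies."""
    raise NotImplementedError

messages = [{"role": "system",
             "content": "You may call remember(key, value) and recall(key)."}]
while True:
    messages.append({"role": "user", "content": input("> ")})
    while True:  # let the model chain tool calls before answering
        step = call_llm_with_tools(messages)
        if "tool" in step:
            result = TOOLS[step["tool"]](**step["args"])
            messages.append({"role": "tool", "content": result})
        else:
            print(step["say"])
            messages.append({"role": "assistant", "content": step["say"]})
            break
```

The structural difference from the prompt-template version is that memory lives outside the context window and the model decides when to read or write it, rather than having to re-emit everything it wants to keep on every turn.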