Comment by twohearted
15 hours ago
This is a great idea and great work.
Context is intuitively important, but people rarely put themselves in the LLM's shoes.
What would be eye-opening would be an LLM test harness that periodically routes a turn to a human instead of the model. Would you do better than the LLM? Which tools would you call at that moment, given only that context and no other knowledge? The way many of these systems are constructed, I'd wager a human would find it difficult.
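A minimal sketch of what I mean, in Python (everything here is hypothetical: the message format, the `model_complete` callable, and the reroute rate are just placeholders for whatever your harness already has):

    import random

    HUMAN_TURN_RATE = 0.05  # hypothetical knob: fraction of turns rerouted to a human

    def human_turn(messages):
        """Show the human exactly what the model would see -- the raw
        context and nothing else -- and ask for the next turn."""
        print("--- You are the model. This is your entire world: ---")
        for m in messages:  # assumes [{"role": ..., "content": ...}] messages
            print(f"[{m['role']}] {m['content']}")
        return input("Your move (reply text or tool call): ")

    def next_turn(messages, model_complete):
        """Route most turns to the model, but occasionally to a human,
        so you can feel how hard the task is given only this context."""
        if random.random() < HUMAN_TURN_RATE:
            return human_turn(messages)
        return model_complete(messages)

The point of the exercise is that the human gets no memory of earlier turns and no knowledge outside the transcript, which is exactly the position the model is in.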
The agent can't decide what is safe to delete from memory because, at that moment, it's a sort of bystander: someone else made the list it received, and someone else will receive the list it writes. The reasoning behind why the notes exist is lost. LLMs are living the Christopher Nolan film Memento.
The canonical example I use: how good are (the philosophical) you at programming on a whiteboard, given one shot and no tools, versus at your computer with access to everything? Judging LLMs by the whiteboard rubric seems as dumb as judging humans by it.