Comment by dartos
1 month ago
To your first point, the LLM still can’t know what it doesn’t know.
Just like you can’t google for a movie if you don’t know the genre, any scenes, or any actors in it, an AI can’t build its own context if it doesn’t already have good enough context.
IMO that’s the point most agent frameworks miss. Piling on more LLM calls doesn’t fix the fundamental limitations.
TL;DR an LLM can’t magically make good context for itself.
I think you’re spot on with your second point. The big differentiators for big AI models will be data that’s not easy to google for and/or proprietary data.
Lucky they got all their data before people started caring.
> Just like you can’t google for a movie if you don’t know the genre, any scenes, or any actors in it,
ChatGPT was able to answer "What was the video game with cards where you play against a bear guy, a magic guy and a set of robots?" (it's Inscryption). This is one area where LLMs work.
“Playing cards against a bear guy” is a pretty iconic part of that game… that you, as a human, had the wherewithal to put into that context. Agents don’t have that wherewithal. They’d never come up with “playing cards against a bear guy” if you asked them “what game am I thinking of?”
Let’s do another experiment. Do the same for the game I’m thinking of right now.
There were characters in it and one of them had a blue shirt, but that’s all I can remember.
LLMs are really good at 20 questions, so if you give one a chance to ask some follow-up questions (which it will do if given such a prompt), it will probably figure it out pretty quickly.
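A minimal sketch of that kind of 20-questions loop, assuming the OpenAI Python client; the model name, prompt wording, and `GUESS:` convention are illustrative choices, not anything from this thread:

```python
# 20-questions-style loop: the model asks one follow-up question per turn
# and the human answers until the model ventures a guess.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": (
        "I'm thinking of a video game. Ask me one short question at a time "
        "to narrow it down. When you're confident, say GUESS: <title>."
    )},
    {"role": "user", "content": (
        "There were characters in it and one of them had a blue shirt, "
        "but that's all I can remember."
    )},
]

for _ in range(20):  # cap the exchange at 20 turns
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    ).choices[0].message.content
    print(reply)
    if reply.strip().upper().startswith("GUESS:"):
        break
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": input("> ")})
```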
You described all of those things to some extent, as much as they apply to video games. No magic here.