Comment by immibis

1 month ago

> Just like you can’t google for a movie if you don’t know the genre, any scenes, or any actors in it,

ChatGPT was able to answer "What was the video game with cards where you play against a bear guy, a magic guy and a set of robots?" (it's Inscryption). This is one area where LLMs work.

“Playing cards against a bear guy” is a pretty iconic part of that game… that you, as a human, had the wherewithal to put into that context. Agents don’t have that wherewithal. They’d never come up with “playing cards against a bear guy” if you asked them “what game am I thinking of?”

Let’s do another experiment. Do the same for the game I’m thinking of right now.

There were characters in it and one of them had a blue shirt, but that’s all I can remember.

  • LLMs are really good at 20 questions, so if you give one a chance to ask some follow-up questions (which it will do if prompted to) it will probably figure it out pretty quickly.

    • Sure, so if I, as a human, play 20 questions and add all that context to an LLM, it can perform.

      That’s true. It’s why these things aren’t useless.

      I’m saying that LLMs aren’t able to build that context for themselves. It’s why these agentic startups are doomed to morph into a more sensible product like search, document QA, or automated browser testing.

You described all of those things to some extent, as much as they apply to video games. No magic here.