Comment by xtracto

2 days ago

For me, the problem is the "chat" mechanic that OpenAI and others use to present the product. It lends itself to strong anthropomorphizing.

If instead of a chat interface we simply had a "complete the phrase" interface, people would better understand the tool for what it is.

But people aren't using ChatGPT to complete phrases. They're using it to get their tasks done or their questions answered.

The fact that ChatGPT's pretraining is done with a "complete the phrase" task has no bearing on how people actually end up using it.

  • It's not just the pretraining; it's the entire scaffolding between the user and the LLM itself that contributes to the illusion. How many people would still assume these chatbots were conscious or intelligent if they had to build their own context manager, memory manager, system prompt, personality prompt, and interface?

I agree 100%. Most people have never interacted directly with a raw LLM. Their experience is ChatGPT, Claude, Grok, or another tool that automatically handles context, memory, personality, and temperature, and that is deliberately engineered to communicate like a human. There is a ton of very deterministic programming between you and the LLM itself to create this experience, and much of the time, when people talk about the ineffable intelligence of chatbots, they are really describing the illusion created by this scaffolding.
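To make that concrete, here's a minimal sketch of what that scaffolding can look like. Everything in it is hypothetical: `complete()` just stands in for any raw next-token completion endpoint. The point is that the "chat" is really a single text prompt, assembled deterministically from a personality prompt, stored history, and the latest user message, which the model then completes:

```python
# Minimal sketch of the scaffolding between a user and a raw LLM.
# `complete()` is a stand-in for a next-token completion endpoint;
# all names here are illustrative, not any real API.

def complete(prompt: str) -> str:
    """Stand-in for a raw completion model: given text, continue it."""
    return " (model continuation would appear here)"

SYSTEM_PROMPT = "You are a helpful, friendly assistant."  # "personality"

history: list[tuple[str, str]] = []  # (speaker, text) pairs: the "memory"

def chat_turn(user_message: str) -> str:
    # Context manager: flatten everything into one completion prompt.
    lines = [SYSTEM_PROMPT]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")  # the model merely completes this phrase
    reply = complete("\n".join(lines)).strip()
    history.append(("User", user_message))
    history.append(("Assistant", reply))
    return reply

print(chat_turn("What is an LLM?"))
```

From the model's point of view, the only task is continuing the text after "Assistant:". Everything that feels like conversation, memory, or personality lives in this deterministic wrapper.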