Comment by deaux
16 days ago
> I consulted Claude chat and it admitted this as a major problem with Claude these days, and suggested that I should ask what the coordinates of the UI controls are on the screenshot, thus forcing it to look
If 3 years into LLMs even HNers still don't understand that the response they give to this kind of question is completely meaningless, the average person really doesn't stand a chance.
The whole “chat with an AI” paradigm is the culprit here. It primes people to think they are actually having a conversation with something that has a mental model.
It’s just a text generator that generates plausible text for this role play. But the chat paradigm is pretty useful in helping the human. It’s like chat is a natural I/O interface for us.
I disagree that it’s “just a text generator” but you are so right about how primed people are to think they’re talking to a person. One of my clients has gone all-in on openclaw: my god, the misunderstanding is profound. When I pointed out a particularly serious risk he’d opened up, he said, “it won’t do that, because I programmed it not to”. No, you tried to persuade it not to with a single instruction buried in a swamp of markdown files that the agent is itself changing!
I insist on the text generator nature of the thing. It’s just that we built harnesses to activate on certain sequences of text.
Think of it as three people in a room. One (the director), says: you, with the red shirt, you are now a plane copilot. You, with the blue shirt, you are now the captain. You are about to take off from New York to Honolulu. Action.
Red: Fuel checked, captain. Want me to start the engines?
Blue: yes please, let’s follow the procedure. Engines at 80%.
Red: I’m executing: raise the levers to 80%
Director: levers raised.
Red: I’m executing: read engine stats meters.
Director: Stats read engine ok, thrust ok, accelerating to V0.
Now pretend that when the director hears “I’m executing: raise the levers to 80%”, instead of roleplaying, she actually issues a command to raise the engine levers of a plane to 80%. When she hears “I’m executing: read engine stats”, she actually gets data from the plane and provides it to the actor.
See how text generation for a role play can actually be used to act on the world?
In this thought experiment, the human is the blue shirt, Opus 4-6 is the red, and Claude code is the director.
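The director's role above can be sketched as a tiny harness loop. This is a toy illustration, not any real agent framework: the pattern matched, the tool names, and the functions (`raise_levers`, `read_engine_stats`) are all hypothetical stand-ins for whatever real actions a harness wires up.

```python
import re

def raise_levers(pct: int) -> str:
    # In a real system this would command actual hardware;
    # here it just reports what it (pretend) did.
    return f"levers raised to {pct}%"

def read_engine_stats() -> str:
    return "engines ok, thrust ok, accelerating to V0"

def harness_step(model_output: str):
    """Play the 'director': if the generated text contains an
    "I'm executing: ..." line, perform the matching real action and
    return its result as text to append to the transcript.
    Return None for plain dialogue (nothing to act on)."""
    m = re.search(r"I'm executing: (.+)", model_output)
    if m is None:
        return None
    action = m.group(1)
    if action.startswith("raise the levers"):
        pct = int(re.search(r"(\d+)", action).group(1))
        return raise_levers(pct)
    if action.startswith("read engine stats"):
        return read_engine_stats()
    return f"unrecognized action: {action!r}"
```

The outer loop (not shown) just appends whatever `harness_step` returns to the conversation and asks the model to generate again. That's the whole trick: the model only ever produces text, and the harness pattern-matches on certain sequences of it to act on the world.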
> No, you tried to persuade it not to with a single instruction
Even persuade is too strong a word. These things don't have the motivation needed for persuasion to be a thing. What your client did was put one data point in the context that it will use to generate the next tokens from. If that one data point doesn't shift the context enough to make it produce an output that corresponds to that data point, then it won't. That's it, no sentience involved.
> It’s just a text generator that generates plausible text for this role play.
Often enough, that text is extremely plausible.
I pin just as much responsibility on people not taking the time to understand these tools before using them. RTFM basically.
I think the mindset you have to have is "it understands words, but has no concept of physics".
It doesn’t help that a frequent recommendation on HN whenever someone complains about Claude not following a prompt correctly is to “ask Claude itself how to rewrite a prompt to get the result you want”.
Which sure, can be helpful, but it’s kinda just a coincidence (plus some RLHF probably) that question happens to generate output text that can be used as a better prompt. There’s no actual introspection or awareness of its internal state or architecture beyond whatever high level summary Anthropic gives it in its “soul” document et al.
But given how often I’ve read that advice on here and Reddit, it’s not hard to imagine how someone could form an impression that Claude has some kind of visibility into its own thinking or precise engineering. Instead of just being as much of a black box to itself as it is to us.
It’s not meaningless. It’s a signal that the agent has run out of context to work on the problem, which is not something it can resolve on its own. Decomposing problems and managing cognitive (or quasi-cognitive, in this case) burden is a programmer’s job regardless of the particular tools.
I think you are saying what I was about to suggest:
For this single problem: open a new claude session for this particular issue, refine until it's fixed, then incorporate it into the larger project.
I think the QA agent might have been the same step here, but it depends on how that QA agent was set up.
> completely meaningless
This is way too strong isn't it? If the user naively assumes Claude is introspecting and will surely be right, then yeah, they're making a mistake. But Claude could get this right, for the same reasons it gets lots of (non-introspective) things right.
It's not too strong. If it answered from its weights, it's pretty meaningless. If it did a web search and found reports of other people saying this, you'd want to know that this is how it answered - and then you'd probably just say that here on HN rather than appealing to claude as an authority on claude.
They also said it "admitted" this as a major problem, as if it has been compelled to tell an uncomfortable truth.
GP here, this is indeed exactly what I was getting at, thanks for wording it for me; you put it better than I would've.
In this specific case I'd go one step further and say that even if it did a web search, it's still almost certainly useless because of the low quality of the results and their outdatedness, two things LLMs are bad at discerning. From weights it doesn't know how quickly this kind of thing becomes outdated, and out of the box it doesn't know how to account for reliability.
Maybe I'm just being too literal, but I don't know if you're really disagreeing with me. I was disputing "the response they give to this kind of question is completely meaningless". An answer from its weights is out of date, but only completely meaningless if this is a completely new issue with nothing relevant in the training data. And, as you say, the answer could be search-based and up to date.