Comment by ianbicking
1 year ago
"Providing an LLM with examples and step-by-step instructions in a prompt means the user is figuring out the "reasoning steps" and handing them to the LLM, instead of the LLM figuring them out by itself. We have "reasoning machines" that are intelligent but seem to be hitting fundamental limits we don't understand."
One thing an LLM _also_ doesn't bring to the table is an opinion. We can push it in that direction by giving it a role ("you are an expert developer" etc), but it's a bit weak.
If you give an LLM an easy task with minimal instructions, it will do the task in the most conventional, common-sense fashion. And why shouldn't it? It has no opinion, and your prompt doesn't give it one, so it just does the most normal-seeming thing. If you want it to solve the task in any other way, you have to tell it so.
I think a hard task is similar. If you don't tell the LLM _how_ to solve the hard task, it will approach it in the most conventional, common-sense way. For a hard task the result isn't just boring, it's often a failure. And that shouldn't surprise us: hard problems approached with conventional common sense will often end in failure! Giving the LLM a thought process to follow is a quick education on how to solve the problem.
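To make that contrast concrete, here's a rough sketch, not anything from the article: the second prompt hands the model an explicit thought process instead of leaving the approach entirely up to it. It assumes the `openai` Python package; the model name, file name, and `call_llm` helper are all illustrative stand-ins for whatever you actually use.

    from openai import OpenAI  # assumes the openai package; any chat client would do

    client = OpenAI()

    def call_llm(prompt: str) -> str:
        # Single-turn helper around the chat completions API.
        resp = client.chat.completions.create(
            model="gpt-4o",  # model name is an assumption; substitute your own
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    source_code = open("slow_function.py").read()  # hypothetical code under review

    # Bare prompt: the model falls back on the most conventional approach.
    bare = call_llm("Find the performance problem in this function:\n\n" + source_code)

    # Guided prompt: the "thought process to follow" described above.
    guided = call_llm(
        "Find the performance problem in this function. Work step by step:\n"
        "1. List each loop and the data structures it touches.\n"
        "2. Estimate the cost of each loop.\n"
        "3. Identify the dominant cost and explain why.\n"
        "4. Propose a cheaper alternative.\n\n" + source_code
    )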
Maybe we just need to train LLMs on more problem solving? And maybe LLMs worked better when they were initially trained on code for exactly that reason: code is a much larger corpus of task-solving examples than is available anywhere else. That is, maybe we don't talk often enough, or clearly enough, about how to solve natural-language problems for the models to really learn those techniques.
Also, as the author discusses in the article with respect to agents, the inability to rewind responses may keep the LLM from approaching problems the way humans do, but that can be worked around with agents or multi-prompt approaches. These approaches don't seem that impressive in practice right now, but maybe we just need to figure them out (and maybe with better training the models themselves will get better at handling these recursive calls).
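One minimal sketch of a multi-prompt "rewind" loop in that spirit, reusing the hypothetical `call_llm` helper above (the prompts and the "LOOKS GOOD" marker are illustrative assumptions, not the article's method): a draft answer is critiqued by a second call, and if the critique flags a problem the draft is discarded and the task is retried with the critique folded back into the prompt.

    def solve_with_rewind(task: str, max_attempts: int = 3) -> str:
        feedback = ""
        draft = ""
        for _ in range(max_attempts):
            prompt = task if not feedback else (
                task
                + "\n\nA previous attempt was rejected for this reason:\n"
                + feedback
                + "\nTake a different approach."
            )
            draft = call_llm(prompt)

            # Second prompt: critique the draft instead of continuing from it.
            critique = call_llm(
                "Check this answer for mistakes. Reply LOOKS GOOD if it is correct, "
                "otherwise describe the main problem.\n\n"
                f"Task: {task}\n\nAnswer: {draft}"
            )
            if "LOOKS GOOD" in critique:
                return draft
            feedback = critique  # "rewind": drop the draft, keep only the lesson

        return draft  # give up and return the last attempt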
LLMs absolutely do have opinions. Take a large enough base model and have it chat without a system prompt, and it will have an opinion on most things - unless this was specifically trained out of it through RLHF, as is the case for all commonly used chatbots.
And yes, of course, that opinion is going to be the "average" of its training data, but why is that a surprise? Humans don't come with innate opinions, either - the ones we end up having are shaped by our upbringing, both its broad cultural aspects and specific personal experiences. To the extent an LLM has either, it's the training process, so of course that shapes the opinions it will exhibit when not prompted to do anything else.
Now, the fact that you can "override" this default persona of any LLM so trivially by prompting it is IMO stronger evidence that it's not really an identity. But that, I think, is also a function of their training - after all, that training basically consists of completing a bunch of text representing many very different opinions. In a very real sense, we're training models to assume that opinions are fungible. But if you take a model and train it specifically on, e.g., the writings of some philosophical school, it will internalize those.
I am extremely alarmed by the number of HN commenters who apparently confuse "is able to generate text that looks like" with "has a". You guys are going crazy with this anthropomorphization of a token predictor. Doesn't this concern you when it comes to phishing or similar things?
I keep hoping it's just shorthand conversational phrasing, but the conclusions seem to back the idea that you think it's actually thinking?
Do you have a mechanistic model for what it means to think? If not, how do you know thinking isn't equivalent to sophisticated next-token prediction?
The "stochastic parrot" crowd keeps repeating "it's just a token predictor!" like that somehow makes any practical difference whatsoever. Thing is, if it's a token predictor that consistently correctly predicts tokens that give the correct answer to, say, novel logical puzzles, then it is a reasoning token predictor, with all that entails.
They'll just look incredibly silly, say, ten years from now.
In fact, much of the popular commentary around ChatGPT from two years ago already looks that way.
I couldn't agree more. It is shocking to me how many of my peers think something magical is happening inside an LLM. It is just a token predictor. It doesn't know anything. It can't solve novel problems.