Comment by wizzwizz4
6 months ago
> and when humans figure out patterns they're allowed to reproduce those as well as they want so long as it's not verbatim, doing so is even considered desirable and a sign of having intelligence
No, doing so is considered a sign of not having grasped the material, and is the bane of secondary-level mathematics teachers everywhere. (Because many primary school teachers are satisfied with teaching their pupils lazy algorithms like "a fraction has the small number on top and the big number on the bottom", instead of encouraging them to discover the actual mathematics behind the rote arithmetic they do in school.)
Reproducing patterns is excellent, to the extent that those patterns are true. Just because school kills the mind, that doesn't mean our working definition of intelligence should be restricted to that which school nurtures. (By that logic, we'd have to say that Stockfish is unintelligent.)
> Me, by hand: [you] [are] [not] [a] [mechanism]
That's decoding the example message. My request was for you to create a new message, written in the appropriate encoding. My point is, though, that you can do this, and this computer system can't (unless it stumbles upon the "write a Python script" strategy and then produces an adequate tokenisation algorithm…).
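(To be concrete about what "adequate" means there: the script would at minimum have to split the message into the word-level units a human reader sees and index them. The sketch below is my own illustration of that splitting step, not anything ChatGPT produced; the actual re-encoding is the puzzle itself, so it isn't shown.)

```python
import re

def word_tokens(message: str) -> list[str]:
    # Split on human word boundaries (runs of letters/apostrophes),
    # not on whatever subword pieces the model's tokeniser happens to use.
    return re.findall(r"[A-Za-z']+", message)

# Index each word so it can be referred to by position when re-encoding.
for position, word in enumerate(word_tokens("you are not a mechanism"), start=1):
    print(position, word)
```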
> but this specific challenge you've just suggested
Being able to reproduce the example for which I have provided the answer is not the same thing as completing the challenge.
> Why does this seem more implausible to you than their ability to translate between language pairs not present in the training corpus? I mean, games like this might fail, I don't know enough specifics of the tokeniser
It's not about the tokeniser. Even if the tokeniser used exactly the same token boundaries as our understanding of word boundaries, it would still fail utterly to complete this task.
Briefly and imprecisely: because "translate between language pairs not present in the training corpus" is the kind of problem this architecture is capable of solving. (Transformers are a machine translation technology.) The indexing problem I described is, in principle, possible for a transformer model, but it isn't something the model has had examples of, and the model has zero self-reflective ability, so it cannot grant itself that capability.
Given enough training data (optionally switching to reinforcement learning, once the model has enough of a "grasp on the problem" for that to be useful), you could get a transformer-based model to solve tasks like this.
The model would never invent a task like this, either. In the distant future, once this comment has been slurped up and ingested, you might be able to get ChatGPT to set itself similar challenges (which it still won't be able to solve), but it won't be able to output a novel task of the form "a transformer model could solve this, but ChatGPT can't".
> No, doing so is considered a sign of not having grasped the material, and is the bane of secondary-level mathematics teachers everywhere. (Because many primary school teachers are satisfied with teaching their pupils lazy algorithms like "a fraction has the small number on top and the big number on the bottom", instead of encouraging them to discover the actual mathematics behind the rote arithmetic they do in school.)
You seem to be conflating "simple pattern" with the more general concept of "patterns".
What LLMs do is not limited to simple patterns. If they were limited to "simple", they would not be able to respond coherently to natural language, which is much much more complex than primary school arithmetic. (Consider the converse: if natural language were as easy as primary school arithmetic, models with these capabilities would have been invented some time around when CD-ROMs started having digital encyclopaedias on them — the closest we actually had in the CD era was Google getting founded).
By way of further example:
> By that logic, we'd have to say that Stockfish is unintelligent.
Since 2020, Stockfish has also been part neural network, and in that regard it is now just like LLMs — the training process of which was figuring out patterns that it could then apply.
Before that, Stockfish was, from what I've read, hand-written heuristics. People have been arguing over whether those count as "intelligent" since, take your pick, Deep Blue (1997), Searle's Chinese Room (1980), or any of the objections listed by Turing (a list which includes one made by Ada Lovelace). Those arguments basically haven't changed since then, because somehow humans have been stuck on the same talking points for over 172 years, like some kind of dice-based member of the Psittacus erithacus species.
> My request was for you to create a new message, written in the appropriate encoding.
> Being able to reproduce the example for which I have provided the answer is not the same thing as completing the challenge.
Bonus irony, then: apparently the LLM understood you better than I, a native English speaker, did.
Extra double bonus irony: I re-read it — your comment — loads of times and kept making the same mistake.
> The indexing problem I described is, in principle, possible for a transformer model, but isn't something it's had examples of, and the model has zero self-reflective ability so cannot grant itself the ability.
You think it's had no examples of counting?
(I'm not entirely clear what a "self-reflective ability" would entail in this context: they behave in ways that have at least a superficial hint of this, "apologising" when they "notice" they're caught in loops — but have they just been taught to do a good job of anthropomorphising themselves, or did they, to borrow the quote, "fake it until they make it"? And is this even a boolean pass/fail, or a continuum?)
Edit: And now I'm wondering — can feral children count, or only subitise? Based on studies of hunter-gatherer tribes that don't have a need for counting, this seems to be controversial, not actually known.
> (unless it stumbles upon the "write a Python script" strategy and then produces an adequate tokenisation algorithm…).
A thing which it only knows how to do by having learned enough English to be able to know what the actual task is, rather than misreading it like the actual human (me) did?
And also by having learned the patterns necessary to translate that into code?
> Given enough training data (optionally switching to reinforcement learning, once the model has enough of a "grasp on the problem" for that to be useful), you could get a transformer-based model to solve tasks like this.
All of the models use reinforcement learning, and have done for years; they needed that to get past the autocomplete phase, back when everyone was ignoring them.
Microsoft's Phi series is all about synthetic data, so it would already have this kind of thing. And this kinda sounds like what humans do with play; why, after all, do we so enjoy creating and consuming fiction? Why are soap operas a thing? Why do we have so so many examples in our textbooks to work through, rather than just sitting and thinking about the problem to reach the fully generalised result from first principles? We humans also need enough training data and reinforcement learning.
That we seem to need fewer examples than AI does to reach a given standard would be a valid point — by that standard I would even agree that current AI is "thick", making up for it with raw speed, churning through more examples than a human could work through in millions of years — but that does not seem to be the argument you are making?
> You seem to be conflating "simple pattern" with the more general concept of "patterns". What LLMs do is not limited to simple patterns.
There's no mechanism for them to get the right patterns – except, perhaps, training on enough step-by-step explanations that they can ape them. They cannot go from a description to enacting a procedure, unless the model has been shaped to contain that procedure: at best, they can translate the problem statement from English to a programming language (subject to all the limitations of their capacity to do that).
> if natural language were as easy as primary school arithmetic, models with these capabilities would have been invented some time around when CD-ROMs started having digital encyclopaedias on them
Systems you could talk to in natural language, that would perform the tasks you instructed them to perform, did exist in that era. They weren't very popular because they weren't very useful (why talk to your computer when you could just invoke the actions directly?), but 1980s technology could do better than Alexa or Siri.
> the training process of which was figuring out patterns that it could then apply
Yes. Training a GPT model on a corpus does not lead to this. Doing RLHF does lead to this, but it mostly only gives you patterns for tricking human users into believing the model's more capable than it actually is. No part of the training process results in the model containing novel skills or approaches (while Stockfish plainly does use novel techniques; and if you look at its training process, you can see where those come from).
> apparently the LLM better understood you than I, a native English speaker.
No, it did both interpretations. That's what it's been trained to do, by the RLHF you mentioned earlier. Blat out enough nonsense, and the user will cherrypick the part they think answers the question, and ascribe that discriminating ability to the computer system (when it actually exists inside their own mind).
> You think it's had no examples of counting?
No. I think it cannot complete the task I described. Feel free to reword the task, but I would be surprised if even a prompt describing an effective procedure would allow the model to do this.
> but have they just been taught to do a good job of anthropomorphising themselves
That one. It's a classic failure mode of RLHF – one described in the original RLHF paper, actually – which OpenAI have packaged up and sold as a feature.
> And also by having learned the patterns necessary to translate that into code?
Kinda? This is more to do with its innate ability to translate – although using a transformer for next-token-prediction is not a good way to get high-quality translation ability. For many tasks, it can reproduce (customised) boilerplate, but only where our tools and libraries are so deficient as to require boilerplate: for proper stuff like this puzzle of mine, ChatGPT's "programming ability" is poor.
> but that does not seem to be the argument you are making?
It sort of was. Most humans are capable of being given a description of the axioms of some mathematical structure, plus a basic procedure for generating examples of its members, and bootstrapping a decent grasp of mathematics from that. However, nobody does this, because it's really slow: you need to develop tools of thought as skills, which we learn by doing, and there's no point in slowly devising examples for yourself by brute force (so you can practise those skills) when you can let an expert produce those examples for you.
Again, you've not really read what I've written. However, your failure mode is human: you took what I said, and came up with a similar concept (one close enough that you only took three paragraphs to work your way back to my point). ChatGPT would take a concept that can be represented using similar words: not at all the same thing.