Comment by ChicagoDave
9 hours ago
My point is not about morality. It's about ROI, and the fact that OpenAI can't and won't ever return anything remotely close to what's been invested. Adult content is not getting them closer to profitability.
And if anyone believes the AGI hyperbole, oh boy I have a bridge and a mountain to sell.
LLM tech will never lead to AGI. You need a tech that mimics synapses. It doesn’t exist.
I also have a hard time understanding how AGI will magically appear.
LLMs have their name for a reason: they model human language (output given an input) from human text (and other artifacts).
And now the idea seems to be that if we do more of it, or make the models even larger, they will stop being models of human language generation? Or that human language generation is all there is to AGI?
I wish someone could explain the claim to me...
Because the first couple of major iterations looked like exponential improvements, and, because VC/private money is stupid, they assumed the trend must continue on the same curve.
And because there's something in the human mind that has a very strong reaction to being talked to, and because LLMs are specifically good at mimicking plausible human speech patterns, ChatGPT really, really hooked a lot of people (including said VC/private money people).
LLMs aren't language models; they are a general-purpose computing paradigm. LLMs are circuit builders: the converged parameters define pathways through the architecture that pick out specific programs. Or, as Karpathy puts it, LLMs are a differentiable computer [1]. Training LLMs discovers programs that reproduce the input sequence well. Roughly the same architecture can generate passable images, music, or even video.
It's not that language generation is all there is to AGI, but that to sufficiently model text that is about the wide range of human experiences, we need to model those experiences. LLMs model the world to varying degrees, and perhaps in the limit of unbounded training data, they can model a human's perspective in it as well.
[1] https://x.com/karpathy/status/1582807367988654081
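For concreteness, here is a minimal sketch of the "training discovers a program" idea (my own toy example in PyTorch, not from Karpathy; every name and hyperparameter here is made up). Gradient descent over a fully differentiable pipeline searches the parameter space for weights that reproduce the input sequence:

    import torch
    import torch.nn as nn

    text = "to be or not to be"
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    ids = torch.tensor([stoi[c] for c in text])

    # Toy next-token model: embedding -> nonlinearity -> logits.
    # The converged weights are the "program"; training searches for it.
    model = nn.Sequential(
        nn.Embedding(len(chars), 16),
        nn.Linear(16, 64), nn.Tanh(),
        nn.Linear(64, len(chars)),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(500):
        logits = model(ids[:-1])         # predict each next char from the current one
        loss = loss_fn(logits, ids[1:])  # target is the sequence shifted by one
        opt.zero_grad()
        loss.backward()                  # the whole pipeline is differentiable
        opt.step()

(This toy model only conditions on one previous character; a transformer conditions on the whole context, but the training objective is the same.)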
<< LLM tech will never lead to AGI.
I suspect this may be one of those predictions that don't quite pan out. I am not saying AGI is a given, but 'never' is about as unlikely.
...Why?
Because always/never are absolutes that are either very easy or very hard to see through. For example, 'I will never die', 'I will never tell a lie', and 'I will never eat a pie' all suffer from this, even though 'I will never die' is the most implausible of the three. And it gets worse as the claims get more abstract:
'The machine will always know where to go from now on.'
AGI might be possible with more parameter and data scaling for LLMs. It is not outside the realm of possibility, given that there is no proof yet of fundamental limits of LLMs; the current limitation is definitely on the hardware side.
>LLM tech will never lead to AGI. You need a tech that mimics synapses. It doesn’t exist.
Why would you think synapses (or their dynamics) are required for AGI rather than being incidental owing to the constraints of biology?
(This discussion never goes anywhere productive but I can't help myself from asking)
I don't see what is so complicated about modelling a synapse. Doesn't AlmostAnyNonLinearFunc(sum of weighted inputs) work well enough?
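That is essentially the standard artificial-neuron abstraction, where each weight stands in for one synapse. A minimal sketch (the function name and all numbers are made up, purely illustrative):

    import math

    def neuron(inputs, weights, bias):
        # Each weight stands in for one synapse; the activation is the nonlinearity.
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid here, but almost any nonlinearity works

    print(neuron([0.5, -1.0, 0.25], [0.8, 0.3, -0.5], 0.1))

Whether this point-neuron abstraction captures everything that matters about biological synapse dynamics is, of course, the open question in this thread.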