Comment by soraminazuki
16 days ago
That LLMs are by definition models of human speech and have no cognitive capabilities. There is no sound logic behind what LLMs spit out, and that will remain the case because they merely mimic their training data. No amount of vague future transformers will transform away how the underlying technology works.
But let's say we have something more than an LLM; that still wouldn't make natural languages a good replacement for programming languages. This is because natural languages are, as the article mentions, imprecise. They just aren't the right tool. And no, transformers can't change how languages work. They can only "recontextualize," or as some people might call it, "hallucinate."
Citation needed. Modern transformers are much, much more than just speech models. Precisely define "cognitive capabilities," and prove that neural models can never mimic them.
> But let's say we have something more than an LLM
We do. Modern multi-modal transformers.
> This is because natural languages are, as the article mentions, imprecise
Two different programmers can take a well-enough defined spec and produce two separate code bases that may (but need not) differ in implementation, while still having the exact same interfaces and testable behavior.
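To make that concrete, here is a minimal sketch in Python (the `Stack`, `ListStack`, and `LinkedStack` names are hypothetical, invented for illustration): two implementations that differ internally yet expose the same interface and pass the same black-box test.

```python
# A minimal sketch of the point above. Stack, ListStack, and LinkedStack
# are hypothetical names, not anything from the thread or the article.
from typing import Optional, Protocol


class Stack(Protocol):
    """The interface the spec pins down."""
    def push(self, item: int) -> None: ...
    def pop(self) -> Optional[int]: ...


class ListStack:
    """Programmer A: a Python list under the hood."""
    def __init__(self) -> None:
        self._items = []

    def push(self, item: int) -> None:
        self._items.append(item)

    def pop(self) -> Optional[int]:
        return self._items.pop() if self._items else None


class LinkedStack:
    """Programmer B: a chain of (item, next) tuples under the hood."""
    def __init__(self) -> None:
        self._head = None

    def push(self, item: int) -> None:
        self._head = (item, self._head)

    def pop(self) -> Optional[int]:
        if self._head is None:
            return None
        item, self._head = self._head
        return item


def check_spec(stack: Stack) -> None:
    """One black-box test; either implementation must pass it."""
    stack.push(1)
    stack.push(2)
    assert stack.pop() == 2
    assert stack.pop() == 1
    assert stack.pop() is None


for impl in (ListStack(), LinkedStack()):
    check_spec(impl)
```

The internals diverge completely, but the spec's tests can't tell the two apart, which is the only sense of "sameness" the spec demands.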
> And no, transformers can't change how languages work. They can only "recontextualize," or as some people might call it, "hallucinate."
You don't understand recontextualization if you think it means hallucination. Or vice versa. Hallucination is about returning incorrect or false data. Recontextualization is akin to decompression, and can be lossy or "effectively" lossless (within a probabilistic framework; again, the interfaces and behavior just need to match).
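As a rough illustration of the lossy/lossless distinction being drawn here (and only of that distinction; this is not a model of transformer internals), in Python:

```python
# Illustrates lossless vs. lossy recovery only; not a claim about
# how transformers work internally.
import zlib

original = b"the exact same interfaces and testable behavior"

# Lossless: decompression recovers the input bit for bit.
assert zlib.decompress(zlib.compress(original)) == original

# Lossy: detail is discarded, yet the result can still be
# "effectively" equivalent under the test you actually apply.
measurement = 3.14159
stored = round(measurement, 2)             # precision lost on the way in
assert stored != measurement               # not bit-identical...
assert abs(stored - measurement) < 0.005   # ...but passes the tolerance we test for
```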
The burden of proof is on the one making extraordinary claims. There has been no indication from any credible source that LLMs are able to think for themselves. Human brains are still a mystery. I don't know why you can so confidently claim that neural models can mimic what humanity knows so little about.
> Two different programmers can take a well-enough defined spec and produce two separate code bases that may (but need not) differ in implementation, while still having the exact same interfaces and testable behavior.
Imagine doing that without a rigid and concise way of expressing your intentions. Or trying again and again, in vain, to get the LLM to produce the software you want. Or debugging it. Software development would become chaotic and a lot less fun in that hypothetical future.
The burden of proof is not on the person pointing out that a citation is needed for a claim that something is impossible. Vague phrases mean nothing. You need to prove that these fundamental limitations exist, and you have not done that. I have been careful to frame all of this as theoretical and possible; you, on the other hand, are claiming it is impossible. That is a much stronger claim, and it deserves a correspondingly strong argument.
> I don't know why you can so confidently claim that neural models can mimic what humanity knows so little about.
I'm simply not ruling it out. But you're confidently claiming that it's flat out never going to happen. Do you see the difference?