Comment by soraminazuki
15 days ago
The burden of proof is on the one making extraordinary claims. There has been no indication from any credible source that LLMs are able to think for themselves. Human brains are still a mystery. I don't know how you can so confidently claim that neural models can mimic what humanity knows so little about.
> Two different programmers can take a well-enough defined spec and produce two separate code bases that may (but not must) differ in implementation, while still having the exact same interfaces and testable behavior.
Imagine doing that without a rigid and concise way of expressing your intentions. Or trying again and again in vain to get the LLM to produce the software that you want. Or debugging it. Software development would become chaotic and a lot less fun in that hypothetical future.
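The quoted scenario can be sketched concretely. A minimal illustration (hypothetical function names, not from the thread): two implementations that differ internally but share the exact same interface and testable behavior.

```python
def is_prime_trial(n: int) -> bool:
    """Implementation A: trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def is_prime_sieve(n: int) -> bool:
    """Implementation B: build a sieve of Eratosthenes, then look up n."""
    if n < 2:
        return False
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return sieve[n]


# Same interface, same observable behavior, different implementations:
for n in range(200):
    assert is_prime_trial(n) == is_prime_sieve(n)
```

The point of contention in the thread is whether a natural-language spec can pin down that shared behavior as unambiguously as the test suite does.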
The burden of proof is not on the person telling you that a citation is needed when claiming that something is impossible. Vague phrases mean nothing. You need to prove that there are these fundamental limitations, and you have not done that. I have been careful to express that this is all theoretical and possible, you on the other hand are claiming it is impossible; a much stronger claim, which deserves a strong argument.
> I don't know why you can so confidently claim that neural models can mimic what humanity knows so little about.
I'm simply not ruling it out. But you're confidently claiming that it's flat out never going to happen. Do you see the difference?
You can't just make extraordinary claims [1][2], demand rigorous citations from those who question them, even going as far as to word-lawyer the definition of cognition [3], and then reverse the burden of proof. All the while providing no evidence beyond what essentially boils down to "anything and everything is possible."
> Vague phrases mean nothing.
Yep, you made my point.
> Do you see the difference?
Yes, I clearly state my reasons. I can confidently claim that LLMs are no replacement for programming languages for two reasons.
1. Programming languages are superior to natural languages for software development. Nothing on earth, not even transformers, can make up for the unavoidable lack of specificity in hypothetical natural-language programs without making things up; that's how logic works.
2. LLMs, as impressive as they may be, are fundamentally computerized parrots, so you can't understand or control how they generate code, unlike compilers such as GCC, which expose all of that through their source code.
This is just stating the obvious here, no surprises.
[1]: https://news.ycombinator.com/item?id=43585498
Your error is in assuming (or at least not disproving) that natural language cannot fully capture the precision of a programming language. But we already see in real life how higher-level languages, while sometimes making you give up control of underlying mechanisms, still let you create the same programs you'd create with other languages, barring any specific technical feature. What is different here, though, is that natural language allows you to reduce and increase precision as needed, anywhere you want, offering both high- and low-level descriptions of a program.
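The abstraction argument above can be illustrated within a single language (hypothetical function names, chosen only for this sketch): the same program written at two levels of precision, where the higher-level form gives up control of the iteration mechanics while producing identical results.

```python
# Low level: every step of the computation is spelled out explicitly.
def total_low(prices):
    acc = 0.0
    for p in prices:
        acc = acc + p
    return acc


# High level: only the intent is stated; the "how" is delegated.
def total_high(prices):
    return sum(prices)


# Both describe the same program; only the precision of the description differs.
assert total_low([1.5, 2.5, 4.0]) == total_high([1.5, 2.5, 4.0])
```

Whether natural language can slide along that same precision spectrum without ambiguity is exactly what the two commenters dispute.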
You aren't stating the obvious. You're making unbacked claims based on your intuition of what transformers are. And even offering up the tired "stochastic parrot" claim. If you can't back up your claims, I don't know what else to tell you. You can't flip it around and ask me to prove the negative.