Comment by soulofmischief
17 days ago
You claimed they weren't deterministic; I have shown that they can be. I'm not sure what your point is.
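For what it's worth, here's a minimal sketch of the kind of determinism I mean, using Hugging Face's `transformers` with greedy decoding (the model name and prompt are just placeholders, and bit-identical results still assume the same software and hardware stack, since floating-point kernels can differ across GPUs):

```python
# Sketch: deterministic generation via greedy decoding.
# "gpt2" and the prompt are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Natural language is", return_tensors="pt")

# do_sample=False disables sampling: the model takes the argmax token
# at every step, so the same input yields the same output.
out_a = model.generate(**inputs, do_sample=False, max_new_tokens=20)
out_b = model.generate(**inputs, do_sample=False, max_new_tokens=20)

# Both runs agree token-for-token on the same stack.
assert (out_a == out_b).all()
print(tokenizer.decode(out_a[0], skip_special_tokens=True))
```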
And it is incorrect to base your analysis of future transformer performance on current transformer performance. There is a lot of ongoing research in this area and we have seen continual progress.
I reiterate:
> This is assuming that by "deterministic" you mean the same thing I said about programming language implementations being "controllable, reproducible, and well-defined." If you merely mean it produces the same (if arbitrary) results for the same inputs, then you haven't made any meaningful point.
"Determinism" is a word that you brought up in response to my comment, which I charitably interpreted to mean the same thing I was originally talking about.
Also, it's 100% correct to analyze things based on their fundamental properties. It's absurd to criticize people for assuming 2 + 2 = 4 because "continual progress" might make it 5 in the future.
What are these fundamental properties you speak of? Eight years ago this was all a pipe dream. Are you claiming to know what the next 8 years of transformer development will look like?
That LLMs are by definition models of human speech and have no cognitive capabilities. There is no sound logic behind what LLMs spit out, and that will stay that way because they merely mimic their training data. No amount of vague future transformers will transform away how the underlying technology works.
But even if we had something more than an LLM, that still wouldn't make natural languages a good replacement for programming languages. This is because natural languages are, as the article mentions, imprecise; they simply aren't a good tool for this. And no, transformers can't change how languages work. They can only "recontextualize," or as some people might call it, "hallucinate."