Comment by throw310822
2 hours ago
> This isn't a rebuttal.
I don't really think it's possible to convince you. Basically everyone I talk to is using LLMs for work, and in some cases - like mine - I know for a fact that they produce enormous amounts of value, to the point that I would pay quite a bit of my own money to keep using them if my company stopped paying for them.
Yes, LLMs have well-known limitations, but they're still a brand-new technology in its very early stages. ChatGPT appeared little more than three years ago, and in that time it went from barely useful autocomplete to autonomously writing whole features. There's already plenty of software that has been coded 100% by LLMs.
"Intelligence", "understanding", "reasoning".. nobody has clear definitions for these terms, but it's a fact that LLMs in many situations act as if they understood questions, problems and context, and provide excellent answers (better than the average human). The most obvious is when you ask an LLM to analyse some original artwork or poem (or some very recent online comic, why not?)- something that can't be in its training data- and they come up with perfectly relevant and insightful analyses and remarks. We don't have an algorithm for that, we don't even begin to understand how those questions can be answered in any "mechanical" sense, and yet it works. This is intelligence.
You know what this reminds me of? Language X comes out (e.g., Lisp or Haskell), and people try it, and it's this wonderful, magical experience, and something just "clicks", and they tell everyone how wonderful it is.
And other people try it - really sincerely try it - and they don't "get it". It doesn't work for them. And those who "get it" tell those who don't that they just need to really try it, and keep trying until they get it. And some people never get it, and are told that they didn't try hard enough (with the implication that they're stupid if they really can't get it).
But I think that at least part of it is in how people's brains work. People think in different ways. Some languages just work for some people, and really don't work very well for others. If a language doesn't work for you, it doesn't mean either that it's a bad language or that you're stupid (or just haven't tried). It can just be a bad fit. And that's fine. Find a language that fits you better.
Well, I wonder if that applies to LLMs, and especially to LLMs doing coding. It's a tool. It has capabilities, and it has limitations. If it works for you, it can really work for you. And if it doesn't, then it doesn't, and that doesn't mean that it's a bad tool, or that you are stupid, or that you haven't tried. It can just be a bad fit for how you think or for what you're trying to do.