Comment by diggan
6 days ago
> these tools don't have actual understanding, and are instead producing emergent output from pooling an incomprehensibly large set of pattern-recognized data
I mean, setting aside the fact that there's no consensus on what "actual understanding" even is, does it matter whether it's "actual understanding", "kind of understanding", or even "barely understanding", as long as it produces the results you expect?
> as long as it produces the results you expect?
But it's more a case of "until it doesn't produce the results you expect", and then what do you do?
Then you do that part yourself. You let AI automate the 20/50/80% (*) of the work it can, and you only need to do the remainder manually.
(*) which one of these it is depends on your case. If you're writing a run-of-the-mill Next.js app, AI will automate 80%; if you're doing something highly specific, it'll be closer to 20%.
> "until it doesn't produce the results you expect" and then what do you do?
I'm not sure I understand what you mean. You're asking it to do something, and it doesn't do that?
If you give an LLM the spec for a new language and no examples, it can't write code in that language.
Until someone shows otherwise, I think we've demonstrated that they do not have understanding or abstract thought. They NEED examples in a way humans do not.
Then you teach it. Even humans don't always produce the results we expect.
Have you tried that? It generally doesn't go so well.
In this example there are several commits where you can see they had to fix the code by hand because they couldn't get (teach) the LLM to generate what was required.
And there's no memory there: open a new prompt and it has forgotten everything you said previously.
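To make the statelessness point concrete, here's a minimal sketch assuming the OpenAI-style chat-completions Python client; the model name and messages are placeholders, not anything from the article. Each call only sees the messages you explicitly pass in, so a fresh prompt without the earlier history starts from nothing.

    # Minimal sketch: chat-completion APIs are stateless.
    # Assumes the openai Python client; model name and messages are placeholders.
    from openai import OpenAI

    client = OpenAI()

    # First exchange: we "teach" the model a convention.
    history = [
        {"role": "user", "content": "From now on, call the project 'Foo'."},
    ]
    first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    history.append({"role": "assistant", "content": first.choices[0].message.content})

    # Continuing the conversation only works because we re-send `history` ourselves.
    history.append({"role": "user", "content": "What is the project called?"})
    client.chat.completions.create(model="gpt-4o-mini", messages=history)

    # A brand-new prompt without that history has no memory of what was "taught".
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is the project called?"}],
    )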
No, I was not critiquing its effectiveness at generating usable results. I was responding to what I've seen in several other articles here arguing in favor of anthropomorphism.