Comment by otabdeveloper4
6 days ago
> What's the difference between a "human interpretation of a good program" and a "good program" when we (humans) are the ones using it?
Correctness.
> and meets my requirements
It can't do that. "My requirements" wasn't part of the training set.
"Correctness" in what sense? It sounds like it's being expanded to an abstract academic definition here. For practical purposes, correct means whatever the person using it deems to be correct.
> It can't do that. "My requirements" wasn't part of the training set.
Neither are mine. The art of building these models is making them generalisable enough to tackle tasks that aren't in their training data, and they have proven, at least for some classes of tasks, that they can do exactly that.
Besides the fact that your statement is self-contradictory, there is actually a solid definition [0]. You should click the link on "specification" too. Or better yet, go talk to one of those guys who did their PhD in programming languages.
Have they?
Or did you just assume?
Yeah, I know they got good scores on those benchmarks, but did you look at the benchmarks? Look at the questions and look at what is required to pass them. Then take a moment and think. For the love of God, take a moment and think about how you could pass those tests. Don't just take a pass at face value and move on. If you do, well, I've got a bridge to sell you.
[0] https://en.wikipedia.org/wiki/Correctness_(computer_science)
Sure,
> In theoretical computer science, an algorithm is correct with respect to a specification if it behaves as specified.
"As specified" here being the key phrase. This is defined however you want, and ranges from a person saying "yep, behaves as specified" to a formal proof. Modern large language models are trained under RL across both ends of this spectrum, from "hey man, looks good" to formal theorem proving. See https://arxiv.org/html/2502.08908v1.
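To make that spectrum concrete, here's a minimal sketch (my own illustration, not from the thread or the linked paper) of what "correct with respect to a specification" looks like at the executable-check end: the spec is a predicate, and the implementation is checked against it on sample inputs. The function names are hypothetical.

```python
from collections import Counter

def my_sort(xs):
    # Implementation under test (here just a thin wrapper around sorted()).
    return sorted(xs)

def meets_spec(inp, out):
    # Executable specification: the output must be in non-decreasing
    # order and be a permutation (same multiset) of the input.
    in_order = all(a <= b for a, b in zip(out, out[1:]))
    same_elements = Counter(inp) == Counter(out)
    return in_order and same_elements

# Check the implementation against the spec on a handful of inputs.
cases = [[], [1], [3, 1, 2], [5, 5, -1, 0]]
assert all(meets_spec(c, my_sort(c)) for c in cases)
```

This is the weak end of the spectrum (testing on finite inputs); the strong end replaces `meets_spec` with a machine-checked proof that it holds for all inputs.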
So I'll return to my original point: LLMs are not just generating outputs that look plausible; they are generating outputs that satisfy (or at least attempt to satisfy) many different objectives across a wide range of requirements. They are explicitly trained to do this.
So while you argue over the semantics of "correctness", the rest of us will be building stuff with LLMs that is actually useful and fun.