Comment by satvikpendem
2 months ago
Oftentimes, plausible code is good enough, hence why people keep using AI to generate code. This is a distinction without a difference.
There appears to be a similar approach in UX... plausible user experience is close enough.
Yes, especially because in UX there is no "correct" approach; it's all relative.
2 seconds to insert 100 rows in an empty database table is not "good enough" if you are doing anything that is worth doing.
Who said anything about this? I never did.
TFA did.
No. Plausible code is syntactically-correct BS disguised as a solution, hiding countless weird semantic behaviours, invariants, and edge cases. It doesn't reflect a natural and common-sense thought process that a human may follow. It's a jumble of badly-joined patterns with no integral sense of how they fit together in the larger conceptual picture.
Why do people keep insisting that LLMs don't follow a chain of reasoning process? Using the latest LLMs you can see exactly what they "think" and see the resultant output. Plausible code does not mean random code as you seem to imply, it means...code that could work for this particular situation.
Because they don't. The chain-of-reasoning feature is really just a way to get the LLM to prompt itself with more tokens.
The fact that it generates these "thinking" steps does not mean it is using them for reasoning. Its most useful effect is making it seem to a human that there is a reasoning process.