
Comment by satvikpendem

2 months ago

Oftentimes, plausible code is good enough, which is why people keep using AI to generate code. This is a distinction without a difference.

No. Plausible code is syntactically correct BS disguised as a solution, hiding countless weird semantic behaviours, invariants, and edge cases. It doesn't reflect the natural, common-sense thought process a human would follow. It's a jumble of badly joined patterns with no integral sense of how they fit into the larger conceptual picture.
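As a hypothetical illustration (not from the thread, names and bugs invented for the example): a function that reads naturally and works on the obvious inputs, yet quietly misbehaves on edge cases, which is exactly the kind of "plausible" output being described.

```python
def moving_average(xs, window):
    """Looks plausible and passes a happy-path test, but:
    - silently returns [] when window > len(xs) (range is empty)
    - raises ZeroDivisionError when window == 0
    Neither failure mode is visible from reading the happy path.
    """
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]

# Happy path works fine:
print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]

# Edge case silently swallowed instead of raising an error:
print(moving_average([1, 2], 5))  # []
```

A reviewer who only checks the obvious case would accept this; the edge-case behaviour is the "hidden semantics" the comment is pointing at.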

  • Why do people keep insisting that LLMs don't follow a chain-of-reasoning process? With the latest LLMs you can see exactly what they "think" and see the resultant output. Plausible code does not mean random code, as you seem to imply; it means code that could work for this particular situation.

    • Because they don't. The chain-of-reasoning feature is really just a way to get the LLM to prompt itself further.

      The fact that it generates these "thinking" steps does not mean it is using them for reasoning. Its most useful effect is making it seem to a human that there is a reasoning process.
