
Comment by sema4hacker

2 days ago

The latter. When "understand", "reason", "think", "feel", "believe", or any of a long list of similar words appears in a title, it immediately makes me think the author has already drunk the Kool-Aid.

In the context of coding agents, they do simulate “reasoning”: when you feed them their own output, they are able to correct themselves.
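
Mechanically, that loop is nothing mysterious; here's a minimal sketch of what I mean (the `Llm` and `Sandbox` interfaces are hypothetical stand-ins, not any particular SDK):

```typescript
// Minimal sketch of an agent self-correction loop. `Llm` and `Sandbox`
// are hypothetical stand-ins, not any specific SDK.
interface Llm {
  complete(prompt: string): Promise<string>;
}
interface Sandbox {
  run(code: string): Promise<{ ok: boolean; output: string }>;
}

async function selfCorrect(
  llm: Llm,
  sandbox: Sandbox,
  task: string,
  maxTurns = 3,
): Promise<string> {
  let prompt = task;
  for (let turn = 0; turn < maxTurns; turn++) {
    const code = await llm.complete(prompt);
    const result = await sandbox.run(code);
    if (result.ok) return code; // the generated code ran cleanly
    // The "self-correction" is just conditioning on its own failure output.
    prompt = `${task}\n\nYour previous attempt failed with:\n${result.output}\nFix it.`;
  }
  throw new Error("no working solution within the turn budget");
}
```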

I agree with “feel” and “believe”, but what words would you suggest instead of “understand” and “reason”?

  • None. Don't anthropomorphize at all. Note that "understanding" has now been removed from the HN title but not from the linked PDF.

    • Why not? We are trying to evaluate AI's capabilities. It's OBVIOUS that we should compare it to our only prior example of intelligence -- humans. Saying we shouldn't compare or anthropomorphize machines is a ridiculous hill to die on.

Kool-Aid or not -- "reasoning" is already part of the LLM verbiage (e.g. `reasoning` models having a `reasoningBudget`). The meaning might not be 1:1 with human reasoning, but when the LLM shows its "reasoning" it does _appear_ like a train of thought. If I had to give what it's doing a name (like I'm naming a function), I'd be hard pressed not to go with something like `reason`/`think`.
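
And to be concrete, that knob is just a request parameter; here's a hypothetical request shape (the field and model names are illustrative, not any specific vendor's API):

```typescript
// Hypothetical request shape: "reasoning" exposed as a token budget
// the model may spend on intermediate text before its final answer.
// Field names are illustrative, not a real vendor API.
interface ChatRequest {
  model: string;
  messages: { role: "user" | "assistant"; content: string }[];
  reasoningBudget?: number; // max tokens of visible "train of thought"
}

const request: ChatRequest = {
  model: "some-reasoning-model", // placeholder model name
  messages: [{ role: "user", content: "Why does this test fail?" }],
  reasoningBudget: 1024, // cap on the "thinking" tokens
};
```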