Comment by xpe

5 days ago

> [AI] lacks actual reasoning capabilities.

Evals show that reasoning (by which I mean multi-step problem solving, planning, and so on) is improving over time in LLMs. We don't have to agree on metaphysics to see this; I'm referring to the measurable end result.

Why? Some combination of longer context windows, better architectures, hybrid systems, and so on. There is also growing research into how and where reasoning happens (inside the transformer, during the chain of thought, perhaps during a tool call).

[flagged]

  • > Please stick to facts here, not hype-filled wishful thinking. You are actively pushing misinformation that makes situations like the OP’s worse.

    I don't understand what you are talking about. I have to wonder if you posted in the wrong place. Care to explain:

    - What specifically did I write that was misinformation?

    - How do you justify saying it "makes situations like the OP’s worse"? Connect the dots for me.

    Please remember to be charitable here.

    • I believe the fact that you edited your post after my reply, and then disingenuously left this reply, speaks quite plainly: you know exactly what my criticism meant.