Comment by jamincan

14 days ago

Humans aren't infallible and make mistakes in reasoning as well. What is fundamentally different about the mistakes we make versus the mistakes that Claude or Gemini make? Haven't LLMs even been shown to make the same post hoc rationalizations of their mistakes that we humans make all the time?

Unless you're in the habit of pulling humans off the street at random and asking them questions or giving them work, I guess you also shouldn't do that with statistical models of random human language.