Comment by bubblyworld
7 hours ago
Not sure why everyone is downvoting you, as I think you raise a good point - these anthropomorphic words like "reasoning" are useful as shorthands for describing patterns of behaviour, and are generally not meant to be direct comparisons to human cognition. But it goes both ways. You can still criticise the model on the grounds that what we call "reasoning" in the context of LLMs doesn't match the patterns we associate with human "reasoning" very well (such as the ability to generalise to novel situations), which is what I think the authors are doing.
""Sam Altman says the perfect AI is “a very tiny model with superhuman reasoning".""
It is being marketed as directly related to human reasoning.
Sure, two things can be true. Personally, I completely ignore anything Sam Altman (or other AI company CEOs/marketing teams, for that matter) says about LLMs.