Comment by danbruc

6 days ago

I think large language models have essentially zero reasoning capacity. Train a large language model without exposing it to some topic, say mathematics, during training. Now expose the model to mathematics: feed it basic school books, explanations, and exercises, just as a teacher would teach mathematics to children in school. I think the model would not be able to learn mathematics this way to any meaningful extent.

The current generation of LLMs has very limited ability to learn new skills at inference time. I disagree that this means they cannot reason. I think reasoning is, by and large, a skill that can be taught at training time.

  • Do you have an example of some reasoning ability that any of the large language models has learned? Or do you just mean that you think we could train them in principle?