
Comment by XenophileJKO

6 days ago

So this idea that they replay "text" they saw before is fundamentally wrong. They replay "abstract concepts of varied conceptual levels".

The important point I'm trying to reinforce is that LLMs are not capable of calculation. They can give an answer based on the fact that they have seen lots of calculations and their results, but they cannot actually perform mathematical functions.

  • That is a pretty bold assertion for a meatball of chemical and electrical potentials to make.

    • Do you know what "LLM" stands for? They are large language models, built on predicting language.

      They are not capable of mathematics because mathematics and language are fundamentally separated from each other.

They can give you an answer that looks like a calculation, but they cannot perform a calculation. The most convincing LLM systems are even programmed to recognize when they have been asked to perform a calculation, hand the task off to a calculator, and then receive the calculator's output back as part of the prompt.
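      The handoff pattern described above can be sketched roughly as follows. This is a hypothetical toy, not any real model's API: a regex stands in for the model deciding "this is a calculation request", a small safe evaluator plays the calculator, and its result is injected back into the next prompt turn. All names (`calculator`, `answer`) are illustrative.

      ```python
      import ast
      import operator
      import re

      # Binary operators the toy calculator supports.
      OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
             ast.Mult: operator.mul, ast.Div: operator.truediv}

      def calculator(expr: str) -> float:
          """Safely evaluate a basic arithmetic expression via the AST."""
          def ev(node):
              if isinstance(node, ast.Constant):
                  return node.value
              if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                  return OPS[type(node.op)](ev(node.left), ev(node.right))
              raise ValueError("unsupported expression")
          return ev(ast.parse(expr, mode="eval").body)

      def answer(user_message: str) -> str:
          # Stand-in for the model flagging an arithmetic span in the message.
          m = re.search(r"\d[\d\s+*/().-]*\d", user_message)
          if m:
              result = calculator(m.group(0))
              # The tool's output is fed back to the model as prompt text.
              return f"Calculator returned {result}; the model continues from there."
          return "No calculation detected."
      ```

      The point of the pattern is that the arithmetic itself never happens inside the language model; the model only routes the request and verbalizes the tool's result.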

      But it is fundamentally impossible for an LLM to perform a calculation entirely on its own, just as it is fundamentally impossible for an image recognition AI to suddenly write an essay, or for a calculator to generate a photo of a giraffe in space.

      People like to think of "AI" as one thing but it's several things.
