
Comment by tasuki

5 hours ago

> in 3D Clifford algebras it repeatedly confuses exponential of bivectors and of pseudoscalars.

I have no idea what any of those words mean. I'm sure LLMs make similar obvious-to-a-professor mistakes in every domain. Not long ago, we didn't even have chatbots capable of basic conversation...

Ironically, it's sort of the other way around! Every frontier chatbot since GPT-4 (at least) has had a pretty good grasp of even very esoteric technical concepts.

Bivectors and pseudoscalars (in a 3D context) are "just" signed areas and signed volumes, respectively. Easy!
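To make the quoted complaint concrete, here's a minimal sketch of the distinction, using the standard Pauli-matrix representation of the 3D Clifford algebra Cl(3,0). (The variable names and the specific rotation angle are mine, purely for illustration.) Both a unit bivector and the pseudoscalar square to -1, but exponentiating them does very different things: one gives a rotor that rotates vectors, the other a global phase that rotates nothing.

```python
import numpy as np
from scipy.linalg import expm

# Pauli-matrix representation of Cl(3,0):
# the basis vectors e1, e2, e3 map to the Pauli matrices.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

B = e1 @ e2       # a unit bivector (the e1-e2 plane); B @ B == -Id
I = e1 @ e2 @ e3  # the pseudoscalar; also squares to -Id, but commutes with everything

theta = np.pi / 3  # rotate by 60 degrees

# exp of a bivector is a rotor: sandwiching a vector rotates it in the plane of B
R = expm(-theta / 2 * B)
v_rot = R @ e1 @ R.conj().T
expected = np.cos(theta) * e1 + np.sin(theta) * e2
print(np.allclose(v_rot, expected))  # True: e1 rotated toward e2

# exp of the pseudoscalar is just a global phase: the same sandwich does nothing
P = expm(-theta / 2 * I)
v_same = P @ e1 @ P.conj().T
print(np.allclose(v_same, e1))  # True: no rotation at all
```

So confusing the two exponentials means confusing "a rotation in a specific plane" with "a duality/phase operation" — exactly the kind of slip that is obvious to a professor and invisible in fluent-sounding output.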

Back in the GPT-3, 3.5, and 4.0 era, I used to ask the bots to explain "counterfactual determinism", which is one of the most complex topics I personally understand.

Then I would lie to the bot about it, and see if it corrected me or not.

This test is useless now, the frontier models can't be fooled any longer on such "basic" concepts.

Conversely, LLMs are basically useless at anything with little or no public information in their training data. Think: obscure proprietary product config files and the like, even when the concepts involved are trivial.

Similarly, Clifford algebra is a relatively niche (even "alternative") area of mathematics and physics, with vastly less written material than the standard linear algebra it competes with. Hence, the AIs are bad at it.