Comment by tptacek

11 hours ago

Wait, what do you mean "LLMs are still absolutely useless at actual maths computation"? I rely on them constantly for maths (linear algebra, multivariable calc, stat) --- literally thousands of problems run through GPT5 over the last 12 months, and to my recollection zero failures. But maybe you're thinking of something more specific?

They are bad at math. But they are good at writing code, and as an optimization some providers have the model quietly write code to answer the problem, run it, and give you the answer without telling you about the middle step.
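A minimal sketch of the pattern described above. The "model output" here is invented for illustration and no provider's actual API is shown: the model emits code instead of predicting digits, a sandbox runs it, and only the result reaches the user.

```python
import subprocess
import sys

# Pretend the LLM, asked "what is 123456789 * 987654321?", emitted this
# snippet instead of predicting the digits token by token (illustrative
# stand-in, not any real provider's tool-calling interface):
model_emitted_code = "print(123456789 * 987654321)"

# The provider executes it and surfaces only stdout to the user.
result = subprocess.run(
    [sys.executable, "-c", model_emitted_code],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # 121932631112635269
```

The user sees an exact product; the code-in-the-middle never appears in the chat transcript.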

  • Someone should tell the mathematicians that if they use a calculator or a whiteboard or, heaven forbid, a computer, they are "bad at math".

  • What would I do to demonstrate that they are bad at math? If by "maths" we mean things like working out a double integral for a joint probability problem, or anything simpler than that, GPT5 has been flawless.
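As a concrete instance of the kind of problem mentioned above, a double integral for a joint probability, here is a stdlib-only sketch. The density f(x, y) = x + y on the unit square is a made-up example, not one from this thread; it checks P(X + Y < 1), whose exact value works out to 1/3.

```python
def prob_sum_below_one(n=800):
    """Midpoint Riemann sum for P(X + Y < 1) under the joint density
    f(x, y) = x + y on the unit square (exact answer: 1/3)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            if x + y < 1.0:
                # contribution of one grid cell inside the region
                total += (x + y) * h * h
    return total

print(prob_sum_below_one())  # close to 1/3
```

Working it by hand: the inner integral over y from 0 to 1 − x of (x + y) dy is x(1 − x) + (1 − x)²/2, and integrating that over x from 0 to 1 gives 1/3, which the sum above approximates.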

What tier are you using? I have run lots of problems and am very impressed, but I find stupid errors a lot more frequently than that, e.g., arithmetic errors buried in a derivation or a bad definition, say 1 in 15 times. I would love to get zero failures out of thousands of posed problems (which sound like college-level math).

  • I have a standard OpenAI/ChatGPT Pro account; GPT5 is my daily driver for math, and Claude for code.

Calc, stat, etc. from a textbook are things they would naturally be good at, but I don't think textbook computations that are in the training set, and extrapolations of them, are what is in question here.

They are not great at playing chess either, and chess is computational as well as analytic.

  • I think this is wrong and a category error (none of the problems I've given it are in a textbook; they're virtually all randomized), but try this: just give me a problem to hand off to GPT5, and we'll see how it does.

    Further evidence for the faultiness of your claim, if you don't want to take me up on that: I hand problems off to GPT5 to check my own answers. None of the dumb mistakes I make or missed opportunities for simplification are in the book, and, again: it's flawless at pointing out those problems, despite being primed with a prompt suggesting I'm pretty sure I have the right answers.

I only have a rudimentary understanding of calculus, trigonometry, Google Sheets, and astronomy, but I was able to construct an accurate spreadsheet for astrometry calculations by using Grok and Gemini (both free, no subscription, just my personal account) to surface the formulas for measuring the distance between 2-3 points on the celestial sphere. The LLMs also assisted me in writing functions to convert DMS/HMS coordinates to decimal and to work in radians.

I found and fixed bugs I had written into the formulas and spreadsheets, and the LLMs were not my sole reference: once they mentioned the names of concepts and functions, I used Wikipedia for the general gist of things, and I appreciated the LLMs' explanations connecting these disciplines together.
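The spreadsheet itself used Google Sheets formulas, but the pieces described above can be sketched in Python. This is an illustrative translation, not the commenter's actual sheet: DMS/HMS conversion to decimal degrees, and great-circle separation between two points on the celestial sphere (here in the haversine form, one of several equivalent formulas the LLMs might surface).

```python
import math

def dms_to_deg(d, m, s):
    """Degrees/arcminutes/arcseconds to decimal degrees (sign carried by d)."""
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

def hms_to_deg(h, m, s):
    """Hours/minutes/seconds of right ascension to decimal degrees (1 h = 15 deg)."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions
    (RA/Dec given in decimal degrees), via the haversine formula,
    which stays numerically stable for small separations."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

# Sanity check: two points on the celestial equator, 6 h of RA apart,
# are separated by 90 degrees.
print(angular_separation(hms_to_deg(0, 0, 0), 0, hms_to_deg(6, 0, 0), 0))
```

In Sheets the same separation would be built from `RADIANS`, `SIN`, `COS`, and `ASIN` over cells holding the converted coordinates.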

I did this on March 14, 2026.

>I rely on them constantly for maths (linear algebra, multivariable calc, stat)

That's one way to waste a ton of tuition money to just have a clanker do your learning for you.

Unless you're teaching it, in which case I hope your salary is cut by whatever percentage your clanker reduces your workload.

  • Perhaps learning how to get AI to solve your problems is the most important lesson to learn now? The rest seems like the current equivalent of learning cursive.