
Comment by goku12

10 days ago

That's a very relevant question. And as your question implies, we all know which society the billionaires talk about. But AI is just a technology like any other. It does have the potential to bring great benefits to humanity if developed with that intent. It's the corrupting influence of billionaire and autocrat greed that turns all technologies against us.

When I say benefits to humanity, I don't mean the AI slop, deepfakes and laziness enablers that we have today. There are niche applications of AI that already show great potential: developing new medicines, devising new treatments for dangerous diseases, solving long-standing mathematical problems, creating new physics theories. And who knows? Perhaps even producing viable solutions for the climate crisis we are in. They don't receive as much attention as they deserve, because that's not where the profit lies in AI. Solving real problems requires us to forgo profits in the short term. That's why we can't leave this entirely up to the billionaires. They will just use it to transfer even more wealth from the poor and middle classes to themselves.

What are the actual benefits? Where are all these medicines that humans couldn't develop on their own? Have we not been able to develop medicine? What meaningful, impactful theorems can't humans prove without AI? I don't know what a solution to the climate crisis would look like, but what would it say that humans wouldn't realistically have thought of?

  • You're most likely correct in thinking 'we would get there eventually'. But in the case of medicine, would you like to make that case to those who don't have the time to wait for 'eventually' - or who'll spend their lives in misery?

  • It's a matter of prompt engineering: you have to be a really good engineer to pick the correct words in order to get the cure for cancer out of ChatGPT, or the actual Krabby Patty recipe

    ;)

    • May I ask why people immediately imagine AI slop whenever anybody mentions LLMs? This is exactly what I meant: those companies ruined the technology's reputation. LLM/AI applications extend well beyond chatbots and drawing bots.

  • Here are some domain-specific examples of how AI has improved (not replaced) human performance, the way a good tool does:

    1. How AI revolutionized protein science, but didn’t end it: https://www.quantamagazine.org/how-ai-revolutionized-protein...

    This is about DeepMind's AlphaFold 2. It's arguably a big deal in medical science. How do you propose humans do it?

    2. Code vulnerability detection across different programming languages with AI Models: https://arxiv.org/abs/2508.11710

    > What theorems are meaningful and impactful that humans can’t prove without AI?

    I'm not a mathematician, so I can't give a definitive answer. But I've read that some proofs these days fill an entire book. There is no way anybody creates those without machine validation and assistance (there's a toy Lean sketch at the end of this comment to show what I mean by machine checking). AI is the next step in that, just as programming support has advanced from complex tooling to copilots. I know that overusing copilots degrades some developers' code quality. But there are also experienced developers who have found ways to use them well, significantly increasing their speed without filling the code base with AI slop. The same will arguably happen with mathematics.

    The point, ultimately, is that I don't have definitive answers to any of the questions you ask. I'm not a domain expert in any of those fields and I can't see the future. But none of that is relevant here. What's relevant is understanding how LLMs and AI in general can be leveraged to augment your performance in any profession. The exact method may vary by domain, but the general pattern of tool use will be similar. Think of it like "How can a computer help me do accounting, cook a meal, predict the weather, take an X-ray, or pay my bills?" It's as generic as that.
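
    Since I brought up machine validation: here's roughly what a machine-checked proof looks like, as a toy Lean 4 sketch I'm adding purely for illustration (the names add_zero_right and add_comm' are placeholders, nothing to do with any book-length proof):

        -- Toy machine-checked proofs in Lean 4. The proof assistant's kernel
        -- verifies every step; if a step were wrong, the file would not compile.

        -- n + 0 reduces to n by the definition of addition, so reflexivity is enough.
        theorem add_zero_right (n : Nat) : n + 0 = n := rfl

        -- Commutativity of addition, discharged by a standard library lemma
        -- that was itself proved by induction and checked by the same kernel.
        theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b

    The point of tools like this is that the checking scales with the machine rather than with a human referee's patience, which is exactly what book-length proofs need.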

I have a PhD in mathematics and I assure you I am not happy that AI is going to make doing mathematics a waste of time. Go read Gowers's essay on it from the 90s. He is spot on.

  • I would have loved to engage in a conversation, if only to learn something new. But something in the way you framed your reply tells me that that's not what you have in mind. Instead, here's what Dr. Terence Tao thinks about the same subject [1]. Honestly, I can relate to what he says.

    I'm not someone who likes or promotes LLMs, given the utterly unethical acts the big corporations committed to make profits from them. However, people often forget that LLMs are a technology developed by people who practice mathematics and computer science. That was also PhD-level work. The bad reputation LLMs have earned has nothing to do with those wonderful ideas; it is a result of the greed of those obsessed with endless profits. LLMs aren't just about vacuuming up the IP on the internet, dumping kilotonnes of CO2 into the atmosphere, or churning out endless streams of AI slop and low-effort fakes.

    The human mind processes logic and the universe in extraordinary ways, but it is still limited in the set of tools it uses to do so. That's where LLMs and AI in general raise the tantalizing possibility of perceiving and interpreting domains in mathematics and physics in ways that no living being has ever done or even imagined. Perhaps their training data won't be stolen text or art; it could be the petabytes of scientific data locked up in storage because nobody knows what to do with them yet. And instead of displacing us, they're likely to complement and augment us. That's where the brilliance of mathematicians and scientists is going to be needed. Nobody knows for sure. But how will we know if we close the door on that possibility?

    I admire Dr. Tao for keeping his mind open to anything new at his age. I wish I had as much curiosity as him.

    [1] https://www.scientificamerican.com/article/ai-will-become-ma...

    • (Terence Tao is his name.) Yes, he takes a rather measured view of AI, but I think for myself, not in terms of what some great person thinks. He is smarter than I am (you're probably not even aware how amazing he is, frankly, and I only say that to convey my immense admiration), a million times more successful, a millionaire with a tenured job, and basically a Fields Medalist among Fields Medalists. The effect of AI on his life is small compared to its effect on mine. I am always impressed by Terence Tao, but there's basically no life lesson the average mathematician can glean from him. He is truly astounding (to be fair, there are a few other astonishing people in mathematics).

      The truth is that with a few more innovations, even Terence Tao will have little to add to an AI's problem-solving ability. I will personally enjoy having mathematics explained to me by the AI, but I'll be enjoying it in relative poverty and material insecurity caused by that same AI.

      A recent AI data point came just this past weekend, when many people converged on a MathOverflow post because Terence Tao had answered it with some of the tedious parts done by AI. https://mathstodon.xyz/@tao/115325229364313459

      I think his tone is optimistic, but falsely so.