Comment by wrqvrwvq

6 days ago

It's only because humans came up with a problem, worked with the AI, and verified the result that this achievement means anything at all. An AI "checking its own work" is practically irrelevant when they all still seem to go back and forth on whether you need the car at the carwash to wash the car. Undoubtedly people have been passing this set of problems to AIs for months or years and have gotten back either incorrect results or results they didn't understand; either way, human confirmation is required. AI hasn't presented any novel problems, other than the multitude of social problems described elsewhere. AI doesn't pursue its own goals and wouldn't know whether they had "actually been achieved".

This is to say nothing of the cost of this small but remarkable advance. Trillions of dollars in training and inference, and so far we have a couple of minor (trivial?) math solutions. I'm sure that if someone had bothered funding a few PhDs for a year, we could have found this without AI.

> It's only because humans came up with a problem, worked with the AI, and verified the result that this achievement means anything at all.

Replace "AI" with "human" here and that's... just how collaborative research works, lol.

The only things moving faster than AI are the goalposts in conversations like this. Now we're at "sure, AI can solve novel problems, but it can't come up with the problems themselves on its own!"

I'm curious to see what the next goalpost position is.

  • > I'm curious to see what the next goalpost position is.

    I am as well. That's the point. AI can do some things well and other things better than humans, but so can a garden hose, and so can all technology. Is AI just a tool, or is it the future of all work? By setting goalposts, we can see whether or not it is living up to the hype that we're collectively spending trillions on.

    The garden hose manufacturers aren't claiming that they're going to replace all human workers, so we don't set those kinds of goalposts to measure whether they're doing that.

Funding a few PhDs for a year costs orders of magnitude more than the inference costs of solving this problem. Also, this has been an active area of research for some time. Or I guess the people working on it are just not as good as a random bunch of students? It's amazing the lengths people will go to in order to maintain their worldview, even if it means belittling hardworking people.

I take it you're not a mathematician. This is an achievement, regardless of whether you like LLMs or not, so please let's not belittle the people working on these kinds of problems.

  • > It's amazing the lengths people will go to in order to maintain their worldview, even if it means belittling hardworking people.

    This is one of the most baffling and ironic aspects of these discussions. Human exceptionalism is what drives these arguments, but the machines are becoming so good that you can no longer make them without putting down even the top-percentile humans in the process. The same thing is happening all over this thread (https://news.ycombinator.com/item?id=47006594), and it's like they don't even realize it.

  • > Funding a few PhDs for a year costs orders of magnitude more than the inference costs of solving this problem.

    I don't think PhD students sit around solving one problem for a year. Also, PhD students are way cheaper.

    • How many math PhD students do you have? If you set the problem up right, solving something like this per year, on average, is a good pace.

      How are they cheaper? Where I am, your average grant can pay for a couple of PhD students. I could afford to pay the inference costs out of my own salary, no grant needed. These are completely different economic scales. I like students better, of course, but funding is drying up these days.


  • Inference costs are heavily subsidised. My point was that we've collectively spent trillions on AI, and so far we have a few new proofs. It has been active research, but the estimate is that only 5-10 people are even aware that it is a problem. I wrote "math PhDs", not "random students", but regardless, I don't know how you interpreted my statement that people could have discovered this without AI as "belittling the people working on this". You seem like a stupid person with an out of control chatbot that can't comprehend basic arguments.

    • > You seem like a stupid person

      And now you're belittling me. Yeah, good one, that'll convince people.

      > out of control chatbot that can't comprehend basic arguments

      I don't see how it is out of control. It is a tool, and it is being used for a job. For low-level jobs it often succeeds. For tougher jobs, it succeeds often enough to be interesting. I don't care whether it understands worldview semantics; that's for humans to do.

      > we've spent trillions collectively on ai

      The economics around AI do not suggest that continuing to perform large training runs is sustainable, but that's not relevant to the discussion. Once the training is done, further costs are purely inference, and that is the comparison I was making.

      > Inference costs are heavily subsidised

      Even if you pay to run inference on your own hardware, economies of scale dictate that it is still cheaper than students.

      > It's been active research but the problem estimates only 5-10 people are even aware that it is a problem.

      That sounds about right for most pure math problems. Were you expecting more?

      Let's not pretend that society would have invested that kind of money into pure mathematics research. It is extraordinarily difficult to get funding for that kind of work in most parts of the world. Mathematicians are relatively cheap, yes, but the money coming into AI came from blind VCs with delusions of grandeur; it wasn't there to do maths research. If it's here anyway, and causing nightmares for actually teaching new students, we may as well try to make some good come of it. It has only recently crossed the threshold of being useful. Most researchers I know are only now starting to consider it, mostly as a search engine, but some for proof assistance. Experiences a year ago were highly negative; they're a lot more positive now.

      I'm trying to give the perspective of someone who actually does math research at a senior level, and who actually has half a dozen math PhD students to supervise, to say that your blind attitude toward this is neither sensible nor helpful. Your comments about the problem being trivial do belittle the real effort people have put into it without success. If they could easily have discovered this without AI, they would already have done so. Researchers do not have unlimited time, and there are many more problems than students, especially good ones (hence my "random" comment).