Comment by nateglims
2 days ago
Are you a mathematician? I'm not an expert in the math field, but it seems like they are hitting the same issues everyone else has: current LLMs still more or less need to be supervised by an expert and struggle to do anything actually novel or build out a complicated proof correctly.
There's a limit to how much novelty you're going to get from an LLM, especially in areas like programming and math where they've been heavily RL'd NOT to be novel, even to the extent that the base model supports it, and instead to generate much narrower, more prescribed outputs.
The limit to the novelty you are going to get from an LLM is essentially the "deductive/generative closure" of the training data. To be truly novel and move past the limits of your own past experience requires things like curiosity, continual learning, and the autonomy/agency to explore and learn.
But what share of the PhD workforce is doing novel and creative work, compared to following some mechanical workflow?
I work in a math-heavy applied setting. Randomly hired PhDs also need to be supervised, their end results monitored, and their code reviewed, or they will make lots of mistakes. My view is that if you throw out a problem like "build an optimization model for this kind of problem on this kind of data," LLMs may produce better results.