Comment by HarHarVeryFunny

2 days ago

There's a limit to how much novelty you're going to get from an LLM, especially in areas like programming and math where they've been heavily RL'd NOT to be novel, even to the extent that the base model supports it, and instead to generate much narrower, more prescribed outputs.

The limit to the novelty you're going to get from an LLM is essentially the "deductive/generative closure" of its training data. Being truly novel and moving past the limits of your own past experience requires things like curiosity, continual learning, and the autonomy/agency to explore and learn.

But what share of the PhD workforce is doing novel and creative work, compared to following some mechanical workflow?