Comment by daxfohl
1 day ago
Hard to say. In business we'll still have to make hard decisions about unique situations, coordinate and align across teams and customers, and deal with real-world constraints and complex problems that can't simply be handed to an LLM to decide. In particular, deciding whether or not to trust an LLM with a task will itself always be a human decision. I think there will always be a place for analytical thinking in business even if LLMs do most of the actual engineering. If nothing else, the speed at which they work will require an increase in human analytical effort to maximize their efficacy while maintaining safety and control.
In the academic world, and math in particular, I'm not sure. In a way, you could say it doesn't change anything, because proofs already "exist" long before we discover them, so AI just streamlines that discovery. Many mathematicians say that asking the right questions is more important than finding the answers. In that case, maybe math turns into something more akin to philosophy or even creative writing, following whatever direction we set for AI in those fields. Which is perhaps less than one would think: while AI can write a novel, and it could even be pretty good, part of the value of a novel is the implicit bond between the author and the audience. "Meaning" has less value coming from a machine. And so maybe math continues that way, with computers solving the problems but humans determining the meaning.
Or maybe it all turns to shit and the sheer ubiquity of "masterpieces" of STEM/art/everything renders all human endeavor pointless. Then the only thing left worth doing is for the greedy, the narcissists, and the power-hungry to take the world back to the Middle Ages, where knowledge and the search for meaning take a back seat to tribalism and warmongering until the datacenters' power needs destroy the planet.
I'm hoping for something more like the former, but it's anybody's guess.