Comment by daxfohl
1 day ago
Though I think it's a very steep sigmoid, and we're still far down in the bottom half of it.
For math it just did its first "almost independent" Erdős problem. In a couple of months it'll probably do another, then maybe one each month for a while, then one morning we'll wake up to find that, whoom, it solved 20 overnight and is spitting them out by the hour.
For software it's been "curiosity ... curiosity ... curiosity ... occasionally useful assistant ... slightly more capable assistant" up to now, and it'll probably continue like that for a while. The inflection point will be when OpenAI/Anthropic/Google releases an e2e platform meant to be driven primarily by the product team, with engineering just being co-drivers. It probably starts out buggy and needs a lot of hand-holding (and grumbling) from engineering, yet slowly but surely it becomes more independently capable. Then at some point, product will become more confident in that platform than in their own engineering team, and begin pushing out features on that basis alone. Once that process starts (probably first at OpenAI/Anthropic/Google themselves, but spreading like wildfire across the industry), it's just a matter of time until leadership declares that all feature development goes through that platform, and retains only as many engineers as are required to support the platform itself.
And then what? Am I supposed to be excited about this future?
Hard to say. In business we'll still have to make hard decisions about unique situations, coordinate and align across teams and customers, and deal with real-world constraints and complex problems that can't simply be handed to an LLM to decide. In particular, deciding whether or not to trust an LLM with a task will itself always be a human decision. I think there will always be a place for analytical thinking in business, even if LLMs do most of the actual engineering. If nothing else, the speed at which they work will require an increase in human analytical effort to maximize their efficacy while maintaining safety and control.
In the academic world, and math in particular, I'm not sure. In a way, you could say it doesn't change anything, because proofs already "exist" long before we discover them, so AI just streamlines that discovery. Many mathematicians say that asking the right questions is more important than finding the answers. In which case, maybe math turns into something more akin to philosophy or even creative writing, and follows whatever direction we set for AI in those fields. Which is perhaps less than one would think: while AI can write a novel, and it could even be pretty good, part of the value of a novel is the implicit bond between the author and the audience. "Meaning" has less value coming from a machine. And so maybe math continues that way: computers solving the problems, humans determining the meaning.
Or maybe it all turns to shit, and the sheer ubiquity of "masterpieces" of STEM, art, everything renders all human endeavor pointless. Then the only thing left worth doing is for the greedy, the narcissists, and the power-hungry to drag the world back to the middle ages, where knowledge and the search for meaning take a back seat to tribalism and warmongering, until the datacenters' power needs destroy the planet.
I'm hoping for something more like the former, but it's anybody's guess.
You have to remember that half these people think they are building god.
If machines taking over labor and allowing humans to live a life of plenty instead of slaving away in jobs isn't exciting, then I don't know what is.
I guess cynics will yap about capitalism and how this supposedly benefits only the rich. That seems very unimaginative to me.
> That seems very unimaginative to me.
Does it? How exactly is the common Joe going to benefit from this world where the robots are doing the job he was doing before, as well as everyone else's job (i.e., no more jobs for anyone)? Where exactly is the money going to come from to make sure Joe can still buy food? Why on earth would the people in power (i.e., the psychotic CxOs) care to expend any resources on Joe once they control the robots that can do everything Joe could? What mechanisms exist for everyone here to prosper, rather than a select few who already own more wealth and power than the majority of the planet combined?
I think believing in this post-scarcity utopian fairy tale is a lot less imaginative and grounded than the opposite scenario, one where the common man gets crushed ruthlessly.
We don't even have to step into any kind of fantasy world to see that this is the path we're heading down. In our current timeline, as we speak, CEOs are foaming at the mouth to replace as many people as they can with AI. This entire massive AI/LLM bubble we find ourselves in is predicated on the idea that companies can finally get rid of their biggest cost center: their human workers, with their pesky desires like breaks, vacations, and workers' rights. And yet there are somehow still people out there who will readily lap up the bullshit notion that this tech is going to be used as a force for good? That I find completely baffling.