Comment by gcanyon
6 days ago
99.99+% of all problems humans face do not require particularly original solutions. Determining whether LLMs can solve truly original (or at least obscure) problems is interesting and worth doing, but it ignores the vast majority of the impact they will have, at least in the near term.
15 years ago they were predicting that AI would turn everything upside down in 15 years' time. It hasn't.
People who say this don’t understand the breakthrough we had in the last couple of years. 15 years ago I was laughing at people predicting AI would turn everything upside down soon. I’m not laughing anymore. I’ve been around long enough to see some AI hype cycles, and this time it is different.
15 years ago I, working on AI systems at a FAANG, would have told you “real” AI probably wasn’t coming in my lifetime. 15 years ago the only engineers I knew who thought AI was coming soon were dreamers and Silicon Valley koolaiders. The rest of us saw we needed a step-function breakthrough that might not even exist. But it did, and we got there, a couple of years ago.
Now I’m telling people it’s here. We’ve hit a completely different kind of technology, and it’s so clear to people working in the field. The earthquake has happened and the tsunami is coming.
Thank you for sharing your experience. It makes the impact of the recent advances palpable.
The value of human beings isn't in their capacity to do routine tasks but in their ability to respond with some common sense to the critical issues in the 2% at the tail.
This is why original problems are important: they're a measure of how sensible something is in an open-ended environment, and here LLMs are completely useless, not just because they fail but because of how they fail. The fact that these LLMs, according to the article, "invent non-existent math theorems", i.e. produce gibberish instead of knowing what they don't know, is an indication of how limited this still is.
To be frank, I take precisely the opposite view. Most people solve novel problems every day, mostly without thinking much about it. Our inability to perceive the immense complexity of the things we do every day is merely due to familiarity. In other words we're blind to the details because our brain handles them automatically, not because they don't exist.
Software engineers understand this better than most: describing a task in general terms, and doing it yourself, can be incredibly easy, even while writing the code to automate it is difficult or impossible, because of all the devilish details we don't often think about.
I work with developers every day. Between us we often give the AI directions like:
Some of those work better than others, but none of them are guaranteed failures.
I really doubt a contest for high schoolers contains any truly original problems.
"or at least obscure"