Comment by 0manrho
2 hours ago
> Just because something didn't work out doesn't mean it was a waste, and it isn't particularly clear that the LLM boom was wasted, or that it is over, or that it isn't working
Agreed. Has there been waste? Inarguably. Has the whole thing been a waste? Absolutely not. There are lessons from our past that, in an ideal world, would have allowed us to navigate this much more efficiently and effectively. However, if we're being honest with ourselves, that's been true of any nascent technology (especially hyped ones) for as long as we've been recording history. The path to success is paved with failure, hindsight is 20/20, history rhymes, and all that.
> I can't figure out what people mean when they say "AGI" any more
We've been asking "What is intelligence?" (and/or sentience) for as long as we've been alive, and we still haven't come to a consensus. Plenty of people will confidently claim they have an answer, which is great, but it's entirely irrelevant if there's no broad consensus on that definition or a well-defined way to verify AI/people/anything against it. Case in point...
> we appear to be past that. We've got something that seems to be general and seems to be more intelligent than an average human
Hard disagree, specifically as it regards intelligence. They are certainly useful utilities when you use them right, but I digress. What are you basing that on? How can we be sure we're past a goal-post when we don't even know where the goal-post is? For starters, how much does speed (or latency, or IOPS/TPS, or however you wish to contextualize it) factor into "intelligence"? For a tangible example of that: if an AI came to a conclusion derived from 100 separate sources, and a human manually went through those same 100 sources and came to the same conclusion, is the AI more intelligent by virtue of completing that task faster? I can absolutely see (and agree with) how that is convenient/useful, but the question specifically is: does the speed at which it can provide answers (assuming both are correct and the same) make it smarter than, or merely as smart as, the human?
How do they rationalize and reason their way through new problems? How do we humans? How important is the reasoning, the "how" by which it arrives at its answers, if those answers are correct? For a tangible example of that: what is happening when you ask an AI to compute the sum of 1 plus 1? What are we doing when we perform the same task? What about proving the result correct? More broadly, in the context of AGI/intelligence, does it matter if the "path of reasoning" differs, so long as the answers are correct?
What about how confidently it presents those answers (correct or not)? It's well known that we humans are incredibly biased towards confidence. Personally, I might start buying into the hype the day that AI starts telling me "I'm not sure" or "I don't know." Ultimately, until I can trust it to tell me it doesn't know/isn't certain, I won't trust it when it tells me it does know/is certain, regardless of how "correct" it may be. We'll get there one day, and until then I'm happy to use it for the utility and convenience it provides while doing my part to make it better and more useful.