Comment by kopirgan
1 month ago
Maybe I'm being dumb, but isn't the AI approach somewhat like brute-force hacking of a password? I mean, humans don't learn that way; yes, we do study code to be better coders, but not in the way AI training does.
Will someone truly discover a way to start from fundamentals and do better, not just crunch zillions of petabytes faster and faster?
Or is that completely wrong? Happy to stand corrected.
That's part of the problem though, isn't it? We still don't really understand how human intelligence works. We don't know how we learn. We don't even know where or how memories are stored.
We have ideas about how it works, sure. We know a bit about how the basics of the brain work, and we know some correlations about which areas of the brain light up electrically under various conditions, but that's about the extent of it.
It's hard to create an artificial version of something without understanding the real thing pretty well. In the meantime, brute-forcing it is probably the best (and most common) approach.
Is it really the best approach, though, to sink all this capital into it if it can never achieve AGI? It's wildly expensive, and if it doesn't achieve all the lofty promises, it will be a large waste of resources IMO. I do think LLMs have use cases, but when I look at the current AI hype, the spend doesn't match up with the returns. I think AI could achieve this, but not with a brute-force-like approach.
There's an even more fundamental question before getting there: how are we defining AGI?
OpenAI defines it based on the economic value of output relative to humans. Historically it had a much less financially driven definition and general expectation.
The market will sort that out, just like it did with dotcom or tulip madness.
Another big pushback is copyrighted content. Without a proper revenue model, how do you pay for that?
That will also restrict what can be "learned". There are already lawsuits, allegations of using pirated books, etc.
Yes, for sure AI, or even basic computing before it, has done wonders, chess being one example: simply removing some of the issues with humans, like inconsistency and errors due to mood or form.
I agree this approach is the only viable one, but I do hope the other way is also being tried. Who knows, there may be a breakthrough, like the attempts to create life from basic chemicals present in nature.
As long as people are not willing to acknowledge that we're not blank slates, but possess vast intelligence inherently, and that even the simplest life forms are infinitely more intelligent than LLMs, there won't be progress.
Is that something we can acknowledge without at least somewhat understanding how that works?
It sure seems like we have a lot of inherent knowledge, but by the book we're little more than the product of the instruction set that is our DNA, and there's no explanation that accounts for inherited knowledge in that DNA.
I think the harder thing we need to acknowledge is that we understand much less than we think we do. Concepts like inherent, or collective, knowledge would all roll downhill from that.
I think the brilliant minds working on this know this. It shouldn't stop things, just as knowing we'll die one day doesn't stop us from accumulating wealth or working hard.
But at some point I hope there's a breakthrough from an entirely different paradigm.
Yes, humans are different learners. That's not a problem; at least, we don't know that it's a problem.
Regarding brute force: of course not. We're two years into ChatGPT and people still think it's just n-gram statistical models, smh. Don't listen to bad YouTubers... not even people like 3b1b, Sabine, Thor, or Primeagen. Why, when you can take it from so many people actually working in AI instead?
Anyway, yes, ChatGPT is huge. But if you used n-gram statistical models like it's 1980, you'd need the whole universe as a server and it still wouldn't be as good. Big != infinitely big. It's not "brute force".
Maybe that's not what you meant, but I've heard this a lot. Anyway, sorry for venting!
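To make the "whole universe as a server" point concrete, here's a rough back-of-the-envelope sketch in Python. The vocabulary size and context length are illustrative assumptions on my part, not figures for any real model; the point is just that a lookup-table n-gram model needs a table entry for every possible context, which grows exponentially with context length, while a neural LM has a fixed parameter count.

```python
# Back-of-the-envelope: why a pure lookup-table n-gram model can't
# scale to long contexts. All numbers are illustrative assumptions.

vocab_size = 50_000    # order-of-magnitude subword vocabulary
context_len = 20       # still tiny by modern LLM standards

# A naive n-gram table needs one entry per possible context:
ngram_entries = vocab_size ** context_len

# Rough count of atoms in the observable universe, for scale:
atoms_in_universe = 10 ** 80

print(f"n-gram table entries: ~10^{len(str(ngram_entries)) - 1}")
print(f"more entries than atoms? {ngram_entries > atoms_in_universe}")
```

Even with a 20-token context, the table would need more entries than there are atoms in the observable universe, whereas a transformer covers arbitrary contexts of that length with a parameter count in the billions. That's the sense in which LLMs are not "just bigger n-grams".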