Comment by sreekanth850
5 days ago
I was wondering on what basis @Sama keeps saying they're near AGI, when in reality LLMs just calculate sequences and probabilities. That said, I really doubt this bubble is going to burst anytime soon.
I'm unaware of any proof (in the mathematical sense, for example) that _we_ aren't just kickass machines calculating sequences at varying probabilities, though.
Perhaps that's why the argument persists?
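To make the "calculating sequences at varying probabilities" framing concrete, here's a toy Python sketch of the next-token step. The vocabulary and logits below are invented for illustration; a real LLM produces logits over ~100k tokens with a large network, but the final step is essentially this: turn scores into a probability distribution and sample from it.

```python
import math
import random

def softmax(logits):
    # Shift by the max logit for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and model scores (logits), made up for this sketch.
vocab = ["the", "cat", "sat", "mat", "dog"]
logits = [2.5, 2.0, 0.3, 0.1, 1.5]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]

for tok, p in zip(vocab, probs):
    print(f"{tok}: {p:.3f}")
print("sampled next token:", next_token)
```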
Humans do this, but it's not all they do. How do we explain humans who invent new concepts, new words, new numerical systems, new financial structures, new legal theories? These are not exactly predictions (since they don't exist in any training set), but they may be composed from such sets.
> How do we explain humans who invent new concepts
Simple: they are hallucinations that turn out to be correct or useful.
Ask ChatGPT to create a million new concepts that weren't in its training data and some of them are bound to be similarly correct or useful. The only difference is that humans have hands and eyes to test their new ideas.
Efficiency matters. We do it with a fraction of the processing power.
True in the caloric/watts sense, but we might well have far higher computational power architecturally?
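For what it's worth, the watts gap is easy to eyeball with commonly cited ballpark figures (~20 W for a human brain, ~700 W TDP for a single H100-class GPU); both numbers are rough estimates, not measurements:

```python
# Back-of-envelope power comparison; both figures are commonly cited
# ballpark numbers, not measurements.
brain_watts = 20          # rough estimate for a human brain
gpu_watts = 700           # TDP of a single H100-class datacenter GPU
gpus_per_server = 8       # a typical inference server configuration

server_watts = gpu_watts * gpus_per_server
print(f"one 8-GPU server: ~{server_watts} W, "
      f"~{server_watts / brain_watts:.0f}x a human brain")
```

Whether that electrical budget buys more or less effective computation than a brain gets from 20 W is exactly the open architectural question.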