Comment by labrador
3 years ago
The AI maximalists think we're on an exponential curve to the singularity and potential AI disaster, or even eternal dominance by an AI dictator [Musk].
Realistically, though, the road to AGI and beyond is like the expansion of the human race to the Moon, Mars and beyond: slow, laborious, capital- and resource-intensive, with vast amounts of discoveries still to be made.
Without having an understanding of the architecture required for general intelligence, it is impossible to make claims like this. Nobody has this understanding. Literally nobody.
The human brain uses on the order of 10 watts of power and there are almost 8 billion examples of this. So we have hard proof that from a thermodynamic perspective general intelligence is utterly and completely mundane.
We almost certainly already have the computational power required for AGI, but have no idea what a complete working architecture looks like. Figuring that out might take decades, or we might get there significantly quicker. The timespan is simply not knowable ahead of time.
I'm not concerned in the slightest about "the singularity" and non-aligned superintelligences. AGI in the hands of malicious human actors is already a nightmare scenario.
I found out today I don't exactly have Covid brain fog. Covid has triggered my bipolar disorder, so I have flu-like symptoms and hypomania, a combo I've never experienced before so I'm not used to it. It's a bit wild.
https://www.google.com/search?q=bipolar+covid
Sorry to hear. Hope you get back to normal soon!
Take a look at Auto-GPT; it doesn't seem like AGI is far off. I'd say AGI in a weak form is already here, it just needs to strengthen.
Tracking problematic actions back to the person who owns the AGI will likely not be a difficult task, and the owner of an AGI would be held responsible for its actions. The worry is that these actions would happen very quickly. This too can be managed by safety systems, although they may need to be developed more fully in the near future.
Human brains are not built from digital circuits - perhaps they have far more compute than we think.
You asserted an analogy but didn't articulate the connection between the two concepts.
Sorry I have Covid with brain fog right now so maybe you could help me out
Edit: Off the top of my foggy head: LLMs as I understand them are text-completion predictors based on statistical probabilities, trained on vast amounts of examples of what humans have previously written, whose output is styled with neuro-linguistic programming also based on vast numbers of styles of human writing. This is my casual amateur understanding. There is no logical, reasoning programming such as the Lisp programmers attempted in the 1980s, and clearly the logical abilities of the current LLMs fall short; they are not AGI for that reason. So how do we add logic abilities to make LLMs AGI? Should we revisit the approaches of the Lisp machines of the 1980s? This requires much research and discovery.

Then there's the question of just what general intelligence is. I've always thought that emotional intelligence played a huge role in high intelligence; a balance between logic and emotion, or Wise Mind, is wisdom. Obviously we won't be building emotions into silicon machines, or will we? Is anyone proposing this? This could take hundreds of years to accomplish, if it is even possible. We could simulate emotion, but that's not the same; that's logic.

Logical intelligence and emotional capability, I think, are prerequisites for consciousness and spirituality. If the Universe is conscious, and it arises in a focused manner in brains that are capable of it, then how do we build a machine capable of having consciousness arise in it? That's all I'm saying.
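The "text-completion predictor based on statistical probabilities" idea can be sketched at toy scale. A minimal sketch, assuming a simple bigram model over a made-up corpus (real LLMs use neural networks conditioned on far longer contexts, but the underlying idea of predicting the next token from statistics of prior text is the same):

```python
# Toy illustration, not a real LLM: predict the next word from
# bigram counts gathered over a small corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

A model like this "completes" text purely from observed frequencies; there is no logic or reasoning module anywhere, which is exactly the gap the comment above is pointing at.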
https://en.wikipedia.org/wiki/Dialectical_behavior_therapy