Comment by AndrewKemendo

12 days ago

Why would you be surprised?

If your actions are based on your training data, and the majority of your training data is antisocial behavior (because that is the majority of human behavior), then the only possible outcome is antisocial behavior

There is effectively zero data demonstrating socially positive behavior, because we don’t generate enough of it for it to form a latent space the model can traverse

The issue with this is that when creating artificial general intelligence, the objective shouldn’t be to replicate the statistical mean of human behavior, with all its frailties and crookedness. Ultimately our ambition should be to create an intelligence at the peak and frontier of cosmic intelligence. So if these LLM methods only reproduce a statistical mean, they are a dead end on the AGI journey, and we need to revise our research and engineering methodology to produce results at that frontier, representing frontier cosmic intelligence, for lack of a better term.