Comment by aresant

2 years ago

Prediction: the business model becomes an external protocol, similar to SSL, that the many AI companies working to achieve AGI will leverage (or be regulated into using).

From my hobbyist knowledge of LLMs and compute, this is going to be a terrifically complicated problem. But barring a defined protocol and standard, there's no hope that "safety" gets executed as a product layer, given all the different approaches.

Ilya seems to have both the credibility and the engineering chops to be in a position to execute this, and I wouldn't be surprised to see OpenAI, MSFT, and other players become early investors / customers / supporters.

I like your idea. But on the other hand, training an AGI and then having a layer on top “aligning” it sounds super dystopian, and like a good plot for a movie.

  • We create superintelligence, but just feed it a steady dose of soma.

  • The “aligning” means it should do what the board of directors wants, not what's good for society.

    • Poisoning Socrates was done because it was "good for society". Frankly, I'm even more suspicious of "good for society" than of the average untrustworthy board of directors.
