Comment by TaylorAlexander

2 years ago

> why are we talking about things like this?

> this is such a transparent attention grab (and, by extension, money grab by being overvalued by investors and shareholders)

Ilya believes transformers can be enough to achieve superintelligence (if inefficiently). He is concerned that companies like OpenAI are going to succeed at doing it without investing in safety, and they're going to unleash a demon in the process.

I don't really believe either of those things. I find arguments that autoregressive approaches lack certain critical features [1] to be compelling. But if there's a bunch of investors caught up in the hype machine ready to dump money on your favorite pet concept, and you have a high visibility position in one of the companies at the front of the hype machine, wouldn't you want to accept that money to work relatively unconstrained on that problem?

My little pet idea is open source machines that take in veggies and rice and beans on one side and spit out hot healthy meals on the other side, as a form of mutual aid to offer payment optional meals in cities, like an automated form of the work the Sikhs do [2]. If someone wanted to pay me loads of money to do so, I'd have a lot to say about how revolutionary it is going to be.

[1] https://www.youtube.com/watch?v=1lHFUR-yD6I

[2] https://www.youtube.com/watch?v=qdoJroKUwu0

EDIT: To be clear, I’m not saying it’s a fool’s errand. Current approaches to AI have economic value of some sort. Even if we don’t see AGI any time soon, there’s money to be made. Ilya clearly knows a lot about how these systems are built. It seems worth going independent to try his own approach, and maybe someone can turn a profit off this work even without AGI. Though this is not without tradeoffs, and reasonable people can disagree on the value of additional investment in this space.

His paycheck is already dependent on people believing this world view. It’s important to not lose sight of that.

  • I mean I think he can write his own ticket. If he said "AGI is possible but not with autoregressive approaches" he could still get funding. People want to get behind whatever he is gonna work on. But a certain amount of hype about his work is needed for funding, yes.

    • Kinda, as long as it’s cool. If he said ‘this is all just plausible text generation’, I think you’d find his options severely limited compared to the alternatives.

  • Dude, he’s probably worth > $1 billion.

    • As long as the stock is hot, sure.

      If he crashes it by undermining what is making it hot, he’ll be worth a lot less. Depending on how hard he crashes the party, maybe worthless.

      Though this party is going hard enough right now, I doubt he alone could do it.