Comment by reissbaker
2 years ago
I'm pretty sure "Altman and company" don't have much to do with this — this is Ilya, who pretty famously tried to get Altman fired, and then himself left OpenAI in the aftermath.
Ilya is a brilliant researcher who's contributed to many foundational parts of deep learning (including the original AlexNet); I would say I'm somewhat pessimistic based on the "safety" focus — I don't think LLMs are particularly dangerous, nor do they seem likely to be in the near future, so that seems like a distraction — but I'd be surprised if SSI didn't contribute something meaningful nonetheless given the research pedigree.
I actually feel that they can be very dangerous. Not because of the fabled AGI, but because
1. they're so good at giving the appearance of being right;
2. their results are actually quite unpredictable, not always in a funny way;
3. C-level executives actually believe that they work.
Combine this with web APIs or effectors and this is a recipe for disaster.
I got into an argument with someone over text yesterday and the person said their argument was true because ChatGPT agreed with them and even sent the ChatGPT output to me.
Just as an example of your danger #1 above. We used to say that the internet always agrees with us, but with Google it was a little harder. ChatGPT makes it so much easier to find rationalizations that agree with you.
The ‘plausible text generator’ element of this is perfect for mass fraud and propaganda.
3. Sorry, but how do you know what they believe?
My bad, I meant too many C-level executives believe that they actually work.
And the reason I believe that is that, as far as I understand, many companies are laying off employees (or at least freezing hiring) with the expectation that AI will do the work. I have no means to quantify how many.
Neither the word transformer nor LLM appears anywhere in their announcement.
It’s like before the end of WWII: the world saw the US as a military superpower, and THEN we unleashed the atomic bomb they didn’t even know about.
That is Ilya. He has the tech. Sam had the corruption and the do-anything power grab.
> I don't think LLMs are particularly dangerous
“Everyone” who works in deep AI tech seems to constantly talk about the dangers. Either they’re aggrandizing themselves and their work, they’re playing into sci-fi fear for attention, or there is something the rest of us aren’t seeing.
I’m personally very skeptical that there are any real dangers today. If I’m wrong, I’d love to see evidence. Are foundation models before fine-tuning outputting horrific messages about destroying humanity?
To me, the biggest dangers come from a human listening to a hallucination and doing something dangerous, like unsafe food preparation or avoiding medical treatments. This seems distinct from a malicious LLM superintelligence.
That's what Safe Superintelligence misses. Superintelligence isn't practically more dangerous. Super stupidity is already here, and bad enough.
They reduce the marginal cost of producing plausible content to effectively zero. When combined with other societal and technological shifts, that makes them dangerous to a lot of things: healthy public discourse, a sense of shared reality, people’s jobs, etc etc
But I agree that it’s not at all clear how we get from ChatGPT to the fabled paperclip demon.
We are forgetting the visual element
The text alone doesn’t do it, but add a generated, nearly perfect “spokesperson” that is uniquely crafted to a person’s own ideals and values and that then sends you a video message with that marketing.
We will all be brainwashed zombies
> They reduce the marginal cost of producing plausible content to effectively zero.
This is still "LLMs as a tool for bad people to do bad things" as opposed to "A(G)I is dangerous".
I find it hard to believe that the dangers everyone talks about are simply more propaganda.