Comment by zztop44
2 years ago
They reduce the marginal cost of producing plausible content to effectively zero. When combined with other societal and technological shifts, that makes them dangerous to a lot of things: healthy public discourse, a sense of shared reality, people’s jobs, etc etc
But I agree that it’s not at all clear how we get from ChatGPT to the fabled paperclip demon.
We are forgetting the visual element
The text alone doesn’t do it, but add a generated, nearly perfect “spokesperson” uniquely crafted to a person’s own ideals and values, who then sends you a video message with that marketing.
We will all be brainwashed zombies
> They reduce the marginal cost of producing plausible content to effectively zero.
This is still "LLMs as a tool for bad people to do bad things" as opposed to "A(G)I is dangerous".
I find it hard to believe that the dangers everyone talks about amount to nothing more than more propaganda.
There are plenty of tools that are dangerous while still requiring a human to decide to use them in harmful ways. And remember, it’s not only bad people who do bad things.
That being said, I think we actually agree that AGI doomsday fears seem massively overblown. I just think the current stuff we have is dangerous already.