Comment by ACCount36

2 months ago

I swear, people like you would say "it's just a bullshit PR stunt for some AI company" even when there's a Cyberdyne Systems T-800 with a shotgun smashing your front door in.

It's not "hype" to test AIs for undesirable behaviors before they actually start trying to act on them in real world environments, or before they get good enough to actually carry them out successfully.

It's like the idea of "let's try to get ahead of bad things happening before they actually have a chance to happen" is completely alien to you.

I get what you mean, but they also have a vested interest in making their chatbots seem anything close to a T-800. All the talk from their CEO, and from other AI CEOs, is doomerism about how their tools will replace swathes of people, and they keep selling these systems as the path to real AGI (itself an incredibly vague term that can mean almost anything).

Surely the best way to "get ahead of bad things happening" would be to stop any and all development on these AI systems? In their own words, these things are dangerous and unpredictable and will replace everyone... So why, exactly, do they continue developing them and making them more dangerous?

The entire AI/LLM microcosm exists because its capabilities are hyped up beyond all reason and reality; this is all part of the marketing game.

  • For all we know, those systems ARE a path to AGI: they keep improving at what they can do and gaining new capabilities from version to version.

    If there is a limit to how far LLMs can go, we have yet to find it.

    Dismissing the ongoing AI revolution as "it's just hype" is the kind of shortsighted thinking I would expect from reddit, not here.

    > So why, exactly, do they continue developing them and making them more dangerous?

    Because not playing this game doesn't mean that no one else will. You can either try, or not try and be irrelevant.