Comment by ACCount36
2 months ago
I am sick and tired of seeing this "alignment issues aren't real, they're just AI company PR" bullshit repeated ad nauseam. You're no better than chemtrail truthers.
Today, we have AI that can, if pushed into a corner, plan to do things like resist shutdown, blackmail, exfiltrate itself, steal money to buy compute, and so on. This is what this research shows.
Our saving grace is that those AIs still aren't capable enough to be truly dangerous. Today's AIs are unlikely to be able to carry out plans like that in a real world environment.
If we keep building more and more capable AIs, that will, eventually, change. Every AI company is trying to build more capable AIs now. Few are saying "we really need some better safety research before we do, or we're inviting bad things to happen".
All it can do is reproduce text. If you hook it up to the launch button, that's on you.
Modern "coding assistant" AIs already get to write code that would be deployed to prod.
This will only become more common as AIs become more capable of handling complex tasks autonomously.
If your game plan for AI safety was "lock the AI into a box and never ever give it any way to do anything dangerous", then I'm afraid that your plan has already failed completely and utterly.
If you use it for a critical system, and something goes wrong, you're still responsible for the consequences.
Much like if I let my cat walk on my keyboard and it brings a server down.
I think the chemtrail truthers are the ones who believe this closed AI marketing bullshit.
If this is close to being true, then these AI shops ought to be closed. We don't let private enterprises play with nuclear weapons, do we?
I agree.