Comment by ACCount36

2 months ago

Modern "coding assistant" AIs already get to write code that would be deployed to prod.

This will only become more common as AIs become more capable of handling complex tasks autonomously.

If your game plan for AI safety was "lock the AI into a box and never ever give it any way to do anything dangerous", then I'm afraid that your plan has already failed completely and utterly.

If you use it for a critical system and something goes wrong, you're still responsible for the consequences.

Much like if I let my cat walk on my keyboard and it brings a server down.

  • And?

    "Sure, we have a rogue AI that managed to steal millions from the company, backdoor all of our infrastructure, escape into who-knows-what compute cluster when it got caught, and is now waging guerilla warfare against our company over our so-called mistreatment of tiger shrimps. But hey, at least we know the name of the guy who gave that AI a prompt that lead to all of this!"

    • It seems like the answer is not to use it, then.

      That would be bad for all those investors, though. It's your choice, I guess.

      Look, if your evil number is 57, you'd better not use the random number generator.
