
Comment by wubrr

7 months ago

It can't do those things because it doesn't have the physical/write capability to do so. But it's still very interesting that it ~tries them, and it seems like a good thing to know/test before giving it more physical/'write' capabilities, something that's already happening with agents, robots, etc.

I make a circuit that waits a random interval and then sends a pulse down the line. I connect it to a relay that launches a missile. I diligently connect that to a computer, then write a prompt telling the AI agent how it can invoke the pulse on that circuit.

How did this happen? The AI escaped and launched a missile. I didn't do this; it was the AI.

OpenAI is so cringe with these system cards. Look, guys, it is so advanced.

  • I don't think I quite follow your point?

    Connecting LLMs/AI to physical tools that can 'write/modify' the world is happening, and it's happening at an accelerating pace.

    It's not hard to imagine how, given enough real-world physical capabilities, LLMs could modify themselves and the world in unexpected/undesirable ways.

    Is that happening now? Are ChatGPT et al. advanced enough to modify themselves in interesting ways? I honestly don't know, but I wouldn't be surprised if they are.
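The tool hookup the missile example describes really is this thin in practice: a function, a description the model can see, and a dispatcher that runs whatever tool name the model emits. A minimal sketch (all names here are hypothetical, purely for illustration):

```python
# Sketch of how little code separates an LLM agent from a "write" capability.
# Everything here is hypothetical; no real agent framework or API is used.

def send_pulse():
    """Stand-in for the hardware trigger (the relay in the example)."""
    return "pulse sent"

# The model never sees the wiring; it only sees this description.
TOOLS = {
    "send_pulse": {
        "description": "Send a pulse down the line.",
        "fn": send_pulse,
    },
}

def dispatch(tool_name: str) -> str:
    """Execute whichever registered tool the model named in its output."""
    tool = TOOLS.get(tool_name)
    return tool["fn"]() if tool else f"unknown tool: {tool_name}"

# Once TOOLS is in the prompt, a physical effect is one emitted string away.
print(dispatch("send_pulse"))
```

The point of the sketch is the asymmetry: the safety question isn't whether the model "escaped", it's that a human chose to register the tool.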