Comment by seanmcau
2 days ago
It would/will be extremely irresponsible to put non-deterministic and fallible models in charge of weapons. We are not close to having solved the problem of ensuring AI pursues good outcomes
I agree completely. Anybody who uses these models extensively knows they can do something amazing for one prompt and something awful for another. But wars are unfortunately real, countries have real enemies, and those countries don't want a limited model.
How exactly does the "limitation" affect any war the US may be in with another country?
Probably drones automatically targeting and killing people, with a thinking model guessing whether someone is a Russian or a Ukrainian, is a red line.
Elon Musk already refused to let Starlink be used for remote killing, but at some point all these technologies will be nationalized, as they are too important not to be.