Comment by anigbrowl

20 hours ago

It's a clear enough moral issue that whichever side of it you end up on is likely to have life-shaping consequences 5 or 10 years down the line. It's predictable that there will be domestic or international conflict with a high cost in lives and political coherence over that timescale, and being someone who 'was in AI' at a government-scale vendor is qualitatively different from being a database admin or font designer or UX specialist.

Substantively, individual employees of these firms may have little or no actual impact on this. But AI is ubiquitous enough and disruptive enough that being professionally connected with it at a time of great geopolitical instability has the potential to be a very very bad look later.

But hasn’t that always been true at Google? They’ve been military contractors for decades.

  • No, because 'military contractor' is vague; people don't associate logistics or mapping info with death directly, and they assign responsibility to some generic person in uniform. 'AI systems that hunt down and kill you' is the sort of sci-fi nightmare people relate to personally.