Comment by 2001zhaozhao
18 hours ago
I can't shake the thought that Claude is quite possibly helping to conduct these attacks.
Maybe it's a good thing that Anthropic will no longer be associated with the US government's attacks in another six months.
I still cannot understand what "Claude helping to conduct attacks" could possibly mean. Like, they asked an LLM to use tool calls to look up strategic info, maps, and military asset inventory and then write a plan for where to point the missiles? How is a text generator helpful here? Whose job in the chain of command could it make meaningfully easier?
Target selection?
"Here is 10 petabytes of signals intelligence, you can run queries; give me the hierarchy of my enemy, the house address of anyone within 3 degrees of separation of their leadership or weapons industry, the next house address they're likely to be at if trying to flee my strikes, and the time they're all most likely to be there. Then schedule drone strikes on the houses."
I would not expect that prompt to work unless there's a fairly trivial query that can be crafted to give the right answer when run against the relevant datastore. And if such a query exists, I would hope you have someone on staff well-versed enough to know that and just run it themselves.
Getting publicly kicked to the curb by the Trump admin mere hours before it starts another war is probably the best thing that could have happened to Anthropic. Not sure how well OpenAI's parachuting in is gonna look with hindsight. I have a feeling we won't have to wait that long to find out.
Agreed. Although "starts another war" dismisses 50 years of history. Iran never stopped being at war with the US and Israel, and it was never going to agree to a deal that left it without the nuclear capability to wipe both the US and Israel off the map.
That's funny; I can't shake the thought that China's AI tech could be helping the ayatollahs conduct their retaliation strikes.