Comment by jmward01

5 days ago

"Until this week, however, Anthropic’s Claude product was the only model permitted for use in the military’s classified systems."

I hadn't realized. This does make me consider using alternatives more.

This is most likely because getting SaaS software to conform to federal regulations and to provide the security guarantees the US military requires is difficult and expensive. FedRAMP is onerous.

And LLM products are new-ish. It suggests that Anthropic made federal government contracts a priority while OpenAI, Alphabet, and AWS didn't.

They've always focused on safety (their own safety). They only backed off from US military work once the bad press hit. As usual, they are not an ethical company. I can't say that makes them worse than anyone else, since all corporations are the same; just don't fall for the illusion they create.

If you look at my post history, you can see I'm always calling them out for how sketchy they are.