Comment by skybrian

16 hours ago

If we’re sharing vibes, “our product is dangerous” seems like an unusual sales tactic outside the defense industry. I’m doubtful that’s how it works?

Meanwhile, another reason to make a press release is that you’ll be criticized for the coverup if you don’t. Also, it puts other companies on notice that maybe they should look for this?

Yeah. You'd think nuclear power would be incredibly popular, given that "our product is dangerous" is apparently a genius marketing strategy. After all, if it can make a whole region of Ukraine uninhabitable and be weaponized to turn people into shadows on pavement, it can surely power your fridge. Yet oddly, companies making nuclear reactors always market them as very safe instead of leaning into the danger.

I think it might be an "our product IS dangerous, but look, we're on top of it!" kind of deal. Still leaves a funny taste either way.

The bulk of OpenAI's and Anthropic's statements about doomsday AGI and AI safety in general also present the company as the sole ethical gatekeeper of the technology, whom we must trust and protect lest its unscrupulous rivals win the AI race. So this article is very much in line with that marketing strategy.

>unusual sales tactic outside the defense industry. I’m doubtful that’s how it works?

Given the valuations and the money these companies burn through on marketing, they basically need to play by the same logic as defense companies. They're all running on "we're reinventing the world and building god" to justify their spending; "here's a chatbot (like 20 other ones) that's going to make you marginally more productive" isn't really going to cut it at this point. They're in too deep.