Comment by staticassertion

4 days ago

[flagged]

Conflicting evidence: the fact that literally everyone in tech is posting about how they're using AI.

  • Different sets of people, and different audiences. The CEO / corporate executive crowd loves AI. Why? Because they can use it to replace workers. The general public / ordinary employee crowd hates AI. Why? Because they are the ones being replaced.

    The startups, founders, VCs, executives, employees, etc. crowing about how they love AI are pandering to the first group of people, because they are the ones who hold budgets that they can direct toward AI tools.

    This is also why people might want to remain anonymous when doing an AI experiment. This lets them crow about it in private to an audience of founders, executives, VCs, etc. who might open their wallets, while protecting themselves from reputational damage amongst the general public.

  • I feel like it depends on the platform and your location.

    An anonymous platform like Reddit, and even HN to a certain extent, has issues with bad-faith commenters on both sides targeting someone they do not like. Furthermore, the MJ Rathburn fiasco itself highlights how easy it is to push divisive discourse at scale. The reality is trolls will troll for the sake of trolling.

    Additionally, "AI" has become a political football now that the 2026 Primary season is kicking off, and given how competitive the 2026 election is expected to be and how political violence has become increasingly normalized in American discourse, it is easy for a nut to spiral.

    I've seen fewer issues when these opinions are tied to one's real-world identity, because social pressure gives one less incentive to be a dick.

    • Just wondering, who is it you think is contributing most to the normalization of political violence in the discourse?

      Your answer to that can color how I read your post by quite a bit.

    • In an attention economy, trolling is a rewarded behavior. Show me the incentives and I will show you the outcome.

    • That’s a big reason I am open about my identity, here (and elsewhere, but I’m really only active, hereabouts).

      At one time, I was an actual troll. I said bad stuff, and my inner child was Bart Simpson. I feel as if I need to atone for that behavior.

      I do believe that removing consequences almost invariably brings out the worst in people. I will bet that people are frantically creating trollbots. Some, for political or combative purposes, but also, quite a few, for the lulz.

  • There is a massive difference between saying "I use AI" and what the author of this bot is doing. I personally talk very little about the topic because I have seen some pretty extreme responses.

    Some people may want to publicly state "I use AI!" or whatever. It should be unsurprising that some people do not want to be open about it.

    • The more straightforward explanation for the OP's question is that they realized what they were doing was reckless and, given enough time, was likely to blow up in their face.

      They didn't hide out of a vague fear of being associated with AI generally (of which there is currently no shortage online), but because of this specific, irresponsible manifestation of AI that they imposed on an unwilling audience as an experiment.

  • I personally know some of those people. They are basically being forced by their employers to post those things. Additionally, there is a ton of money promoting AI. However, in private those same people say that AI doesn't help them at all and in fact makes their work harder and slower.

    You are assuming people are acting in good faith. This is a mistake in this era. Too many people have taken advantage of the good faith of others lately, and that has produced a society with very little public trust left.

  • I mean, this is very obviously false. Literally everyone is not. Some people are, some people are absolutely condemning the use, some people use it just a bit, etc.

> You can easily get death threats if you're associating yourself with AI publicly.

That's a pretty hefty statement, especially the 'easily' part, but I'll settle for one well known and verified example.

  • I upvoted you, but wouldn't “verified” exclude the vast majority of death threats since they might have been faked? (Or maybe we should disregard almost all claimed death threats we hear about since they might have been faked?)

  • I'm surprised that you consider this hefty or find this surprising. I think you can just Google this and decide on what you consider "verified". There's quite a lot of "AI drama" out there that I'm sure you can find. I'm reluctant to provide examples just to have you say "that's not meeting my bar for verified" for what I consider such a low stakes conversation.

  • Is it that hard to believe? As far as I can tell, the probability of receiving death threats approaches 1 as the size of your audience increases, and AI is a highly emotionally charged topic. Now, credible death threats are a different, much trickier question.

    • Yes, it's quite hard to believe. That's why one single example is sufficient for me. Then I'll be happy to extrapolate that one example to many more, so it is a low bar, I would say, given the OP's statement about how common this is. Note the 'easily'.


> This is not intended to be AI advocacy

I think it is: It fits the pattern, which seems almost universally used, of turning the aggressor A into the victim and thus the critic C into an aggressor. It also changes the topic (from A's behavior to C's), and puts C on the defensive. Denying / claiming innocence is also a very common tactic.

> You can easily get death threats if you're associating yourself with AI publicly.

What differentiates serious claims from more of the above and from Internet stuff is evidence. Is there some evidence somewhere of that?

  • Feel free to think that I'm lying or whatever. This is just armchair psychologizing.

    This has nothing to do with aggressors or victims. A hypothesis was provided to explain the data we have, the hypothesis was rejected because it seemed unintuitive that someone would have distanced themselves, and I provided an explanation that accounts for why they would have.

    That is, my explanation accounts for the user distancing themselves from AI by appealing to the risk of reputational harm that exists. You don't have to accept that, you can say some other explanation is more plausible, or whatever, but all I have done is provide an explanation - in no way is this an attempt to frame anyone as "aggressor" or "victim".

    If you think this is a "pro AI" or "anti AI" stance: (a) I don't give a shit, it isn't, and you can just think I'm lying; (b) you seem confused about the purpose of the post, which is merely to provide an explanation that accounts for the data.