Comment by CamperBob2

17 hours ago

Let's wait one more year, and perhaps everyone who didn't fall victim to these "slimming pills" for developers' brains will be glad about the choice they made.

In that year, AI will get better. Will you?

AI is only getting better at consuming energy and wasting the time of the people who communicate with this T9. However, if talented engineers continue to use it, it might eventually produce more accurate replies as a result.

To answer your question: no matter how much I personally degrade or improve, I will never be able to produce anything even remotely comparable to the negative impact AI is having on humanity these days.

  • I see this logical pairing a lot.

    1) AI is basically useless, a mere semi-random word generator. 2) And it is so powerful that it is going to hurt (or even destroy) humanity.

    This is called "having your cake and letting it eat you too".

    • There's nothing incongruous about that pairing (though I also think you're not being entirely fair in describing what your parent comment said). Atom bombs also fit: they are basically useless, and they are so powerful that they can destroy humanity.

      With LLMs, the destruction is less immediate and overt, but chatbots do provable harm to people, and can be manipulated to warp our sense of reality.

      https://en.wikipedia.org/wiki/Chatbot_psychosis

      People are having romantic relationships with their chatbots and committing suicide because of them. That is harm.

    • That's a dishonest framing of their argument. There's nothing logically inconsistent in believing that wide adoption of AI tools causes developers' skills to atrophy and that the tools also fail to deliver on the hype/promises.

      You're inserting "destroy humanity" when OP is suggesting the problem is offloading all thinking to an unreliable tool (I don't entirely agree with their position, but it's defensible and not what you stated).

    • There's no point arguing with someone who's not only wrong, but who doesn't care if they're wrong. ("I will never be able to produce anything even remotely comparable to the negative impact AI is having on humanity these days.")

      There are basically no conditions under which either party can or will reach legitimate common ground with the other. Sucks, but that's HN nowadays.