Comment by lostlogin

9 hours ago

The end of the article is wild.

“I experienced a mental breakdown at 22. I had panic attacks and severe social anxiety…

…I still use AI, but very carefully”.

It reads like an alcoholic describing their new plan where they only drink a little bit.

Is that really so crazy? People who overcome addictive eating disorders still have to eat a little bit. LLMs are going to be pervasive in all aspects of human society so avoiding them will be much harder than avoiding alcohol.

  • Well, eating is not optional; LLM use certainly is. If the risk is that you might jump into a psychosis and hurt yourself and others, it's probably not worth it.

  • Alcohol is not a necessity, just to be fair. In that sense alcoholism is not a simple eating disorder; it is a drug addiction.

  • From what I have seen, people who get through eating disorders describe it as having a healthier relationship with food.

    Getting to that point requires doing substantial work.

AI guardrails continue to make safety improvements — comparing a rapidly evolving advanced technology to a drug is a broken analogy to me. One gets safer over time; the other gets more dangerous.

But also, the risk profile and statistics are radically different: alcohol is inherently dangerous (toxic) to everyone. Chatbots are just another tool — there is a small percentage of people with an unhealthy relationship to any tool, but that does not make the tool a dangerous drug.

  • The underlying models are improving at the same time as the guardrails and I'm not convinced the guardrails will keep up, especially given the perverse incentives. At some point the endless investor billions will dry up and a whole bunch of folks will be desperate to monetize their AI projects any way possible.

This is, like, literally 47.1% of the posts here and elsewhere. "AI is terrible and is a scourge on humanity, but I used it to do this one thing, and..."

Same shit as social media.