Comment by blibble

2 months ago

given the "AI" industry's long term goals, I see contributing in any way to generative "AI" to be deeply unethical, bordering on evil

which is the exact opposite of improving the world

you can extrapolate to what I think of YOUR actions

I imagine you think I'm an accelerant of all of this, through my efforts to teach people what it can and cannot do and provide tools to help them use it.

My position on all of this is that the technology isn't going to be uninvented, and I very much doubt it will be legislated away, which means the best thing we can do is promote the positive uses and disincentivize the negative uses as much as possible.

  • I don't see you as an accelerant

they're using your exceptional reputation as an open-source developer to push their proprietary parasitic products and business models, with you thinking you're doing good

    I don't mean to be rude, but I suspect "useful idiot" is probably the term they use to describe open source influencers in meetings discussing early access

You know, I'm realizing that in my head I'm comparing this to Nazism and Hitler. I'm sure many people thought he was bringing change to the world, and since it was going to happen anyway, we should all get on board with it. In the end there was a reckoning.

IMHO there are going to be consequences of these negative effects, regardless of the positives.

Looking at it in this light, you might want to get out now, while you still can. I'm sure it's going to continue, and it's not going to be legislated away, but it's still wrong to be using this technology in the way it's being used right now, and I will not be associated with the harmful effects this technology is being used for because a few corporations feel justified in pushing evil onto the world wrapped in positives.

    • Whoa, did not expect a Hitler comparison here.

      I think of LLMs as more like the invention of cars or railways: enormous negative externalities, but provided enough benefit to humanity that we tend to think they were worthwhile.

      Are the negatives of LLMs really that bad? Most of them look more like annoyances to me.

The ones that upset me the most are the ChatGPT psychosis episodes which have led to loss of life. I'm reassured by the fact that the AI labs are taking genuine steps to reduce the risk of that happening, which seems analogous to me to the development of car safety features.

Your post, full of well-formed English sentences, is also going to contribute to generative AI, so thanks for that.

  • oh I've thought of that :)

    my comments on the internet are now almost exclusively anti-"AI", and anti-bigtech