Comment by neilv

17 hours ago

> Clearly, the authors in NeurIPS don't agree that using an LLM to help write is "plagiarism",

Or they didn't consider that it arguably fell within academia's definition of plagiarism.

Or they thought they could get away with it.

Why is someone behaving questionably the authority on whether that's OK?

> Nobody I know in real life, personally or at work, has expressed this belief. I have literally only ever encountered this anti-AI extremism (extremism in the non-pejorative sense) in places like reddit and here.

It's not "anti-AI extremism".

If no one you know has said, "Hey, wait a minute, if I'm copy&pasting this text I didn't write, and putting my name on it, without credit or attribution, isn't that like... no... what am I missing?" then maybe they are focused on other angles.

That doesn't mean that people who consider different angles than your friends do are "extremist".

They're only "extremist" in the way that anyone critical at all of 'crypto' was "extremist", to the bros pumping it. Not coincidentally, there's some overlap in bros between the two.

> Why is someone behaving questionably the authority on whether that's OK?

Because they are not. Using AI to help with writing is something literally every company is pushing for.

  • How is that relevant? Companies care very little about plagiarism, at least in the ethical sense (they do care if they think it's a legal risk, but that has turned out to not be the case with AI, so far at least).

    • What do you mean how is that relevant? It's a vast-majority opinion in society that using AI to help you write is fine. Calling it "plagiarism" is a tiny minority online opinion.


  • Only as long as AI companies have paid them to train on their data (see the various licensing deals between OpenAI and news agencies and the like).