
Comment by qwertox

1 day ago

It would be great if those scientists who use AI without disclosing it get fucked for life.

> It would be great if those scientists who use AI without disclosing it get fucked for life.

There need to be disincentives for sloppy work. There is a tension between quality and quantity in almost every product. Unfortunately, academia has become a numbers game driven by paper mills.

Harsh sentiment. Pretty soon every knowledge worker will use AI every day. Should people disclose spellcheckers powered by AI? Disclosing is not useful. Being careful in how you use it and checking the work is what matters.

  • What they are doing is plainly cheating the system to get their 3 conference papers so they can land their $150k+ job at FAANG. It's cheating, pure and simple, with no value produced.

    • We are only looking at one side of the equation here, in this whole thread.

      This feels a bit like the "LED stoplights shouldn't be used because they don't melt snow" argument.


    • Rookie numbers. After NeurIPS main conference, you’re dumb not to ask for 300K YOY. I watched IBM pay that amount prorated to an intern with a single first author NeurIPS publication.

    • Cheating by people in high status positions should get the hammer. But it gets the hand-wringing what-have-we-come-to treatment instead.

  • > Should people disclose spellcheckers powered by AI?

    Thank you for that perfect example of a strawman argument! No, spellcheckers that use AI are not the main concern behind disclosing the use of AI in generating scientific papers, government reports, or any large block of nonfiction text that you paid for and that is supposed to make sense.

  • People are accountable for the results they produce using AI. So a scientist is responsible for made-up sources in their paper, which is plain fraud.

    • "responsible for made up sources" leads to the hilarious idea that if you cite a paper that doesn't exist, you're now obliged to write that paper (getting it retroactively published might be a challenge though)

  • In general we're pretty good at drawing a line between purely editorial help like using a spellchecker, or even the services of a professional editor (no need to acknowledge), and independent intellectual contribution (must be acknowledged). There's no slippery slope.

  • >Pretty soon every knowledge worker will use AI every day.

    Maybe? There's certainly a push to force the perception of inevitability.

  • False equivalence. This isn't about "using AI"; it's about having an AI pretend to do your job.

    What people are pissed about is the fact their tax dollars fund fake research. It's just fraud, pure and simple. And fraud should be punished brutally, especially in these cases, because the long tail of negative effects produces enormous damage.

    • I was originally thinking you were being way too harsh with your "punish criminally" take, but I must admit, you're winning me over. I think we would need to be careful to ensure we never (or realistically, very rarely) convict an innocent person, but this is in many cases outright theft/fraud when someone is making money or being "compensated" for producing work that is fraudulent.

      For people who think this is too harsh, just remember we aren't talking about undergrads who cheat on a course paper here. We're talking about people who were given money (often from taxpayers) and who committed fraud. This is textbook white collar crime, not some kid being lazy. At a minimum we should be taking all that money back from them and barring them from ever receiving grant money again. In some cases I think fines exceeding the money they received would be appropriate.

  • "Pretty soon every knowledge worker will use AI every day" is a wild statement considering the reporting that most companies deploying AI solutions are seeing little to no benefit, but also, there's a pretty obvious gap between spell checkers and tools that generate large parts of the document for you

Instead of publishing their papers in the prestigious zines - which is what they're after - we will publish them in "AI Slop Weekly" with name and picture. Up the submission risk a bit.