Comment by caseyy

1 year ago

OpenAI knows the tool it markets as “research” does not pass muster. It hallucinates, misquotes sources, and does not follow the formal inference logic used in research.

AI slop already produces many plausible-sounding articles used as infotainment and in academia. We already know this slop adds a lot of noise to the signal, and that poor signal slows actual research in both cases. But until now, the slop wasn’t masquerading specifically as research! It was presented as an assistant, which provides no accuracy guarantees. “Research”, by the word’s common meanings, does.

This is why it will do harm. There is no doubt in my mind. And I believe OpenAI knows it. They have quite smart engineers, certainly clever enough to figure it out.

If your concern is primarily about researchers in academia using this and believing what it says without skepticism, then higher education has failed them.

And if you think that all published “research” was guaranteed to be accurate before AI tools became available, then I think you should start looking more critically at sources yourself.

  • Regardless of the “no true scientist would use it” argument and the argument about what I believe, the fact is that LLM slop is flooding academia.

    AI companies promising their LLMs will now do “research” won’t help.

    And research done outside academia (like business research or independent inquiry) will be muddied further, with more people misled.