Comment by noobermin
13 hours ago
Seeing the usual LLM hypers angrily replying to this on Twitter is such a tell. Just like the comments on the LLM poisoning articles, some people just can't accept that some people don't like LLMs, and they get upset when you put any hindrance in the way of rapid adoption.
It's hard for me to even understand their perspective. Researching references for a published academic paper isn't some incidental busywork task; it's supposed to be a core part of doing research, which is the core of the job. If you don't have sympathy for someone who, say, paid a person on Fiverr to cook up a paper rather than writing it themselves and then didn't even bother to check the references, why is using an LLM and not checking any better?
There is a lot of "throw it against the wall, and if it sticks, write it up" empirical work against benchmarks. It leads to post-hoc rationalization of the work, and to browser plugins that use LLMs to find references for work that is already written. It's a bureaucratic view of "you need a citation for this", where people misunderstand the citation as a checkbox instead of "you need to substantiate this claim, as I, the reviewer, do not accept this as a fact".
It's also hilarious that they complain about this, because from what I've seen, most LLM hypers will talk about something being irrelevant or taken over by AI with no understanding of what that something really is or involves.
> some people don't like LLMs
It's not even that they "don't like LLMs". They just don't like academic fraud! If the references were fabricated with a Markov chain, it would be just as bad!
While this arXiv policy seems reasonable enough, I don't care for the kind of drivel some people post on HN because they don't like LLMs.
I'm here because I enjoy building things, and today that mostly happens with AI. I could do without the often thoughtless comments and conspiracy theories about "LLM hypers" posted by people who don't like LLMs.
Crazy that this is graytexted. So basically the HN consensus is that we need to hype and accelerate LLM adoption everywhere.
Bonkers. At the same time, peak HN.