Comment by fuzzer371

2 days ago

Yup. And they all sound like slop. Read the papers, comprehend the papers, don't make someone else's computer do it for you.

Every scientist I have ever met (myself included) has a backlog of papers to read that never seems to shrink. It really is not trivial to stay up to date on research, even in niche fields, given the huge volume of work being produced.

It is not uncommon for me to read a recently published review and find 2-3 interesting papers in the lot. Plus the daily Google Scholar alerts. It can definitely be beneficial to have an LLM summarize a paper. Of course, at that point one should decide "is this worth reading more carefully?" and actually read at least some parts if needed.

Anti-tech contrarian sentiment happens with every new technology. Someone older than you probably said the same thing about the internet.

  • Yep. Even Windows, the most widely used OS on the planet, has a fringe group of contrarians still today. Amazing.

    • I grew up using Windows and was a fan of it, but now I am a contrarian because of how shitty it has become. The fact that it is widely used is not an argument that it is good. It is widely used because of existing market share and people's reluctance to change.

    • Even tobacco, the second most widely used drug, has a group of contrarians still today. Amazing.

  • True, and they were right about it when they said that. They wouldn't be right anymore, because the Internet has evolved. The same might happen to LLMs, but currently one would be right to call LLM output "slop".

    • Depending on the criticism at the time, they were probably wrong then and are correct now. There have always been trolls and bad actors, but at least there were no mega-corps playing with people's minds.

  • What's sad is that there's so much of that on this site. This page in particular is a disaster, and what we're actually seeing a lot of at HN is claims that real humans are bots. And the people who make these accusations are certain of their validity.

    • Have you considered that this suspicion is because the number of obvious bots has exploded in the last half year or so, particularly after OpenClaw became the latest fad?

      Start going to the profiles of every comment from a green account you see for a week and you’ll see how bad it is.

      There will be friendly fire but unfortunately that’s to be expected when you click the top comment in a thread and realize an account has been posting 100% slop for months.


> Read the papers, comprehend the papers, don't make someone else's computer do it for you

Why not?

Personally, I don't have the specialized knowledge, nor the time, to read and understand papers outside my own 2-3 domains. LLMs do. And I appreciate what they can do for me. They do it better, faster, and more accurately than most 'popular science' writing, they provide better coverage, and they let me interact with the material to any degree or depth I care to, better than any article.

It would be silly to pass up this capability to make my life better simply because random folks on the Internet disparage the quality of the output (contrary to my own experience) and make hand-wavy points about 'someone else's computer' while offering no credible or useful alternative :)

  • How do you evaluate the quality of a summary of a paper you do not have the knowledge to read and understand?

    • > How do you evaluate the quality of a summary of a paper you do not have the knowledge to read and understand?

      Tough question. I think the straightforward answer is that you can't.

      That said, I gain some confidence in an LLM's abilities from its performance on papers in domains that I do understand. Performance won't be uniform across all domains, but the frontier labs do publish capability scores per domain, and that helps me calibrate how much salt to take with the answers it provides.

  • I wonder if you have asked the same LLMs to explain or summarize a paper in one of your own fields, to see whether the output still makes sense.

    It could be that the LLMs are good at stringing words together in a way that seems reasonable when you are not an expert yourself, much like people from other fields can seem very knowledgeable until you compare several of them or hear them talk with each other.

    • > I wonder if you have asked the same LLMs to explain or summarize a paper in one of your fields and see if it still makes sense.

      I have, and it does, hence my confidence in its ability to do the same in other domains. Depending on what you're using it for, it is advisable to maintain some level of quality control (spot checks, sampling, deep dives, more rigorous continuous review), as with any process control.
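      The spot-check idea above can be made concrete. Here is a minimal sketch (all names hypothetical, not from any real tool) of drawing a reproducible random audit sample of LLM summaries for manual review:

      ```python
      import random

      def pick_spot_checks(summaries, rate=0.1, seed=42):
          """Randomly select a fraction of LLM summaries for manual review.

          `summaries` is a list of (paper_id, summary) pairs; `rate` is the
          fraction to audit (at least one item is always chosen). A fixed
          seed makes the audit sample reproducible across runs.
          """
          rng = random.Random(seed)
          k = max(1, round(len(summaries) * rate))
          return rng.sample(summaries, k)
      ```

      With, say, 50 summaries and a 10% rate, this hands you 5 of them to read against the original papers; if those hold up, you have some evidence (not proof) for the rest of the batch.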
