Comment by transcriptase
4 days ago
Having spent tens of thousands of hours contributing to scientific discovery by reading dense papers for a single piece of information, reverse engineering code written by biologists, and tweaking graphics to meet journal requirements… I can say with certainty it’s already contributing by allowing scientists to spend time on science instead of spending an afternoon figuring out which undocumented argument in an R package from 2008 changes chart labels.
This. Even if LLMs ultimately hit some hard ceiling as substantially-better-Googling automatons, they would already accelerate all thought-based work across the board, and that’s the level they’re at now (arguably they’re beyond it).
We’re already at the point where these tools are removing repetitive/predictable tasks from researchers (and everyone else), so clearly they’re already accelerating research.
Not sure how you get around the contamination problems. I use these every day, and they routinely make errors that are hard to detect.
They are not reliable tools for any tasks that require accurate data.
That is not what they mean by contributing to scientific discovery.
Perhaps not, but my point stands: from personal experience and knowing what’s going on in labs right now, AI is greatly contributing to research even if it’s not doing the parts most people think of when they think science. In the near term, even a sufficiently advanced AI isn’t going to start churning out novel hypotheses or collecting non-online data without first being able to secure funding to hire grad students, or whatever robots can replace those.