Comment by mgraczyk

10 hours ago

> Because it will take you years to read all the information you can get funneled through an LLM in a day

Except you have no idea if what the LLM is telling you is true

I do a lot of astrophysics. LLMs are wrong about nearly every astrophysics question I've asked them, even the basic ones, in every model I've ever tested. It's terrifying that people take these answers at face value.

For research at a PhD level, they have absolutely no idea what's going on. They just make up plausible-sounding rubbish.

  • Astrophysicist David Kipping had a podcast episode a month ago reporting that LLMs are working shockingly well for him, as well as for the faculty at the IAS.[1]

    It's curious how different people come to very different conclusions about the usefulness of LLMs.

    [1] https://youtu.be/PctlBxRh0p4

    • The problem with these long videos is that what I really want to see is the exact questions that were asked and the accuracy of the results.

      Every time I ask LLMs questions I know the answers to, the results are incomplete, inaccurate, or just flat-out wrong much of the time.

      The idea that AI is an order of magnitude better than human coders is flat-out wrong as well. I don't know who he's talking to.

  • Somehow we went from writing software apps and reading API docs to research-level astrophysics.

    Sure, it's not there yet. Give it a few months.

    • It doesn't even work for basic astrophysics.

      I asked ChatGPT the other day:

      "Where did elements heavier than iron come from?"

      The answer it gave was totally wrong; the correct one involves neutron capture (the s-process in AGB stars and the r-process in neutron star mergers and supernovae). It's not a hard question. I asked it again today, and some of the answer was right (!). This is such a low bar for basic questions.