
Comment by tyre

5 days ago

This is where LLMs shine for learning, imo: throwing a paper into Claude, getting an overview, and then being able to ask questions.

Especially for fields that I didn't study at the Bachelor's or Master's level, like biology. Getting to engage with deeper research alongside a knowledgeable tutor assistant has enabled me to go deeper than I otherwise could.

How do you know it's correct? And how do you learn to engage with theory-heavy subjects by doing it this way?

  • How do you know anything is correct? LLMs can be wrong, humans can be wrong, you can be wrong. The motto of the Royal Society is "Nullius in verba", a Latin phrase meaning "take nobody's word for it." That's LITERALLY their motto. It's your job as a scientist and critical thinker to test assumptions, observe reality, and use empirical inquiry to seek truth, and in the process to question ALL sources and test all assertions, from multiple angles if required.

    • Amusing that this comment contains a subtle appeal to authority: "take nobody's word for it" -- but you can take the Royal Society's word for that.

  • You don't - the way I use LLMs for explanations is that I keep going back and forth between the LLM explanation and Google search/Wikipedia. And of course, asking the LLM to cite sources helps.

    This might sound cumbersome, but without the LLM I wouldn't have (1) known what to search for, (2) in a way that lets me incrementally build a mental model. So it's a net win for me. The only gap I see is coverage/recall: when asked for different techniques to accomplish something, the LLM might miss some, and what is missed depends on the specific LLM. My solution here is to ask multiple LLMs and go back to Google search.

If you did not study these topics, chances are good that you do not know what questions to even ask, let alone how to ask them. Add to that the fact that you don't even know whether the original summary is accurate.

  • The original summary is the paper’s abstract, which I read. The questions I ask are what I don’t understand or am curious about. Chances are 100% that I know what these are!

    I’m not trying to master these subjects for any practical purpose. It’s curiosity and learning.

    It’s not the same as taking a class; not worse either. It’s a different type of learning for specific situations.

  • Asking the right questions (in the right language) was important before, and it's even more important with LLMs if you want to get any real leverage out of them.

Isn't there a risk that you're engaging with an inaccurate summary? At some point, inaccurate information is worse than no information.

Perhaps in low-stakes situations it could at least guarantee some entertainment value. Though I worry that folks will get into high-stakes situations without the tools to distinguish facts from smoothly worded slop.

  • Yes. I usually test AI assistants by giving them my own work to summarize, and have nearly always found errors in their interpretation of the work.

    The texts have to be short and high-level for the assistants to have any chance of accurately explaining them.

    • I can probably process anything short and high-level by myself in a reasonable time, and if I can't, I'll know it, whereas the LLM will always simulate perfect understanding.

  • There is, but there is an equal risk if you engage on any topic with any teacher you know. Everyone has a bias, and as long as you don't base your worldview and decisions entirely on a single output, you will be fine.

    • Experimenting with LLMs, I've seen one offer the Cantor set (a totally disconnected topological space) as an example of a continuum, immediately after it gave the (correct) definition of a continuum as a non-empty, compact, connected (Hausdorff) topological space. This is immediately obvious as nonsense if you understand the topic, but if one were attempting to learn from it, it could be very confusing and misleading. No human teacher would do this. (A short sketch below, after these replies, spells out why the Cantor set fails that definition.)


    • It’s my experience that humans are far, far, far more trustworthy about their limitations than LLMs. Obviously, this varies by human.

    • It’s only equal if you consider two outcomes: some risk and no risk.

      And there’s always some risk.

    • Are you just saying that broadly, e.g. that the original 2022 ChatGPT was also an equal risk if used this way?

      You won't be able to verify everything taught from first principles, so you do have to give different sources different credibility at some point, I think.

  • I've been doing this a fair amount recently, and the way I manage it is: first, give the LLM the PDF and ask it to summarize the paper and provide high-level reading points. Then read the paper with that context to verify details, and while doing so, ask the LLM follow-up questions (very helpful for topics I'm less familiar with). Typically, everything is either directly in the original paper or verifiable on the internet, so if something feels off, I'll dig into it. Over the course of ~20 papers, I've run into one or two erroneous statements made by the LLM.

    To your point, it would be easy to accidentally accept things as true (especially the more subjective "why" things), but the hit rate is good enough that I'm still getting tons of value from this approach. As for mistakes, it's honestly not that different from learning something wrong from a friend or a teacher, which, frankly, happens all the time. So it pretty much comes down to the individual's skepticism and desire for deep understanding, which will usually reveal such falsehoods.

  • There is, but just ask it to cite the foundational material. A huge issue with reading papers on topics you don't know much about is that you lack the prerequisite knowledge, and without a professor in the field it can be difficult to really build it. ChatGPT is a huge productivity boost here. Just ask it to cite references and read those.
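For reference on the Cantor set example mentioned above, here is a minimal LaTeX sketch of why the Cantor set meets every clause of the quoted continuum definition except connectedness. It assumes the standard middle-thirds construction; the symbol C below is just a label for that set.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Definition quoted in the thread: a continuum is a non-empty, compact,
% connected (Hausdorff) topological space.
%
% Standard middle-thirds construction of the Cantor set:
\[
  C \;=\; [0,1] \setminus \bigcup_{n=1}^{\infty} \; \bigcup_{k=0}^{3^{n-1}-1}
  \left( \frac{3k+1}{3^{n}},\ \frac{3k+2}{3^{n}} \right)
\]
% C is non-empty, compact (closed and bounded), and Hausdorff, but it is
% totally disconnected: between any two distinct points x < y of C lies some
% removed point z (C contains no interval), so C = (C ∩ [0,z)) ∪ (C ∩ (z,1])
% splits C into two non-empty relatively open pieces. Hence C is not
% connected, and therefore not a continuum, despite satisfying every other
% clause of the definition.
\end{document}
```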

I'm not sure of the exact dollar value of feeling safe enough to ask really stupid questions, the kind I should already know the answer to and would be embarrassed if anyone saw me asking Claude, but it's more than I'm paying them. Maybe that's the enshittification play: an extra $20/month if you don't want it to sound judgey about your shit.