Comment by beej71
12 hours ago
> I believe that explicitly teaching students how to use AI in their learning process, that the beautiful paper direct from AI is not something that will help them later, is another important ingredient.
IMNSHO as an instructor, you believe correctly. I tell my students how and why to use LLMs in their learning journey. It's a massively powerful learning accelerator when used properly.
Curricula have to be modified significantly for this to work.
I also tell them, without mincing words, how fucked they will be if they use it incorrectly. :)
> powerful learning accelerator
You got any data on that? Because it's a bold claim that runs counter to all results I've seen so far. For example, this paper[^1] which is introduced in this blog post: https://theconversation.com/learning-with-ai-falls-short-com...
[^1]: https://doi.org/10.1093/pnasnexus/pgaf316
Only my own two eyes and my own learning experience. The fact is students will use LLMs no matter what you say. So any blanket "it's bad/good" results are not actionable.
But if you told me every student got access to a 1-on-1 tutor, I'd say that was a win (and there are studies to back that up). And that's one thing LLMs can do.
Of course, just asking your tutor to do the work for you is incredibly harmful. And that's something LLMs can do, as well.
Would you like to have someone available 24/7 who can give you a code review? Now you can. Hell yeah, that's beneficial.
How about when you're stuck on a coding problem for 30 minutes and you want a hint? You already did a bunch of hard work and it's time to get unstuck.
LLMs can be great. They can also be horrible. With the last thing I wrote in Rust, I could have learned nothing by using LLMs. It would have taken me a lot less time to get the program written! But that's not what I did. I painstakingly used them to explore all the avenues I did not understand, and I gained a huge amount of knowledge writing my little 350-line program.
I don't think that study supports your assertion.
Parent is saying that AI tools can be useful in structured learning environments (i.e. curriculum and teacher-driven).
The study you linked is talking about unstructured research (i.e. participants decide how to use it and when they're done).
You can no true Scotsman it, but that study is a structured task. It's possible to generate an ever-more-structured tutorial, but that's asking ever more from teachers. And to what end? Why should they do that? Where's the data suggesting it's worth the trouble? And cui bono?
Students have had access to modern LLMs for years now, which is plenty of time to spin up and read out a study...