Comment by linhns
17 hours ago
This is the way it should be. AI to speed up the understanding process, and one final evaluation without any help to cement the understanding.
I don't think the final evaluation is to "cement the understanding" so much as _verify_ that students have taken accountability for their own learning process.
^ This
This is what a student who truly wants to learn, rather than simply complete a course or certification, would do: use AI tools to explain and learn, but not outsource the learning process itself to the tools.
> AI to speed up the understanding process
What’s your hypothesis of how AI can accelerate how your brain understands something?
> What’s your hypothesis of how AI can accelerate how your brain understands something?
What are your beliefs / hypotheses about how having a human teacher can help you understand something?
AI explanations are no longer terrible garbage. The LLM might not be doing original research, but it has definitely read the textbook. :/ And 1000 related works.
You shouldn't believe the LLM when it tells you how to micro-optimize your code, but you can take suggestions as a starting point and verify them.
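A minimal sketch of that "starting point, then verify" workflow: suppose an LLM suggests that `"".join(...)` beats repeated `+=` for building a string. Rather than taking that on faith, time both variants on your own workload (the function names and the workload size here are illustrative, not from the thread):

```python
import timeit

# An LLM might claim "".join is faster than += for building strings.
# Don't trust the claim; measure it on a workload like yours.

def concat_plus(n):
    # Build a string with repeated concatenation.
    s = ""
    for _ in range(n):
        s += "x"
    return s

def concat_join(n):
    # Build the same string with str.join over a generator.
    return "".join("x" for _ in range(n))

n = 10_000
t_plus = timeit.timeit(lambda: concat_plus(n), number=100)
t_join = timeit.timeit(lambda: concat_join(n), number=100)

# Correctness check: both approaches must produce identical output,
# otherwise the timing comparison is meaningless.
assert concat_plus(n) == concat_join(n)

print(f"+=   : {t_plus:.4f}s")
print(f"join : {t_join:.4f}s")
```

Which variant wins can depend on the interpreter version and the workload, which is exactly why the measurement, not the suggestion, should be what you believe.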
> What are your beliefs / hypotheses about how having a human teacher can help you understand something?
One precondition is that a human teacher can challenge you independently of your own self-control/will.
This is a bad case of whataboutism (I hate this word, but it describes the answer you gave). What do you mean by accelerating understanding? Maybe they are good as suggestion engines, but it is far too early to state what you did.
I have some success with this method: I try to write an explanation of something, then ask the LLM to find problems with the explanation. Sometimes its response leads me to shore up my understanding. Other times its answer doesn’t make sense to me and we dig into why. Whether or not the LLM is correct, it helps me clarify my own learning. It’s basically rubber duck debugging for my brain.
Quick, easy access to explanations and examples on complex topics.
In my case, learning enough trig and linear algebra to be useful in game engine programming / rendering has been made a lot easier / more efficient.
The same way Google or Wikipedia enables learning.
I disagree. I think we should treat AI tools like calculators for the exam.