Comment by elbci
7 hours ago
Rare here: well written and insightful; I would take this course. I'm curious why he penalized chatbot mistakes more. At first glance it sounds like it just discourages their use, but the whole setup indicates a genuine desire to let it be a possibility. In my mind the rule should be "same penalty, plus extra super cookies for catching chatbot mistakes."
I wrote this before to another comment like yours:
I thought this part, penalizing mistakes made with the help of LLMs more heavily, was quite ingenious.
If you have this great resource available to you (an LLM), you had better show that you read and checked its output. If there's something in the LLM output you don't understand or can't verify, you had better remove it.
If you don't use LLMs and simply misunderstood something, you will at least have a (flawed) justification for why you wrote it. If there's something flawed in an LLM answer, the likelihood that you have no justification beyond "the LLM said so" is quite high, and that should be penalized more heavily.
One shows a misunderstanding, the other doesn't necessarily show any understanding at all.
Here is my guess: marks are usually given for partially correct answers, partly to be less punishing of human error (whether caused by stress or other factors), since there's a good chance the student understood the topic. If instead they used a chatbot but didn't catch the mistake themselves, that indicates less understanding and is marked accordingly.