
Comment by jcattle

7 hours ago

I wrote this earlier in reply to another comment like yours:

I thought the part about penalizing mistakes made with the help of LLMs more heavily was quite ingenious.

If you have this great resource (an LLM) available to you, you had better show that you read and checked its output. If there's something in the LLM's output that you don't understand or can't verify, you had better remove it.

If you didn't use an LLM and simply misunderstood something, you will at least have a (flawed) justification for why you wrote it. If there's something flawed in an LLM answer, the likelihood that you have no justification beyond "the LLM said so" is quite high, and that should be penalized more heavily.

One shows a misunderstanding; the other doesn't necessarily show any understanding at all.