Comment by TZubiri

3 days ago

Fun technical note:

>"Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation. ”"

GPT models generate tokens from left to right; they are causal. That prompt causes the model to lock into an answer and only then generate the explanation after the fact. This is why you sometimes see the failure mode "The answer is X because the answer can't be X, so the answer is Y".

Asking for the Yes/No to be placed at the end would put the chain-of-thought (CoT) before the answer and generate objectively better results.
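
Here's a minimal sketch of the two orderings, assuming the OpenAI Python client (>=1.0); the question text and model name are placeholders, not anything DOGE actually used:

```python
# Minimal sketch: answer-first vs. reasoning-first prompt ordering.
# Assumes OPENAI_API_KEY is set; question and model name are hypothetical.
from openai import OpenAI

client = OpenAI()

question = "Does this application meet the stated criteria?"  # placeholder

# Answer-first: the model commits to Yes/No before generating any reasoning.
answer_first = (
    f"{question}\n"
    "Begin with 'Yes.' or 'No.' followed by a brief explanation."
)

# Reasoning-first: the explanation is generated first, so the final
# Yes/No token can condition on it.
reasoning_first = (
    f"{question}\n"
    "Give a brief explanation first, then end with a single line "
    "containing only 'Yes.' or 'No.'"
)

for prompt in (answer_first, reasoning_first):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any non-thinking chat model
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content, "\n---")
```

The only difference between the two prompts is where the verdict lands in the output stream; everything else about the request is identical.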

I used to think prompt engineering was a bullshit term, as in you don't need any training at all to use this thing. But apparently you do need a little bit.

So if the idea of feeding an application into ChatGPT isn't dumb enough on its own, consider that they failed to even use ChatGPT correctly, which apparently is a thing.

It matters less these days with thinking models. They automatically inject some extra content before the answer, for basically the same purpose. But if you're using something simpler that responds immediately, then yes, the order matters.

  • Were thinking models even widely available yet when DOGE was doing its thing?