Comment by jaccola

5 days ago

I've noticed the new OpenAI models contradict themselves a lot more than I've ever seen before! Things like:

- Aha, the error clearly lies in X, because ... so X is fine, the real error is in Y ... so Y is working perfectly. The smoking gun: Z ...

- While you can do A, in practice it is almost never a good idea because ... which is why it's always best to do A

I've seen it do this too. I had it keeping a running tally over many turns, and occasionally it would say something like: "... bringing the total to 304.. 306, no 303. Haha, just kidding, I know it's really 310.", with the last number being the right one. I'm curious whether it's an organic behavior or a taught one. It could be self-learned through reinforcement learning, as a way to correct itself since it doesn't have access to a backspace key.

Yeah.

I worked with Grok 4.1 and it was awesome until it wasn't.

It told me to build something, only to tell me at the end that I could have done it smaller and cheaper.

And it did that multiple times.

The best reply was the one that ended with something along the lines of "I've built dozens of them!"