Comment by darkoob12

2 days ago

The question being stupid isn't the problem. The problem is that the model appears fine-tuned to put extra weight on Elon's opinion, as if Elon holds the truth it is supposed and instructed to find.

The behaviour is problematic either way. That said, Grok 4 might be associating "one word" answers with Elon's critique of ChatGPT and seeking out related context. Others have demonstrated that slight changes in prompt wording can produce quite different behaviour, and access to the base model would be needed to pin the cause on fine-tuning vs. pre-training. Hopefully xAI will investigate, fix it, and report on it, unless it really is intended behaviour, like Commander Data learning from his daddy. But I don't think users should have to put up with an arbitrary bias!