Comment by luke-stanley
1 day ago
The behaviour is problematic. Grok 4 might also be relating "one word" answers to Elon's critique of ChatGPT, and might be seeking related context for that. Others demonstrated that slight changes in prompt wording can cause quite different behaviour. Access to the base model would be required to implicate fine-tuning vs pre-training. Hopefully xAI will be checking the cause, fixing it, and reporting on it, unless it really is desired behaviour, like Commander Data learning from his Daddy, but I don't think users should have to put up with an arbitrary bias!