
Comment by tonymet

3 days ago

I like Grok because I don't hit the obvious ML-fairness / political-correctness safeguards that other models do.

So I understand the intent behind implementing those safeguards, but they also reduce perceived trust and utility. It's a tradeoff.

Let's say I'm using Gemini. I can tell by the latency or the redraw that I asked an "inappropriate" query.

They do implement censorship and safeguards, just in the opposite direction. Musk previously bragged about going through the data and "fixing" the biases, which just introduces bias when a company like xAI does it. You can do that, and researchers sometimes do, but partisan actors obviously won't be cleaning out bias so much as introducing their own.

  • Sort of. There are biases introduced during training/post-training, and there are the additional runtime/inference safeguards.

    I'm referring more to the runtime safeguards, but also to the post-training biases.

    Yes, we're talking about degree, but the degree matters (see the sketch below).
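
    To make the distinction concrete, here is a minimal sketch (Python, all names hypothetical) of how an inference-time safeguard sits on top of a model whose weights already carry any training-time bias. The extra classifier pass is also a plausible explanation for why a blocked query shows up as added latency or a mid-stream "redraw":

    ```python
    # Hypothetical sketch: training-time bias lives in the model weights;
    # a runtime safeguard is a separate check applied per request.
    from dataclasses import dataclass

    @dataclass
    class ModelReply:
        text: str
        blocked: bool

    def base_model(prompt: str) -> str:
        # Stand-in for the trained model. Any bias introduced during
        # training/post-training is already baked into what this returns.
        return f"answer to: {prompt}"

    def safety_classifier(text: str) -> bool:
        # Stand-in for a runtime/inference safeguard: a second pass that
        # scores the prompt or the draft output. This extra pass is where
        # the user-visible latency comes from.
        banned = {"inappropriate"}
        return any(word in text.lower() for word in banned)

    def serve(prompt: str) -> ModelReply:
        if safety_classifier(prompt):  # pre-generation check
            return ModelReply("I can't help with that.", blocked=True)
        draft = base_model(prompt)
        if safety_classifier(draft):   # post-generation check: the client
            # may have already started rendering the draft, so replacing
            # it here looks like a "redraw" to the user
            return ModelReply("I can't help with that.", blocked=True)
        return ModelReply(draft, blocked=False)

    if __name__ == "__main__":
        print(serve("an inappropriate query"))  # caught by runtime filter
        print(serve("a normal query"))          # passes straight through
    ```

    The point of the layering: removing the `safety_classifier` pass changes runtime behavior immediately, while whatever was done to the data during training persists in `base_model` regardless.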