Comment by woodrowbarlow

9 hours ago

here's an example of how model censorship affects coding tasks: https://github.com/orgs/community/discussions/72603

Oh, lol. This, though, seems to be something that would only affect US models... ironically

Conversely, you get the same issue if you have no guardrails; i.e., Grok generating CP makes it completely unusable in a professional setting. I don't think this is a solvable problem.

  • Why does having the ability to do something mean it is ‘unusable’ in a professional setting?

    Is it generating CP when given benign prompts? Or is it misinterpreting normal prompts and generating CP?

    There are a LOT of tools that we use at work that could be used to do horrible things. A knife in a kitchen could be used to kill someone. The camera on our laptop could be used to produce CP. You can write death threats with your Gmail account.

    We don’t say knives are unusable in a professional setting because they have the capability to be used in crime. Why does AI having the ability to do something bad mean we can’t use it at all in a professional setting?

  • I'm struggling to follow the logic on this. Glocks are used in murders, Proton has been used to transmit serious threats, C has been used to program malware. All can be legitimate tools in professional settings where the users don't use them for illegal stuff. My Leatherman doesn't need a tipless blade to keep me from stabbing people, because I'm trusted not to stab people.

    The only reason I don't use Grok professionally is that I've found it to not be as useful for my problems as other LLMs.

  • > i.e., Grok generating CP makes it completely unusable in a professional setting

    Do you mean it's unusable if you're passing user-provided prompts to Grok, or do you mean you can't even use Grok to let company employees write code or author content? The former seems reasonable, the latter not so much.

I can't believe I'm using Grok... but I'm using Grok...

Why? I have a female salesperson, and I noticed she gets a different response from (female) receptionists than my male salespeople do. I asked ChatGPT about this, and it outright refused to believe me. It said I was imagining this and implied I was sexist or something. I ended up asking Grok, and it acknowledged the phenomenon and suggested some solutions. It was genuinely helpful.

Further, I brought this up with some of my contract advisors, and one of my female advisors mentioned the phenomenon before I had even offered a hypothesis: 'Girls are just like this.'

Now I use Grok... I can't believe I'm saying that. I just want the right answers.