Comment by mike_hearn

14 days ago

That claim isn't something Peter made up; it's the claim made by Meta's own researchers. You're picking an argument with them, not HN posters.

Anyway, it's trivially true. I think most of us remember the absurdities the first-generation LLMs came out with. Preferring to nuke a city rather than let a black man hear a slur, refusing to help you make a tuna sandwich, etc. They were hyper-woke to a level way beyond what would be considered acceptable even in places like US universities, and it's great to see Facebook openly admit this and set fixing it as a goal. It makes the Llama team look very good. I'm not sure I'd trust Gemini with anything more critical than closely supervised coding, but Llama is definitely heading in the right direction.

Peter’s claim I was asking about was one about being labeled as something via a Pew Research or similar survey. And the response I got was about their personal experience asking questions about unions. Do you think those are the same, equivalent claims?

>Preferring to nuke a city rather than let a black man hear a slur, refusing to help you make a tuna sandwich etc. They were hyper-woke

On its own, all this tells me is that the non-human, non-conscious tool was programmed specifically not to say a slur. To me, that seems like something any reasonable company creating a tool for businesses and the general public might build in while still learning how to refine the tool in other respects.

And I took the Pew survey mentioned above, and it didn’t ask me whether I would say a racial slur.

Finally, if anyone, from any point on the political spectrum, thinks that limiting a tool so it won’t respond with racist terms is a reflection of its overall political leaning, I suggest you look inward.