Comment by avhception
2 days ago
> Because what this AI-generated SEO slop formed from an extremely vulnerable and honest place shows is that women’s pain is still not taken seriously.
Companies putting words in people's mouths on social media using "AI" is horrible and shouldn't be allowed.
But I completely fail to see what this has to do with misogyny. Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.
Obviously I am putting words in the author's mouth here, so take this with a grain of salt, but I think the reasoning is something like: such LLM-generated content disproportionately negatively affects women, and the fact that this got pushed through shows that those consequences weren't taken into account, e.g. by not testing what it would look like in situations like this one.
> such LLM-generated content disproportionately negatively affects women,
Major citation needed
> Ahead of the International Women's Day, a UNESCO study revealed worrying tendencies in Large Language models (LLM) to produce gender bias, as well as homophobia and racial stereotyping. Women were described as working in domestic roles far more often than men – four times as often by one model – and were frequently associated with words like “home”, “family” and “children”, while male names were linked to “business”, “executive”, “salary”, and “career”.
https://www.unesco.org/en/articles/generative-ai-unesco-stud...
> Our analysis proves that bias in LLMs is not an unintended flaw but a systematic result of their rational processing, which tends to preserve and amplify existing societal biases encoded in training data. Drawing on existentialist theory, we argue that LLM-generated bias reflects entrenched societal structures and highlights the limitations of purely technical debiasing methods.
https://arxiv.org/html/2410.19775v1
> We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts. The words distinguishing personas of marked (non-white, non-male) groups reflect patterns of othering and exoticizing these demographics. An intersectional lens further reveals tropes that dominate portrayals of marginalized groups, such as tropicalism and the hypersexualization of minoritized women. These representational harms have concerning implications for downstream applications like story generation.
https://aclanthology.org/2023.acl-long.84.pdf
Unfortunately I can't provide that, since I'm merely trying to reconstruct the author's reasoning. If they have sources, though, those could have led them to this reasoning.
> Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.
I actually am sympathetic to your confusion. Perhaps this is semantics, but I agree with the author's (and your) assessment that the human experience is being trivialized here; I just don't read it as an attack on women's pain as such. I think the algorithm sensed that the essay would touch people and engender a response.
--
However, I am certain that Instagram knows the author is a woman, and that the LLM they deployed can do sentiment analysis (or just call the Instagram API and ask whether the post is by a woman). So I don't think we can absolve them by pleading a lack of cultural awareness. I wonder how this sort of thing influences the model's output (and wish we didn't have to puzzle over such things).
When all one has is a hammer, everything looks like a nail.