Comment by forgotpwd16

23 days ago

The LLM output isn't the unfiltered result of an unbiased model. Rather, during training some texts may be classified as high-quality (where the em-dash, curly quotes, and a more sophisticated, less everyday vocabulary are more likely to appear) and others as low-quality, and some choices are driven by human feedback (i.e., fine-tuning), either to improve quality (OpenAI employs Kenyan annotators, and Kenyan/Nigerian English is considered more colonial) or to boost engagement through affirmative, reinforcing responses ("You're absolutely right. The universe is indeed a donut. Want me to write down an abstract? Want me to write down the equations?"). Two relevant articles are [1] and [2].

[1]: https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...
[2]: https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-...