Comment by TeMPOraL

7 months ago

> I don't see LLMs playing a big role in this yet. People don't derive their opinions on abortion for example from ChatGPT. They derive them from group leaders, personal experience and interactions with their peers.

I think that's slowly changing now. Technically, the views of ChatGPT are sourced from people and reflect a similar mix of group beliefs and personal experience, but they're blended into a much broader (approximation of a) perspective by the LLM, and subject to the limited "reasoning" skills of the models, creating a somewhat unique take (or family of takes, across models and prompts) on the world. And people absolutely do use ChatGPT to refine or challenge their opinions on things[0]. It'll take some time before this starts affecting society in general, and more time before next-gen LLMs pick up on it, completing the loop, but we're definitely on our way there.

> And in this context of small things contributing to something big I would wager that all the small interactions we have with other humans do a lot more to form a society than the small interactions have on building an LLM. So to your original point again: I don't think contributing to an LLM is the biggest contribution online content has on a society.

That's fair, and I won't challenge it. I guess my original point is narrower than I thought. I arrived at it when thinking of blog posts, comments, and self-publishing, and more in terms of contributing discrete knowledge and ideas; I didn't really think much about interactions (like comment threads where people engage in a discussion) or communicating vibes[1]. Most importantly, I evaluated this in the context of whether one is wronged and entitled to compensation when such content gets pulled into LLM training data without their knowledge or consent.

All this to say, because of our exchange here, I'm no longer convinced of my original point (contributing to an LLM being the biggest value most online content can provide); I'll need to rethink it thoroughly. Thanks!

--

[0] - First example that comes to mind: it's well-known that a lot of people are using ChatGPT as a therapist. And in this role, ChatGPT isn't a glorified search engine - it's mostly being asked for opinions, not citations. I'm guilty of that myself, too, with several LLMs from OpenAI and Anthropic. They helped me work through a few minor personal issues, and in a way, you could call that me deriving some opinions from ChatGPT.

[1] - This term is getting increasingly uncomfortable to use for some reason.