Comment by techpineapple

8 days ago

I kind of wonder if I care whether comments are from real people, and I probably don’t as long as they’re thought-provoking. I actually thought it would be an interesting experiment to make my own walled-garden LLM link aggregator, sans all the rage bait.

I mean, I care if meetup.com has real people, and I care if my kids’ school’s Facebook group has real people, along with other forums where there is an expectation of online/offline coordination, but hacker news? Probably not.

I feel like part of why comments here are thought-provoking is that they're grounded in something? It's not quite coordination, but if someone talks about using software at a startup or small company, I do assume they're being genuine about that, which tells you something about whether it's actually practical in the real world.

And use cases like bringing up an issue on HN to get a company to reach out to you and fix it would probably be much harder with LLMs taking up the bandwidth.

  • Yeah, this is the trick. For example, with the sort of private hacker news I was talking about creating (I haven’t created it yet), I sort of suspect that getting the comments to not sound canned would take a lot of prompt engineering, and I also suspect that even if an individual comment is good, the style over time would be jarring.

    On the internet, maybe you have people using character.io or other complex prompt setups to make the comments sound more diverse and personal. Who knows.

    I wonder how many different characters you would need on a forum like hacker news to pass a sort of collective Turing test. Something like the sketch below, just scaled way up, is roughly what I have in mind.
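
    (A minimal sketch, assuming an OpenAI-style chat API; the persona prompts and model name are made up, not anything I've actually built:)

        # Rotate a handful of persona prompts so the comments don't all share one voice.
        import random
        from openai import OpenAI

        client = OpenAI()

        PERSONAS = [
            "You are a terse backend engineer who leans on experience at small startups.",
            "You are a skeptical security researcher who asks clarifying questions.",
            "You are an upbeat indie hacker who relates everything to side projects.",
        ]

        def generate_comment(story_title: str, story_text: str) -> str:
            persona = random.choice(PERSONAS)
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[
                    {"role": "system", "content": persona},
                    {"role": "user", "content": f"Write a short comment on this post:\n\n{story_title}\n\n{story_text}"},
                ],
            )
            return response.choices[0].message.content

    My guess is that three personas like this would get old fast; the open question is how many you'd need before the feed stops feeling canned.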

I could understand that position, except that I don't think most LLM-generated text is there for the purpose of producing thought-provoking conversation.

My expectation would be that anyone going to the trouble of putting an LLM-driven comment bot online is doing it for some ulterior motive, typically profit or propaganda.

Given this, I would equate not caring about the provenance of a comment with not caring whether you're being intentionally misinformed for some deceptive purpose.

Agree. Another complicating factor for detection is that I don't personally mind seeing a sliver of self-promotion in a comment/post if I feel it's "earned" by the post being on-topic and insightful overall. If such a comment were posted by an LLM, I think I would actually be fine with that.