Comment by ninjagoo
2 days ago
Lot of folks on here saying they only want to converse with other humans, for various reasons.
But here's the funny thing. I'm pretty sure the frontier models are now smarter than I am, more eloquent, and definitely more knowledgeable, especially the paid versions with built-in search/research capability. I'm also fairly certain that the number of original thoughts in a given discourse on the Internet is fairly small, I know that's certainly the case for me.
So whither humans now?
If I'm looking for human engagement, forums make sense. But for an informed discussion, I'm less certain that it's wise to be exclusionary. There is a case to be made that lower quality comments should be hidden or higher quality comments should be surfaced, but that's true regardless of the source, innit?
Nothing is stopping you from pasting an HN link into your chatbot of choice for an "informed" discussion.
The rest of us want the benefit of lived experience and genuine curiosity in discussions. LLMs are fundamentally incapable of both.
This reminds me of conversations around plagiarism that come up when working with students: the question of "this other person expressed this idea better than I can, so why can't I just use their writing?"
Because I want to know what you think, because putting our thoughts into words and sharing them is an important part of thinking, because we'll lose these skills if we don't use them, because in thinking for yourself you might come up with something interesting that nobody has ever thought before.
Of course, writers are allowed to reference and use other people's writing: with proper attribution. I don't have a problem with people sharing quality AI-generated content when it's labelled as such. The issue is that most people writing AI comments don't do this, which is itself probably the strongest indictment of the practice.
That's hardly fair? Most forum users, even on HN, rarely provide sources for data/insights that they reference. I haven't seen that at work either most of the time.
One could argue that it should be, but it's just not the same standard to which students, papers, and Wikipedia articles are held :)
> If I'm looking for human engagement, forums make sense. But for an informed discussion, I'm less certain that it's wise to be exclusionary. There is a case to be made that lower quality comments should be hidden or higher quality comments should be surfaced, but that's true regardless of the source, innit?
Good news then, you're currently on a forum! So we all agree that humans > AI, regardless of your thoughts on the intelligence behind it.
> Good news then, you're currently on a forum! So we all agree that humans > AI
I made the post to specifically disagree with that notion: I think that excluding top-quality AI output from the discussion will reduce the overall quality of forums, because it's now the case that top-tier LLMs > average human.
How do we assess top-quality output? The moderation tools for that already exist. Doesn't scale well? I'm guessing the days when AI can do it cheaper and faster are nigh.
Would you hang out with a friend over coffee or something who, rather than conversing with you, recorded your side of the conversation directly into an LLM and then played you back the result? Seems like a good way to kill a relationship.
A significant part of my friends and family conversations already involve referencing LLMs for scoping, explanations, deeper dives, insights etc. And it's not just me, they use LLMs more than I do. It helps move discussions along. Where before conversation would get bogged down in disputes, now we cover more ground.
If it helps, my friends and family tend to have at least a master's, and the majority have PhDs.
> Would you hang out with a friend over coffee or something who, rather than conversing with you, recorded your side of the conversation directly into an LLM and then played you back the result?
I think the difference is that you're imagining the LLM replaces the conversationalist, but as I said above, my lived experience is that the LLM provides grounding to the discussion, effectively having replaced internet search as a better, faster, broader, smarter library. It doesn't kill the conversation, it makes it better.
> If it helps, my friends and family tend to have at least a master's, and the majority have PhDs.
Those aren't super rare these days, and I don't know why arbitrary credentials would matter for this purpose. But incidentally, the notion that they would matter in conversation at all kind of speaks to the type of engagement you might be having with them, which may indeed be different from what I care about.
Personally, I don't find people all that engaging the more inclined they are to go looking up answers; to me it signals a discomfort with uncertainty, and being comfortable not knowing is necessary for a fun conversation. If someone has an answer because of their experience, great; otherwise it's OK to not know in the moment and continue on.
In one case, I had a friendship kind of fizzle out over this. We'd be hanging out and I'd express some curiosity that I hoped he'd build on with his own experience or his own sense of wonder, but because he only cared about authoritative facts, he'd google the answer and get frustrated that I only wanted his opinion on what the answer might be. The actual fact was incidental, and this conflict regularly led to an impasse where I'd clarify that I don't care what the internet says. And I'm fine with that; he just wasn't interested in thought exercises.
A concrete, mundane hypothetical: I might pose "How do you think the Iran war might impact gas prices here?" and they'd just look up the history and trends, then kind of stop there. Dull. I want a human response: speculate, build on it, let yourself be wrong.