Comment by abnercoimbre

3 months ago

> Even now if you put an effort into prompting and context building, you can achieve 100% human like results.

Are we personally comfortable with such an approach? For example, if you discover your favorite blogger doing this.

> Are we personally comfortable with such an approach?

I am not, because it's anti-human. I am a human and therefore I care about the human perspective on things. I don't care if a robot is 100x better than a human at any task; I don't want to read its output.

Same reason I'd rather watch a human grandmaster play chess than Stockfish.

  • There are umpteen such analogies. Watching the world's strongest man lift a heavy thing is interesting. Watching an average crane lift something 100x heavier is not.

I generally side with those that think that it's rude to regurgitate something that's AI generated.

I think I am comfortable with some level of AI-sharing rudeness though, as long as it's sourced/disclosed.

I think it would be less rude if the prompt was shared along whatever was generated, though.

Should we care? It's a tool. If you can manage to make it look original, then what can we do about it? Eventually you won't be able to detect it.

  • Objectively we should care, because the content is not the whole value proposition of a blog post. The authenticity and trustworthiness of the content come from your connection to the human who made it.

    I don't need to fact-check a ride review from an author I trust, if they actually ride mountain bikes. An AI article about mountain bikes lacks that implicit trust and authenticity. The AI has never ridden a bike.

    Though that reminds me of an interaction with Claude AI. I was at the edge of its knowledge with a problem, and I could tell because I had found the exact forum post it quoted. I asked if a command could brick my motherboard, and it said, "It's worked on all the MSI boards I have tried it on." So I didn't run the command. Mate, you've never left your GPU; you definitely don't have that experience to back that claim.

    • “It's worked on all the MSI boards I have tried it on.”

      I love when they do that. It’s like a glitch in the matrix. It snaps you out of the illusion that these things are more than just a highly compressed form of internet text.

  • We should care if it is lower in quality than something made by humans (e.g. less accurate, less insightful, less creative, etc.) but looks like human content. In that scenario, AI slop could easily flood out meaningful content.

I am 100% comfortable with anybody who openly discloses that their words were written by a robot.

I don't care one bit, so long as the content is interesting, useful, and accurate.

The issue with AI slop isn't how it's written. It's that it's wrong, and that the author hasn't bothered to check it. If I read a post and find that it's nonsense, I can guarantee I won't trust that blog again. Eventually my belief in the accuracy of blogs in general will be undermined to the point where I only bother with bloggers I already trust. That is when blogging dies, because new bloggers will find it impossible to build an audience (assuming people think as I do, which is a big assumption, to be fair).

AI has the power to completely undo all trust people have in content that's published online, and do even more damage than advertising, reviews, and spam have already done. Guarding against that is probably worthwhile.

  • Even if it's right, there's also the question of why you used a machine to make your writing longer just to waste my time. If the output is just as good as the input, but the input is shorter, why not show me the input?