
Comment by Aerroon

1 year ago

I can understand it for AI generated text, but I think there are a lot of people who like AI generated images. Image sites get a ton of people who like them: Civitai gets a lot of engagement for AI generated images, and so do many other image sites.

People who submit blog posts here sure do love opening their blogs with AI image slop. I have taken to assuming that the text is also AI slop, closing the tab, and leaving a comment saying so.

Sometimes this comment gets a ton of upvotes. Sometimes it gets indignant replies insisting it's real writing. I need to come up with a good standard response to the latter.

  • > People who submit blog posts here sure do love opening their blogs with AI image slop.

It sucks, but it doesn't suck any more than what was done in the past: littering the article with stock photos.

    Either have a relevant photo (and no, a post about cooking showing an image of a random kitchen, set of dishes, or prepared food does not count), or don't have any.

    The only reason blog posts/articles had barely relevant stock images was to get people's attention. Is it any worse now that they're using AI generated images?

  • > I need to come up with a good standard response to the latter.

    How about, "I'm sorry, but if you're willing to use AI image slop, how should I know you wouldn't also use AI text slop? AI text content isn't reliable, and I don't have time to personally vet every assertion."

Trying to gaslight your opponent is certainly an option, but not always the best one, nor one in line with the HN guidelines. Frankly, it rarely reduces undesirable behavior, even if you're in the mood to be manipulative.


I don’t understand the problem with AI generated images.

(I very much would like any AI generated text to be marked as such, so I can set my trust accordingly)

  • > I don’t understand the problem with AI generated images.

    Depends on what they are used for and what they are purporting to represent.

    For example, I really hate AI images being put into kids books, especially when they are trying to be psuedo-educational. A big problem those images have is from one prompt to the next, it's basically impossible to get consistent designs which means any sort of narrative story will end up with pages of characters that don't look the same.

    Then there's the problem that some people are trying to sell and pump this shit like crazy into Amazon, which creates a lot of trash books that squeeze out legitimate lesser-known authors and illustrators in favor of pure garbage.

    Quite similar to how you can't really buy general products from Amazon anymore, because drop shipping has flooded the market with ten billion items under different brands that are ultimately the same Wish garbage.

    The images can look interesting sometimes, but often on second glance there's just something "off" about the image. Fingers are currently the best sign that things have gone off the rails.

Despite what people think, there is a sort of art to getting interesting images out of an AI model.

  • That’s not the issue, though: it should be marked as such, or placed in a section where people looking for it can easily find it, instead of being shoved everywhere. To me, placing generated content in human spaces is a strong signal of low effort. On the other hand, generated content can be extremely interesting and useful, and indeed there’s an art to it.

    • I agree. AI generated text and images should be marked as such. In the US there was a push to set standards for watermarking AI generated content (feasible for images/video, but more difficult for text, where a watermark is easier to delete). Unfortunately, the effort to study potential watermarking standards was rescinded as of Jan 22 2025.
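      To illustrate why watermarking is more feasible for images than for text, here is a toy sketch (not any proposed standard's actual scheme) of least-significant-bit embedding: the mark lives in the pixel values themselves, so removing it means re-editing the image, whereas a text watermark typically rides on word choice and vanishes under light paraphrasing. The `embed`/`extract` helpers and the sample pixel list are illustrative, not from any real tool.

      ```python
      def embed(pixels, bits):
          """Hide watermark bits in the least-significant bit of each pixel value."""
          out = list(pixels)
          for i, b in enumerate(bits):
              out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the mark bit
          return out

      def extract(pixels, n):
          """Read back the first n least-significant bits."""
          return [p & 1 for p in pixels[:n]]

      mark = [1, 0, 1, 1, 0, 0, 1, 0]          # 8-bit watermark
      img = [200, 13, 57, 88, 240, 5, 19, 66]  # toy grayscale pixel values
      assert extract(embed(img, mark), len(mark)) == mark
      ```

      Real schemes spread the signal across frequency bands to survive compression and cropping; the point is only that an image gives you a dense numeric carrier that plain text lacks.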

    • They know everyone, especially the ones they seek attention from, has such labels in their muted keywords list.