Comment by sbarre

3 days ago

I feel like we're having 2 different conversations here. I never said you should not feel what you feel about AI imagery.

"Human for reference" is about scale. There's nothing about "human for reference" that implies any kind of authority or accuracy in the content.

If I put a human in my image standing next to an imaginary creature, or a sci-fi spaceship, or a building I rendered in Blender, am I implying that those things are in fact real?

I am not (although you may choose to assume I am). I am providing a reference for scale with something that everyone will recognize.

Look, I accept your aversion to AI imagery. I certainly don't understand it, but I don't need to understand to accept, so all good.

> I feel like we're having 2 different conversations here.

Yes. You're very focused on real/not-real, but that's not at all the issue.

An artist's depiction for scale (versus a real photo) is fine if it's intentionally drawn to scale. There's no reason to believe ChatGPT did anything here other than go "imagine what a big beaver would look like". The point is "can I trust the reference depiction gives me accurate information?", not "is this a real photo of a living giant beaver?"

> If I put a human in my image standing next to an imaginary creature, or a sci-fi spaceship, or a building I rendered in Blender, am I implying that those things are in fact real?

If you post your worldbuilding art with a "human for reference", assumptions about your imaginary world can be drawn from it. For example, on https://news.ycombinator.com/item?id=44529224

  • >> I feel like we're having 2 different conversations here.

    > Yes. You're very focused on real/not-real, but that's not at all the issue.

    I'm actually more focused on the purpose of an image than its provenance or accuracy.

    Hence the "visualization" part. I felt like you were specifically upset about the fact that this visualization was made with AI, and I was wondering if you would have felt similarly upset if this had been visualized with a 3D render or a human-drawn illustration.

    > An artist's depiction for scale (versus a real photo) is fine if it's intentionally drawn to scale.

    So you trust a human artist's ability to draw to-scale more than AI? The human could be doing their best to draw to-scale and still get it very wrong. I don't know how you could measure, in aggregate, AI vs. human when it comes to ability to draw to-scale.

    > The point is "can I trust the reference depiction gives me accurate information?"

    Again, how would a human who just draws or renders a thing next to a human be more trustworthy than AI when it comes to getting the scale correct?

    > I have an aversion to undisclosed bullshit invented out of whole cloth being dropped into a discussion of a real academic subject.

    Fair point in this context. I guess this perhaps answers my first question about whether this was AI-specific (which it really felt like based on your initial language) or just about the fact that the image - regardless of how it was made - was not scientifically accurate enough.

    • > So you trust a human artist's ability to draw to-scale more than AI?

      Yes. There is a pretty good chance that someone drawing even a stick figure representation of the two to post “for reference” has at least some relevant information on the sizes. People don’t tend to draw giant extinct beavers for fun out of the blue.

      AI slop, in this context, is the same as a child’s crayon drawing of a dragon. It may be cute. Pretty. It may make you proud. But it offers little scientific value for assessing the size of real dragons.

      AI makes convincing-looking art, but that's worse, because people fall for it in ways unlikely with crayon. And it can be done fast.