Comment by ceejayoz
4 days ago
> You are bringing your own interpretation here.
Yeah, that's how/why implying things works. Darkly whispering "Bob is always hanging out with underage girls and buying them expensive gifts" implies a conclusion. It is intended to cause you to "bring your own interpretation".
"Human for reference" heavily implies it should be considered a useful reference. There's no point to it otherwise!
Nah. I'll keep flagging undisclosed low-value AI posts like this. It appears others agree.
I feel like we're having 2 different conversations here. I never said you should not feel what you feel about AI imagery.
"Human for reference" is about scale. There's nothing about "human for reference" that implies any kind of authority or accuracy in the content.
If I put a human in my image standing next to an imaginary creature, or a sci-fi spaceship, or a building I rendered in Blender, am I implying that those things are in fact real?
I am not (although you may choose to assume I am). I am providing a reference for scale with something that everyone will recognize.
Look, I accept your aversion to AI imagery. I certainly don't understand it, but I don't need to understand to accept, so all good.
> I feel like we're having 2 different conversations here.
Yes. You're very focused on real/not-real, but that's not at all the issue.
An artist's depiction for scale (versus a real photo) is fine if it's intentionally drawn to scale. There's no reason to believe ChatGPT did anything here other than go "imagine what would a big beaver look like". The point is "can I trust the reference depiction gives me accurate information?", not "is this a real photo of a living giant beaver?"
> If I put a human in my image standing next to an imaginary creature, or a sci-fi spaceship, or a building I rendered in Blender, am I implying that those things are in fact real?
If you post your worldbuilding art with a "human for reference", assumptions about your imaginary world can be drawn from it. For example, on https://news.ycombinator.com/item?id=44529224
>> I feel like we're having 2 different conversations here.
> Yes. You're very focused on real/not-real, but that's not at all the issue.
I'm actually more focused on the purpose of an image than its provenance or accuracy.
Hence the "visualization" part. I felt like you were specifically upset about the fact that this visualization was made with AI, and I was wondering if you would have felt similarly upset if this had been visualized with a 3D render or a human-drawn illustration.
> An artist's depiction for scale (versus a real photo) is fine if it's intentionally drawn to scale.
So you trust a human artist's ability to draw to scale more than AI's? The human could be doing their best to draw to scale and still get it very wrong. I don't know how you could measure, in aggregate, AI vs. human ability to draw to scale.
> The point is "can I trust the reference depiction gives me accurate information?"
Again, how would a human who just draws or renders a thing next to a human be more trustworthy than AI when it comes to getting the scale correct?
> I have an aversion to undisclosed bullshit invented out of whole cloth being dropped into a discussion of a real academic subject.
Fair point in this context. I guess this answers my first question about whether your objection was AI-specific (which it really felt like, based on your initial language) or just about the fact that the image, regardless of how it was made, was not scientifically accurate enough.