
Comment by bodge5000

22 days ago

To me it seems like it'd only get more visible as it gets more normal, or at least more predictable.

Remember back in the early 2000s, when people would photoshop one animal's head onto another and trick people into thinking "science has created a new animal"? That obviously doesn't work anymore, because we know that's possible, even relatively trivial, with Photoshop. I imagine the same will happen here: as AI writing gets more common, we'll begin a subconscious process of determining whether the writer is human. That's probably a bit unfairly taxing on our brains, but we survived Photoshop, I suppose.

We didn't really survive Photoshop.

The obviously fake ones were easy to detect, and the less obvious ones took some sleuthing, but the good fakes fly completely under the radar. You literally have no idea how many of the images you see are well-doctored, because you can't tell.

Same for LLMs in the near future (or perhaps already). What will we do when we realize we have no way of distinguishing man from bot on the internet?

  • I'd say the fact that you know there are Photoshop jobs you can't detect is proof enough that we're surviving it. It's not necessarily that we can identify them with 100% accuracy, but that we consider it a possibility with every image we see online.

    > What will we do when we realize we have no way of distinguishing man from bot on the internet?

    The point is that it's a completely different scenario when we're aware this is a potential problem versus not being aware of it at all. Maybe we won't be able to tell 100% of the time, but it's something we'll consider.