Comment by sejje
3 days ago
LLM output might be harder to spot when it's mostly commands to drive the browser.
I often interact with the web all day and don't write any text a human could evaluate.
3 days ago

> LLM output might be harder to spot when it's mostly commands to drive the browser.

> I often interact with the web all day and don't write any text a human could evaluate.
Perhaps, but that's somewhat off topic since that's not what Turing's thought experiment was about.
However, I'd have to guess that, given a reasonable amount of data, an LLM vs. a human interacting with websites would be fairly easy to spot, since the LLM would be more purposeful - it'd be trying to fulfill a task, while a human may be curious, distracted by ads, put off by slow response times, and so on.
I don't think it's a very interesting question whether LLMs can sometimes generate output indistinguishable from a human's, since that is exactly what they were trained to do - to mimic human-generated training samples. Apropos of a Turing test, the question would be: can I tell this is not a human, given a reasonable amount of time to probe it in any way I care? But I think there is an unspoken assumption that the person administering the test is qualified to do so (else the result isn't about AGI-ability, but rather about test-administrator ability).
> an LLM vs. a human interacting with websites would be fairly easy to spot, since the LLM would be more purposeful - it'd be trying to fulfill a task, while a human may be curious, distracted by ads, put off by slow response times, and so on.
Even before modern LLMs, some scraper detectors would look for telltale signs like instant clicks and the absence of mouse movement, and some scrapers would counter by incorporating random delays, random mouse movements, and so on.
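For illustration, the evasion side of that arms race is only a few lines, e.g. in Python with Selenium. This is a rough sketch: the URL is a placeholder, and all the timing and offset ranges are arbitrary assumptions, not anyone's actual detection thresholds.

    import random
    import time
    from selenium import webdriver
    from selenium.webdriver.common.action_chains import ActionChains

    driver = webdriver.Firefox()
    driver.get("https://example.com")  # placeholder URL

    # Wander the cursor through a few small random offsets instead of
    # jumping straight to a target; offsets are cumulative and kept small
    # so the pointer stays inside the viewport.
    actions = ActionChains(driver)
    for _ in range(random.randint(2, 5)):
        actions.move_by_offset(random.randint(5, 40), random.randint(5, 40))
        actions.pause(random.uniform(0.1, 0.6))  # jitter between moves
    actions.perform()

    # Human-ish dwell time before the next action, rather than an
    # instant click.
    time.sleep(random.uniform(1.0, 4.0))
    driver.quit()

Of course, jittered timing alone wouldn't cut it today - detectors also fingerprint things like headless browsers and the navigator.webdriver flag.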