Comment by inejge
4 months ago
I noticed that too, but I suspect that people's GPT-meters may be a bit too hair-trigger these days.
Idea for a study: take a bunch of GPT-sounding snippets from a verified pre-LLM corpus, along with an equal number of typical LLM-generated ones. Randomize and ask test subjects to tell them apart. I suspect it would be a bloodbath. (Random chance at best, or heavily biased toward false positives.)
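For concreteness, a rough sketch of how such a blind test could be run and scored against chance. Everything here is illustrative: the snippet pools are placeholders, and the normal-approximation binomial check is just one simple way to ask whether a subject beats coin-flipping.

    # Blind-test sketch (all names and data are placeholders): mix
    # pre-LLM "GPT-sounding" snippets with LLM-generated ones, shuffle,
    # collect guesses, and compare accuracy to chance.
    import random
    from statistics import NormalDist

    # In a real study these would come from a verified pre-LLM corpus
    # and a set of actual LLM outputs.
    human_snippets = ["human snippet 1", "human snippet 2", "human snippet 3"]
    llm_snippets = ["llm snippet 1", "llm snippet 2", "llm snippet 3"]

    def run_trial():
        items = [(s, "human") for s in human_snippets] + \
                [(s, "llm") for s in llm_snippets]
        random.shuffle(items)

        correct = 0
        for text, truth in items:
            print("\n" + text)
            guess = input("human or llm? ").strip().lower()
            if guess == truth:
                correct += 1

        n = len(items)
        # Normal approximation to a one-sided binomial test against
        # p = 0.5: is the subject doing better than guessing?
        p0 = 0.5
        z = (correct - n * p0) / (n * p0 * (1 - p0)) ** 0.5
        p_value = 1 - NormalDist().cdf(z)
        print(f"\n{correct}/{n} correct, z = {z:.2f}, one-sided p ~ {p_value:.3f}")

    if __name__ == "__main__":
        run_trial()

With a handful of items per subject the test is underpowered, so a real version would want many more snippets and subjects, plus a record of which class each false positive lands on to check the "biased toward false positives" hunch.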