I noticed that too, but I suspect that people's GPT-meters may be a bit too hair-trigger these days.
Idea for a study: take a bunch of GPT-sounding snippets from a verified pre-LLM corpus, along with an equal number of typical LLM-generated ones. Randomize them and ask test subjects to tell them apart. I suspect it would be a bloodbath. (Random chance at best, or heavily biased toward false positives.)
Author: https://www.anatolianarchaeology.net/author/oguz/
They may be using AI for some translation work, but I think they are real.