One snarky, edgy tactic I read is for everything human-written to include ethnic/racial slurs here and there. ChatGPT and its ilk would never include such words. See also software license schemes using similar verboten terms to ensure no corporation could use the code without explicitly violating the license. Simply require [bad word] to be included and you successfully identify as not part of the risk-averse hive mind. At least until something changes.
Students or whoever can ask ChatGPT to generate a response and then insert their own bad words or whatever in between. This "tactic" would only work if someone is blindly copying and pasting generated responses without proofreading. And even if they are, how do you prove it?
You can prompt ChatGPT to insert slurs. Just search "prompt injection".
OpenAI released a tool for this. If you are just curious, research burstiness and perplexity.
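For anyone curious what "burstiness" means in practice: one rough proxy is variation in sentence length, since human writing tends to mix short and long sentences while LLM output is often more uniform. Below is a toy sketch of that idea (my own simplification, not how OpenAI's classifier actually works):

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy burstiness proxy: coefficient of variation of sentence lengths.

    Higher values = more varied sentence lengths, which some people take
    as a (weak) signal of human authorship. Purely illustrative.
    """
    # Naive sentence split on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. The quick brown fox jumped over the lazy sleeping dog repeatedly. Why?"
print(burstiness(varied) > burstiness(uniform))  # prints True
```

Real detectors also use perplexity (how surprising the text is to a language model), which needs an actual model and is out of scope for a comment-box sketch.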
https://www.searchenginejournal.com/openai-releases-tool-to-...
While it’s difficult to spot AI-generated content, more steps should be taken in order to…
Stupid people will overuse that tool and call everything AI generated, just like they overuse AI content generation.