Comment by noduerme

3 years ago

An LLM won't tell you that the authors obfuscated it because they don't know what the fuck they did. You need a human for that.

I haven't tried other models, but if you prompt a recent ChatGPT with "academic style" and ask it to "review and provide feedback" on a paragraph you wrote, it will reword it using the fanciest, most overselling words it can find. I used to like it for improving grammar and style, but in later iterations ChatGPT started writing garbage...

I'm not sure if that is because of training, feedback from users, or an attempt to make usage of LLMs obvious to teachers.