Comment by cactusplant7374

16 hours ago

Unless you've discovered the secret sauce, LLM comments are very obvious. Even Altman revealed that they focused on coding at the expense of writing.

The obvious ones are the ones you notice

  • LLMs are not good at writing. If they were we would have entire libraries of new, amazing literature.

    • Exactly, they aren't good at creating new material. But many discussions in comment sections are simply regurgitations of existing material, which they are good at rearranging. Genuinely novel discussions in places like this are actually very rare, as many comment sections are simply people who already know informing those who don't. I'm doing that right now, funnily enough.

      1 reply →

With the current batch of SOTA models, it is not hard to prompt a model to pass the sniff test on social media forums. If you don't believe me, try it.

All you really need to do is give it some guidelines of a style to follow and styles to avoid. There's also a bunch of skills people have already written to accomplish this.
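A minimal sketch of what "guidelines of a style to follow and styles to avoid" might look like when assembled into a system prompt. The specific guideline lists and the `build_style_prompt` helper are hypothetical illustrations, not anything from the comment above:

```python
# Hypothetical sketch: assembling style guidelines into a system prompt
# intended to steer a model away from common "LLM tells" in casual writing.
# The guideline lists below are illustrative examples, not a known recipe.
FOLLOW = [
    "short, direct sentences in a conversational register",
    "occasional informality and mild opinion",
]
AVOID = [
    "'it's not X, it's Y' constructions",
    "bullet-point summaries of a casual reply",
    "hedging boilerplate like 'it's worth noting that'",
]

def build_style_prompt(follow, avoid):
    """Join follow/avoid guideline lists into one system prompt string."""
    lines = ["You are drafting a forum comment."]
    lines.append("Follow these style guidelines:")
    lines += [f"- {g}" for g in follow]
    lines.append("Avoid:")
    lines += [f"- {g}" for g in avoid]
    return "\n".join(lines)

prompt = build_style_prompt(FOLLOW, AVOID)
print(prompt)
```

The resulting string would then be passed as the system message to whatever chat API is in use.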

I have worked with LLMs for a couple years at a very non-technical level and it was not that difficult to give it proper prompting and reference material.

You are probably reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot things, but the stuff you don't spot is the stuff you don't spot.

People who fancy themselves good LLM content detectors just end up accusing everything they don't like of being LLM content.

The only thing worse than a slop comment is the people who complain about it incessantly. I'm convinced it's become a new expression of a mental illness.

  • The main thing I suspect of being LLM written is the sort of LinkedIn style: very short sentences, overly focused on sort of… making an impact on the user. But that’s also how a certain type of bad human writer writes. So in the end, I’m not sure I know if anything in particular was written by an LLM.

    I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.

    • It’s the distilled mediocrity of the statements. Never venturing beyond a 10% margin of what you would get if you sampled the opinions of 1,000 people who underwent jury selection by west coast liberals.

  • A mere opinion is not mental illness.

    • Was that written by an LLM? It isn't that it's a mere opinion; it's that when every word out there has to be scrutinized for the possibility that an AI produced it instead of a human intelligence, things get pathological. Am I an LLM with the right prompts set up to respond this way? I mean, I know I'm not, but everyone else out there is just going to have to trust me that I'm not.

    • I wasn't suggesting you have a mental illness for having an opinion.

      Rather, I was commenting that just as bad as generated content, if not worse, is every thread where the top comment is an accusation and the ensuing witch hunt.

      So, no, having an opinion is not a mental illness. Feeling compelled to call it out and discuss it on everything one reads may just be.

      4 replies →