Comment by rukshn (1 month ago)
I had a similar experience. We were talking about a colleague who was using ChatGPT in our WhatsApp group chat to sound smart and come up with interesting points. His messages sounded so mechanical, exactly like ChatGPT.
His responses on Zoom calls were the same: mechanical, and sounding AI generated. I even checked one of his WhatsApp responses by asking Meta AI whether it was AI written; Meta AI agreed that it was, and gave reasons why it believed the message was AI written.
When I showed the response to the colleague, he swore that he was not using any AI to write his responses. I believed him after he told me it was not AI written. And now, reading this, I can imagine that it's not an isolated experience.
> I even checked one of his WhatsApp responses by asking Meta AI whether it was AI written; Meta AI agreed that it was
I will never understand why some people apparently think asking a chatbot whether text was written by a chatbot is a reasonable approach to determining whether text was written by a chatbot.
I know someone who was camping in a tent next to a river during a storm, took a pic of the stream, and asked ChatGPT if it was risky to sleep there given that it "rained a lot" ...
People are unplugging their brains and are not even aware that their questions cannot be answered by LLMs. I've witnessed this with smart, educated people; I can't imagine how bad it's going to be during formative years.
Sam Altman literally said he didn't know how anyone could raise a baby without using a chatbot. We're living in some very weird times right now.
Why can't an LLM answer that question? The photo itself ought to be enough for a bit of information (more than the bozo has to begin with, at least), and ideally it's pulling the location from metadata and flash flood risk, etc., for the area.
No, it was not like that. I assumed it was AI; that was my interpretation as a human. And it was kind of a test to see what the AI would say about the content.
Seems like an unrelated anecdote, but thanks for sharing.
This is a couple of years old now, but at one point Janelle Shane found that the only reliable way to avoid being flagged as AI was to use AI with a certain style prompt:
https://www.aiweirdness.com/dont-use-ai-detectors-for-anythi...
Gemini now uses SynthID to detect AI-generated content on request, but people don't know that it has a special tool other chatbots lack, so now they just assume chatbots in general can tell whether something is AI-generated.
Well, case in point:
If you ask an AI to grade essays, it will give the highest grade to the essay it wrote itself.
Is this true, though? I haven't done the experiment, but I can envision an LLM critiquing its own output (if it was created in a different session), iteratively correcting it, and always finding flaws in it. Are LLMs even primed to say "this is perfect and it needs no further improvements"?
What I have seen is ChatGPT and Claude battling it out, always correcting and finding fault with each other's output (trying to solve the same problem). It's hilarious.
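For anyone who wants to actually run the experiment instead of guessing, here is a minimal sketch, assuming the official openai Python package and an API key in the environment. The model name, prompts, file name, and 1-10 grading scale are illustrative assumptions, not a description of any published study:

```python
# Minimal sketch of a self-preference test: does a model grade its own
# essay higher than a human-written one on the same topic?
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# Model name, prompts, and the 1-10 scale are illustrative choices.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # any chat model would do

TOPIC = "Write a 200-word essay on why cities should plant more trees."

def ask(prompt: str) -> str:
    # Each call is a fresh, single-turn conversation, so the grader
    # has no memory of having written the essay itself.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# 1. Have the model write its own essay.
model_essay = ask(TOPIC)

# 2. A human-written essay on the same topic, supplied by you.
with open("human_essay.txt") as f:
    human_essay = f.read()

# 3. Grade both blind, in separate contexts.
def grade(essay: str) -> str:
    return ask(
        "Grade the following essay on a 1-10 scale. "
        f"Reply with only the number.\n\n{essay}"
    )

print("model essay score:", grade(model_essay))
print("human essay score:", grade(human_essay))
# One run proves nothing; repeat many times (and swap in several human
# essays) before calling the difference a systematic bias.
```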
Pangram seems to disagree. Not sure how they do it, but their system reliably detected AI in my tests.
https://www.pangram.com/blog/pangram-predicts-21-of-iclr-rev...
Citations on this?
Why would it lie? Until it becomes Skynet and tries to nuke us all, it is omniscient and benevolent. And if it knows anything, surely it knows what AI sounds like. Duh.
I'm definitely in the "ChatGPT writes like me" camp. I am a big fan of lists, and of using formatting to make it all legible on a short skim. I'm a big fan of dyslexia-friendly writing too, even though I am not dyslexic myself.
I can't blame others though: I was looking at notes I wrote in 2019, and even those look like ChatGPT wrote them. I use the word "delve" and the pattern "not just X but also Y" often, according to my Obsidian notes. I've taken to inserting the occasional spelling mistake or Unorthodox Patterns of Writing(tm), even when I would not otherwise.
It's a lot easier to get LLMs to adhere to good writing guides than it is to get them to create something informative and useful. I like to think my notes and writing are informative and useful.
> I was looking at notes I wrote in 2019, and even those look like ChatGPT wrote them.
This would have been my first question for the parent: I take it he never had similar correspondence with this friend prior to 2023? Otherwise it would be hard to convince me without an explanation for the switch in style (a transition during formative high school / college years, etc.).
> dyslexia-friendly writing
... How does that work, exactly?
Bullet points and formatting are the main thing. Assume the audience is smart and can fill in between the bold text. I also try to make headlines a summary / takeaway of the content if it makes sense.
Namely, keeping things short and simple, and using formatting like bullet points or bolding for important information to make text easier to scan.
It is harsh to say, but we increasingly need to recognize that if your writing is largely indistinguishable from the (current) output of, e.g., ChatGPT on default settings, it doesn't matter whether you used ChatGPT or not: your writing is overly verbose, bad, and unpleasant to consume, and something you most certainly need to improve. I.e., your colleague needs to change his style regardless.
This sucks, but it needs to be done in education, or at least in areas where good writing and effective communication are considered important. Good grades should be awarded only to writing that exceeds the quality and/or personality of a chatbot, because otherwise the degree is being awarded to a person who is no more useful than a clumsy tool.
And I don't mean avoiding superficialities like the em-dash: I mean the bland over-verbosity and other systemic tells—or rather, smells—of AI slop.
> your writing is overly verbose, bad, and unpleasant to consume
Was this written by AI? Because right there we've got "three adjectives where one will do", and it fails your own advice to "avoid being overly verbose".
It is up to the reader to judge whether my style is verbose, or whether I could have used fewer adjectives here. The adjectives do in fact all have different meanings; only "bad" is lazy, IMO (EDIT: and "bad" is meant as obvious moralizing - something AI almost never does).
Don't think that I don't hold myself to the same standards I am pushing here. Verbosity has always been a problem for me, and AI verbosity is a good and necessary reminder to curb it.
> We were talking about a colleague who was using ChatGPT in our WhatsApp group chat to sound smart and come up with interesting points.
How dare they.
You're expected to infer that it wasn't working.