Comment by mort96
1 month ago
> I even checked one of his responses in WhatsApp if it's AI by asking the Meta AI whether it's AI written, and Meta AI also agreed that it's AI written
I will never understand why some people apparently think asking a chat bot whether text was written by a chat bot is a reasonable approach to determining whether text was written by a chat bot.
I know someone who was camping in a tent next to a river during a storm, took a pic of the stream, and asked ChatGPT if it was risky to sleep there given that it "rained a lot"...
People are unplugging their brains and are not even aware that their questions cannot be answered by LLMs. I've witnessed this with smart, educated people; I can't imagine how bad it's going to be during formative years.
Sam Altman literally said he didn't know how anyone could raise a baby without using a chatbot. We're living in some very weird times right now.
He didn’t say “how could anyone”. His words:
"I cannot imagine figuring out how to raise a newborn without ChatGPT. Clearly, people did it for a long time, no problem."
Basically, he didn’t know much about newborns and relied on ChatGPT for answers. It was a self-deprecating joke on a late-night show, like every other freaking guest would make, no matter how cliché. With a marketing slant, of course. He clearly said other people don’t need ChatGPT.
Given all of the replies in this thread, HN is apparently willing to stretch the truth if it casts Sam Altman in a negative light.
https://www.benzinga.com/markets/tech/25/12/49323477/openais...
We should refrain from the common mistake of anthropomorphizing Sam Altman.
Sounds like a great way for someone to accidentally harm their infant. What an irresponsible thing to say. There are all sorts of little food risks, especially until they turn 1 or so (and of course other matters too, but food immediately comes to mind).
The stakes are too high and the margin for error is so low. Having been through the infant-wringer myself, yeah, some people fret over things that aren’t that big of a deal, but some things can literally be life or death. I can’t imagine trying to vet ChatGPT’s “advice” while delirious from lack of sleep and still in the trenches of learning to be a parent.
But of course he just had to get that great marketing sound bite didn’t he?
For people invested in AI it is becoming something like Maslow's Hammer - "it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail"
Wow, that's profoundly dangerous. Personally, I don't see how anyone could raise a kid without having a nurse in the family. I wouldn't trust AI to determine if something were really a medical issue or not, and would definitely have been at the doctors far, far more often otherwise.
To be fair he can't imagine many other aspects of what it is like to be a normal human being.
Sam Altman has revealed himself to be the type of tech bro who is embarrassingly ignorant about the world and when faced with a problem doesn’t think “I’ll learn how to solve this” but “I know exactly what’ll fix this issue I understand nothing about: a new app”.
He said they have no idea how to make money, that they’ll achieve AGI then ask it how to profit; he’s baffled that chatbots are making social media feel fake; the thing you mentioned with raising a child…
https://www.startupbell.net/post/sam-altman-told-investors-b...
https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-...
https://futurism.com/artificial-intelligence/sam-altman-cari...
Ironic, given Sam Altman's entire fortune and business model is predicated on the infantilization of humanity.
Why can’t an LLM answer that question? The photo itself ought to be enough for a bit of information (more than the bozo has to begin with, at least), and ideally it's pulling location from the metadata and flash-flood risk etc. for the area.
Probably the correct answer the LLM should give is "if you have to ask, definitely don't do that". Or... it can start asking diagnostic questions, expert-system style.
But yeah, I can imagine a multi-modal model actually might have more information and common sense than a human in a (for them) novel situation.
If only to say "don't be an idiot" or "pick higher ground". Or even just as a rubber duck!
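The metadata step, at least, is trivial in principle: EXIF tags store GPS coordinates as degree/minute/second rationals plus a hemisphere reference, and converting those to decimal degrees (something a flood-risk lookup would need) is a few lines of arithmetic. A minimal sketch — the tag values below are made up for illustration, and actually reading raw EXIF out of a JPEG would need a library like Pillow:

```python
# Sketch of the "pull location from metadata" step. EXIF GPSLatitude /
# GPSLongitude hold (degrees, minutes, seconds); GPSLatitudeRef /
# GPSLongitudeRef hold the hemisphere ("N"/"S"/"E"/"W").

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style DMS coordinates to signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # South and West are negative in decimal-degree convention
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical tag values from a riverside photo
lat = dms_to_decimal(46, 12, 30.0, "N")
lon = dms_to_decimal(7, 5, 15.0, "E")
print(round(lat, 5), round(lon, 5))  # → 46.20833 7.0875
```

From there, feeding the coordinates to a flood-warning service is an ordinary geocoded API lookup, not anything the model has to reason about.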
No, it was not like that. I assumed it was AI; that was my interpretation as a human. And it was kind of a test to see what the AI would say about the content.
seems like an unrelated anecdote, but thanks for sharing.
This is a couple of years old now, but at one point Janelle Shane found that the only reliable way to avoid being flagged as AI was to use AI with a certain style prompt.
https://www.aiweirdness.com/dont-use-ai-detectors-for-anythi...
Gemini now uses SynthID to detect AI-generated content on request, and people don't know that it has a special tool that other chatbots don't, so now people just think chatbots can tell whether something is AI-generated.
Well, case in point:
If you ask an AI to grade an essay, it will grade the essay highest that it wrote itself.
Is this true, though? I haven't done the experiment, but I can envision an LLM critiquing its own output (if it was created in a different session), iteratively correcting it, and always finding flaws in it. Are LLMs even primed to say "this is perfect and needs no further improvement"?
What I have seen is ChatGPT and Claude battling it out, always correcting and finding fault with each other's output (trying to solve the same problem). It's hilarious.
There is a study in German that came to this conclusion; there's an English news article discussing it at https://heise.de/-10222370
Pangram seems to disagree. Not sure how they do it, but their system reliably detected AI in my tests.
https://www.pangram.com/blog/pangram-predicts-21-of-iclr-rev...
Citations on this?
https://arxiv.org/abs/2412.06651 (in German, hopefully machine translation works well)
English article:
https://www.heise.de/en/news/38C3-AI-tools-must-be-evaluated...
If you speak German, here is their talk from 38c3: https://media.ccc.de/v/38c3-chatbots-im-schulunterricht
Why would it lie? Until it becomes Skynet and tries to nuke us all, it is omniscient and benevolent. And if it knows anything, surely it knows what AI sounds like. Duh.