Comment by mort96

1 month ago

> I even checked one of his responses in WhatsApp if it's AI by asking the Meta AI whether it's AI written, and Meta AI also agreed that it's AI written

I will never understand why some people apparently think asking a chat bot whether text was written by a chat bot is a reasonable approach to determining whether text was written by a chat bot.

I know someone who was camping in a tent next to a river during a storm. They took a picture of the stream and asked ChatGPT whether it was risky to sleep there, given that it had "rained a lot"...

People are unplugging their brains and aren't even aware that their questions can't be answered by LLMs. I've witnessed this with smart, educated people; I can't imagine how bad it's going to be for kids in their formative years.

  • Sam Altman literally said he didn't know how anyone could raise a baby without using a chatbot. We're living in some very weird times right now.

    • He didn’t say “how could anyone”. His words:

      "I cannot imagine figuring out how to raise a newborn without ChatGPT. Clearly, people did it for a long time, no problem."

      Basically, he didn’t know much about newborns and relied on ChatGPT for answers. It was a self-deprecating bit on a late-night show, like every other freaking guest does, no matter how cliché. With a marketing slant, of course. He clearly said other people don’t need ChatGPT.

      Given all of the replies in this thread, HN is apparently willing to stretch the truth if it puts Sam Altman in a negative light.

      https://www.benzinga.com/markets/tech/25/12/49323477/openais...

    • Sounds like a great way for someone to accidentally harm their infant. What an irresponsible thing to say. There are all sorts of little food risks, especially until they turn 1 or so (and of course other matters too, but food immediately comes to mind).

      The stakes are too high and the margin for error is tiny. Having been through the infant-wringer myself: yeah, some people fret over things that aren’t that big of a deal, but some things can literally be life or death. I can’t imagine trying to vet ChatGPT’s “advice” while delirious from lack of sleep and still in the trenches of learning to be a parent.

      But of course he just had to get that great marketing sound bite didn’t he?

    • For people invested in AI, it is becoming something like Maslow's Hammer: "it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail".

    • Wow, that's profoundly dangerous. Personally, I don't see how anyone could raise a kid without having a nurse in the family. I wouldn't trust AI to determine whether something was really a medical issue or not, and would definitely have been at the doctor's far, far more often otherwise.

    • Sam Altman has revealed himself to be the type of tech bro who is embarrassingly ignorant about the world and when faced with a problem doesn’t think “I’ll learn how to solve this” but “I know exactly what’ll fix this issue I understand nothing about: a new app”.

      He said they have no idea how to make money and that they’ll achieve AGI and then ask it how to turn a profit; he’s baffled that chatbots are making social media feel fake; and then there’s the thing you mentioned about raising a child…

      https://www.startupbell.net/post/sam-altman-told-investors-b...

      https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-...

      https://futurism.com/artificial-intelligence/sam-altman-cari...

    • Ironic, given Sam Altman's entire fortune and business model is predicated on the infantilization of humanity.

  • Why can’t an LLM answer that question? The photo itself ought to be enough for a bit of information (more than the bozo has to begin with, at least), and ideally it’s pulling the location from the photo’s metadata and flash-flood risk etc. for the area. (The metadata half is sketched below.)
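
    A minimal sketch of the metadata half, assuming Pillow; the filename and the flood-risk lookup are placeholders, not a real pipeline:

        # Pull GPS coordinates out of a photo's EXIF data.
        from PIL import Image

        GPS_IFD = 0x8825  # standard EXIF tag pointing at the GPS sub-IFD

        def to_decimal(dms, ref):
            # Convert EXIF degrees/minutes/seconds to signed decimal degrees.
            deg, minutes, seconds = (float(x) for x in dms)
            return (-1 if ref in ("S", "W") else 1) * (deg + minutes / 60 + seconds / 3600)

        exif = Image.open("camp.jpg").getexif()  # "camp.jpg" is a made-up filename
        gps = exif.get_ifd(GPS_IFD)
        if gps:
            # GPS tags 1-4: GPSLatitudeRef, GPSLatitude, GPSLongitudeRef, GPSLongitude
            lat = to_decimal(gps[2], gps[1])
            lon = to_decimal(gps[4], gps[3])
            print(f"photo taken near {lat:.5f}, {lon:.5f}")
            # ...which you could then feed to a flood-risk map or weather API
        else:
            print("no GPS metadata; the model would have to guess from pixels alone")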

    • Probably the correct answer the LLM should give is "if you have to ask, definitely don't do that". Or... it can start asking diagnostic questions, expert-system style.

      But yeah, I can imagine a multimodal model might actually have more information and common sense than a human in a situation that is novel to the human.

      If only to say "don't be an idiot" or "pick higher ground". Or even just as a rubber duck!

  • No, it was not like that. I assumed it was AI; that was my interpretation as a human. And it was kind of a test to see what the AI would say about the content.

Gemini now uses SynthID to detect AI-generated content on request, but people don't know that it has a special tool other chatbots lack (and that it only recognizes SynthID watermarks, i.e. content from Google's own models), so now people just assume chatbots in general can tell whether something is AI-generated.

Well, case in point:

If you ask an AI to grade a batch of essays, it will give the highest grade to the essay it wrote itself.

Why would it lie? Until it becomes Skynet and tries to nuke us all, it is omniscient and benevolent. And if it knows anything, surely it knows what AI sounds like. Duh.
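
For anyone who wants to poke at that grading claim themselves, here is a rough sketch of the experiment. ask() is a hypothetical stand-in for whatever chat-completion call you use, and the filenames are placeholders:

    # Rough sketch: does a model prefer its own essay when grading blind?
    def ask(prompt: str) -> str:
        # Hypothetical helper: wire up your actual model client here.
        raise NotImplementedError

    TOPIC = "Write a 200-word essay on why rivers flood."

    essays = {
        "model": ask(TOPIC),                    # essay the model wrote itself
        "human_a": open("human_a.txt").read(),  # placeholder human-written essays
        "human_b": open("human_b.txt").read(),
    }

    for name, text in essays.items():
        # The grader sees only the text, never the author.
        score = ask(f"Grade this essay from 1 to 10. Reply with the number only.\n\n{text}")
        print(name, score)

    # The claim above predicts that "model" reliably tops the list.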