Comment by dirkc

6 days ago

People talk about AI getting things wrong all the time, why is it "so clearly irrational" to be doubtful of a recipe that might include ingredients that can make you sick?

Because I hope that someone whose hands were required to assemble the recipe didn't blindly add ingredients like "bleach" if the AI happened to hallucinate them.

  • A naive hope perhaps, but this ignores the risk of LLMs just creating a bad recipe based on the blind combination of various recipes in their training data.

    • As the parent comment said, the people seemed to be enjoying the food otherwise, so the LLM didn't create an unpalatable combination. And I can't think of any combination of edible, harmless ingredients that would combine into something harmful (in reasonable amounts).

      2 replies →

  • Your personal hope aside, why is it irrational for them?

    • Because the implication is that a random human-generated recipe from wherever carries any less risk than the generated one. People who would trust a 'bleach recipe' from AI would also trust it from a TikTok video or whatever.

      Edit: is it irrational to think this way when someone prepares your food?

      1 reply →

Let's take a second to think about the threat vectors here. The two obvious ones I can think of are "AI hallucinates and tells you to put non-food into the food" and "AI hallucinates and gives you unsafe prep instructions" (e.g. "heat the chicken to an internal temperature of 110 degrees"). For both of those, it's not clear why "random recipe from an internet blog" is safer than something the AI generates. At some level, if someone is preparing your food, you need to trust that they know how to prepare food, no matter where they're getting their instructions from.

  • People who do not understand or even use AI are not in a position to even begin "thinking about threat vectors". That isn't how they've come to their worldview, at all.

    • Yeah, it's ideological, like a religion as someone else mentioned, and then justified ex post facto.

  • Take more than a second! For starters, this isn't the only alternative source of recipes!

    > not clear why "random recipe from an internet blog" is safer

    So maybe those folks would've reacted similarly to a literal random source.

    But also it is pretty clear, because it's far easier for the tool to make up completely random stuff, with no guardrails and without anyone even noticing. Hallucination is a built-in feature of the tool.

  • Yeah, but I would trust a human writing a blog not to suggest heating chicken to 110F, because the human writing the blog understands that they are taking responsibility for that recipe... The LLM doesn't have a clue about responsibility, except to regurgitate feel-good snippets about responsibility.

    • Wild takes in this thread. The copy- and blog-writing industry is just random Fiverr gigs or hires from countries with cheap labour, pumping up SEO rankings.

      Everyone grew up understanding "never trust random internet content 100%", and now we're trying to say that AI has to be 100% reliable.

      1 reply →

Because it assumes the person actually making the food has no common sense?

Someone once tried to feed me dinner from a recipe they found on the internet. I punched their lights out and then called the cops.

People get things wrong all the time as well, so I wouldn't trust them either.

  • People get things wrong in a different, more observable and predictable way. Sure, we are easily tricked dummies, and we can't always know if a human is right or wrong, but our human-trust heuristics are highly developed. Our AI-trust heuristics don't exist.

    • I mean, I have had people serve me expired food and chicken that was half raw. The latter I could observe; the former I couldn't so easily. Both could have made me sick.

      1 reply →