Comment by panarky

2 months ago

When someone says "AIs aren't really thinking" because AIs don't think like people do, what I hear is "Airplanes aren't really flying" because airplanes don't fly like birds do.

This really shows how imprecise the term 'thinking' is here. In this sense, any predictive probabilistic black-box model could be termed 'thinking', particularly when juxtaposed against something as concrete as flight, which we have modelled extremely accurately.

If I shake some dice in a cup are they thinking about what number they’ll reveal when I throw them?

That's the fallacy of denying the antecedent. You are inferring from the fact that airplanes really fly that AIs really think, but that's not a logically valid inference.

  • "Observing a common (potential) failure mode"

    That's not what we have here.

    "It is only a fallacy if you "P, therefore C" which GP is not (at least to my eye) doing."

    Some people are willfully blind.

  • Observing a common (potential) failure mode is not equivalent to asserting a logical inference. It is only a fallacy if you "P, therefore C" which GP is not (at least to my eye) doing.

Whenever someone paraphrases a folksy aphorism about airplanes and birds, or fish and submarines, I suppose I'm meant to rebut with a folksy aphorism like:

"A.I. and humans are as different as chalk and cheese."

As if aphorisms were a good way to think about this topic?