Comment by panarky

3 days ago

When someone says "AIs aren't really thinking" because AIs don't think like people do, what I hear is "Airplanes aren't really flying" because airplanes don't fly like birds do.

This really shows how imprecise the term 'thinking' is here. In this sense, any predictive, probabilistic black-box model could be termed 'thinking', particularly when juxtaposed against something as concrete as flight, which we have modelled extremely accurately.

If I shake some dice in a cup, are they thinking about what number they'll reveal when I throw them?

  • If I take a plane apart and throw all the parts off a cliff, will they achieve sustained flight?

    If I throw some brain cells into a cup alongside the dice, will they think about the outcome any more than the dice alone?

  • That depends: if you explain the rules of the game you're playing and give the dice a goal of winning, do they adjust the numbers they reveal according to those rules?

    If so, then yes, they're thinking.

    • The rules of the game are to reveal two independent numbers in the range [1,6].

Whenever someone paraphrases a folksy aphorism about airplanes and birds, or fish and submarines, I suppose I'm meant to rebut with a folksy aphorism of my own, like:

"A.I. and humans are as different as chalk and cheese."

As if aphorisms are a good way to think about this topic?

That's the fallacy of denying the antecedent. You are inferring from the fact that airplanes really fly that AIs really think, but that's not a logically valid inference.

  • Observing a common (potential) failure mode is not equivalent to asserting a logical inference. It is only a fallacy if you argue "P, therefore C", which GP is (at least to my eye) not doing.