
Comment by keybored

6 months ago

I thought that AGI covered that. AGI to my mind doesn’t have to surpass human thinking. It just has to be categorically the same as it (it can be less powerful, or more). It has to be general. A chess machine in a box which can’t do anything else is not general.[1]

I’ve always been fine with calling things AI even though they are all jumbles of stats nonsense that wouldn’t be able to put their own pants on. Does a submarine swim? No, but that’s just the metaphor that the most vocal adherents are wedded to (at the hips). The metaphor doesn’t harm me. And to argue against it is like Chomsky trying to tell programming language designers that programming languages being languages is just a metaphor.

[1] EDIT: In other words it can be on the level of a crow. Or a dog. Just something general. Something that has some animalistic-like intelligence.

I think the point of the Wikipedia article is that human categories are flexible, and they get redefined to suit human ego needs regardless of what's happening in the objective outside world.

Say that you have a closed system that largely operates without human intervention - for example, the current ad fraud mess where you have bots pretending to be humans that don't actually exist to inflate ad counts, all of which gets ranked higher by the ML ad models because it inflates their engagement numbers, but it's all to sell products that don't really work anyway so that the company can post better revenue numbers to Wall Street and unload the shares on prop trading bots and index funds that are all investing algorithmically anyway. On some level, this is a form of "intelligence" even though it doesn't put pants on. For that matter, many human societies don't put pants on, nor do my not-quite-socialized preschool kids. It's only the weight of our collective upbringing, coupled with a desire to feel intelligent, that leads us to equate putting pants on with intelligence. Plenty of people don't put pants on and consider themselves intelligent as well. And the complexity of what computers actually do do is often well beyond the complexity of what humans do.

I often like to flip the concept of "artificial intelligence" on its head and instead think about "natural stupidity". Sure, the hot AI technologies of the moment are basically just massive matrix computations that statistically predict what's likely to come next given all the training data they've seen before. Humans are also basically just massive neural networks that respond to stimulus and reward given all the training data they've seen before. You can make very useful predictions about, say, what is going to get a human to click on a link or open their wallet using these AI technologies. And since we too are relatively predictable human machines that are focused on material wealth and having enough money to get others to satisfy our emotions, this is a very useful asset to have.
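For what it's worth, the "statistically predict what's likely to come next given all the training data" framing can be reduced to a toy sketch. This is just a bigram frequency counter, nothing like a real transformer, and the corpus and function names are made up for illustration:

```python
# Toy illustration of "predict what's likely to come next given training
# data": a bigram model that counts which word follows which.
from collections import Counter, defaultdict

corpus = "the dog sat on the mat the dog ran".split()

# Tally, for each word, the words observed immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "dog" follows "the" twice, "mat" once -> "dog"
```

The hot AI technologies of the moment do the same thing in spirit, just with billions of learned parameters instead of a lookup table of counts.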

  • > I think the point of the Wikipedia article is that human categories are flexible, and they get redefined to suit human ego needs regardless of what's happening in the objective outside world.

    I know what the point is. Of course computer scientists who make AI (whatever that means) want to be known for making Intelligence. And they get cross when the marvel of yesterday becomes a humdrum utility.

    As you can see, this part cuts both ways:

    > > and they get redefined to suit human ego needs

    > Say that you have a closed system that largely operates without human intervention - for example, the current ad fraud mess where you have bots pretending to be humans that don't actually exist to inflate ad counts, all of which gets ranked higher by the ML ad models because it inflates their engagement numbers, but it's all to sell products that don't really work anyway so that the company can post better revenue numbers to Wall Street and unload the shares on prop trading bots and index funds that are all investing algorithmically anyway. On some level, this is a form of "intelligence" even though it doesn't put pants on. For that matter, many human societies don't put pants on, nor do my not-quite-socialized preschool kids. It's only the weight of our collective upbringing, coupled with a desire to feel intelligent, that leads us to equate putting pants on with intelligence. Plenty of people don't put pants on and consider themselves intelligent as well. And the complexity of what computers actually do do is often well beyond the complexity of what humans do.

    I bet your AI of choice could write a thesis on how putting pants on is a stupid social construct. Yet if it is incapable of actually doing it, that thesis would just be a bunch of hot air.

    > I often like to flip the concept of "artificial intelligence" on its head and instead think about "natural stupidity".

    This philosophy tends to go with the territory.

    > Sure, the hot AI technologies of the moment are basically just massive matrix computations that statistically predict what's likely to come next given all the training data they've seen before. Humans are also basically just massive neural networks that respond to stimulus and reward given all the training data they've seen before.

    “Basically” is doing some heavy lifting here.

    This is obviously false. We would have gone extinct pretty much immediately if we had to tediously train ourselves from scratch. We have instincts as well.

    “But that’s just built-in training.” Okay, now we’re back to it not basically being stimulus-responses to training data they’ve seen before. So what’s the point of the comparison, when it’s not basically just that?

    > You can make very useful predictions about, say, what is going to get a human to click on a link or open their wallet using these AI technologies. And since we too are relatively predictable human machines that are focused on material wealth and having enough money to get others to satisfy our emotions, this is a very useful asset to have.

    Yes. Humans have wants and needs and act in ways consistent with cause and effect. For example, the clueless “consumer subject” up against billions of dollars of marketing money and AI owned by those same marketing departments.

    Amazingly: Humans are what you allow them to be.

    We could treat all humans according to Skinner-box theory: act as if Skinner’s stimulus-response theories are correct and only allow people to act inside that framework. That would (again, amazingly) confirm that Skinner was right all along.

    Any organism can express itself maximally only in a maximally free setting. A free dog is a dog; a chained human might only be a dog.

    The only difference is that humans have words that they can express through their mouthholes about what kind of future they want: whether they want to be humans (i.e. human ego needs, sigh) or the natural stupidity subjects of the artificial intelligence.

    Or they don’t care because they don’t think AI will ever be able to put its pants on.