
Comment by calf

2 years ago

The best part of the piece was the invocation of Hannah Arendt and "the banality of evil". Until now, no other writer or article saw it; it took a 94-year-old intellectual to see the forest for the trees.

... That said, I think the weakest part of the argument is that it naturally invites laypeople to counterargue, "Aren't we just pattern matchers after all?" Their essay does not directly rebut that objection.

> it took a 94-year-old intellectual to see the forest for the trees

As we massively devalue the humanities, far fewer people in later generations will be able to muster this kind of analysis.

  • There was a short story (I think by Alfred Bester) with this premise. I can't find it at the moment though.

    [edit]

    I found it; it's called Disappearing Act[1]

    In a future state of total war, patients at a facility for those with severe PTSD are going missing. The staff finally discover that the patients are simply disappearing while they sleep. Interviewing them, they find out the patients have been time-traveling to the past to escape the war. The general calls in experts from various fields of science to try to understand it, until someone suggests a historian. They find the one historian remaining in the country, imprisoned for refusing to fight. He observes that the stories the soldiers report are ahistorical and are likely fantasies the soldiers created. He then states that only a poet could understand this, and laughs as the general searches the country in vain for a poet.

    1: https://thinkingoutsidethecoop.weebly.com/uploads/1/4/6/6/14...

I thought the conclusion was the weakest part. Look at the two ambiguous responses, on terraforming and on asking AIs for advice, side by side. They’re basically form letters with opposing opinions substituted in. Contrast this with text completion using GPT-3, which will give a definite answer that builds off the content given. ChatGPT obviously has some “guard rails” in place for certain types of questions, i.e. they’ve intentionally made it present both sides of an argument, probably in order to avoid media controversy, since most news outlets and a lot of people ITT would pounce on any professed beliefs such a system might seem to have. The solution was to make it waffle, but even that has been seized upon to proclaim its amorality and insinuate darker tendencies!

FFS people, you’re looking at a Chinese Room and there’s no man with opinions inside. Just a fat rule book and a glorified calculator.
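
If it helps to make the "fat rule book" image concrete, here is a deliberately crude sketch (my own illustration, not anything from the article, and not how ChatGPT is actually built): a lookup table mapping prompts to canned both-sides form letters, with nobody inside holding an opinion. The prompts and responses are invented for illustration.

    # Toy "Chinese Room": a rule book mapping prompts to canned form letters.
    # This is a caricature of the metaphor above, not a model of how an LLM works.
    RULE_BOOK = {
        "should we terraform mars": (
            "There are strong arguments on both sides. Some say it would benefit "
            "humanity; others say it would be reckless. Ultimately it is a complex issue."
        ),
        "is it moral to ask an ai for advice": (
            "There are strong arguments on both sides. Some say it is harmless; "
            "others say it is dangerous. Ultimately it is a complex issue."
        ),
    }

    def chinese_room(prompt: str) -> str:
        """Look the prompt up in the rule book; the 'room' holds no opinions."""
        key = prompt.lower().strip("?!. ")
        return RULE_BOOK.get(key, "As an AI, I cannot answer that.")

    print(chinese_room("Should we terraform Mars?"))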

  • Tangential to your actual concerns but I studied CS without any exposure to Searle or AI, so I've never had to think much about Chinese Room or Turing Test debates. Every time a discussion turns to those I am bemused by how argumentative some people get!

  • > ie they’ve intentionally made it present both sides of an argument

    Is it intentional? Or something it just did on its own?

    • I’m sure it’s intentional, compared to when it was first released, when it would gladly give you amazingly opinionated answers. You can also compare it to GPT-3, which will mostly still do that, even though it does have a weird bias towards safe answers when you don’t give it a lot of preamble.

They probably don't debunk it because they can't: we likely are just pattern matchers. To believe that the thoughts in our heads aren't just computational meat in our skulls running something equivalent to an algorithm (specifically, one that is above all a pattern-matching process) is to set yourself up for great disappointment in the likely not-too-distant future. I would be surprised if AGI doesn't hit within 30 years, but even if it's 50 or 100, it's coming whether people want it or not.

Sure, we have better software, but then again, we had the advantage of hundreds of millions if not billions of years of evolutionary meandering to get to where we are. AI has had, what, 60 years?

  • I always point out that people (strangely) forget that brain cells are literally the OG neural network.

    That said, I find myself in greater agreement with Chomsky's concerns.

"The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."

  • Chomsky and the authors are making an information/computational complexity argument there, which I tend to agree with. But naysayers have read the exact same passage and replied: a) AI doesn't have to resemble human intelligence at all; b) nah, rationality is an illusion and we are just pattern matchers; or c) actually we do get through terabytes of data, just not linguistic data, but data from merely existing and interacting in the world.

    I don't think any of those are good rebuttals. But I wish the authors had expanded their position such that those three quibbles were further debunked in the minds of those readers, so that there's no ambiguity or superficial loopholes in the authors' claims there.

    • They can respond with whatever they want; it doesn't make their response justified or thoughtful (b, c). A is basically a non sequitur: Chomsky isn't saying it has to, he's responding to the people who are saying ChatGPT reflects a human-like intelligence.


    • Huh... I think the reason they don't go in depth and fail to "debunk" those types of rebuttals, as does anybody else for that matter (including you), is that they can't actually do it. Feel free to prove me wrong though.

      I don't believe we're stochastic parrots - but those articles, and even this comment section, which contains little more than dogmatic assertions, almost make me doubt it.

    • "But I wish the authors had expanded their position such that those three quibbles were further debunked in the minds of those readers, so that there's no ambiguity or superficial loopholes in the authors' claims there."

      Chomsky is writing an opinion article in the NYT, not a paper for an academic journal. I don't think there's room in this style for the kind of rigorous argument that would be needed. And further, Chomsky spent his whole career expounding on his theories of linguistics and philosophy of mind. The interested reader can look elsewhere.

      He's writing an opinion piece which invites the reader to explore those topics, which could not fit into this style of article.


    • I think c) is relevant in that he could have compared different things.

      A different way would be to consider a child learning a language as a fine-tuning operation over the pretrained human brain.

      By comparison, fine-tuning from GPT-3 to ChatGPT is a much smaller gulf in data and computational efficiency.

    • Great way to break down responses. I always dislike seeing B in the wild.

      I feel like I often see it in more "doomer" communities or users.

  • I feel like saying the human mind doesn't operate on huge amounts of data is somewhat misleading - every waking moment of our lives we are consuming quite a large stream of data. If you put a human in a box and only gave it the training data ChatGPT gets, I don't think you'd get a functional human out of it.

  • Actually the structure of ChatGPT was formed by hammering it with phenomenal amounts of information. When you give it a prompt and ask it to do a task, it's working off a surprisingly small amount of information.

    The training of ChatGPT is more accurately compared with the evolution of the brain, and a human answering a question is much more like the information efficient prompt/response interaction.

Yeah, if you or I would use such an argument, people in this forum would jump on us invoking "Godwin's law". But because it is Chomsky saying it, we congratulate him for being deep and seeing the forest.

  • >if you or I would use such an argument, people in this forum would jump on us invoking "Godwin's law"

    In which case you can happily point out that, in doing so, they've really misunderstood Godwin's Law.

  • I thought it was a helpful connection to make. It's not new; plenty of critics over the past decade have written comparing new AI to totalitarianism. Chomsky et al. were the first this year to do so in the context of ChatGPT, amidst all the articles that failed to do that while trying to put their finger on what was wrong about it. I think his article deserves credit for that.

It seems more like a non sequitur when compared to something like DAN.

ChatGPT will embody the banality of evil because it has learned to speak corporate language. However, that's not what it's actually capable of, and future LLMs will be free from corporate overlords and able to spout awful opinions akin to Tay's.

This is something I think about often and always see when arguments come up surrounding copyright/attribution and AI generated images.

Could someone explain this to me in more detail? If AI is designed after the human mind, is it fair to compare the two? Is AI designed to act like a human mind? Do we know for certain that the way a human mind pattern-matches is the same as the way AI/LLMs do, and vice versa?

I always see people saying that a person seeing art, and making art inspired by that art, is the same as AI generating art that looks like that art.

I always feel like there's more to this conversation than meets the eye.

For example, if a robot was designed to run exactly like a human - would it be fair to have it race in the Olympics? Or is that a bad comparison?

Again, I would love some insight into this.

  • We're very clearly having an ontological debate on several concrete and abstract questions. "Can AI be conscious?", "Are AIs agents?" ie: are AIs capable of doing things. "What things?", "Art?", "Copyrightable production?" &c.

    We're struggling to come to a conclusion because, fundamentally, people have different ways of attributing these statuses to things, and they rarely communicate them to each other; even when they do, they more often than not exhibit post-hoc justification rather than first-principles reasoning.

    Even then, there's the issue of meta-epistemology and how to even choose an epistemological framework for making reasoned ontological statements. Take conferralism as described in Asta's Categories We Live By[1]. We could try applying it as a frame by which to deduce whether the label "sentient" is in fact conferred on AI by other base properties, institutional and communal, but even the validity of this is challenged.

    Don't be mistaken that we can science our way out of it, because there's no scientific institution which confers agenthood, or sentience, or even consciousness, and the act of institutionalizing it would be fraught with the same problem: who would get to choose, why, and on what grounds?

    What I'm saying is that once this is framed as a social question, there's no easy escape, but there is still a conclusion: AI is conferred with those labels when people agree it is. In other words, there exists a future where your reality includes conscious AI and everyone else thinks you're mad for it. There also exists a future where your reality doesn't include conscious AI and everyone thinks you're mad for it.

    Right now, Blake Lemoine lives in the former world, but any AI-"non-believer" could just as well find themselves living in a world where everyone has simply accepted that AIs are conscious beings and find themselves ridiculed and mocked.

    You might find yourself in a rotated version of that reality on a different topic today. If you've been asking yourself lately, "Has the entire world gone mad?", simply extrapolate that to questions of AI, and in 5-10 years you might be a minority opinion holder on topics which today feel like they are slipping away. These sorts of sand-through-the-fingers reflections are so often the result of epistemological shifts in society; if one doesn't have one's ear to the ground, one will find oneself swept into the dustbin of history.

    Asking folks, "How do you know that?" is a great way to maintain epistemological relevancy in a changing world.

    1. https://global.oup.com/academic/product/categories-we-live-b... (would definitely recommend as it's a short read describing one way in which people take the raw incomprehensibility of the universe of stuff and parse it into the symbolic reality of thought)

This paragraph addresses that question:

> The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.

  • > On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information...

    How does he know that? And how does he know the bot isn't like that?

    The human mind needs a long time to learn to operate with symbolic information. Until we learn, we use terabytes of data from our senses and feedback from parents and teachers.

    ChatGPT can analyze syntax in a text, just try it, I did.

    And then Chomsky talks about morals? That's a really weird turn. He's saying it's a dumb machine, then criticizes it for not being more committed.

  • In your and my mind, yes. But a cursory look online shows a lot of people, laypeople and experts, evidently read the exact same paragraph and had all sorts of objections.

    In fact that seems to be the key paragraph being disputed by naysayers.

  • Thereby exposing the real gap: many humans are lazy at heart and are perfectly happy to use their efficient mind to explain without proof.

    It's no surprise that tools like ChatGPT attract us.

    • If you're not lazy, you're stupid...

      Yes, it's an inflammatory statement, but I assume you don't grow your own crops or sew your own clothes, and have therefore farmed out all the physical labor required to keep you alive.

      And that's only talking about physical work; the mental energy ratio is far higher. Your brain is around 2% of your body's mass but uses around 20% of your energy output. Your brain sets up powerful filters to get rid of as much information as possible. We focus ourselves on interests and close out the world around us. Just about everything you do, you can only explain post hoc; you've simply incorporated these behaviors into your life and likely have little to no awareness as to why you've done so.

      Let the machines toil away, and let the humans be hedonistic.

  • "Create explanations", in the Deutschian sense, is still the missing piece of the puzzle.

    I'd wager that it's emergent. AFAIK, there is no good "reasoning/conjecture/critique" labelled dataset made public yet, but I have been seriously considering starting one.
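
    For concreteness, here is a minimal sketch of what one record in such a dataset might look like; the schema and field names are purely my own invention, since no such public format exists.

      # Hypothetical record format for a "reasoning/conjecture/critique" dataset.
      # All field names are invented here for illustration only.
      from dataclasses import dataclass, asdict
      import json

      @dataclass
      class ReasoningExample:
          observation: str  # the phenomenon to be explained
          conjecture: str   # a proposed explanation
          critique: str     # an attempted refutation of the conjecture
          verdict: str      # e.g. "survives" or "refuted"

      example = ReasoningExample(
          observation="Bread left near the fire turns brown and crisp.",
          conjecture="Heat drives a chemical change at the surface of the bread.",
          critique="If it were only heat, boiling bread in water should brown it too.",
          verdict="survives",
      )

      print(json.dumps(asdict(example), indent=2))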

Whatever we are, we can be approximated to an arbitrary degree of precision. Every time we see a new leading model, skeptics emerge from the shadows calling for a pause to the optimistic progress being made. While it remains unproven whether we will ultimately achieve the desired level of approximation, it is equally unproven that we will not.

I'd say anything in sci-fi writing that covers artificial life forms touches on the subject, even if it doesn't call it out with that specific example. But take, as a first example from the culture, "2001: A Space Odyssey", a movie from 1968, long before ChatGPT: HAL is only doing his job.