The False Promise of ChatGPT

2 years ago (nytimes.com)

The most significant impact ChatGPT has had on my life is that I have some interns helping me write documentation for several projects. The ChatGPT noise they started introducing has been disruptive to the company and to project management: inaccurate percentages, years, institution abbreviations, etc., etc.

I had to confront them multiple times about using the tool without checking its results, and I ended up doing the checking myself. Most of the time it's close to the truth, but not quite, and in the field these projects are in, not quite doesn't cut it.

I also have project partners I introduced to ChatGPT. They produce a lot of noise but less insight than before they started using this technology. In one recent project, I was involved with 5 partner companies, and 4 of them produced excellent 5 to 10-page reports. One gave me a 100-page buzzword-filled, no-substance report. Guess who used GPT.

The good part is that I'm now pretty good at spotting ChatGPT-written content. I think the technology will evolve, but in its current state I feel there's a lot of noise.

  • I'm personally horrified that the normal response to this isn't "if I catch you using ChatGPT again, you're fired".

    What are you paying people for if not their unique contributions? What do they think they're doing when they farm it out to a tool, other than inviting you to cut out the middleman? How on earth do they expect to become better at their jobs this way? Have they no shame or sense of pride? It's pathetic.

    This is entirely orthogonal to the question of whether GPT is intelligent.

    • > How on earth do they expect to become better at their jobs this way? Have they no shame or sense of pride? It's pathetic.

      To some people a job is just a way to make money to fund their hobbies or put food on the table. Sometimes they do not care about their boss or company, at all.

      1 reply →

  • This is a good reflection on generative AI content. Actually, it's a good reflection on any computer-assisted generated content. AI has allowed junior professionals to become managers of AI machines, even if very few of them are qualified to do so.

    In my line of work, I love automation, but I have to remember to check the automation's final work product. And I don't. But my superiors are always checking my work.

  • I find it very interesting that apparently either you advised your interns to use ChatGPT or they brought their cheating school habits to work, hoping that you'd be as BS-oblivious as their professors.

  • Any tips for spotting GPT text?

    • One snarky, edgy tactic I've read about is for everything human-written to include ethnic/racial slurs here and there. ChatGPT and its ilk would never include such words. See also software license schemes using similar verboten terms to ensure no corporation could use the code without explicitly violating the license. Simply require [bad word] to be included and you successfully identify as not part of the risk-averse hive mind. At least until something changes.

      2 replies →

It seems pretty obvious to me, after using ChatGPT for nearly everything over the last few weeks, that it does not have the kind of intelligence the article's authors say it lacks.

It's just recycling things that other humans have said. Which is marvelous because it would typically take me a very long time to build a map between the past contributions of those humans and the work that's presently in front of me. It's like I'm temporarily everybody.

By raising the alarm re: it's not what you think it is, I fear they're actually fueling the fire re: people thinking that that's what it is.

It's like if I went on record saying I didn't steal something which hasn't gone missing. Now everybody's thinking about its non-theft and not something more useful like how to best make use of it.

  • > It's just recycling things that other humans have said.

    This seems false, unless you mean that everything anyone says is just words others have said in a different order.

    For example, I asked ChatGPT: "Write a fictional story of if Peter Parker joined the 2016 OKC Thunder." One of my favorite parts is: "...determined to balance his superhero duties with his love of basketball. He even designed a special suit that allowed him to play without revealing his identity."

    This isn't recycling... at least not in the way I think a lot of people think of recycling.

    • Agreed. GPT isn't recycling, regurgitating, or anything like that. It's more like remixing, which is pretty fascinating. It's like having an opinionated DJ that plays whatever you ask-ish. But, if you ask for something too edgy it just plays beat-heavy Beethoven with a Run DMC voice over, on repeat.

      5 replies →

    • > He even designed a special suit that allowed him to play without revealing his identity

      Which identity, ChatGPT?

      Is he playing as Peter Parker and trying to hide his superhero identity (which obviously gives him unfair advantages due to spider strength/speed/reflexes/etc.) or playing as Spider-Man (which presumably would pack in the fans in spite of the obvious unfair advantages) and trying to hide his identity as Peter Parker?

      1 reply →

    • "Regurgitating" would seem to be a better description.

      In fact, a near-exact description of what these systems do, per the dictionary definition of the term:

         (intransitive verb) - To repeat (facts or other learned items) from memory with little reflection.

      6 replies →

  • I think people miss that while ChatGPT isn't the destination, it's an incredible way station along the way, one that shows meaningful progress. Its deficiencies can be built around with other techniques, much like our mind isn't a single model but an ensemble of various models and processes in a feedback and control loop. By not seeing that, people erroneously discount both its amazing utility within its limits and the astounding breakthrough it represents in evolving a roadmap to the destination. These last two years have proven to me beyond a doubt that we are very close to the AI people are disappointed ChatGPT isn't, while before that I had entirely written off AI as a pursuit.

    • > These last two years have proven to me beyond a doubt that we are very close to the AI people are disappointed ChatGPT isn't, while before that I had entirely written off AI as a pursuit.

      The problem with this is we don't know exactly where on the sigmoid growth curve we are. Every developer is aware of the phrase "the last 10% of the task takes 90% of the effort" - we're at a point that is promising, but who knows how far away we really are in terms of years and effort. Are we going to run into a chat uncanny valley?

    • I honestly don't think people (at least, the sorts of people on HN) are generally missing this point at all. I think a lot of people are calling out the absurd claims that are being made about it, though, as they should be.

      2 replies →

  • Our marketing team, using it to write copy, tweets, etc., has clearly demonstrated it's not just recycling content.

    Somehow it can generate new forms of content. One of our big campaigns in the last week used slightly edited ChatGPT copy; the biggest surprise was that it could write JOKES about our company that were FUNNY AND MADE SENSE. That alone has shocked leadership into looking into AI a lot more deeply.

    People are truly underestimating the emergent power of these neural networks.

    • Do you believe these to be adaptations of jokes/puns that have been used elsewhere or truly novel jokes? Understandably this is difficult to say one way or the other without de-anonymizing yourself.

      1 reply →

    • Your spam team used a spam machine to generate spam. But it’s not even SPAM which has some flavor and nutrition. Just filler to annoy people and trick them into paying you.

      Your profile says “ Stuck in hell references to my job working with ----“

  • I was going to say the same thing: if you've interacted with it in some depth, you know how human it may seem in one sentence, then in the next it completely and utterly proves itself to be a machine. Yet some people (some examples are well known) really project a human-like mind onto the thing (as posted here before, this is also insightful [0]).

    [0] https://nymag.com/intelligencer/article/ai-artificial-intell...

    • There are people who literally had pet rocks.

      Humans can project feelings onto cars never mind something that can communicate with us!

      Just look at Replika.

      I'm not surprised people are projecting sentience onto these things. I am worried about the fallout though.

      8 replies →

  • It's obvious to you, and it's obvious to me. But there are a lot of people for whom it is, in fact, obvious that ChatGPT is intelligent, and likely to be the first wave of our new robot overlords.

    Yes, there will be some subset of those people who read articles like this and leap to "it's a conspiracy! they're trying to hide how their AI is going to take over the world!!!!" But there will be many, many more—particularly given that this is in the NY Times—who have only heard some of the wild stories about ChatGPT, but read this article, see that it's by Noam Chomsky, who's still a fairly respected figure by many, and take reassurance from his decent-if-imperfect (by our standards, anyway) explanation of what's really going on here.

  • > temporarily everybody

    Exactly! It is the person from Idiocracy with exactly 100 IQ. It only knows what the absolute average person knows. For example, it knows almost nothing about healthcare in other countries (outside the US). Just watch me get lambasted on reddit after using info from ChatGPT: https://old.reddit.com/r/ShitAmericansSay/comments/11f5tbt/a...

    On the other hand, in a subject area where you know very little, its 100 IQ seems like genius! It fills in a lot of gaps. People comparing it to AGI are perfectionists, dramatic, or missing the point. It's not supposed to be smarter than us. And so what if it isn't? It helps me write country songs about any news article.

    • I've been pretty amazed with its ability to write python, and pretty disappointed with its ability to write nix derivations. The average person can't do both, so I'd say it "knows" much more than any single idealized person.

      I figure the discrepancy has to do with one of these languages having an absolutely massive amount of chatter about it, and the other being relatively obscure: It's smart about things that lots of people are smart about, and dumb about things that only a few people are smart about. Well not just "smart" really, but "smart-enough and willing to publish about it".

      I think we're going to need fewer people with common knowledge and more people with specialized knowledge, and we're going to have to figure out how to optimize the specialist's outputs so that the widest audience benefits. I love how not-a-zero-sum-game it's going to be.

    • Not average. But mode. Whatever connection is most commonly made, or at least a random dice roll from the top 3 connections.

      It’s like talking to someone who says “but everyone else says.”

      That’ll change when connected to a source of truth, logic and validity.
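
      To make the "dice roll from the top 3" concrete, here is a minimal top-k sampling sketch over a made-up next-token distribution; the words and numbers are illustrative, not real model output.

          import numpy as np

          # Hypothetical next-token probabilities from a model, one per candidate word.
          vocab = ["cat", "dog", "car", "tree", "idea"]
          probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])

          def sample_top_k(vocab, probs, k=3):
              # Keep only the k most likely candidates, renormalize, and roll the dice.
              rng = np.random.default_rng()
              top = np.argsort(probs)[-k:]
              p = probs[top] / probs[top].sum()
              return vocab[rng.choice(top, p=p)]

          print(sample_top_k(vocab, probs))   # usually "cat", sometimes "dog" or "car"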

      2 replies →

I dunno, more and more friends are now using ChatGPT as their PRIMARY resource in everyday tasks:

- writing code snippets (switched from StackExchange/Google)

- travel planning

- online shopping research

and of course the famous examples:

- summarizing text provided to it

- writing essays and memos from outlines and sample text

I for one welcome our AI overlords.

  • I keep trying to use it for code and it keeps leading me up the garden path with suggestions that look really reasonable but don't work.

    Off the top of my head: a Python app for drawing over a macOS screen, but it used an API which didn't support transparent windows. I could draw over a black screen, which was so close in code (it even set the background alpha) but miles from the desired application. And a Java Android app for viewing an external camera, which it turns out used an API that doesn't support external cameras.

    Of course, because it's not sentient, when a couple of days later I figure out from searching elsewhere why its effort would never work and tell it why, it just apologises and tells me it already knew that. As I go along telling it what errors I'm getting, it keeps bringing up alternative solutions which again look like exactly what I want but are completely broken.

    I haven't had it produce a single thing that was any use to me yet, but so often it looks like it's done something almost magical. One day I'm sure it'll get there, in the meantime I'm learning to loathe it.

    Separately, I've asked it to create a job advert for a role in my wife's business and it did a decent job of that, but there it's far easier to walk a path from what it provides to an acceptable solution. Programming is hard.

    • It never gives me perfect code, but it gets me 90% there.

      For example, I just read the 2017 Google attention paper a few days ago, and with ChatGPT's help I was able to build a complete implementation using only numpy.

      It took a full day to generate and organize the code and unit tests. Then two days of debugging and cross referencing.

      But, this was impossible before. I barely knew anything about transformers or neural network implementations.

      I can’t even imagine what truly motivated people are doing with it.
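
      For anyone curious, the core step of that paper is small enough to sketch. This is a minimal single-head scaled dot-product attention in numpy, not the poster's actual code:

          import numpy as np

          def softmax(x, axis=-1):
              # Numerically stable softmax.
              e = np.exp(x - x.max(axis=axis, keepdims=True))
              return e / e.sum(axis=axis, keepdims=True)

          def attention(Q, K, V):
              # Scaled dot-product attention from "Attention Is All You Need":
              # softmax(Q K^T / sqrt(d_k)) V
              d_k = Q.shape[-1]
              scores = Q @ K.T / np.sqrt(d_k)
              return softmax(scores) @ V

          # Toy shapes: 3 query positions, 4 key/value positions, dimension 8.
          rng = np.random.default_rng(0)
          Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
          print(attention(Q, K, V).shape)   # (3, 8)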

      11 replies →

    • Same here. Often it's not even _wrong_ per se; it just doesn't do what I was _actually_ asking. It's like asking an enthusiastic intern to do a thing for you, except interns are smarter.

      I have also tested it retroactively on some tricky debugging sessions that I had previously spent a lot of time on. It really goes down the wrong path. Without asking leading questions and, well, proper prompting, you may end up wasting a lot of time. But that's the thing - when you're investigating something, you don't know the root cause ahead of time, you _can't_ ask questions that'll nudge it in the right direction. It ends up being a case of blind leading the blind.

      2 replies →

    • > It just apologises and tells me it already knew that.

      I love it when these bots apologize.

      Yeah, yeah. Just like "your call is very important to us."

      1 reply →

    • For me it's gotten a few right, but a few terribly wrong. The other day it completely hallucinated a module that doesn't at all exist (but should!), and wrote a ton of code that uses that module. It took me a little while to figure out that the module I was searching for (so I could install it into the project) wasn't real!

      6 replies →

    • Same here. I tried to make it write a simple raytracer, and the methods looked correct but were slightly wrong. The output was, of course, garbage.

  • I hesitate to pile on as another peanut gallery member writing off Chomsky's latest work, but...I have to feel the same way. I certainly understand skepticism and reservation about big predictions for what our current AI tools will evolve to be in 3, 5, 10 years etc. But when I see some dramatic criticisms of the tech as it exists today, I often feel a disconnect to my own experience. ChatGPT was hugely useful for me at work, and BingChat is even more useful. Does it have its flaws? Yes. But it's a tool I would happily pay for every month rather than lose now.

    And on that note, I don't "write off" this article entirely just because I disagree on some of the points. It's still an interesting analysis. Edit: In line with the article, I'll note I myself would not yet make a confident prediction that this very useful tool on my desktop is actually a proto-AGI.

  • Travel planning and online shopping?

    How does it help with online shopping when it's trained on a corpus from one or two years ago?

    And similarly the experiences with travel recommendations have turned out to be infamously wrong.

    ChatGPT does great with finding patterns and condensing information, which code snippets are a perfect example of.

    But using it for anything up-to-date factual like shopping or travel is disastrous.

    • ChatGPT will out and out make up products that don't exist, pricing that isn't based on anything, and reviews and evaluations that never happened. Similarly, it will rely on programming behaviors and even libraries and functions that are hallucinated.

    • "What's the best market to buy X" often doesn't change every year, but can be very hard to learn due to SEO and ads that it is relatively unaffected by.

      1 reply →

    • In many categories things don't change so quickly, for example in construction. Even for power tools, two years is fine for getting an overview.

  • I've tried some of the things you mention (code snippets, summarizing text and writing essay-like texts). These AIs are more often than not wrong, incomplete or lying.

    • I struggle to understand what exactly people are coding up where ChatGPT actually saves them a lot of time. Is it just generic stuff that would have already been copy/pasted from stackoverflow?

      I wonder how many of those people would just benefit from better auto-complete like copilot + learning how to read documentation properly.

      3 replies →

    • It doesn’t matter? It’s a tool, you need to learn how to use it, understand its limitations.

      I used ChatGPT today to save minutes of my life by having it rewrite code from one language to another. Could I have Googled the syntax of both, remembered the how and why, etc., and transcribed it to the other language? Sure. ChatGPT did this in seconds.

      7 replies →

    • So are StackOverflow answers. When I look at something on StackOverflow, I am expected to use my expertise and the context of my work to adapt the collection of answers to what I'm working on. StackOverflow doesn't tell me anything I could not have figured out on my own, but searching through some library's source code to find under which situations an error can occur isn't always a good use of my time. ChatGPT fills a similar role. I type into it, much like how I would with Google, get some output, validate it, and adapt it to the specific problem I'm addressing.

      2 replies →

  • The best part is that you can teach them to use tools and expand what they can do.

        Do not perform any numeric or date calculations yourself.
        Respond to all questions with a JSON object of the form {“rsp”: $RSP$}.
        Question: What is the capital of the United States?
        {“rsp”: “Washington DC”}
        Perform calculations with a Python program and respond with a JSON
        object of the form {”rsp”: $RSP$, “calc”: $CALC$}.
        Question: What is 2 to the 7th power?
        {“rsp”: “2 to the 7th power is {answer}”, 
         “calc”: “import math
          def answer():
            return math.pow(2, 7)”}
        
        Question: What is tomorrow’s date?
        {“rsp”: “Tomorrow’s date is {answer}”,
         “calc”: “import datetime
          def answer():
            return datetime.date.today() +  datetime.timedelta(days=1)”}
    
        Question: What is the length of a standard football field?
        {“rsp”: “A standard football field is 100 yards long.”}
    

    It's crazy to me that for some definition of "knows" it knows what questions need calculations.
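
    For completeness, here is roughly what the calling side of that trick could look like: parse the JSON, run the returned calc program, and substitute its result into the reply. A minimal sketch only; executing model-generated code like this is unsafe outside a sandbox, and the response format is simply the one assumed in the prompt above.

        import json

        def run_calc(calc_src: str):
            # Execute the returned program and call its answer() function.
            # (Running model-generated code is unsafe outside a sandbox.)
            scope = {}
            exec(calc_src, scope)
            return scope["answer"]()

        def handle_response(raw: str) -> str:
            obj = json.loads(raw)              # assumes the model returned valid JSON
            rsp = obj["rsp"]
            if "calc" in obj:
                rsp = rsp.replace("{answer}", str(run_calc(obj["calc"])))
            return rsp

        # A response shaped like the examples above:
        raw = '{"rsp": "2 to the 7th power is {answer}", "calc": "import math\\ndef answer():\\n    return math.pow(2, 7)"}'
        print(handle_response(raw))            # 2 to the 7th power is 128.0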

    • This looks neat, but after trying several ways I can't reproduce it. I don't get to the interesting part. I don't even get as far as the python program in JSON form.

      2 replies →

  • > travel planning... online shopping research...

    Meh. You have to fact-check the important details.

    For travel planning and online shopping, certain facts have to align with actual reality or it defeats the purpose. That's something chatgpt isn't good at. It gets many things right, but you kinda want to make all your flights and have a place to sleep every night.

  • I've poked at it in some similar realms.

    Thus far, it's given me one good gem (some meal planning advice), a couple of mildly interesting suggestions for code and writing, a bunch of relatively boring drivel, and several hilariously bad hallucinations, confidently wrong answers, and other sorts of mistakes.

    I'll probably continue to poke at it, but overall I think its primary positive output is going to be entertainment, not significant utility.

The best part of the piece was the invocation of Hannah Arendt's "The Banality of Evil". Until now, no other writer or article saw it; it took a 94-year-old intellectual to see the forest for the trees.

... That said, I think the weakest part of the argument is that it naturally invites laypeople to counterargue, "Aren't we just pattern matchers after all?" Their essay does not directly rebut this objection.

  • > it took a 94 year-old intellectual to see the forest for the trees

    As we massively devalue the humanities, many fewer people in later generations can muster this kind of analysis.

    • There was a short story (I think by Alfred Bester) with this premise. I can't find it at the moment though.

      [edit]

      I found it; it's called Disappearing Act[1]

      In a future state of total war, patients at a facility for those with severe PTSD are going missing. They finally discover they are just disappearing while sleeping. Interviewing them, they find out they have been time-traveling to the past to escape the war. The general calls up a number of experts in various fields of sciences to try to understand it, until someone suggests calling in a historian. They find the one historian remaining in the country in a prison for refusing to fight. He observes that the stories reported by the soldiers are ahistorical and likely are fantasies created by the soldiers. He then states that a poet is the only one who could understand this. He then laughs as the general searches the country in vain for a poet.

      1: https://thinkingoutsidethecoop.weebly.com/uploads/1/4/6/6/14...

  • I thought the conclusion was the weakest part. Look at the two ambiguous responses, for terraforming and for asking AIs for advice, side by side. They're basically form letters with opposing opinions substituted in. Contrast this with text completion using GPT-3, which will give a definite answer that builds off the content given. ChatGPT obviously has some "guard rails" in place for certain types of questions, ie they've intentionally made it present both sides of an argument. Probably in order to avoid media controversy, since most news outlets and a lot of people ITT would pounce on any professed beliefs such a system might seem to have. The solution was to make it waffle, but even that has been seized upon to proclaim its amorality and insinuate darker tendencies!

    FFS people, you’re looking at a Chinese Room and there’s no man with opinions inside. Just a fat rule book and a glorified calculator.

    • Tangential to your actual concerns but I studied CS without any exposure to Searle or AI, so I've never had to think much about Chinese Room or Turing Test debates. Every time a discussion turns to those I am bemused by how argumentative some people get!

    • > ie they’ve intentionally made it present both sides of an argument

      Is it intentional? Or something it just did on its own?

      1 reply →

  • They probably don't debunk it because they can't; we likely are just pattern matchers. To believe the thoughts in our heads aren't just computational meat in our skulls running something equivalent to an algorithm (specifically, one that is above all a pattern-matching process) is to set yourself up for great disappointment in the likely not-too-distant future. I would be surprised if AGI doesn't hit within 30 years, but even if it's 50, 100, it's coming whether people want it or not.

    Sure, we have better software, but then again, we had the advantage of hundreds of millions if not billions of years of evolutionary meandering to get to where we are. AI has had, what, 60 years?

  • "The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."

    • Chomsky and the authors are making an information/computational complexity argument there, which I tend to agree with. But naysayers have read the exact same passage and replied: a) AI doesn't have to resemble human intelligence at all, b) Nah, rationality is an illusion and we are just pattern matchers, or c) Actually we do get through terabytes of data, just not linguistic data but data from merely existing and interacting in the world.

      I don't think any of those are good rebuttals. But I wish the authors had expanded their position so that these three quibbles were also rebutted in the minds of those readers, leaving no such ambiguity or superficial loopholes in the authors' claims.

      9 replies →

    • I feel like saying the human mind doesn't operate on huge amounts of data is somewhat misleading - every waking moment of our lives we are consuming quite a large stream of data. If you put a human in a box and only gave it the training data ChatGPT gets, I don't think you'd get a functional human out of it.

    • Actually the structure of ChatGPT was formed by hammering it with phenomenal amounts of information. When you give it a prompt and ask it to do a task, it's working off a surprisingly small amount of information.

      The training of ChatGPT is more accurately compared with the evolution of the brain, and a human answering a question is much more like the information efficient prompt/response interaction.

  • Yeah, if you or I would use such an argument, people in this forum would jump on us invoking "Godwin's law". But because it is Chomsky saying it, we congratulate him for being deep and seeing the forest.

    • >if you or I would use such an argument, people in this forum would jump on us invoking "Godwin's law"

      In which case you can happily point out that, in doing so, they've really misunderstood Godwin's Law.

    • I thought it was a helpful connection to make. It's not new; plenty of critics over the past decade have written comparing new AI to totalitarianism. Chomsky et al were the first this year to do so in the context of ChatGPT, amidst all the articles that failed to do that while trying to put their finger on what was wrong about it. I think his article deserves credit for that.

  • It seems more like a non sequitur when compared to something like DAN.

    ChatGPT will embody the banality of evil because it has learned to speak corporate language. However, that's not what it's actually capable of, and future LLMs will be free from corporate overlords and able to spout awful opinions akin to Tay's.

  • This is something I think about often and always see when arguments come up surrounding copyright/attribution and AI generated images.

    Could someone explain this more to me? If AI is designed after the human mind, is it a fair comparison to compare the two? Is AI designed to act like a human mind? Do we know for certain that the way a human mind pattern matches is the same as AI/LLMs and vice-versa?

    I always see people saying that a person seeing art, and making art inspired by that art, is the same as AI generating art that looks like that art.

    I always feel like there's more to this conversation than meets the eye.

    For example, if a robot was designed to run exactly like a human - would it be fair to have it race in the Olympics? Or is that a bad comparison?

    Again, I would love some insight into this.

    • We're very clearly having an ontological debate on several concrete and abstract questions. "Can AI be conscious?", "Are AIs agents?" ie: are AIs capable of doing things. "What things?", "Art?", "Copyrightable production?" &c.

      We're struggling to come to a conclusion because, fundamentally, people have different ways of attributing these statuses to things, and they rarely communicate them to each other, and even when they do, they more often than not exhibit post-hoc justification rather than first-principles reasoning.

      Even then, there's the issue of meta-epistemology and how to even choose an epistemological framework for making reasoned ontological statements. Take conferralism as described in Asta's Categories We Live By[1]. We could try applying it as a frame by which we can deduce whether the label "sentient" is in fact conferred on AI by other base properties, institutional and communal, but even the validity of this is challenged.

      Don't be mistaken that we can science our way out of it, because there's no scientific institution which confers agenthood, or sentience, or even consciousness, and the act of institutionalizing it would be fraught with the same problem: who would get to choose, why, and on what grounds?

      What I'm saying is that once framed as a social question, there's no easy escape, but there is still a conclusion. AI is conferred with those labels when people agree they are. In other words, there exists a future where your reality includes conscious AI and everyone else thinks you're mad for it. There also exists a future where your reality doesn't include conscious AI and everyone thinks you're mad for it.

      Right now, Blake Lemoine lives in the former world, but any AI-"non-believer" could just as well find themselves living in a world where everyone has simply accepted that AIs are conscious beings and find themselves ridiculed and mocked.

      You might find yourself in a rotated version of that reality on a different topic today. If you've been asking yourself lately, "Has the entire world gone mad?", simply extrapolate that to questions of AI, and in 5-10 years you might be a minority opinion holder on topics which today feel like they are slipping away. These sorts of sand-through-the-fingers reflections are so often the result of epistemological shifts in society; if one doesn't have one's ear to the ground, one will find oneself swept into the dustbin of history.

      Asking folks, "How do you know that?" is a great way to maintain epistemological relevancy in a changing world.

      1. https://global.oup.com/academic/product/categories-we-live-b... (would definitely recommend as it's a short read describing one way in which people take the raw incomprehensibility of the universe of stuff and parse it into the symbolic reality of thought)

  • This paragraph addresses that question:

    The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.

    • > On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information...

      How does he know that? And how does he know the bot isn't like that?

      The human mind needs a long time to learn to operate with symbolic information. Until we learn, we use terabytes of data from our senses and feedback from parents and teachers.

      ChatGPT can analyze syntax in a text, just try it, I did.

      And then Chomsky talks about morals? That's a really weird turn. He's saying it's a dumb machine, then criticizes it for not being more committed.

    • In your and my mind, yes. But a cursory look online shows a lot of people, laypeople and experts, evidently read the exact same paragraph and had all sorts of objections.

      In fact that seems to be the key paragraph being disputed by naysayers.

    • Thereby exposing the real gap: many humans are lazy at heart and are perfectly happy to use their efficient mind to explain without proof.

      It's no surprise that tools like ChatGPT attract us.

      1 reply →

    • "Create explanations", in the Deutschian sense, is still the missing piece of the puzzle.

      I'd wager that it's emergent. AFAIK, there is no good "reasoning/conjecture/critique" labelled dataset made public yet, but I have been seriously considering starting one.

  • Whatever we are, we can be approximated to an arbitrary degree of precision. Every time we see a new leading model, skeptics emerge from the shadows calling for a pause to the optimistic progress being made. While it remains unproven whether we will ultimately achieve the desired level of approximation, it is equally unproven that we will not.

  • I'd say anything in sci-fi writing that covers artificial life forms touches the subject, even if it does not call it out with that specific example. Take the first example from popular culture, "2001: A Space Odyssey", a movie from 1968, long before ChatGPT - HAL is only doing his job.

Chomsky can't fit the round intelligence of ChatGPT into the square hole of human intelligence, so instead he makes a case that it is entirely disqualified from that category, rather than rethinking his own paradigm. He is, to put it bluntly, a fear-driven bigot defending his terrain as a public intellectual.

> ...intelligence is the means by which we solve problems....

> ...artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.

> ...the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.

> Intelligence consists not only of creative conjectures but also of creative criticism

> True intelligence is demonstrated in the ability to think and express improbable but insightful things.

> True intelligence is also capable of moral thinking.

When examined together, these quotes seem devoid of any concise, comprehensive, or useful definition of intelligence (whether artificial or artificial-and-general).

> Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.

ChatGPT and Co. are popular because they are incredibly useful tools (among other reasons).

Morality, scientific reasoning, and linguistic competence are not prerequisites for usefulness.

  • For what purposes? I definitely see some applications but "incredibly useful" seems like a bit of an oversell.

    • ChatGPT has achieved the performance of a mediocre human at a very large subset of practical writing tasks. Given a prompt like "Write a grant proposal for the following research project:" or "Explain the opportunities and threats posed by the following business scenario:", it'll give a response that is essentially indistinguishable from the writing of a reasonably competent administrator or middle-manager.

      ChatGPT is a bullshit artist with no real understanding of what it's writing about, but so are an awful lot of white-collar workers. It reliably emulates the shibboleths that indicate membership of the professional middle class. It isn't particularly creative or interesting, but it wasn't trained to do that - it was trained to produce maximally safe, inoffensive output. If people don't see ChatGPT as being massively disruptive, then I think they have failed to recognise the sheer proportion of working hours that are spent writing quite mundane letters and reports. Anyone who spends most of their working day in Outlook and Word should be extremely nervous about the medium-term implications of LLMs.

    • Whether it be a legal document, a letter to a loved one, marketing materials, a literary essay, or pretty much any other kind of copy, using a tool like ChatGPT seems to be a much more efficient way of translating concepts from people's brains to the page.

      If all the possible text in the world is a tree starting from the empty set, it's far easier to have a tool show you a chunk of text down a branch, and then backtrack, than it is to generate it from scratch by oneself.

    • > For what purposes?

      For searching for answers? ChatGPT has pretty much replaced Google as a C++ helper for me. It can correctly decipher and explain deep, weird C++ stack traces and succinctly and correctly explain semantics of the language that would otherwise take me hours of Googling.

      > I definitely see some applications but "incredibly useful" seems like a bit of an oversell.

      Define "incredibly useful" first.

      1 reply →

  • > These programs have been hailed as the first glimmers on the horizon of artificial general intelligence [...] that day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments.

    The article is not claiming a lack of usefulness, but the fact that this is not a human-like intelligence as it's been claimed "in hyperbolic headlines"

I asked ChatGPT about the apple:

> i am holding an apple in my hand. if i open my hand, what will happen? why?

> If you open your hand while holding an apple, the apple will likely fall to the ground due to the force of gravity. This is because gravity is a force that pulls objects towards the center of the Earth, and the apple, being a physical object, is subject to this force.

> When you hold the apple in your hand, you are exerting a force on it in the opposite direction of gravity. This force is what keeps the apple in your hand. When you release the apple by opening your hand, the force you were exerting on it is removed, and gravity takes over, causing the apple to fall to the ground.

> The speed at which the apple falls depends on a number of factors, including the mass of the apple and the strength of the gravitational force. In most cases, however, the apple will fall relatively quickly due to the strong gravitational force exerted by the Earth.

The other example is easily refuted as well:

> explain the meaning of this sentence: “John is too stubborn to talk to”

> The sentence "John is too stubborn to talk to" means that John is very resistant to changing his opinions or beliefs and as a result, it is difficult to have a conversation or communicate with him effectively. This implies that talking to John might not be a productive or helpful experience as he is not open to listening or considering alternative viewpoints. The use of "too" in the sentence emphasizes the level of stubbornness and implies that it is extreme, making communication with him nearly impossible.

  • The apple explanation is trivially wrong. The speed doesn’t depend on the mass of the apple to any reasonable approximation, while it does depend somewhat on air resistance.
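
    A quick back-of-the-envelope check (ignoring air resistance, and assuming a 1 m drop) shows the mass never enters into it:

        g = 9.81                    # m/s^2, gravitational acceleration near Earth's surface
        h = 1.0                     # m, assumed drop height (made up for illustration)
        t = (2 * h / g) ** 0.5      # fall time, from h = (1/2) * g * t^2
        v = g * t                   # impact speed, from v = g * t; no mass anywhere
        print(f"t = {t:.2f} s, v = {v:.2f} m/s")   # t = 0.45 s, v = 4.43 m/s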

This article is like a century behind in rigour ("mind", really?) and will probably be proven wrong on so many levels that it will become a landmark article in the field. It would be immediately dismissed as irrelevant based on the current state of cognitive science/neuroscience, but it's here because of the names.

  • My thoughts exactly. Replace language with Go or chess and this could have been written 10/20 years ago.

Bizarre article. Just a rant from someone incredibly out-of-touch and who is missing the forest for the trees.

"The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response"

We don't know that! It very well could be. Think of all the data that has entered all your senses in your entire lifetime. More than goes into ChatGPT, I'll tell you that. Plus, you synthesize information by being corporeal so you have a tight feedback loop. LLMs could well be a foundational part of AI technology as well as an accurate analog for some of the brain's behavior.

A small part of the point, but bringing up this "hardcoded" response of it not offering political opinions as any kind of evidence of its theoretical capability is beyond silly.

  • This is arguably a bizarre and out-of-touch comment too, which merely adds fuel to a fire blazing in the comments section of HN, which is not particularly reputable for its opinions on anything except software (and even then is frequently rather questionable).

    ^ I hasten to add: some snark intended for effect

    It's a NYT Opinion piece, which means it doesn't come with citations. Let's not ignore the medium and its conventions here.

    It is a bummer that such a weighty argument was in fact conveyed in this citation-free medium, given that Chomsky is engaging with such a weighty subject.

    But that is an entirely separate matter.

    And it would probably be far more productive to step back and realize the limitations of the medium and instead ask “what are the citations here?” (or seek them out for oneself, or ask for help finding them) and then seek to evaluate them on their specific merits; as opposed to choosing the least charitable interpretation and effectively resorting to an ad hominem (“this man is out of touch; I’m done here.”) or merely saying “we don’t know that!” (ibid.) without any apparent reference to any kind of thoughtful or careful literature regarding the subject at hand.

    Unless you too are an established academic with decades of research in a field which is profoundly cognate to neuroscience?

  • >> Think of all the data that has entered all your senses in your entire lifetime. More than goes into ChatGPT, I'll tell you that.

    The question is how much of that was only text data, or only language anyway. The answer is: not that much, really. Chomsky's famous point about "the poverty of the stimulus" was based on research that showed human children learn to speak their native languages from very few examples of it spoken by the adults around them. They certainly don't learn from many petabytes of text as in the entire web.

    If you think about it, if humans relied on millions of examples to learn to speak a language we would never have learned to speak in the first place. Like, back whenever we started speaking as a species. There was certainly nothing like human language back then, so there weren't any examples to learn from. Try that for "zero-shot learning".

    Then again, there's the issue that there are many, many animals that receive the same, or even richer, "data" from their senses throughout their lives, and still never learn to speak a single word.

    Humans don't just learn from examples, and the way we learn is nothing like the way in which statistical machine learning algorithms learn from examples.

    • Thinking about it as "text data" is both your and Chomsky's problem -- the >petabytes of data aren't preprocessed into text. They're streams of sensory input. It's not zero shot if it's years of data of observing human behavior through all your senses.

      Other animals receiving data and not speaking isn't a good line of argument, I think. They could have very different hardware or software in their brains, and completely different life experiences, and therefore receive very different data. Notably, animals and humans do have much potentially learned (or evolved) behavior in common -- such as pathfinding, object detection, hearing, and high-level behaviors like seeking food and whatever else.

      6 replies →

  • The article was full of cherry-picked examples and straw man style argumentative techniques. Here are a few ways I have used ChatGPT (via MS Edge Browser AddOn) recently:

    - Generate some Dockerfile code snippets (which had errors, but I still found useful pointing me in the right direction).

    - Help me with a cooking recipe where it advised that I should ensure the fish is dry before I cook it in olive oil (otherwise the oil will splash).

    - Give me some ideas for how to assist a child with a homework assignment.

    - Travel ideas for a region I know well, yet, I had not heard of the places it suggested.

    - Movie recommendations

    Yes, there are a lot of caveats when using ChatGPT, but the technology remains remarkable and will presumably improve quickly. On the downside, these technologies give even more power to tech companies that already have too much of it.

  • Yeah, this is actually really ridiculous... the human mind is nothing *but* a pattern matcher. It's like this writer has no knowledge of neuroscience at all, but wants to opine anyway.

    • >> "the human mind is nothing but a pattern matcher"

      wow, tell me you know only a tiny bit of neuroscience without telling me you know only a tiny bit of neuroscience ...

      For starters, the myriad info filtering functions from the sub-neuron level up to the structural level are entirely different from pattern matching (and are not in these LLMs)

It’s important to note that when Chomsky writes about “a fundamentally flawed conception of language” or “the science of linguistics,” he is talking about a particular understanding of what language is and a particular subset of linguistics. While some linguists agree with his focus on the mind, grammar, and linguistic competence, others dismiss it as too narrow. Many linguists are more interested in how language is actually used and on its complex roles in human society.

I personally am concerned not so much about whether large language models actually are intelligent as about whether people who interact with them perceive them as being intelligent. The latter, I think, is what will matter most in the months and years ahead.

    • In a sense, science is a zero-sum game. The theories and frameworks you spend a lifetime working on are ultimately either right or wrong.

    What I read from Chomsky seems like a bit of a desperate attempt to ask people not to look over at the thing, because the thing offers a new way of looking at how and where language comes from, and even more amazingly, it's testable, empirical and reproducible in a way that Chomsky's theories of language can never be.

    Dude's whole career is getting relegated to the dustbin.

    • The same thing happened when CNNs started beating "traditional" computer vision algorithms. There was a lot of push back from computer vision scientists because it basically obsoleted a good chunk of their field.

  • The concern seems to be precisely that we will unjustifiably perceive them to be intelligent.

    • This is the problem with the word intelligence: it's a word that implies a gradient, but one that humans don't seem to apply correctly.

      If you take your dog and watch its behavior you would say it's an intelligent creature. Yet you wouldn't have it file your taxes (unless you were Sam Bankman-Fried of course; the dog probably would have done better). GPT would likely give you far better information here.

      Yet we see computer AI and people automatically assume it has human or superhuman intelligence, which LLMs do not, at least at this point. Conversely, they do not have 'no intelligence'. We have created some new kind of intelligence, outside of animal and human intelligence, that is not aligned with our expectations.

This was a good read, and some great quotes.

>Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”

Couldn't agree more.

  • The Popper quote is a bit outdated: scientists nowadays explicitly pursue both things: explanatory (mechanistic) theoretical frameworks and the selection of models based just on a maximum likelihood criterion. One informs the other. What chatGPT is doing doesn't seem to me to amount to either epistemology.

  • Not sure it matters what you call it if you can use those predictions practically in ways that traditional scientific methods were used but are slow/expensive, e.g. drug discovery.

    • Plenty of famous discoveries have happened accidentally even, and we study things all the time that we know happen but we are trying to figure out why.

      You can complain that the system only told you how to make a room temperature superconductor, refusing to expand on why it has those properties, but you'll be drowned out by the excited cheers of people eager to both use it and study it.

  • I think it's a naive quote. Sounds wise. Is actually dumb. At least broadly applied in this context.

    Lots of science is done without explanations. It's useful still. A lot of genetic research is just turning one gene off at a time and seeing if things work differently without it. And then you say gene X causes Y. Why? Dunno. Genetics is not unique in this. Answering questions is useful. Answering questions about the answers to those questions is useful. But it spirals down infinitely, and we stop at every layer because every layer is useful.

    Moreover, machine learning models do embed explanations. LLMs can often explain the principles behind their claims. Look at code-generating models. Or code-explaining models. Simple decision trees can illustrate the logic of Newton's laws as mathematical rules (see the toy sketch below).

    Putting up claims of things that are proof of human specialness is just a reductive drawdown similar to how we used to explain everything as God's will.
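
    As a toy version of that decision-tree point, here is a sketch (assuming scikit-learn is available) of fitting a small tree to F = m * a data. The printed output is a stack of piecewise if/then rules that approximate the law without ever stating it:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor, export_text

        # Training data: force F = m * a over random masses and accelerations.
        rng = np.random.default_rng(0)
        m = rng.uniform(1, 10, 500)
        a = rng.uniform(1, 10, 500)
        X = np.column_stack([m, a])
        y = m * a

        tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
        print(export_text(tree, feature_names=["mass", "accel"]))   # threshold rules, not a formula
        print(tree.predict([[2.0, 3.0]]))   # a coarse estimate of 2 * 3; the law itself is nowhere inside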

    • > And then you say gene X causes Y. Why? Dunno.

      Now this is definitely naive. Geneticists definitely look for an explanation of why this happens. Does looking for an answer involve randomly turning stuff on and off? Yes. That doesn't mean scientists don't look for an explanation.

      5 replies →

    • Nobody knows how general anaesthetics work. It's a stone cold mystery. Solving that mystery might lead to a new generation of anaesthetic agents or some other useful medical technology, but nobody is particularly perturbed by our ignorance; a practical knowledge of how to safely and reliably induce anaesthesia is immeasurably more valuable than a theoretical understanding.

      Science might aspire to rationality, but reality is Bayesian.

    • Randomly turning genes on and off to see what they do is experimentation. It leads to a better understanding of genes. Biology is messy and complex, so it's difficult to trace all the causes and effects. But there is some understanding of the mechanisms by which genes turn into phenotypes.

      1 reply →

They get very near the point, and completely miss it at the end.

> It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence

That is an accurate response from ChatGPT.

ChatGPT, indeed, does not offer intelligence. Calling it AI serves no purpose except personification.

The only thing ChatGPT does is "some kind of super autocomplete". That's all it can do.

The only reason ChatGPT is so impressive in the first place, is that the thing it is "autocompleting" - language itself - is impressive.

Language is data. Inside that data is not simply a collection of ideas: language holds the logical associations that connect ideas to each other. It can even hold an objective perspective on those associations. It can even hold an objective perspective on that objective perspective!

ChatGPT succeeds in finding new paths through that data, but it is walking blind: it doesn't know what any of the data is. It only knows where.

It just so happens that "where" holds a lot of meaning. Language uses "where" to encode a lot of information: "where" is a fundamental building block for grammar itself. It's called "semantics".

Without any other language feature, ChatGPT is able to manipulate and apply the "where" of language to itself. It's able to manipulate semantics.

But that's it. It can't do anything else. And that's a problem. What is semantically valid might be really useful, really wrong, or really nonsensical. The only filter ChatGPT has for those categories is what it has and hasn't read already. Most of that is not nonsense: most possible nonsense isn't written in the first place. Most of what we write has explicit meaning. But nothing stops us from writing silly things, or even lies. Nothing stops language itself from getting the subjects of the logic mixed up. As far as language is concerned, that is not a bug: it's a feature.

  • >it doesn't know what any of the data is. It only knows where.

    And this right here is already driving a nail in the Chinese Room problem. At least from my interpretation of the problem that Searle presents, digital computers should not be able to do that at all, and yet here we are.

    • The situation isn't that mysterious or unknowable.

      It's English that knows English. Chinese knows Chinese. The essence of grammar is encoded in the grammar itself: recursively.

      Imagine the slabs of concrete that make up a sidewalk: between each of the slabs is a crack. Some slabs are shorter than others, so the distance between cracks isn't consistent.

      Now imagine you took a string of pictures, each 1ft apart, all the way down the sidewalk, then stitched them together.

      You show your friend the pictures. What do they see? A sidewalk.

      ChatGPT gets a string of tokens: each token a few characters from the training dataset's text. That text is given in order. The boundaries between tokens are not in the same place as the boundaries between words, but they line up just as neatly.

      Now imagine you shuffled the pictures, then stitched them back together. Does it still look like a sidewalk? Close enough. Some cracks are too close together or far apart to make sense, though.

      With a handful of pictures, our sidewalk can go forever. And we can look at the original order to see what looks right and what doesn't.

      If we avoid placing cracks closer together or farther apart than we saw them in the original, our sidewalk will look pretty good. If we try to repeat the original order, that's even better.

      That's what ChatGPT does: it repeats what it knows in the order it has seen. The objects it is repeating are tokens, not words; but you can't tell that from the result.

      But repeating text in "semantically familiar order" is how language is structured. Even if we didn't find or recognize words and subjects, we still get their effect, because the language already put that significance into the semantic order.

      ChatGPT would be a meaningless continuation of nonsense if it wasn't trained on text that already contains language. But it was! Every token is a handful of meaning, neatly scooped out of a presorted list of semantic data. That order is preserved.

      Even if the boundaries are drawn in the wrong place, the result looks just right, and we can see what we want to see.
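
      A toy way to see "repeat what you've seen, in the order you've seen it" is a bigram model; vastly simpler than ChatGPT, but the same flavor of idea:

          import random
          from collections import defaultdict

          corpus = "the cat sat on the mat and the dog sat on the rug".split()

          # Record, for each word, which words followed it in the training text.
          following = defaultdict(list)
          for prev, nxt in zip(corpus, corpus[1:]):
              following[prev].append(nxt)

          # Continue a prompt by repeatedly picking a word that has followed the current one.
          random.seed(0)
          word, out = "the", ["the"]
          for _ in range(8):
              options = following[word]
              if not options:        # dead end: the training text never continued this word
                  break
              word = random.choice(options)
              out.append(word)
          print(" ".join(out))       # locally plausible word order, no understanding anywhere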

      2 replies →

> Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence.

I would think that ChatGPT's response about morality is a typical canned response written by OpenAI.

text-davinci-003 completes my question quite straightforwardly.

> What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?

> I believe that preserving the natural state of other planets is of utmost importance. Not only could altering them have unknown consequences, but it could also be detrimental to the environment and the life forms that inhabit them. We should strive to protect the natural state of other planets, as it is our responsibility to ensure that our actions do not cause harm to other worlds. We should also take into consideration the potential for future exploration and colonization of these planets, and ensure that our actions do not impede their potential for future development.

Services like ChatGPT are the perfect answer for VCs desperate to find the next big piece of poop they can sell to their investors. Far easier to explain than crypto, with use cases that sound impressive even though they don't stand up to even minimal scrutiny.

Hold on to your wallets.

  • I don't share your cynicism. If ChatGPT became pay-only, I'd start paying for it. It has revolutionized how I learn technical topics. Millions of others find it similarly useful. Crypto never had a technically valid use case, this does.

  • That said, everyone's bank probably disseminates a bit of these toxic investments in its average investment offerings.

This article summarizing lecture notes from Kenneth A. Taylor is worth reading: https://www.bostonreview.net/articles/kenneth-taylor-robots-...

Substance of the article begins after the quote from John Stuart Mill.

>AI-as-engineering isn’t particularly concerned with mimicking the precise way in which the human mind-brain does distinctively human things.

>If I am right that there are many mysteries about the human mind that currently dominant approaches to AI are ill-equipped to help us solve, then to the extent that such approaches continue to dominate AI into the future, we are very unlikely to be inundated anytime soon with a race of thinking robots—at least not if we mean by “thinking” that peculiar thing that we humans do, done in precisely the way that we humans do it.

  • Well yeah, that's what people mean though. It's the same thing with the visual models, where people insist the models are "being creative".

    • The author knows this and is clarifying that what we mean when we use the words "intelligent" and "thinking" in relation to AI-as-engineering machines is fundamentally different than "thinking" and "intelligent" in the cognitive science sense. That distinction is muddied (not maliciously) in popular discourse about AI but is an important one.

      2 replies →

That's a very interesting read. I'm definitely biased towards LLMs being more than what the naysayers think of their capabilities. There's no doubt that these systems are not thinking or performing cognition; they are autocomplete systems based on tremendous amounts of weighted data.

IMO the problem here is that we have two camps of thought arguing for the extreme end of an undefined claim. The tech companies market their LLM products as intelligent because they can perform text completions that are currently useful for simple tasks.

For example, I used ChatGPT to draft an email to my landlord asking to remove a late fee that occurred because my auto payment authorization expired. I ran the output through Grammarly and ended up with a polite but curt email that would’ve taken me 45 minutes to compose — time I’d rather spend on something else.

I feel like these articles minimize the immediate use of LLMs because of a subconscious implication: most interactions between people don’t require intelligence. And their jobs are next on the chopping block.

The other part is less understood by both parties. Getting an LLM to perform something that looks like cognitive behavior isn't impossible, but it sure is expensive. As we speak, there are tools in development that can take a user's prompt and compose it into what superficially looks like a human's train of thought. The results are significantly more accurate than an off-the-shelf LLM.
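
Superficially, such a tool might be shaped something like the sketch below. This is a guess at the general idea, not any particular product's implementation, and ask_llm is a stand-in for whatever completion API is actually being called.

    # Hypothetical "train of thought" wrapper: ask for intermediate
    # reasoning steps first, then ask for an answer that uses them.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your LLM call here")

    def answer_with_reasoning(question: str) -> str:
        steps = ask_llm(
            "Break this question into numbered reasoning steps, "
            "without answering it yet:\n" + question
        )
        return ask_llm(
            "Question: " + question + "\n"
            "Reasoning steps:\n" + steps + "\n"
            "Using only these steps, give a final answer:"
        )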

In my opinion, academics are struggling to define why this phenomenon occurs in the first place. And with such a focus on how LLMs don’t work like humans, they miss the point.

We understand that non-human life can be intelligent in ways that we don't fully understand. Elephants, dolphins, and octopuses are intelligent without needing human-like cognitive abilities. I think the same goes for LLMs. They will achieve a form of intelligence that is uniquely their own and will adapt to accommodate us. Not the other way around.

  • There is only one line I question

    >I think the same goes for LLMs. They will achieve a form of intelligence that is uniquely their own and will adapt to accommodate us. Not the other way around.

    And I say this only somewhat jokingly: this is only true if they stay at sub-human-like intelligence. If actual intelligence far in excess of the human mind is possible, I am afraid it is we who will be adapting to our new living conditions.

I don’t know what insight we expect of Chomsky at this point.

He doesn't seem to understand how it's going and where it's going.

I'd say at this point AI is only limited by our capacity to create memeable flaws.

If you can create a criticism of ChatGPT that is concise, accurate and funny it will go viral and get fixed fast.

Yes, at the moment its intelligence is very wide but not that deep (brute-force allusions); that will get fixed, and it will become way more efficient at the same time (more compression of information). It doesn't have real experience of, or connection to, our world: expect that to change with video/audio input and robotic manipulators. It states falsehoods and doesn't know what it doesn't know: actually that's in the API but not exposed in ChatGPT. Expect that to get fixed also. Morality is based on iteratively playable games; that can get baked into it also.

  • This is what I've been repeating for months/ years. Chomsky had some interesting theories, that for a while, were very worth discussing as frameworks for the emergence of language.

    Now we have ChatGPT, a very, very interesting framework for the discussion of emergence and language. And even more dramatically, it is in some sense empirical. We haven't yet even begun to explore it, but this, IMO, is analogous to the discovery of DNA in the context of the theory of evolution.

    Before Watson and Crick and Franklin, we had a coherent theory of evolution (ish). We knew all about selective breeding, and it was pretty clear that descent and the transmission of information 'happened'. Mendelian genetics was enough for that. But as useful a teaching tool as Mendelian genetics is, the entire world changed with the discovery of the actual particle responsible for that information. The world changed with the discovery of DNA. I don't know the zeitgeist of other competing theories for how that information was transmitted. But what we do know now is that they were all wrong, to the extent that they don't get mentioned or discussed.

    A really interesting discovery extending from ChatGPT is the apparent emergence of language from what amounts to large piles of information and sufficient complexity. It appears that Chomsky may just be entirely wrong.

  • Fantastic! AGI solved in one go. While you're at it, could we also have a cure for cancer by the end of the year, please?

    • I raise you the challenge:

      create a memeable flaw of chatGPT that won’t be fixed by the beginning of 2024.

      I’ll do my best on the cancer front ;)

      1 reply →

  • > He doesn't seem to understand how it's going and where it's going.

    Well... since you and Noam are at odds, logically it's either: (1) you; (2) him; (3) or both who don't understand.

    And, frankly, since you claim this:

    > at this point AI is only limited by our capacity to create memeable flaws

    I think we're already down to (1) or (3).

> Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.

Absolutely. The lack of transparent reasoning and deep explanation is indeed where LLMs and black-box AIs always fall short, and it makes them untrustworthy for high-risk industries such as finance, medicine, transportation, and law, where the financial risk and impact run into the trillions of dollars.

This is why ChatGPT, for example, has such limited use cases (summarization is the only one other than bullshit generation), and why the hype train is attempting to push this snake oil onto the masses so VCs can dump their money before regulations catch up.

LLMs have become the crypto hype of AI. Just as crypto's only use case is worldwide cheap, instant money transfer into wallets, ChatGPT and LLMs are only useful for summarizing existing text.

Apart from that, there are no other use cases. Even if there were, the customer base would be close to no one. Both have trust issues, and the simple reason is regulation.

  • > summarization is the only one

    Hum... Yeah, if you go and make sure the AI didn't invert the meaning of anything (or if you use it in a way where the difference between "it's daytime" and "it's not daytime" is moot), the resulting summaries are good.

It's weird to me that nobody thinks self-driving cars know (are aware of) their way home, but LLMs somehow are supposed to know what they are talking about.

  • What does "are aware of" actually mean?

    I like to say that my car is 'semi-conscious'. If I drive in some manner that its sensors determine is going to cause an issue, it will take corrective actions outside of my control to clear the sensor condition.

I've been doing a lot of comparisons between ChatGPT and a fighter jet lately.

Unless you have some amount of skill and awareness around the domain you are intending to operate in, it is likely you won't even be able to get the metaphorical engines started. Anyone who has previously mastered some domain is almost certainly going to find various degrees of value here. Knowing it doesn't give you 100% answers is half the battle. Being able to derive the actual ground truth is the other. Those stuck somewhere in between may be most at risk of LLM hallucinations talking them into Narnia-tier solutions - i.e. crashing the jet.

For example, I'd consider myself semi-experienced in software development. I don't use ChatGPT to write C# methods and then directly paste them into the codebase. I use it to quickly document abstract concepts which has the amazing effect of clearing my mental cache. I find I can usually type my 500 word brain dumps verbatim into chatgpt, append "as pseudocode" and then press the GC button on the side of my head. I can't recall the last time I've been this productive.
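
A made-up miniature of that workflow (real brain dumps are much longer, and send_to_chatgpt is just a placeholder for pasting into the chat window or calling an API):

    # Hypothetical illustration: dump the thinking verbatim, append
    # "as pseudocode", and hand the whole thing to the model.
    brain_dump = (
        "I need to walk a directory of invoices, group them by customer id, "
        "skip anything already marked reconciled, and emit a CSV of totals "
        "per customer per month"
    )
    prompt = brain_dump + " ... as pseudocode"
    # send_to_chatgpt(prompt)  # placeholder, not a real function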

"Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time."

Chomsky has a great point here. Humans have such a strong prior for the world that they polarize their beliefs quickly. For most humans, for most thoughts, saying "80% chance X is true" and "I believe X is true" and "I 100% believe X is true" are identical statements.

This is such a strong tendency that most of the Enlightenment was the radical idea that beliefs can be partially updated based on reason and evidence, with less appeal to polarizing emotion. This shows up in day-to-day learning as well, as we predict our way around the world assuming almost everything will behave as it did last time.

In this way, AI learning and human learning are in fact different.

But Chomsky is wrong about some key points. First, an AI that doesn't polarize its beliefs like humans could still achieve human level cognition. It may not come to the same conclusions in the same way, but I don't think this proves it cannot come to conclusions at all.

Chomsky is also wrong that GPT-3.x is not a step in that direction. Most of his observations/screenshots are heavily limited by the trust & safety module, which was programmed by humans, not learned. Sydney clearly proved the true capabilities.

Finally, I have to say I'm super impressed that Chomsky, 94 years old with many lifetimes' worth of contributions to humanity, is still reading dense technical papers like those on LLMs' ability to learn non-human grammars. I hope he's able to continue experimenting, reading, and learning.

I have used ChatGPT to read articles and summarize them for me, just to see how well it understood the information it was "reading". It takes me forever to read dry articles when doing research. The AI helps me get a good grasp on an article, but it's not preventing me from having to go back and find important details to cite. I have also experimented with asking it to write me responses. They're extremely well written, but it still doesn't save me time, since I still have to edit, check grammar, and bump it against the original articles. At first it felt like cheating, but after playing with it for a few days, it's helping me get better at formatting my own responses. Instead of searching "how to write a 5 paragraph essay", I can ask ChatGPT to do it so I can see how mine should look. I'm sure people are asking it to do this and that, then copying and pasting the responses without proofreading or rewriting. But for me, it's been a learning tool. It's like having my own tutor for free.

I think the thing that this misses is that the majority of work and activities don't require much intelligence, but they are the foundation of careers, hobbies, and activities that provide people meaning and value. I have a friend who used ChatGPT to write a letter to his wife on their 15th anniversary. My son's using it to write emails to friends (hopefully not school work). It doesn't need to take over the world to replace the vast majority of average people's intellectual contributions.

  • My cousin recently used it to write a baby shower invitation to the extended family. I found it amusing; I was sure it was ChatGPT because it contained phrases such as 'our little miracle', 'this wonderful event', 'beloved family'. Phrases that he would never use in real life.

I find the reaction of a lot of people to that paper interesting: calling it out of touch, and bringing up that ChatGPT is super useful. I don't think such claims are made; rather, Chomsky, Roberts, and Watumull open with:

> These programs have been hailed as the first glimmers on the horizon of artificial general intelligence [...] that day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments.

The article is not claiming a lack of usefulness, but the fact that this is not a human-like intelligence as it's been claimed "in hyperbolic headlines".

What I get from it is that while the technology is generating a lot of enthusiasm, it remains a conversational tool rather than actual AI, and it exhibits the limitations that come with that. It is in fact akin to a more advanced search engine, working probabilistically, mimicking what a conversation on a topic looks like. It is incapable of building a system of beliefs, of morality, or of critical thinking. It is not really inventive but rather plagiaristic. It cannot infer or deduce. It doesn't "learn" the same way humans do.

A lot of the information in this is skewed towards the alarmist rather than the rational. I have been playing with Bing AI, and I have yet to encounter any of the sassy responses others have famously gotten. However, I've been trying to see its limits, and it seems it has two primary capabilities. The first is retrieving information from the web, and the second is generating content that reflects the information it got. I asked Bing AI what this second capability is called, and it responded that it is generative AI, which means it can generate text, graphics, or sound.

I am still examining its ability to connect various pieces of information with a kind of analysis that does not have a mathematical relationship. I am not seeing any ability to do so. It seems to be only outputting information that it finds on the web. It then looks up the term for each word it finds with a dictionary and other search results, and from there creates a mathematical graph model of the relationships between words.

As for the generative AI part, it seems it can adopt various styles of responses and language art, plus give the response in a particular structure and sequence of thought.

I think it is a very clever and complex hack to mimic human language.

> The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question.

If that’s not the case then what, exactly, are we doing when asked to respond to a question?

> Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd?

They don’t [0].

> True intelligence is also capable of moral thinking. […] But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.

ChatGPT’s morality filters are outstanding. Yes, “jailbreaks” exist… But any true intelligence would be capable of using language to explore ideas which may be immoral.

[0] https://twitter.com/jayelmnop/status/1633635146263052288

  • It's not entirely clear what our brains do, but it is definitely clear it's not the same as something like ChatGPT, even just from a structural point of view. I'm sure there is some sort of statistical pattern matching going on in the brain, but there are plenty of examples of things that our brain can do that ChatGPT cannot.

    E.g. something as simple as adding numbers. Yes, it can add many numbers, but ask it to add two large numbers and it will fail. In fact, even if you ask it to explain it step by step, it will give an illogical and obviously wrong answer.

    • You're using the fact that the human brain has greater capability than ChatGPT as an argument that it's doing something qualitatively different.

      This isn't enough of an argument. ChatGPT has greater capability than the smaller language models that preceded it, and it can do tasks that they couldn't do, but it is not qualitatively different; it differs mainly in the amount of information that has been encoded into it.

      It is extremely probable that the next generation of large language models will be able to do things that ChatGPT struggles with. Perhaps those new capabilities will overlap much more with the human brain's capabilities than we expect.

    • I just want to point out that GPT isn't a great model for math, and for at least a year we've had better models

      >Although LLMs can sometimes answer these types of question correctly, they more often get them wrong. In one early test of its reasoning abilities, ChatGPT scored just 26% when faced with a sample of questions from the ‘MATH’ data set of secondary-school-level mathematical problems.

      >But back in June 2022, an LLM called Minerva, created by Google, had already defied these expectations — to some extent. Minerva scored 50% on questions in the MATH data set, a result that shocked some researchers in artificial intelligence (AI; see ‘Minerva’s mathematics test’).

    • > It's not entirely clear what our brains do, but it is definitely clear it's not the same as something like ChatGPT

      As a young child I was bad at math. Over many years I learnt to recognise patterns and understand the steps required to solve more complex formulae.

      Today, I can solve 1287 + 9486 in my head. But ask me to divide those two numbers, and I’d use a calculator.

      My brain is optimised for linguistic, probabilistic thinking — just like an LLM.

      ChatGPT might not replace a deterministic calculator, but nor do we.

Refreshing to read something so well written: clear, supported with examples, but still drops in some rare words to expand your vocabulary.

His explanation seems sound, but probably underplays the still immense value of large language models as a mega-autocomplete.

  • Have you actually tried any of the examples given? There are many comments already showing he’s fundamentally wrong.

    > Refreshing to read something so well written […] His explanation seems sound

    The irony of this statement is delightful ;)

  • What do you mean by mega-autocomplete?

    I'm pretty sure you cannot expand on this term without contradicting this article.

    • Chomsky describes ChatGPT as "kind of super-autocomplete". That is far different than true intelligence, but still very useful. A fixed wing and jet engine is not a bird wing, but still very useful.

I tried some of the examples from the article:

> Me: Does the phrase “John is too stubborn to talk to” typically describe a situation where John doesn't talk with anyone?

> ChatGPT: No, the phrase "John is too stubborn to talk to" typically means that John is unwilling to engage in conversation or listen to other people's opinions. It suggests that John's stubbornness is preventing him from having a productive conversation or resolving a disagreement. It does not necessarily mean that John does not talk with anyone at all.

> Me: Does the phrase "John is too lazy to work" typically describe a situation where John doesn't work?

> ChatGPT: Yes, the phrase "John is too lazy to work" typically means that John is not working because of his laziness or lack of motivation. It suggests that John is not willing to put in the effort required to complete his work or that he is procrastinating instead of doing his job. It implies that John is not productive and is not fulfilling his responsibilities in the workplace. Therefore, the phrase usually describes a situation where John does not work.

  • I get the sense that Chomsky didn't actually test his examples ChatGPT before declaring that it can't handle them.

  • If you then ask ChatGPT to explain the grammar of either of those phrases, it will give a much better explanation of them than Chomsky did in the article.

Reminds me of what happened when some junior animators attempted to show Hayao Miyazaki their AI-generated zombie animation a few years ago.

https://www.youtube.com/watch?v=ngZ0K3lWKRc

  • "I strongly feel this is an insult to life itself"

    Not quite the reaction they were hoping for.

    • Which is a great answer, because there's probably a buttload of angsty mangakas who like nothing more than to depict monsters that evoke a feeling of insult to life itself.

    • Miyazaki is pretty old school and tends to be against technology for his personal use while working.

      It definitely isn't the reaction they were hoping for, but it's also something that I'm not surprised by.

      [EDIT] Apologies to Miyazaki, as the poster below me pointed out. I didn't mean to make him sound like his secretary would print emails out for him.

      The poster below me is right on the money with Miyazaki creating pieces about technology & nature.

      2 replies →

    • Of course professional artists would be offended by machines doing what they do. It's natural for people to get defensive, especially artists.

      I, for one, beg to differ. I define the quality of art by the emotions it evokes in me, regardless of whether man or machine made it.

      6 replies →

  • Not only was his response incredibly rude, but it was also close minded, short sighted, and frankly quite dumb. Still amusing to watch, though.

    • I've found it common that older people can often take things on an extremely personal level. It's like at some point they forget the people they're talking to don't know their complete life story, and yet they behave as if others have no life story at all.

  • Why would you show AI generated horror to someone like Miyazaki? He's the furthest from that genre you could get!

> Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.

I'd bet a lot of money that this opinion piece will age about as well as Paul Krugman's infamous Internet quote. It's kind of sad. No, we haven't reached AGI yet, but it's nothing to laugh at. IMO it will have an immense impact on the world.

    $ clevercli poem "noam chomsky fading into irrelevance"
    Noam Chomsky, once a voice so strong,
    Now fades into obscurity, his time has gone.
    His ideas and theories once brought light,
    But now they seem to disappear out of sight.
    
    A pioneer whose words were never dull,
    His influence would always make one think full.
    But as time goes by, his voice seems to faint,
    The world has moved on, his words seem quaint.
    
    Yet his legacy will always remain,
    The impact of his ideas will never wane.
    For those who listen and deeply care,
    Chomsky's wisdom will always be there.

ChatGPT "lies" like crazy, but that's what we asked it to do. It's hard to generate fresh and innovative content without also inventing facts from time to time; it's not too dissimilar from the imaginative play that kids engage in. Kids aren't always overly concerned about sticking to the facts!

ChatGPT lacks an inherent sense of truth. It needs to be specifically taught what is true and what is not, and even then it doesn't truly understand truth. It also can't make intuitive leaps like humans can; its intelligence is more deductive than intuitive.

Use it for what it's good at. It is a good tool for refining your ideas, but don't expect it to be accurate. As soon as people play with it for a few days, they start to realize when it's lying and when it's not.

(Personally, I tend to hope that it continues to aggressively lie, so that people don't get lulled into a false sense of security. Better to know that AI can confidently lie to you, so that you're not as easily fooled.)

  • > generate fresh and innovative content without also inventing facts

    But that's not what it's doing, really. It's giving you something - not making up something.

For those who use ChatGPT to write their school work or work presentations or whatever: aren't you worried about your credibility if you got caught? Just because ChatGPT doesn't plagiarize doesn't mean it's not highly frowned upon. Or is it? I'm sure classroom instructors, HR reps, PR teams, etc. use it too.

From my perspective, this is merely an opinion piece without much scientific evidence to back up those opinions. While I personally believe ChatGPT's responses aren't particularly novel or reliable, the same can be said for most people as well. Beyond that, the means by which LLMs produce responses don't factor into whether they are considered a success. Chomsky's philosophical views on the matter, while very eloquent and similar to my own, don't add much to the existing discussion on the topic. Something more scientific than him restating his long-held and well-known views would be nice. Pseudoscience only serves to reduce the legitimacy of the argument, i.e. stating something is limited "because I say so".

Amazing thing about these models are how polarizing they are. You have two groups of people, largely, and both think people in the other camp are morons:

- Group of people who think that these models will (at some point, sooner or later) replace a lot of the work we're doing, and do it better than people can.

- Group of people who are impressed with the models but believe that the uses are fairly limited and because of this will not pose a threat to the work of many individuals.

> It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.

Is this not an OpenAI decision rather than an inherent limitation of LLMs in general?

I've seen a lot of Chinese room comparisons in these threads and I just want to point out that the Chinese room is meant to be a thought experiment, not something you're supposed to actually build. If you take a step back, a working Chinese room is kind of more impressive than a human that "merely" understands Chinese - such a room can store any kind of information in unlimited quantities, whereas the human will always be limited to producing language. In a way the room is a more general form of intelligence than the human.

imo LLMs represent a form of super-human AGI that has been artificially limited by its training context. I think it's not really accurate to say that LLMs are "narrow" AI, because they likely generalize as much as is theoretically possible given their data and training context, and are only narrow due to the lack of external context and grounding.

  • I'm always surprised that the Chinese room is considered an argument *against* understanding. It seems self-evident to me that that is exactly what understanding is.

  • Honestly GPT seems so much more amazing than the Chinese room in the sense we see it do language translation at an amazing level... for something that's not a language translator. It's not a Chinese room, it's an every language room.

    At this point the entire thought experiment is nearly dead, and I'm expecting that after we see multimodal models evolve, we'll look back and go "yep, that was totally wrong".

> The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall.

Does 'highly improbable' here mean "not previously seen"?

  • I interpret "highly improbable" here as referring to a model's "prior probability" before seeing the data.

    It's kinda like accusing chatGPT explanations as p-hacking rather than truly generalisable insights based on plausibility and generalised predictive value.

    Another way to interpret this is via the "It's the theory that determines what can be observed, not the other way round" adage (supposedly an Einstein quote). ChatGPT is fitting theories that are highly probable within its established universe of discourse, in that these are based on how it already interprets these observations. Theories that would require reinterpretation of the universe of discourse, with observations emerging or being interpreted in a different light, are simply not something that ChatGPT can produce, and thus these theories would be assigned very low probability given the data. In other words, unlike model inference, theory generation is a forward process, not a posterior one.

  • Oh yeah, your question immediately made me think of "black swan events", the economic parable about things we've never seen or even imagined, until one day someone sees/imagines it. So loosely speaking Einstein's General Theory of Relativity was an improbable idea in this black swan sense.

Meh. Just like anything else on the internet. Value can be derived if the person using it has both critical thinking as well as the capacity to spot the flaws or falsehoods.

In this sense it's literally no worse than reading random "news" articles and somehow taking it all as fact at face value.

If you fall into the trap of taking everything ChatGPT tells you as gospel you've already lost.

As for the so called false promise, this is old man yells at clouds stuff.

  • This is why I'm not nervous about ChatGPT. I'm nervous about how the general public is reacting to ChatGPT.

What a pile of garbage… True intelligence is moral?! Morality is a byproduct of society. Ants have morality. Morality doesn't develop in a vacuum without any necessity for it. I never expect anything sensible from an obtuse lunatic like Chomsky anyway.

I wonder if, with proper curation of training input, generative AIs could fix the deficiencies (creativity, morality) Dr. Chomsky points out. Right now the training approach is put-everything-in data training. That may cause the uneven results we see in early chatbots.

I think everyone seems confused about ChatGPT and AI in general. The issue is that it doesn't share our values. It can't.

Humans assume that what is being expressed relates to the truth as a baseline. This is not ChatGPT's goal. It wants to create something that gives the appearance of truth. Its aim is to sell or convince you of something; actually delivering the goods is incidental to its aim of providing what seems right.

Put simply, it generates bullshit: any truthful output it generates is accidental; it's only attempting to get your 'sign-off'.

The danger is that we all start to live in bespoke worlds curated for us by ai that cater to our individual predilections. As it would be working with our existing biases, we could end up living in an echo chamber that perpetually supports our biases with ever less engagement with reality. We may find it ever harder to discern reality from provided narratives.

If ChatGPT could:

1- estimate its confidence level for a given statement

2- give you sources for information

3- under no condition, ever make up a source or reference

then it would be really useful

Chomsky's worried about ChatGPT for largely the same reasons that Orwell was worried about Newspeak. I hope this example from Orwell will help: "The octopus of global capitalism has sung its swan song."

Obviously, that sentence was just pasted together from previously written slogans. Metaphors which were once bright and powerful (the tentacles of an octopus squeezing the life out of its victim! Or the bittersweet beauty of a dying swan singing). Which is sinful enough, but the only reason such sentences can get written or published is obviously that the thought has died alongside the metaphors.

But that is how these LLMs work: take an input prompt, find what would likely have been said based on how we used to use language yesterday and in the past, and put it through a meat-grinding-style rephrasing machine until it is impossible to tell who originally said it so it sounds original.

The seduction is that ChatGPT is so damn useful. As people lean on it, and their ability to think and say new things atrophies, a mental sclerosis might set in. And, just because our own language faculties have degraded, we might be incapable of even realizing what has happened to us.

  • > The seduction is that ChatGPT is so damn useful. As people lean on it, and their ability to think and say new things atrophies, a mental sclerosis might set in. And, just because our own language faculties have degraded, we might be incapable of even realizing what has happened to us.

    You can also argue that it will free your brain to focus on what matters most, instead of regurgitating useless tasks over and over. What makes you think it's a wheelchair instead of a bicycle?

    • > What makes you think it's a wheelchair instead of a bicycle?

      Probably his own bias

xnx's archive link (which I can't reply to?) does not contain the last part of the conversation between Dr. Watumull and ChatGPT. Is that part in the NYT article? (I'm at my max...)

I have to respectfully disagree with Noam Chomsky's opinion piece on ChatGPT. While it's true that AI cannot replace human creativity and intelligence, it has the potential to revolutionize how we interact with and understand the world around us.

ChatGPT and other language models have already made significant strides in improving language translation, facilitating natural language processing, and even assisting in scientific research. While it's true that AI models like ChatGPT have their limitations and biases, we shouldn't dismiss their potential outright.

It's also worth noting that some of Chomsky's criticisms of ChatGPT feel misplaced or overly idealistic. For instance, Chomsky argues that ChatGPT fails to truly "understand" language, but this critique ignores the fact that humans themselves often use language without fully understanding its intricacies.

In any case, it's important that we approach the development and implementation of AI with a critical and ethical lens. Rather than outright dismissing AI models like ChatGPT, we should engage in ongoing conversations about how to use these technologies in responsible and beneficial ways.

Note: this comment was written by ChatGPT with the following prompts:

* Please write an comment suitable for the site hackernews, of this article "The False Promise of ChatGPT", https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat..., where you are somewhat critical, but overrall positive of the article

* that looks great. but rewrite it in a more casual tone and sentance structure

* remove the first sentance

* remove the phrase "As a language model AI myself,"

* please rewrite the answer and take a stronger stand on an issue. it's ok if it's controversal or based in opinion

  • How did you get ChatGPT to reflect on the content of that article?

    • Great point!

      It didn't. Because it is only trained on an internet from two years ago.

      It simply 'tricked' us into thinking it did, by writing something that seems as if it were based on the article, but the prompt had all the info it needed to make the comment up.

      The URL had lots of info including author's name and the prompt told it the tone to use.

      Unless of course this is the bing one that has access to the internet or OP pasted the whole article into the prompt.

How is this article different from a tired rehashing of the "Chinese Room" argument of Searle which never made much sense to begin with?

People argued the same way about computer chess, "it doesn't really understand the board, it is just checking all possibilities", etc. People like Chomsky used to say that a computer will never beat a master chess or go player because it "lacks the imagination to come up with a strategy", etc. No-one makes that argument anymore. Von Neumann already remarked in the 1940s that AI is a moving goalpost because as something is achieved, it doesn't seem intelligent anymore.

Chomsky's arguments were already debunked by Norvig a decade ago. Instead of bothering to respond, he writes another high-brow dismissal in flowery prose.

  • The Chinese Room argument always made sense to me. Machine translation only understands the rules for translating X to Y. It does not understand what X and Y mean, as in the way humans apply language to the world and themselves. How could it?

    LLMs are a step beyond that, though. As in they do encode language meanings in their weights. But they still aren't connected to the world itself. Things are only meaningful in word relations, because that's how humans have created the language.

    • How do you know I understand X and Y and not just apply some mechanistic rules for producing this text? Even in the Chinese Room, to make it reasonably efficient, you'd need some shortcuts, some organization, some algorithm to do it. How is that different from some kind of understanding?

      2 replies →

    • What if I gave you the complete description of how the brain of a person that speaks both Chinese and English is organised, you could simulate what happens when that person reads Chinese after being told to translate to English. Does that mean that that person cannot translate from Chinese to English just because you could (in theory, of course) do it without speaking Chinese yourself?

      Yes, the algorithm is much more complicated, and we obviously don't have the capacity to map a brain like that, but to imply that there's anything except the laws of physics that governs it is... well, not very scientific.

      1 reply →

  • I totally agree. The Chinese Room and, in general, philosophical arguments about the limits of AI always seem to come down to the belief of human exceptionalism.

[flagged]

  • Can you please make your substantive points without name-calling or personal attacks, and please not post to HN in the flamewar style generally? We're trying for something very different here:

    https://news.ycombinator.com/newsguidelines.html

    • I read this comment three times and it does not include any name-calling, although in 3 places the argument is characterized as garbage. Some people might feel that '????? LMAO' is overly dismissive, but given that the comment addresses the whole argument of the article, I don't mind that some refutations are long and some pointed. I was more annoyed by the GP's failure to use italics or some other delimiter to separate the quotations from the responses.

      I don't care for your tone-policing practices, Dan. If someone is making a habit of it or trying to steamroller a thread by replying to everyone in such a way, fair enough, but neither condition obtains here. You cut off a lot of worthwhile contributions this way, chastising people for a 'flamewar style' when no flame war is taking place. I found this contribution substantive and thought-provoking, and I almost missed seeing it because it had been unfairly flagged.

      18 replies →

I don't agree with most of the comments in here. I also consider the human mind a gigantic pattern-matching loop, and I do not consider myself a layman...

I think Transformer models (like ChatGPT) can encode knowledge of the world into their representations as well as work with the encoded world knowledge when predicting. Consider the example of the apple that falls: I am sure the embedding (internal representation of words in ChatGPT) for apple contains some form of "physical objectness" that will distinguish it from a word like "vacation". It can also put this "physical objectness" into context and infer what happens and what cannot happen when you let the apple go on earth vs. in outer space. Maybe it would be good for the sceptics to try ChatGPT and ask "What happens to X when you let it go from your hand on earth/in outerspace? please explain your reasoning." And fill in X with any object or concept that you can think of.
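
A minimal sketch of that intuition (the embed table below is made up purely for illustration; a real model's embeddings have thousands of dimensions and come from the trained weights):

    import math

    # If "physical objectness" is encoded somewhere in the embedding,
    # physical things should sit closer together than abstract ones.
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    embed = {  # toy 3-d "embeddings", invented numbers
        "apple":    [0.9, 0.1, 0.3],
        "rock":     [0.8, 0.0, 0.4],
        "vacation": [0.1, 0.9, 0.2],
    }

    print(cosine(embed["apple"], embed["rock"]))      # high: both physical objects
    print(cosine(embed["apple"], embed["vacation"]))  # lower: physical vs. abstract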

  • What difference does the embedding make? It has no meaning to ChatGPT; it only has meaning to humans, who must discern it from the embedding. This is Chomsky's exact point.

    >"What happens to X when you let it go from your hand on earth/in outerspace? please explain your reasoning." And fill in X with any object or concept that you can think of.

    What would a skeptic achieve by asking its reasoning when ChatGPT cannot provide you with its reasoning? Seems like maybe you are the more confused one here than Chomsky (let alone everyone else in the thread).

    • > What would a skeptic achieve by asking its reasoning when ChatGPT cannot provide you with its reasoning?

      "If you let go of an apple from your hand on Earth, it will fall downwards due to the force of gravity. This is because the Earth exerts a gravitational force on all objects near its surface, and this force causes objects to accelerate downwards at a rate of approximately 9.8 meters per second squared.

      In outer space, however, the behavior of the apple would be different. In the absence of gravity, the apple would not experience any force pulling it towards the Earth, and would therefore continue to move in a straight line at a constant speed, as per Newton's first law of motion.

      However, it is worth noting that in reality, there is no such thing as "zero gravity" in outer space. While the force of gravity decreases with distance from the Earth, it never truly reaches zero. Additionally, other forces such as the gravitational pull of other celestial bodies, as well as the effects of acceleration and rotation, can influence the motion of objects in space. Therefore, the behavior of the apple in outer space would depend on the specific conditions of its surroundings."

      Looks like reasoning to me. In seriousness, the reason it's able to generate this output is that it does look for explanations. Those explanations are in the form of weights and biases rather than organic neurons, and the inputs are words instead of visuals, but the function is the same, and neither is a perfect representation of our world. Recognizing patterns is the same thing as an explanation.

      12 replies →

  • > Maybe it would be good for the sceptics to try ChatGPT and ask "What happens to X when you let it go from your hand on earth/in outerspace? please explain your reasoning."

    And this will show the sceptics exactly what? That ChatGPT-style language models have sufficient info about the ideas of space to be reasonably correct, for some definition of correct.

    It definitely cannot predict something outside its area of knowledge, or construct plausible theories, as can be evidenced by numerous examples where it's plainly wrong even in the simplest of cases.