Comment by __MatrixMan__

2 years ago

It seems pretty obvious to me, after using ChatGPT for nearly everything over the last few weeks, that it does not have the kind of intelligence they're claiming it lacks.

It's just recycling things that other humans have said. Which is marvelous because it would typically take me a very long time to build a map between the past contributions of those humans and the work that's presently in front of me. It's like I'm temporarily everybody.

By raising the alarm re: it's not what you think it is, I fear they're actually fueling the fire re: people thinking that that's what it is.

It's like if I went on record saying I didn't steal something which hasn't gone missing. Now everybody's thinking about its non-theft and not something more useful like how to best make use of it.

> It's just recycling things that other humans have said.

This seems false, unless you mean that everything anyone says is just words others have said in a different order.

For example, I asked ChatGPT: "Write a fictional story of if Peter Parker joined the 2016 OKC Thunder." One of my favorite parts is: "...determined to balance his superhero duties with his love of basketball. He even designed a special suit that allowed him to play without revealing his identity."

This isn't recycling... at least not in the way I think a lot of people think of recycling.

  • Agreed. GPT isn't recycling, regurgitating, or anything like that. It's more like remixing, which is pretty fascinating. It's like having an opinionated DJ that plays whatever you ask-ish. But, if you ask for something too edgy it just plays beat-heavy Beethoven with a Run DMC voice over, on repeat.

    • I do think remixing is a better word, but the point is that it's unlikely to come up with any genuinely new insights. It's just faster access to existing insights, sometimes presented intact.

      3 replies →

    • But when it comes to actual knowledge and not stories, remixing is not a desirable feature. I discussed sets and the colors of fruit with ChatGPT. Every response it gave had incorrect information in it.

  • > He even designed a special suit that allowed him to play without revealing his identity

    Which identity, ChatGPT?

    Is he playing as Peter Parker and trying to hide his superhero identity (which obviously gives him unfair advantages due to spider strength/speed/reflexes/etc.) or playing as Spider-Man (which presumably would pack in the fans in spite of the obvious unfair advantages) and trying to hide his identity as Peter Parker?

    • Here's some context that makes it clearer:

      "However, Peter's superhero identity as Spider-Man began to interfere with his basketball career. He often had to leave games early to attend to emergencies, and his secret identity was a constant source of anxiety.

      Despite these challenges, Peter continued to play for the Thunder, determined to balance his superhero duties with his love of basketball. He even designed a special suit that allowed him to play without revealing his identity."

  • "Regurgitating" would seem to be a better description.

    In fact, a near-exact description of what these systems do, per the dictionary definition of the term:

       (intransitive verb) - To repeat (facts or other learned items) from memory with little reflection.

I think people miss that while ChatGPT isn’t the destination, it’s an incredible way station that shows meaningful progress. Its deficiencies can be built around with other techniques, much like our mind isn’t a single model but an ensemble of various models and processes in a feedback and control loop. By not seeing that, people erroneously discount both its amazing utility within its limits and the astounding breakthrough it represents in evolving a roadmap to the destination. These last two years have proven to me beyond a doubt that we are very close to the AI people are disappointed ChatGPT isn’t, while before that I had entirely written off AI as a pursuit.

  • > These last two years have proven to me beyond a doubt that we are very close to the AI people are disappointed ChatGPT isn’t, while before that I had entirely written off AI as a pursuit.

    The problem with this is we don't know exactly where on the sigmoid growth curve we are. Every developer is aware of the phrase "the last 10% of a task takes 90% of the effort" - we're at a point that is promising, but who knows how far away we really are in terms of years and effort. Are we going to run into a chat uncanny valley?

  • I honestly don't think people (at least, the sorts of people on HN) are generally missing this point at all. I think a lot of people are calling out the absurd claims that are being made about it, though, as they should be.

    • What’s hard for tech people to understand is that people who aren’t geeks might be seeing a lot of downsides to the technology without the inherent nerd fetish and excitement about digital brains shared by many HN readers and geeks in general.

      I find it funny that on Hacker News I frequently see dismissive, anti-human comments like “oh brains are nothing special or magic, they’re just atoms” and “we’re just meat bags”, but the same people then fail to understand why regular people might be unimpressed by something which acts somewhat like a synthetic brain.

      It is what it is, I guess… if people aren’t immediately impressed with the results, and it’s not really what the majority of people want, who cares?

      1 reply →

Our marketing team, using it to write copy, tweets, etc., has clearly demonstrated that it's not just recycling content.

Somehow it can generate new forms of content. One of our big campaigns in the last week used slightly edited ChatGPT copy; the biggest surprise was that it could write JOKES about our company that were FUNNY AND MADE SENSE. That alone has shocked leadership into looking a lot more deeply into AI.

People are truly underestimating the emergent power of these neural networks.

  • Do you believe these to be adaptations of jokes/puns that have been used elsewhere or truly novel jokes? Understandably this is difficult to say one way or the other without de-anonymizing yourself.

    • Even if adapted, given how specific it was to our field and how relevant the punchline was to our industry, I would say it's still giving real humans a run for their money.

  • Your spam team used a spam machine to generate spam. But it’s not even SPAM, which has some flavor and nutrition. Just filler to annoy people and trick them into paying you.

    Your profile says “Stuck in hell references to my job working with ----”

I was going to say the same thing: if you've interacted with it in some depth, you know how human it may seem in one sentence and then how, in the next, it completely and utterly proves itself to be a machine. Yet some people (some examples are well known) really project a human-like mind onto the thing (as posted here before, this is also insightful [0]).

[0] https://nymag.com/intelligencer/article/ai-artificial-intell...

  • There are people who literally had pet rocks.

    Humans can project feelings onto cars never mind something that can communicate with us!

    Just look at Replika.

    I'm not surprised people are projecting sentience onto these things. I am worried about the fall out though.

    • Aren't we inherently projecting feelings onto anything that isn't inside our own direct experience? There is no way to confirm that anything allegedly sentient outside of your own "feelings" is not an automaton, including other humans.

      3 replies →

It's obvious to you, and it's obvious to me. But there are a lot of people for whom it is, in fact, obvious that ChatGPT is intelligent, and likely to be the first wave of our new robot overlords.

Yes, there will be some subset of those people who read articles like this and leap to "it's a conspiracy! they're trying to hide how their AI is going to take over the world!!!!" But there will be many, many more—particularly given that this is in the NY Times—who have only heard some of the wild stories about ChatGPT, but read this article, see that it's by Noam Chomsky, who's still a fairly respected figure by many, and take reassurance from his decent-if-imperfect (by our standards, anyway) explanation of what's really going on here.

> temporarily everybody

Exactly! It is the person from Idiocracy with an IQ of exactly 100. It only knows what the absolute average person knows. For example, it knows almost nothing about healthcare in other countries (outside the US). Just watch me get lambasted on reddit after using info from ChatGPT: https://old.reddit.com/r/ShitAmericansSay/comments/11f5tbt/a...

On the other hand, in a subject area where you know very little, its 100 IQ seems like genius! It fills in a lot of gaps. People comparing it to AGI are perfectionists, dramatic, or missing the point. It's not supposed to be smarter than us. And so what if it can't be? It helps me write country songs about any news article.

  • I've been pretty amazed with its ability to write python, and pretty disappointed with its ability to write nix derivations. The average person can't do both, so I'd say it "knows" much more than any single idealized person.

    I figure the discrepancy has to do with one of these languages having an absolutely massive amount of chatter about it, and the other being relatively obscure: It's smart about things that lots of people are smart about, and dumb about things that only a few people are smart about. Well not just "smart" really, but "smart-enough and willing to publish about it".

    I think we're going to need fewer people with common knowledge and more people with specialized knowledge, and we're going to have to figure out how to optimize the specialist's outputs so that the widest audience benefits. I love how not-a-zero-sum-game it's going to be.

  • Not average. But mode. Whatever connection is most commonly made, or at least a random dice roll from the top 3 connections (roughly the kind of sampling sketched below).

    It’s like talking to someone who says “but everyone else says.”

    That’ll change when connected to a source of truth, logic and validity.
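
    To make "a random dice roll from the top 3 connections" concrete, here is a minimal sketch of top-k sampling over a toy next-token distribution. The vocabulary, the probabilities, and the sample_top_k helper are invented for illustration; real models sample over tens of thousands of tokens and add knobs like temperature.

      import random

      # Hypothetical next-token distribution (numbers invented for illustration).
      next_token_probs = {
          "banana": 0.40,  # the mode: the single most common continuation
          "apple":  0.25,
          "orange": 0.15,
          "durian": 0.12,
          "kiwi":   0.08,
      }

      def sample_top_k(probs, k=3):
          """Keep the k most likely tokens, renormalize, then roll the dice."""
          top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
          total = sum(p for _, p in top)
          tokens = [t for t, _ in top]
          weights = [p / total for _, p in top]
          return random.choices(tokens, weights=weights)[0]

      print(sample_top_k(next_token_probs))  # usually "banana", sometimes "apple" or "orange"

    Greedy decoding would return the mode every time; sampling from the top few is what keeps the output from being the exact same sentence on every run.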