Comment by FrustratedMonky

2 years ago

Engineers are always building things that are incredible, then dismissing them as ordinary once the problem is solved: “oh that’s so normal, it was just a little math, a little tweak, no big deal”.

AI has gone through a lot of stages of “only a human can do X” -> “AI does X” -> “oh, that’s just some engineering, that’s not really human”, or “it’s no longer in the category of mystical things we can’t explain that a human can do”.

LLMs are just the latest iteration of “wow, it can do this amazing human-only thing X (write a paper indistinguishable from a human’s)” -> “doh, it’s just some engineering (it’s just a fancy autocomplete)”.

Just because AI is a bunch of linear algebra and statistics does not mean the brain isn’t doing something similar. You don’t like the terminology, but how is reinforcement “learning” not exactly the same as reading books to a toddler, pointing at a picture, and having them repeat what it is?

Start digging into the human with the same engineering view, and suddenly it also just becomes a bunch of parts. Where is the human in the human once all the human parts are explained the way an engineer would explain them? What would be left? The human is computation also, unless you believe in souls or otherworldly mysticism. So why not think that eventually AI, as computation, can be equal to a human?

GitHub Copilot writing bad code isn't a knock on AI; it's real, and a lot of humans write bad code too.

The problem with LLMs, in my view, is that they're capped at what already exists.

Use them for "creative" things and all they can do is parrot things back in the statistically average way, or maybe attempt to echo an existing style.

Copilot cannot use something because it prefers it, or because it thinks it's better than what's common. It can only repeat what is currently popular (which will likely become self-reinforcing over time).

When you write prose or code you develop preferences and opinions. "Everyone does it this way, but I think X is important."

You can take your learning and create a new language or framework based on your experiences and opinions working in another.

You develop your own writing style.

LLMs cut out this chance to develop.

---

Images, prose, (maybe) code are not the result of computation.

If two different people compute the same thing, they get the same answer. When I ask different people to write the same thing, I get wildly different answers.

Sure ChatGPT may give different answers, but they will always be in the ChatGPT style (or parroting the style of an existing someone).

"ChatGPT will get started and I'll edit my voice into what it generated" is not how writing works.

It's difficult for me to see how a world where people are communicating back and forth in the most statistically likely manner is good.

  • All artists of every stripe have studied other art, have practiced what came before, and have influences. What do you think they do in art school? They copy what came before. The old masters had understudies who learned a style. Isn’t it an old saying in art that ‘there is nothing original’? Everything was based on something.

    Humans are also regurgitating what they ‘inputted’ into their brains. For programming, isn’t it an old joke that everyone just copy/pastes from Stack Overflow?

    Why, if an AI does it (copy/paste), is it somehow a lesser accomplishment than when a human does it?

    • > Why, if an AI does it (copy/paste), is it somehow a lesser accomplishment than when a human does it?

      Because the kind of 'art' the AI will create will end up in a Canva template; it will be clip art for the modern PowerPoint or Facebook ad. Because corporations like Canva are the only ones that will pay the fees to use these tools at scale. And all they produce is marketing detritus, which is the opposite of art.

      Instead of the "Corporate Memphis" art style that's been run into the ground by every big tech company, AI will produce similarly bland, corporate-approved graphics that we'll continue to roll our eyes at.

    • It's a fair point.

      My concern is with the limitations in the creation of new styles.

      I guess my view is that you send 100 people to art school and you get 100 different styles out of it (ok maybe 80).

      With AI you've got a handful of dominant models instead of a unique model for each person based on life experience.

      Apprentices learn and develop into masters. If that work is all moved to an LLM, where do the new masters come from?

      ---

      I take your point about the technology. I have a hard time saying it's not impressive or similar to how humans learn.

      My concern is more with what widespread adoption will mean.

  • The style can be influenced, however. It isn't unreasonable to suggest an AI that fine-tunes the style of the LLM output to meet whatever metric you're after.

    As far as creativity goes, human creativity is also a product of life experiences. Artistic styles are always influenced by others, etc.

I generally agree that we quickly adjust to new tech and forget how impactful it is.

But I can’t fully get on board with this:

> but how is reinforcement “learning” not exactly the same as reading books to a toddler, pointing at a picture, and having them repeat what it is? Start digging into the human with the same engineering view, and suddenly it also just becomes a bunch of parts. Where is the human in the human once all the human parts are explained the way an engineer would explain them?

The parent teaching a toddler bears some vague resemblance to machine learning, but the underlying results of that learning (and the process of learning itself) could not be any more different.

More problematic than this, while you may be correct that we will eventually be able to explain human biology with the precision of an engineer, these recent AI advances have not made meaningful progress towards that goal, and such an achievement is arguably many decades away.

It seems you are concluding that because we might eventually explain human biology, we can draw conclusions now about AI as if such an explanation had already happened.

This seems deeply problematic.

AI is “real” in the sense that we are making good progress on advancing the capabilities of AI software. This does not imply we’ve meaningfully closed the gap with human intelligence.

  • I think the point is that we have been “meaningfully closing” the gap rapidly, and at this point it is only a matter of time, the end can be seen, even if it is currently not completely written out in equations.

    It does seem like on HN the audience is heavily weighted towards software developers who are not biologists, and who often cannot see the forest for the trees. They know enough about AI programming to dismiss the hype, and not enough about biology, and miss that this is pretty amazing.

    The understanding of the human ‘parts’ is being chipped away just as quickly as we have had breakthroughs in AI. These fields are starting to converge and inform each other. I’m saying this is happening fast enough that the end game is in sight: humans are just made of parts, an engineering problem that will be solved.

    Free will and consciousness are overrated; we think of ourselves as having some mystically exceptional consciousness, and that clouds the credit we give advancements in AI. ‘AI will never be able to equal a human’, when humans just want lunch and our ‘free will’ is based on how much sleep we got. DNA is a program; it builds the brain, which is just responding to inputs. Read some Robert Sapolsky: human reactions are just hormones and chemicals responding to inputs. We will eventually have an AI that mimics a human, because humans aren’t that special. Even if the function of every single molecule in the body, or every equation in AI, isn’t fully mapped out yet, enough is to stop claiming ‘specialness’.

    • > I think the point is that we have been “meaningfully closing” the gap rapidly

      In your opinion, how wide is this gap? To claim that it is closing at a meaningful pace brings the implication that we understand the width. Has anyone made a credible claim that we actually understand the width of the gap?

      > The understanding of the human ‘parts’ is being chipped away just as quickly as we have had breakthroughs in AI.

      This is a thinking trap. Without an understanding or definition of the breadth of the problem space, both fields could be making perfectly equivalent progress and it would still imply nothing regarding the width of the gap or the progress made closing it.

      > These fields are starting to converge and inform each other.

      Collaboration does not imply anything more than the existence of cooperation across fields. Do you have specific examples where the science itself is converging?

      My understanding is that our ability to comprehend neural processes is still so limited that researchers focus on the brains of worms (e.g. the roundworm C. elegans, with its 302 neurons), and we still don’t understand how they work.

      > and at this point it is only a matter of time, the end can be seen

      Who is claiming we have any notion of being close enough to see the end? Most experts on the cutting edge cite the enormous distance yet to be covered.

      I’m not claiming the progress made isn’t meaningful by itself. I’m struggling with your claim that we have any idea how much further we have to go.

      Landing rovers on Mars is a huge achievement, but compared to the array of advancements required to colonize space, it seems like just a small step forward in comparison.


  • > the underlying results of that learning (and the process of learning itself) could not be any more different

    To drill down a bit, I think the difference is that the child is trying to build a model - their own model - of the world, and of how symbols describe or relate to it. Eventually they start to plan their own way through life using that model. Even though we use the term "model", that's not at all what a neural-net/LLM type "AI" is doing. It's just adjusting weights to maximize correlation between outputs and scores. Any internal model is vague at best, and planning (the also-incomplete core of "classical" AI before the winter) is totally absent. That's a huge difference.
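
    For concreteness, here's a toy sketch (purely hypothetical numbers and names, nothing like a real LLM) of what "adjusting weights to maximize correlation between outputs and scores" amounts to: nudge some numbers until an error score shrinks, with no world model and no planning anywhere in the loop.

        # Toy illustration, made up for this comment, not any real system:
        # "learning" = repeatedly nudging weights to shrink an error score.
        import random

        w, b = random.random(), random.random()          # the "weights"
        data = [(x, 2.0 * x + 1.0) for x in range(10)]   # hidden target: y = 2x + 1

        lr = 0.01
        for _ in range(1000):
            for x, y in data:
                err = (w * x + b) - y    # how far the output is from the target
                w -= lr * err * x        # nudge the weights to reduce the error...
                b -= lr * err            # ...and that is the entire "learning"

        print(round(w, 2), round(b, 2))  # ends up near 2.0 and 1.0

    It fits the outputs, and that is all it does; the child building and then planning with their own model of the world is doing something categorically different.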

    ChatGPT is really not much more than ELIZA (1966) on fancy hardware, and it's worth noting that ELIZA was specifically written to illustrate the superficiality of (some) conversation. Its best-known DOCTOR script was intentionally a parody of Rogerian therapy. Plus ça change, plus c'est la même chose.

    • Why do we think that inside the 'weights' there is not a model? Where in the brain can you point and say 'there is the model'? The wiggly mass of neurons creates models and symbols, so why do we assume that inside large neural nets the same thing isn't happening? When I see pictures of both (brain scan versus weights), they look pretty similar. Sorry, I don't have the latest citation, but I was under the assumption that the biggest breakthroughs in AI were around symbolic logic.


First off, I'm not sure why this is the most upvoted comment. The OP explicitly praises AI; he just smells the same grifters gathering around like they did with crypto, and he's absolutely right, it is the exact same folks. He isn't claiming the mind is metaphysical or whatever.

On your claim that the mind is either metaphysical OR it is an NN: you have to understand that this false dichotomy is quite the stretch itself, as if there were no other possibilities, as if it couldn't be a range or something else entirely. One of the critiques the "old guard" has of NNs is the lack of symbolic intelligence. Claiming you don't need it, and that fitting alone is enough, is suspect because even with OpenAI-tier training only the grammar is there; some of the semantic understanding is lacking. Appealing to the god of the gaps is a fallacy for a reason, although it may in fact turn out to be true that just more training is all that is needed. EDIT: Anyway, the point is that assuming symbolic reasoning is a part of intelligence (hell, it's how we discuss things) doesn't require mysticism; it's just an aspect that NNs currently don't have, or very charitably do not appear to have quite yet.

Regardless, there isn't really evidence that "what brains do is what NNs do" or vice versa. The argument, as many times as it has been pushed, has been driven primarily by analogy. But just because a painting looks like an apple doesn't mean you can eat the canvas. Similarities might betray some underlying relationship (the artist who made the painting took reference from an actual apple you can eat), but assuming an equivalence without evidence is just strange behavior, and I'm not sure for what purpose.

  • The main post was about burnout and hype. And I was just trying to point out that things really are advancing fast and we are producing amazing things, despite the hype.

    Like maybe the hype is not misplaced. There are grifters, and there are companies with products that are basically an "IF" statement, and the hype is pretty nuts.

    On the other hand, some of this stuff is amazing. Don't let the hype and smarmy salespeople take away from the amazing advancements that are happening. Just a few years ago some of this would have been considered impossible, only possible in the province of the 'mystery of the human mind'. And yet here we are, and what it is to be human is being chipped away more every month, and yeah, a lot of people want to profit.

    Or, more to my main thought, a lot of heads-down engineers who are cranking out solutions do lose sight of how far they are moving. So don't get discouraged by the hype; marketing is in every industry, so why not stay in this cool one that is doing all the amazing things.

> The human is computation also, unless you believe in souls or otherworldly mysticism.

I think it is incredibly sad that a person can be reduced to believing humans don't have souls. Do something different with yourself so you can discover the miracle of life. If you don't believe there is anything more to people and to the world than mechanical processes, I would challenge you to do a powerful spiritual activity.

  • By spiritual practice, do you mean something like studying the Skandhas, or the Five Aggregates? Or do you mean to open myself to the love of our Lord and Savior? It does make a difference in how you approach the world whether your spiritual practice encourages insight, or whether you are blinded by faith in a spiritual entity that is directing you.

    • A powerful spiritual practice like challenging your own limits and fears to the maximum, or meditation and fasting, or immersing yourself in a completely different environment from what you are familiar with until you know it truly. Or if these things sound too abstract, to take a strong dose of psychedelics alone or with others.

      Religious texts are something that can be interesting after sensing some spirituality, but probably not before. I don't think anybody who is not spiritual can become so by reading religious texts.


  • What is a powerful spiritual activity you’d recommend?

    • I'm pasting my response from above:

      A powerful spiritual practice like challenging your own limits and fears to the maximum, or meditation and fasting, or immersing yourself in a completely different environment from what you are familiar with until you know it truly. Or if these things sound too abstract, to take a strong dose of psychedelics alone or with others.