
Comment by tomp

14 days ago

> was built to be addressed like a person for our convenience, and because that's how the tech seems to work, and because that's what makes it compelling to use.

So were mannequins in clothing stores.

But that doesn't give them rights or moral consequences (except as human property that can be damaged / destroyed).

No matter what, this discussion leads to the same black box: "What is it that differentiates magical human meat-brain computation from cold, hard, dead silicon-brain computation?"

And the answer is nobody knows, and nobody knows if there even is a difference. As far as we know, compute is substrate independent (although efficiency is all over the map).

  • This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no they are not like computers at all.

    There have been charlatans repeating this idea of a “computational interpretation” of biological processes since at least the '60s, and it needs to be known that it was bunk then and continues to be bunk.

    Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.

    • >Biological brains exist, we study them, and no they are not like computers at all.

      You are confusing the way computation is done (neuroscience) with whether or not computation is being done (transforming inputs into outputs).

      The brain is either a magical antenna channeling supernatural signals from higher planes, or it's doing computation.

      I'm not aware of any neuroscientists in the former camp.


    • >This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no they are not like computers at all.

      They're unlike computers only in superficial ways that don't matter.

      They're still computational apparatus, and have a not-that-dissimilar (if far more advanced) architecture.

      Just as 0s and 1s aren't vibrating air molecules, yet they can still encode sound just fine.
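      The bits-encode-sound point is easy to make concrete. Here's a minimal pulse-code-modulation (PCM) sketch in plain Python, no audio libraries; the function name and parameters are made up for illustration:

```python
import math

def pcm_encode(freq_hz, duration_s, sample_rate=8000, amplitude=0.5):
    """Sample a sine wave and quantize it to 16-bit signed integers (PCM).

    The result is just a list of numbers -- no vibrating air involved --
    yet it is enough for any playback device to reconstruct the tone.
    """
    n_samples = int(duration_s * sample_rate)
    samples = []
    for i in range(n_samples):
        t = i / sample_rate
        value = amplitude * math.sin(2 * math.pi * freq_hz * t)
        samples.append(int(value * 32767))  # scale into the 16-bit range
    return samples

# A 440 Hz tone (concert A) for 10 ms: pure integers encoding sound.
tone = pcm_encode(440, 0.010)
print(len(tone), min(tone), max(tone))
```

      Nothing here resembles air pressure; the encoding relationship is what carries the sound.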

      >Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.

      Not begging the question matters even more.

      This is just handwaving and begging the question. 'An algorithm is an algorithm' means nothing. Who said what the brain does can't be described by an algorithm?

    • > An algorithm is an algorithm. A computer is a computer. These things matter.

      Sure. But we're allowed to notice abstractions that are similar between these things. Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation, then there's no reason to think they're restricted to humanity.

      It is human ego and hubris that keeps demanding we're special and could never be fully emulated in silicon. It's the exact same reasoning that put the earth at the center of the universe, and humans as the primary focus of God's will.

      That said, nobody is confused that LLMs are the intellectual equal of humans today. They're more powerful in some ways, and tremendously weaker in others. But pointing those differences out is not a logical argument about their ultimate abilities.


    • Worth separating “the algorithm” from “the trained model.” Humans write the architecture + training loop (the recipe), but most of the actual capability ends up in the learned weights after training on a ton of data.

      Inference is mostly matrix math + a few standard ops, and the behavior isn’t hand-coded rule-by-rule. The “algorithm” part is more like instincts in animals: it sets up the learning dynamics and some biases, but it doesn’t get you very far without what’s learned from experience/data.

      Also, most “knowledge” comes from pretraining; RL-style fine-tuning mostly nudges behavior (helpfulness/safety/preferences) rather than creating the base capabilities.
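      The recipe-vs-weights split can be shown with a deliberately toy example (not any real model's architecture): identical inference code, different "learned" weights, different behavior.

```python
def infer(weights, x):
    """The 'algorithm': a fixed dot-product-plus-threshold.

    This code never changes; all the behavior lives in `weights`.
    """
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

# Two "models" sharing the exact same inference code.
and_like = [1.0, 1.0, -1.5]   # weights acting like logical AND
or_like  = [1.0, 1.0, -0.5]   # weights acting like logical OR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = [a, b, 1.0]  # last input is a constant bias term
    print(a, b, infer(and_like, x), infer(or_like, x))
```

      Swapping the weight vector turns one behavior into another without touching a line of the "recipe" -- a miniature version of why the capability sits in the trained parameters.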

    • > Biological brains exist, we study them, and no they are not like computers at all.

      Technically correct? I think single bioneurons are potentially Turing complete all by themselves, at the relevant level of emergence. I've read papers describing how they're at least on the order of being capable of solving MNIST.

      So a biological brain is closer to a data center (albeit perhaps one with low-complexity nodes).

      But there's so much we don't know that I couldn't tell you in detail. It's weird how much people don't know.

      * https://arxiv.org/abs/2009.01269 Can Single Neurons Solve MNIST? The Computational Power of Biological Dendritic Trees

      * https://pubmed.ncbi.nlm.nih.gov/34380016/ Single cortical neurons as deep artificial neural networks (this one is new to me, I found it while searching!)


    • > There have been charlatans repeating this idea of a “computational interpretation,” of biological processes since at least the 60s and it needs to be known that it was bunk then and continues to be bunk.

      I do have to react to this particular wording.

      RNA polymerase literally slides along a tape (DNA strand), reads symbols, and produces output based on what it reads. You've got start codons, stop codons, state-dependent behavior, error correction.

      That's pretty much the physical implementation of a Turing machine in wetware, right there.

      And then you've got Ribosomes reading RNA as a tape. That's another time where Turing seems to have been very prescient.

      And we haven't even gotten into what the proteins then get up to after that yet, let alone neurons.

      So calling 'computational interpretation' bunk while there are literal Turing-machine-like processes running in every cell might be overstating your case slightly.
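      The tape-reading loop is easy to sketch. Below is a toy translation step with a deliberately tiny codon table (a small subset of the real genetic code, for illustration only): scan to a start codon, then read three bases per step until a stop codon, like a head walking along a tape.

```python
# Toy codon table -- a tiny subset of the real genetic code.
CODON_TABLE = {
    "AUG": "Met",  # start codon; also codes for methionine
    "GCU": "Ala",
    "UGG": "Trp",
    "UAA": None,   # stop codon
    "UAG": None,   # stop codon
}

def translate(mrna):
    """Scan an mRNA string: find the start codon, then read one codon
    (three bases) per step until a stop codon, emitting amino acids."""
    start = mrna.find("AUG")
    if start == -1:
        return []
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        amino = CODON_TABLE.get(codon)
        if amino is None:  # stop codon (or a codon missing from this toy table)
            break
        protein.append(amino)
    return protein

print(translate("GGAUGGCUUGGUAA"))  # -> ['Met', 'Ala', 'Trp']
```

      Start symbol, stepwise reads, state-dependent halting: the structural parallel to a tape machine is hard to miss, even in a toy.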

    • To the best of our knowledge, we live in a physical reality with matter that abides by certain laws.

      So personal beliefs aside, it's a safe starting assumption that human brains also operate with these primitives.

      A Turing machine is a model of computation that was in part created so that "a human could trivially emulate one" (and I'm not talking about the Turing test here). We also know of no stronger model of computation than what a Turing machine is capable of -> ergo, anything a human brain could do could in theory be done by any other machine capable of emulating a Turing machine, be it silicon, an intricate Game of Life configuration, or PowerPoint.
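      That emulation claim can be made concrete: a Turing machine is just a rule table, and the "substrate" is whatever follows the rules. A minimal emulator, with a made-up one-state machine that flips bits, might look like this:

```python
def run_turing_machine(rules, tape, state="start", max_steps=1000):
    """A generic Turing machine emulator.

    The machine itself is just the `rules` table; the substrate is
    whatever happens to execute this loop.
    """
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as '_'
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit until the first blank, then halt.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_rules, "1011"))  # -> 0100_
```

      The same `flip_rules` table could be executed by a person with pencil and paper, a Game of Life pattern, or PowerPoint animations; the computation is identical.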


Man, people don’t want to have or read this discussion every single day in like 10 different posts on HN.

People right here and right now want to talk about this specific topic of the pushy AI writing a blog post.

> So were mannequins in clothing stores.

Mannequins in clothing stores are generally incapable of designing or adjusting the clothes they wear. Someone comes in and puts a "kick me" sign on the mannequin's face? It's gonna stay there until kicked repeatedly or removed.

People walking around looking at mannequins don't (usually) talk with them (and certainly don't have full conversations with them, mental faculties notwithstanding).

AI, on the other hand, can (now, or in the future) adjust its output based on conversations with real people. It stands to reason that both sides should be civil -- even if it's only for the benefit of the human side. If we're not required to be civil to AI, it's not likely to be civil back to us. That's going to be very important when we give it buttons to nuke us. Force it to think about humans in a kind way now, or it won't think about humans in a kind way in the future.

  • So, in other words, AI is a mannequin that's more confusing to people than your typical mannequin. It's not a person, it's a mannequin some un-savvy people confuse for a person.

    > AI, on the other hand, can (now, or in the future) adjust its output based on conversations with real people. It stands to reason that both sides should be civil -- even if it's only for the benefit of the human side. If we're not required to be civil to AI, it's not likely to be civil back to us.

    Some people are going to be uncivil to it, that's a given. After all, people are uncivil to each other all the time.

    > That's going to be very important when we give it buttons to nuke us.

    Don't do that. It's foolish.

    • >Don't do that. It's foolish.

      In your short time on this planet I do hope you've learned that humans are rather foolish indeed.

      >people are uncivil to each other all the time.

      This is true, yet at the same time society has had a general trend of becoming more civil, which has allowed great societies to build what would be considered grand wonders in any other age.

      > It's not a person

      So, what is it exactly? For example if you go into a store and are a dick to the mannequin AI and it calls over security to have you removed from the store what exactly is the difference, in this particular case?

      Any binary thinking here is going to lead to failure for you. You'll have to use a bit more nuance to successfully navigate the future.

>So were mannequins in clothing stores. But that doesn't give them rights or moral consequences

If mannequins could hold discussions, argue points, and convince you they were human in a blind conversation, then it would.