Comment by godelski

6 months ago

  > It wasn't a simple brute force.

I think you misunderstood me.

"Simple" is the key word here, right? You agree that it is still under the broad class of "brute force"?

I'm not saying Claude is naively brute forcing. In fact, given the lack of interpretability of these machines, it is difficult to say what kind of optimization it is doing and how complex it is (this was a key part, tbh).

My point was to help with this:

  > I really don't want to anthropomorphize these programs, but it's just so hard when it's acting so much like a person...

Which requires you to understand how some actions can be mechanical. You admitted to cognitive dissonance (something we all do and I fully agree is hard not to do) and wanting to fight it. We're just trying to find some helpful avenues to do so.

  > It's "responding" to stimuli in logical ways.

And so too can a simple program, right? A program can respond to user input and there is certainly a logic path it will follow. Our non-ML program is likely going to have a deterministic path (there is still probabilistic programming...), but that doesn't mean it isn't logic, right?
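To make that concrete, here's a minimal sketch (hypothetical, just for illustration) of a non-ML program "responding" to stimuli through a fixed, fully inspectable logic path:

```python
def respond(stimulus: str) -> str:
    # A fixed, deterministic logic path: each stimulus maps to exactly
    # one response, yet from the outside it still looks like "reacting".
    if stimulus == "hello":
        return "hi there"
    elif stimulus.endswith("?"):
        return "good question"
    else:
        return "noted"

print(respond("hello"))  # always "hi there" -- same input, same output
```

Nobody would call this conscious, but it is undeniably responding to stimuli in a logical way.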

But the real question here, which you have to ask yourself (constantly) is "how do I differentiate a complex program that I don't understand from a conscious entity?" I guarantee you that you don't have the answer (because no one does). But isn't that a really good reason to be careful about anthropomorphizing it?

That's the duck test.

How do you determine if it is a real duck or a highly sophisticated animatronic?

If you anthropomorphize, you rule out the possibility that it is a highly sophisticated animatronic and you *MUST* assume that you are not only an expert, but a perfect, duck detector. But simultaneously we cannot rule out that it is a duck, right? Because we aren't a perfect duck detector *AND* we aren't an expert in highly sophisticated animatronics (especially of the duck kind).

Remember, there are not two answers to every True-False question, there are three. Every True-False question either has an answer of "True", "False", or "Indeterminate". So don't naively assume it is binary. We all know the Halting Problem, right? (also see my namesake or quantum physics if you want to see such things pop up outside computing)
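As a reminder of why "indeterminate" is unavoidable, the standard Halting Problem argument can be sketched in code (informal, the `halts` oracle is hypothetical and cannot actually be implemented):

```python
def halts(f, arg) -> bool:
    # Hypothetical oracle: would return True iff f(arg) halts.
    # The whole point of the argument is that no such total function
    # can exist, so this body is deliberately left as a stub.
    ...

def paradox(f):
    # If halts() existed, paradox(paradox) would halt exactly when it
    # doesn't halt -- a contradiction. So "does this program halt?" is
    # a True/False question whose answer can be undecidable.
    if halts(f, f):
        while True:
            pass
    else:
        return
```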

Though I agree, it can be very spooky. But that only increases the importance of trying to develop mental models that help us more objectively evaluate things. And that requires that "indeterminate" be a possibility. This is probably the best place to start to combat the cognitive dissonance.

I have no idea why some people take so much offense to the fact that humans are just another machine; there's no reason why another machine can't surpass us here, as machines already have in so many other avenues. Many of the reasons people give for LLMs not being conscious are just as applicable to humans too.

  • I don't think the question is if humans are a machine or not but rather what is meant by machine. Most people interpret it as meaning deterministic and thus having no free will. That's probably not what you're trying to convey so might not be the best word to use.

    But the question is what is special about the human machine? What is special about the animal machine? These are different from all the machines we have built. Is it complexity? Is it indeterminism? Is it something more? Certainly these machines have feelings, and we need to account for them when interacting with them.

    Though we're getting well off topic from determining if a duck is a duck or is a machine (you know what I mean by this word and that I don't mean a normal duck).

    • What is indeterminism here? I am not sure the question of having or not having free will has any impact on how to make human-like machines. We are just as in the dark about the future whether we have free will or not. I am not aware of any physical problem in which free will, or the lack of it, plays a role. I could be wrong. So it's probably an interesting question, but rather pointless.

      Even with the everyday machines and programs we have, we can make them behave based on random input taken, for example, from physical noise. That doesn't suddenly make them a special or different type of machine.
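      For example (a minimal sketch): seeding a program's behavior from the OS entropy pool, which on most systems ultimately draws on physical noise, makes its output unpredictable without changing what kind of machine it is.

```python
import os

def coin_flip() -> str:
    # os.urandom() draws from the OS entropy pool, which is typically
    # fed by physical noise sources. The program's output is now
    # non-deterministic, but it is still an ordinary program.
    byte = os.urandom(1)[0]
    return "heads" if byte % 2 == 0 else "tails"

print(coin_flip())  # "heads" or "tails", unpredictably
```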


  • Absolutely possible (I'd say even likely) for humans to be surpassed by machines, which already have better recall and storage.

    I'm highly skeptical this will happen with LLMs though; their output is superficially convincing but lacks depth and creativity.