Comment by Wowfunhappy
6 months ago
It wasn't a simple brute force. When Claude was working this morning, it was pretty clearly only playing a file when it actually needed to see packets get decoded; otherwise it would simply open and close the document. Similarly, it would only seek or fast-forward when it was debugging specific issues related to those actions. And it even "knew" which test files to open for specific channel layouts.
Yes this is still mechanical in a sense, but then I'm not sure what behavior you wouldn't classify as mechanical. It's "responding" to stimuli in logical ways.
But I also don't quite know where I'm going with this. I don't think LLMs are sentient or anything; I know they're just math. But it's spooky.
I think you misunderstood me.
"Simple" is the key word here, right? You agree that it is still under the broad class of "brute force"?
I'm not saying Claude is naively brute forcing. In fact, given the lack of interpretability of these machines, it is difficult to say what kind of optimization it is doing and how complex it is (this was a key part, tbh).
My point was to help with this, which requires understanding how some actions can be mechanical. You admitted to cognitive dissonance (something we all do, and I fully agree it's hard not to) and to wanting to fight it. We're just trying to find some helpful avenues to do so.
And so too can a simple program, right? A program can respond to user input, and there is certainly a logic path it will follow. Our non-ML program will likely follow a deterministic path (there is still probabilistic programming...), but that doesn't mean it isn't logic, right?
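To make "mechanical" concrete, here's a toy sketch (hypothetical names, loosely modeled on the debugging behavior described above) of a non-ML program following a fixed logic path in response to stimuli:

    # Toy, deterministic "respond to stimulus" program.
    # The same input always takes the same logic path.
    def respond(event):
        if event == "open_file":
            return "decode packets"
        elif event == "seek":
            return "check timestamp handling"
        else:
            return "do nothing"

    print(respond("seek"))  # always prints "check timestamp handling"

It "responds" to input in a perfectly logical way, yet nobody would call it conscious; the difference from Claude is one of complexity, not of kind, which is exactly why the next question matters.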
But the real question here, which you have to ask yourself (constantly) is "how do I differentiate a complex program that I don't understand from a conscious entity?" I guarantee you that you don't have the answer (because no one does). But isn't that a really good reason to be careful about anthropomorphizing it?
That's the duck test.
How do you determine if it is a real duck or a highly sophisticated animatronic?
If you anthropomorphize, you rule out the possibility that it is a highly sophisticated animatronic, and you *MUST* assume that you are not only an expert but a perfect duck detector. But simultaneously we cannot rule out that it is a duck, right? Because we aren't a perfect duck detector *AND* we aren't experts in highly sophisticated animatronics (especially of the duck kind).
Remember, there are not two answers to every True-False question; there are three. Every True-False question has an answer of "True", "False", or "Indeterminate". So don't naively assume it is binary. We all know the Halting Problem, right? (Also see my namesake, or quantum physics, if you want to see such things pop up outside computing.)
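For anyone who wants the one-screen version of why "indeterminate" gets forced on us, here's a sketch of the classic Halting Problem argument (the halts function is hypothetical; the whole point is that no such decider can exist):

    # Suppose, for contradiction, halts(prog, arg) always correctly
    # returned True or False.
    def halts(prog, arg):
        raise NotImplementedError("no such decider can exist")

    def paradox(prog):
        if halts(prog, prog):   # if prog would halt on itself...
            while True:         # ...loop forever
                pass
        return                  # ...otherwise, halt

    # paradox(paradox) halts if and only if it doesn't halt.
    # So "does this program halt?" can't always get a True/False answer.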
Though I agree, it can be very spooky. But that only increases the importance of developing mental models that help us evaluate things more objectively. And that requires that "indeterminate" be a possibility. This is probably the best place to start when combating the cognitive dissonance.
I have no idea why some people take so much offense at the fact that humans are just another machine; there's no reason another machine can't surpass us here, as machines already have in every other avenue. Many of the reasons people give for LLMs not being conscious are just as applicable to humans.
I don't think the question is whether humans are a machine, but rather what is meant by "machine". Most people interpret it as meaning deterministic and thus having no free will. That's probably not what you're trying to convey, so it might not be the best word to use.
But the question is: what is special about the human machine? What is special about the animal machine? These are different from all the machines we have built. Is it complexity? Is it indeterminism? Is it something more? Certainly these machines have feelings, and we need to account for them when interacting with them.
Though we're getting well off topic from determining whether a duck is a duck or a machine (you know what I mean by this word, and that I don't mean a normal duck).
Absolutely possible (I'd say even likely) for humans to be surpassed by machines, which already have better recall and storage.
I'm highly skeptical this will happen with LLMs, though: their output is superficially convincing but lacks depth and creativity.