Comment by alansammarone

10 days ago

I am very interested in this subject. There's a lot to unpack in your comment, but I think the core of it is pretty simple.

> this does not support your conclusion that artificial systems are "computationally equivalent" to brains in any practical sense.

You're making a point about engineering or practicality, and in that sense, you are absolutely correct.

That's not the most interesting part of the question, however.

> This is like arguing that because weather systems and computers both follow physical laws, you should be able to perfectly simulate weather on your laptop.

Yes, that's exactly what I'd argue, and yes, I think it's clearly true. Whether it takes 10 minutes or 10^100 minutes, 1 or 10^100 human lifetimes, is irrelevant. Units (including human lifetimes) are arbitrary, and I think fundamental truths probably won't depend on such arbitrary things as how long a particular collection of atoms in a particular corner of the universe (i.e. humans) happens to be stable for. Ratios are closer to being fundamental, but I digress.
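
To make the scale point concrete, here is a deliberately toy sketch: a 1D diffusion solver standing in for "weather". The function name and parameters are mine, purely illustrative; a real atmospheric model is vastly more complex, but the point survives. Resolution and step count are engineering knobs that change the cost of the loop, not what is being computed:

```python
import numpy as np

def simulate_diffusion(n_cells, n_steps, alpha=0.1):
    """Toy 1D heat-diffusion 'weather' model (illustrative only).

    n_cells and n_steps are pure engineering knobs: cranking them up
    changes how long the loop takes, not what is being computed.
    """
    u = np.zeros(n_cells)
    u[n_cells // 2] = 1.0  # a single initial 'hot spot'
    for _ in range(n_steps):
        # explicit finite-difference update (stable for alpha <= 0.5)
        u[1:-1] += alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# Identical algorithm at two resolutions; only the runtime differs.
coarse = simulate_diffusion(n_cells=100, n_steps=1_000)
fine = simulate_diffusion(n_cells=1_000, n_steps=100_000)
```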

To put it another way: we think we know what the speed of light is. Traveling at v = 0.1c and at v = (1 - 10^(-100))c are equivalent in a fundamental sense; closing that gap is an engineering problem. But traveling at v = c? That's very different. That's interesting.
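
For concreteness, the Lorentz factor makes this precise (the arithmetic below uses 1 - (1 - e)^2 ≈ 2e for small e):

```latex
\gamma(v) = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\gamma(0.1c) \approx 1.005, \qquad
\gamma\big((1 - 10^{-100})c\big) \approx \frac{1}{\sqrt{2 \times 10^{-100}}} \approx 7 \times 10^{49}
```

Both values are finite; the second is absurdly large, but it is a number. γ diverges only as v → c, which is the precise sense in which v = c differs in kind rather than in degree.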

Exactly this distinction applies here: I believe doing it efficiently is "just engineering", but I would not claim we know that with any reasonable degree of certainty.

I hold beliefs about what LLMs may be capable of that are far stronger than what I argued, but I deliberately stated only what the facts can support:

Absent evidence that we can exceed the Turing computable, we have no reason to believe an LLM can't be trained to "represent ideas that it has not encountered before" or "come up with truly novel concepts".
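
Since "the Turing computable" is doing the heavy lifting in that sentence, here is a minimal sketch of what that class means (the simulator, machine, and encoding are mine, purely illustrative): anything computable in the Church-Turing sense can be expressed as a finite rule table like this, given unbounded tape and time.

```python
def run_tm(rules, tape, state="start", pos=0, max_steps=10_000):
    """Minimal Turing machine simulator (illustrative only)."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# (state, read) -> (write, move, next_state); increments a unary number.
rules = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),   # append one more 1, then halt
}
print(run_tm(rules, "111"))  # -> "1111"
```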