
Comment by delusional

4 days ago

Anybody who doesn't acknowledge this as a basic truth of these systems should not be listened to. It's not intelligence, it's statistics.

The AI doesn't reason in any real way. It's calculating the probability of the next word appearing in the training set conditioned on the context that came before, and in cases where there are multiple likely candidates it's picking one at random.
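To make the sampling step concrete, here is a minimal toy sketch in Python; the vocabulary and logit values are made up for illustration. The model assigns a score to each candidate next token, the scores are turned into probabilities, and one token is drawn at random in proportion to them.

```python
import numpy as np

# Toy next-token sampling. The vocabulary and logits below are
# illustrative placeholders, not values from any real model.
vocab = ["cat", "dog", "car", "tree"]
logits = np.array([2.0, 1.8, 0.3, -1.0])

def sample_next_token(logits, temperature=1.0, rng=None):
    rng = rng or np.random.default_rng()
    # Softmax turns scores into a probability distribution.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # When several candidates have similar probability, the draw is
    # effectively a weighted coin flip among them.
    idx = rng.choice(len(logits), p=probs)
    return idx, probs

idx, probs = sample_next_token(logits)
print(vocab[idx], probs.round(3))
```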

To the extent you want to claim intelligence from these systems, it's actually present in the training data. The intelligence is not emergent, it's encoded by humans in the training data. The weaker that signal is relative to the noise of random internet garbage, the more likely the AI is to pick a random choice that isn't true.

I'm arguing that this is too simple an explanation.

The Claude paper showed that it has some internal model when answering in different languages.

The process of learning can have effects within it that are more than statistics. If the training optimizes itself by forming an internal model representation, then it's no longer just statistics.

It also sounds as if humans are the origin of intelligence. But if humans do the same thing as LLMs, and the only difference is that we do not train LLMs from scratch (letting them discover the world, letting them invent languages, etc., but instead priming them with our world), then our intelligence was emergent and the LLMs' is emergent by proxy.

  • Since the rise of LLMs, the thought has definitely occurred to me that perhaps our intelligence also arises from language processing. It might.

    The big difference between us and LLMs, however, is that we grow up in the real world, where some things really are true, and others really are false, and where truths are really useful to convey information, and falsehoods usually aren't (except truths reported to others may be inconvenient and unwelcome, so we learn to recognize that and learn to lie). LLMs, however, know only text. Immense amounts of text, without any way to test or experience whether it's actually true or false, without any access to a real world to relate it to.

    It's entirely possible that the only way to produce really human-level intelligent AI with a concept of truth, is to train them while having them grow up in the real world in a robot body over a period of 20 years. And that would really restrict the scalability of AI.

    • I just realized that kids (and adults) these days grow up more in virtual environments behind screens than in touch with the real world, and maybe that might have an impact on our ability to discern truth from lies. That would certainly explain a lot about the state of our world.


The only scientific way to prove intelligence is using statistics. If you can show that a certain LLM is accurate enough on generalised benchmarks, that is sufficient to call it intelligent.

I don't need to know how it works internally or why it works internally.

What you (and the parent post) are suggesting is that it is not intelligent based on how it works internally. That is not a scientific take on the subject.

This is in fact how it works for medicine. A drug works because it has been shown to work based on statistical evidence. Even if we don't know how it works internally.
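As a rough illustration of what that kind of statistical evidence could look like for a benchmark, here is a minimal sketch; the 870-correct-out-of-1000 result is hypothetical. A Wilson score interval puts error bars on the measured accuracy, the same style of reasoning used to judge whether a drug's measured effect is real.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    # 95% Wilson score confidence interval for a binomial proportion.
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# Hypothetical benchmark run: 870 correct answers out of 1000 questions.
low, high = wilson_interval(870, 1000)
print(f"accuracy 0.870, 95% CI [{low:.3f}, {high:.3f}]")
```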

  • Assuming the statistical analysis was sound, which is not always the case. See the replication crisis, for example.