Comment by mcv

4 days ago

Since the rise of LLMs, the thought has definitely occurred to me that our own intelligence might also arise from language processing. Perhaps it does.

The big difference between us and LLMs, however, is that we grow up in the real world, where some things really are true and others really are false, and where truths are genuinely useful for conveying information while falsehoods usually aren't (except that truths can be inconvenient and unwelcome to others, so we learn to recognize that and learn to lie). LLMs, however, know only text. Immense amounts of text, with no way to test or experience whether it's actually true or false, and no access to a real world to relate it to.

It's entirely possible that the only way to produce truly human-level intelligent AI with a concept of truth is to have it grow up in the real world, in a robot body, over a period of 20 years. And that would severely restrict the scalability of AI.

I just realized that kids (and adults) these days grow up more in virtual environments behind screens than in touch with the real world, and that might have an impact on our ability to discern truth from lies. That would certainly explain a lot about the state of our world.

  • A few years back I saw a documentary about kids in a third-world country where it is normal to drink soda from plastic bags.

    These kids couldn't understand that the plastic garbage in the nature around them is not part of nature.

    Nonetheless, depending on which rules you mean, there are plenty of people who show that logic or 'truth' is not the same for everyone.

    People believe in gods, ghosts, conspiracy theories, a flat earth, etc.

    I'm more curious whether the 'self' can only be trained if you have a clear line of control. We learn what the self is because there is a part of us we can control and then there is a part we can't control.