Comment by freediver

4 years ago

It is not a contradiction as I meant "achieving" in the context of creating it (through software).

The fact it happened to us is undeniable (from our perspective), but the how/why of it is still one of the many mysteries of the universe - one we will likely never solve.

FWIW, this is the same argument once made against human flight. In the late 19th century, there were a lot of debates of the form:

> Clearly flight is possible, birds do it

> Sure but how/why is one of the many mysteries of the universe, one we will likely never solve.

"Man won't fly for a million years – to build a flying machine would require the combined and continuous efforts of mathematicians and mechanics for 1-10 million years." - NYT 1903

  • The real answer to how birds fly is that they're extremely lightweight, so their wing muscles can lift them. Common pigeons or seagulls only weigh about 2 or 3 pounds. The largest birds of prey top out around 18 pounds. Anything heavier is flightless. A 150-pound human isn't getting anywhere on wing muscle power.

    • The largest pterosaurs are estimated to have had wingspans of more than 9 m and to have weighed up to 250 kg (550 pounds), and we believe they were able to fly. [1]

      But that's not the most relevant point here. The fact that humans did achieve flight, but through a different method than birds, is exactly a supporting argument that we might achieve AGI with a different approach than the one our brains use.

      There are countless similar examples. We see a natural phenomenon, we know it's possible, and we find a way to replicate the desired effect (not the whole phenomenon) artificially. I haven't heard any argument here for why it would be different for intelligence, except that we don't know how yet.

      [1] https://en.m.wikipedia.org/wiki/Pterosaur_size


I’m curious why you think that. Do you think it’s a fundamental problem with the discrete nature of traditional computers? Or a problem with scale and computational limits? If it’s the latter, if a hypothetical computer has unlimited time and memory capacity, why do you think AGI would remain impossible?

  • Machines are good at computation, which is not the same as reasoning but rather a subset of it.

    And not only are they good at computation, they are exceptionally good at it - I have no illusions about competing with a machine at extracting square roots or playing chess. Increasingly hard problems are being expressed as computation problems, with more or less success - self-driving is probably the most famous example.

    But at the end of the day it feels like using an increasingly longer ladder to reach the surface of the Moon.

    While that is imaginable, and every time we extend the ladder the Moon does get closer, reaching it that way is fundamentally impossible.
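    (As a toy illustration of the kind of computation the comment above alludes to - a hedged sketch, not anything from the thread itself: Newton's method pins down a square root to machine precision in a handful of iterations, a task no human matches by hand.)

```python
def newton_sqrt(x, iterations=10):
    """Approximate sqrt(x) by iterating y -> (y + x/y) / 2 (Newton's method)."""
    y = x
    for _ in range(iterations):
        y = (y + x / y) / 2
    return y

# Converges to sqrt(2) ~ 1.41421356... within a few iterations.
print(newton_sqrt(2.0))
```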

    • Ever since Gödel we’ve had a pretty convincing proof that there is nothing that you can do in terms of reasoning that can’t be expressed using computation. And since Turing we’ve got a framework that shows there’s nothing computable that you can’t compute using a universal computer.

      So unless there’s something mystical beyond the realm of mathematics to ‘reasoning’ it can’t be a superset of computing.

      If a finite amount of matter in a brain with a finite amount of energy can do it, then a universal computing machine with a finite amount of storage and a finite amount of time can do it.
