
Comment by hgsgm

3 years ago

> Thinking of ML under the PAC learning lens will show you why AGI isn't possible through just ML

Why? PAC learning looks a lot like how humans think.

> But the Symbolists direction is also blocked by fundamental limits of math and CS with Gödel's work being one example.

Why? Gödel's incompleteness applies equally well to humans as to machines. It's an extremely technical statement about self-reference within an axiom system, pointing out that it's possible to construct paradoxical sentences. That has nothing to do with general theorem proving about the world.

Superficially, some aspects of human learning are similar to PAC learning, but it is not equivalent.

Gödel's incompleteness applies equally well to humans as to machines when writing down axioms and formulas, not in general tasks.

The irony of trying to explain this on a site called Y Combinator: even for just propositional logic, exponential time is the best we can do for algorithms for general proof and validity-checking tasks.
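For a concrete (if naive) illustration of that exponential cost, here is a minimal sketch of the truth-table decision procedure for propositional validity; the formula encoding and variable names are just assumptions for the example:

    from itertools import product

    def valid(formula, variables):
        # Brute-force validity check: the formula must come out true
        # under every one of the 2^n truth assignments to its variables.
        return all(
            formula(dict(zip(variables, bits)))
            for bits in product([False, True], repeat=len(variables))
        )

    # Example: (p -> q) or (q -> p) is a tautology.
    f = lambda env: (not env["p"] or env["q"]) or (not env["q"] or env["p"])
    print(valid(f, ["p", "q"]))  # True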

For first-order predicate logic, the set of valid formulas is recursively enumerable, so with unlimited resources a proof of a valid formula can be found in finite time.

But unlimited resources and an unbounded (if finite) amount of time are not practical.
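To make "recursively enumerable" concrete, here is a sketch of the kind of semi-decision procedure this implies, assuming a hypothetical proof checker is_proof_of(candidate, formula) for some complete first-order proof system (the checker and the encoding alphabet are stand-ins, not a real library): enumerate candidate proofs in order of length; if the formula is valid this halts, and if it is not, it runs forever.

    from itertools import count, product

    # Hypothetical alphabet in which candidate proofs are spelled out.
    ALPHABET = "pq()->&|!AE,x "

    def is_proof_of(candidate, formula):
        # Hypothetical checker for a complete first-order proof system.
        # Checking one candidate proof is decidable; finding one is the hard part.
        raise NotImplementedError  # stand-in, not a real implementation

    def semi_decide_valid(formula):
        # Returns a proof if `formula` is valid; never returns otherwise.
        for length in count(1):  # enumerate candidate proofs by increasing length
            for chars in product(ALPHABET, repeat=length):
                candidate = "".join(chars)
                if is_proof_of(candidate, formula):
                    return candidate  # found in finite time when the formula is valid
        # If `formula` is not valid, the loops above never terminate.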

Similarly with modern SOTA LLMs: while they could in principle be computationally complete, they would require an unbounded amount of RAM to be so, which is also impractical. And invalid formulas cannot reliably be detected, since the set of non-valid first-order formulas is not even recursively enumerable.

Why this is ironic:

Curry's Y combinator, Y = λf.(λx.f (x x)) (λx.f (x x)), leads to several paradoxes which show that untyped lambda calculus is unsound as a deductive system.
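For concreteness, the same trick can be transcribed into Python. This is only a sketch: since Python is strict, it uses the eta-expanded "Z" variant of Curry's Y, and the factorial example is just an illustration.

    # Z combinator: the call-by-value form of Y = λf.(λx.f (x x)) (λx.f (x x)).
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # Anonymous recursion: factorial defined without ever naming itself.
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(5))  # 120

Note that the self-application x x does not type-check in simply typed lambda calculus, which is exactly the kind of restriction typed systems use to block the Curry and Kleene–Rosser paradoxes.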

The Church–Turing thesis, and the proven equivalence of lambda calculus and Turing machines behind it, means the two models compute exactly the same functions.

Here is Haskell Curry's paper on the Kleene–Rosser paradox, which is related:

https://www.ams.org/journals/tran/1941-050-03/S0002-9947-194...