
Comment by nyrikki

3 years ago

There is a bit of a political history between the symbolists and the connectionists that complicates that: basically, the Symbolic camp was looking for universal quantifiers while the connectionists were researching existential or statistical quantifiers.

The connectionists left the 'AI' folks and established the ML field in the 90s.

Sometimes those political rifts arise in discussions about what is possible.

Thinking of ML under the PAC learning lens will show you why AGI isn't possible through just ML.

But the Symbolists' direction is also blocked by fundamental limits of math and CS, with Gödel's work being one example.

LLMs are AI if your definition is closer to the general understanding of the word, but you have to agree on a definition to reach agreement between two parties.

The belief that AGI is close is speculative and there are many problems, some of which are firmly thought to be unsolvable with current computers.

AGI is pseudo-science today without massive advances. But unfortunately, as there isn't a consensus on what intelligence is, those discussions are also difficult.

Overloaded terms make it very difficult to have discussions on what is possible.

Your links claim that:

'GPT-4 is not "AI" because AI means "AGI,"'

which is a stricter usage than has typically been applied to AI, and is an example of my claim above.

As we lack general definitions it isn't invalid, but under their claims no AI is thought to be possible at all.

In my experience, most researchers would use something closer to: AI is computer systems that perform work, within a restricted domain, that typically requires humans.

  • "AI" is one of the few words that has a looser definition as jargon than in general discourse. In general discourse, "AI" has a precise meaning: "something that can think like a human." As jargon though, "AI" means "we can get funding for calling this 'AI'." I would say LLMs count as AI exactly because they can simulate human-like reasoning. Of course they still have gaps, but on the other hand, they also have capabilities that humans don't have. On balance, "AI" is fair, but it's only fair because it's close to the general usage of the term. The jargon sense of it is just meaningless and researchers should be ashamed of letting the term get so polluted when it had and has a very clear meaning.

> Thinking of ML under the PAC learning lens will show you why AGI isn't possible through just ML

Why? PAC looks a lot like how humans think.
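
For reference, the standard PAC guarantee roughly says (textbook form; my notation, so treat the exact shape as an assumption): with m i.i.d. samples from a fixed distribution D, the learner must output a hypothesis h_S such that

    \Pr_{S \sim D^m}\big[\operatorname{err}_D(h_S) \le \varepsilon\big] \ge 1 - \delta
    \quad \text{whenever } m \ge \operatorname{poly}\!\left(1/\varepsilon,\ 1/\delta\right)

i.e. the guarantee is only approximate (ε), only probabilistic (δ), and always relative to one fixed distribution D.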

> But the Symbolists' direction is also blocked by fundamental limits of math and CS, with Gödel's work being one example.

Why? Gödel's incompleteness applies equally well to humans as to machines. It's an extremely technical statement about self-reference within an axiom system, pointing out that it's possible to construct paradoxical sentences. That has nothing to do with general theorem proving about the world.

  • Superficially, some aspects of human learning resemble PAC learning, but the two are not equivalent.

    Gödel's incompleteness applies equally to humans and machines when it comes to writing down axioms and formulas, not in general tasks.

    There is some irony in trying to explain this on a site called Y Combinator, but even for just propositional logic, exponential time is the best we can do for algorithms and general proof tasks (a brute-force sketch is at the end of this comment).

    For first-order predicate logic, the valid formulas are recursively enumerable, so with unlimited resources they can be found in finite time.

    But unlimited resources and arbitrarily long finite runtimes are not practical.

    Similarly with modern SOTA LLMs: while they could in principle be computationally complete, they would require an unbounded amount of RAM to do so, which is also impractical. And invalid formulas cannot reliably be detected.

    Why this is ironic:

    Curry's Y combinator, Y = λf.(λx.f (x x)) (λx.f (x x)), led to several paradoxes showing that untyped lambda calculus is unsound as a deductive system (a small sketch of its fixed-point behaviour is at the end of this comment).

    Lambda calculus and Turing machines were shown to be equivalent in computational power, which underpins the Church–Turing thesis.

    Here is Haskell Curry's paper on the Kleene–Rosser paradox, which is related.

    https://www.ams.org/journals/tran/1941-050-03/S0002-9947-194...
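
    To make the propositional-logic point concrete, here is a minimal brute-force sketch (my own illustrative Python, not anything from the linked paper): deciding validity by trying assignments takes 2^n steps for n variables, which is the exponential blow-up referred to above.

        from itertools import product

        def is_tautology(formula, variables):
            # Brute-force validity check: enumerate all 2**len(variables) assignments.
            for values in product([False, True], repeat=len(variables)):
                env = dict(zip(variables, values))
                if not formula(env):
                    return False
            return True

        # Example: (p and q) -> p is valid; the check still walks every assignment,
        # and with n variables the loop runs 2**n times.
        f = lambda env: (not (env["p"] and env["q"])) or env["p"]
        print(is_tautology(f, ["p", "q"]))  # True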
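
    And since the Y combinator came up, a tiny sketch (mine, in Python rather than raw lambda calculus) of the fixed-point behaviour Y g = g (Y g) that the paradoxes exploit; Python is strict, so this is the eta-expanded Z variant rather than Y itself.

        # Z combinator: applicative-order analogue of Curry's
        # Y = λf.(λx.f (x x)) (λx.f (x x)); the inner self-application is
        # delayed behind a lambda so evaluation terminates.
        Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

        # Fixed-point property in action: anonymous recursion, no named 'fact'.
        fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
        print(fact(5))  # 120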

Semantics are nice, but it doesn't matter what name you give to technology that shatters economies and transforms the nature of human creative endeavours.

An AI's ability to contemplate life while sitting under a tree is secondary to the impact it has on society.

  • >technology that shatters economies and transforms the nature of human creative endeavours

    One pattern in modern history has been that communication and creative technologies, be it television or even the internet, had significantly less economic impact than people expected. Both television and the internet may have transformed business models and had a huge cultural impact, but, all things considered, a negligible impact on total productivity or the physical economy.

    Given that generative AI seems much better suited to purely virtual tasks or content creation than to physical work, I expect that to repeat. Over past cycles, people have vastly overstated rather than underestimated the impact tech has on the labor force.

    • > but, all things considered, a negligible impact on total productivity or the physical economy.

      It is obscenely hard to measure those sorts of things.

      OK, here is one example.

      It used to be that engineers (the physical kind, not the software kind) had their own secretaries to manage meetings and fetch documents.

      Those secretaries are all out of work now, replaced by Outlook and PDFs.

      Modern farms are wired with thousands upon thousands of IoT sensors, precisely controlling every aspect of the fields and crops. Soil is maintained in ideal conditions. The internet is what made this possible.

      The Internet has allowed for individuals to easily trade stocks, which has had who knows how large of an impact on the economy, but I am willing to guess it isn't a small one.

      The Internet also enabled all sorts of algorithmic trading to pop up.

      Television is of course a huge source of economic output in its own right.

      6.9% of the US GDP is Media and Entertainment, not sure if that includes video games or not.

      The tech industry is at least 10% of the US GDP, remove the Internet and that drops dramatically.

    • What, the internet had a negligible impact on productivity or the economy?

      Are you only talking about consumer products like YouTube or are you including everything related to the unparalleled exchange of data globally?

      I cannot imagine a global information network for businesses having less impact than a few 2x's on just about every relevant axis you can think of.

>> The connectionists left the 'AI' folks and established the ML field in the 90s.

The way I know the story is that modern machine learning started as an effort to overcome the "knowledge acquisition bottleneck" in expert systems, in the '80s. The "knowledge acquisition bottleneck" was simply the fact that it is very difficult to encode the knowledge of experts in a set of production rules for an expert system's knowledge-base.

So people started looking for ways to acquire knowledge automatically. Since the use case was to automatically create a rule-base for an expert system, the models they built were symbolic models, at least at first. For example, if you read the machine learning literature from that era (again, we're at the late '80s and early '90s) you'll find it dominated by the work of Ryszard Michalski [1], which was all entirely symbolic as far as I can tell. Staple representations used in machine learning models of the era included decision lists and decision trees, and that's where decision tree learners like ID3, C4.5, Random Forests, Gradient Boosted Trees, and so on come from; which, btw, are all symbolic models (they are and-or trees, propositional logic formulae).
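
As a quick illustration of that last point (a sketch that assumes scikit-learn, which isn't mentioned above): a fitted decision tree can be printed out as nested propositional rules.

    # Minimal sketch (assumes scikit-learn is installed): a trained decision
    # tree is an and-or tree of threshold tests over the features.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

    # Prints the tree as nested if/else rules over feature thresholds.
    print(export_text(clf, feature_names=list(data.feature_names)))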

A standard textbook from that era of machine learning is Tom Mitchell's "Machine Learning" [2] where you can find entire chapters about rule learning, decision tree learning, and other symbolic machine learning subjects, as well as one on neural network learning.

I don't think connectionists ever left, as you say, the "AI" folks. I don't know the history of connectionism as well as that of symbolic machine learning (which I've studied) but from what I understand, connectionist approaches found early application in the field of Pattern Recognition, where the subject of study was primarily machine vision.

In any case, the idea that the connectionists and the symbolists are diametrically opposed camps within AI research is a bit of a myth. Many of the luminaries of AI would have found it odd: for example, Claude Shannon [3] both showed how Boolean logic could be implemented in switching circuits and founded information theory, whereas the original artificial neuron, the McCulloch–Pitts neuron, was a propositional logic circuit that computes a boolean function of its inputs. And you wouldn't believe it, but Jürgen Schmidhuber's doctoral thesis was a genetic algorithm implemented in ... Prolog [4].
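
To illustrate the McCulloch–Pitts point, here is a toy sketch (mine, not from any of the references): the unit is a fixed threshold gate, and with hand-chosen weights it realises ordinary boolean connectives.

    # A McCulloch–Pitts unit: outputs 1 iff the weighted sum of binary inputs
    # reaches the threshold. Nothing is learned; weights are set by hand.
    def mp_unit(inputs, weights, threshold):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

    AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)
    OR  = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)
    NOT = lambda a:    mp_unit([a],    [-1],   threshold=0)

    print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0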

It seems that in recent years people have found it easier to argue that symbolic and connectionist approaches are antithetical and somehow inimical to each other, but I think that's more of an excuse to not have to learn at least a bit about both; which is hard work, no doubt.

______________

[1] https://en.wikipedia.org/wiki/Ryszard_S._Michalski

[2] It's available as a free download from Tom Mitchell's website:

http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/mlboo...

[3] Shannon, alongside John McCarthy and Marvin Minsky, was one of the organisers of the Dartmouth Conference where the term "Artificial Intelligence" was coined.

[4] https://people.idsia.ch/~juergen/genetic-programming-1987.ht...