Comment by js8

22 days ago

> Models are not AGI.

How do you know? What if AGI can be implemented as a reasonably small set of logic rules that implement what we call "epistemology" and "informal reasoning"? And this set of rules is just run in a loop, producing better and better models of reality. It might even include RL, for all we know.

And what if LLMs already know all these rules? Then they are AGI-complete without us knowing it.

To borrow from Dennett: we understand LLMs from the physical stance (they are neural networks) and the design stance (they predict the next token of language), but do we understand them from the intentional stance, i.e. what rules do they employ when running chain-of-thought, for example?

It's very simple. The model itself doesn't know and can't verify it. It knows that it doesn't know. Do you deny that? Or do you think that a general intelligence would be in the habit of lying to people and concealing why? At the end of the day, that would be not only unintelligent, but hostile. So it's very simple. And there is such a thing as "the truth", and it can be verified repeatably by anyone under the requisite (fair, accurate) circumstances, and it's not based on word games.

  • All I asked was for the OP to substantiate their claim that LLMs are not AGI. I am agnostic on that - either way seems plausible.

    I don't think there is even an agreed-upon criterion for what AGI is. Current models can easily pass the Turing test (except for some gotchas, but those don't really test intelligence).

    • Whatever people hope 'AGI' is, it would at least be able to confirm facts and know what verification means. LLMs don't have 'knowledge' and do not actually 'reason': heuristic vs. simulation. One can be made to approach the other, but only on a specific and narrow path. Someone who knows something can verify that they know it. An "intelligence" implies something doing operations based on rules, but LLMs cannot conform themselves to rules that require them to reason everything through. What people have hoped AGI would be could be trained to reliably adopt the practice of reasoning. That's necessary but maybe not sufficient, and I'm just gonna blame that on the term "intelligence" actually indicating a still relatively low level of what I will call "consciousness".


  • None of the above are even remotely epistemologically sound.

    "Or do you think that a general intelligence would be in the habit of lying to people and concealing why?"

    First, why couldn't it? "At the end of the day, that would be not only unintelligent, but hostile" is hardly an argument against it. We ourselves are AGI, yet we take both unintelligent and hostile actions all the time. And who said lying is unintelligent to begin with? As an AGI, it might very well be in my intelligent self-interest to lie about it.

    Second, why is "knows it and can verify it" a necessary condition? An AGI could very well not know that it is one.

    >And there is such a thing as "the truth", and it can be verified repeatably by anyone under the requisite (fair, accurate) circumstances, and it's not based on word games.

    Epistemologically speaking, this is hardly the slam-dunk argument you think it is.

    • No, you missed some of my sentences; you have to take the whole picture together. And I was not arguing with you to prove the existence of the truth. You are clearly bent on arguing against its existence, which tells me enough about you. We were talking about agents that operate in good faith and know that they are safe. When you're ready to have a discussion in good faith rather than hunting for counterarguments, you will find that what I said is verifiable. The question is not whether you think you can come up with an argument that sounds like it contradicts what I said.

      The question is not whether an AGI knows that it is an AGI. The question is whether it knows that it is not one. And you're missing the fact that there is no such thing here.

      If you go around acting hostile to good people, that's still not very intelligent. In fact, I would question whether you have any concept of why you're doing it at all. Chances are you're doing it to run from yourself, not because you know what you're doing.

      Anyway, you're just speculating, and the fact of the matter is that you don't have to speculate. If you actually wanted to verify what I said, it would be very easy to do so. It's no surprise that someone who doesn't want to know something will turn a deaf ear, so I'm not going to pretend I stand a chance of convincing you when I already know that my argument is accurate.

      Don't be so sure that you meet the criteria for AGI.

      And as for my slam dunk: any attempt to argue against the existence of truth automatically presupposes its existence. So don't make the mistake of assuming I had to argue for it. I was merely stating a fact.
