
Comment by josephg

10 months ago

I doubt it. The human mind is a probabilistic computer, at every level. There’s no set definition for what a chair is. It’s fuzzy. Some things are obviously in the category, and some are at the periphery of it. (Eg is a stool a chair? Is a log next to a campfire a chair? How about a tree stump in the woods? Etc). This kind of fuzzy reasoning is the rule, not the exception when it comes to human intuition.

There’s no way to use “rules and facts” to express concepts like “chair” or “grass”, or “face” or “justice” or really anything. Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.

> There’s no set definition for what a chair is.

Sure there is: a chair is anything upon which I can comfortably sit without breaking it.

  • I find this very amusing. In a philosophy of science course some 20+ years ago I had a wonderful prof who went through 3(?) periods of thought. He laid out this argument, followed by the arguments seen below in this thread in various ways, in a systematic fashion: he would convince you that one way of thinking was correct, you took the midterm, and then the next day he would lead with "everything you know is wrong, here's why." It was beautiful.

    He noted that this evolution of thought continued until people generally argued that concepts/definitions that let you do meaningful things (your definition of meaningful; it doesn't really matter what it is) are the way to go. The punchline at the very end, which happened to be the last thing I regurgitated on my last undergraduate exam, was him saying something along the lines of "Science, it beats hanging out in malls."

    All this to say that if we read a little philosophy of science, which worked through all of this a long time ago (way before the class I took), things would make more sense.

  • I have definitely broken chairs upon sitting in them that someone else could have sat in just fine. So it's unclear why something particular to me would change the chair-ness of an object.

    Similarly, I've sat in some very uncomfortable chairs. In fact, I'd say the average chair is not a particularly comfortable one.

  • > Sure there is: a chair is anything upon which I can comfortably sit without breaking it.

    « It is often said that a disproportionate obsession with purely academic or abstract matters indicates a retreat from the problems of real life. However, most of the people engaged in such matters say that this attitude is based on three things: ignorance, stupidity, and nothing else.

    Philosophers, for example, argue that they are very much concerned with the problems posed by real life.

    Like, for instance, “what do we mean by real?”, and “how can we reach an empirical definition of life?”, and so on.

    One definition of life, albeit not a particularly useful one, might run something like this: “Life is that property which a being will lose as a result of falling out of a cold and mysterious cave thirteen miles above ground level.”

    This is not a useful definition, (A) because it could equally well refer to the subject’s glasses if he happens to be wearing them, and (B) because it fails to take into account the possibility that the subject might happen to fall onto, say, the back of an extremely large passing bird.

    The first of these flaws is due to sloppy thinking, but the second is understandable, because the mere idea is quite clearly, utterly ludicrous. »

    — Douglas Adams

  • So a warm and smelly compost pile is a chair? A cold metal park bench is not a chair (because it's uncomfortable)?

  • A beanbag is a chair? Perhaps a chair should be something on which one can comfortably sit without breaking it, and that has a back and four legs. I suppose then a horse would be a chair.

> Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.

The counterposition to this is no more convincing: cognition is fuzzy, but it's not at all clear that it's probabilistic. I don't look at a stump and ascertain its chairness with a confidence of 85%, for example. The actual meta-cognition of "can I sit on this thing" is more like "it looks sittable, and I can try to sit on it, but if it feels unstable then I shouldn't sit on it." In other words, a defeasible inference.

(There's an entire branch of symbolic logic that models fuzziness without probability: non-monotonic logic[1]. I don't think these get us to AGI either.)

[1]: https://en.wikipedia.org/wiki/Non-monotonic_logic
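
Here is a minimal sketch of what that defeasible inference could look like if spelled out; the predicates and rules are invented for illustration, not taken from any particular non-monotonic formalism:

```python
# Hypothetical sketch of a defeasible inference: a default conclusion
# that stands until a specific exception (a "defeater") overrides it.
def looks_sittable(obj):
    return obj.get("flat_top", False) and obj.get("roughly_knee_height", False)

def may_sit(obj):
    # Default rule: if it looks sittable, conclude it is sittable ...
    verdict = looks_sittable(obj)
    # ... unless a defeater applies once we actually try it.
    if obj.get("feels_unstable", False):
        verdict = False
    return verdict

stump = {"flat_top": True, "roughly_knee_height": True}
print(may_sit(stump))                              # True, by default
print(may_sit({**stump, "feels_unstable": True}))  # False: the default is defeated
```

Note there is no probability anywhere in that sketch: the conclusion gets revised when new information arrives, not weighted.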

  • Which word will I pick next in this sentence? Is it deterministic? I probably wouldn’t respond the same way if I wrote this comment in a different mood, or at a different time of day.

    What I say is clearly not deterministic for you. You don’t know which word will come next. You have a probability distribution but that’s it. Banana.

    I caught a plane yesterday. I knew there would be a plane (since I booked it) and I knew where it would go. Well, except it wasn’t certain. The flight could have been delayed or cancelled. I guess I knew there would be a plane with 90% certainty. I knew the plane would actually fly to my destination with a 98% certainty or something. (There could have been a malfunction midair). But the probability I made it home on time rose significantly when I saw the flight listed, on time, at the airport.

    Who I sat next to was far less certain - I ended up sitting next to a 30-year-old electrician with a sore neck.

    My point is that there is so much reasoning we do all the time that is probabilistic in nature. We don’t even think about it. Other people in this thread are even talking about chairs breaking when you sit on them - every time you sit on a chair there’s a probability calculation you do to decide if the chair is safe, and will support your weight. This is all automatic.

    Simple “fuzzy logic” isn’t enough because so many probabilities change as a result of other events. (If the plane is listed on the departures board, the prediction goes up! There’s a sketch of this kind of update at the end of this comment.) All this needs to be modelled by our brains to reason in the world. And we make these calculations constantly with our subconscious. When you walk down the street, you notice who looks dangerous, who is likely to try and interact with you, and all sorts of things.

    I think that expert systems - even with some fuzzy logic - are a bad approach because systems never capture all of this reasoning. It’s everywhere all the time. I’m typing on my phone. What is the chance I miss a letter? What is the chance autocorrect fixes each mistake I make? And so on, constantly and forever. Examples are everywhere.
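
    Here is that sketch, with invented numbers, just to show how seeing the departures board moves the estimate:

    ```python
    # Hypothetical sketch: updating P(flight departs on time) after the
    # departures board lists it as "on time".
    prior = 0.90                    # belief before reaching the airport
    p_listed_if_on_time = 0.95      # board says "on time" when it really is
    p_listed_if_delayed = 0.30      # board still says "on time" when it isn't (yet)

    p_listed = p_listed_if_on_time * prior + p_listed_if_delayed * (1 - prior)
    posterior = p_listed_if_on_time * prior / p_listed
    print(f"{prior:.2f} -> {posterior:.2f}")  # roughly 0.90 -> 0.97
    ```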

    • To be clear, I agree that this is why expert systems fail. My point was only that non-monotonic logics and probability have equal explanatory power when it comes to unpredictability: the latter models with probability, and the former models with relations and defeasible defaults.

      This is why I say the meta-cognitive explanation is important: I don’t think most people assign actual probabilities to events in their lives, and certainly not rigorous ones in any case. Instead, when people use words like “likely” and “unlikely,” they’re typically expressing a defeasible statement (“typically, a stranger who approaches me on the street is going to ask me for money, but if they’re wearing a suit they’re typically a Jehovah’s Witness instead”).


  • > I don't look at a stump and ascertain its chairness with a confidence of 85%

    But I think you did. Not consciously, but I think your brain definitely did.

    https://www.nature.com/articles/415429a
    https://pubmed.ncbi.nlm.nih.gov/8891655/

    • These papers don't appear to say that: the first one describes the behavior as statistically optimal, which is exactly what you'd expect for a sound set of defeasible relations.

      Or intuitively: my ability to determine whether a bird flies or not is definitely going to be statistically optimal, but my underlying cognitive process is not itself inherently statistical: I could be looking at a penguin and remembering that birds fly by default except when they're penguins, and only then if the penguin isn't wearing a jetpack. That's a non-statistical set of relations, but its external observation is modeled statistically.
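
      To make that distinction concrete, here is a toy sketch (invented rules, not taken from either paper): the rules below are fully deterministic, yet an outside observer who only records outcomes ends up with a probability.

      ```python
      # Hypothetical sketch: deterministic defeasible rules that look
      # statistical only when observed from the outside.
      import random

      def flies(bird):
          # Default: birds fly, unless they are penguins,
          # unless the penguin is wearing a jetpack.
          if bird["species"] == "penguin":
              return bird.get("jetpack", False)
          return True

      population = [{"species": random.choice(["sparrow", "penguin"])}
                    for _ in range(10_000)]
      observed = sum(flies(b) for b in population) / len(population)
      print(f"observed P(flies) ~ {observed:.2f}")  # about 0.5 with this toy mix
      ```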


> The human mind is a probabilistic computer, at every level.

Fair enough, but an airplane's wing is not very similar to a bird's wing.

  • That argument would hold a lot more weight if Cyc could fly. But as this article points out, decades of work and millions of dollars have utterly failed to get it off the ground.

    • Right, but as others have pointed out, the amount of money invested in Cyc is approximately two orders of magnitude less than what was invested in LLMs. So maybe the method was OK, but it was insufficiently resourced.


> There’s no way to use “rules and facts” to express concepts like “chair” or “grass”, or “face” or “justice” or really anything. Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.

Are you sure? In terms of theoretical foundations for AGI, AIXI is probabilistic, but Gödel machines are proof-based, and I think they'd meet the criteria for deterministic / symbolic. Non-monotonic and temporal logics also exist, where chairness exists as a concept that might be revoked if 2 or more legs are missing. If you really want to get technical, then by allowing logics with continuous time and changing discrete truth values you can probably manufacture a fuzzy logic where time isn't considered but truth/certainty values are continuous. Your ideas about logic might be too simple; it's more than just Aristotle.

  • Not the person you are replying to, just FYI.

    I don't know, it all seems like language games to me. The meaning is never in the grammar, but in the usage. The usage is arbitrary and capricious. I've not discovered how more nuanced forms of logic have ever really grappled with this.

  • In the 1980s, we used to talk about the silliness of the “grandmother neuron” - the idea that one neuron would capture an important thing, rather than a distributed representation.

> This kind of fuzzy reasoning is the rule, not the exception when it comes to human intuition.

That is indeed true. But we do have classic fuzzy logic, and it can be used to answer these questions. E.g. a "stool" may be a "chair", but an "automobile" is definitely not.
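
For instance, a minimal sketch with made-up membership degrees, just to show the idea of graded truth rather than a yes/no answer:

```python
# Hypothetical sketch of classic fuzzy membership: "chairness" is a degree
# in [0, 1] instead of a boolean.
chairness = {
    "dining chair": 1.0,
    "stool":        0.7,   # maybe a chair
    "tree stump":   0.3,
    "automobile":   0.0,   # definitely not
}

def fuzzy_and(a, b):   # standard min/max connectives
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

# "Is there something chair-like at the campsite?" given a stump and a log (log rated 0.2)
print(fuzzy_or(chairness["tree stump"], 0.2))  # 0.3
```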

Maybe the symbolic logic approach could work if it's connected with ML? Maybe we can use a neural network to plot a path in the sea of assertions? Cyc really seems like something that can benefit the world if it's made open under some reasonable conditions.

  • > That is indeed true. But we do have classic fuzzy logic, and it can be used to answer these questions. E.g. a "stool" may be a "chair", but an "automobile" is definitely not.

    I’m not convinced that classical fuzzy logic will ever solve this - at least not if every concept needs to be explicitly programmed in. What a “chair” is sort of subtly changes at a furniture store and at a campsite. Are you going to have someone explicitly, manually program all of those nuances in? No way! And without that subtlety, you aren’t going to end up with a system that’s as smart as ChatGPT. Challenge me on this if you like, but we can play this game with just about any word you can name - more or less everything except for pure mathematics.

    And by the way, modern ML approaches understand all of those nuances just fine. It’s not clear to me what value - if any - symbolic logic / expert systems provide that ChatGPT isn’t perfectly capable of learning on its own already.

"The human mind is a probabilistic computer, at every level."

We don't know that. It's mostly probabilistic, but the fact that innate behavior exists suggests some parts might be deterministic.

Words are used due to the absence of things. They fill an immediate experiential void and stand in for something else, because you want or need another person to evoke some fantasy to fill this absence and make understanding possible.

If you have a mind and it is a computer, then it is because of nurture, because the brain is nothing like a computer, and computers simulating language are nothing like brains.

That is not what is suggested. The LLM is still a fuzzy mess, but the supervisor / self-editing layer is rules-based.

The way I see it:

(1) There is kind of a definition of a chair. But it's very long. Like, extremely long, and it includes maybe even millions to billions of logical expressions, assuming your definition might need to use visual or geometric features of a given object to classify it as a chair (or not a chair).

This is a kind of unification of neural networks (in particular LLMs) and symbolic thought: a large enough symbolic system can simulate NNs and vice versa. Indeed, even the fact that NNs are soft and fuzzy does not matter theoretically; it's easy to show that logical circuits can simulate soft and fuzzy boundaries (in fact, that's how NNs are implemented in real hardware: as binary logic circuits). But I think specific problems have more natural formulations, to varying degrees, either as arithmetic, probabilistic, linear, or fuzzy logic on one hand, or as binary, boolean-like logic on the other. Or natural formulations could involve arbitrary mixes of them.
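
As a toy illustration of that simulation claim (all numbers invented): a soft, graded boundary computed with nothing but integer operations, the way fixed-point hardware ultimately does it.

```python
# Hypothetical sketch: a "soft" boundary built from discrete integer
# arithmetic only, standing in for a binary logic circuit.
SCALE = 256  # fixed point: 256 represents 1.0

def soft_step(x, threshold, width):
    """Graded truth value ramping from 0 to SCALE around the threshold."""
    d = x - threshold
    if d <= -width:
        return 0
    if d >= width:
        return SCALE
    return (d + width) * SCALE // (2 * width)

# "Tall enough to sit on" as a graded, not binary, predicate (heights in cm).
for height in (10, 35, 45, 55, 80):
    print(height, soft_step(height * SCALE, 45 * SCALE, 15 * SCALE) / SCALE)
```

The outputs vary smoothly even though every operation underneath is discrete, which is the point: fuzziness at one level of description is compatible with hard logic at another.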

(2) For humans, the actual definitions (although they may be said to exist in a certain way at a given time[1]) vary with time. We can, and do, invent new stuff all the time, and often extend or reuse old concepts. For example, I believe the word 'plug' in English likely well predates the modern age, and was probably then applied to the original electrical power connectors. Nowadays there are USB plugs, which may not carry power at all, or audio plugs, etc. (maybe there are better examples). In any case the pioneer(s) usually did not envision everything a name could be used for, and uses evolve.

(3) Words are used as tools to allow communication and, crucially, thought. There comes a need to put a fence (or maybe a mark) in abstract conceptual and logical space, and we associate that with a word. Really a word could be "anything we want to communicate" and represent anything, in particular changes to the states of our minds, and those states themselves. That's usually too general: most words are probably nouns which represent classifications of objects that exist in the world (like the aforementioned chair) -- the 'mind state' definition is probably general enough to cover words like 'sadness', 'amazement', etc., and 'mind state transitions' can probably account for everything else.

We use words (and associated concepts) to dramatically reduce the complexity of the world to enable or improve planning. We can then simplify our tasks into a vastly simpler logical plan, even something simple like: put on shoes, open door, go outside, take train, get to work. Without segmenting the world into things and concepts (it's hard to even imagine thought without using concepts at all -- it probably happens instinctively), the number of possibilities involved in planning and acting would be overwhelming.

Obligatory article about this: https://slatestarcodex.com/2014/11/21/the-categories-were-ma...

---

Now this puts into perspective the work of formalizing things, in particular concepts. If you're formalizing concepts to create a system like Cyc, and you expect it to be cheap, simple, reliable, and to keep functioning well into the future, then by the observations above that should fail. However, formalization is still possible, even if expensive, complex, and possibly ever-changing.

There are still reasons you may want to formalize things, in particular to acquire a deeper understanding of those things, or when you're okay with creating definitions set in stone because they will be confined to a group that is attentive and restrictive about its formal definitions (rather than, as with natural language, evolving organically according to convenience): that's the case with mathematics. The Peano axioms still define the same natural numbers; and although names may be reused, you can usually pin them to a particular axiomatic definition that will never change. And thus we can keep building facts on those foundations forever -- while what a 'plug' is in natural language might change (and associated facts about plugs become invalid), we can define mathematical objects (like the 'natural numbers') with unchanging properties, and an ever-valid and potentially ever-growing body of facts to be known about them, reliably. So fixing concepts in stone, more or less (at least when it comes to a particular axiomatization), is not such a foolish endeavor as it may look; quite the opposite! Science in general benefits from those solid foundations.
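
To make the "set in stone" point concrete, here is a tiny sketch in Lean-style syntax (names invented): once such a definition is fixed, theorems proved against it never need to be revised.

```lean
-- Hypothetical sketch: a Peano-style definition of the natural numbers.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

open MyNat

def add : MyNat → MyNat → MyNat
  | m, zero   => m
  | m, succ n => succ (add m n)

-- True by definition, and it stays true for as long as this is the
-- definition we point at.
theorem my_add_zero (m : MyNat) : add m zero = m := rfl
```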

I think eventually even some concepts related to human emotions and especially ethics will be (with varying degrees of rigor) formalized to be better understood. This doesn't mean human language should (or will) stop evolving and being fuzzy; it can do so independently of its more rigid formal counterparts. Both aspects are useful.

[1] In the sense that, at a given fixed time, you could (theoretically) spend an enormous effort to arrive at a giant rule system that would probably satisfy most people and cover most objects referred to as chairs.