Comment by hiAndrewQuinn

6 months ago

I'm a mind-body dualist and just happened to come across this list, and I think it's an interesting one. #1 we can answer Yes to, #2 through #6 are all strictly unknowable. The best we might be able to claim is some probability distribution that these things may or may not be conscious.

The intuitive ordering looks like 100% chance > P(#2 is conscious) > P(#6) > P(#3) > P(#4) > P(#5) > 0% chance, but the problem is solipsism is a real motherfucker and it's entirely possible qualia are meted out based on some wacko distance metric that couldn't possibly feel intuitive. There are many more such metrics out there than there are intuitive ones, so a prior of indifference doesn't help us much. Any ordering could theoretically be the ontologically privileged one; we simply have no way of knowing.

I think you've fallen into the trap of Descartes' Deus deceptor! Not only is #1 the only question from my list we can definitely answer yes to, but due to this demon this question is actually the only postulate of anything at all that we can answer yes to. All else could be an illusion.

Assuming we escape the null space of solipsism, and can reason about anything at all, we can think about what a model might look like that generates some ordering of P(#). Of course, without a hypothetical consciousness detector (one might or might not believe such a thing could exist) P(#) cannot be measured, and therefore falls outside the realm of the hypothetico-deductive model of science. This is often a point of contention for rationality-pilled science-cels.

Some of these models might be incoherent - a model that denies P(#1) doesn't seem very good, and a model that denies P(#2) but accepts P(#3) is a bit strange. We can't verify these, but we do need to operate under one (or, per your suggestion, under a probability distribution over these models) if we want to make coherent statements about what is and isn't conscious.

  • To be explicit, my P(#) is meant to be the Bayesian probability an observer gives to # being conscious, not the proposition P that # is conscious. It's meant to model Descartes's deceptor, as well as disagreement of the kind, "My friend thinks week-28 fetuses are probably (~80%) conscious, and I think they're probably (~20%) not." P(week-28 fetuses) itself is not true or false.

    I don't think it's incoherent to make probabilistic claims like this. It might be incoherent to make deeper claims about what laws govern the distribution itself. Either way, what I think is interesting is that, if we also think there is such a thing as an amount of consciousness a thing can have, as in the panpsychist view, these two things create an inverse-square law of moral consideration that matches the shape of most people's intuitions oddly well.

    For example: Let's say a rock is probably not conscious, P(rock) < 1%. Even if it is, it doesn't seem like it would be very conscious. A low probability of a low amount multiplies to a very low expected value, and that matches our intuitions about how much value to give rocks.
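    The multiplication above can be sketched directly. Everything here is illustrative: the probabilities and "degree of consciousness" numbers are invented stand-ins, since by the argument above P(#) can't actually be measured.

    ```python
    def expected_moral_weight(p_conscious: float, degree_if_conscious: float) -> float:
        """Expected moral consideration: probability the entity is conscious
        times the assumed amount of consciousness it has if it is.
        Both inputs are subjective estimates, not measurements."""
        return p_conscious * degree_if_conscious

    # A rock: low probability of consciousness, and a low degree even if conscious.
    rock = expected_moral_weight(0.01, 0.001)

    # A week-28 fetus, under two observers' differing probability estimates
    # (the 0.9 "degree" figure is an arbitrary placeholder).
    fetus_friend = expected_moral_weight(0.80, 0.9)
    fetus_self = expected_moral_weight(0.20, 0.9)

    # The rock's expected weight is orders of magnitude below either fetus estimate,
    # even though the two observers disagree sharply with each other.
    print(rock, fetus_self, fetus_friend)
    ```

    The point of the sketch is just that disagreement about P(#) can coexist with broad agreement about the ordering of expected moral weights.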

    • Ah, I understand - you're exactly right, I misinterpreted the notation of P(#). I was considering each model as assigning binary truth values to the propositions (e.g., physicalism might reject all but Postulate #1, while an anthropocentric model might affirm only #1, #2, and #6), and modeling the probability distribution over those models instead. I think the expected-value computation ends up with the same downstream result: distributions over propositions.

      By incoherent I was referring to the internal inconsistencies of a model, not the probabilistic claims - i.e., a model that denies your own consciousness but accepts the consciousness of others is a difficult one to defend. I agree with your statement here.
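      That model-averaging picture can be sketched as follows. The models, the postulates they affirm, and the weights over them are all invented for illustration; marginalizing over models recovers a per-postulate probability, which is why the two framings agree downstream.

      ```python
      # Hypothetical models, each assigning binary verdicts to postulates #1..#6,
      # with observer-assigned probabilities over the models themselves.
      models = {
          # name: (P(model), set of postulates the model affirms)
          "solipsism":       (0.05, {1}),
          "anthropocentric": (0.35, {1, 2, 6}),
          "broad":           (0.60, {1, 2, 3, 4, 6}),
      }

      def p_conscious(postulate: int) -> float:
          """Marginal probability of a postulate: sum the weights of
          every model that affirms it."""
          return sum(w for w, affirmed in models.values() if postulate in affirmed)

      # Every model here affirms #1 (one's own consciousness), so its
      # marginal probability is the full weight; no model affirms #5,
      # so its marginal probability is zero.
      print(p_conscious(1), p_conscious(2), p_conscious(5))
      ```

      Under this framing, "incoherent models" like one denying #1 but affirming #2 can simply be assigned weight zero, and the resulting distribution over propositions matches the one you'd state directly.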

      Thanks for your comment; I enjoyed thinking about this. I learned the estimating-distributions approach from the rationalist/betting/LessWrong folks and think it works really well, but I've never thought much about how it applies to something unfalsifiable.
