Comment by elliotto
6 months ago
I think you've fallen into the trap of Descartes' Deus deceptor! Not only is #1 the only question from my list we can definitely answer yes to, but, due to this demon, it is actually the only postulate about anything at all that we can answer yes to. All else could be an illusion.
Assuming we escape the null space of solipsism and can reason about anything at all, we can think about what a model might look like that generates some ordering of P(#). Of course, without a hypothetical consciousness detector (one might or might not believe such a thing could exist), P(#) cannot be measured, and therefore falls outside the realm of the hypothetico-deductive model of science. This is often a point of contention for rationality-pilled science-cels.
Some of these models might be incoherent - a model that denies P(#1) doesn't seem very good, and a model that denies P(#2) but accepts P(#3) is a bit strange. We can't verify these models, but we do need to operate under one (or, per your suggestion, under a probability distribution over these models) if we want to make coherent statements about what is and isn't conscious.
To be explicit, my P(#) is meant to be the Bayesian probability an observer assigns to # being conscious, not the proposition P that # is conscious. It's meant to model Descartes's deceptor, as well as disagreement of the kind, "My friend thinks week-28 fetuses are probably (~80%) conscious, and I think they're probably (~20%) not." P(week-28 fetuses) itself is not true or false.
I don't think it's incoherent to make probabilistic claims like this. It might be incoherent to make deeper claims about what laws govern the distribution itself. Either way, what I think is interesting is that, if we also think there is such a thing as an amount of consciousness a thing can have, as in the panpsychist view, these two quantities combine into an inverse-square law of moral consideration that matches the shape of most people's intuitions oddly well.
For example: let's say a rock is probably not conscious, P(rock) < 1%. Even if it is, it doesn't seem like it would be very conscious. A low probability of a low amount multiplies out to a very low expected value, and that matches our intuitions about how much value to give rocks.
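To make the arithmetic concrete, here's a minimal sketch of that expected-value computation. The entities, probabilities, and "amount of consciousness" scores are illustrative placeholders I've made up, not measurements of anything:

```python
# Expected moral weight = P(entity is conscious) * amount of consciousness if it is.
# All numbers below are invented for illustration.

def expected_moral_weight(p_conscious: float, amount_if_conscious: float) -> float:
    """E[consciousness] under the two-factor view discussed above."""
    return p_conscious * amount_if_conscious

entities = {
    "rock":          (0.01, 0.001),  # probably not conscious, and barely so even if it is
    "week-28 fetus": (0.50, 0.30),   # splitting the difference between the two friends above
    "adult human":   (0.99, 1.00),   # near-certain, full amount (normalized to 1)
}

for name, (p, amount) in entities.items():
    print(f"{name}: E[consciousness] = {expected_moral_weight(p, amount):.4f}")
```

The rock scores low on both factors, so its expected value is vanishingly small, while the fetus case lands in between, which is where the disagreement between the two observers does its work.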
Ah, I understand, you're exactly right: I misinterpreted the notation P(#). I was considering each model as assigning binary truth values to the propositions (e.g., physicalism might reject all but Postulate #1, while an anthropocentric model might affirm only #1, #2, and #6), and modeling the probability distribution over those models instead. I think the expected-value computation ends up with the same downstream result: distributions over the propositions.
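As a sketch of that equivalence: a credence-weighted distribution over models, each assigning binary truth values to the postulates, marginalizes out to a probability for each proposition. The physicalist and anthropocentric assignments follow the examples above; the panpsychist model, the credences, and the remaining truth values are invented for illustration:

```python
# Each model maps postulate -> bool (does the model affirm it?).
# Credences are one observer's probability distribution over the models.
models = {
    "physicalism":     {"#1": True, "#2": False, "#3": False, "#6": False},
    "anthropocentric": {"#1": True, "#2": True,  "#3": False, "#6": True},
    "panpsychism":     {"#1": True, "#2": True,  "#3": True,  "#6": True},
}
credences = {"physicalism": 0.3, "anthropocentric": 0.5, "panpsychism": 0.2}

def marginal(postulate: str) -> float:
    """P(postulate) = total credence of the models that affirm it."""
    return sum(w for m, w in credences.items() if models[m][postulate])

for p in ("#1", "#2", "#3", "#6"):
    print(f"P({p}) = {marginal(p):.2f}")
```

Here every model affirms #1, so P(#1) = 1.00, while the others come out fractional (e.g., P(#2) = 0.70), which is exactly a distribution over the propositions.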
By incoherent I was referring to the internal inconsistencies of a model, not to the probabilistic claims. I.e., a model that denies your own consciousness but accepts the consciousness of others is a difficult one to defend. I agree with your statement here.
Thanks for your comment, I enjoyed thinking about this. I learned the estimating-distributions approach from the rationalist/betting/LessWrong folks and think it works really well, but I've never thought much about how it applies to something unfalsifiable.
You're welcome! Assigning probability distributions to inherently unfalsifiable claims is exotic territory at first, but when I watch actual philosophers debate in the wild, I often find a back-and-forth of such claims that looks very much like two people shifting likelihood values around. I take this as evidence that such a process is what's "really" going on one level removed from the arguments and their background assumptions themselves.