Btw, what do you think about/are you familiar with work on logical induction?
We present a computable algorithm that assigns probabilities to every logical statement in a given formal language, and refines those probabilities over time. For instance, if the language is Peano arithmetic, it assigns probabilities to all arithmetical statements, including claims about the twin prime conjecture, the outputs of long-running computations, and its own probabilities. We show that our algorithm, an instance of what we call a logical inductor, satisfies a number of intuitive desiderata, including: (1) it learns to predict patterns of truth and falsehood in logical statements, often long before having the resources to evaluate the statements, so long as the patterns can be written down in polynomial time; (2) it learns to use appropriate statistical summaries to predict sequences of statements whose truth values appear pseudorandom; and (3) it learns to have accurate beliefs about its own current beliefs, in a manner that avoids the standard paradoxes of self-reference.
I love that work! And I think this fits in nicely with another comment that you make below about the principle of indifference. The problem, as I see it, is that you have an agent who adopts some credences and a belief structure that defines a full distribution over a set of propositions. It’s either consistent or inconsistent with that distribution to assign some variable X a strictly positive probability. But, let’s suppose, a Turing machine can’t determine that in polynomial time. As I understand Garrabrant et al., I’m free to pick any credence I like, since logical inconsistencies are only a problem if they allow you to be Dutch-booked in polynomial time. As a way of thinking about reasoning under logical uncertainty, it’s ingenious. But once we start thinking about our personal probabilities as guides to what we ought to do, I get nervous. Note that just as I’m free to assign X a strictly positive probability distribution under Garrabrant’s criterion, I’m also free to assign it a distribution that allows for probability zero (even if that ends up being inconsistent, by stipulation I can’t be Dutch-booked in polynomial time). One could imagine a precautionary principle that says, in these cases, to always pick a strictly positive probability distribution. But then again I’m worried that once we allow for all these conceivable events that we can’t figure out much about to have positive probability, we’re opening the floodgates for an ever-more-extreme apportionment of resources to lower-and-lower probability catastrophes.
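To make the “Dutch-booked in polynomial time” idea concrete, here is a minimal one-shot sketch in Python. It is my own toy illustration, not Garrabrant et al.’s trading formalism, and the function names and prices are made up: a share of a sentence costs its quoted price and pays 1 if the sentence turns out true, so quoted prices on phi and its negation that don’t sum to 1 hand a very simple trader a guaranteed profit.

```python
# Toy, one-shot Dutch book against incoherent prices on a sentence phi and its
# negation (illustration only; the actual criterion concerns repeated trading
# against all polynomial-time traders, not this single trade). A share of a
# sentence costs its quoted price and pays 1 if the sentence is true.

def dutch_book_trade(p_phi, p_not_phi):
    """Return (shares of phi, shares of not-phi) guaranteeing a profit,
    or None if the quoted prices already sum to 1."""
    total = p_phi + p_not_phi
    if total < 1.0:
        return (1.0, 1.0)    # buy both: pay `total`, collect exactly 1
    if total > 1.0:
        return (-1.0, -1.0)  # sell both: collect `total`, pay out exactly 1
    return None              # coherent prices: no sure profit

def guaranteed_profit(trade, p_phi, p_not_phi):
    """Worst-case profit of the trade over the two possible worlds."""
    if trade is None:
        return 0.0
    s_phi, s_not_phi = trade
    cost = s_phi * p_phi + s_not_phi * p_not_phi
    payoff_if_phi_true = s_phi * 1.0 + s_not_phi * 0.0
    payoff_if_phi_false = s_phi * 0.0 + s_not_phi * 1.0
    return min(payoff_if_phi_true, payoff_if_phi_false) - cost

prices = (0.7, 0.6)                      # incoherent: the probabilities sum to 1.3
trade = dutch_book_trade(*prices)
print(trade, guaranteed_profit(trade, *prices))   # (-1.0, -1.0), profit ~0.3 for sure
```

The criterion only punishes incoherence that a cheap trader like this can find; an inconsistency whose detection would require, say, an astronomically long proof goes unpenalized, which is what licenses picking any credence you like in the intractable cases.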
But then again I’m worried that once we allow for all these conceivable events that we can’t figure out much about to have positive probability, we’re opening the floodgates for an ever-more-extreme apportionment of resources to lower-and-lower probability catastrophes.
I don’t remember the scheme off the top of my head, but this doesn’t seem right. If you assign probability 0, you would take any odds, and so I could make a lot of money when you eventually shift to a non-zero probability.
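A minimal sketch of that exploitation, again with made-up numbers rather than the paper’s trader formalism: buying a sentence at price 0 costs nothing, so every later upward revision of the price is pure profit, and the profit scales with how long the zero assignment is held.

```python
# Toy sketch of exploiting a probability-zero assignment (illustrative numbers
# only). Buying a share of phi at price 0 is free, so any later upward revision
# of phi's price is pure profit for the buyer.

def exploit_zero_price(price_history):
    """Buy one share of phi on every day it is priced at 0, then mark the
    accumulated holdings to market at the final quoted price."""
    shares = sum(1 for p in price_history if p == 0.0)
    cost = 0.0  # every purchase happened at price 0
    return shares * price_history[-1] - cost

# The reasoner insists on probability 0 for a hundred days, then nudges the
# price up to 0.05 (say, after noticing it cannot actually rule phi out).
history = [0.0] * 100 + [0.05]
print(exploit_zero_price(history))  # 100 free shares * 0.05 = 5.0
```

The longer the zero-probability phase lasts, the more free shares accumulate, so the trader’s gain grows without bound, which is the sense in which the assignment is exploitable.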
But then again I’m worried that once we allow for all these conceivable events that we can’t figure out much about to have positive probability, we’re opening the floodgates for an ever-more-extreme apportionment of resources to lower-and-lower probability catastrophes.
Right, but then that seems like a different objection, e.g., a reluctance to take Pascal’s wager-type deals, or some preference related to your risk aversion, or some objection to expected value calculations under not-particularly-resilient low probabilities. But then that feels more like the true objection, not the computational complexity part. Would you say that’s a fair characterization?
I do think that the issues with Pascal’s wager-type deals are compounded by the possibility that the positive probability you assign to the relevant outcome might be inconsistent with other beliefs you have (and settling the question of consistency is computationally intractable). In the classic Pascal’s wager, there’s no worry about internal inconsistency in your credences.
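To put the floodgates worry in numbers (purely hypothetical figures, not anyone’s actual risk estimates): if each harder-to-assess catastrophe is granted some positive probability and the postulated stakes grow faster than the probabilities shrink, a naive expected-value calculation recommends ever more resources for ever less probable scenarios.

```python
# Hypothetical numbers only: a caricature of the "floodgates" worry. Each row is
# a catastrophe we can't figure out much about, given a positive probability and
# an estimate of the harm averted by acting on it.
catastrophes = [
    (1e-3, 1e6),   # reasonably well-understood risk
    (1e-6, 1e10),  # speculative risk
    (1e-9, 1e14),  # barely conceivable risk
]

for prob, harm in catastrophes:
    expected_benefit = prob * harm
    print(f"p = {prob:.0e}, harm = {harm:.0e}, expected benefit of acting = {expected_benefit:.0e}")

# Output: 1e+03, 1e+04, 1e+05 -- the less probable the catastrophe, the larger
# the naive expected benefit of acting on it, because the stakes grow faster
# than the probability shrinks.
```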