Does he think the maximality rule from Maximal Cluelessness is hopelessly too permissive, e.g. between any two options, it will practically never tell us which is better?
I have a few ideas on ways you might be able to get more out of it that I’d be interested in his thoughts on, although this may be too technical for an interview:
More structure on your set of probability distributions, or just generally a smaller range of probability distributions you entertain. For example, you might not be willing to put precise probabilities on exclusive possibilities A and B, but you might be willing to say that A is more likely than B, and this cuts down the kinds of probability distributions you need to entertain. Or maybe you have a sense that some probability distributions are more “likely” than others, while still being unable to put precise probabilities on those distributions, and this gives you more structure.
Discounting sufficiently low probabilities or using bounded social welfare functions (possibly with respect to the difference you make, or aggregating from 0), so that extremely unlikely but extreme possibilities (or tiny differences in probabilities for extreme outcomes) don’t tip the balance.
Targeting outcomes with the largest expected differences in value, with relatively small indirect effects from actions targeting them, e.g. among existential risks.
Portfolio approaches, e.g. hedging and diversification.
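To make the first idea concrete, here is a toy sketch (my own construction, not anything from the paper) of the maximality rule over a finite set of probability distributions. One option beats another only if it has higher expected value under every distribution in the set; adding the qualitative judgement "A is at least as likely as B" shrinks the set and can turn a stalemate into a verdict:

```python
# Toy maximality rule over a finite credal set (illustrative only).
# States: two exclusive possibilities A and B.
# A distribution is a tuple (P(A), P(B)).

def expected_value(dist, utilities):
    """Expected utility of an option under one distribution."""
    return sum(p * u for p, u in zip(dist, utilities))

def maximality_prefers(credal_set, u_x, u_y):
    """X beats Y iff EU(X) > EU(Y) under EVERY distribution in the set."""
    return all(expected_value(d, u_x) > expected_value(d, u_y)
               for d in credal_set)

# Wide credal set: no constraint relating P(A) and P(B).
wide = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]

# Narrower set after imposing the judgement P(A) >= P(B).
narrow = [d for d in wide if d[0] >= d[1]]

u_x = (10, -5)   # hypothetical option X: good if A, bad if B
u_y = (0, 0)     # hypothetical option Y: do nothing

print(maximality_prefers(wide, u_x, u_y))    # False: no verdict
print(maximality_prefers(narrow, u_x, u_y))  # True: X now beats Y
```

With the wide set, the distribution (0.1, 0.9) ranks Y above X, so maximality stays silent; once P(A) ≥ P(B) is imposed, X wins under every remaining distribution.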
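The probability-discounting and bounded-value ideas can also be sketched in a few lines. This is a hedged toy formulation of my own (the threshold and the tanh cap are arbitrary choices, not a standard method): drop outcomes below a probability threshold, and pass raw value through a bounded function so extreme payoffs saturate rather than dominating:

```python
import math

def bounded_value(v, cap=100.0):
    """Bounded transform of raw value; saturates at +/- cap."""
    return cap * math.tanh(v / cap)

def discounted_eu(outcomes, threshold=1e-6):
    """Expected bounded value, ignoring sufficiently unlikely outcomes.

    `outcomes` is a list of (probability, raw_value) pairs; kept
    probabilities are renormalised after the tiny ones are dropped.
    """
    kept = [(p, v) for p, v in outcomes if p >= threshold]
    total = sum(p for p, _ in kept)
    return sum(p * bounded_value(v) for p, v in kept) / total

# A one-in-a-billion astronomical payoff no longer tips the balance:
lottery = [(1e-9, 1e12), (1.0 - 1e-9, 1.0)]
print(discounted_eu(lottery))  # ~bounded value of the near-certain outcome
```

Here the 10^-9 chance of an astronomical payoff is discarded outright, and even if the threshold were lowered, the bounded transform would cap its contribution.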
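Finally, a toy illustration of the portfolio/hedging idea (again my own construction): when your distributions disagree about which state obtains, a mix of two actions whose payoffs anti-correlate across states can have a better worst-case expected value than either action alone:

```python
def worst_case_eu(credal_set, utilities):
    """Minimum expected utility across all distributions in the set."""
    return min(sum(p * u for p, u in zip(d, utilities))
               for d in credal_set)

# Two distributions that disagree about which state is likely.
credal = [(0.2, 0.8), (0.8, 0.2)]

act_a = (10, -10)  # hypothetical action that pays off in state 1
act_b = (-10, 10)  # hypothetical action that pays off in state 2
mix = tuple(0.5 * a + 0.5 * b for a, b in zip(act_a, act_b))

print(worst_case_eu(credal, act_a))  # -6.0
print(worst_case_eu(credal, act_b))  # -6.0
print(worst_case_eu(credal, mix))    # 0.0: the hedge dominates in the worst case
```

Either pure action risks a badly negative expected value under one of the distributions, while the 50/50 portfolio is robust across the whole set.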