I’m thinking that in practice, it might just be better to explicitly consider different distributions and do a sensitivity analysis on the expected value. You could maximize the minimum expected value over the alternative distributions (although maybe there are better alternatives?). This is especially helpful when there are specific parameters you’re particularly concerned about and you can be honest with yourself about what a reasonable person could believe about their values, i.e. you can justify ranges for them.
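To make that concrete, here’s a minimal sketch in Python of that kind of maximin sensitivity analysis; the candidate distributions, payoff function, and numbers are all made up for illustration:

```python
# Maximin sensitivity analysis over a few explicitly specified alternative
# distributions for an uncertain parameter (all choices here are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

# Alternative beliefs a reasonable person might hold about the key parameter
candidate_distributions = {
    "optimistic":  lambda n: rng.lognormal(mean=1.0, sigma=0.5, size=n),
    "baseline":    lambda n: rng.lognormal(mean=0.5, sigma=0.7, size=n),
    "pessimistic": lambda n: rng.lognormal(mean=-0.5, sigma=1.0, size=n),
}

def expected_value(scale, sampler, n=100_000):
    # Toy payoff: benefit proportional to the uncertain parameter,
    # with a quadratic cost of scaling the intervention up.
    theta = sampler(n)
    return float(np.mean(scale * theta) - 0.5 * scale**2)

actions = [0.5, 1.0, 2.0]  # hypothetical scales of an intervention

# Worst-case expected value of each action across the candidate distributions
worst_case = {
    a: min(expected_value(a, sampler) for sampler in candidate_distributions.values())
    for a in actions
}
best_action = max(worst_case, key=worst_case.get)
print(worst_case, best_action)
```

With these made-up numbers, the baseline distribution alone would favor the largest intervention, but the maximin choice backs off to the middle one because of the pessimistic alternative, which is the kind of shift the sensitivity analysis is meant to surface.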
Maybe it’s good to do both, though: considering other specific distributions could capture most of your known potential biases in the cases where you suspect the bias could be large (and where you don’t think your risk of bias is as high in ways other than the ones covered), while the approach you describe can capture further, unknown potential biases.
Cross-validation could help set ψ when your data follows relatively predictable trends (with deviations that are close to random), but it could be a problem for questions with little precedent, like transformative AI/AGI.
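To illustrate what I mean, here’s a rough sketch of cross-validating ψ. Everything in it is hypothetical: I’m assuming ψ is the radius of a KL neighbourhood around the reference distribution (as in Watson and Holmes’ setup), and I’m using a toy payoff and synthetic past outcomes in place of a real model:

```python
# Rough sketch (hypothetical model, payoff, and numbers) of calibrating the
# robustness radius psi by K-fold cross-validation: for each candidate psi,
# choose the action maximizing the KL-constrained worst-case average utility
# on the training fold, then record that action's realized average utility on
# the held-out fold; keep the psi with the best held-out performance.
import numpy as np

def kl_worst_case(utils, psi):
    """Minimum expected utility over reweightings Q of the sample points with
    KL(Q || empirical P) <= psi, via exponential tilting of the weights
    (the standard form of the KL-neighbourhood worst case)."""
    if psi <= 0:
        return float(utils.mean())
    lo, hi = 1e-6, 1e6  # bracket for the tilt temperature; KL decreases in it
    for _ in range(100):
        lam = np.sqrt(lo * hi)
        w = np.exp(-(utils - utils.min()) / lam)
        w /= w.sum()
        nz = w[w > 0]
        kl = float(np.sum(nz * np.log(nz * len(utils))))
        lo, hi = (lam, hi) if kl > psi else (lo, lam)
    w = np.exp(-(utils - utils.min()) / hi)  # feasible tilt (KL <= psi)
    w /= w.sum()
    return float(np.sum(w * utils))

def utility(action, outcome):
    # Toy payoff with a cost of scaling up the action (purely illustrative)
    return action * outcome - 0.4 * action**2

rng = np.random.default_rng(1)
outcomes = rng.standard_t(df=3, size=600) + 1.0  # heavy-tailed stand-in for past outcomes
actions = np.linspace(0.0, 2.0, 9)
candidate_psis = [0.0, 0.05, 0.2, 0.5, 1.0]

k = 5
folds = np.array_split(rng.permutation(outcomes), k)
cv_score = {}
for psi in candidate_psis:
    held_out = []
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        test = folds[i]
        # Robust action chosen on the training fold, scored on the held-out fold
        best_a = max(actions, key=lambda a: kl_worst_case(utility(a, train), psi))
        held_out.append(float(utility(best_a, test).mean()))
    cv_score[psi] = float(np.mean(held_out))

best_psi = max(cv_score, key=cv_score.get)
print(cv_score, best_psi)
```

When the held-out folds look like the training folds, this will tend to favour small ψ; the worry above is exactly that for something like transformative AI there’s no comparable past data to validate against.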
Yes, Watson and Holmes definitely discuss other approaches which are more like explicitly considering alternative distributions. And I agree that the approach I’ve described has the benefit that it can uncover potentially unknown biases and work for quite complicated models/simulations. That’s why I’ve found it useful to apply to my portfolio optimization with altruism paper (and actually to some practical work), along with common-sense exploration of alternative models/distributions.