Great question and thanks for looking into this section. I’ve now added a bit on this to the next version of the paper I’ll release.
Watson and Holmes investigate this issue :)
They propose several heuristic methods that use simple rules or visualization to rule out ψ values where the robust distribution becomes ‘degenerate’ (that is, puts an unreasonable amount of weight on a small set of scenarios). How to improve on these heuristics seems to be an open problem.
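For what it's worth, here's a minimal sketch of what such a degeneracy check might look like in code. It's my own illustration, not the paper's procedure: I assume the robust distribution is an exponential tilt of Monte Carlo scenario weights (one common form), and I use effective sample size as a rough degeneracy diagnostic.

```python
import numpy as np

def tilted_weights(values, psi):
    """Weights of a least-favourable distribution under an exponential tilt.

    Assumption: the robust distribution reweights Monte Carlo scenario i in
    proportion to exp(-psi * value_i), piling weight on bad outcomes.
    """
    w = np.exp(-psi * (values - values.min()))  # shift for numerical stability
    return w / w.sum()

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights; an ESS near 1 means the
    tilted distribution is 'degenerate' (almost all weight on a few scenarios)."""
    return 1.0 / np.sum(weights ** 2)

rng = np.random.default_rng(0)
values = rng.normal(size=10_000)  # simulated outcomes under the base model

for psi in [0.1, 0.5, 1.0, 2.0, 5.0]:
    ess = effective_sample_size(tilted_weights(values, psi))
    print(f"psi={psi}: ESS = {ess:,.0f} of {len(values):,}")
```

A simple rule in this spirit would be to cap ψ at the largest value whose ESS stays above some threshold you're comfortable with.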
It seems to me that superficially different techniques, like cross-validation, are ultimately trying to solve the same problem. If so, I wonder whether the machine learning community has already found better techniques for ‘setting ψ’?
I’m thinking that in practice, it might just be better to explicitly consider several different distributions and do a sensitivity analysis of the expected value. You could maximize the minimum expected value over the alternative distributions (although maybe there are better alternatives?). This is especially helpful if there are specific parameters you are very concerned about and you can be honest with yourself about what a reasonable person could believe about their values, e.g. you can justify ranges for them.
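As a minimal sketch of that maximin-over-distributions idea (the payoff function, the candidate distributions, and all the numbers here are placeholders I've invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical payoff of an action given a parameter value theta.
def payoff(action, theta):
    return action * theta - 0.5 * action ** 2

actions = np.linspace(0.0, 2.0, 21)

# Alternative distributions a reasonable person might hold over theta.
samples = {
    "optimistic":  rng.normal(1.0, 0.3, 100_000),
    "pessimistic": rng.normal(0.3, 0.3, 100_000),
    "wide":        rng.normal(0.6, 1.0, 100_000),
}

# Sensitivity analysis: expected value of each action under each distribution;
# the maximin rule picks the action whose worst expected value is highest.
worst_case = np.array(
    [min(payoff(a, s).mean() for s in samples.values()) for a in actions]
)
best = actions[worst_case.argmax()]
print(f"maximin action: {best:.2f} (worst-case EV {worst_case.max():.3f})")
```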
Maybe it’s good to do both, though: considering specific alternative distributions could capture most of your known potential biases in the cases where you suspect they’re large (assuming you don’t think your risk of bias is as high in ways not covered), while the approach you describe can capture further, unknown biases.
Cross-validation could help set ψ when your data follows relatively predictable trends (and is close to random otherwise), but it could struggle on questions with little precedent, like transformative AI/AGI.
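For the cases where it does apply, here's one guess at what a cross-validation-style calibration of ψ could look like; the tilt form, the calibration rule, and the numbers are all my own assumptions rather than anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def pessimistic_mean(values, psi):
    """Mean under an exponential tilt toward bad (low) outcomes."""
    w = np.exp(-psi * (values - values.min()))
    w /= w.sum()
    return float(np.sum(w * values))

# Hypothetical setup: the model's simulated outcomes are rosier than what
# actually happens out of sample -- the kind of bias psi should absorb.
simulated = rng.normal(1.0, 1.0, 5_000)  # in-sample / model draws
held_out = rng.normal(0.7, 1.0, 500)     # held-out outcomes

# Choose the smallest psi whose pessimistic in-sample mean is no more
# optimistic than the held-out average (a single 'fold'; a real version
# would average this over K folds).
target = held_out.mean()
for psi in np.linspace(0.0, 3.0, 301):
    if pessimistic_mean(simulated, psi) <= target:
        print(f"calibrated psi ~ {psi:.2f}")
        break
```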
Yes, Watson and Holmes definitely discuss other approaches that are more like explicitly considering alternative distributions. And I agree that the approach I’ve described has the benefit of uncovering potentially unknown biases and working for quite complicated models/simulations, which is why I’ve found it useful to apply to my portfolio optimization with altruism paper (and to some practical work), alongside common-sense exploration of alternative models/distributions.