Also, it would be helpful if you said more about how you think I should do things. Should I not use a Bayesian prior at all? Should I use a wider prior or a different distribution? Should I model interventions in a different way? How do you think I could do better?
Right now all I know is that any approach has lots of problems, and my current approach seems the least problematic. If you think something else would be better, please say what it is and why you prefer it.
Compared to this, I would use something that looks more like standard cost-effectiveness analysis. Rather than use the Doomsday Prior and variance approach to assess robustness (which is ultra-sensitive to errors about variance when ranking options, at least comparably to how cost-effectiveness analysis is sensitive to errors about EV; see the first sketch after the list below), my recommendations would include the following:
Do multiple cost-effectiveness analyses using different approaches, methodologies, and people (both of the overall question and of local variables); seek robustness in the analysis by actually doing it differently
Use empirical and intuitive priors about more local variables rather than global ones (e.g. make use of data about the success rates of past scientific research and startups in forming your prior about the success of meat substitute research, without hugely penalizing the success rate because the topic has more utilitarian value by your lights; see the second sketch after this list)
Assess the size of the future separately from our ability to prevent short-run catastrophic risks or produce short-run value changes (like adding vegans)
Focus on the relative impact of different actions rather than absolute impact in QALYs (the size of the future mostly cancels out, and even a future only as long as Earth's past, hundreds of millions of years, is huge relative to the present; actions like cash transfers also have long-run effects, although far smaller than actions optimized for long-run effects)
Vigorously seek out criticism, missing pieces, and improvements for the models and their components
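To make the variance-sensitivity point concrete, here is a minimal sketch with made-up numbers (a normal-normal update against a skeptical prior; I am not claiming these are the distributions or parameters your model actually uses). A one-order-of-magnitude disagreement about the variance assigned to the speculative option flips the posterior ranking even though neither point estimate changes:

```python
# Minimal sketch (made-up numbers, not the model's actual parameters):
# a normal-normal Bayesian update with a skeptical prior, showing that a
# one-order-of-magnitude error in the variance assigned to the speculative
# option flips the posterior ranking, with no change to either point estimate.

def posterior_mean(estimate, est_var, prior_mean, prior_var):
    """Conjugate normal-normal update: precision-weighted average of prior and estimate."""
    w = prior_var / (prior_var + est_var)   # weight placed on the estimate
    return w * estimate + (1.0 - w) * prior_mean

PRIOR_MEAN, PRIOR_VAR = 1.0, 1.0   # skeptical prior on impact (arbitrary units)

a_est = 1e6    # speculative option: huge estimated impact, huge uncertainty
b_est = 50.0   # robust option: modest estimated impact, modest uncertainty
b_var = 1e3

for a_var in (1e7, 1e8):   # one order of magnitude of disagreement about A's variance
    post_a = posterior_mean(a_est, a_var, PRIOR_MEAN, PRIOR_VAR)
    post_b = posterior_mean(b_est, b_var, PRIOR_MEAN, PRIOR_VAR)
    winner = "A" if post_a > post_b else "B"
    print(f"var_A = {a_var:.0e}: post_A = {post_a:.3f}, post_B = {post_b:.3f} -> {winner} ranked higher")
```

Errors of that size in a guessed variance for a speculative intervention seem at least as easy to make as comparably ranking-relevant errors in an EV estimate.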
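And to illustrate what I mean by empirical priors over more local variables, something like the following Beta-binomial sketch (the counts are hypothetical, just to show the shape of the calculation): the prior on the success probability comes from the observed base rate among comparable past projects, and the update comes from project-specific evidence, with no extra penalty because the topic matters more by your lights.

```python
# Minimal sketch of a "local" empirical prior (counts invented for illustration):
# take the base rate of success among comparable past projects as a Beta prior
# on the success probability of meat substitute research, then update on
# project-specific evidence, rather than starting from a global skeptical prior
# over total impact.

# Hypothetical base rate: 8 successes out of 100 comparable research/startup efforts.
alpha, beta = 8, 92
prior_mean = alpha / (alpha + beta)

# Hypothetical project-specific evidence: 2 successes in 3 comparable milestones.
successes, failures = 2, 1
post_alpha, post_beta = alpha + successes, beta + failures
posterior_mean = post_alpha / (post_alpha + post_beta)

print(f"prior P(success)     = {prior_mean:.3f}")      # 0.080
print(f"posterior P(success) = {posterior_mean:.3f}")  # ~0.096
```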
Part of what I’m getting at is a desire to see you defend the wacky claims implicit in your model posteriors. You present arguments for how the initial estimates could make sense, but not for how the posteriors could make sense. And as I discuss above, it’s hard to make them make sense, and that counts against the outputs.
So I’d like to see some account of why your best picture of the world is in so much tension with your prior, and how we could have an understanding of the world that is consistent with your posterior.
Thanks, this is exactly the sort of thing I was looking for.
Slightly unrelated but:
Part of what I’m getting at is a desire to see you defend the wacky claims implicit in your model posteriors.
The wacky claims you’ve talked about here relate to far-future posteriors. Do you also mean that the direct-effect posteriors imply wacky claims? I know you’ve said before that you think the way I set a prior is arbitrary; is there anything else?