Hi Ben,
I think the issue of worldview diversification is a good one, and coincidentally something I was discussing with Sam the other day, though I think he was more interested in seeing how various short-termist causes compare to each other under non-utilitarian views, as opposed to, say, how different longtermist causes compare when you accept a person-affecting view versus when you don't.
So with respect to the issue of focusing on current lives lost (I take this to mean the issue of focusing on actual rather than potential lives, while also making the simplifying assumption that population doesn't change too much over time): at a practical level, I'm more concerned with trying to get a sense of the comparative cost-effectiveness of various causes (assuming certain normative and epistemic assumptions), so worldview diversification is taking a backseat for now.
Nonetheless, I'd be interested in hearing your thoughts about this issue, and on cause prioritization more generally (e.g. the right research methodology to use, which causes you think are being neglected, etc.). If you don't mind, I'll drop you an email and we can chat more at length?
Sure, happy to chat about this!
Roughly, I think that you are currently not really calculating cost-effectiveness. That is, whether you're giving out malaria nets or preventing nuclear war, almost all of the effects of your actions will fall on people in the future.
To clarify, by "future" I don't necessarily mean "long-run future"; where you put that bar is a fascinating question. But focusing on current lives lost seems to ignore most of the (positive or negative) value, so I expect your estimates not to capture much of what matters.
(You've probably seen this talk by Greaves, but flagging it in case you haven't! Sam isn't a huge fan, I think in part because Greaves reinvents a lot of things that non-philosophers have already thought hard about, but I think it's a good intro to the problem overall anyway.)
On the cluelessness issue: to be honest, I don't find myself that bothered, insofar as it's just the standard epistemic objection to utilitarianism. If we (a) make a good-faith effort to estimate the effects that can reasonably be estimated, and (b) have symmetric expectations as to long-term value (I think Greaves has written on the indifference solution before, but it's been some time), existing CEAs would still be a reasonably accurate signpost for maximization.
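To spell out the cancellation I have in mind behind (b), here's a toy sketch with entirely made-up numbers and hypothetical intervention names; it just shows that if the unmeasured long-term component has the same expected value for every option, it drops out of any comparison:

    # Toy illustration of assumption (b), with made-up numbers: if the unmeasured
    # long-term component has the same expected value for every intervention
    # (here, zero), it cancels out of every comparison, so ranking by the measured
    # short-term estimate still tracks total expected value.

    short_term_value_per_dollar = {
        "malaria_nets": 0.12,  # hypothetical measured short-term value per $
        "deworming": 0.05,
    }

    EXPECTED_LONG_TERM = 0.0  # (b): symmetric, i.e. the same for all options

    def total_expected_value(name):
        return short_term_value_per_dollar[name] + EXPECTED_LONG_TERM

    ranking = sorted(short_term_value_per_dollar, key=total_expected_value, reverse=True)
    print(ranking)  # identical to the ranking by short-term value alone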
Happy to chat more on this, and also to get your views on research methodology in general; will drop you an email, then!
I agree with (a). I disagree that (b) is true! And as a result I disagree that existing CEAs give you an accurate signpost.
Why is (b) untrue? Well, we do have some information about the future, so if you do (a) reasonably well, it seems extremely unlikely that you'd have no indication at all as to the sign of your actions.
Again, I don't purely mean this from an extreme longtermist perspective (although I would certainly be interested in longtermist analyses, given my personal ethics). For example, simply thinking about population changes in the above report would be one way to move in this direction. Other possibilities include thinking about the effects of GHW interventions on long-term trajectories, like growth in developing countries; these effects may well dominate short-term effects like DALYs averted, even for the very best interventions. I haven't thought much about what else you'd want to measure to make these estimates, but I would love to see someone try, and it seems pretty crucial if you're going to be doing accurate CEAs.
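To make the worry concrete, here's a deliberately crude sketch, with invented numbers and placeholder intervention names, of how a long-run component that differs across interventions can flip a ranking that looks settled on short-term DALYs alone:

    # Crude sketch with invented numbers: a long-run component that differs across
    # interventions (e.g. growth or population trajectory effects, expressed in
    # DALY-equivalents per $1k) can reverse a ranking based on short-term DALYs alone.

    interventions = {
        # name: (short-term DALYs averted per $1k, expected long-run DALY-equivalents per $1k)
        "intervention_A": (5.0, 1.0),
        "intervention_B": (3.0, 8.0),
    }

    best_short_term = max(interventions, key=lambda n: interventions[n][0])
    best_overall = max(interventions, key=lambda n: sum(interventions[n]))

    print(best_short_term)  # intervention_A wins on short-term DALYs
    print(best_overall)     # intervention_B wins once long-run effects are included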