I’m curious about the ethical decisions you’ve made in this report. What’s your justification for evaluating current lives lost? I’d be far more interested in cause-X research that considers a variety of worldviews, e.g. a number of different ways of evaluating the medium- or long-term consequences of interventions.
Hi Ben,
I think the issue of worldview diversification is a good one, and coincidentally something I was discussing with Sam the other day—though I think he was more interested in seeing how various short-termist interventions compare to each other on non-utilitarian views, as opposed to, say, how different longtermist causes compare when you accept the person-affecting view versus when you don’t.
So, with respect to the issue of focusing on current lives lost (which I take to mean focusing on actual rather than potential lives, while also making the simplifying assumption that population doesn’t change too much over time): at a practical level, I’m more concerned with trying to get a sense of the comparative cost-effectiveness of various causes (given certain normative and epistemic assumptions), so worldview diversification is taking a backseat for now.
Nonetheless, I’d be interested in hearing your thoughts on this issue, and on cause prioritization more generally (e.g. the right research methodology to use, which causes you think are being neglected, etc.). If you don’t mind, I’ll drop you an email and we can chat at more length?
Sure, happy to chat about this!
Roughly, I think that you are currently not really calculating cost-effectiveness. That is, whether you’re giving out malaria nets or preventing nuclear war, almost all of the effects of your actions will fall on people in the future.
To clarify, by “future” I don’t necessarily mean “long-run future”. Where you put that bar is a fascinating question. But focusing on current lives lost seems to ignore most of the (positive or negative) value, so I expect your estimates not to capture much of what matters.
(You’ve probably seen this talk by Greaves, but flagging it in case you haven’t! Sam isn’t a huge fan, I think in part because Greaves reinvents a lot of stuff that non-philosophers have already thought a bunch about, but I think it’s a good intro to the problem overall anyway.)
On the cluelessness issue: to be honest, I don’t find myself that bothered, insofar as it’s just the standard epistemic objection to utilitarianism. If (a) we make a good-faith effort to estimate the effects that can reasonably be estimated, and (b) we have symmetric expectations as to long-term value (I think Greaves has written on the indifference solution before, but it’s been some time), then existing CEAs would still be a reasonably accurate signpost for maximization.
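(To spell out why I think (a) plus (b) gets you there, on my own rough framing of it: write an intervention’s expected value as the part you can reasonably estimate plus everything further out,

$$E[V] = E[V_{\text{near}}] + E[V_{\text{far}}]$$

If (b) holds, so that $E[V_{\text{far}}]$ is roughly zero, or at least roughly the same across the interventions being compared, then ranking interventions by the $E[V_{\text{near}}]$ you get from existing CEAs preserves the ranking by $E[V]$. That’s all I mean by “signpost”; the decomposition and the symmetry assumption are my own framing rather than anything I’d attribute to Greaves.)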
Happy to chat more on this, and also to get your views on research methodology in general—will drop you an email, then!
I agree with (a). I disagree that (b) is true! And as a result I disagree that existing CEAs give you an accurate signpost.
Why is (b) untrue? Well, we do have some information about the future, so it seems extremely unlikely that you’d have no indication at all as to the sign of your actions’ long-run effects, provided you do (a) reasonably well.
Again, I don’t purely mean this from an extreme longtermist perspective (although I would certainly be interested in longtermist analyses, given my personal ethics). For example, simply thinking about population changes in the above report would be one way to move in this direction. Other possibilities include thinking about the effects of GHW interventions on long-term trajectories, like growth in developing countries (these effects may well dominate short-term effects like DALYs averted for the very best interventions). I haven’t thought much about what other things you’d want to measure to make these estimates, but I would love to see someone try, and it seems pretty crucial if you’re going to be doing accurate CEAs.
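To gesture at the simplest version of the population point, here’s a toy sketch (a hypothetical calculation with made-up numbers and a made-up structure, not a claim about what the report should actually use). Even just letting the affected population grow over the evaluation horizon, rather than assuming it’s static, changes the estimate noticeably:

```python
# Toy sketch only: hypothetical function and made-up numbers, purely to illustrate
# what folding a population trend into a CEA-style estimate might look like.

def deaths_averted_with_population_trend(
    baseline_deaths_averted_per_year: float,
    annual_population_growth: float,  # assumed constant growth rate for the affected cohort
    horizon_years: int,
    discount_rate: float = 0.0,       # set > 0 to discount future lives, if that's your view
) -> float:
    """Total deaths averted over the horizon, scaling the baseline with projected population."""
    total = 0.0
    for t in range(horizon_years):
        cohort_scale = (1 + annual_population_growth) ** t
        discount = 1 / (1 + discount_rate) ** t
        total += baseline_deaths_averted_per_year * cohort_scale * discount
    return total

# Example: 100 deaths averted per year today, 2% population growth, 30-year horizon.
print(deaths_averted_with_population_trend(100, 0.02, 30))  # ~4057
print(deaths_averted_with_population_trend(100, 0.00, 30))  # 3000, i.e. the "static population" assumption
```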