I’m curious about the ethical decisions you’ve made in this report. What’s your justification for evaluating interventions in terms of current lives lost? I’d be far more interested in cause-X research that considers a variety of worldviews, e.g. a number of different ways of evaluating the medium- or long-term consequences of interventions.
Hi Ben,
I think the issue of worldview diversification is a good one, and coincidentally something I was discussing with Sam the other day, though I think he was more interested in seeing how various short-termist causes compare to each other on non-utilitarian views, as opposed to, say, how different longtermist causes compare when you accept a person-affecting view versus when you don’t.
So with respect to the issue of focusing on current lives lost (I take this to mean the issue of focusing on actual rather than potential lives, while also making the simplifying assumption that population doesn’t change much over time): at a practical level, I’m more concerned with trying to get a sense of the comparative cost-effectiveness of various causes (given certain normative and epistemic assumptions), so worldview diversification is taking a backseat for now.
Nonetheless, I’d be interested in hearing your thoughts about this issue, and on cause prioritization more generally (e.g. the right research methodology to use, which causes you think are being neglected, etc.). If you don’t mind, I’ll drop you an email, and we can chat more at length?
Sure, happy to chat about this!
Roughly, I think that you are currently not really calculating cost-effectiveness. That is, whether you’re giving out malaria nets or preventing nuclear war, almost all of the effects of your actions will fall on people in the future.
To clarify, by “future” I don’t necessarily mean “long run future”. Where you put that bar is a fascinating question. But focusing on current lives lost seems to approximately ignore most of the (positive or negative) value, so I expect your estimates to not be capturing much about what matters.
(You’ve probably seen this talk by Greaves, but flagging it in case you haven’t! Sam isn’t a huge fan, I think in part because Greaves reinvents a bunch of stuff that non-philosophers have already thought a bunch about, but I think it’s a good intro to the problem overall anyway.)
On the cluelessness issue: to be honest, I don’t find myself that bothered, insofar as it’s just the standard epistemic objection to utilitarianism. If (a) we make a good-faith effort to estimate the effects that can reasonably be estimated, and (b) we have symmetric expectations as to long-term value (I think Greaves has written on the indifference solution before, but it’s been some time), existing CEAs would still be a reasonably accurate signpost for maximization.
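To make the role of (b) explicit, here’s a minimal sketch in my own notation (nothing here is from the report itself): write the expected value of intervention $i$ as $E[V_i] = E[N_i] + E[F_i]$, where $N_i$ is the near-term value that (a) lets us estimate and $F_i$ is the long-term value we can’t. Assumption (b) amounts to saying $E[F_i] = c$ for every intervention (e.g. $c = 0$ under an indifference principle), in which case $E[V_i] - E[V_j] = E[N_i] - E[N_j]$ for any pair of interventions, so ranking by the estimable near-term CEA preserves the full expected-value ranking.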
Happy to chat more on this, and also to get your views on research methodology in general—will drop you an email, then!
I agree with (a). I disagree that (b) is true! And as a result I disagree that existing CEAs give you an accurate signpost.
Why is (b) untrue? Well, we do have some information about the future, so if you do (a) reasonably well, it seems extremely unlikely that you’d have no indication as to the sign of your actions’ long-term effects.
Again, I don’t purely mean this from an extreme longtermist perspective (although I would certainly be interested in longtermist analyses, given my personal ethics). For example, simply thinking about population changes in the above report would be one way to move in this direction. Other possibilities include thinking about the effects of GHW interventions on long-term trajectories, like growth in developing countries (these effects may dominate short-term effects like DALYs averted for the very best interventions). I haven’t thought much about what other things you’d want to measure to make these estimates, but I would love to see someone try, and it seems pretty crucial if you’re going to be doing accurate CEAs.
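As a toy illustration of the population-changes point, here’s a minimal sketch of what a population-adjusted expected-deaths calculation could look like (every number and function here is an illustrative placeholder of mine, not a figure or method from the report):

```python
# Toy sketch: expected deaths from a catastrophic risk, with and without
# adjusting for population growth. All inputs are illustrative placeholders,
# not numbers from the report.

def expected_deaths(annual_risk, fatality_fraction, population_by_year):
    """Sum over years of P(catastrophe that year) * deaths if it occurs."""
    return sum(annual_risk * fatality_fraction * pop
               for pop in population_by_year)

horizon = 50        # years considered (placeholder)
growth = 0.01       # assumed 1% annual population growth (placeholder)
pop_today = 8e9
pop_path = [pop_today * (1 + growth) ** t for t in range(horizon)]

static = expected_deaths(annual_risk=1e-3, fatality_fraction=0.1,
                         population_by_year=[pop_today] * horizon)
dynamic = expected_deaths(annual_risk=1e-3, fatality_fraction=0.1,
                          population_by_year=pop_path)
print(f"static population: {static:.3g}, population-adjusted: {dynamic:.3g}")
```

The same structure would extend to discounting or to trajectory effects of GHW interventions; the point is just that the population path becomes an explicit input rather than a constant.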
Are you gonna include all these values in a big table somewhere?
All values are listed within the CEA itself, as linked to in the summary—it’s probably easier to follow there, rather than in the writeup!
Sorry, but is there gonna be an easy way to see the comparisons of the different CEAs you do?
Apologies if I’m misunderstanding, but if you’re referring to comparing the headline results of the various CEAs (e.g. nuclear war, fungal disease, asteroids, future topics, etc.), they’ll all be listed here (https://exploratory-altruism.org/research/). Once the list gets longer, I’ll probably put everything into a single Excel/Google sheet for easier comparison.
That sounds great, thanks
Worth flagging that I believe this is based on a (loosely speaking) person-affecting view (mentioned in Joel and Ben’s back-and-forth below). That seems to me to bias the cost-effectiveness of anything that poses a sizable extinction risk dramatically downward.
At the same time, I find both the empirical work and the inside-view thinking here very impressive for a week’s work, and it seems like even those without a person-affecting view can learn a lot from this.
Wow, many thanks, this was quite an eye-opener, though I’m quite new to the literature on this question and to the forum itself!
Just a worry about the modelling, which may already have been taken care of via the mention of inferential uncertainty (I haven’t checked the definition of this so far).
So, here goes: I wonder whether including wars from more than, say, 20 years ago in the calculations (for example, for conventional-war injuries) is pertinent, since the conditions then were significantly different. More specifically, there was no internet and no possibility of cyber-warfare, and the world was a far less interconnected place.
More generally, I wonder whether, even considering just the last ten or twenty years (or even the last two), the political conditions are such as to render each conflict a sui generis event, and whether this should be a worry for any modeller.
Best Wishes,
Haris
I think these are fair points, and in particular I’m worried about the reliance on Korean War data to model a US-China conflict. If I had more time, I would go looking for estimates of expected deaths in a Taiwan conflict, but as far as I can tell there aren’t really any available.
From a bigger-picture perspective, all this probably doesn’t matter too much, insofar as the costs of additional fatalities/casualties from conventional war get swamped by the benefits of reduced nuclear risk anyway.
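To gesture at why the swamping happens, a back-of-the-envelope with purely illustrative numbers (not the report’s): suppose a policy adds an expected 10^5 conventional-war deaths but cuts the probability of a nuclear exchange killing 10^8 people by half a percentage point. The nuclear term is then 0.005 × 10^8 = 5 × 10^5 expected deaths averted, which dominates the conventional term even under fairly conservative assumptions.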
Yeah, I agree that what’s most important is the bigger picture, and regarding that, I totally agree with your conclusions!
(Though, coming just after listening to Ian Morris’s podcast on his history book (here: https://80000hours.org/podcast/episodes/ian-morris-big-picture-history/), I just had the thought that politics may also dictate the quantification of the scenarios for nuclear conflict: politically speaking, the use of a nuclear weapon by, say, Russia would be qualitatively different from use by, say, Pakistan, because, for lack of better words, there’s a ‘hierarchy’ of states judged by their power on the world stage. That seems like something it would be hard for a model to capture. But I may be wrong and you may have covered this, and as I said before, I do agree with your conclusions and estimates.)