I run the Centre for Exploratory Altruism Research (CEARCH), a cause prioritization research and grantmaking organization.
Joel Tan
Value of Life: VSL Estimates vs Community Perspective Evaluations
Hi Dan,
Some thoughts on the points you raised:
(1) On whether social desirability bias is an issue for VSL. My understanding is that the economics literature isn’t concerned about this (nor was IDinsight, per the report) - which makes sense to me, because when people are asked to pay to avert small risks, they consider it a pragmatic decision rather than an explicitly moral one where they have to decide whether or not to let someone die for more money. The issue is hence less salient from a moral point of view, and less likely to trigger worries about how one appears to others (kind and compassionate, or cold and selfish). Just think of how much less salient refusing to pay to install a lifebuoy next to a pond is, vs refusing to jump in to save a drowning child right now, even if statistically the former nets out in expected value to the latter.
(2) If we think that both VSL and the community perspective are flawed attempts at getting the true value, and that both are downward biased (per the reasons discussed), then the higher SBD-corrected community perspective is probably a lower bound. In fact, my main worry is that there is significant downward bias—given the strong and very cogent moral reasoning expressed by respondents in the qualitative side of the survey (“life is priceless”, “children have economic potential”), you could easily go up by an order of magnitude in trade-offs (i.e. 1 life to 100,000 cash transfers to double income) and still get a significant number of people going for that.
(3) & (4) Will definitely be interested in talking to your colleague at the Dignity Initiative, and will drop you an email to discuss your potential work in replicating your preference work! Am excited about your ideas, and would be happy to contribute any way I can.
Centre for Exploratory Altruism Research (CEARCH)
Shallow Report on Nuclear War (Abolishment)
You’re absolutely right that the shallow research part is fairly time-intensive, and not at all ideal. I had started out thinking one could get away with <=1 day’s worth of research at the shallow stage, but I found that just wasn’t sufficient to get a high-confidence evaluation (taking into consideration the research, the construction of a CEA, the double-checking of all calculations, writing up a report, etc.). To put things in context, Open Phil takes a couple of weeks for their shallow research; bringing that down to 1 week already involves considerable sacrifice (not being able to get expert opinions beyond what is already published), and getting it further down to 1-3 days would, I think, be too detrimental to research quality.
Aside from attempting to shorten the research process, ramping up the size of the research team would be the obvious solution, as you say, and it’s what I’ll be trying to pursue in the near term. Of course, funding constraints (at the organizational level) and general talent constraints (at the movement level) probably limit us. Hence, I’m fairly enthusiastic about Akhil’s and Leonie’s Cause Innovation Bootcamp!
(a) It’s definitely fairly arbitrary, but the way I find it useful to think about it is that causes are problems, and you can break them down into:
High-level cause area: The broadest possible classification, like (i) problems that primarily affect humans in the here and now; (ii) problems that affect non-human animals; (iii) problems that primarily affect humans in the long run; and (iv) meta problems to do with EA itself.
Cause area: High-level cause domains (e.g. neartermist human problems) can then be broken down into various intermediate-level cause areas (e.g. global disease and poverty → global health → communicable diseases → vector-borne diseases → mosquito-borne diseases) until they reach the narrowest, individual cause level.
Cause: At the bottom, we have problems that are defined in the most narrow way possible (e.g. malaria).
In terms of what level cause prioritization research should focus on—I’m not sure if there’s an optimal level to always focus on. On the one hand, going narrow makes the actual research easier; on the other, you increase the amount of time needed to explore the search space, and also risk missing out on cross-cause solutions (e.g. vaccines for fungal diseases in general and not just, say, candidiasis).
(b) I think Michael Plant’s thesis had a good framing of the issue, and at the risk of summarizing his work poorly, I think the main point is that if causes are problems then interventions are solutions, and since we ultimately care about solving problems in a way that does the most good, we can’t really do cause prioritization research without also doing intervention evaluation.
The real challenge is identifying which solutions are the most effective, since at the shallow research stage we don’t have the time to look into everything. I can’t say I have a good answer to this challenge, but in practice I would just briefly research what solutions there are, and choose what superficially seems the most effective. On the public health front, where the data is better, my understanding is that vaccines are (maybe unsurprisingly) very cost-effective, and the same goes for gene drives.
Yep! My fellow 2022 CE incubatees and I probably spent more time than was wise on brainstorming cool-sounding names and backronyms. In hindsight, perhaps I should have just gone with Cause Research Advancement and Prioritization (CRAP)!
Thanks a lot for the feedback!
(a) Agreed that there is a lot of research being done, and I think my main concern (and CE’s too, I understand, though I won’t speak for Joey and his team on this) is the issue of systematicity—causes can appear more or less important based on the specific research methodology employed, and so 1,000 causes evaluated by 1,000 people just doesn’t deliver the same actionable information as 1,000 causes evaluated by a single organization employing a single methodology.
My main outstanding uncertainty at this point is just whether such an attempt at broad systematic research is really feasible given how much time research even at the shallow stage is taking.
I understand that GWWC is looking to do evaluation of evaluators (i.e. GiveWell, FP, CE etc) and in many ways, maybe that’s far more feasible in terms of providing the EA community with systematic, comparative results—if you get a sense of how much more optimistic/pessimistic various evaluators are, you can penalize their individual cause/intervention prioritizations, and get a better sense of how disparate causes stack up against one another even if different methodologies/assumptions are used.
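As a toy sketch of that adjustment (all evaluator names, estimates, and optimism factors below are hypothetical): if evaluating the evaluators yields a rough optimism factor per evaluator, you could deflate each headline cost-effectiveness estimate by its evaluator's factor, putting estimates from different methodologies on a rough common scale before comparing causes.

```python
# Hypothetical sketch of penalizing evaluator optimism. An
# optimism_factor > 1 means that evaluator's headline estimates tend
# to be that many times too optimistic; dividing by it puts estimates
# from different evaluators on a rough common scale.

estimates = {
    # cause: (evaluator, headline cost-effectiveness estimate)
    "cause_a": ("evaluator_x", 12.0),
    "cause_b": ("evaluator_y", 20.0),
}

optimism_factor = {
    "evaluator_x": 1.2,
    "evaluator_y": 4.0,
}

adjusted = {
    cause: value / optimism_factor[evaluator]
    for cause, (evaluator, value) in estimates.items()
}

print(adjusted)  # e.g. cause_a ends up ahead of cause_b once adjusted
```

The point of the sketch is just that once evaluator-level calibration is accounted for, the cross-evaluator ranking of causes can flip even when the headline numbers say otherwise.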
(b) The timeline for (hopefully) finding a Cause X is fairly arbitrary! I definitely don’t have a good/strong sense of how long it’ll take, so it’s probably best to see the timeline as a kind of stretch goal meant to push the organization. I guess the other issue is how much more impactful we expect Cause X to be - the DCP global health interventions vary by a factor of around 10,000 in cost-effectiveness, and if you think that interventions within broad cause areas (e.g. global health vs violent conflict vs political reform vs economic policy) vary at least as much, then one might expect there to be some Cause X out there three to four orders of magnitude more impactful than top GiveWell stuff, but it’s so hard to say.
(c) Wrote about the issue of cause classification in somewhat more detail in the response to Aidan below!
All values are listed within the CEA itself, as linked to in the summary—it’s probably easier to follow there, rather than in the writeup!
Hi Ben,
I think the issue of worldview diversification is a good one, and coincidentally something I was discussing with Sam the other day—though I think he was more interested in seeing how various short-termist stuff compare to each other on non-utilitarian views, as opposed to, say, how different longtermist causes compare when you accept the person affecting view vs not.
So with respect to the issue of focusing on current lives lost (I take this to mean the issue of focusing on actual rather than potential lives, while also making the simplifying assumption that population doesn’t change too much over time) - at a practical level, I’m more concerned with trying to get a sense of the comparative cost-effectiveness of various causes (assuming certain normative and epistemic assumptions), so worldview diversification is taking a backseat for now.
Nonetheless, would be interested in hearing your thoughts about this issue, and on cause prioritization more generally (e.g. the right research methodology to use, what causes you think are being neglected etc). If you don’t mind, I’ll drop you an email, and we can chat more at length?
I think these are fair points, and in particular I’m worried about the reliance on Korean War data to model US-China conflict—if I had more time, I would go look at the expected deaths in a Taiwan conflict, but there aren’t any really available as far as I can tell.
From a bigger picture perspective, all this probably doesn’t matter too much, insofar as the costs of more fatalities/casualties from more conventional war get swamped by the benefits of reduced nuclear risk anyway.
Shallow Report on Fungal Diseases
Apologies if I’m misunderstanding, but if you’re referring to comparing the headline results of various CEAs (e.g. nuclear war, fungal disease, asteroids, future topics etc), they’ll all be listed here (https://exploratory-altruism.org/research/). Once the list gets longer, I’ll probably work to put everything into a single Excel/Google sheet for easier comparison.
On the cluelessness issue—to be honest, I don’t find myself that bothered, insofar as it’s just the standard epistemic objection to utilitarianism, and if (a) we make a good faith effort to estimate the effects that can reasonably be estimated, and (b) have symmetric expectations as to long term value (I think Greaves has written on the indifference solution before, but it’s been some time), existing CEAs would still be a reasonably accurate signpost to maximization.
Happy to chat more on this, and also to get your views on research methodology in general—will drop you an email, then!
Shallow Report on Asteroids
Hi Finm! Your post was definitely a great starting point for me—CEARCH is working through various causes, and we’re relying heavily on Nuno’s big list (which linked to your post as an excellent primer on the issue).
On your two other points:

(a) I understand that Matheny’s analysis turns on (i) philosophical views on the value of potential (as opposed to future but actual) human lives, and perhaps more controversially (ii) not applying standard discounting. Not sure if I would go for (ii), but I do see people reasonably being far more bullish on the value of asteroid defence given (ii).
(b) In any case, I definitely agree that CEAs like this are likely to be overoptimistic, which is why CEARCH is unlikely to be spending more time on this cause. As our research methodology post (link) lays out, only if a cause’s estimated cost-effectiveness is at least one order of magnitude greater than a GiveWell top charity’s will it pass on to the intermediate/deep rounds of research, the idea being that research at the shallower level tends to overestimate a cause’s cost-effectiveness. So if a cause doesn’t appear effective early on, it’s probably not going to be a better-than-GiveWell bet (initial impressions notwithstanding), let alone a Cause X orders of magnitude more important than our current top causes.
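The filtering rule can be sketched as follows (the one-order-of-magnitude bar is from the methodology described above; the cause names and multiples are made up for illustration):

```python
# Sketch of the shallow-stage filter: a cause proceeds to the
# intermediate/deep rounds of research only if its estimated
# cost-effectiveness is at least 10x (one order of magnitude above)
# that of a GiveWell top charity. Figures below are hypothetical.

THRESHOLD = 10  # multiple of GiveWell top-charity cost-effectiveness

shallow_estimates = {
    "cause_a": 25.0,  # estimated multiple of GiveWell top charity
    "cause_b": 3.0,
    "cause_c": 11.0,
}

advance = [cause for cause, mult in shallow_estimates.items()
           if mult >= THRESHOLD]
print(advance)  # ['cause_a', 'cause_c']
```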
I am extremely sceptical that you can make an asteroid impact seem like a natural event. The trajectories of asteroids are being tracked, and if one of them drastically changed course after an enemy state’s deep space probe (whose launch cannot be hidden) had been in the vicinity, the inference would be clear.
In any case, the difficulty of weaponization far outstrips that of mere deflection. The energy (and hence payload), as well as the complexity of the supporting calculations, needed to redirect an asteroid so it does not hit Earth is orders of magnitude less than the payload and calculations needed to redirect one onto a specific target on Earth. Even if we were capable of the former (i.e. had deflection capabilities), we would not have the latter—and that’s not even getting into the risk of even marginal errors in the calculations of these long orbits causing staggeringly different predictions of ground zero—you could easily end up striking yourself (or causing a tsunami that drowns your own coastal cities).
That’s not getting into the issue of the military value of such weapons—which by definition cannot deter, if meant to look accidental.
Would suggest creating a “Fungal Diseases” tag.
(a) There are a number of posts that would be tagged with it. Two posts are entirely about fungal diseases, including CEARCH’s 4.5k-word cause prioritization research report on the matter. And three others touch upon the matter as well:
(b) 5 taggable articles would meet the threshold of content sufficiency, based on existing standards (e.g. Evidence Action has 5 posts, Giving Multiplier has 4, Fund for Alignment Research (FAR) has 2).
(c) In terms of broader significance of the topic/its notability, it’s a topic listed in Nuno’s big list of cause prioritization research, and the evidence suggests that it is potentially a cost-effective cause area—it would be valuable, from this perspective, to have a tag that allows people interested in funding/working on this issue to learn more about it as they browse the forum.
(1) Theoretically, additional detail in your CEA means: (a) a more discrete and granular theory of change, which necessarily reduces the modelled probability of success, and (b) trying to measure more flow-through effects/externalities, which, while typically positive, are more uncertain and tend also to be less important compared to the primary health effects measured. Since the impact of (a) outweighs that of (b), more research erodes the estimated cost-effectiveness.
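Point (a) can be illustrated with hypothetical numbers: a coarse theory of change might lump success into a single probability, while a more granular one multiplies several step probabilities, and the product of several sub-unity probabilities is almost always smaller than the coarse lump.

```python
# Hypothetical illustration of point (a): a more granular theory of
# change multiplies more step probabilities, so the modelled
# probability of success (and hence cost-effectiveness) shrinks.

def p_success(step_probabilities):
    """Probability that every step in the theory of change succeeds."""
    p = 1.0
    for step in step_probabilities:
        p *= step
    return p

# Coarse model: one lumped "does the intervention work?" step.
coarse = p_success([0.5])

# Granular model: the same intervention broken into explicit steps
# (all probabilities are made up for illustration).
granular = p_success([
    0.9,  # funding is secured
    0.8,  # policy is adopted
    0.8,  # policy is implemented as intended
    0.7,  # implementation produces the health effect
])

print(coarse)    # 0.5
print(granular)  # ~0.40
```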
(2) Empirically, and from past experience, this has been the case for various organizations, to my understanding. Eric Hausen has spoken about Charity Science Health’s process (the more you look at something, the worse it seems), and GiveWell has written about this before, I believe (somewhere; might dig it up eventually!)
Glad it was useful!