Hi there!
I currently co-lead the biosecurity grantmaking program at Effective Giving. Before that, I worked in various research roles focused on pandemic preparedness and biosecurity.
Joshua TM
Thanks for sharing this!
I think this quote from Piper is worth highlighting:
(...) if the shift to longtermism meant that effective altruists would stop helping the people of the present, and would instead put all their money and energy into projects meant to help the distant future, it would be doing an obvious and immediate harm. That would make it hard to be sure EA was a good thing overall, even to someone like me who shares its key assumptions.
I broadly agree with this, except I think the first “if” should be replaced with “insofar as.” Even as someone who works full-time on existential risk reduction, I think it’s very clear that longtermism is causing this obvious and immediate harm; the question is whether that harm is outweighed by the value of pursuing longtermist priorities.
GiveWell’s growth is entirely compatible with the fact that directing resources toward longtermist priorities means not directing them toward present challenges. Thus, I think the following claim by Piper is unlikely to be true:
My main takeaway from the GiveWell chart is that it’s a mistake to believe that global health and development charities have to fight with AI and biosecurity charities for limited resources.
To make that claim, you have to speculate about the counterfactual situation where effective altruism didn’t include a focus on longtermism. E.g., you can ask:
Would major donors still be using the principles of effective altruism for their philanthropy?
Would support for GiveWell charities have been even greater in that world?
Would even more people have been dedicating their careers to pressing current challenges like global development and animal suffering?
My guess is that the answer to all three is “yes”, though of course I could be wrong and I’d be open to hearing arguments to the contrary. In particular, I’d love to see evidence for the idea of a ‘symbiotic’ or synergistic relationship. What are the reasons to think that the focus on longtermism has been helpful for more near-term causes? E.g., does longtermism help bring people on board with Giving What We Can who otherwise wouldn’t have joined? I’m sure that’s the case for some people, but how many? I’m genuinely curious here!
To be clear, it’s plausible that longtermism is extremely good for the world all-things-considered and that longtermism can coexist with other effective altruism causes.
But it’s very clear that focusing on longtermism trades off against focusing on other present challenges, and it’s critical to be transparent about that. As Piper says, “prioritization of causes is at the heart of the [effective altruism] movement.”
Thanks for your reply.
My concern is not that the numbers don’t work out. My concern is that the “$100m/0.01%” figure is not an estimate of how cost-effective ‘general x-risk prevention’ actually is in the way that this post implies.
It’s not an empirical estimate, it’s a proposed funding threshold, i.e. an answer to Linch’s question “How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?” But saying that we should fund interventions at that level of cost-effectiveness doesn’t tell us whether there are many (or any) such interventions available at the moment. If I say “I propose that GiveWell should endorse interventions that we expect to save a life per $500”, that doesn’t by itself show whether such interventions exist.
Of course, the proposed funding threshold could be informed by cost-effectiveness estimates for specific interventions; I actually suspect that it is. But then it would be useful to see those estimates – or at the very least know which interventions they are – before establishing that figure as the ‘funding bar’ in this analysis.
This is particularly relevant if those estimates are based on interventions that do not prevent catastrophic events but merely prevent them from escalating to existential/extinction levels. Since such interventions don’t save all currently living people – the underlying catastrophe still occurs – ‘8 billion people’ would be the wrong number for the estimation you wrote above.
Thanks again for writing this. I just wanted to flag a potential issue with the $125 to $1,250 per human-life-equivalent-saved figure for ‘x-risk prevention.’
I think that figure is based on a willingness-to-pay proposal that already assumes some kind of longtermism.
You base the range on Linch’s proposal of aiming to reduce x-risk by 0.01% per $100m-$1bn. As far as I can tell, these figures are based on a rough proposal of what we should be willing to pay for existential risk reduction: Linch refers to this post on “How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?”, which includes the proposed answer “we should fund interventions that we have resilient estimates of reducing x-risk ~0.01% at a cost of ~$100M.”
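(If I’ve understood the derivation correctly, the $125-$1,250 range presumably comes from dividing that funding bar by the expected number of currently living lives saved – taking the 8 billion people alive today as the relevant population, which is my assumption about your calculation rather than something stated in Linch’s post:

$$0.01\% \times 8 \text{ billion people} = 800{,}000 \text{ expected lives saved}$$

$$\$100\text{M} / 800{,}000 = \$125 \text{ per life}, \qquad \$1\text{B} / 800{,}000 = \$1{,}250 \text{ per life}$$

Please correct me if your actual calculation differs.)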
But I think that the willingness to pay from Linch is based on accounting for future lives, rather than the kind of currently-alive-human-life-equivalent-saved figure that you’re looking for. (@Linch, please do correct me if I’m wrong!)
In short, saying that we should fund interventions at the $100m/0.01% bar doesn’t tell us whether there are many (or any) available interventions at that level of cost-effectiveness. And while I appreciate that some grantmakers have begun leaning more on that kind of quantitative heuristic, I doubt that you can infer from this fact that previously or currently funded work on ‘general x-risk prevention’ has met that bar, or even come particularly close to it.
So, I think the $125-$1,250 figure already assumes longtermism and isn’t applicable to your question. (Though I may have missed something here and would be happy to stand corrected – particularly if I have misrepresented Linch’s analysis!)
Of course, if the upshot is that ‘general x-risk prevention’ is less cost-effective than the $125-$1,250 per currently-alive-human-life-equivalent-saved, then your overall point only becomes even stronger.
(PS: As an aside, I think it would be good practice to add some kind of caption beneath your table stating that these are rough estimates, and perhaps in some cases even the only available estimate for that quantity. I’m pretty concerned about long citation trails in longtermist analysis, where very influential claims sometimes bottom out to some extremely rough and fragile estimates. Given how rough these estimates are, I think it’d be better if others did their own analysis from scratch before citing them.)
Thanks for writing this! I think your point is crucial and too often missed or misrepresented in discussions on this.
A related key point is that the best approach to mitigating catastrophic/existential risks depends heavily on whether one comes at it from a longtermist angle or not. For example, this choice determines how compelling it is to focus on strategies or interventions for civilisational resilience and recovery.
To take the example of biosecurity: In some (but not all) cases, interventions to prevent catastrophe from biological risks look quite different from interventions to prevent extinction from biological risks. And the difference between catastrophe and extinction really does depend on what one thinks about longtermism and the importance of future generations.
Thanks for taking the time to write this up!
I wholeheartedly agree with Holly Morgan here! Thank you for writing this up and for sharing your personal context and perspective in a nuanced way.
Thanks for writing this, Linch! I’m starting a job in grantmaking and found this interesting and helpful.
+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.
Hi EKillian! Could you provide some more context on what you’re interested in? Anyone will be welcome to write a submission. If you’re more interested in helping others with their work, you could say a bit more about that here in the comments, and then perhaps someone will reach out.
In terms of serving as a judge in the competition, we haven’t finalised the process for selecting judges – but it would be helpful if you could DM me with some more information.
I appreciate hearing that and I’ve appreciated this brief exchange.
And I’m glad to hear that you’re giving the book a try. I expect that you will disagree with some of Farmer’s approaches – as I did – but I hope you will enjoy it nonetheless.
In general, I think the more ‘activist’ approach can be especially useful for (1) arguing, normatively, for what kind of world we want to be in and (2) prompting people to think harder about alternative ways of getting there – this is especially useful if some stakeholders haven’t fully appreciated how bad existing options are for certain parties. Note that neither of these contributions requires a concrete solution in order to create value.
Also, to add:
To be clear, I think we need both the more ‘activist’ approach of rejecting options that don’t meet certain standards and the more ‘incrementalist’ approach of maximising on the margin.
For example, we need both advocates who argue that it’s outrageous and unacceptable how the scarcity of funds allocated towards global poverty leaves so many without enough, and GiveWell-style optimisers who figure out how to do the most with what we currently have.
In a nutshell: Maximise subject to given constraints, and push to relax those constraints.
Thanks for this – I think you articulate your point well, and I understand what you’re saying.
It seems that we disagree here:
It seems to me that the world would be a much better place if, whenever someone refused to accept either horn of a moral or political dilemma, they were expected to provide an explicit answer to the question “What would you do instead?”
My point is exactly that I don’t think that a world with a very strong version of this norm is necessarily better. Of course, I agree that it is best if you can propose a feasible alternative and I think it’s perfectly reasonable to ask for that. But I don’t think that having an alternative solution should always be a requirement for pointing out that both horns of a dilemma are unacceptable in an absolute sense.
Sometimes, the very act of critiquing both ‘horns’ is what prompts us to find a third way, meaning that such a critique has longer-term value, even in the absence of a provided short-term solution. Consequently, I think there’s a downside to setting too high a bar for critiquing the default set of options.
To be clear, I think we need both the more ‘activist’ approach of rejecting options that don’t meet certain standards and the more ‘incrementalist’ approach of maximising on the margin. There’s a role for both, and I think that Farmer did a great job at the former, while much of the effective altruism movement has done a great job at the latter. Hence, I found it valuable to learn about his work.
Thanks for writing this, Gavin.
Reading (well, listening to) Mountains Beyond Mountains, I was deeply inspired by Farmer. I think a lot of people in the EA community would benefit from giving the book a chance.
Sure, I sometimes found his rejection of an explicit cost-effectiveness-based approach very frustrating, and it seemed (and still seems) that his strategy was at times poorly aligned with the goal of saving as many lives as possible. But it also taught me the importance of sometimes putting your foot down and insisting that none of the options on the table are acceptable; that we have to find an alternative if none of the present solutions meet a certain standard.
In economics and analytic philosophy (and by extension, in EA) we’re often given two choices and told to choose one, regardless of how unpalatable both may be. Maximisation subject to given constraints, it goes. Do an expensive airlift from Haiti to Boston to save the child or invest in cost-effective preventive interventions, it goes. And in the short term, the best way to save the most lives may indeed be to accept that that is the choice we have, to buckle down and calculate. But I’d argue that, sometimes, outright rejecting certain unpalatable dilemmas, and instead insisting on finding another, more ambitious way, can be part of an effective activist strategy for improving the world, especially in the longer term.
My impression is that this kind of activist strategy has been behind lots of vital social progress that the cost-effectiveness-oriented, incrementalist approach wouldn’t be suited for.
In case you (or anyone else) is interested, there’ll be a panel discussion with a few biosecurity experts this Thursday: 2022 Next Generation for Biosecurity Competition: How can modern science help develop effective verification protocols to strengthen the Biological Weapons Convention? A Conversation with the Experts.
Hi James!
Good question. That estimate was for our entire process of producing the paper, including any relevant research. We wrote on a topic that somewhat overlapped with areas we already knew a bit about, so I can imagine there’d be extra hours if you write on something you’re less familiar with. Also, I generally expect that the time investment might vary a lot between groups, so I wouldn’t put too much weight on my rough estimate. Cheers!
Just here to say that this bit is simultaneously wonderfully hilarious and extraordinarily astute:
The first is that I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever they lead, no matter the costs. Yes, you push fat men and harvest organs; yes, you destroy Utopias for tiny chances of creating zillions of evil, slightly-happy rats (plus some torture farms on the side). But you always “know what you’re getting” – e.g., more expected net pleasure. And because you “know what you’re getting,” you can say things like “I bite all the bullets,” confident that you’ll always get at least this one thing, whatever else must go.
Plus, other people have problems you don’t. They end up talking about vague and metaphysically suspicious things like “people,” whereas you only talk about “valenced experiences” which are definitely metaphysically fine and sharp and joint-carving. They end up writing papers entirely devoted to addressing a single category of counter-example – even while you can almost feel the presence of tons of others, just offscreen. And more generally, their theories are often “janky,” complicated, ad hoc, intransitive, or incomplete. Indeed, various theorems prove that non-you people will have problems like this (or so you’re told; did you actually read the theorems in question?). You, unlike others, have the courage to just do what the theorems say, ‘intuitions’ be damned. In this sense, you are hardcore. You are rigorous. You are on solid ground.
Thanks for your comment, much appreciated!
I wholeheartedly agree that taking action to do something is often the most important, and most desperately lacking, component. Why is it lacking?
One potential cause could be that many people agree with a critical take, but those people are not the ones who have a lot of influence, e.g. because decision-making power is concentrated.
Another explanation could be that there are actually many people who agree with a critical take on the direction of effective altruism and would have the ability to do something about it, but they just can’t/won’t dedicate time to it given their other professional commitments. (I count myself in that category. If I had a clone of myself that had to work on something other than biorisk, I might ask them to work full-time on ‘steering’ for this movement.)
Thankfully, we can expect a large influx of new, exciting members of the community (thanks to the awesome rowing of so many community builders!) looking for projects to take up. For that reason, I think it’s important that (1) there’s a culture of dissent that prompts people to think about new directions to pull the movement in, and (2) there are institutions in place that can facilitate the implementation of critical work, e.g. funding, positions within organisations, or mechanisms for distributed decision-making for the movement.
Hey Linch, thanks for this thoughtful comment!
Yeah, I agree that my examples of steering are sometimes closely related to other terms in Holden’s framework, particularly equity – indeed, I have a comment about that buried deep in a footnote.
One reason this happens, I think, is that a super important concept for steering is moral uncertainty, and taking moral uncertainty seriously can imply putting greater weight on equity than you otherwise might.
I guess another reason is that I tend to assume that effective steering is, as an empirical matter, more likely to be achieved if you incorporate a wide range of voices and perspectives. And this does in practice end up being similar to efforts to “amplify the voices and advance the interests of historically marginalized groups” that Holden puts under the category of equity. But yeah, like you say, it can be hard to differentiate whether people advocate for equity and diversity of perspectives for instrumental or intrinsic reasons (I’m keen on both).
I also think your last remark is a fair critique of my post – perhaps I did bring in some more controversial (though, to me, compelling!) perspectives under the less controversial heading of steering.
A very similar critique I’ve heard from two others is something like: “Is your argument purely that there isn’t enough steering going on, or is it also that you disagree with the current direction of steering?” And I think, to be fair, that it’s also partly the latter for me, at least on some very specific domains.
But one response to that is that, yes, I disagree with some of the current steering – but a necessary condition for changing direction is that people talk/care/focus more on steering, so I’m going to make the case for that first.
Thanks again for your comment!
As discussed in a bit more detail in this post, I’d love to see themed prizes focusing specifically on critical engagement with effective altruism. This could be very broad (e.g., “Best critique of the effective altruism movement”) or more narrow (e.g., something like “Best critique of a specific assumption that is widely made in the community” or “Best writeup on how applied longtermism could go wrong”).
To the next content specialist on the Forum: I’d be happy to discuss further!
In a nutshell: I agree that caring about the future doesn’t mean ignoring the present. But it does mean deprioritising the present, and this comes with very real costs that we should be transparent about.