Hi there!
I currently co-lead the biosecurity grantmaking program at Effective Giving. Before that, I’ve worked in various research roles focused on pandemic preparedness and biosecurity.
Joshua TM
Thanks for writing this!
One thing I really agreed with.
For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin.
I particularly appreciate your point about avoiding ‘bait-and-switch’ dynamics. I recognise that it’s important to build broad support for a movement, but I ultimately think that it’s crucial to be transparent about what the key considerations and motivations within longtermism are. If, for example, the prospect of ‘digital minds’ is an essential part of how leading people in the movement think about the future, then I think that should be part of public outreach, notwithstanding how off-putting or unintuitive it may be. (MacAskill has a comment about excluding the subject here).
One thing I disagreed with.
MacAskill at times seemed reluctant to quantify his best-guess credences, especially in the main text.
I agree it’s good to be transparent about priorities, including the weight placed on AI risk within the movement. But I tend to disagree that sharing subjective numerical credences is so important, and I think doing so sometimes has real downsides, especially for extremely speculative subjects. Making implicit beliefs explicit is helpful. But it also causes people to anchor on what may ultimately be an extremely shaky and speculative guess, hindering further independent analysis and leading to long citation trails. For example, I think the “1-in-6” estimate from The Precipice may have led to premature anchoring on that figure, and it is likely relied upon too much relative to how speculative it necessarily is.
I appreciate that there are many benefits of sharing numerical credences and you seem like an avid proponent of sharing subjective credences (you do a great job at it in this post!), so we don’t have to agree. I just wanted to highlight one substantial downside of the practice.
In a nutshell: I agree that caring about the future doesn’t mean ignoring the present. But it does mean deprioritising the present, and this comes with very real costs that we should be transparent about.
Thanks for sharing this!
I think this quote from Piper is worth highlighting:
(...) if the shift to longtermism meant that effective altruists would stop helping the people of the present, and would instead put all their money and energy into projects meant to help the distant future, it would be doing an obvious and immediate harm. That would make it hard to be sure EA was a good thing overall, even to someone like me who shares its key assumptions.
I broadly agree with this, except I think the first “if” should be replaced with “insofar as.” Even as someone who works full-time on existential risk reduction, I think it’s very clear that longtermism is causing this obvious and immediate harm; the question is whether that harm is outweighed by the value of pursuing longtermist priorities.
GiveWell’s growth is entirely compatible with the fact that directing resources toward longtermist priorities means not directing them toward present challenges. Thus, I think the following claim by Piper is unlikely to be true:
My main takeaway from the GiveWell chart is that it’s a mistake to believe that global health and development charities have to fight with AI and biosecurity charities for limited resources.
To make that claim, you have to speculate about the counterfactual situation where effective altruism didn’t include a focus on longtermism. E.g., you can ask:
Would major donors still be using the principles of effective altruism for their philanthropy?
Would support for GiveWell charities have been even greater in that world?
Would even more people have been dedicating their careers to pressing current challenges like global development and animal suffering?
My guess is that the answer to all three is “yes”, though of course I could be wrong and I’d be open to hearing arguments to the contrary. In particular, I’d love to see evidence for the idea of a ‘symbiotic’ or synergistic relationship. What are the reasons to think that the focus on longtermism has been helpful for more near-term causes? E.g., does longtermism help bring people on board with Giving What We Can who otherwise wouldn’t have been? I’m sure that’s the case for some people, but how many? I’m genuinely curious here!
To be clear, it’s plausible that longtermism is extremely good for the world all-things-considered and that longtermism can coexist with other effective altruism causes.
But it’s very clear that focusing on longtermism trades off against focusing on other present challenges, and it’s critical to be transparent about that. As Piper says, “prioritization of causes is at the heart of the [effective altruism] movement.”
Thanks for your reply.
My concern is not that the numbers don’t work out. My concern is that the “$100m/0.01%” figure is not an estimate of how cost-effective ‘general x-risk prevention’ actually is in the way that this post implies.
It’s not an empirical estimate, it’s a proposed funding threshold, i.e. an answer to Linch’s question “How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?” But saying that we should fund interventions at that level of cost-effectiveness doesn’t tell us whether there are many (or any) such interventions available at the moment. If I say “I propose that GiveWell should endorse interventions that we expect to save a life per $500”, that doesn’t by itself show whether such interventions exist.
Of course, the proposed funding threshold could be informed by cost-effectiveness estimates for specific interventions; I actually suspect that it is. But then it would be useful to see those estimates – or at the very least know which interventions they are – before establishing that figure as the ‘funding bar’ in this analysis.
This is particularly relevant if those estimates are based on interventions that do not prevent catastrophic events but merely prevent them from reaching existential/extinction levels, since interventions in the latter category do not save all currently living people, meaning that ‘8 billion people’ would be the wrong number for the estimation you wrote above.
Thanks again for writing this. I just wanted to flag a potential issue with the $125 to $1,250 per human-life-equivalent-saved figure for ‘x-risk prevention.’
I think that figure is based on a willingness-to-pay proposal that already assumes some kind of longtermism.
You base the range on Linch’s proposal of aiming to reduce x-risk by 0.01% per $100m-$1bn. As far as I can tell, these figures are based on a rough proposal of what we should be willing to pay for existential risk reduction: Linch refers to this post on “How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?”, which includes the proposed answer “we should fund interventions that we have resilient estimates of reducing x-risk ~0.01% at a cost of ~$100M.”
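For context, here’s a minimal sketch of the arithmetic that I assume lies behind the $125-$1,250 range; the step of applying the 0.01% risk reduction to all of the roughly 8 billion people alive today is my reconstruction rather than something stated explicitly in Linch’s post:

$$0.01\% \times 8{,}000{,}000{,}000 = 800{,}000 \text{ lives saved in expectation}$$

$$\$100\text{M} / 800{,}000 = \$125 \text{ per life}, \qquad \$1\text{B} / 800{,}000 = \$1{,}250 \text{ per life}$$

If that reconstruction is right, the load-bearing assumption is that the averted catastrophe would otherwise affect all currently living people, and the per-life range follows directly from the proposed willingness-to-pay bar rather than from estimates of specific interventions.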
But I think that the willingness to pay from Linch is based on accounting for future lives, rather than the kind of currently-alive-human-life-equivalent-saved figure that you’re looking for. (@Linch, please do correct me if I’m wrong!)
In short, saying that we should fund interventions at the $100m/0.01% bar doesn’t say whether there are many (or any) available interventions at that level of cost-effectiveness. And while I appreciate that some grantmakers have begun leaning more on that kind of quantitative heuristic, I doubt that you can infer from this fact that previously or currently funded work on ‘general x-risk prevention’ has met that bar, or even come particularly close to it.
So, I think the $125-$1,250 figure already assumes longtermism and isn’t applicable to your question. (Though I may have missed something here and would be happy to stand corrected – particularly if I have misrepresented Linch’s analysis!)
Of course, if the upshot is that ‘general x-risk prevention’ is less cost-effective than $125-$1,250 per currently-alive-human-life-equivalent saved, then your overall point only becomes even stronger.
(PS: As an aside, I think it would be good practice to add some kind of caption beneath your table stating how these are rough estimates, and perhaps in some cases even the only available estimate for that quantity. I’m pretty concerned about long citation trails in longtermist analysis, where very influential claims sometimes bottom out in extremely rough and fragile estimates. Given how rough these estimates are, I think it’d be better if others redid the analysis from scratch before citing them.)
Thanks for writing this! I think your point is crucial and too often missed or misrepresented in discussions on this.
A related key point is that the best approach to mitigating catastrophic/existential risks depends heavily on whether one comes at it from a longtermist angle or not. For example, this choice determines how compelling it is to focus on strategies or interventions for civilisational resilience and recovery.
To take the example of biosecurity: In some (but not all) cases, interventions to prevent catastrophe from biological risks look quite different from interventions to prevent extinction from biological risks. And how much one cares about the difference between catastrophe and extinction really does depend on what one thinks about longtermism and the importance of future generations.
Thanks for taking the time to write this up!
I wholeheartedly agree with Holly Morgan here! Thank you for writing this up and for sharing your personal context and perspective in a nuanced way.
Thanks for writing this, Linch! I’m starting a job in grantmaking and found this interesting and helpful.
+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.
Hi EKillian! Could you provide some more context on what you’re interested in? Anyone will be welcome to write a submission. If you’re more interested in helping others with their work, you could say a bit more about that here in the comments, and then perhaps someone will reach out.
In terms of serving as a judge in the competition, we haven’t finalised the process for selecting judges – but it would be helpful if you could send a DM with some more information.
I appreciate hearing that and I’ve appreciated this brief exchange.
And I’m glad to hear that you’re giving the book a try. I expect that you will disagree with some of Farmer’s approaches – as I did – but I hope you will enjoy it nonetheless.
In general, I think the more ‘activist’ approach can be especially useful for (1) arguing, normatively, for what kind of world we want to be in and (2) prompting people to think harder about alternative ways of getting there – this is especially useful if some stakeholders haven’t fully appreciated how bad existing options are for certain parties. Note that neither of these ways of contributing requires a concrete solution in order to create value.
Also, to add:
To be clear, I think we need both the more ‘activist’ approach of rejecting options that don’t meet certain standards and the more ‘incrementalist’ approach of maximising on the margin.
For example, we need both advocates to argue that it’s outrageous and unacceptable how the scarcity of funds allocated towards global poverty leaves so many without enough, and GiveWell-style optimisers to figure out how to do the most with what we currently have.
In a nutshell: Maximise subject to given constraints, and push to relax those constraints.
Thanks for this, I think you articulate your point well, and I understand what you’re saying.
It seems that we disagree, here:
It seems to me that the world would be a much better place if, whenever someone refused to accept either horn of a moral or political dilemma, they were expected to provide an explicit answer to the question “What would you do instead?”
My point is exactly that I don’t think that a world with a very strong version of this norm is necessarily better. Of course, I agree that it is best if you can propose a feasible alternative and I think it’s perfectly reasonable to ask for that. But I don’t think that having an alternative solution should always be a requirement for pointing out that both horns of a dilemma are unacceptable in an absolute sense.
Sometimes, the very act of critiquing both ‘horns’ is what prompts us to find a third way, meaning that such a critique has a longer-term value, even in the absence of a provided short-term solution. Consequently, I think there’s a downside to having too high of a bar for critiquing the default set of options.
To be clear, I think we need both the more ‘activist’ approach of rejecting options that don’t meet certain standards and the more ‘incrementalist’ approach of maximising on the margin. There’s a role for both, and I think that Farmer did a great job at the former, while much of the effective altruism movement has done a great job at the latter. That’s why I found it valuable to learn about his work.
Thanks for writing this, Gavin.
Reading (well, listening to) Mountains Beyond Mountains, I was deeply inspired by Farmer. I think a lot of people in the EA community would benefit from giving the book a chance.
Sure, I sometimes found his rejection of an explicit cost-effectiveness-based approach very frustrating, and it seemed (and still seems) that his strategy was at times poorly aligned with the goal of saving as many lives as possible. But it also taught me the importance of sometimes putting your foot down and insisting that none of the options on the table are acceptable; that we have to find an alternative if none of the present solutions meet a certain standard.
In economics and analytic philosophy (and by extension, in EA) we’re often given two choices and told to choose one, regardless of how unpalatable both may be. Maximisation subject to given constraints, it goes. Do an expensive airlift from Haiti to Boston to save the child or invest in cost-effective preventive interventions, it goes. And in the short term, the best way to save the most lives may indeed be to accept that that is the choice we have, to buckle down and calculate. But I’d argue that, sometimes, outright rejecting certain unpalatable dilemmas, and instead insisting on finding another, more ambitious way, can be part of an effective activist strategy for improving the world, especially in the longer term.
My impression is that this kind of activist strategy has been behind lots of vital social progress that the cost-effectiveness-oriented, incrementalist approach wouldn’t be suited for.
Oh, and I also quite liked your section on ‘the balance of positive vs negative value in current lives’!