I disagree with a couple specific points as well as the overall thrust of this post. Thank you for writing it!
“A maximizing viewpoint can say that we need to be cautious lest we do something wonderful but not maximally so. But in practice, embracing a pragmatic viewpoint, saving money while searching for the maximum seems bad.”
I think I strongly disagree with this, because opportunities for impact appear to be heavy-tailed. Funding 2 interventions that are in the 90th percentile is likely less good than funding 1 intervention in the 99th percentile. Given this state of the world, spending much of our resources trying to identify the maximum is worthwhile. I think the default of the world is that I donate to a charity in the 50th percentile. And if I adopt a weak mandate to do lots of good (a non-maximizing frame, or an early EA movement), I will probably identify and donate to a charity in the 90th percentile. It is only when I take a maximizing stance and a strong mandate to do lots of good (or when many thousands of hours have been spent on global priorities research) that I will find and donate to the very best charities. The ratios matter, of course: if I were faced with donating $1,000 to 90th-percentile charities or $1 to a 99th-percentile charity, I would probably donate to the 90th-percentile charities, but if the numbers were $2 and $1, I should donate to the 99th-percentile charity. I am claiming:
- the distribution of altruistic opportunities is roughly heavy-tailed (the rough sketch below illustrates what this implies);
- the best (and maybe only) way to end up in the heavy tail is to take a maximizing approach;
- the “wonderful” thing that we would do without maximizing is, measured ex post (looking at the results in retrospect), significantly worse than the best thing;
- we can differentiate between the “wonderful” and the “maximal available” opportunities ex ante (beforehand) given research and reflection (this is the claim I think is weakest);
- the thing I care about is impact, and the EA movement is good insofar as it creates positive impact in the world (including for members of the EA community, but they are a small piece of the universe).
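To make the heavy-tail intuition concrete, here is a minimal sketch (my own toy model, not anything from the post): if impact per intervention is drawn from a lognormal distribution with a reasonably wide spread, one draw at the 99th percentile is worth more than two draws at the 90th percentile. The sigma value below is an illustrative assumption, not an empirical estimate.

```python
# Toy illustration of the heavy-tail claim (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
# Assume per-intervention impact is lognormal with sigma = 1.5 (an assumption,
# chosen only to make the distribution clearly heavy-tailed).
impacts = rng.lognormal(mean=0.0, sigma=1.5, size=1_000_000)

p50, p90, p99 = np.percentile(impacts, [50, 90, 99])
print(f"50th percentile impact: {p50:.2f}")
print(f"90th percentile impact: {p90:.2f}")
print(f"99th percentile impact: {p99:.2f}")

# Under this assumption, p99 / p90 = exp(1.5 * (2.33 - 1.28)) ≈ 4.8, so funding
# one 99th-percentile intervention beats funding two 90th-percentile ones.
print(f"one 99th vs two 90th: {p99:.2f} vs {2 * p90:.2f}")
```

Under a thin-tailed distribution (e.g., a normal with a positive mean), the comparison typically flips, which is why the heavy-tail claim is doing the real work in the argument.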
“There are presumably people who would have pursued PhDs in computer science, and would have been EA-aligned tenure track professors now, but who instead decided to earn-to-give back in 2014. Whoops!”
To me this seems like it doesn’t support the rest of your argument. I agree that the correct allocation of EA labor is not everyone doing AI Safety research, and that we need outreach and career-related resources to support people with various skills, but to me this is more so a claim that we are not maximizing well enough: we are not properly seeking the optimal labor allocation because we’re a relatively uncoordinated set of individuals. If we were better at maximizing at a high level, and doing a good job of it, the problem you are describing would not happen, and I think it’s extremely likely that we can solve this problem.
With regard to the thrust of your post: I cannot honestly tell a story about how the non-maximizing strategy wins. That is, when I think about all the problems in the world (pandemics, climate change, existential threats from advanced AI, malaria, mass suffering of animals, unjust political imprisonment, etc.), I can’t imagine that we solve them if we approach them the way we approach exercise or saving for retirement. If I actually cared about exercise or saving for retirement, I would treat them very differently than I currently do (and I have had periods in my life where I cared more about exercise and thus spent 12 hours a week in the gym). I actually care about the suffering and happiness in the world, and I actually care that everybody I know and love doesn’t die from unaligned AI or a pandemic or a nuclear war. Because I actually care, I should try really hard to make sure we win. I should maximize my chances of winning, and practically this means maximizing for some of the proxy goals I have along the way. And yes, it’s really easy to get this maximizing wrong and to neglect something important (like our own mental health), but that is an issue with the implementation, not with the method.
Perhaps my disagreement here is not a disagreement about what EA descriptively is, and is more a claim about what I think a good EA movement should be. I want a community that’s not a binary in/out, that’s inclusive and can bring joy and purpose to many people’s lives, but what I want more than those things is for the problems in the world to be solved: for kids to never go hungry or die from horrible diseases, for the existence of humanity a hundred years from now to not be an open research question, for billions of sentient beings around the world to not live lives of intense suffering. To the extent that many in the EA community share this common goal, perhaps we differ in how to get there, but the strategy of maximizing seems to me like it will do a lot better than treating EA the way I treat exercise or saving for retirement.
I agree that we mostly agree. That said, I think I either disagree with what you seem to recommend operationally, or we’re talking past one another.
“Funding 2 interventions that are in the 90th percentile is likely less good than funding 1 intervention in the 99th percentile. Given this state of the world, spending much of our resources trying to identify the maximum is worthwhile.”
Yes, we should do that second thing. But how much of our resources do we spend on identifying what exists? I’d agree that 1% of total EA giving going to cause exploration is obviously good, 10% is justifiable, and 50% is not even reasonable. That large a share was probably the right allocation when GiveWell was started, but it isn’t now: as a community we’ve looked at thousands of potential cause areas and interventions, and we are collectively sitting on quite a bit of money, an amount that seems to be increasing over time. Now we need to do things. The question now is whether we care about funding the 99.9th-percentile interventions we have, versus waiting for certainty that something is a 99.99th- or 99.999th-percentile intervention, and spending to find it, or saving to fund it.
“I think the default of the world is that I donate to a charity in the 50th percentile...”
Agreed, and we need to fix that.
“...And if I adopt a weak mandate to do lots of good (a non-maximizing frame, or an early EA movement), I will probably identify and donate to a charity in the 90th percentile.”
And that’s where this lost me. In the early EA movement, this was true, and I would have pushed for more research and less giving early on. (But they did that.) And for people who haven’t previously been exposed to EA, yes, there’s a danger of under-optimizing, though it is mostly mitigated about an hour after looking at GiveWell’s website. But the community is not at the point of looking at 90th percentile charities now. Continuing to treat the things we’ve found as if they were merely 90th percentile, acting as though we need to evaluate another million interventions to be certain, and saving until near-certainty is found, seems like an obviously bad decision today.
“I cannot honestly tell a story about how the non-maximizing strategy wins.”
I think there’s a conceptual confusion about optimizing versus maximizing here. If we take a binary maximal/non-maximal approach to altruism, we aren’t just optimizing. And I’m not advising non-optimizing, or caring less. I’m advising a pragmatic and limited application of the maximizing mindset, in favor of pragmatic optimization with a clear understanding that our instrumental goals are poorly operationalized. I listed a bunch of places where I think we’ve gone too far, and now that it has happened, we should at least stop pushing further in the places where we’ve seen it work poorly.