I agree that we mostly agree. That said, I think I either disagree with what you seem to be recommending operationally, or we're talking past one another.
”Funding 2 interventions that are in the 90th percentile is likely less good than funding 1 intervention in the 99th percentile. Given this state of the world, spending much of our resources trying to identify the maximum is worthwhile.”

Yes, we should do that second thing. But how much of our resources do we spend on identifying what exists? I’d agree that 1% of total EA giving going to cause exploration is obviously good, 10% is justifiable, and 50% is not even reasonable. That may have been the right allocation when GiveWell was started, but it isn’t now; as a community we’ve looked at thousands of potential cause areas and interventions, and we are collectively sitting on quite a bit of money, an amount that seems to be increasing over time. Now we need to do things. The question is whether we fund the 99.9th-percentile interventions we have, or wait for certainty that something is a 99.99th- or 99.999th-percentile intervention, spending to find it or saving to fund it.
”I think the default of the world is that I donate to a charity in the 50th percentile...”
Agreed, and we need to fix that.
”...And if I adopt a weak mandate to do lots of good (a non-maximizing frame, or an early EA movement), I will probably identify and donate to a charity in the 90th percentile.”
And that’s where this lost me. In the early EA movement this was true, and I would have pushed for more research and less giving early on. (But they did that.) And for people who haven’t previously been exposed to EA, yes, there’s a danger of under-optimizing, though that is mostly mitigated within an hour of looking at GiveWell’s website. But the community is not at the point of looking at 90th-percentile charities now. Continuing to act as if the things we’ve found are merely 90th percentile, as if we need to evaluate another million interventions to be certain, and as if we should save until near-certainty is found, seems like an obviously bad decision today.
”I cannot honestly tell a story about how the non-maximizing strategy wins.”
I think there’s a conceptual confusion here between optimizing and maximizing. If we take a binary maximal/non-maximal approach to altruism, we aren’t just optimizing. And I’m not advising non-optimizing, or caring less. I’m advising a pragmatic and limited application of the maximization mindset, in favor of pragmatic optimization with a clear understanding that our instrumental goals are poorly operationalized. I listed a number of places where I think we’ve gone too far, and now that it’s happened, we should at least stop pushing further in the places where we’ve seen it work poorly.