Great post!
I might be wrong, but I don’t think many EAs actually believe that, say, donating to GiveWell is the single most good they can do for the world. In the actual situation, given epistemic uncertainty, it happens to be a clear example of what you mentioned: “actions that they can be reasonably sure do a lot of good.” So there is an implicit belief, revealed in their actions, that merely doing a lot of good is not only an acceptable but a recommended behaviour.
However, I’m not sure it logically follows from this that seeking to do “the most” good should be abandoned as a goal. This is particularly the case if effective altruism is defined not as an imperative of any kind but as an overall approach that asks, “given that I’ve already decided on my own to be more altruistic, how can my time/money make the biggest difference?”
Despite being an unattainable ideal if taken literally, the “most” framing is still fruitful: it gives altruistic, open-minded, but resource-constrained people (which describes far more people than we might have thought) a scope-sensitive framework for prioritizing resource allocations.
To see why, let’s take an example. It could be argued that giving to the community theatre does not just a little good but a lot of good. If you are a billionaire giving millions to community theatres all over the world, there is a reasonable chance that you are doing a lot of good. (And such altruism should be praised, compared to spending those same millions on, say, lobbying for big tobacco.)
What effective altruism then brings to the table is to say “Look, if you have a sentimental attachment to giving to the community theatre, that’s fine. But if you’re indifferent towards particular means and your goal is simply to be a good person and help the world, the same money could carry you much further towards your goal if you did X.”
Of course, you can then say: sure, X sounds good, but what about Y? What about Z? And so on, ad infinitum. At some point, though, you have to make a decision. That decision will be far from perfect, since you lack perfect information. However, by using a scope-sensitive optimization framework, you will have achieved a lot more good than you would have otherwise.
So while optimization has its flaws, I would characterize it on the whole as one of those “wrong, but useful” models.
Neat stuff here—thank you for the thoughtful comment!
I agree that few people believe their choice of intervention is actually the most useful, and that we often lavish praise on people who do “just” a lot of good. For example, many people consider figures like Warren Buffett and Bill Gates very praiseworthy because, even though they have private jets, they still do a lot of good.
I also agree that maximization ought not to be revered as an imperative. An imperative to maximize, like thick consequentialism as a moral theory generally, is too demanding. Following this, I struggle to see why we need it at all. Folks who truly want to do a lot of good will still perform optimization calculations even if they aren’t explicitly trying to maximize. This makes maximization neither a normative nor a descriptive part of anything we do.
In your example, “the same money could carry you much further towards your goal if you did X”, there is no maximization rhetoric present. If you were using maximization as a “wrong but useful” model, you would more likely say something like, “I deduced that the same money could carry you farthest if you did X, so don’t give to the community theatre, and don’t do Y or Z either unless you can show me why they’re more effective than X.”
As an analogy, you don’t have to try to be the best philosopher of all time in order to produce great thinking.