Interesting write-up, thanks. However, I don’t think that’s quite the right claim. You said:
The claim: If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.
But this claim isn’t true. If I only want to make a contribution to the common good, but I’m not at all fussed about doing more good rather than less (given whatever resources I’m deploying), then I don’t have any reason to pursue the project of effective altruism, which you say is searching for the actions that do the most good.
An alternative version of the claim that would be true is:
New claim: If you want to contribute to the common good as much as possible, it’s a mistake not to pursue the project of effective altruism.
But this claim is effectively a tautology, seeing as effective altruism is defined as searching for the actions that do the most good. (I suppose someone who thought it was just totally obvious how to do the most good would see no reason to pursue the project of EA.)
Maybe the claim of EA should emphasise the non-obviousness of what doing the most good is. Something like:
If you want to have the biggest positive impact with your resources, it’s a mistake to just trust your instincts (or common sense?) about what to do rather than engage in the project of effective altruism: to thoroughly and carefully evaluate what does the most good.
This is an empirical claim, not a conceptual one, and its justification would seem to be the three main premises you give.
That’s an interesting point. I was thinking that most people would say that if my goal is X, and I achieve far less of X than I easily could have, then that would qualify as a ‘mistake’ in normal language. I also wondered whether another premise should be something very roughly like ‘maximising: it’s better to achieve more rather than less of my goal (if the costs are the same)’. I could also see that contrasting the claim with some kind of alternative approach could be another good option.
If your goal is to do X, but you’re not doing as much as you can of X, you are failing (with respect to X).
But your claim is more like “If your goal is to do X, you need to do Y, otherwise you will not do as much of X as you can”. The Y here is “the project of effective altruism”. Hence there needs to be an explanation of why you need to do Y to achieve X. If X and Y are the same thing, we have a tautology (“If you want to do X, but you do not-X, you won’t do X”).
In short, it seems necessary to say what is distinctive about the project of EA.
Analogy: say I want to be a really good mountain climber. Someone could say, oh, if you want to do that, you need to “train really hard, invest in high quality gear, and get advice from pros”. That would be helpful, specific advice about the right means to achieve my end. Someone who says “if you want to be good at mountain climbing, follow the best advice on how to be good at mountain climbing” hasn’t yet told me anything I don’t already know.
I was thinking that most people would say that if my goal is X, and I achieve far less of X than I easily could have, then that would qualify as a ‘mistake’ in normal language
I see where you’re coming from but I actually agree with Michael.
In reality, a lot of people are interested in contributing to the common good but aren’t actually interested in doing this to the greatest extent possible. A lot of people are quite happy to engage in satisficing behaviour, whereby they do some amount of good that gives them a certain amount of satisfaction, but then forget about doing further good. In fact this will be the case for many in the EA community, except that the satisficing level is likely to be much higher than average.
So, whilst it’s possible this is overly pedantic, I think “the claim” could use a rethink. It’s too late in the evening for me to be able to advise on anything better, though...
I think adding a maximizing premise like the one you mention could work to assuage these worries.
I actually think there is more needed.
If “it’s a mistake not to do X” means “it’s in alignment with the person’s goal to do X”, then I think there are a few ways in which the claim could be false.
I see two cases where you want to maximize your contribution to the common good, but it would still be a mistake (in the above sense) to pursue EA:
1. You are already close to optimal effectiveness, and the increase in effectiveness from some additional EA research is so small that you would be maximizing by just using that time to earn money and donate it, or to have a direct impact (a rough illustration is sketched below the list).
2. Pursuing EA causes you to not achieve another goal which you value at least equally, or a set of goals which, in total, you value at least equally.
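To make case 1 concrete, here is a purely illustrative calculation; every number in it is hypothetical rather than taken from the post. Suppose some extra research could improve the effectiveness of your giving by about 1%, you expect to donate $10,000, and the research would take 100 hours that you could otherwise spend earning roughly $30/hour to donate:

$$
\text{gain from extra research} \approx 0.01 \times \$10{,}000 = \$100
\qquad \text{vs.} \qquad
\text{extra donations from the same time} \approx 100 \times \$30 = \$3{,}000
$$

Under those made-up numbers the additional research costs far more good than it adds, which is the structure of case 1; with different numbers (e.g. someone far from optimal effectiveness) the comparison flips.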
If that’s true, then we need to narrow the scope of the conclusion VERY much. I estimate that the fraction of people caring about the common good for whom Ben’s claim holds is somewhere in [1/100000, 1/10000]. So in the end the claim can be made for hardly anyone, right?