If “it's a mistake not to do X” means “it's in alignment with the person's goals to do X”, then I think there are a few ways in which the claim could be false.
I see two cases where you want to maximize your contribution to the common good, but it would still be a mistake (in the above sense) to pursue EA:

1. You are already so close to optimal effectiveness that the gain from additional research into EA is smaller than what you would get by simply spending that time earning money to donate, or having a direct impact.
2. Pursuing EA causes you to miss another goal, or a set of goals, that you value at least as much in total.
If that's true, then we need to reduce the scope of the conclusion VERY much. I estimate that the fraction of people who care about the common good and for whom Ben's claim holds is in [1/100000, 1/10000]. So in the end, the claim can be made for hardly anyone, right?
I actually think there is more needed.