This post seems to confuse Effective Altruism, which is a methodology, with a value system. Valuing the ‘impartial good’ or ‘general good’ is entirely independent of wanting to do ‘good’ effectively, whatever you take ‘good’ to be.
You articulate this confusion most clearly in the paragraph starting “Maybe it would help to make the implications more explicit.” You make two comparisons of goals that one can choose between (shrimp or human; a 10% chance of saving a million lives, or 1000 lives for sure). But the value of the options is not dictated by effective altruism; this depends on one’s valuation of shrimp vs human life in the first case, and one’s risk profile in the second.
You’re welcome to disagree with me about whether what’s most distinctive about EA is its values or its methodology, but it’s gratuitous to claim that I am “confusing” the two just because you disagree. (One might say that you are confusing disagreement with confusion.)
A simple reason why EA can’t just be a value-neutral methodology: that leaves out the “altruism” part. Effective Nazism is not a possible sub-category of EA, even if they follow an evidence-based methodology for optimizing their Nazi goals.
A second reason, more directly connected to the argument of this post: there’s nothing especially distinctive about “trying to achieve your goals effectively”. Cause-agnostic beneficentrism, by contrast, is a very distinctive value system that can help distinguish the principled “core” of EA from more ordinary sorts of (cause-specific) do-gooding.
But the value of the options is not dictated by effective altruism; this depends on one’s valuation of shrimp vs human life in the first case, and one’s risk profile in the second.
This is a misunderstanding of my view. I never suggested that EA “dictates” how to resolve disputes about the impartial good. I merely suggested that it (at core; one might participate in some sub-projects without endorsing the core principles) involves a commitment to being guided by considerations of the impartial good. The idea that value “depends on one’s valuation” is a fairly crude and contestable form of anti-realism. Obviously, if it’s possible for one’s valuations to be mistaken, then one should instead be guided by the correct way to balance these competing interests.