I’m skeptical of the section of your argument that goes “weak EA doesn’t suffer from totalization, but strong EA does, and therefore EA does.”
The presence of a weak EA does not undermine the logic of a strong EA. If EA’s fundamental goal is to achieve “as much [good] as possible”, its default position will always point towards totalization.
Why do you take strong EA as the “default” and weak EA as something that’s just “present”? I could equally say
The presence of a strong EA does not undermine the logic of a weak EA. If EA’s fundamental goal of achieving as much good as possible is subject to various self-imposed exemptions, its default position does not point towards totalization.
Adjudicating between these boils down to whether strong EA or weak EA is the better “true representation” of EA. And in answering that, I want to emphasize—EA is not a person with goals or positions. EA is what EAs do. This is normally a semantic quibble because we use “EA has the position X” as a useful shorthand for “most EAs believe X, motivated by their EA values and beliefs”. But making this distinction is important here, because it distinguishes between weak EA (what EAs do) and strong EA (what EAs mostly do not do). If most EAs believe in and practice weak EA, then I feel like it’s the only reasonable “true representation” of EA.
You address this later on by saying that weak EA may be dominant today, but we can’t speak to how it might be tomorrow. This doesn’t feel very substantial. Suppose someone objects to utilitarianism on the grounds “the utilitarian mindset could lead people to do horrible things in the name of the greater good, like harvesting people’s organs.” They then clarify, “of course no utilitarian today would do that, but we can’t speak to the behavior of utilitarians tomorrow, so this is a reason to be skeptical of utilitarianism today.” Does this feel like a useful criticism of utilitarianism? Reasonable people could disagree, but to me it feels like appealing to the future is a way to attribute beliefs to a large group even when almost nobody holds them, because they could hold those views.
Moreover, I think future beliefs and practices are reasonably predictable, because movements experience a lot of path-dependency. The next generation of EAs is unlikely to derive their beliefs just by introspecting towards the most extreme possible conclusions of EA principles. Rather, they are much more likely to derive their beliefs from a) their pre-existing values, and b) the beliefs and practices of their EA peers and other EAs whom they respect. Both of these are likely to be significantly more moderate than the most extreme possible EA positions.
Internalizing this point moderates your argument to a different form: “EA principles support a totalizing morality”. I believe this claim to be true, but its significance as “EA criticism” is fairly limited when it is so removed from practice.