Thanks for your remarks. I’m looking forward to her full article being published, because I agree that, as it stands, she’s been pretty vague. The full article might clear up some of the gaps here.
From what you and others have said, the most important gap seems to be “why we should not be consequentialists”, which is a question much bigger than just EA! I think there is something compelling here; I might reconstruct her argument something like this:
1. EAs want to do “the most good possible”.
2. Ensuring more systemic equality and justice is good.
3. We can do things that ensure systemic equality and justice; doing this is good (this follows from 2), even if it’s welfare-neutral.
4. If you want to do “the most good”, then you will need to do things that ensure systemic equality and justice too (from 3).
5. Therefore (from 1 and 4) EAs should care about more than just welfare.
6. You can’t quantify systemic equality and justice.
7. Therefore (from 5 and 6) if EAs want to achieve their own goals, they will need to move beyond quantification.
Consequentialists will probably reply that (3) is wrong: if you improve justice and equality but this doesn’t improve long-term well-being, it isn’t actually good. I suppose I believe that, but I’m unsure about it.
I think what you’ve written isn’t an argument against consequentialism; it’s an argument against trying to put numbers on things in order to rank the consequences?
Regardless, that wasn’t how I interpreted her case. It doesn’t feel like she cares about the total amount of systemic equality and justice in the world. She fundamentally cares about this from the perspective of the individual doing the act, rather than the state of the world, which seems importantly different. And to me, THIS is the part that breaks consequentialism.