Thanks Aaron, I think your responses to me and Jason do clear things up. I still think the framing of it is a bit off though:
I accept that you didn't intend your framing to be insulting to others, but using "updating down" about the "genuine interest" of others came across as hurtful on my first read. As a (relative to EA) high contextualiser it's the thing that stood out for me, so I'm glad you endorse that the "genuine interest" part isn't what you're focusing on, and you could probably reframe your critique without it.
My current understanding is that your position is actually: "I've come to realise over the last year that many people in EA aren't directing their marginal dollars/resources to the efforts that I see as most cost-effective, since I also think those are the efforts that EA principles imply are the most effective."[1] To me, this claim is about the object-level disagreement on what EA principles imply.
However, in your response to Jason you say "it's possible I'm mistaken over the degree to which 'direct resources to the place you think needs them most' is a consensus-EA principle", which switches back to people not being EA? Or not endorsing this view? But you've yet to provide any evidence that people aren't doing this, as opposed to just disagreeing about what those places are.[2]
A secondary interpretation is: "EA principles imply one should make a quantitative point estimate of the good of all one's relevant moral actions, and then act on the leading option in a 'shut-up-and-calculate' way. I now believe many fewer actors in the EA space actually do this than I did last year."
For example, in Ariel's piece, Emily from OpenPhil implies that they have much lower moral weights on animal life than Rethink does, not that they don't endorse doing "the most good" (I think this is separable from OP's commitment to worldview diversification).