In your original post you talk about explicit reasoning; in your later edit, you switch to implicit reasoning. It feels like this criticism can't be both. I also think the implicit-reasoning critique just collapses into object-level disagreements, and the explicit critique just doesn't have much evidence.
The phenomenon you're looking at, for instance, is:
"I am trying to get at the phenomenon where people implicitly say/reason 'yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead.'"
And I think this might just be an ~empty set, compared to people having different object-level beliefs about what EA principles are or imply they should do, and also disagreeing with you on what the best thing to do would be.[1] I really don't think there are many people saying 'the best thing to do is donate to X, but I will donate to Y'. (References please if so; clarification in footnote [2].) Even on OpenPhil, I think Dustin just genuinely believes that worldview diversification is the best thing, so there's no contradiction where he implies the best thing would be to do X but in practice does Y.
I think letting this 'update you downwards' on the genuine interest of others in the movement (as opposed to, say, viewing them as human and fallible despite trying to do the best they can) feels... well, Jason used 'harsh'; I might use a harsher word to describe this behavior.
I think there might be a difference between the best thing (or the best thing according to simple calculations) and the right thing. I think people think in terms of the latter rather than the former, and unless you buy into strong or even naïve consequentialism we shouldn't always expect the two to go together.
Thanks, and I think your second footnote makes an excellent distinction that I failed to get across well in my post.
I do think it's at least directionally an 'EA principle' that 'best' and 'right' should go together, although of course there's plenty of room for naive first-order calculation critiques, heuristics/intuitions/norms that might push against some less nuanced understanding of 'best'.
I still think there's a useful conceptual distinction to be made between these terms, but maybe those ancillary (for lack of a better word) considerations relevant to what one thinks is the 'best' use of money blur the line enough to make it too difficult to distinguish these in practice.
Re: your last paragraph, I want to emphasize that my dispute is with the phrase 'using EA principles'. I have no doubt whatsoever about the first part, 'genuinely interested in making the world better'.
Thanks Aaron, I think your responses to me and Jason do clear things up. I still think the framing is a bit off, though:
I accept that you didn't intend your framing to be insulting to others, but using 'updating down' about the 'genuine interest' of others read as hurtful on my first pass. As a (relative to EA) high contextualiser, it's the thing that stood out for me, so I'm glad you endorse that the 'genuine interest' part isn't what you're focusing on; you could probably reframe your critique without it.
My current understanding of your position is that it is actually: 'I've come to realise over the last year that many people in EA aren't directing their marginal dollars/resources to the efforts that I see as most cost-effective, since I also think those are the efforts that EA principles imply are the most effective.'[1] To me, this claim is about the object-level disagreement on what EA principles imply.
However, in your response to Jason you say 'it's possible I'm mistaken over the degree to which "direct resources to the place you think needs them most" is a consensus-EA principle', which switches back to people not being EA? Or not endorsing this view? But you've yet to provide any evidence that people aren't doing this, as opposed to just disagreeing about what those places are.[2]
A secondary interpretation is: 'EA principles imply one should make a quantitative point estimate of the good of all your relevant moral actions, and then act on the leading option in a "shut-up-and-calculate" way. I now believe many fewer actors in the EA space actually do this than I did last year.'
For example, in Ariel's piece, Emily from OpenPhil implies that they place much lower moral weight on animal life than Rethink does, not that they don't endorse doing 'the most good' (I think this is separable from OP's commitment to worldview diversification).
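To make that concrete, here is a minimal sketch (in Python, with entirely made-up numbers; these are not Rethink's or Open Philanthropy's actual figures) of how two actors can both follow the same 'point estimate, then act on the leading option' procedure and still arrive at different donations, purely because they hold different moral weights:

```python
# Illustrative only: a toy "shut-up-and-calculate" comparison with made-up
# numbers, not Rethink's or Open Philanthropy's actual estimates.

def welfare_per_dollar(human_units, animal_units, animal_moral_weight):
    """Point estimate of welfare produced per dollar donated, with animal
    welfare discounted by a moral weight relative to humans."""
    return human_units + animal_moral_weight * animal_units

# Two hypothetical giving options.
options = {
    "human_charity":  {"human_units": 1.0, "animal_units": 0.0},
    "animal_charity": {"human_units": 0.0, "animal_units": 500.0},
}

# The same procedure, run under two different (hypothetical) moral weights.
for weight in (0.3, 0.0001):
    scores = {
        name: welfare_per_dollar(o["human_units"], o["animal_units"], weight)
        for name, o in options.items()
    }
    best = max(scores, key=scores.get)
    print(f"moral weight {weight}: leading option = {best} ({scores[best]:.2f} units/$)")

# moral weight 0.3:    leading option = animal_charity (150.00 units/$)
# moral weight 0.0001: leading option = human_charity  (1.00 units/$)
```

Under this toy model both actors are faithfully 'shutting up and calculating'; the divergence comes entirely from the object-level parameter, which is my point about where the real disagreement lies.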
[1] For context, I think Aaron thinks that GiveWell deserves ~0 EA funding, afaict.