There's a question on the forum user survey: "How much do you trust other EA Forum users to be genuinely interested in making the world better using EA principles?"
This is one thing I’ve updated down quite a bit over the last year.
It seems to me that relatively few self-identified EA donors mostly or entirely give to the organization (or whatever else) that they would explicitly endorse as the single best recipient of a marginal dollar. (Do others disagree?)
Of course, the more important question is whether most EA-inspired dollars are given in such a way (rather than most donors). Unfortunately, I think the answer to this is "no" as well, seeing as OpenPhil continues to donate a majority of its dollars to human global health and development.[1] (I threw together a Claude artifact that gives a decent picture of how OpenPhil has funded cause areas over time and in aggregate.)[2]
Edit: to clarify, it could be the case that others have object-level disagreements about what the best use of a marginal dollar is. Clearly this is sometimes the case, but it's not what I am getting at here. I am trying to get at the phenomenon where people implicitly say/reason "yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead." I'm guessing, though, that this mostly takes the form of people failing to endorse their donations as optimally directed, rather than doing a bunch of ground-up reasoning and then deciding to ignore its conclusion.
[1] See Open Phil Should Allocate Most Neartermist Funding to Animal Welfare for a sufficient but not necessary case against this.

[2] Data is a few days old, and there's a bit of judgement in how to bin various subcategories of grants, but I doubt the general picture would change much if others redid the analysis/binning.
In your original post you talk about explicit reasoning; in your later edit, you switch to implicit reasoning. It feels like this criticism can't be both. I also think the implicit-reasoning critique just collapses into object-level disagreements, and the explicit critique just doesn't have much evidence.
The phenomenon you’re looking at, for instance, is:
"I am trying to get at the phenomenon where people implicitly say/reason 'yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead.'"
And I think this might just be an ~empty set, compared to people having different object-level beliefs about what EA principles are or imply they should do, and who also disagree with you on what the best thing to do would be.[1] I really don't think there are many people saying "the best thing to do is donate to X, but I will donate to Y". (References please if so; clarification in footnote [2].) Even on OpenPhil, I think Dustin just genuinely believes worldview diversification is the best thing, so there's no contradiction there where he implies the best thing would be X but in practice does Y.
I think letting this cause you to 'update downwards' on your view of the genuine interest of others in the movement (as opposed to, say, viewing them as human and fallible despite trying to do the best they can) feels… well, Jason used 'harsh'; I might use a harsher word to describe this behavior.

[1] For context, I think Aaron thinks that GiveWell deserves ~0 EA funding, afaict.
I think there might be a difference between the best thing (or the best thing according to simple calculations) and the right thing. I think people think in terms of the latter, not the former, and unless you buy into strong or even naïve consequentialism we shouldn't always expect the two to go together.
Thanks, and I think your second footnote makes an excellent distinction that I failed to get across well in my post.
I do think it's at least directionally an "EA principle" that "best" and "right" should go together, although of course there's plenty of room for critiques of naive first-order calculations, and for heuristics/intuitions/norms that might push against a less nuanced understanding of "best".
I still think there's a useful conceptual distinction to be made between these terms, but maybe those ancillary (for lack of a better word) considerations relevant to what one thinks is the "best" use of money blur the line enough to make the two too difficult to distinguish in practice.
Re: your last paragraph, I want to emphasize that my dispute is with the phrase "using EA principles". I have no doubt whatsoever about the first part, "genuinely interested in making the world better".
Thanks Aaron, I think your responses to me and Jason do clear things up. I still think the framing is a bit off, though:
I accept that you didn't intend your framing to be insulting to others, but using "updating down" about the "genuine interest" of others read as hurtful on my first pass. As a (relative to EA) high contextualiser, it's the thing that stood out to me, so I'm glad you endorse that the 'genuine interest' part isn't what you're focusing on; you could probably reframe your critique without it.
My current understanding of your position is that it is actually: “I’ve come to realise over the last year that many people in EA aren’t directing their marginal dollars/resources to the efforts that I see as most cost-effective, since I also think those are the efforts that EA principles imply are the most effective.”[1] To me, this claim is about the object-level disagreement on what EA principles imply.
However, in your response to Jason you say it's possible you're mistaken over the degree to which "direct resources to the place you think needs them most" is a consensus-EA principle, which switches back to people not being EA? Or not endorsing this view? But you've yet to provide any evidence that people aren't doing this, as opposed to just disagreeing about what those places are.[2]
[1] A secondary interpretation is: "EA principles imply one should make a quantitative point estimate of the good of all your relevant moral actions, and then act on the leading option in a 'shut-up-and-calculate' way. I now believe many fewer actors in the EA space actually do this than I did last year."
[2] For example, in Ariel's piece, Emily from OpenPhil implies that they have much lower moral weights on animal life than Rethink does, not that they don't endorse doing 'the most good'. (I think this is separable from OP's commitment to worldview diversification.)
This is one thing I’ve updated down quite a bit over the last year.
It seems a bit harsh to treat other user-donors’ disagreement with your views on concentrating funding on their top-choice org (or even cause area) as significant evidence against the proposition that they are “genuinely interested in making the world better using EA principles.”
It seems to me that relatively few self-identified EA donors mostly or entirely give to the organization (or whatever else) that they would explicitly endorse as the single best recipient of a marginal dollar. (Do others disagree?)
I think a world in which everyone did this would have some significant drawbacks. While I understand how that approach would make sense through an individual lens, and am open to the idea that people should concentrate their giving more, I'd submit that we are trying to do the most good collectively. For instance: org funding is already concentrated among too few donors. If (say) each EA currently donates to an average of 5 orgs, then a norm of giving 100% to a single org would cut the average org's donor count by 80% (five donor-org relationships per donor collapsing to one). That would impose significant risks on orgs even if their total funding levels were unchanged.
It's also plausible that the number of first-place votes an org (or even a cause area) would get isn't a super-strong reflection of overall community sentiment. If a wide range of people identified Org X as in their top 10%, then that likely points to some collective wisdom about Org X's cost-effectiveness even if no one has it at number 1. Moreover, spreading the wealth can be seen as deferring to broader community views to some extent, which could be beneficial insofar as one finds little reason to believe that wealthier community members are better at deciding where donation dollars should go than the community's collective wisdom. Thus, there are reasons, other than a lack of genuine interest in EA principles, that donors might reasonably choose to act in accordance with a practice of donation spreading.
Thanks, it’s possible I’m mistaken over the degree to which “direct resources to the place you think needs them most” is a consensus-EA principle.
Also, I recognize that "genuinely interested in making the world better using EA principles" is implicitly value-laden. To be clear, I do wish it were more the case, but I genuinely intend my claim as an observation that might have pessimistic implications depending on one's other beliefs, rather than as an insult or anything like it, if that makes any sense.