The authors are claiming that anything that helps the far future can also be accomplished by helping people in the present.
This is in tension with "We Are Not in a Position to Predict the Best Actions for the Far Future", isn't it?
I would guess that some of the welfare benefits of democracy and human rights both flow from and flow to greater economic prosperity, so holding those things constant may suppress some of the "true" effect.
Is there anything interesting about supporting different currencies here? E.g. if I pledge to give $25 per month, but I'm actually likely to donate in £, do you tell me a dollar amount and ask me to convert it at time of donation? If so, do you need me to submit evidence of the FX rate in addition to evidence of the donation? Or perhaps you could ask me for my preferred currency, and send me pre-converted amounts when it comes time to do the donations?
you could look at 10 analogous industries, see what processes or institutions are valuable
I feel like you're making this sound simple when I'd expect questions like this to involve quite a bit of work, and potentially skills and expertise that I wouldn't expect people at the start of their careers to have yet.
Do you have any specific ideas for something that seems obviously missing to you?
However, I worry that in the EA community, there's an overemphasis on the "scout" mindset: being skeptical of one's own work and too quick to defer to critiques from others.
Perhaps a minor point: the scout mindset encourages skepticism, but not deference. There's a big difference between deferring to a critique vs. listening to and agreeing with it. I think we should hesitate to describe people as deferring to others unless either (a) they say they are doing so or (b) we have some specific reason to think they can't be critically analysing the arguments for themselves.
I think this might merit a top-level post instead of a mere shortform
Since the discussion on this thread, I've had the view that the meat-eater problem is dwarfed by the cause prioritisation problem, in the sense that if you give money to a global health and development charity, overwhelmingly the biggest harm to animals is that you didn't give that money to animal welfare charities: the actual negative effect of your donation is likely very small by comparison.
(There's obviously an act-omission difference here, but I don't personally find that an important difference.)
80k could be much better than nothing and yet still be missing out on a lot of potential impact, so I think your first paragraph doesn't refute the point.
Yeah, in retrospect maybe it was kind of doomed to expect that I might influence FarmKind's behaviour directly, and maybe the best I could hope for is influencing the audience to prefer other methods of promoting effective giving.
If you're just saying "this other case might inform whether and when we think donation matches are OK", then sure, that seems reasonable, although I'm really more interested in people saying something like "this other case is not bad, so we should draw the distinction in this way" or "this other case is also bad, so we should make sure to include that too", rather than just "this other case exists".
If you're saying "we have to be consistent, going forward, with how we treated OpenPhil / EA Funds in the past", then surely no: at a minimum we also have the option of deciding it was a mistake to let them off so lightly, and then we can think about whether we need to do anything now to redress that omission. Maybe now is the time we start having the norm, having accepted we didn't have it before?
FWIW having read the post a couple of times I mostly don't understand why using a match seemed helpful to them. I think how bad it was depends partly on how EA Funds communicated to donors about the match: if they said "this match will multiply your impact!" uncritically then I think that's misleading and bad; if they said "OpenPhil decided to structure our offramp funding in this particular way in order to push us to fundraise more, mostly you should not worry about it when donating", that seems fine, I guess. I looked through my e-mails (though not very exhaustively) but didn't find communications from them that explicitly mentioned the match, so idk.
Yes, but then the standard donor doesn't care about having the influence either, right?
oh, I also want to add that:
I think it's relevant and useful that I noticed that the "influence bonus" the standard donor gets comes from the bonus donor losing influence.
I didn't notice this at first! But I thought to myself "ok, Jason's argument that the standard donor gets more than $1 of value sounds right, but I know there's only $2 of value in the inputs. Where is the extra value coming from? If it's being created, how is it being created?"
This kind of question really only makes sense if you stand by the "$2 in, $2 out" kind of thinking, so I think this is a good example of why that's a useful principle: it led me to clarify my thinking and notice something new.
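To make that accounting concrete, here's a minimal sketch of a simplified 1:1 match (a toy model with made-up numbers, not a description of FarmKind's actual mechanism):

```python
# Toy model of a 1:1 donation match: illustrative only, not FarmKind's actual scheme.

def run_match(standard_donation: float, bonus_pool: float):
    """Return (total paid out, dollars steered by the standard donor, dollars still steered by the bonus donor)."""
    matched = min(standard_donation, bonus_pool)       # bonus pool matches 1:1, up to its size
    total_out = standard_donation + matched            # the match creates no new money
    standard_influence = standard_donation + matched   # standard donor now steers both dollars
    bonus_influence = bonus_pool - matched             # bonus donor only steers whatever stays unmatched
    return total_out, standard_influence, bonus_influence

total, standard_influence, bonus_influence = run_match(1.0, 1.0)
assert total == 2.0               # $2 in, $2 out
assert standard_influence == 2.0  # the standard donor's "influence bonus"...
assert bonus_influence == 0.0     # ...is exactly the influence the bonus donor gave up
```

In this toy model the dollars balance, so any extra influence one party gains has to be influence another party gave up, which is the observation above.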
If providing funds that will contribute to a match has the effect of increasing funds generated to effective charities and is transparent and forthright about the process involved, I don't really see the problem.
I think this is a pretty interesting aspect of the discussion, and I can see why people would not only agree with this but think it kind of obvious. Here are some reasons why I don't think it's so obvious:
I think it's worth noticing when you offer other people something that you wouldn't take yourself. Why is it good for them, but not good for you?
I think you should include your assessment of what value the deal has to someone else in your assessment of whether you have successfully communicated what the deal is. As above, if people are taking an offer that you wouldn't take, and that as far as you can see doesn't benefit them, I think you should take that as evidence that you haven't explained the deal (Jason makes this point in another comment).
I think a useful analogy is a casino, say a roulette wheel. The rules of roulette are pretty simple, and completely public. People play willingly, often believing it's in their interests to do so. Yet I think roulette wheels are kind of exploitative, even verging on predatory, and I wouldn't feel comfortable running one even to donate the proceeds to charity. Again, opinions can differ here, but to me, facilitating and encouraging other people to make decisions that you know are bad isn't ethical behaviour, even if they ultimately make the decision freely and willingly. (To be clear, I think offering a donation match is less unethical than running a casino, but I bring up the casino to illustrate how offering people a free and informed choice sometimes still doesn't sit right with me.)
I do see how this could be adversarial or uncooperative. "Do X, or else I'll stop buying medicine for dying kids." What?!
Right, I feel like it's easy to not notice this framing, but it feels pretty weird once you do frame it in that way.
I do agree that there are some circumstances under which donation matches make sense, and increasing marginal returns to donations is perhaps one of them (which is not exactly what you said, I think, but it's similar). I just think these circumstances tend to be relatively niche, and I don't see how e.g. the FarmKind case is one of them.
But I feel strongly that it is not tenable to have a general community norm against something your most influential actors are doing without pushback, and checking the comments on the linked post I'm not seeing that pushback.
I hear this as "you can't complain about FarmKind, because you didn't complain about OpenPhil". But:
Jeff didn't complain about the GiveWell match at the time it was offered, because he didn't notice it. I don't think we can draw too much adverse inference from any specific person not commenting on any specific situation.
A big part of my motivation to leave a comment on the FarmKind post was anticipating that they might not have heard about common objections to donation matching, whereas I think Claire Zabel probably has heard of them.
Similarly, one might have differing expectations about how interested one organisation vs. the other would be in your feedback, or how likely it is to change course based on that feedback (I don't really have a considered view on how FarmKind vs. OpenPhil score here).
idk, sometimes I feel like leaving comments and sometimes I don't?
I think it's better to focus on the actual question of whether matches are good or bad, or what the essential features are for a match to be honest or not. Based on the answer to that question, we can decide "it was a mistake not to push back more on OpenPhil" or "what OpenPhil did was fine", if we think that's still worth adjudicating.
+1 to the idea that AIM has idiosyncratically low senior staff costs that I think are pretty strongly influenced by Joey Savoie's personal attitude to cost minimization and sacrifice; I think the main scarce resource that AIM spends is not exactly "amount of money" but more like "number of Joeys", and that if you wanted to start a second AIM you wouldn't get it nearly as cheaply.
One point of view I haven't seen represented so much here is that it can simultaneously be true that publicly allying yourself with the community is good for the community, and yet overall not the right call. I'd like e.g. politicians to be able to take our good ideas and implement them without giving anything back, if that's the best way to get those ideas done. (That said, I'd like them to read this post and weigh up the costs and benefits, and particularly consider the points about transparency and ensure they aren't being outright deceitful about their links to the movement.)
One thing I'd like to add is that when there's pushback on donation matching initiatives, the discussion often focuses on "is the communication of what happens clear?" and "are donors misled?" and so on. I think (lack of) honesty and clarity is the biggest problem with most donor matches, but even where those problems are resolved, I think there are still other problems (e.g. it just seems kind of uncooperative / adversarial to refuse to do something good that you're willing and able to do unless someone else does what you want). My favoured outcome is not "honest matching" but "no matching", and I'd like the discussion to at least have that on the table as a possible conclusion.
(To be clear, I think neither Giving Multiplier nor FarmKind is intending to be dishonest, although it's still unclear to me whether they are unintentionally misleading people. I perhaps think that they are applying ordinary standards of truthfulness and care, whereas I'd like us to have extraordinary standards.)
You say "find a compromise" as if this is a big and contentious issue, but I… don't really see it coming up a lot? I know Kat Woods has recently posted elsewhere about how lots of unpaid internships are being suppressed because random bystanders on the internet object to them, but I just don't actually see that happening. I would imagine that often management capacity is more of a bottleneck than pay anyway?
Yeah, this seems a little… sneaky, for want of a better word. It might be useful to imagine how you think the non-EA donors would feel if the "commission" were proactively disclosed. (Not necessarily terribly! After all, fundraising is often a paid job. Just seems like a useful intuition prompt.)