I’m not sure that you’re making the wrong call, but I think it’s sort of weird/hypocritical to advertise EA by making donation choices that sacrifice altruistic impact in order to seem more normal.
Another effect is that I’d much rather evangelize EA to the kind of people who understand donor lotteries quickly.
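(For anyone reading along who hasn't run into them: a donor lottery pools many people's donations and picks one participant, with probability proportional to contribution, to direct the entire pool. Each donor's expected money moved is unchanged, but the winner can justify putting serious research time into the allocation. Here's a minimal sketch of the mechanism, with hypothetical donors and amounts:)

```python
import random

# Hypothetical participants and contributions, for illustration only.
contributions = {"alice": 500, "bob": 2000, "carol": 7500}

pool = sum(contributions.values())

# Each donor wins with probability proportional to their contribution,
# so their expected money directed is (amount / pool) * pool == amount:
# exactly what they put in, same as donating directly.
winner = random.choices(
    list(contributions), weights=list(contributions.values())
)[0]

for donor, amount in contributions.items():
    print(f"{donor}: put in ${amount}, wins with probability {amount / pool:.0%}")

print(f"{winner} directs the full ${pool:,}")
```

Since expected money moved is unchanged, joining is free in expectation; the whole gain is that one person's research effort now covers the full pool.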
“Seem more normal” isn’t quite what I’m going for; it’s more that there’s value in doing things that are easier to explain, or that are clearly valuable even from worldviews different from your own. For example, someone choosing to live with roommates so they’re able to work for a non-profit or donate more is weird, but it’s not hard to explain, and people’s reaction is much more likely to be “I wouldn’t do that” than “that’s not actually good”.
I’d feel differently if I thought we were talking about a large amount of altruistic impact. If you think AI safety research or building pandemic shelters are what most needs doing, you shouldn’t go do something else just because it’s easier to explain to the average person. But I think the gains from lotteries are pretty low, low enough that once you factor in the downside of being more confusing they’re not worth it?
Another place this tradeoff comes up is with salary sacrifice: it’s more legible to donate money, but asking for a reduced salary has more altruistic impact.
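To put rough numbers on that tradeoff, here's a sketch under simplified, assumed US-style rules: a 7.65% payroll tax on both the employee and employer side, which a charitable deduction doesn't recover, with income tax ignored because the deduction roughly cancels it for an itemizing donor. The rates and structure here are illustrative assumptions, not a tax analysis:

```python
# Compare donating salary vs. asking for a reduced salary ("salary
# sacrifice"), under the simplified assumptions described above.

forgone = 10_000  # salary the employee is willing to give up (hypothetical)

# Option A: take the salary, donate it. Employee-side payroll tax is
# withheld first and isn't recovered by the charitable deduction.
employee_payroll = 0.0765
donated = forgone * (1 - employee_payroll)

# Option B: salary sacrifice. The money never becomes wages, so neither
# payroll tax applies; the employer keeps (or can pass on) the full
# amount plus their own payroll-tax savings.
employer_payroll = 0.0765
sacrificed = forgone * (1 + employer_payroll)

print(f"donate it yourself: ${donated:,.0f}")   # $9,235
print(f"salary sacrifice:   ${sacrificed:,.0f}")  # $10,765
```

Under those assumptions the sacrificed dollars go roughly 17% further ($10,765 vs $9,235), which is the extra altruistic impact being weighed against legibility.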
I think that moving from (normal charity) to (lottery) is a clear improvement for a much wider range of worldviews than moving from (normal charity) to (defer to GiveWell).
I do agree that “defer to GiveWell” is easier to explain though. Or slightly more precisely: I think it’s easier to explain what GiveWell does well enough that someone can understand why you might think it’s the best option, but harder to explain what GiveWell does in enough detail that someone can verify for themselves that it’s actually better than their alternatives.
It feels like a bit of a Catch-22. Purposefully making donations in a way that you know will lead to less overall altruistic impact, in order to increase first-order impact, also seems to run afoul of similar, albeit maybe less severe, issues.