What makes you skeptical of the intervention?
Nathaniel
This news seems like it increases the value of marginal donations this year relative to what we expected. Are 2022 donations also likely to be (much) more valuable relative to 2023? Is donating in December 2022 too late to take advantage of this effect?
Also, the quoted passage seems to assume that EA orgs optimize for their org’s impact rather than for the impact of the movement/good of the world. I’m not convinced that’s true. I would be surprised if EA orgs were attempting to poach workers they explicitly believed were having more impact at other organizations.
It does seem possible that orgs overestimate their own impact/the impact of roles they hire for. However, this would still lead to a much smaller effect than if they completely ignored the impact of candidates in their current roles, as the post seems to assume.
Thanks for link-posting, I enjoyed this!
I didn’t understand the section about EA being too centralized and focused on absolute advantage. Can anyone explain?
EA-in-practice is too centralized, too focused on absolute advantage; the market often does a far better job of providing certain kinds of private (or privatizable) good. However, EA-in-practice likely does a better job of providing certain kinds of public good than do many existing institutions.
And footnote 11:
It’s interesting to conceive of EA principally as a means of providing public goods which are undersupplied by the market. A slightly deeper critique here is that the market provides a very powerful set of signals which aggregate decentralized knowledge, and help people act on their comparative advantage. EA, by comparison, is relatively centralized, and focused on absolute advantage. That tends to centralize people’s actions, and compounds mistakes. It’s also likely a far weaker resource allocation model, though it does have the advantage of focusing on public goods. I’ve sometimes wondered about a kind of “libertarian EA”, more market-focused, but systematically correcting for well-known failures of the market.
Don’t global health charities provide private goods (bed nets, medicine) that markets cannot? Markets only supply things people will pay for and poor people can’t pay much.
X-risk reduction seems like a public good, and animal welfare improvements are either a public good or a private good where the consumers definitely cannot pay.
I take it that centralization is in contrast to markets. But it seems like in a very real way EA is harnessing markets to provide these things. EA-aligned charities are competing in a market to provide QALYs as cheaply as possible, since EAs will pay for them. EAs also seem very fond of markets generally (ex: impact certificates, prediction markets).
How is EA focused on absolute advantage? Isn’t earning to give using one’s relative advantage?
I think there’s just something that feels less reliable about individual, perhaps anonymous funders rather than an official, professional organization.
For example, one could worry about applications taking a long time to process because it takes time to coordinate with the individual funders, who are volunteers rather than employees. Or people might have concerns about giving bank info to the funders (or transferring large amounts of money some other way).
But if Basefund handles everything and the funders just feel like a detail that’s happening behind the scenes, this would be less of a concern.
Also, my earlier comment was unclear: I meant to say that the refund version “feels like something you can rely on.” Sorry about that; I’ve edited to clarify. But I understood that in both cases it’s capped at 50%.
Would the following structure be possible? (Apologies if this idea has already been considered.)
Your organization allows EAs to register their donations and then later apply to “refund” up to 50% of them. When someone applies, they receive a refund in the form of a grant from an established non-profit like OpenPhil. Essentially, this lets people apply for guaranteed transitional/emergency funds and uses donation history to verify that the person is aligned and not trying to game the system.
Since the original donation still goes directly and fully to the charity, there should (hopefully) be no legal issues. However, this proposal would require the insurance pool to be funded by something other than the “refundable” donations.
The only “gaming” of the system that one could do here would be to essentially force the pool to do a 1-for-1 match of any donation. (By donating, refunding 50%, donating that 50%, refunding 25%, etc.)
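To spell out the arithmetic behind that 1-for-1 match, here is a quick sketch (the $100 starting donation is just an illustrative number). Each round, the donor donates, refunds 50%, and re-donates the refund; the amounts form a geometric series:

```python
# Sketch of the "donate, refund 50%, re-donate" loop.
initial = 100.0   # illustrative starting donation
donated = 0.0     # total received by the charity
refunded = 0.0    # total paid out by the pool
amount = initial

for _ in range(50):        # iterate until the amounts are negligible
    donated += amount      # donate the current amount
    refund = amount / 2    # refund 50% of it from the pool
    refunded += refund
    amount = refund        # re-donate the refund next round

net_cost = donated - refunded  # donor's out-of-pocket cost
print(donated, refunded, net_cost)  # ~200, ~100, ~100
```

The series converges so that the charity receives about twice the donor’s net cost, with the pool covering the other half: an effective 1-for-1 match, as described above.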
Thanks for all your work on this! I agree with Charles that it would be worthwhile to check whether this would be a legitimate charity in the US and UK.
Personally, I think the old, “refund” version (where one can easily recoup 50% of one’s donations) is likely to be much more impactful than the new “bread fund” version.
To me, the bread fund version feels like a last resort on the order of asking relatives for help, while the refund version feels like something you can rely on. Most people are risk averse with respect to their personal finances, and I think the (perceived) uncertainty of going through individual funders would feel significant to most people.
I’m thinking about a case like an emergency fund that there’s a 10% chance you need to use at some point during the year. Personally, I wouldn’t want a 10% chance of needing to rely on the bread fund. However, I would be happy with a 10% chance that I need to reverse some of my donations.
In some sense, the refund version allows one to donate aggressively and then “undo” some of the donations later if necessary. This seems like it more directly encourages/enables donations.
Have you run a poll to see how people feel about the two versions?
Although it’s against longtermism and not EA, this recent blog post by Boaz Barak (a computer science professor at Harvard) might qualify.
I’m worried your subdivision misses a significant proportion of harms that don’t fall into either category. For instance, interactions that don’t involve malice or power dynamics and are innocuous in isolation but harmful when repeated. This repetition can be made more likely by imbalanced gender ratios.
I think being flirted with during the day at an EAG like Nathan discussed above is a good example of this. If you’re flirted with once over the weekend, perhaps it’s fine or even nice, especially if it’s from the person you found most interesting. But if you’re flirted with several times, you may start to feel uncomfortable.
Well, if a conference has 3x more men than women and 1-on-1s are matched uniformly at random, then women have 3x more cross-gender 1-on-1s than men. Assuming all people are equally likely to flirt with someone of a different gender than them, it’s very possible that the average man receives a comfortable amount of flirting while the average woman receives an uncomfortable amount.
And it probably gets worse when one considers that these are random variables and we don’t care about the average but rather about how many people exceed the uncomfortable threshold and to what degree. And perhaps worse again if certain “attractive” people are more likely to receive flirting.
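This back-of-envelope claim is easy to check with a quick simulation. The attendee counts, number of meetings, and flirt probability below are made-up assumptions purely for illustration:

```python
import random

random.seed(0)
N_MEN, N_WOMEN = 300, 100   # assumed 3:1 gender ratio
MEETINGS = 20_000           # 1-on-1s matched uniformly at random
P_FLIRT = 0.05              # assumed chance someone flirts in a cross-gender meeting

people = [("man", i) for i in range(N_MEN)] + [("woman", i) for i in range(N_WOMEN)]
flirts_received = {"man": 0, "woman": 0}

for _ in range(MEETINGS):
    a, b = random.sample(people, 2)   # pick a random pair
    if a[0] != b[0]:                  # only cross-gender meetings matter here
        for giver, receiver in ((a, b), (b, a)):
            if random.random() < P_FLIRT:
                flirts_received[receiver[0]] += 1

per_man = flirts_received["man"] / N_MEN
per_woman = flirts_received["woman"] / N_WOMEN
print(per_man, per_woman)  # the per-woman rate comes out roughly 3x the per-man rate
```

Under these assumptions the average woman receives about three times as much flirting as the average man, matching the ratio argument above, and the spread across individuals (the random-variable point) only makes the tail worse.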
Overall, my point is that behaviors and norms that would be fine with balanced gender ratios can be harmful with imbalanced ones. Unfortunately, we have imbalanced ones and we need to adapt accordingly.