What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism
I think this fits into a bigger picture. To punch above your weight in terms of impact, you need to know something (or have a skill) that most other people don’t. Currently the thing you have to know is “there’s this thing called EA and earning to give”. As that meme spreads, you’d expect its impact to dwindle, assuming an upper bound on the total amount of good that can be done given current resources.
(number of earning-to-givers) × (average good done by earning to give) ≤ total amount of good available to be done.
The same equation applies to “knowing about everything that’s going on inside EA”, so creating better memes than earning to give doesn’t appear to solve the problem.
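The cap argument above can be sketched as a toy model. All the numbers here are made up purely for illustration; the point is only that once the (assumed) upper bound binds, average impact per earning-to-giver falls as the meme spreads:

```python
# Toy model with hypothetical numbers: total good is capped, so the
# average impact per earning-to-giver drops once the cap is hit.
TOTAL_GOOD_AVAILABLE = 1_000.0  # assumed upper bound on good to be done
GOOD_PER_GIVER = 10.0           # assumed uncapped impact per giver

def average_good(n_givers: int) -> float:
    """Average good per giver, respecting the overall cap."""
    if n_givers == 0:
        return 0.0
    total = min(n_givers * GOOD_PER_GIVER, TOTAL_GOOD_AVAILABLE)
    return total / n_givers

for n in (10, 100, 1_000):
    print(n, average_good(n))  # average falls from 10.0 to 1.0 as n grows
```

With 10 or 100 givers the cap never binds and each does the full 10 units of good; at 1,000 givers the same total is spread across everyone, so the average drops to 1.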
What would help though, would be:
finding where my model of what’s going on is an oversimplification, and focussing some attention there (maybe with xrisk the amount of good to be done is so huge that we don’t hit a limit for a while)
increasing the “total amount of good that can be done given current resources”.
The second one would seem to suggest increasing the total resources available for doing good—this isn’t quite the same as growing the economy, because many agents in the economy are selfish, but it feels related and probably involves an entrepreneurial spirit.
I think the EA algorithm would look something like this:
Do what everyone else in EA is doing
Think of something new, and if it can be shown to be effective (in the sense of growing the total good done, not just redirecting resources from somewhere else, even indirectly), then roll it out to the rest of the EA movement.
End ramble.
I don’t consider this rambling. I didn’t grok it the first time I read your comment, but it seems plenty insightful now. Thanks for helping out!
maybe with xrisk the amount of good to be done is so huge that we don’t hit a limit for a while
It seems to me the bottleneck here isn’t the output of good to be achieved in the future. However, the bottleneck could be the input of donation targets in the present. For example, every organization seeking to reduce existential risk that we can think of could hit a point at which further donation isn’t a good giving opportunity.
This scenario isn’t too implausible. The Future of Life Institute could grant the $10 million donation it received from Elon Musk to MIRI, FHI, and all the other low-hanging fruit for existential risk reduction. If those organizations receive more such windfalls, or retain their current body of donors, they might not be able to allocate further funds effectively. That is, they may run into room-for-more-funding problems for multiple years. Suddenly, effective altruism would need to seek brand-new opportunities for reducing existential risk, which could be difficult.
I think you’re imagining a scenario where every organization either:
is not seriously addressing existential risk, or
has run out of room for more funding
One reason this could happen would be organizational: organizations lose their sense of direction or initiative, perhaps by becoming bloated on money or dragged away from their core purpose by pushy donors. This doesn’t feel stable, as you can always start new organizations, but there may be a lag of a few years between noticing that existing orgs have become rubbish and getting new ones to do useful stuff.
Another reason this could happen would be more strategic: that humanity actually can’t think of anything it can do that will reduce existential risk. Perhaps there’s a fear that meddling will make things worse? Orgs like FHI certainly put resources into strategizing, so this setup wouldn’t be the result of a lack of creative thinking. It might be something more fundamental: ensuring the stability of a system as complex as today’s technological world may simply be a Really Hard Problem.
Even if we don’t hit a complete wall, we might hit diminishing returns. If there turns out to be some moral or practical reason why xrisk is only on a par with poverty and animals (in terms of importance), then EA would essentially be running out of stuff to do.
Which we eventually want—but not while the world is full of danger and suffering.