Forget replaceability? (for ~community projects)
I’m interested in questions of resource allocation within the community of people trying seriously to do good with their resources. The cleanest case is donor behaviour, and I’ll focus on that, but I think it’s relevant for thinking about other resources too (saliently, allocation of talent between projects, or for people considering whether to earn to give). I’m particularly interested to identify allocation/coordination mechanisms which result in globally good outcomes.
Starting point: “just give to good things” view
I think from a kind of commonsense perspective, just trying to find projects that are doing good and giving them resources is a reasonable strategy. You should probably also be a bit responsive to how desperately they seem to need money.
The ideas of “effective altruism” might change your conception of what “doing good things” means (e.g. perhaps you now assess this in terms of how much they do per dollar spent, if you weren’t doing that already).
If you care about the counterfactual effects of your actions, it looks like it makes sense to think about what will happen if you don’t put resources in. After all, if something will get funded anyway, then your counterfactual contribution can’t be that large.
I think this is an important insight, but it’s only clean to consider when you model the rest of the world as unresponsive to your actions.
Coordination issues — donors of last resort
Say you have two donors who agree that Org X can make excellent use of money, but disagree about whether Org Y or Org Z is the next best use of funds. Each donor would prefer that the other fills X’s funding gap, so that marginal funds go to the place that seems better to them. If the donors are each trying to maximise their counterfactual impact (as estimated by them), they might end up entering into a game of chicken where each tries to precommit to not funding X in order to get the other to fund it. This is potentially costly in time/attention to both donors, as well as to X (and carries a risk that X fails to fill all of its funding needs as the donors engage in costly signalling that they’re prepared to let that happen).
A simple patch to this would be to have norms where the donors work out their fair shares of X’s costs, and each pay that. (But working out who the relevant pool of possible donors is, and what fair shares really look like given varying empirical views, is quite complicated.)
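To make the patch concrete, here is a minimal sketch of one possible fair-share rule: splitting X’s funding gap in proportion to each donor’s total budget. Both the rule and the numbers are illustrative assumptions on my part, not something the post prescribes (and, as noted above, real fair shares would also have to account for differing empirical views).

```python
# Hypothetical sketch: split a funding gap across donors in proportion
# to their budgets. The rule and figures are illustrative assumptions.

def fair_shares(gap: float, budgets: dict[str, float]) -> dict[str, float]:
    """Return each donor's share of `gap`, proportional to their budget."""
    total = sum(budgets.values())
    return {donor: gap * budget / total for donor, budget in budgets.items()}

# Two donors agree Org X has a $300k gap; they split it by budget size,
# and each donor's remaining funds go to Y or Z per their own views.
shares = fair_shares(300_000, {"Donor A": 2_000_000, "Donor B": 1_000_000})
# → {"Donor A": 200000.0, "Donor B": 100000.0}
```

A proportional-to-budget rule is only one option; shares could equally be weighted by how highly each donor rates X relative to their alternatives.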
Coordination issues such as the above (see the comments for another example) are handled fairly gracefully in the for-profit world. The investors who are most excited about a company give it money (and prefer to be the ones to give money, since they get a stake for their investment, rather than holding out and hoping that other investors will cough up). And companies can attract employees who would add a lot of value by offering great compensation.
I’ve been excited about the possibility of getting some of these benefits via some explicit mechanism for credit assignment. Certificates of impact are the best-known proposal for this, although they aren’t strictly necessary.
What would a world with some kind of impact market look like, anyway? At least insofar as it relates to allocation of resources to projects, I think that it would involve people being excited to fill funding gaps if they were being offered a decent rate by the projects in question. Projects above some absolute bar of “worth funding” could get the resources they needed, so long as the people running them were willing to offer a large enough slice of their impact. It would be higher prestige for project founders to hold onto a larger share of their impact, as this would imply a higher valuation.
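The prestige point is just the familiar startup arithmetic applied to impact. A hypothetical worked example (the figures are my own, purely for illustration): a project that raises a given budget while selling a smaller slice of its impact has, by implication, a higher “impact valuation”.

```python
# Illustrative arithmetic (my assumption, not a mechanism from the post):
# if funders pay `raised` for a fraction `equity_sold` of a project's
# impact, the implied valuation follows directly, as with startup equity.

def implied_valuation(raised: float, equity_sold: float) -> float:
    """Implied total 'impact valuation' given the price paid for a slice."""
    return raised / equity_sold

# Raising $1M for 25% of the impact implies a $4M valuation;
# raising the same $1M for only 10% would imply $10M — hence the
# prestige of founders holding onto a larger share.
v_a = implied_valuation(1_000_000, 0.25)  # → 4_000_000.0
v_b = implied_valuation(1_000_000, 0.10)  # → 10_000_000.0
```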
People considering what jobs to take would look at the impact equity they were being offered for each role. People with good ideas might often try to turn them into startups, as the payoff for success would be high. People who had invested early in successful projects might become something like professional VCs, and build up more capital to invest in new things.
Implicit impact markets without infrastructure
There are some downsides to setting up explicit markets. They might be weird; they might be high overhead to run; and there might be issues we don’t notice until we try to implement them at scale.
So if we’ve identified the type of social behaviour that we’d like to achieve via markets, why don’t we just aim at that behaviour directly? We have the usual tools for creating social norms (creating common knowledge, praising the desired behaviour, etc.). I think this could work decently well, particularly as the recommended behaviour would be close to a commonsense position.
What would the recommendations actually be? There’s a certain audience to whom you can say “just imagine there was a (crude) market in impact certificates, and take the actions you guess you’d take there”, but I don’t think that’s the most easily digested advice. I guess the key points are:
- For projects which are drawing their support from communities of people making a serious attempt to do good in the world, feel free not to worry about the counterfactuals if you don’t commit resources
  - There’s a question about where to draw the boundary
  - I think because the suggested behaviour ends up tracking common sense reasonably well, we can be reasonably generous — not restricting to a niche audience who’s read a particular blog post; but of course it’s still worth thinking about counterfactuals/replaceability for contributions to endeavours which aren’t even engaged in the pursuit of making a big difference
- Instead, be attentive/responsive to how strongly projects are asking for resources — but only if the asks are made publicly (or it is publicly confirmed how strong they are)
  - If we’re going to be responsive to bids of desperation, we need some way to keep them honest, and not create incentives to systematically overstate need
    - As an extreme example, it would obviously be bad if a project founder was telling each of their dozen staff that their personal contributions were absolutely critical, such that each of them imagined they were personally credited with the majority of the impact from the project
    - Even if that were the true counterfactual in each case, it could easily lead to everyone staying to work on the project when it would be better for the entire project to fold and release them to do other things
    - A proper impact market would prevent this, because it wouldn’t be possible to assign more than 100% of the credit for a given project
  - Asking that the strength of asks be made public is a way of trying to create that honesty
    - “Strength of asks” could be made public in purely qualitative terms
    - It could also be specified numerically, as explicit credit allocation, e.g. “if we raise our $1M budget, we think that 40% of our impact this year will be attributable to our donors”
    - I think this seems achievable for funding asks; it’s more complicated for asks to individuals to join projects, as there might be privacy reasons not to make the strength of those asks public (compensation figures are often treated as sensitive for similar reasons)
  - Really what’s needed isn’t so much “public” as “a way to verify that promised credit doesn’t add up to more than 100%”
    - You could bring in a trusted third party to verify things; it feels like there could also be a simple software solution (so long as everything has been made quantitative)
- It can still make sense to pay some attention to the general funding situation and possible counterfactuals, as that situation can give independent data points on how strong the ask really is (or should be)
  - e.g. if an org already has a long funding runway, it should have a hard time making strong asks for money, and we should be a bit suspicious if it seems to be doing so anyway
- Socially, give more credit to people who have stepped in to fill strong needs, and less credit to the people making strong asks
  - And the converse: give more credit to people who make it explicit that their asks are unusually weak (and a bit less credit to people filling such needs)
- Take founding valuable projects (or providing seed funding for things that turn out well but were having difficulty finding funding) as particularly praiseworthy
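As one guess at what the “simple software solution” mentioned above might look like: a shared ledger that records the credit fraction promised to each contributor and refuses any promise that would push a project’s total above 100%. The class, its API, and the figures are all hypothetical — a sketch of the constraint, not a real system.

```python
# Hypothetical sketch of a credit-verification tool: promises of impact
# credit are recorded per contributor, and any promise that would push
# the total above 100% is rejected. All names here are my own invention.

class CreditLedger:
    def __init__(self) -> None:
        self.promises: dict[str, float] = {}  # contributor -> fraction of impact

    def promise(self, contributor: str, fraction: float) -> None:
        """Record (or update) a promised credit share, enforcing sum <= 100%."""
        other = sum(self.promises.values()) - self.promises.get(contributor, 0.0)
        if other + fraction > 1.0:
            raise ValueError(
                f"Promised credit would total {(other + fraction):.0%} (> 100%)"
            )
        self.promises[contributor] = fraction

ledger = CreditLedger()
ledger.promise("donors", 0.40)     # the "$1M budget, 40% to donors" example
ledger.promise("staff", 0.50)
# ledger.promise("founder", 0.20)  # would raise ValueError: totals 110%
```

This is the analogue of a cap table check in startup finance: the exaggerated-asks failure mode above is exactly the case where promised credit sums past 100%, and a check like this (or a trusted third party doing the same arithmetic) makes that overstatement visible.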
How good would this be? It seems to me like it would very likely be better than the status quo (perhaps not that much better, as it’s not that dissimilar). I don’t know how it would compare to a fully explicit impact market. It’s compatible with gradually making things more explicit, though, so it might provide a helpful foundation to enable some local experimentation with more explicit structures.
Acknowledgements: Conversations with lots of people over the last few years have fed into my understanding of these issues; especially Paul Christiano, Nick Beckstead, and Toby Ord. The basic idea of trying to get the benefits of impact markets without explicit markets came out of a conversation with Holden Karnofsky.