Certificates of impact
0. Introduction
1. Some existing challenges
Evaluating philanthropic opportunities is difficult, and most are not very good. Predictions are expensive and inaccurate, and often infeasible for small donors. The problem is worst when evaluating novel interventions. Prizes can address some of these difficulties, but have their own set of problems.
One person might have the ability to do a project, another the desire to fund it, and a third the knowledge to evaluate it. Coordination is hard, and existing incentives misaligned, both for disseminating good information and executing good projects.
Thinking about crowding out and causal responsibility is hard: if I don’t do something, will someone else? Does it matter which of Oxfam’s activities I fund? How is causal responsibility divided between donors and employees? Between the philanthropist who buys malaria nets and the government which distributes them?
Prices are an elegant and flexible system for communication and coordination, which are often unavailable for altruists.
Reasoning about leverage, and interacting with funders with different priorities, is tough. It’s easy to end up neglecting a large possible upside or engaged in zero-sum conflict.
2. Certificates of impact
It was issued by the group or individual that performed the associated activity.
It is the unique certificate associated with that activity.
3. Commentary
Some simple examples
So what?
Allocating certificates requires explicit and transparent allocation of causal responsibility, both within teams and between teams and donors. In addition to the obvious cultural effects (which I would welcome despite the costs), this aligns individual incentives with altruistic goals, reducing incentives to mislead donors, volunteers, and employees.
Exchanging certificates for money leads to a more consistent conversion between good done and compensation, especially if funders use a mix of prizes and grants. It can prevent good deeds from falling through the cracks, ameliorate some winner-take-all PR dynamics, and eliminate some zero-sum conflict. It also makes it easier to move between different funding modes.
Purchasing certificates helps funders with different values coordinate; if I see a good deal I have an incentive to take it, without worrying about whether it might be an even better deal for someone else.
The ability to resell certificates makes a purchase less of a commitment of philanthropic capital, and less of a strategic decision; instead it represents a direct vote of confidence in the work being funded.
Some more examples
Some theory
-1. Note
I wonder if it would make sense to sell certificates of impact as non-fungible tokens (NFTs), given that NFTs are emerging as a lucrative way of publicly representing the “ownership” of non-physical assets like digital artwork.
Paul Graham writes that Noora Health is doing something like this.
https://twitter.com/Jess_Riedel/status/1389599895502278659
https://opensea.io/assets/0x495f947276749ce646f68ac8c248420045cb7b5e/96773753706640817147890456629920587151705670001482122310561805592519359070209
On the one hand, the current NFT hype would plausibly lead to a much bigger inflow of money from speculators and general hype-buyers that could help get the market off the ground. One can also tap into useful existing infrastructure (coding, software, etc.). On the other hand, this would draw in a lot of noise traders (which defeats the idea of impact certificates leading to greater clarity about the relative value of different altruistic projects), and could also make EA look weird and even more connected to crypto than it already is. Also, going into crypto now is a bit like building a dot-com business in 1998 … You better be sure to build something of actual value, and mentally prepare for a lot of turbulence.
Certificates seem like a nice match for NFTs because if you are serious about the status/prestige thing, you do want a globally visible registry so you can brag about what impacts you retrocausally funded; and for creators, this makes a lot more sense than doing one-off negotiations over, like, email.* I was thinking about Harberger taxes on NFTs, and about using a ratcheting-up price as a mechanism to ensure that NFT collectibles can always be transferred without needing an ongoing tax; that doesn’t work because of wash trades with oneself (esp. powered by flash loans), but something like that might make sense for certificate-of-impact NFTs.
A CoI NFT would be an NFT linked to a specific action or object, such as a research paper; it would be sold by the responsible agent; a CoI NFT contains the creator’s address; a CoI NFT can be purchased/transferred at any time from its current owner by sending the last price + N ETH to the contract, where the last owner gets the last price as a refund and the creator gets the marginal N ETH as further payment for their impact.
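A minimal Python sketch of that transfer rule (a toy model of my own, not an actual Ethereum contract; the class name and ETH figures are placeholders mirroring the worked example in the next paragraph):

```python
# Toy model of the ratchet rule above (a sketch, not a real smart contract):
# anyone may take ownership by paying more than the last price; the previous
# owner is refunded what they paid, and the creator keeps the marginal amount.

class RatchetCert:
    def __init__(self, creator):
        self.creator = creator        # whoever performed the impactful action
        self.owner = creator          # the cert starts out owned by its creator
        self.last_price = 0.0         # in ETH, for concreteness
        self.paid_to_creator = 0.0    # running total the creator has received

    def buy(self, buyer, offer):
        if offer <= self.last_price:
            raise ValueError("offer must exceed the last price")
        refund = self.last_price                           # previous owner breaks even
        self.paid_to_creator += offer - self.last_price    # creator keeps the increment
        self.owner, self.last_price = buyer, offer
        return refund

# Tracing the 1 ETH -> 1.1 ETH -> 10 ETH sequence from the next paragraph:
cert = RatchetCert("Paul")
cert.buy("you", 1.0)     # Paul receives 1 ETH
cert.buy("Jess", 1.1)    # you get your 1 ETH back; Paul receives 0.1 more
cert.buy("pg", 10.0)     # Jess gets 1.1 ETH back; Paul receives 8.9 more
print(cert.owner, cert.paid_to_creator)   # pg 10.0
```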
So you might buy the NFT for Paul’s latest blog post for 1 ETH from Paul, and then Jess decides it’s actually more important, and buys it away from you for 1.1 ETH (you are then break-even, and Paul is at 1 + 0.1 ETH, and Jess is at −1.1 ETH); then pg decides he really likes it and buys it for 10 ETH, refunding Jess’s 1.1 ETH and sending an additional 8.9 ETH to Paul… At that point, people collectively agree that the true worth of Paul’s post is indeed about 10 ETH, and the NFT stops moving, and pg gets the prestige of having had the good philanthropic taste to have (retro-causally) patronized Paul & caused that post by commissioning it.
The creator of impact gets all the revenue irreversibly so there’s no pernicious speculative financial bubble problems; any person worldwide can contribute more at any time permissionlessly; only one person at a time ‘owns’ the collectible and gets the status & prestige of “I own and retroactively commissioned awesome thing X”; and the faster you bid it up to its true price (as you believe it), the more likely you are to win the game of musical chairs, incentivizing everyone to weigh in fast. (And to the extent there’s a winner’s curse, well, that’s a good thing, since this is for public goods and other underincentivized things.)
* I turned down one or two impact requests for my own work because I couldn’t decide if it was really a good idea to irrevocably sell this sort of nebulous right to my works, and if it was a good idea, didn’t it then logically follow that I’d want to maximize my gains by some sort of public auction rather than negotiating one on one with the first buyer to come along & make an offer?
As I understand, you’re having the profits/losses from resale accrue to the creator, rather than the reseller. But then, why would an impact certificate ever be resold? And I see a lot of other potential disadvantages:
You lose benefit 3 (coordination)
You lose benefit 4 (less commitment of capital required)
You lose the incentive for resellers to research the effectiveness of philanthropic activities.
No longer will we find that “at equilibrium, the price of certificates of impact on X is equal to the marginal cost of achieving an impact on X.”
If an impact certificate is ever sold at a loss, then the creator could be in for an unwelcome surprise, so they would always need to account for all impact certificates sold, and store much of the sum in cash (!!)
If you’re having only profits accrue to the creator, but not the losses, then all of these concerns except for the last would still hold, and the price discovery mechanism would be even more messed up.
It seems like your main goal is to avoid a scenario where creators sell their ICs for too little, thereby being exploited. But in that case, maybe you could just use a better auction, or have the creator only sell some fraction of any impact certificate, for a period of time, until some price discovery has taken place. Or if you insist, you could interpolate between the two proposals—requiring resellers to donate n% of any profits/losses to the creator—and still preserve some of the good properties. Which would dampen speculation, if you want that.
The impact certificate is resold when someone wants to become the owner of it and pays more than the current owner paid for it; that’s just built-in, like a Harberger tax except there’s no ongoing ‘tax’. (I thought about what if you made a tax that paid to the creator—sort of like an annuity? “Research X is great, as a reward, here’s the NPV of it but in the form of a perpetuity.” But the pros and cons were unclear to me.) The current owner has no choice about it, and if they want to keep owning it, well, they can then just buy it back at their real valuation of it, and then they are indifferent to any further transfers.
I don’t see how CoI NFTs are any worse for coordination?
You do need capital upfront for the refund, but your capital loss is another EAer’s capital gain: on net, it cancels out. The person you just bought the CoI NFT from now has X ETH they can deploy to new CoI NFTs, if they wish.
If you’re worried about lumpiness in prices, NFTs can be subdivided—they are just tokens, after all, there’s no reason you couldn’t have NFTs on scales anywhere from “a year of a nonprofit organization’s work” to “the first paragraph of this blog post” or just ‘shares’ of each. Or pool funds in a DAO to buy them collectively. Plenty of options for that. (This would be set more by things like blockchain fees and mental accounting costs.)
I don’t see why that wouldn’t be the case? If the cost of 1 utilon is $1, what stops a creator from spending $1, issuing a new CoI NFT, and eventually receiving ~$1? People won’t pay >$1 because then they could have bought more utilons by paying for a new NFT. The person who first paid $1 for the NFT keeps it, and then a new one gets made, which gets bid up to $1, and then a new one gets made, and so on and so forth.
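A minimal sketch of that equilibrium argument (my own toy model; the $1-per-utilon cost is the assumption stated in the comment above):

```python
# Toy model (a sketch): if the creator can always turn $1 into 1 utilon and
# mint a fresh CoI NFT for it, a utilon-maximizing buyer never pays more than
# ~$1 per utilon for an existing certificate.

MARGINAL_COST_PER_UTILON = 1.0   # assumption from the comment above

def best_buy(existing_price_per_utilon):
    """What a utilon-maximizing buyer does at a given price for an existing cert."""
    if existing_price_per_utilon > MARGINAL_COST_PER_UTILON:
        return "fund new work instead"   # cheaper utilons via a newly issued cert
    return "buy the existing cert"

for price in (0.8, 1.0, 1.2):
    print(price, "->", best_buy(price))
# So existing certificates get bid up to, but not past, the marginal cost of impact.
```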
A certificate can’t be sold at a ‘loss’ by the terms of the smart contract. It just ratchets. If the price is not so low that someone is tempted to buy it, it just stops trading and remains with the last buyer and has reached its charitable equilibrium, as the creator has been paid in full by philanthropists based on their belief of the impact of that NFT. (A “loss” is a weird thing to talk about in a philanthropic or fan context like this; almost by definition, every single impact certificate is a ‘loss’ in the sense that you don’t get back more money than you put into it, that’s the point!)
The main thing is to avoid the pathologies of NFTs as collectibles and speculative bubbles, where the price and activities have nothing whatsoever to do with any fundamentals. The “Beepleification” of EA, if you will. If certificates of impact are subject to the same dynamics as Beeple NFTs are, for example, then they are useless. What the creators skim off is not the main problem, and in fact, to the extent that creators successfully skim off more (the way Beeple has) while those financial dynamics remain intact, they worsen the problem.
Oh, selling is compulsory.
OK. That’s what I meant when I said “If you’re having only profits accrue to the creator, but not the losses, then all of these concerns except for the last would still hold, and the price discovery mechanism would be even more messed up.” I’ll call my understanding of Paul’s proposal the “capitalist” model and your model the “ratchet” model BTW.
OK.
Re the downsides of the “ratchet” model, here are my responses:
(Coordination). If Anne writes a blog post, Bob and Chris may both want Anne to be funded, but not want to have to personally lose the cash. In the capitalist model, Bob can just buy Anne’s IC, knowing that he’s not any worse off, because he has gained an asset that he can easily sell later. Whereas in the ratchet model, Bob and Chris don’t gain any profitable asset.
(Capital requirement). Sorry, I was unclear about the fact that I was referencing Paul’s quote “The ability to resell certificates makes a purchase less of a commitment of philanthropic capital, and less of a strategic decision; instead it represents a direct vote of confidence in the work being funded.” In the capitalist model, talent scouts who buy up undervalued projects can retain and grow their capital, and scout more talent. Not so in the “ratchet” version.
(Equilibrium). Price discovery will have problems due to the price not being able to go down. Suppose I do an activity that further investigation will reveal to have had value $0 or $2, with equal probability. Until we figure that out, the price will be $1. If someone discovers that the value was really $0, there is no way for that information to be revealed via the price (which can only increase). Edit: or alternatively, the price never goes up to $1 in the first place. So then the price only reaches a level $n when people are sure it really couldn’t be worth less than that, and the price will only serve as a lower bound on the EV of the impact.
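To make the first of those two scenarios concrete, here is a toy illustration (my own sketch, not part of the thread): the ratchet price behaves like the running maximum of what anyone has been willing to bid, so a downward revision of the expected value never shows up in it.

```python
# Toy illustration (a sketch): under the ratchet, the quoted price is the
# running maximum of bids, so downward revisions of expected value are invisible.

def ratchet_price_path(ev_estimates, start=0.0):
    """Price path when buyers bid up to the current EV but the price can never fall."""
    price, path = start, []
    for ev in ev_estimates:
        price = max(price, ev)
        path.append(price)
    return path

# EV starts at $1 (value $0 or $2 with equal probability); investigation then
# reveals the value was really $0, but the price stays pinned at $1.
print(ratchet_price_path([1.0, 1.0, 0.0, 0.0]))   # [1.0, 1.0, 1.0, 1.0]
```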
(Incentive for resellers to research).
(Selling at a loss). OK, I agree this is not an issue if you ratchet.
Your example doesn’t make sense to me. If Bob is not providing any money and cannot ‘personally lose the cash’ and is never ‘any worse off’ because he just resells it, what is he doing, exactly? Extending Anne some sort of disguised interest-free loan? (Guaranteed and risk-free how?) Why can’t he be replaced by a smart contract if there are zero losses?
It seems like in any sensible Paul-like capitalist system, he must be providing money somewhere in the process—if only by eating the loss when no one shows up to buy it at the same or higher price! If Bob gets involved and does anything useful at all, he’s personally losing cash, somehow, in expectation.
So, I don’t see how this is any different from the ratchet system where the ‘loss’ is upfront and Bob buys half the tokens for the blog post and Chris buys the other half, or Bob+Chris pool in a DAO to jointly buy the post’s NFT, or something. Maybe someone will show up to buy out those tokens, and they get their money back. Or they don’t. Just like the capitalist system. But the ‘loss’ goes to Anne either way.
Yes, I regard this as a feature and not a bug, and a problem with capitalist CoI schemes. There is no difference between ‘talent scouting’ and ‘speculative bubble unmoored from fundamentals’, as this is implemented. It becomes a Keynesian beauty contest: buying CoIs because you think many someones will think it’s a CoI to buy...
There is no ground truth which verifies the ‘talent’ which has been shouted up. The only ‘verification’ is that there is a greater fool who does buy the CoI from you, so that means ‘talent scout’ here actually means ‘snake oil salesman and marketer’ as the scheme collapses under Goodhart, and Paul (or whoever outcompetes him in marketing rather than impacting) starts spending all his time shilling his NFTs on Instagram and talking about how EA CoIs are going to be auctioned at Christies soon, and his followers DM you saying that their inside source says that a new Paul blog is going to drop at midnight on Thursday and if you join their Discord the dropbot can get you in on the token buy early to flip them for guaranteed profits! don’t be a sucker or left holding the bag!...
CoIs should be about paying for past performance, and not playing at being covert prediction markets, and doing so poorly. Mixing pay for making more accurate predictions and pay for performance is an uneasy combination at the best of times. If PMs and CoIs are going to be con-fused into the same financial instrument, it needs to be thought through much more carefully. There is probably a role for PMs with subsidies on EA-relevant questions, which can then be used to help price CoIs of any type, but not by directly determining those prices, which would be circular.
Price discovery is implemented by new NFTs. As they reach equilibrium and stop trading, new NFTs have to come out (as one would hope, as the world needs new impacts every day). If an activity is discovered to be worthless, people will just stop buying the new NFTs involving that activity.
Note that new NFTs need to be issued under capitalist CoI too, because time marches on and the world changes: maybe an activity did have the impact back then, but that’s not the same question as “today, I, a potential impacter, should do something; what should that something be?” A CoI for fighting iodine deficiency 20 years ago may have a high price, and may always trade at around that high price, while the value of further fighting iodine deficiency today is ~$0. The price of the old CoI does not answer the current question; what does is… issuing a new CoI, which people can refuse to buy—“don’t you know, iodization is solved, dude? Just check the last national nutrition survey! I’m not buying it, not at that price.”
Buyers can buy the CoI of researchers of CoIs. :) Think of how much a CoI must be worth to an altruistic philanthropist when that research affects the purchase of hundreds of later CoIs by other altruists! So much impact.
1. (Coordination). Bob does lose cash off his balance sheet, but his net asset position stays the same, because he’s gained an IC that he can resell.
3. (Price discovery). I agree that in cases of repeated events, the issues with price discovery can be somewhat routed around.
2&4. (Philanthropic capital requirement & Incentive for resellers to research). The capitalist IC system gives non-altruistic people an incentive to do altruistic work, scout talent, and research activities’ impact, and it rewards altruists for these. Moreover, it reallocates capital to individuals—altruistic or otherwise—who perform these tasks better, which allows them to do more. Nice features, and very standard ones for a capitalist system. I do agree that the ratchet system will allow altruists to fund some talent scouting and impact research, but in a way that is more in-line with current philanthropic behaviour. We might ask the question: do we really want to create a truly capitalist strand of philanthropy? So long as prices are somewhat tethered to reality, then this kind of strand might be really valuable, especially since it need not totally displace other modes of funding.
What if no one buys it?
If the market value of the IC is 0, then ICs aren’t gonna work. But it’s OK if very few ICs are sold, so long as the market clears at a decent price.
Shouldn’t impact be fungible at some level though?
Ohh, I should’ve made this clearer.
The NFT would be used to represent responsibility for (patronage of) a particular impactful action. Just as with impact certificates as previously proposed, a person who, for example, runs EA Harvard for 2018, could put responsibility for this impact onto the marketplace. Buyers, when pricing this asset, can then evaluate how well EA Harvard did at creating things (that may be fungible) like number of EAs produced, or net effect on wellbeing, and pay accordingly.
I think it’s useful to sell responsibility for the impact of a particular action (which is non-fungible), rather than some responsibility for some (fungible) quantum N of impact, so that the job of judging the impactfulness of the action can be left to the markets.
It might make more sense to call them “patronage” certificates, or similar. Because the certificates can really be bought or sold by patrons who value them for any reason, not just impact. Rather, there is some subset of the funders who are focused on impact and value the certificates on that basis. Basically, impact-oriented patrons. This name is easier to understand because we are familiar with people wanting & receiving credit for being the patron of some art, or as a funder on Patreon.
Ah, that makes sense :-)
That’s compatible with the systems being built, I believe. Impact Certs would be aggregated/componentized into impact class pools. If I grow a bunch of forests, the impact cert I file this as could then be submitted to some qualified authority and, if found legitimate, permanently locked/ingested to produce a corresponding quantity of fungible Carbon credits and JobCreator credits, which I could then sell to whoever likes those.
This whole structure of certificates of impact seems to derive its main benefit from allowing market mechanisms to work in the altruistic domain. When I’ve thought about trying to access these market mechanisms, the main problem has appeared to be anchoring the value so that expectations work properly. For instance the value of stock in the stock market is anchored by the profits that the firms will eventually make.
You don’t spend much time addressing this problem. I’m not sure if you mean to include it in the note at the end of things you are setting aside, but at least to my mind it seems relevant to the question of whether such a system could work properly if we arrived there.
My previous attempts to solve this problem had involved anchoring by (occasional, stochastic) explicit external evaluation, but this turns up other difficulties. If I understand it correctly, you’re thinking instead of anchoring to how much people really value things. The issue with this is that values can fluctuate over time, so I don’t know that it’s really well-founded. If its value today comes apart quite a bit from how it’s expected to be valued tomorrow (and this will continue), I’m not sure how it would stabilise.
What I’m particularly worried about is how people would value old certificates. It seems plausible that people would have little interest in certificates from 80 years ago, and expect future interest to continue to drop off. Do you envisage some mechanism to counteract this? Old certificates would by their nature be irreplaceable, so we might hope that possessing them achieved some cachet like possessing old artwork has in the world today. But I don’t feel confident that this would work.
A philanthropist (or funding agency) gives certificates value by their efforts to acquire certificates.
I.e. rather than funding a bunch of research that it thinks looks promising, the NSF tries to purchase research output. People may buy certificates (or hire researchers in exchange for a fraction of their certificates) because they expect the NSF to buy them later.
The expectation is that the NSF won’t subsequently resell all of these certificates. Doing so would be an explicit preference reversal (unless the value of the certificates grew faster than the NSF’s rate of return, in which case other donors have decided that the NSF funded good things, and the NSF might decide to sell them).
ETA: this subsumes the proposal of occasional, stochastic valuations. A funder interested in tying the value of certificates to some explicit benchmark X can periodically buy certificates at a value determined by the benchmark X.
Yes, I see how you can get short term value from this. But how do you get a long-term stable state? How do you envision funding agencies or philanthropists making decisions about how to value different certificates? (Particular interest in this question for old certificates, because I think it makes many of the problems more salient.)
Summary: the value of certificates is generally not fixed but will change over time. But this seems like a feature, not a bug. And if everyone stopped using the certificates system, the funders who are left with the certificates should be happy with that outcome as well, for the same reason they are happy making a grant which they can’t later unwind.
Suppose in 2000 a funder buys some certificates that (they think) represent lives saved in 2000. They are happy with that investment, and happy just to keep the certificates indefinitely; it is not clear that those certificates will ever change hands again after 2005. If a funder had the opportunity to “undo” their earlier funding decisions and get back the money, how often would they take it?
When valuing certificates at different times, the idea is for a funder to consider how many dollars they would pay to do a good deed in 2000, vs. a good deed in 2005. This is a question that funders already face, when choosing whether to invest or do a good deed now. Allowing them to answer the question with the benefit of hindsight (and allowing speculators to bet on what their answers will be) seems like a bonus.
It seems like the main concern is if they don’t think of selling certificates as unwinding their original good deed, i.e. if they aren’t using the certificates system. If they are using the system, then the overall transaction is just a straight-up loss for them. If they aren’t using the system, then it’s not really our business how they interact with it (adding people who sometimes buy certificates but who don’t value them can’t do any harm, it just drives up the value of certificates and benefits the users of the system).
If a funder did decide to unload their life-saved-in-2000 certificate, the idea would be for someone else to step up to buy them for the reduced price. If people interested in saving lives in 2000 are actually using the certificates system, then the price can’t fall far before someone will become very interested in buying it.
In fact it’s quite likely that the value of certificates for realistic interventions will vary hugely over time as more information is revealed about how good they were. This happens on top of the overall discount rate between good-in-2000 and good-in-2005, and seems like a further bonus.
I agree with your general point about changing value of certificates as we get more information being a feature (this is the kind of thing I meant by tapping into market mechanisms).
OK, I see you’re envisaging a less liquid market than I was. Though there are certainly some situations where I’d expect people to sell long after the fact. For instance if someone dies 40 years later and the certificates pass to their next-of-kin who doesn’t value the work, they might well seek to sell them.
In order to understand whether this is a state which would be desirable, I’m trying to picture what the world would look like today if we’d been using these since, say, 1800, and there were lots of old certificates lying around. I haven’t been able to provide myself with a stable picture of this, which makes me somewhat sceptical.
I think this would happen a lot as people gained information. Then funding gives you an option value of cashing out, whereas not funding wouldn’t necessarily give you the chance of retroactively buying the thing (people would also fund more high-variance things). Of course that doesn’t mean that people would want to sell the certificates, because information that made them want their money back would also tend to drop the going price for the certificates.
My point is that the existence of future funders who don’t care much about “Lives saved in 2000” can never drive down the value of such a certificate, it can merely drive up the value of certificates that the future funders care about.
If the market becomes illiquid (once the funders who care about lives saved in 2000 are gone), this shouldn’t be troubling to the funders left with the certificates, since that just means they are assuming responsibility for the things they funded (as in the status quo).
That said, I don’t see why there is any problem with valuating the old certificates, aside from skepticism about whether anyone in 2000 cares enough about the good deeds done in 1800 and would actually honor certificates.
You might be interested in:
http://en.wikipedia.org/wiki/Health_Impact_Fund
http://en.wikipedia.org/wiki/Social_impact_bond
Which are practical prize type solutions along similar lines.
The HIF in particular is very closely related; that’s roughly what I imagine an early implementation looking like.
Thanks for the pointers.
This idea also reminds me a little bit of performance based incentives in aid, such as Cash On Delivery programs. Performance based incentives are relatively new, though, so I haven’t been able to find many impact evaluations, although initial reports are promising for health interventions (1, 2). There was even talk a few years ago of creating a stock market for charities, but I don’t think that has gone anywhere. Robert Shiller also proposed something called a Participation Nonprofit at one point where people would buy shares in a nonprofit then spend the returns on charities. All of those have elements of what you’re talking about, but don’t put it all together.
There might be some interesting tricks here. Like suppose you saved someone’s life, and wrote a certificate for yourself, but they turned out to be an axe murderer. In such a situation, you might have to pay other people to take the certificate from you. Alternatively, you might hide the certificate. But by hiding the certificate, you’re avoiding culpability for your harmful actions to some extent. Of course, people are able to avoid culpability for altruistic actions that turn out to be harmful in the status quo; I’m just observing that, for this sort of reason, an economy based on Impact Certificates would not symmetrically deliver the impacts of people’s actions to them.
Yes, this system offers no protection against people doing bad things. (Even if they pay people to take certificates off of their hands, the price would be far too low.) That responsibility falls to the usual mechanisms, e.g. legal protection.
A problem is that it would incentivise people to do risky altruistic activities like enacting big political changes or developing risky tech.
Even if some people think that a particular kind of certificate is bad (has negative social value), as long their opinion isn’t in the majority, I think a liquid market for certificates would be able to handle this efficiently. If most people think that outcome A is good but I think it’s bad, I can short-sell A-certificates, and this works as long as the price is positive (i.e. I’m in the minority).
If directly thwarting A is cheaper than short-selling, I might be tempted to do that, but it would be inefficient (my actions cancel out others’ actions, and the net effect is to waste money for nothing). Fortunately, it seems like the certificate system still provides an efficient way to do “moral trade” in this case! Other people who agree with me can set up a market for anti-A-certificates—backed up by my ability to directly prevent A. Essentially, the others would pay me to carry out the anti-A intervention. The pro-A people can then shut this down by shorting the anti-A-certificates until I’m no longer able to fund my anti-A activities.
My guess: After a while, each side would be paying the other money not to carry out their intervention. Some pro-A interventions would be funded, but less than if the anti-A people couldn’t short-sell. No anti-A interventions would be funded, so no obvious inefficiencies happen.
Does that sound correct? I’m no expert, and I’m not sure whether that’s actually the stable equilibrium.
Have you considered selling a certificate of impact for your development and popularization of the concept of certificates of impact?
I’m guessing that for these to work, the ownership of certificates should end up reflecting who actually had what impact. I can think of two cases where that might not be so.
Regret swapping:
Person A donates $100 to charity X. Person B donates $100 to charity Y.
Five years later they both change their minds about which charity was better. They swap certificates.
So person A ends up owning a certificate for Y, and person B ends up owning a certificate for X, even though neither of them can really be said to have “caused” that particular impact.
Mistrust in certificate system
Foundation F buys impact certificates. It believes that by spending $1 on certificates, it is causing an equivalent amount of good as if it had donated $2 to charity X.
Person A is skeptical of the impact certificate system. She believes that foundation F is only accomplishing $0.50 worth of good with every $1 it spends on certificates (she believes the projects themselves are high value, but that if foundation F didn’t exist then the work would have got done anyway).
Person A has a $100 budget to spend on charity.
Person A borrows $50 from her savings account and donates $150 to charity X. She sells the entire certificate to foundation F for $50 and deposits this back in her savings account.
Why would person A do this? She doesn’t care about certificates, just about maximizing positive impact. As far as she is concerned, she has caused foundation F to give $50 to charity X, where otherwise that money would only have accomplished half as much good.
Why would foundation F do this? It believes in certificates, so as far as F is concerned, it has spent $50 to cause a $150 donation to charity X, where the other certificates it could have bought would only be equivalent to a $100 donation.
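For concreteness, here is the bookkeeping of that example as a small Python sketch (my own tally of the dollar figures above, taking each party’s stated beliefs at face value):

```python
# Toy accounting for the "mistrust" example above (a sketch; all figures come
# from the example, and each party's perceived gain is computed by their own lights).

BUDGET = 100       # Person A's charity budget
DONATION = 150     # she borrows $50 and donates $150 to charity X
CERT_SALE = 50     # she sells the certificate to foundation F for $50

# Person A's view: $1 spent by F on certificates buys only $0.50 of good.
A_VALUE_PER_F_DOLLAR = 0.50
a_out_of_pocket = DONATION - CERT_SALE                        # $100, same as her budget
without_scheme = BUDGET + CERT_SALE * A_VALUE_PER_F_DOLLAR    # $100 to X + $25-equivalent
with_scheme = DONATION                                        # $150 straight to X
print("A's perceived gain:", with_scheme - without_scheme)    # $25 of X-equivalent good

# Foundation F's view: $1 spent on certificates does as much good as $2 donated to X.
F_VALUE_PER_CERT_DOLLAR = 2.0
counterfactual = CERT_SALE * F_VALUE_PER_CERT_DOLLAR   # $100-equivalent from other certs
actual = DONATION                                      # a certificate for a $150 donation
print("F's perceived gain:", actual - counterfactual)  # $50 of donation-equivalent
```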
Case in point.
I’m still a little confused as to whether these certificates are intended to confer social status. If not, why should I value universes in which I own certificates more highly than universes in which I don’t?
Should I just look at the big picture and decide it’s beneficial to self-modify so as to give ownership of certificates intrinsic value in my utility function?
One possible use for certificates other than bragging rights is A/B testing—pick two EAs with similar skills and resources but different strategies, and see who ends up with more certificates.
You can think of it as a way of doing accounting for causal responsibility if you want. But yes, the argument is aiming for “if we did this, the outcome would be good,” and I’m leaving it up to your decision theory to justify doing things that lead to good outcomes.
I worry about people’s preferences changing over time, either as they get older or as a result of running into financial difficulties. I can imagine buying a bunch of certificates in my idealistic youth and then selling them off again in my cynical old age. At any point in time I’d feel like I was doing the right thing, and whatever philanthropist bought the certificates off me would think they were retrospectively funding the original project when actually they were just putting money into my pocket.
This is sort of the opposite problem of what (I think) Owen_Cotton-Barret was describing with old certificates becoming valueless.
Thinking about this a bit more… If I don’t trust my future self with these certificates, I can always send them to some other entity which will look after them in a way consistent with present-self’s wishes.
This could be an account in a different name, corresponding to an entity which I believed caused my behavior, and which I believe will responsibly hoard the certificates (e.g. GiveWell).
Alternatively it could be an ethereum-style contract which allows me to either hold on to the certificate or give it away (given enough other signatures verifying that I’m giving it to a reputable party and that I’m not benefitting financially from giving it away).
This sort of lock-in could also be a mechanical part of how certificates work, e.g. they allow themselves to be traded freely for a month and after that it gets more and more difficult.
The situation where certificates are bought and sold at near the original donation price is somewhat peculiar. Essentially, rather than giving away your assets to charity you’d be exchanging real money for some riskier and weirder asset, but which is nonetheless still worth money. Giving away certificates then feels like a sort of “second order altruism” which then maybe deserves a certificate of its own...
I like this idea. Thinking about the following case was helpful for me:
Suppose for the sake of argument
I have two career options, Charity Worker or Search Engine Optimizer.
CW generates 5 utilons in direct impact, and 0 utilons via earning-to-give
SEO generates 0 utilons in direct impact, and 3 utilons via earning-to-give
There are plenty of people who don’t identify as EAs and/or don’t take Paul_Christiano’s certificate idea seriously, but who want to work as CWs.
At first glance it looks like the system would fail here—if I’m trying to maximize my certificates, and most other people in the market don’t care, then I’d choose CW and crowd out somebody else.
But I think what would actually happen is that I’d choose the SEO option, earn a bunch of money and then say “hey, charity worker, over here on the internet there’s an apparently meaningless collection of numbers with your name at the top. I’ll give you $5 if you log in and change it to my name”. I’d end up with certificates valued at more utilons than if I’d just taken the CW option.
Even if a typical person didn’t view these certificates as valuable or meaningful initially, they’d start to once they heard about this mysterious community who was willing to pay money for them.
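A toy version of that arithmetic (my own sketch; it assumes the $5 paid for the certificate is negligible next to the SEO earnings, and that earning-to-give donations also yield certificates):

```python
# Toy arithmetic for the career example above (a sketch under the assumptions
# stated in the lead-in).

CW_DIRECT_UTILONS = 5    # direct impact of the charity-worker option
SEO_ETG_UTILONS = 3      # impact of the SEO option via earning-to-give

take_cw = CW_DIRECT_UTILONS                                   # 5 utilons of certificates
take_seo_and_buy_cert = SEO_ETG_UTILONS + CW_DIRECT_UTILONS   # ~3 + 5 = ~8 utilons
print(take_cw, take_seo_and_buy_cert)   # buying the cert beats doing CW yourself
```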
In practice this might look different depending on the scale of implementation (how fine grained are certificates; who are the certificate-issuers).
One extreme is:
You are issued with a certificate whenever you help someone;
There is a liquid market in the certificates;
Helping someone is seen (by the market) as good to the degree to which it benefits them;
Helping two individuals by the same amount is seen (by the market) as equally good.
Then the certificates essentially become a currency system, which has something like a guaranteed basic income built into it (the amount that you can be helped in a given year).
This extreme probably isn’t exactly what you have in mind (for instance, I’m not sure how it would interact with helping future people). But I don’t have a good sense of how far in this direction you’re thinking.
Presumably there is a market for certificates amongst those concerned with e.g. the global poor, and it might function in this way. There is no provision in this system for allowing the beneficiaries to determine the value of these certificates. As with contemporary philanthropy, this responsibility falls to the philanthropists, who may or may not execute it responsibly.
(This judgment would be reflected in the relative prices of “Helped Alice by doing X” and “Helped Alice by giving her $1,” which may have some benefits in terms of transparency.)
Cash transfers are a more direct way to capture these benefits. (After cash transfers are in place, providing goods to the poor can then be a profit-oriented enterprise.) The key question is just whether you think philanthropists or beneficiaries are better at comparing benefits.
Full disclosure: I fear I do not completely understand your idea. Having said that, I hope my comment is at least a little useful to you.
Think about the following cases: (1) I donate to an organization that distributes bednets in Africa and receive a certificate. I then trade that certificate for a new pair of shoes. My money, which normally can only be used for one of these purposes, is now used for both. (2) I work for a non-profit and receive a salary. I also receive certificates. So I am being paid double?
The second case is easily solved: just give the employee one or the other. But then, what is the benefit of a certificate over a dollar bill? The first case presents a bigger problem I think, since essentially something is created from nothing. Notice that donations are not investments the donor can expect a return on (even if they are an investment in others).
If you buy and then sell a certificate, you aren’t funding the charity, the ultimate holder of the certificate is. They will only buy the certificate if they are interested in funding the charity.
You could pretend you are funding the charity, but that wouldn’t be true—the person you sold the certificate to would otherwise have bought it from someone else, perhaps directly from the charity. So your net effect on the charity’s funding is 0. I could just as well give some money to my friend and pretend I was funding an effective charity.
(I’m setting aside the tax treatment for now.)
You would pay an employee with certificates for the same reason a company might pay an employee in equity. If there is no secondary market, this can be better for the company for liquidity reasons, and can introduce a component of performance pay. But even if there is a secondary market (e.g. for Google stock), it can still be a financially attractive way for a company to pay a large part of its salary, because it passes some of the risk on to the employee without having to constantly adjust dollar-denominated salaries. (There are also default effects, where paying employees in certificates would likely lead to them holding some certificates.)