There might be some interesting tricks here. Suppose you saved someone’s life and wrote a certificate for yourself, but they turned out to be an axe murderer. In such a situation, you might have to pay other people to take the certificate from you. Alternatively, you might hide the certificate. But by hiding the certificate, you’re avoiding culpability for your harmful actions to some extent. Of course, people are able to avoid culpability for altruistic actions that turn out to be harmful in the status quo; I’m just observing that, for this sort of reason, an economy based on Impact Certificates would not symmetrically deliver the impacts of people’s actions to them.
Yes, this system offers no protection against people doing bad things. (Even if they pay people to take certificates off of their hands, the price would be far too low.) That responsibility falls to the usual mechanisms, e.g. legal protection.
Even if some people think that a particular kind of certificate is bad (has negative social value), as long as their opinion isn’t in the majority, I think a liquid market for certificates would be able to handle this efficiently. If most people think that outcome A is good but I think it’s bad, I can short-sell A-certificates, and this works as long as the price is positive (i.e. I’m in the minority).
If directly thwarting A is cheaper than short-selling, I might be tempted to do that, but it would be inefficient (my actions cancel out others’ actions, and the net effect is to waste money for nothing). Fortunately, it seems like the certificate system still provides an efficient way to do “moral trade” in this case! Other people who agree with me can set up a market for anti-A-certificates—backed up by my ability to directly prevent A. Essentially, the others would pay me to carry out the anti-A intervention. The pro-A people can then shut this down by shorting the anti-A-certificates until I’m no longer able to fund my anti-A activities.
My guess: After a while, each side would be paying the other money not to carry out their intervention. Some pro-A interventions would be funded, but less than if the anti-A people couldn’t short-sell. No anti-A interventions would be funded, so no obvious inefficiencies happen.
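The guessed equilibrium above can be sketched with a deliberately simple toy model (my own illustrative assumptions, not part of any actual Impact Certificate proposal): each side has a fixed budget, and a dollar spent shorting the other side’s certificates cancels a dollar of their funding one-for-one.

```python
# Toy model of the shorting dynamic: each side can spend its budget
# shorting the other side's certificates, cancelling funding 1:1.
# Assumptions (mine, for illustration): linear budgets, no prices or
# transaction costs, shorting cancels funding dollar-for-dollar.

def net_funding(pro_budget: float, anti_budget: float) -> dict:
    """Return how much each side's intervention ends up funded after
    both sides short each other's certificates."""
    pro_funded = max(0.0, pro_budget - anti_budget)
    anti_funded = max(0.0, anti_budget - pro_budget)
    return {"pro_A": pro_funded, "anti_A": anti_funded}

# Majority thinks A is good, so the pro-A side has the bigger budget:
# some pro-A work gets funded (but less than without shorting), and no
# anti-A work gets funded.
print(net_funding(pro_budget=100.0, anti_budget=30.0))
# → {'pro_A': 70.0, 'anti_A': 0.0}
```

This matches the guess only under these crude assumptions; with real prices, strategic timing, or side payments, the equilibrium could look different.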
Does that sound correct? I’m no expert, and I’m not sure whether that’s actually the stable equilibrium.
A problem is that it would incentivise people to undertake risky altruistic activities, like enacting big political changes or developing risky tech.