Yeah; by default people have entangled assets which will be put at risk by starting or investing in a new project. Limiting the liability that originates from that project to just the assets held by that project means that investors and founders can do things that seem to have positive return on their own, rather than ‘positive return given that you’re putting all of your other assets at stake.’
[Like, I agree that there are issues where the social benefits and the private benefits of actions don’t line up, and we should try to align them as well as we can in order to incentivize the best actions. I’m just noting that the standard guess for businesses is “we should try to decrease the private risk of starting new businesses”; I could buy that it’s different for the x-risk environment, where we should not try to decrease the private risk of starting new risk-reduction projects, but it’s not obviously the case.]
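The limited-liability point can be sketched with made-up numbers: a project can look positive-return when only the assets committed to it are at risk, yet negative once a founder’s other, entangled assets are exposed to its downside. All figures below are hypothetical.

```python
# Toy sketch (hypothetical numbers): limited liability caps the downside
# at the assets committed to the project itself.
p_success = 0.5
gain = 300.0           # payoff if the project succeeds
project_stake = 100.0  # assets committed to the project
extra_loss = 400.0     # other entangled assets a failure could reach

# With limited liability, only the project's own stake is at risk:
ev_limited = p_success * gain + (1 - p_success) * (-project_stake)

# Without it, a failure can also consume the founder's other assets:
ev_unlimited = p_success * gain + (1 - p_success) * (-(project_stake + extra_loss))

print(ev_limited)    # 100.0: positive return on its own
print(ev_unlimited)  # -100.0: negative once other assets are at stake
```

The same project flips from worth doing to not worth doing depending on whether the founder’s other assets are on the line, which is the incentive effect limited liability is meant to produce.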
Therefore, we should be very wary of funding mechanisms that incentivize people to treat extremely harmful outcomes as if they were neutral (when making decisions about doing/funding projects that are related to anthropogenic x-risks).
Sure, I agree with this, and with the sense that the costs are large. The thing I’m looking for is the comparison between the benefits and the costs; are the costs larger?
[EDIT: Also, interventions that are carried out if and only if impact markets fund them seem selected for being net-negative, because they are ones that no classical EA funder would fund.]
Sure, I buy that adverse selection can make things worse; my guess was that the hope was that classical EA funders would also operate through the market. [Like, at some point your private markets become big enough that they become public markets, and I think we have solid reasons to believe a market mechanism can outperform specific experts, if there’s enough profit at stake to attract substantial trading effort.]
The thing I’m looking for is the comparison between the benefits and the costs; are the costs larger?
Efficient impact markets would allow anyone to create certificates for a project and then sell them for a price that corresponds to a very good prediction of their expected future value. Therefore, sufficiently efficient impact markets will probably fund some high-EV projects that wouldn’t otherwise be funded (because it’s not easy for classical EA funders to evaluate them, or even to find them in the space of possible projects). If we look at that set of projects in isolation, we can regard it as the main upside of creating the impact market. The problem is that the market does not reliably distinguish between those high-EV projects and net-negative projects, because an extremely harmful potential outcome affects the expected future value of the certificate as if the outcome were neutral (a certificate’s price cannot fall below zero, so the worst outcomes are priced the same as a neutral one).
Suppose x is a “random” project that has a substantial chance of preventing an existential catastrophe. If you believe that the EV of x is much smaller than the EV of x conditional on x not causing a harmful outcome, then you should be very skeptical about impact markets. Finally, we should consider that if a project gets funded if and only if impact markets exist, then no classical EA funder would fund it in a world without impact markets, and thus it seems more likely than otherwise to be net-negative.
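One way to see the pricing problem: a certificate’s value to a retro funder is bounded below by zero, so the market effectively values each outcome at max(value, 0). With made-up numbers, a project that is net-negative in true EV can still look attractive to certificate buyers:

```python
# Toy sketch (hypothetical numbers): the market prices an extremely
# harmful outcome as if it were neutral, because certificate prices
# cannot go below zero.
outcomes = [
    ("prevents catastrophe", 0.10,  1000.0),  # large positive impact
    ("no effect",            0.85,     0.0),
    ("causes catastrophe",   0.05, -5000.0),  # extreme harm
]

# True expected value counts the harm.
true_ev = sum(p * v for _, p, v in outcomes)

# The market values each outcome at max(v, 0): harm is priced as neutral.
market_ev = sum(p * max(v, 0.0) for _, p, v in outcomes)

print(true_ev)    # -150.0: net-negative once the harm is counted
print(market_ev)  # 100.0: looks worth funding to certificate investors
```

Here the market’s valuation equals the EV of the project conditional on it not causing the harmful outcome, which is why the gap between those two quantities is the thing to be skeptical about.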
Sure, I buy that adverse selection can make things worse; my guess was that the hope was that classical EA funders would also operate through the market.
(Even if all EA funders switched to operate solely as retro funders in impact markets, I think it would still be true that an intervention that gets funded by an impact market—and wouldn’t get funded in a world without impact markets—seems more likely than otherwise to be net-negative.)