Hm, naively—is this any different from the risks of net-negative projects in the for-profit startup funding markets? If not, I don’t think this is a unique reason to avoid impact markets.
Impact markets can incentivize/fund net-negative projects that are not currently of interest to for-profit investors. For example, today it can be impossible for someone to make a huge amount of money by launching an aggressive outreach campaign to make people join EA, or by publishing a list of “the most dangerous ongoing experiments in virology that we should advocate to stop”; both are interventions that may be net-negative. (Also, in cases where both impact markets and classical for-profit investors incentivize a project, one can flip your statement and say that there’s no unique reason to launch impact markets; I’m not sure that “uniqueness” is the right thing to look at.)
Finally: on a meta level, the amount of risk you’re willing to take on trying new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment. If you think existing funding mechanisms are doing a good job, and we’re likely to get through the hinge of history safely, then new mechanisms are to be avoided and we want to stay the course. (That’s not my current read of our xrisk situation, but would love to be convinced otherwise!)
[EDIT: removed unnecessary text.] I tentatively think that launching impact markets seems worse than a “random” change to the world’s trajectory. Conditional on an existential catastrophe occurring, I think there’s a substantial chance that the catastrophe will be caused by individuals who followed their local financial incentives. We should be cautious about pushing the world (and EA especially) further towards the “big things happen due to individuals following their local financial incentives” dynamics.
I’m not sure that “uniqueness” is the right thing to look at.
Thanks for your responses! Mostly, I meant: the for-profit world already incentivizes people to take high amounts of risk for financial gain. In addition, there are no special mechanisms to prevent for-profit entities from producing large net-negative harms. So asking that some special mechanism be introduced for impact-focused entities is an isolated demand for rigor.
There are mechanisms like pollution regulation, labor laws, etc., which apply to for-profit entities—but these would apply equally to impact-focused entities too.
We should be cautious about pushing the world (and EA especially) further towards the “big things happen due to individuals following their local financial incentives” dynamics.
I think I disagree with this? I think people following local financial incentives is always going to happen, and the point of an impact market is to structure financial incentives to be aligned with what the EA community broadly thinks is good.
Agree that xrisk/catastrophe can happen via, e.g., AI researchers following local financial incentives to make a lot of money—but unless your proposal is to overhaul the capitalist market system somehow, I think building a better competing alternative is the correct path forward.
I think people following local financial incentives is always going to happen, and the point of an impact market is to structure financial incentives to be aligned with what the EA community broadly thinks is good.
It may be useful to think about it this way: Suppose an impact market is launched (without any safety mechanisms) and $10M of EA funding are pledged to be used for buying certificates as final buyers 5 years from now. No other final buyers join the market. The creation of the market causes some set of projects X to be funded and some other set of projects Y to not get funded (due to the opportunity cost of those $10M). We should ask: is [the EV of X minus the EV of Y] positive or negative? I tentatively think it’s negative. The projects in Y would have been judged by the funder to have positive ex-ante EV, while the projects in X got funded because they had a chance to end up having a high ex-post EV.
Also, I think complex cluelessness is a common phenomenon in the realms of anthropogenic x-risks and meta-EA. It seems that interventions that have a substantial chance to prevent existential catastrophes usually have an EV that is much closer to 0 than we would otherwise think, due to also having a chance to cause an existential catastrophe. ~~Therefore, the EV of Y seems much closer to 0 than the EV of X (assuming that the EV of X is not 0).~~
[EDIT: adding the text below.]
Sorry, I messed up when writing this comment (I wrote it at 03:00 am...). Firstly, I confused X and Y in the sentence that I now crossed out. But more fundamentally: I tentatively think that the EV of X is negative (rather than positive but smaller than the EV of Y), because the projects in X are ones that no funder in EA decides to fund (in a world without impact markets). Therefore, letting an impact market fund a project in X seems even worse than falling into the regular unilateralist’s curse, because here there need not be even a single person who thinks that the project is (ex-ante) a good idea.
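The thought experiment above can be sketched with a toy model (all numbers are hypothetical, chosen only for illustration). The key assumption is that final buyers pay only for realized positive impact, so a risk-neutral investor values a certificate at E[max(impact, 0)] rather than the ex-ante EV E[impact]:

```python
# Toy model of the X-vs-Y argument (hypothetical numbers).
# A certificate holder is paid for realized positive impact but bears
# no loss for harm, so investors value a project at E[max(impact, 0)]
# rather than its ex-ante EV E[impact].

outcomes = [
    # (probability, impact) for a risky, possibly net-negative project
    (0.1, 100.0),   # small chance of a big, visible win
    (0.9, -20.0),   # likely harm (e.g. an info-hazard backfires)
]

ex_ante_ev = sum(p * v for p, v in outcomes)
investor_value = sum(p * max(v, 0.0) for p, v in outcomes)

print(ex_ante_ev)      # -8.0: no classical funder would fund this
print(investor_value)  # 10.0: an impact market investor still might
```

Under these assumed numbers, the project lands in X (funded only because of the market) despite having negative ex-ante EV, which is the sense in which the market can be worse than the ordinary unilateralist’s curse: no individual funder ever needed to judge the project good.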
I messed up when writing that comment (see the EDIT block).