I’m not sure that “uniqueness” is the right thing to look at.
Mostly, I meant: the for-profit world already incentivizes people to take on large risks for financial gain. Moreover, there are no special mechanisms to prevent for-profit entities from producing large net-negative harms, so asking that some special mechanism be introduced specifically for impact-focused entities is an isolated demand for rigor.
There are mechanisms like pollution regulation, labor laws, etc. that apply to for-profit entities, but these would apply equally to impact-focused entities.
We should be cautious about pushing the world (and EA especially) further towards the “big things happen due to individuals following their local financial incentives” dynamics.
I think I disagree with this? I think people following local financial incentives is always going to happen, and the point of an impact market is to structure financial incentives to be aligned with what the EA community broadly thinks is good.
Agree that x-risk/catastrophe can happen via, e.g., AI researchers following local financial incentives to make a lot of money; but unless your proposal is to somehow overhaul the capitalist market system, I think building a better competing alternative is the correct path forward.
Thanks for your responses!
I think people following local financial incentives is always going to happen, and the point of an impact market is to structure financial incentives to be aligned with what the EA community broadly thinks is good.
It may be useful to think about it this way: suppose an impact market is launched (without any safety mechanisms) and $10M of EA funding is pledged to be used for buying certificates as a final buyer 5 years from now. No other final buyers join the market. The creation of the market causes some set of projects X to get funded and some other set of projects Y to not get funded (due to the opportunity cost of those $10M). We should ask: is [the EV of X minus the EV of Y] positive or negative? I tentatively think it's negative. The projects in Y would have been judged by the funder to have positive ex-ante EV, while the projects in X got funded because they had a chance of ending up with a high ex-post EV.
Also, I think complex cluelessness is a common phenomenon in the realms of anthropogenic x-risks and meta-EA. Interventions that have a substantial chance of preventing an existential catastrophe usually have an EV that is much closer to 0 than we would otherwise think, because they also have some chance of causing an existential catastrophe. [Crossed out: Therefore, the EV of Y seems much closer to 0 than the EV of X (assuming that the EV of X is not 0).]
[EDIT: adding the text below.]
Sorry, I messed up when writing this comment (I wrote it at 03:00 am...). Firstly, I confused X and Y in the sentence that I now crossed out. But more fundamentally: I tentatively think that the EV of X is negative (rather than positive but smaller than the EV of Y), because the projects in X are ones that no funder in EA decides to fund (in a world without impact markets). Therefore, letting an impact market fund a project in X seems even worse than falling into the regular unilateralist’s curse, because here there need not be even a single person who thinks that the project is (ex-ante) a good idea.
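A minimal toy sketch of the selection effect described above (all numbers, names, and decision rules below are made up purely for illustration; in particular, it assumes final buyers pay only for realized positive impact, and that an EV-maximizing funder declines anything with non-positive ex-ante EV):

```python
import random
from statistics import mean

# Toy model of the X-vs-Y comparison above. Every number and decision rule
# here is invented for illustration; nothing is calibrated to reality.
#
# Each hypothetical project: with probability p it produces benefit B, with
# probability q it produces harm H, otherwise nothing; running it costs c.
#   ex-ante EV of funding it    = p*B - q*H - c
#   expected certificate profit = p*B - c   (final buyers pay only for
#                                            realized positive impact; nobody
#                                            pays "negative money" for harm)

random.seed(0)

def random_project():
    p = random.uniform(0.01, 0.20)   # chance of a big success
    q = random.uniform(0.00, 0.20)   # chance of a big harm
    B = random.uniform(10, 100)      # size of the benefit
    H = random.uniform(10, 100)      # size of the harm
    c = random.uniform(1, 5)         # cost of running the project
    return p, q, B, H, c

def ex_ante_ev(proj):
    p, q, B, H, c = proj
    return p * B - q * H - c

def expected_certificate_profit(proj):
    p, q, B, H, c = proj
    return p * B - c                 # the q*H downside never enters

projects = [random_project() for _ in range(10_000)]

market_backs = [pr for pr in projects if expected_certificate_profit(pr) > 0]
# X: market-funded projects that an EV-maximizing funder would decline.
X = [pr for pr in market_backs if ex_ante_ev(pr) <= 0]

print(f"market would back {len(market_backs)} projects; "
      f"{len(X)} of them have non-positive ex-ante EV")
if X:
    print(f"mean ex-ante EV of those projects (the set X): "
          f"{mean(ex_ante_ev(pr) for pr in X):.2f}")
```

The only load-bearing feature of the toy model is that the market's valuation never sees the q*H downside term, so a project with a large enough chance of harm can still look profitable to certificate investors.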
I messed up when writing that comment (see the EDIT block).