Hm, naively—is this any different than the risks of net-negative projects in the for-profit startup funding markets? If not, I don’t think this is a unique reason to avoid impact markets.
My very rough guess is that impact markets should, at a bare minimum, be better than the for-profit landscape, which already makes them a worthwhile intervention. People participating as final buyers of impact will at least be looking to do good rather than generate additional profits; it would be very surprising to me if the net impact of that was worse than “the thing that happens in regular markets already”.
Additionally: I think the negative externalities could themselves be addressed by additional impact projects, funded through other impact markets?
Finally: on a meta level, the amount of risk you’re willing to spend on trying new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment. Basically, if you think existing funding mechanisms are doing a good job, and we’re likely to get through the hinge of history safely, then new mechanisms are to be avoided and we want to stay the course. (That’s not my current read of our xrisk situation, but would love to be convinced otherwise!)
I think startups are usually doing an activity which scales if it’s good and stops if it’s bad. People can sue if it’s causing harm to them. Overall this kind of feedback mechanism does a fine job.
In the impact markets case I’m most worried about activities which have long-lasting impacts even without continuing/scaling them. I’m more into the possibility of markets for scalable/repeatable activities (seems less fraught).
In general the story for concern here is something like:
At the moment, a lot of particularly high-leverage areas have disproportionate attention from people who are earnestly trying to do good things
Impact markets could shift this to “attention from people earnestly trying to do high-variance things”
In cases where resolving what was or wasn’t successful takes a long time, and people potentially do a lot of the activity before we know whether it was eventually valued, this seems pretty bad
They refer to Drescher’s post. He writes:

But we think that is unlikely to happen by default. There is a mismatch between the probability distribution of investor profits and that of impact. Impact can go vastly negative while investor profits are capped at only losing the investment. We therefore risk that our market exacerbates negative externalities.
Standard distribution mismatch. Standard investment vehicles work such that if you invest in a project and it fails, you lose 1x your investment; but if you invest in a project and it’s a great success, you may make back 1,000x your investment. So investors want to invest in many (say, 100) moonshot projects, hoping that one will succeed.
When it comes to for-profits, governments are to some extent trying to limit or tax externalities, and one could also argue that if one company didn’t cause them, another would’ve done so only shortly after. That’s cold comfort to most people, but it’s the status quo, so we would like to at least not make it worse.
Charities are even more of a minefield because there is less competition, so it’s harder to argue that anything anyone does would’ve been done anyway. But at least they don’t have as much capital at their disposal. They have motives other than profit, so the externalities are not quite the same ones, but they too increase incarceration rates (Scared Straight), increase poverty (preventing contraception), reduce access to safe water (some Playpumps), and maybe even exacerbate s-risks from multipolar AGI takeoffs (some AI labs). These externalities will only get worse if we make it more profitable for venture capitalists to invest in such projects.
We’re most worried about charities that have extreme upsides and extreme downsides (say, intergalactic utopia vs. suffering catastrophe). Those are the ones that will be very interesting for profit-oriented investors because of their upsides and because they don’t pay for the at least equally extreme downsides.
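To make the distribution mismatch concrete, here is a minimal simulation sketch. All the probabilities and payoff sizes are invented for illustration (only the 1,000x upside echoes the passage above); the point is just that flooring investors’ losses at -1x while leaving externalities unpriced can make a portfolio attractive to investors even when it is net-negative for the world.

```python
import random

# Toy model of the distribution mismatch (all numbers are made up).
random.seed(0)

def simulate_project():
    """Return (investor payoff multiple, social impact) for one project."""
    r = random.random()
    if r < 0.01:           # 1% chance: moonshot succeeds
        return 1000, 1000  # investors make 1,000x; big positive impact
    if r < 0.06:           # 5% chance: large negative externality
        return -1, -500    # investors lose only 1x; the world loses far more
    return -1, 0           # 94% chance: the project fizzles

n = 100_000
payoffs, impacts = zip(*(simulate_project() for _ in range(n)))
print(f"mean investor payoff: {sum(payoffs) / n:+.2f}x per unit invested")
print(f"mean social impact:   {sum(impacts) / n:+.2f} per project")
# With these made-up numbers: roughly +9x for investors, roughly -15 impact.
```

With these invented numbers a risk-neutral investor is happy to fund every project even though the average impact is negative; truncating the investor’s left tail at -1x is doing all the work.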
Finally: on a meta level, the amount of risk you’re willing to spend on trying new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment.
I think this is not quite right. It shouldn’t be about what we think about existing funding mechanisms, but what we think about the course we’re set to be on. I think that ~EA is doing quite a good job of reshaping the funding landscape especially for the highest-priority areas. I certainly think it could be doing better still, and I’m in favour of experiments I expect to see there, but I think that spinning up impact markets right now is more likely to crowd out later better-understood versions than to help them.
I think impact markets should be viewed in that experimental lens, for what it’s worth (it’s barely been tested outside of a few experiments on the Optimism blockchain). I’m not sure if we disagree much!
Curious to hear what experiments and better funding mechanisms you’re excited about~
Hm, naively—is this any different than the risks of net-negative projects in the for-profit startup funding markets? If not, I don’t think this a unique reason to avoid impact markets.
Impact markets can incentivize/fund net-negative projects that are not currently of interest to for-profit investors. For example, today it can be impossible for someone to make a huge amount of money by launching an aggressive outreach campaign to make people join EA, or by publishing a list of “the most dangerous ongoing experiments in virology that we should advocate to stop”; both are interventions that may be net-negative. (Also, in cases where both impact markets and classical for-profit investors incentivize a project, one can flip your statement and say that there’s no unique reason to launch impact markets; I’m not sure that “uniqueness” is the right thing to look at.)
Finally: on a meta level, the amount of risk you’re willing to spend on trying new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment. Basically, if you think existing funding mechanisms are doing a good job, and we’re likely to get through the hinge of history safely, then new mechanisms are to be avoided and we want to stay the course. (That’s not my current read of our xrisk situation, but would love to be convinced otherwise!)
[EDIT: removed unnecessary text.] I tentatively think that launching impact markets seems worse than a “random” change to the world’s trajectory. Conditional on an existential catastrophe occurring, I think there’s a substantial chance that the catastrophe will be caused by individuals who followed their local financial incentives. We should be cautious about pushing the world (and EA especially) further towards the “big things happen due to individuals following their local financial incentives” dynamics.
Thanks for your responses!

I’m not sure that “uniqueness” is the right thing to look at.
Mostly, I meant: the for-profit world already incentivizes people to take high amounts of risk for financial gain. In addition, there are no special mechanisms to prevent for-profit entities from producing large net-negative outcomes. So asking that some special mechanism be introduced for impact-focused entities is an isolated demand for rigor.
There are mechanisms like pollution regulation, labor laws, etc., which apply to for-profit entities—but these would apply equally to impact-focused entities too.
We should be cautious about pushing the world (and EA especially) further towards the “big things happen due to individuals following their local financial incentives” dynamics.
I think I disagree with this? I think people following local financial incentives is always going to happen, and the point of an impact market is to structure financial incentives to be aligned with what the EA community broadly thinks is good.
Agree that xrisk/catastrophe can happen via, e.g., AI researchers following local financial incentives to make a lot of money—but unless your proposal is to overhaul the capitalist market system somehow, I think building a better competing alternative is the correct path forward.
I think people following local financial incentives is always going to happen, and the point of an impact market is to structure financial incentives to be aligned with what the EA community broadly thinks is good.
It may be useful to think about it this way: Suppose an impact market is launched (without any safety mechanisms) and $10M of EA funding are pledged to be used for buying certificates as final buyers 5 years from now. No other final buyers join the market. The creation of the market causes some set of projects X to be funded and some other set of projects Y to not get funded (due to the opportunity cost of those $10M). We should ask: is [the EV of X minus the EV of Y] positive or negative? I tentatively think it’s negative. The projects in Y would have been judged by the funder to have positive ex-ante EV, while the projects in X got funded because they had a chance to end up having a high ex-post EV.
Also, I think complex cluelessness is a common phenomenon in the realms of anthropogenic x-risks and meta-EA. It seems that interventions that have a substantial chance to prevent existential catastrophes usually have an EV that is much closer to 0 than we would otherwise think, due to also having a chance to cause an existential catastrophe. ~~Therefore, the EV of Y seems much closer to 0 than the EV of X (assuming that the EV of X is not 0).~~
[EDIT: adding the text below.]
Sorry, I messed up when writing this comment (I wrote it at 03:00 am...). Firstly, I confused X and Y in the sentence that I now crossed out. But more fundamentally: I tentatively think that the EV of X is negative (rather than positive but smaller than the EV of Y), because the projects in X are ones that no funder in EA decides to fund (in a world without impact markets). Therefore, letting an impact market fund a project in X seems even worse than falling into the regular unilateralist’s curse, because here there need not be even a single person who thinks that the project is (ex-ante) a good idea.
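To make the EV(X) vs. EV(Y) comparison concrete, here is a minimal sketch with invented probabilities and values (none of them come from the comment above). It just encodes the corrected claim: Y-projects were screened ex ante for positive EV, while X-projects were selected for their right tail, which can push EV(X) below zero even though investors still expect to profit.

```python
# Invented numbers, purely to illustrate the EV(X) vs. EV(Y) comparison.

def ev(outcomes):
    """Expected value of a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Y: projects a funder would have backed directly. Screened ex ante for
# positive EV, though a chance of backfiring (complex cluelessness)
# pulls the EV closer to 0 than the headline upside suggests.
ev_y = ev([(0.50, 10),    # decent chance of a modest win
           (0.05, -20),   # small chance of causing harm
           (0.45, 0)])    # otherwise nothing happens -> EV = +4.0

# X: projects funded only because investors chase the right tail.
# No funder vetted them ex ante, and investors never pay for the left tail.
ev_x = ev([(0.01, 1000),  # lottery-ticket upside that attracts investors
           (0.10, -150),  # unvetted chance of large harm
           (0.89, 0)])    # otherwise nothing happens -> EV = -5.0

print(f"EV(Y) = {ev_y:+.1f}")                 # +4.0
print(f"EV(X) = {ev_x:+.1f}")                 # -5.0
print(f"EV(X) - EV(Y) = {ev_x - ev_y:+.1f}")  # -9.0
```

Under these assumptions the market redirects funding from a +4.0 option to a -5.0 option, so [the EV of X minus the EV of Y] comes out negative, matching the tentative conclusion above.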
I messed up when writing that comment (see the EDIT block).
I didn’t follow this; could you elaborate? (/give an example?)