I expect this will reduce the price at which OpenAI is traded
But an impact market can still make OpenAI’s certificates be worth $100M if, for example, investors have at least 10% credence in some future retro funder being willing to buy them for $1B (+interest). And that could be true even if everyone today believed that creating OpenAI is net-negative. See the “Mitigating the risk is hard” section in the OP for some additional reasons to be skeptical about such an approach.
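The pricing logic here is just expected value: since a certificate's downside for its holder is capped at zero, a small probability of a large retro purchase is enough to sustain a high price today. A minimal sketch (the function name and the risk-neutral, interest-cancels-discounting simplification are my own illustration, not part of the argument above):

```python
def certificate_price(p_retro_buy: float, payout: float) -> float:
    """Risk-neutral price of an impact certificate.

    Assumes the promised payout includes interest, so time-discounting
    cancels out, and the holder's downside is capped at zero.
    """
    return p_retro_buy * payout

# 10% credence in a future $1B retro purchase supports a ~$100M price
# today, even if everyone currently believes the project is net-negative.
print(certificate_price(0.10, 1_000_000_000))  # 100000000.0
```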
I missed what you’re replying to, though. Is it “The problem of funding net-negative projects exists also now.”?
Yes. You respond to examples of potential harm that impact markets can cause by pointing out that these things can happen even without impact markets. I don’t see why these arguments should be more convincing than the flipped argument: “everything that impact markets can fund can already be funded in other ways, so we don’t need impact markets”. (Again, I’m not saying that the flipped argument makes sense.)
Your overall view seems to be something like: we should just create an impact market and if it causes harm then the retro funders will notice and stop buying certificates (or they will stop buying some particular certificates that are net-negative to buy). I disagree with this view because:
1. There is a dire lack of feedback signal in the realm of x-risk mitigation. It’s usually very hard to judge whether a given intervention was net-positive or net-negative. It’s not just a matter of asking CEA / LW / anyone else what they think about a particular intervention, because usually no one on Earth can do a reliable, robust evaluation. (E.g. is the creation of OpenAI/Anthropic net positive or net negative?) So, if you buy the core argument in the OP (about how naive impact markets incentivize people to carry out interventions without considering potential outcomes that are extremely harmful), I think you shouldn’t create an impact market and rely on some unspecified future feedback signal to make retro funders stop making net-negative certificate purchases at some unspecified point in the future.
2. As I argued in the grandparent comment, we should expect the things that people in EA say about the impact of others in EA to be positively biased.
All the above assumes that by “retro funders” here you mean a set of carefully appointed Final Buyers. If instead we’re talking about an impact market where anyone can become a retro funder, and retro funders can resell their impact to arbitrary future retro funders, I think things would go worse in expectation (see the first three points in the section “Mitigating the risk is hard” in the OP).