I should have phrased that differently: It’s not that hard to pick out highly risky projects retroactively, relative to identifying them prospectively. I also think that the reference class which is most worrying is genuinely not that hard to identify as strongly negative EV.
Impact markets don’t solve the problem of funders being able to fund harmful projects. But they don’t make it differentially worse (they empower funders generally, but I don’t expect you would argue that grantmakers are net negative, so this still comes out net-positive).
I would welcome attempts to cause the culture of big grantmakers to more reliably make sure the recipients stay focused on the major challenges, but that is a separate project.
The classes of problem you list are all important questions of what should be funded, and it would be great to have better models of the EV of funding them, but none of them are impact-market specific. It’s already true that the funder who is most enthusiastic about a project can fund it unilaterally, and that this will sometimes be EV-negative. People can already recruit new funders who are risk-tolerant.
We’re making grantmakers generally more powerful, and that’s not entirely free of negative effects[1], but it does seem very likely net-positive.[2]
I do think it makes sense to not rush into creating a decentralized unregulatable system on general principles of caution, as we certainly should watch the operation of a more controllable one for some time before moving towards that.
The community as a whole cannot come to consensus on each of the huge number of important decisions to be made: the bandwidth of common knowledge is far, far too low, and we are faced with too many choices for that to be a viable option. Having several of the most relevant people strongly on board is about as good a sign as you could expect currently. I’m open to more opinions coming in, and would be very interested in seeing you debate with people on the other side and try to double-crux on this or get more people on board with your position, but turning this into a committee is going to stall the project.
And your cataloging of these downsides does seem useful; there are things we can adjust to minimize them.
There are some other effects around cultural effects of making money flows more legible which seem possibly concerning, but I’m not super worried about negative EV projects being run.
It’s not that hard to pick out highly risky projects retroactively, relative to identifying them prospectively.
Do you mean that, if a project ends up being harmful, we have Bayesian evidence that it was ex-ante highly risky? If so, I agree. But that fact does not alleviate the distribution mismatch problem, which is caused by the prospect of a risky project ending up going well.
Impact markets don’t solve the problem of funders being able to fund harmful projects. But they don’t make it differentially worse (they empower funders generally, but I don’t expect you would argue that grantmakers are net negative, so this still comes out net-positive).
If the distribution mismatch problem is not mitigated (and it seems hard to mitigate), investors are incentivized to fund high-stakes projects while regarding potential harmful outcomes as if they were neutral. (Including in anthropogenic x-risks and meta-EA domains.) That is not the case with EA funders today.
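The mismatch can be made concrete with a toy calculation (all numbers here are hypothetical): an investor whose payoff comes only from good outcomes effectively values a project as if its harmful outcomes were truncated at zero.

```python
# Toy illustration (hypothetical numbers) of the distribution mismatch.
# Each entry is (relative likelihood, altruistic value of that outcome).
outcomes = [(1, 100.0),   # rare success
            (9, -20.0)]   # common harmful failure
total = sum(w for w, _ in outcomes)

# The altruistic expected value counts harms at full weight.
true_ev = sum(w * v for w, v in outcomes) / total
# A profit-motivated investor is paid only for upside; a harmful
# outcome costs them no more than a neutral one.
investor_ev = sum(w * max(v, 0.0) for w, v in outcomes) / total

print(true_ev)      # -8.0: net-negative in expectation
print(investor_ev)  # 10.0: yet looks worth funding to the investor
```

Under these made-up numbers, the project destroys value in expectation, but the truncated payoff an investor faces makes funding it look attractive.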
There are some other effects around cultural effects of making money flows more legible which seem possibly concerning, but I’m not super worried about negative EV projects being run.
I think this is a highly over-optimistic take about cranking up the profit-seeking lever in EA and the ability to mitigate the effects of Goodhart’s law. It seems that when humans have an opportunity to make a lot of money (without breaking laws or norms) at the expense of some altruistic values, they usually behave in a way that is aligned with their local incentives (while convincing themselves it’s also the altruistic thing to do).
I do think it makes sense to not rush into creating a decentralized unregulatable system on general principles of caution, as we certainly should watch the operation of a more controllable one for some time before moving towards that.
If you run a fully controlled (Web2) impact market for 6-12 months, and the market funds great projects/posts and there’s no sign of trouble, will you then launch a decentralized impact market that no one can control (in which people can sell the impact of recruiting additional retro funders, and the impact of establishing that very market)?
If the distribution mismatch problem is not mitigated (and it seems hard to mitigate), investors are incentivized to fund high-stakes projects while regarding potential harmful outcomes as if they were neutral.
I thought a bunch more about this, and I do think there is something here worth paying attention to.
I am not certain the pool of projects in the category you’re worried about is large enough to offset the benefits of impact markets, but we would incentivise those that exist, and that has a cost.
If we’re limited to accredited investors, as Scott proposed, we have some pretty strong mitigation options. In particular, we can let oracular funders pay to mark projects as having been strongly net negative, and have this detract from the ability of those who funded that project to earn on their entire portfolio. Since accounts will be hard to generate and only available to accredited investors, generating a new account for each item is not an available option.
I think I can make some modifications to the Awesome Auto Auction to include this fairly simply. The AAA does not allow selling as an action, which removes the other risk of people dumping their money and provides a natural structure for limiting withdrawals (just cut off their automatic payments until the “debt” is repaid).
Would this be sufficient mitigation? And if not, what might you still fear about this?
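A minimal sketch of the mitigation described above, with all names and numbers hypothetical: a flag from a retro funder creates a portfolio-wide debt, and the investor’s automatic payouts are withheld until that debt is paid down.

```python
# Hypothetical sketch of the proposed mitigation: marking a funded
# project as strongly net negative creates a debt against the
# investor's whole portfolio, and automatic payouts are withheld
# until the debt is repaid. Names and numbers are illustrative.
class InvestorAccount:
    def __init__(self):
        self.debt = 0.0      # outstanding penalty from flagged projects
        self.received = 0.0  # payouts actually delivered to the investor

    def flag_net_negative(self, penalty):
        """A retro funder marks a funded project as strongly net negative."""
        self.debt += penalty

    def automatic_payout(self, amount):
        """Payouts first pay down the debt; only the remainder is delivered."""
        applied = min(self.debt, amount)
        self.debt -= applied
        delivered = amount - applied
        self.received += delivered
        return delivered

acct = InvestorAccount()
acct.flag_net_negative(150.0)        # penalty for one bad project
print(acct.automatic_payout(100.0))  # 0.0: fully withheld, 50.0 debt left
print(acct.automatic_payout(100.0))  # 50.0: debt cleared, remainder paid
```

The key property is that the penalty is not scoped to the flagged project: it eats into earnings from everything else in the portfolio, which is what gives investors a reason to price in downside risk ex ante.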
If you run a fully controlled (Web2) impact market for 6-12 months, and the market funds great projects/posts and there’s no sign of trouble, will you then launch a decentralized impact market that no one can control (in which people can sell the impact of recruiting additional retro funders, and the impact of establishing that very market)?
I don’t see much benefit to a Web3 one assuming we can do microtransactions on a Web2, so I’d be fine with either not doing the Web3 or only doing it after several years of having a Web2 without any of those restrictions and nothing going badly wrong (retaining the option to restrict new markets for it at any time).
In particular, we can let oraculars pay to mark projects as having been strongly net negative, and have this detract from the ability of those who funded that project to earn on their entire portfolio.
I think this approach has the following problems:
Investors will still be risking only the total amount of money they invest in the market (or place as a collateral), while their potential gain is unlimited.
People tend to avoid doing things that directly financially harm other individuals. Therefore, I expect retro funders would usually not use their power to mark a project as “ex-ante net negative”, even if it was a free action and the project was clearly ex-ante net negative (let alone if the retro funders need to spend money on doing it; and if it’s very hard to judge whether the project was ex-ante net negative, which seems a much more common situation).
Seems essentially fine. There’s a reason society converged to loss-limited companies being the right thing to do, even though there is unlimited gain and limited downside, and that’s that individuals tend to be far too risk averse. Exposing them to a risk to the rest of their portfolio should be more than sufficient to make this not a concern.
Might be a fair point, but remember, this is in the case where some project was predictably net negative and then actually was badly net negative. My guess is at least some funders would be willing to step in and disincentivise that kind of activity, and the threat of it would keep people off the worst projects.
There’s a reason society converged to loss-limited companies being the right thing to do, even though there is unlimited gain and limited downside, and that’s that individuals tend to be far too risk averse.
I think the reason that states tend to allow loss-limited companies is that it causes them to have larger GDP (and thus all the good/adaptive things that are caused by having larger GDP). But loss-limited companies may be a bad thing from an EA perspective, considering that such companies may be financially incentivized to act in net-negative ways (e.g. exacerbating x-risks), especially in situations where lawmakers/regulators are lagging behind.
Yes, and greater GDP maps fairly well to greater effectiveness of altruism. I think you’re focused on downside risks too strongly. They exist, and they are worth mitigating, but inaction due to fear of them will cause far more harm. Inaction due to a heckler’s veto is not a free outcome.
Companies not being loss-limited would not cause them to stop producing x-risks, when the literal death of all the humans involved is an insufficient motivation to discourage them. It would reduce a bunch of other categories of harm, but we’ve converged to accepting that risk to avoid crippling risk aversion in the economy.